Mechanical Models

Guest Post by Willis Eschenbach

[NOTE the update at the end of the post.] I’ve continued my peregrinations following the spoor of the global climate model data cited in my last post. This was data from 19 global climate models. There are two parts to the data, the inputs and the outputs. The inputs to the models are the annual forcings (the change in downwelling radiation at the top of the atmosphere) for the period 1860 to 2100. The outputs of the models are the temperature hindcasts/forecasts for the same period, 1860 to 2100. Figure 1 shows an overview of the two datasets (model forcings and modeled temperatures) for the nineteen models, over the historical period 1860-2000.

Figure 1. Forcing (red lines, W/m2) and modeled temperatures (blue lines, °C) from 19 global climate models for the period 1860-2000. Light vertical lines show the timing of the major volcanic eruptions. The value shown in the upper part of each panel is the decadal trend in the temperatures. For comparison, the trend in the HadCRUT observational dataset is 0.04°C/decade, while the models range from 0.01 to 0.1°C/decade, a tenfold variation. The value in the lower part of each panel is the decadal trend in forcing. Click any graphic to enlarge.

The most surprising thing to me about this is the wide disparity in the amount, trend, and overall shape of the different forcings. Even the effects of the volcanic eruptions (the sharp downward excursions in the forcings [red lines]), which I expected to be similar between the models, vary widely from model to model. Look at the rightmost eruption in each panel, Pinatubo in 1991. The GFDL-ESM2M model shows a very large volcanic effect from Pinatubo, over 3 W/m2. Compare that to the effect of Pinatubo in the ACCESS1-0 model, only about 1 W/m2.

And the shapes of the forcings are all over the map. GISS-E2-R increases almost monotonically except for the volcanoes. On the other hand, the MIROC-ESM and HadGEM2-ES forcings have a big hump in the middle. (Note also how the temperatures from those models have a big hump in the middle as well.) Some historical forcings have little annual variability, while others jump around wildly from year to year. Each model is using its own personal forcing, presumably chosen because it produces the best results …

Next, as you can see from even a superficial examination of the data, the output of the models is quite similar to the input. How similar? Well, as I’ve shown before, the input of the models (forcings) can be transformed into an accurate emulation of the output (temperature hindcasts/forecasts) through the use of a one-line iterative model.

Now, the current climate paradigm is that over time, the changes in global surface air temperature evolve as a linear function of the changes in global top-of-atmosphere forcing.  The canonical equation expressing this relationship is:

∆T = lambda * ∆F           [Equation 1]

In this equation, “∆T” is the change in temperature from the previous year. It can also be written as T[n] – T[n-1], where n is the time of the observation. Similarly, “∆F” is the change in forcing from the previous year, which can be written as F[n] – F[n-1]. Finally, lambda is the transient climate response (°C / W/m^2). Because I don’t have their modeled ocean heat storage data, lambda does not represent the equilibrium climate sensitivity. Instead, lambda in all of my calculations represents the transient climate response, or TCR.

The way that I am modeling the models is to use a simple lagging of the effects of Equation 1. The equation used is:

∆T = lambda * ∆F * (1 – e^(-1/tau)) + (T[n-1] – T[n-2]) * e^(-1/tau)           [Equation 2]

In Equation 2, T is temperature (°C), n is time (years), ∆T is T[n] – T[n-1], lambda is the sensitivity (°C / W/m^2), ∆F is the change in forcing F[n] – F[n-1] (W/m2), and tau is the time constant (years) for the lag in the system.
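
For concreteness, here is a minimal R sketch of how Equation 2 can be run forward as an emulator. The function name, the forcing series, and the parameter values are hypothetical stand-ins for illustration, not my actual code (which is linked at the end of the post).

    # Minimal sketch of the one-line lagged model (Equation 2).
    # 'forcing' is an annual forcing series in W/m2; lambda is in degC per W/m2,
    # tau is the time constant in years. All values below are illustrative only.
    emulate_temps <- function(forcing, lambda, tau, t_start = 0) {
      n <- length(forcing)
      temps <- numeric(n)
      temps[1] <- t_start
      dT_prev <- 0                                  # T[n-1] - T[n-2]
      for (i in 2:n) {
        dF <- forcing[i] - forcing[i - 1]
        dT <- lambda * dF * (1 - exp(-1 / tau)) + dT_prev * exp(-1 / tau)
        temps[i] <- temps[i - 1] + dT
        dT_prev <- dT
      }
      temps
    }

    # Made-up example: a slow ramp in forcing plus a one-year volcano-like dip
    forcing <- cumsum(rep(0.02, 140))
    forcing[100] <- forcing[100] - 2
    emul <- emulate_temps(forcing, lambda = 0.5, tau = 3)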

So … what does that all say? Well, it says two things.

First, it says that the world is slow to warm up and cool down. So when you have a sudden change in forcing, for example from a volcano, the temperature responds gradually rather than all at once. The amount of lag in the system (in years) is given by the time constant tau.

Next, just as in Equation 1, Equation 2 scales the input by the transient climate response lambda.

So what Equation 2 does is to lag and scale the forcings. It lags them by tau, the time constant, and it scales them by lambda, the transient climate response (TCR).

In this dataset, the TCR ranges from 0.36 to 0.88 °C per W/m2, depending on the model. It is the expected change in the temperature (in degrees C) from a 1 W/m2 change in forcing. The transient climate response (TCR) is the rapid response of the climate to a change in forcing. It does not include the amount of energy which has gone into the ocean. As a result, the equilibrium climate sensitivity (ECS) will always be larger than the TCR. The observations in the Otto study indicate that over the last 50 years, the ECS has remained stable at about 30% larger than the TCR (lambda). I have used that estimate in Figure 2 below. See my comment here for a discussion of the derivation of this relationship between ECS and TCR.

Using the two free parameters lambda and tau to lag and scale the input, I fit Equation 2 to each model in turn. I used the full length (1860-2100) of the same dataset shown in Figure 1, the RCP 4.5 scenario. Note that the same equation is applied to the different forcings in all instances, and only the two parameters are varied. The results are shown in Figure 2.
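
To give the flavor of the fitting, here is a rough R sketch of how the two parameters might be found by least squares with optim(). The series names are hypothetical, emulate_temps() is the sketch given after Equation 2, and this is the general shape of the calculation rather than my actual R code (linked at the end of the post).

    # Sketch: find lambda and tau by minimizing the sum of squared errors
    # between the emulation and one model's temperature series (hypothetical).
    fit_emulator <- function(forcing, model_temps) {
      sse <- function(par) {
        emul <- emulate_temps(forcing, lambda = par[1], tau = par[2],
                              t_start = model_temps[1])
        sum((emul - model_temps)^2)
      }
      fit  <- optim(c(0.5, 3), sse)        # starting guesses for lambda and tau
      emul <- emulate_temps(forcing, fit$par[1], fit$par[2],
                            t_start = model_temps[1])
      list(lambda = fit$par[1], tau = fit$par[2], r2 = cor(emul, model_temps)^2)
    }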

Figure 2. Temperatures (hindcast & forecast) from 19 models for the period 1860 to 2100 (light blue), and emulations using the simple lagged model shown in Equation 2 (dark blue). The value for “tau” is the time constant for the lag in the system. The ECS is the equilibrium climate sensitivity (in degrees C) to a doubling of CO2 (“2xCO2”). Following the work of Otto, the ECS is estimated in all cases as being 30% larger than “lambda”, which is the transient climate response (TCR). See the end note regarding units. Click to enlarge.

In all cases, the use of Equation 2 on the model forcings and temperatures results in a very accurate, faithful match to the model temperature output. Note that the worst r^2 of the group is 0.94, and the median r^2 is 0.99. In other words, no matter what each of the models is actually doing internally, functionally they are all just lagging and resizing the inputs.

Other than the accuracy and fidelity of the emulation of every single one of the model outputs, there are some issues I want to discuss. One is the meaning of this type of “black box” analysis. Another is the implications of the fact that all of these modeled temperatures are so accurately represented by this simplistic formula. And finally, I’ll talk about the elusive “equilibrium climate sensitivity”.

Black Box Analyses

A “black box” analysis is an attempt to determine what is going on inside a “black box”, such as a climate model. In Figure 3, I repeat a drawing I did for an earlier discussion of these issues. I see that it shows an earlier version of the CCSM model than the one used in the new data above, which is CCSM4.

Figure 3. My depiction of the global climate model CCSM3 as a black box, where only the inputs and outputs are known.

In a “black box” analysis, all that we know are the inputs (forcings) and the outputs (global average surface air temperatures). We don’t know what’s inside the box. The game is to figure out what a set of possible rules might be that would reliably transform the given input (forcings) into the output (temperatures). Figure 2 demonstrates that functionally, the output temperatures of every one of the climate models shown above can be accurately and faithfully emulated by simply lagging and scaling the input forcings.

Note that a black box analysis is much like the historical development of the calculations for the location of the planets. The same conditions applied to that situation, in that no one knew the rules governing the movements of the planets. The first successful solution to that black box problem utilized an intricate method called “epicycles”. It worked fine, in that it was able to predict the planetary locations, but it was hugely complex. It was replaced by a sun-centered method of calculation that gave the same results but was much simpler.

I bring that up to highlight the fact that in a “black box” puzzle as shown in Figure 3, you want to find not just a solution, but the simplest solution you can find. Equation 2 certainly qualifies as simple; it is a one-line equation.

Finally, be clear that I am not saying that the models are actually scaling and lagging the forcings. A black box analysis just finds the simplest equation that can transform the input into the output, but that equation says nothing about what actually might be going on inside the black box. Instead, the equation functions the same as whatever might be going on inside the box—given a set of inputs, the equation gives the same outputs as the black box. Thus we can say that they are functionally very similar.

Implications

The finding that functionally all the climate models do is merely lag and rescale the inputs has some interesting implications. The first one that comes to mind is that in the models, as the forcings go, so goes the temperature. If the forcings have a hump in the middle, the hindcast temperatures will have a hump in the middle. That’s why I titled this post “Mechanical Models”. They are mechanistic slaves to the forcings.

Another implication of the mechanical nature of the models is that the models are working “properly”. By that, I mean that the programmers of the models firmly believe that Equation 1 rules the evolution of global temperatures … and the models reflect that exactly, as Figure 2 shows. The models are obeying Equation 1 slavishly, which means they have successfully implemented the ideas of the programmers.

Climate Sensitivity

Finally, to the question of the elusive “climate sensitivity”. Me, I hold that in a system such as the climate which contains emergent thermostatic mechanisms, the concept of “climate sensitivity” has no real meaning. In part this is because the climate sensitivity varies depending on the temperature. In part this is because the temperature regulation is done by emergent, local phenomena.

However, the models are built around the hypothesis that the change in temperature is a linear function of the change in forcing. To remind folks, the canonical equation, the equation around which the models are built, is Equation 1 above, ∆T = lambda ∆F, where ∆T is the change in temperature (°C), lambda is the sensitivity (°C per W/m2), and ∆F is the change in forcing (W/m2).

In Equation 1, lambda is the climate sensitivity. If the ∆F calculations include the ocean heat gains and losses, then lambda is the equilibrium climate sensitivity or ECS. If (as in my calculations above) ∆F does not include the ocean heat gains and losses, then lambda is the short-term climate sensitivity, called the “transient climate response” or TCR.

Now, an oddity that I had noted in my prior investigations was that the transient climate response lambda was closely related to the trend ratio, which is the ratio of the trend of the temperature to the trend of the forcing associated with each model run. I speculated at that time (based on only the few models for which I had data back then) that lambda would be equal to the trend ratio. With access now to the nineteen models shown above, I can give a more nuanced view of the situation. As Figure 4 shows, it turns out to be slightly different from what I speculated.

Figure 4. Transient climate response “lambda” compared to the trend ratio (temperature trend / forcing trend) for the 19 models shown in the above figures. Red line shows where lambda equals the trend ratio. Blue line is the linear fit of the actual data. The equation of the blue line is lambda = trend ratio * 1.03 – 0.05 °C per W/m2.

Figure 4 shows that if we know the input and output of a given climate model, we can closely estimate the transient climate response lambda of the model. The internal workings of the various models don’t seem to matter—in all cases, lambda turns out to be about equal to the trend ratio.
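
To be explicit about what I mean by the trend ratio, it is just the linear trend of the modeled temperature divided by the linear trend of the forcing over the same period. A minimal R sketch, with hypothetical series names:

    # Sketch: trend ratio = temperature trend / forcing trend
    trend_ratio <- function(years, model_temps, forcing) {
      temp_trend  <- coef(lm(model_temps ~ years))[2]   # degC per year
      force_trend <- coef(lm(forcing ~ years))[2]       # W/m2 per year
      as.numeric(temp_trend / force_trend)              # degC per W/m2
    }
    # Figure 4 shows that this ratio lands very close to the fitted lambda
    # for each of the nineteen models.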

The final curiosity occurs because all of the models need to emulate the historical temperature trend 1860-2000. Not that they do it at all well, as Figure 1 shows. But since they all have different forcings, and they are at least attempting to emulate the historical record, that means that at least to a first order, the difference in the reported climate sensitivities of the models is the result of their differing choices of forcings.

Conclusions? Well, the most obvious conclusion is that the models are simply incapable of one of the main tasks they have been asked to do, the determination of the climate sensitivity. All of these models do a passable job of emulating the historical temperatures, but since they use different forcings they have very different sensitivities, and there is no way to pick between them.

Another conclusion is that the sensitivity lambda of a given model is well estimated by the trend ratio of the temperatures and forcings. This means that if your model is trying to replicate the historical temperature trend, the only free variable is the trend of the forcings, and so the sensitivity lambda ends up being a function of your particular idiosyncratic choice of forcings.

Are there more conclusions? Sure … but I’ve worked on this dang post long enough. I’m just going to publish it as it is. Comments, suggestions, and expansions welcome.

Best regards to everyone,

w.

A NOTE ON THE UNITS

The “climate sensitivity” is commonly expressed in two different units. One is the change in temperature (in °C) corresponding to a 1 W/m2 change in forcing. The second is the change in temperature corresponding to a 3.7 W/m2 change in forcing. Since 3.7 W/m2 is the amount of additional forcing expected from a doubling of CO2, this is referred to as the climate sensitivity (in degrees C) to a doubling of CO2. This is often abbreviated as “°C / 2xCO2”.
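
In other words, converting between the two conventions is just a multiplication by 3.7, and (following Otto) the ECS shown in Figure 2 is taken as 30% larger than the TCR. A sketch of the arithmetic, using an illustrative value of lambda:

    # Sketch of the unit conversion used in Figure 2 (illustrative lambda only)
    lambda <- 0.5                 # TCR in degC per W/m2 (made-up value)
    tcr_2x <- lambda * 3.7        # degC per doubling of CO2
    ecs_2x <- tcr_2x * 1.3        # ECS estimate, ~30% larger than TCR (per Otto)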

DATA AND CODE: As usual, my R code is a snarl, but for what it’s worth it’s here, and the data is in an Excel spreadsheet here.

[UPDATE]. From the comments:

Nick Stokes says:
December 2, 2013 at 2:47 am

In fact, the close association with the “canonical equation” is not surprising. F et al say:

“The FT06 method makes use of a global linearized energy budget approach where the top of atmosphere (TOA) change in energy imbalance (N) is split between a climate forcing component (F) and a component associated with climate feedbacks that is proportional to globally averaged surface temperature change (ΔT), such that:
N = F – α ΔT (1)
where α is the climate feedback parameter in units of W m-2 K-1 and is the reciprocal of the climate sensitivity parameter.”

IOW, they have used that equation to derive the adjusted forcings. It’s not surprising that if you use the thus calculated AFs to back derive the temperatures, you’ll get a good correspondence.

Dang, I hadn’t realized that they had done that. I was under the incorrect impression that they’d used the TOA imbalance as the forcing … always more to learn.

So we have a couple of choices here.

The first choice is that Forster et al have accurately calculated the forcings.

If that is the case, then the models are merely mechanistic, as I’ve said. And as Nick said, in that case it’s not surprising that the forcings and the temperatures are intimately linked. And if that is the case, all of my conclusions above still stand.

The second choice is that Forster et al have NOT accurately calculated the forcings.

In that case, we have no idea what is happening, because we don’t know what the forcings are that resulted in the modeled temperatures.
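
To make Nick’s point concrete, here is a rough R sketch of the circularity he describes: if the adjusted forcing is constructed from the model temperatures as F = alpha * ΔT + N, then so long as N is small, regressing the temperatures on the adjusted forcings is more or less guaranteed to recover a sensitivity near 1/alpha. All the numbers below are invented for illustration.

    # Sketch of the circularity: build a Forster-style "adjusted forcing" from
    # a made-up temperature series, then back out the sensitivity by regression.
    set.seed(1)
    alpha <- 1.2                               # W/m2 per K (illustrative)
    dT    <- cumsum(rnorm(140, 0.01, 0.05))    # made-up temperature anomalies
    N     <- rnorm(140, 0, 0.1)                # small TOA imbalance term
    F_adj <- alpha * dT + N                    # adjusted forcing, per Forster et al

    fit <- lm(dT ~ F_adj)
    coef(fit)[2]                 # slope comes out near 1/alpha, by construction

My lagged Equation 2 adds the time constant tau on top of this, but the close correspondence between the forcings and the temperatures is already built into the way the forcings were derived.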

90 Responses to Mechanical Models

  1. John B. Lomax says:

    You should really send this to the OMB. Your one-liner equation could save our government (and we taxpayers) a few $B by replacing all of the complex computer models.

  2. Rhoda Klapp says:

    Are the inputs really in units of watts/m2? It was my impression that the modellers used CO2 levels and modelled radiative physics in terms of local conditions. Taking some average figure for a supposed forcing, no matter how accurate the average is, can never be a satisfactory model input. This is true of any forcing, not just radiative.

  3. thingadonta says:

    “epicycles…It was replaced by a sun-centered method of calculation”.

    So should climate models, history repeating itself…

  4. Juraj V says:

    “All of these models do a passable job of emulating the historical temperatures”

    I do not agree at all. Maybe MIROC-ESM resembles reality, and probably only by plugging in “aerosols” when needed. Not one model mimics the 1900-1940 warming, which was stronger than the 1975-2000 warming and was followed by cooling.

  5. Leonard Lane says:

    Nice analyses Willis. It seems that you have shown that no matter how complex and “scientific” the GCM climate models appear, and are claimed to be, they are nothing more than Rube Goldberg digital concoctions to linearly relate climate forcing to mean annual global temperature (the hindcasts you show). As such, how can they have predictive ability for future mean annual global temperature predictions?

  6. Brian H says:

    Edit: “Another conclusion is that the sensitivity lambda of a given model is well estimated by the ratio is determined by the trend ratio of the temperatures and forcings

    Not sure which of the verbs (estimated, determined) to read here. Copy-paste error?

    [Thanks, fixed -w.]

  7. Willis Eschenbach said:

    “The inputs to the models are the annual forcings (the change in downwelling radiation at the top of the atmosphere) for the period 1860 to 2100.”

    I have been trying hard to follow the reasoning behind the models in order to follow your post closely.

    I’m not sure how to interpret the forcings. How are they calculated, how much observational data is included? If the top of atmosphere is represented by a single temperature, what ‘averaging’ rule is used by the models? Are they even using the same one?

    Cheers,

    Scott

  8. Nick Stokes says:

    “The inputs to the models are the annual forcings (the change in downwelling radiation at the top of the atmosphere) for the period 1860 to 2100.”

    I agree with Rhoda here. The forcings are often expressed as radiative equivalents, but they aren’t the actual input to models. Those inputs are the direct physical quantities, such as GHG concentrations, or for some modern AOGCM’s, the actual emissions (from scenarios). The radiative forcings in W/m2 are back-calculated for comparison. Hansen describes that here:
    “We compute Fi, Fa, Fs and Fs* for most forcing mechanisms to aid understanding and to allow other researchers easy comparison with our results.”

    I believe the forcings quoted here are from the paper by Forster et al. They are explicitly computed by those authors; they call them adjusted forcings (AF). They were not model inputs. They say:
    “Forster and Taylor [2006], hereinafter FT06, developed a methodology to diagnose 60 globally averaged AF in Coupled Model Intercomparison Project phase 3 (CMIP3) models and we use the same approach here within CMIP5 models, taking advantage of their improved diagnostics and additional integrations to improve the methodology.”

    In fact, the close association with the “canonical equation” is not surprising. F et al say:
    “The FT06 method makes use of a global linearized energy budget approach where the top of atmosphere (TOA) change in energy imbalance (N) is split between a climate forcing component (F) and a component associated with climate feedbacks that is proportional to globally averaged surface temperature change (ΔT), such that:
    N = F – α ΔT (1)
    where α is the climate feedback parameter in units of W m-2 K-1 and is the reciprocal of the climate sensitivity parameter.”

    IOW, they have used that equation to derive the adjusted forcings. It’s not surprising that if you use the thus calculated AFs to back derive the temperatures, you’ll get a good correspondence.

  9. gordon walker says:

    Thank you Willis for an empirical verification of logical necessity!
    Le Chatelier’s Principle tells us that when a change is imposed upon a physical system it will react in a way that resists the change. Otherwise stable systems would be like pencils balanced on their points and ever ready to swing from one extreme to another.
    But some people purport to believe in “our fragile planet” or “tipping points”!

  10. I have come up with a model which demonstrates that my footprints accurately predict my previous location to a high confidence level. Therefore it is obvious that my footprint model will accurately predict where I will be any time in the future.

  11. cd says:

    Willis

    All your points follow. However, if I understand your method, you’re essentially fitting a function and playing about with lambda and tau until you get a reasonable fit with the models. While this gives you an “adaptive model”, it does sound like a statistical model of the models and therefore is one of many possible solutions – although in truth you now have your own climate model that was designed to mimic the ones you’re testing. This brings me on to your point about the models being a Black Box. They aren’t; there are a number of online articles/lectures as well as journal papers that explain what types of algorithms they use, right down to up-scaling methods and even what type of programming paradigms are chosen. So I think this is unfair, you’re almost suggesting that we should somehow be suspicious of a model because their unfathomable complexity hides a simple, and limited, algorithm – and such commentary implicitly suggests stealth by design. If they seem like Black Boxes then that’s because you haven’t made the effort to find out what makes them tick.

  12. Allan MacRae says:

    Thank you Willis,

    What a lot of hard work, and good work too!

    I agree that climate sensitivity (“ECS” in units of C/2xCO2) has no meaning, perhaps for different reasons from yours.

    I demonstrated with confidence in January 2008 that temperature drives atmospheric CO2, not the reverse. This of course does not preclude other major drivers of CO2 such as fossil fuel combustion, deforestation, volcanoes, etc. I suggest that Jan Veizer and a few others were probably already there, or mostly so.

    If ECS (which assumes CO2 drives temperature) actually exists in the Earth system, it is so small that it is overwhelmed by the reality that temperature drives CO2.

    Proof:
    In this enormous CO2 equation, the only signal that is apparent is that dCO2/dt varies ~contemporaneously with temperature, and CO2 lags temperature by about 9 months.
    http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

    To suggest that ECS is larger than 1C is not credible. I suggest that if ECS exists, it is much smaller than 1C, so small as to be essentially insignificant.

    So all this costly hysteria about catastrophic humanmade global warming has been for naught, and all our yesterdays have lighted fools:

    … the expensive climate models can be replaced by a one-line equation (Bravo Willis!) …

    … the expensive wind and solar power schemes never really worked, due to the stubborn refusal of the wind to blow and the sun to shine WHEN we needed the power.

    As Richard Courtney ably pointed out, we do not understand the complexities of the carbon balance.

    As I pointed out, climate science does not even know what drives what and has “put the cart before the horse”.

    And yet several political entities have compromised their economies and their vital energy systems and even put their populations at serious risk due to this utter nonsense.

    If one was to write the global warming “scary story” as fiction, it would be dismissed as too absurd to be published – a tale told by an idiot – sadly, we have been governed by too many of them.

    In closing, I recommend the 15fps AIRS data animation of atmospheric CO2 at
    http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

    There is no apparent impact of humanity in this magnificent display of nature’s power.

  13. I may be jumping ahead here but your emulation seems extraordinary if the models are actually doing something complex. But then again, even if they are doing some intricate algorithmic dance in order to model dynamic processes, it appears they are averaging the cell temperatures in order to come up with a single global figure, thereby destroying all the good work. It is not surprising then that the output equals the input times the sensitivity!! This implies that the canonical equation is not physical but statistical! It can’t show how changes in that statistic might affect real dynamic states! “It’s life Jim, but not as we know it.” ;-)

  14. Nick Stokes says:

    “To remind folks, the canonical equation, the equation around which the models are built, is Equation 1 above, ΔT = lambda ΔF, where ΔT is the change in temperature (°C), lambda is the sensitivity (°C per W/m2), and ΔF is the change in forcing (W/m2)”

    It doesn’t have anything to do with the way the models are built. It’s not their canonical equation. What it is is Forster’s equation (1), which he used to infer the adjusted forcings that you are using from the model output. The math is circular. You are feeding his Eq (1) derived AFs into your analysis and coming up with Eq (1).

  15. cd says:

    Scott

    As far as I am aware the models do “discretise” the atmosphere into cells. Generally they perturb the system with an internal/external forcing and, under various and changing assumptions, let the system respond through the cellular model. The mechanism by which these perturbations are disseminated is based on established physics such as the Navier-Stokes equations. However, their implementation of the physics is often poor, and hence the poor performance of the models outside their training set.

    This really explained the issue so eloquently that I had no problems following it – one for the layman:

    http://www.youtube.com/watch?v=hvhipLNeda4

    What Willis has done here, rather than digging deep into the issue, has produced a statistical model of the models. I don’t know why he doesn’t just go speak to a modeller or at least engage with one in order to see whether he has stumbled upon something important.

  16. Bob G says:

    Good article. It will take some time to digest. I don’t think I’d consider model output as “data”.

  17. Nic Lewis says:

    Willis

    Many thanks for putting in the time to produce this and your previous post – both very informative.

    I wonder if you may have slightly misunderstood the relationship between the temperature and forcing data?
    As you wrote previously, the data is from the study Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, by Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka, 2013, JGR. As explained in paragraph [6] of that paper, the forcing timeseries were not obtained as model run data as such. They were actually derived for each model by multiplying its temperature timeseries by its diagnosed climate feedback parameter alpha and adding the Top-of-Atmosphere/ Top-of-Model change in energy imbalance (which is part of the model run data). So they are estimates, not actual model run data, and may not be completely accurate where a model exhibits non-linearities or time-dependence.

    If you multiply the temperature change by the value of alpha (net) for each model per Forster et al (2013) Table 1 and deduct the corresponding forcing values, you will recover timeseries for the TOA/TOM energy imbalance, which is almost the same as ocean heat uptake. Those timeseries would make quite interesting graphs, I think.

    Incidentally, a purist would not describe the ‘true’ model forcings as model inputs, although I agree it is reasonable to treat them as such in this sort of analysis. As you are probably aware, for the CMIP5 runs, the GCMs calculate greenhouse gas forcings, using their own radiative transfer codes, from specified atmospheric concentrations (See Meinshausen et al, 2011: The RCP GHG concentrations and their extension from 1765 to 2300). They also make their own estimates of forcing from aerosols, ozone, etc, I believe from abundance data. The models all do these things differently, resulting in surprisingly wide divergences between their forcings, as you have observed. The differences in the model aerosol forcings are particularly significant. As you point out, there is also a very large range in the noisiness of the timeseries between models – I’m not sure why.

  18. Old Huemul says:

    Willis, in this passage: “Another conclusion is that the sensitivity lambda of a given model is well estimated by the ratio is determined by the trend ratio of the temperatures and forcings” you should cancel the phrase “the ratio is determined by”. The new version would be: “Another conclusion is that the sensitivity lambda of a given model is well estimated by the trend ratio of the temperatures and forcings”.
    It seems you wanted to adjust your phrasing but forgot to cancel the old version.

  19. Joe Born says:

    Mr. Eschenbach:

    Thank you for another informative post. Whether the forcings values you use are the models’ actual stimuli or represent the forcings they respectively infer from the stimuli they do use, I find it telling that, after all their machinations, their results differ from respective simple one-pole linear models by much less than they differ from each other.

    One comment, which I’ve made before, so I’ll apologize for being repetitive. If a system is characterized by dy/dt + y/tau = (lambda/tau) x, then its response to a ramp is lambda[t - tau(1 - exp(-t/tau))]: the rate ratio equals the constant-stimulus gain. So your statement regarding ratio seems merely to be a tautology. What am I missing?

  20. Bloke down the pub says:

    Willis, knowing your attitude towards typos, you might like to change ‘ The first one the comes to mind is that regarding the models’.

    [Thanks, fixed. -w.]

  21. Genghis says:

    Willis, here is my best attempt to mangle the epicycle analogy. The models are an attempt to explain and predict CHANGES in the climate and make accurate predictions.

    What you have shown is that the models are incapable of predicting CHANGES, much less predictions of changes. They ultimately only compute one simple relationship, exactly like the complex epicycles did.

    The beauty of the Newtonian/Einstein models is that they are less constrained and have multiple variables, hence their predictive power rises exponentially.

    Without multiple variables (feedbacks) that do not cancel each other out, Maths and Models are less than useless. Much like the famous E=MC^2 is useless at unity 1=1. You have demonstrated quite nicely that the GCM’s have only a single variable, C=C and hence are useless.

  22. Robert Brown says:

    I think Nick’s comments are correct. From what I’ve seen of models, none of them use TOA “net forcing” as input. They use a complex field of initial data, model TOA insolation with equations (describing e.g. orbital variation), and then use all sorts of radiative and bulk transport equations to solve one or sometimes two coupled Navier-Stokes equations at some granularity. So it is not correct to say that one can model the output of GCMs with a one liner iterative model given the input of TOA forcing, because that is not what GCMs use as input.

    With that said, the disparity of the “TOA forcing” associated with an inversion of model predictions is indeed indicative of one of the many problems with GCMs. As is the generally poor agreement of the model-generated temperatures with each other and with the actual temperature in e.g. HADCRUT4.

    The other possibly useful aspect of the linearized equation Willis ends up with after processing the models shown is that it appears to be at least moderately invertible, at least as a difference equation. That is, one could (perhaps) take the actual temperature data, e.g. HADCRUT4, and apply the “universal model” equation to it backwards to infer the effective TOA forcing it implies for ANY of the models. Indeed, one can in principle apply this process all the way back across the Holocene.

    The main point of doing so would be to determine the model’s consistency and “physicality”. That is, if one inverts one of the models across the LIA, what exactly does it say about TOA net forcing during this period of intense, rapid, cooling (per model)? Is what it says consistent with our beliefs about the physics — given that e.g. CO_2 could not have played a significant role back then, or across much of the post-LIA warming, does the model have mechanisms that could (re)produce that variation without significant modulation from aerosols or CO_2? Such an analysis could result in signposts to missing physics, or to model inconsistencies (an incorrect balance between internals leading to natural forcings vs anthropogenic forcings, for example), or both.

    rgb

  23. Nick Stokes says:

    Joe Born says: December 2, 2013 at 6:19 am
    “Whether the forcings values you use are the models’ actual stimuli or represent the forcings they respectively infer from the stimuli they do use, I find it telling that, after all their machinations, their results differ from respective simple one-pole linear models by much less than they differ from each other.”

    I think you’ve missed the point of my earlier comment, and of Nic Lewis. Forster et al took the temperature outputs of the models and calculated adjusted forcings ΔF (they call it F) using the formula
    N = ΔF – α ΔT (1)
    Here N, the TOA imbalance, has to be small by cons eng, and some models constrain it to be zero.

    This post substitutes those ΔF into a regression and finds that, presto
    ΔF – λ ΔT=0.
    But of course, they have to. It has nothing to do with what the models actually do. It’s just repeating the arithmetic of Forster et al by which ΔF was derived.

  24. Box of Rocks says:

    What happened to all the hot weather of the 1930′s?

    Where is it hiding?

  25. RC Saumarez says:

    I have some difficulty with this analysis.
    You spent some time producing a model of cloud feedback, which you felt has implications for climate sensitivity.
    Now you have abandoned this in terms of a simple model. This model is simply a first order system which is represented by a first order differential equation.
    If we take an input X(t) and our model has a response as you suggest, h(t) and gives an output Y(t).
    We can write: Y(t)=X(t)*h(t) , where the symbol * represents convolution.
    Now assume that cloud formation is a first order process and has a gain of g in modulating radiative forcing. It is easier to analyse this using transforms in terms of the Laplace variable, s.
    The above equation can be written as:
    Y(s)=H(s)X(s) and y(t)=L^-1[Y(s)], where L^-1 is the inverse Laplace transform
    And H(s)=1/(T+s), where T is the time constant
    Using the well-known feedback relationship for the system response to create a “black box”:
    Y(s)/X(s)=H(s)/(1+H(s)G(s))
    Hence:
    Y(s)(1+H(s)G(s))=H(s)X(s)
    Writing H(s)=1/(s+a) and G(s)=1/(s+b) and a little algebra we get:
    Y(s)(1+ab+(a+b)s+s^2)= H(s)X(s)
    Now, s.Y(s)= y’(t) and s^2.Y(s)= y’’(t), where the apostrophe indicates differentiation.
    Therefore we have a differential equation that is of the form:
    y’’(t) + k1y’(t)+k2y(t)=x(t)*h(t)
    This is a second order differential equation and therefore the model you propose here is not compatible with those you have proposed earlier.
    What you have done is in fact a curve fitting exercise and unless the match between data and output is perfect, it is unlikely that the model is correct. One way to tell if the model is correct is to compare its auto-correlation structure with that of the data. In fact it is well known that a first order model does not give a good explanation of the temperature data and there has been a large amount of debate about what form of model gives a realistic persistence. (See McIntyre and the debate between Keenan and the UK Met Office)

  26. Jim G says:

    “The most surprising thing to me about this is the wide disparity in the amount, trend, and overall shape of the different forcings.”

    Excellent and very readable graphical analysis. Not surprising that the forcings differ so greatly given their rather minuscule absolute levels and the lack of any consistent scientific determinants of how they are constructed from model to model. I particularly appreciated Paul Sarkisian’s model regarding his footprints as it says it all quite well.

  27. Willis Eschenbach says:

    Nick Stokes says:
    December 2, 2013 at 3:39 am

    “To remind folks, the canonical equation, the equation around which the models are built, is Equation 1 above, ΔT = lambda ΔF, where ΔT is the change in temperature (°C), lambda is the sensitivity (°C per W/m2), and ΔF is the change in forcing (W/m2)”

    It doesn’t have anything to do with the way the models are built. It’s not their canonical equation. What it is is Forster’s equation (1), which he used to infer the adjusted forcings that you are using from the model output. The math is circular. You are feeding his Eq (1) derived AFs into your analysis and coming up with Eq (1).

    The “canonical equation”, in your words, “doesn’t have anything to do with the way the models are built” … and yet it describes their outputs exactly.

    So you’re telling us that is a coincidence? Really?

    You might try my post entitled “The Cold Equations” for a further discussion of the canonical equation.

    w.

    PS—It’s not “Forster’s equation” either, it’s the reported forcing from the models as shown in the CMIP5.

  28. Willis Eschenbach:

    That Equation 1 is “canonical” implies that ∆T is not the change in the temperature but rather is the change in the equilibrium temperature. With ∆T taken to be the equilibrium temperature, Equation 1 is scientifically invalid, for while the temperature is an observable, the equilibrium temperature is not; a consequence of the lack of observability is that Equation 1 is insusceptible to being falsified by the evidence.

    If, on the other hand, ∆T is taken to be the temperature rather than the equilibrium temperature then Equation 1 states a false proposition, for a number of different values for ∆T are associated with every value of ∆F. In either case, Equation 1 is scientifically invalid.

  29. Willis Eschenbach says:

    Nick Stokes says:
    December 2, 2013 at 2:47 am

    In fact, the close association with the “canonical equation” is not surprising. F et al say:

    “The FT06 method makes use of a global linearized energy budget approach where the top of atmosphere (TOA) change in energy imbalance (N) is split between a climate forcing component (F) and a component associated with climate feedbacks that is proportional to globally averaged surface temperature change (ΔT), such that:
    N = F – α ΔT (1)
    where α is the climate feedback parameter in units of W m-2 K-1 and is the reciprocal of the climate sensitivity parameter.”

    IOW, they have used that equation to derive the adjusted forcings. It’s not surprising that if you use the thus calculated AFs to back derive the temperatures, you’ll get a good correspondence.

    Thanks, Nick. I see I spoke prematurely above. Dang, I hadn’t realized that they had done that. I was under the incorrect impression that they’d used the TOA imbalance as the forcing … always more to learn.

    So we have a couple of choices here.

    The first choice is that Forster et al have accurately calculated the forcings.

    If that is the case, then the models are merely mechanistic, as I’ve said. And as you said, in that case it’s not surprising that the forcings and the temperatures are intimately linked. And if that is the case, all of my conclusions above still stand.

    The second choice is that Forster et al have NOT accurately calculated the forcings.

    In that case, we have no idea what is happening, because we don’t know what the forcings are that resulted in the modeled temperatures.

    I’ll add an update to the head post …

    w.

  30. Willis Eschenbach says:

    Terry Oldberg says:
    December 2, 2013 at 10:50 am

    Willis Eschenbach:

    That Equation 1 is “canonical” implies that ∆T is not the change in the temperature but rather is the change in the equilibrium temperature. With ∆T taken to be the equilibrium temperature, Equation 1 is scientifically invalid for while the temperature is an observable the equilibrium temperature is not; a consequence from the lack of observability is that Equation 1 is insusceptible to being falsified by the evidence.

    Thanks, Terry. Actually, “∆T” is simply one year’s temperature minus the previous year’s temperature, and thus has absolutely nothing to do with some purported “equilibrium” …

    w.

  31. Willis Eschenbach:

    Thank you for taking the time to reply. If we interpret ∆T as you say we should then Equation 1 states a falsehood, for many different values for ∆T are associated with every value for ∆F but Equation 1 implies there is only 1 value. To state this objection in mathematical terms: the relation from the values of ∆F to the values of ∆T is not a functional relation but Equation 1 implies that it is.

  32. Billy Liar says:

    cd says:
    December 2, 2013 at 3:44 am

    Great video! Thanks.

  33. Nic Lewis says:

    Nick Stokes says, December 2, 2013 at 7:24 am :

    “Here N, the TOA imbalance, has to be small by cons eng, and some models constrain it to be zero.”

    You’re getting a bit confused here, Nick. No model constrains the TOA radiative energy flux imbalance to zero. The TOA imbalance is the counterpart of heat uptake by the Earth’s climate system (ocean, etc.), and is not at all close to zero, although it averages close to zero whilst forcing is low. By 2005, it averages 0.7 W/m2 across the models. It bounces around a lot, and plunges upon volcanic eruptions, of course. You are maybe thinking of the difference between the TOA imbalance and the climate system heat uptake.

  34. Chip Javert says:

    I’m amazed how, time and again, Willis intellectually shreds the (presumed) hard work of hundreds (thousands?) of earnest grad students and ethically compromised professors who produce these vile models. You’d think public humiliation would have some impact.

    At the 50,000 nano-meter level (sarc), the physics at any point in time is fairly simple: photons are being absorbed and re-emitted by atoms. Period. The questions are how much ends up as heat and how much of that is retained in the atmosphere/ocean. We can’t even agree if it’s a stochastic or chaotic system. This requires accounting for (among other things) changing energy inputs, changing chemistry (eg % CO2), changing atmospheric temperature and pressure, changing absorption spectrums, etc. Sounds like a whole lot of differential equations to me, not some half-witted attempt to statistically back-fit tree rings to snippets of temperature records of questionable accuracy.

    …and yes, I’m realistic enough to understand this will go on until sometime after funding dries up.

  35. Chip Javert says:

    @ RC Saumarez says:
    December 2, 2013 at 8:06 am
    I have some difficulty with this analysis.
    ===================================

    Not intending to speak for Willis, my understanding is this is a black box analysis – how models get from input to output. Willis’ analysis shows there’s probably not a whole lot of meaningful guts or analysis in the black box.

    I understand the black box analysis to be separate and distinct from Willis’ cloud models. You appear to be mixing the two.

  36. Nick Stokes says:

    Nic Lewis says: December 2, 2013 at 12:38 pm
    “No model constrains the TOA radiative energy flux imbalance to zero.”

    Well, I may be a little out of date there. Stephens et al (2012) say
    “Models are commonly tuned to the TOA, so direct comparison of TOA fluxes provides little insight into model performance.”
    “tuned to TOA” would mean constrained to zero unless there was other information on the expected imbalance.

    And I think 0.7 W/m2 is fairly small, especially as Willis’ regression will subtract the average.

  37. AndyG55 says:

    HadCrud is NOT an instrumental record!! Certainly not pre-1979.

    The continued use of this heap of adjusted garbage as being anywhere representative of the past temperature, really is borderline stupidity.

  38. Willis Eschenbach says:

    Terry Oldberg says:
    December 2, 2013 at 11:26 am

    Willis Eschenbach:

    Thank you for taking the time to reply.

    My thanks to you. I try to answer any honest question.

    If we interpret ∆T as you say we should then Equation 1 states a falsehood, for many different values for ∆T are associated with every value for ∆F but Equation 1 implies there is only 1 value. To state this objection in mathematical terms: the relation from the values of ∆F to the values of ∆T is not a functional relation but Equation 1 implies that it is.

    I’m not defending Equation 1. I’m just pointing out that it is the current paradigm. It is what the programmers believe. Equation 1 is taken as being true in some longer-term average sense.

    I don’t think it’s true in any sense. Me, I find the idea that the output of a horrendously complex driven resonant natural system is a simple linear function of the inputs to be risible, but I was born yesterday …

    w.

  39. Willis Eschenbach says:

    RC Saumarez says:
    December 2, 2013 at 8:06 am

    I have some difficulty with this analysis.

    You spent some time producing a model of cloud feedback, which you felt has implications for climate sensitivity.

    Now you have abandoned this in terms of a simple model. This model is simply a first order system which is represented by a first order differential equation.

    Thanks, RC. You are mixing up two things. One is my idea about how the climate works. I’ve shown and provided a host of observational support for the idea that temperatures are regulated by emergent climate phenomena on a host of temporal and spatial scales. That’s one model, a model of the climate.

    The other model is the one I describe above. It is not a model of the climate like the first one. It is a model of the climate models. I show above that assuming that Forster’s estimates of the forcings are accurate, the models are doing nothing more than lagging and scaling the inputs to produce the outputs.

    Those are two quite distinct and different models, which I have generally described in different posts, and I have not “abandoned” one for the other.

    All the best,

    w.

  40. Jquip says:

    @Eschenbach: “I’m not defending Equation 1. I’m just pointing out that it is the current paradigm. It is what the programmers believe.”

    Uh, a point of clarity here. It is not what programmers believe — it is expressly what the scientists believe.

  41. Nick Stokes says:

    “The second choice is that Forster et al have NOT accurately calculated the forcings.

    In that case, we have no idea what is happening, because we don’t know what the forcings are that resulted in the modeled temperatures.”

    The forcings that are actually used in say CMIP5 are spelt out in some detail, in terms of GHG gas concentrations. That’s what GCM’s work with.

    Forster et al don’t claim to have accurately calculated the inputs. They are trying, by studying (and effectively modelling) the output, to attribute reasons for variation. They say in their conclusion (my emphasis):
    “Issues remain around the definitions of AF and the assumption of constant climate sensitivity within a transient forcing framework. The forcing/climate sensitivity concept developed essentially for slab-ocean models at equilibrium obviously does not provide a complete picture of climate evolution in today’s non-linear AOGCMs. Nevertheless, we argue that forcings are useful for understanding why models differ in their gross behaviour and forcings explain the spread of RCP projections rather well.”

  42. Willis Eschenbach:

    Thanks for taking the time to respond and for stipulating your agreement with me on an issue. The paradigm that I hear from the climatological establishment is that there is a linear functional relation from the change in the forcing to the change in the equilibrium temperature. It is this paradigm which results in the popular contention that the equilibrium climate sensitivity has a numerical value of around 3 Celsius per CO2 doubling. As it is non-falsifiable, this paradigm is non-scientific. Do we agree on this issue?

  43. donald penman says:

    “Finally, to the question of the elusive “climate sensitivity”. Me, I hold that in a system such as the climate which contains emergent thermostatic mechanisms, the concept of “climate sensitivity” has no real meaning. In part this is because the climate sensitivity varies depending on the temperature. In part this is because the temperature regulation is done by emergent, local phenomena.”
    If this means “climate sensitivity” to CO2, then this raises the question of whether this sensitivity is a constant or is variable over time. If the “climate sensitivity” is variable, then what use are computer models using CO2 as a forcing? I would have more confidence in climate models if we were to see a large rise in global temperature in the next 30 years or so, but I feel that we are more likely to see a fall in global temperature in the next 30 years.

  44. Svend Ferdinandsen says:

    Nice to see you do these calculations.
    I have for a long time wondered what all these complex simulations at a lot of points should do anyway when they average over many years and over the whole globe. Sometimes they also claim that differences are because they have not sufficiently exact start conditions, but what does it matter when they average it all out and any difference in the beginning is completely lost after a few weeks anyway.
    It is however not the same as saying the climate models are useless. They can be used to investigate processes in the weather and climate or in the models, but not to make any useful forecast, not for any timescale.

  45. ferd berple says:

    Climate Sensitivity
    …. However, the models are built around the hypothesis that the change in temperature is a linear function of temperature.
    ===============
    Willis, maybe that should read “is a linear function of forcings”.

    [Thanks, Fred, fixed. -w.]

  46. ferdberple says:

    cd says:
    December 2, 2013 at 3:14 am
    This brings me on to your point on model being a Black Box. They aren’t,
    =============
    Willis didn’t say they were. He is analyzing them as a black box, to see if the internal logic (method) can be simplified. Computer programmers do this as a matter of routine.

    What Willis has found is that the climate models are basically Rube Goldberg machines. They perform a very complicated set of operations to deliver an extremely simple result.

    Willis would have more correctly labelled the posting:
    “Rube Goldberg Models”

  47. ferdberple says:

    Terry Oldberg says:
    December 2, 2013 at 11:26 am
    for many different values for ∆T are associated with every value for ∆F but Equation 1 implies there is only 1 value.
    =============
    That is actually an extremely important and broad ranging concept in physics. Victorian era (deterministic) physics held that for any 1 ∆F there could be only 1 ∆T. Quantum mechanics (probabilistic physics) allows many different ∆T’s for any 1 ∆F.

    However, in such a system the future cannot be realized as an average of all possible futures, which explains why the ensemble mean can be more accurate than any single model and yet will not converge to the actual future. Rather, it will drift in an unpredictable fashion, and no amount of computer simulation under our current understanding of physics can overcome this.

  48. dbstealey says:

    ferdberple says:

    “What Willis has found is that the climate models are basically Rube Goldberg machines. They perform a very complicated set of operations to deliver an extremely simple result.”

    Excellent!

    And at least Goldberg’s opened the door for the dog, so they did something productive…

  49. ferdberple says:

    ∆T = lambda * ∆F [Equation 1]
    =============
    this relationship implies that there can be only 1 ∆T for any 1 ∆F. However, the spaghetti graph of the individual model runs shows quite clearly that not even the models believe this to be true.

    it is only the scientists themselves that believe you can average the future and arrive at a meaningful result. it is a complete nonsense.

    For any 1 given starting position, there are a near infinite number of futures. The forcings do not determine the temperature, they only determine the probability of the temperature.

    To explain, let’s say that the future is 1/3 chance hotter, 1/3 chance cooler, 1/3 chance unchanged. You will arrive at one of these futures, but you don’t know which one.

    What do the climate models do? They average the future, and say that there is a 100% chance you will arrive in a future that is unchanged. Thus any change we see must be caused by humans.

    But that is not what physics tells us. We will not arrive at an average future, we will arrive at a specific future. And the models cannot tells us which specific future we will arrive at, because they see the future as an average, not a specific.

  50. cd says:
    December 2, 2013 at 3:44 am
    “This really explained the issue so eloquently that I had no problems following it – one for the layman: http://www.youtube.com/watch?v=hvhipLNeda4

    Thanks for that link, I’d not seen those time-lapse shots. The fog of ‘recurrent cars’ in the carpark shot struck a powerful chord in me! It was like a glimpse into the quantum world. Also amazed how many of my lay intuitions about the fundamental assumptions were supported in the video. I honestly can’t get past the first two words in the debate, “Global Warming”; the first impossible thing is the assumption that there can even be such a thing as global temperature! As for warming, how does a system that is out of equilibrium warm or cool! ;-)

  51. cd says:

    ferdberple says:
    December 2, 2013 at 6:00 pm

    Willis didn’t say they were.

    No but he said he was going to perform a “black box” analysis – again. Which as I stated is only one of many solutions – his targets also, as far as I’m aware, are summaries of a population of non-unique solutions. The point I was trying to make was that if he spent time actually investigating climate models and the basic algorithms, assumptions and so on he would be in a much better place to readdress this issue – something he looked at before with almost universal appreciation but seems to have gone nowhere with it.

    What Willis has found is that the climate models are basically Rube Goldberg machines. They perform a very complicated set of operations to deliver an extremely simple result.

    No what Willis has done, as far as I can tell, is fitted a function whilst finding the best parameters (tau and lambda). That is fine and the function follows from first principles (he’d already done something similar before). But, that doesn’t mean that he has stumbled upon some fundamental truth that the modellers are trying to hide. You could do this with any function via simulated annealing, and yes it mightn’t follow from first principles but you could still argue that they capture something important about the models. What infuriates me here is that he didn’t ask a modeller to help explain why this might be the case. And before you say…”well they don’t talk to skeptics”, there are modellers that are skeptics and skeptics that have a good appreciation of the models.

    I’m not qualified to comment, but I don’t see where any of this leads us, except “take my word for it”. He should’ve explained why the models are doing nothing more than the above algorithm, in terms of the model algorithms, assumptions, etc.; otherwise he has just fitted a function. He could’ve at least delved a little further by taking, say, one model, working out lambda and tau for the first 50% of the chronology and then, using these values, projecting beyond and seeing how well they match the rest of the model output.

  52. cd says:

    Scott

    He’s a clever guy. In my opinion, often the mark of clever people is that they can take complex things and make them appear simple. I’m glad you enjoyed it.

  53. TomVonk says:

    Nick Stokes
    .
    I freely admit that I didn’t spend much time with the details of this issue, but if I understood you well, the forcings used by Willis were not the forcings used by the models but “adjusted” forcings coming from some Forster paper.
    Using these “adjusted” forcings then leads to circularity and to correlations around 99%.
    Is this understanding correct?
    And if it is, why couldn’t one just substitute the real (non-adjusted) forcings for the (adjusted) red curves used by Willis and redo exactly the same analysis that Willis did?
    Or is it that the real (non-adjusted) forcings used by the models are not easily available in a ready-for-use format of spatial yearly averages?

  54. Nick Stokes says:

    Tom,
    The forcings actually used by the GCMs are things like GHG concentrations, or even emissions if they have a model for converting those to concentrations. Also TSI variations, etc. You can’t simply run those through a formula like ΔF = λΔT. The GHG concentrations are used in the radiative model’s absorption calcs. They don’t get explicitly converted within the GCM to W/m2.

    The forcings in W/m2 have to be diagnosed, usually from a back calculation from the T output. Forster et al used, in effect, ΔF = αΔT + N, where N is the TOA imbalance. However derived, you can’t use such an F for a black-box emulation of the models: you have only used information about one terminal, the output.

    Because N is relatively small, with longterm zero mean, Willis is in effect inverting the calc of Forster, and so naturally regenerates the output almost perfectly.

  55. rgbatduke says:

    To repeat what Nick said with some possibly useful backing, I’ve been posting this link:

    http://www.cesm.ucar.edu/models/atm-cam/docs/description/

    to WUWT at every reasonable opportunity. This is online documentation for the open source NCAR CAM GCM. The same site will let you download the GCM (Fortran source built to run in parallel over MPI) itself as well as its initialization data. The source code is not build-friendly or particularly well documented internally, and it requires a module that is “open” but behind one of those “you have to leave your email address to download” university websites (and I’m allergic to Fortran), so I haven’t actually gotten it to fully build in the half-day I was willing to invest in it while working full time on other things with a million items in my own personal queue. I could probably get a build to work if I spent a long weekend on it to the exclusion of all else.

    The point of posting this is that many people on WUWT make statements about GCMs that simply aren’t true, because they are based on their imagination of how GCMs actually work. I’ve been guilty of a certain amount of this myself over the years, even though I can imagine better than most how they work. So I strongly advise those who are most critical, most often, of GCMs on WUWT to pause and take a few hours and read/skim their way through the CAM documentation. I especially encourage a glance at section 3.1, which describes the actual dynamical core of the GCM — basically an Eulerian coupled PDE solver. You will not understand all of the details of the math unless you are a lot smarter and better educated than I am OR unless you take six months to work through the papers (clearly referenced throughout) that specify the basis for the dynamics AND already have a pretty awesome understanding of PDEs, numerical integration, constrained dynamical evolution, and at least a working knowledge of the individual, coupled, nonlinear problems being solved and how they attempt to coordinatize them.

    Note well that AFAICT, the entire dynamical system doesn’t address radiative forcing at all, at least not explicitly. It appears only as a derived parameter (computed by doing integrals over sub-processes per cell) in the “Energy Fixer” component, which is there to enforce global energy conservation (just as the previous component, the “Mass Fixer”, is there to enforce global mass conservation). Dynamical models of this sort, when run, will introduce numerical-error-based “drift” in parcel coordinates that over time will cause mass and/or energy errors to accumulate so that these quantities are not conserved, requiring that one check externally for this drift and correct for it. They are not trying to “fix” the result (in case the more paranoid want to interpret these terms that way) to come out some particular way, unless that way is “physically plausible” instead of “impossible”.
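
    A minimal sketch of the kind of global “fixer” step described here, assuming a simple rescaling of the field (CAM’s actual fixer is more elaborate; this only illustrates enforcing conservation by global constraint):

        import numpy as np

        def mass_fixer(field, cell_areas, target_total):
            """Rescale a gridded field so that sum(field * area) equals target_total."""
            return field * (target_total / np.sum(field * cell_areas))

        rng = np.random.default_rng(0)
        areas = np.ones(1000)                      # idealized equal-area cells
        field = np.full(1000, 1.0)
        target = np.sum(field * areas)             # globally conserved quantity before the step
        drifted = field * (1.0 + 1e-3 * rng.standard_normal(1000))   # numerical drift after a step

        fixed = mass_fixer(drifted, areas, target)
        print(np.sum(drifted * areas) - target)    # small but nonzero drift
        print(np.sum(fixed * areas) - target)      # ~0: global conservation restored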

    Section 3.2 describes the Semi-Lagrangian dynamical core, which describes the actual physics of the parcels (their transport). It is the part that e.g. maintains what we incorrectly/approximately refer to as the ALR, only it doesn’t do just “dry” or “wet”, but “any”, and it isn’t specifically adiabatic. It basically solves the fluid dynamics equations for the atmosphere, subject (again) to mass and energy conservation as an imposed global constraint.

    Once you get a bit of a handle on how they solve the dynamical equations within the model, you can tackle section 4, which describes the model physics. Note that CAM 3.0 explicitly includes the physics of vertical convection (dry, moist, or in between), evaporation and precipitation, parameterization of the cloud fraction (a component that could easily be wrong, as a perfectly written model should get the cloud fraction correct internally without any free parameters), parameterization of shortwave and longwave radiation, vertical diffusion, and sulfur chemistry (again, quite possibly wrong according to the paper Anthony posted a few days ago). This doesn’t mean that they get all of the physics right, or that they implement it in a way that will lead to the same results obtained for the same processes in another GCM — an ongoing problem in the GCMs being that they lead to very different behavior even ON AVERAGE, even for MUCH SIMPLER problems than the rather complex Earth — but one should not accuse them of ignoring clouds, or failing to compute vertical transport of latent heat in the form of water vapor, as neither of these statements is true. They may not model it CORRECTLY, but it isn’t for lack of trying, and if they get it wrong the error isn’t malicious.

    The model contains (comparatively simple) ocean dynamics as well. There is plenty to criticize in CAM here, but as Nick has pointed out there are more detailed models that do a more detailed (not necessarily “better” as in “more accurate”, but more detailed) job of treating oceanic dynamics. The problem here, as is the case in the atmosphere as well, is that we are still accumulating critical information regarding the actual underlying dynamics of both. There are assumptions built into the GCMs that may well be incorrect, in spite of the fact that the models per se evaluate a genuine “realistic” dynamical evolution that does indeed do things like conserve mass and energy (by global fiat as an enforced constraint). They may be incorrect, but they aren’t maliciously designed or implemented.

    Indeed, if you look at their complexity it would be remarkably difficult to “fix” any of the models to obtain some particular result by messing with their internals, just as it is very likely rather easy to unconsciously tune them parametrically to “work” against some training data according to one’s biases, rather than (necessarily) correctly. The same thing is true for the simplest nonlinear function optimizer/solvers — in many cases you have to “start” the model with inputs “close” to what you think they’ll end up being and then tune them to get the best fit, but in complex problems that initial guess can be completely wrong and yet lead you to a perfectly plausible solution. It is also entirely possible that the models make systematic errors in their internal physics caused by the way they discretize the system — indeed this could be a common factor in all the GCMs as they all discretize the system in related ways. I think that I could prove as a theorem that applying parcel (average) dynamical coordinates to radiative transport in and out of a parcel will introduce a systematic error that always underestimates the true radiative transport rates — a fact related to an existing theorem for greybody radiation. One wonders whether or not they explicitly renormalize these rates to correct for this, and one day perhaps I will look into the documentation and code in enough detail to figure this out.
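
    A quick numerical check of that convexity point, assuming the relevant nonlinearity is the Stefan-Boltzmann T^4 dependence: emission computed from a parcel-average temperature is never larger than the average of the emission over the sub-parcel temperatures (Jensen’s inequality for a convex function).

        import numpy as np

        sigma = 5.670e-8                                      # W m^-2 K^-4
        rng = np.random.default_rng(1)

        T_sub = 288.0 + 10.0 * rng.standard_normal(100_000)   # sub-parcel temperatures, K
        flux_from_mean_T = sigma * T_sub.mean() ** 4          # parcel-average approach
        mean_of_fluxes = sigma * (T_sub ** 4).mean()          # averaged emission

        print("emission from mean T: %.2f W/m2" % flux_from_mean_T)
        print("mean of emissions:    %.2f W/m2 (always >=)" % mean_of_fluxes)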

    The CAM 3 model also suffers from using a lat/long grid, which is a terrible way to do integration on a sphere (simple fact) even with crude renormalization near the poles. Whether or not this can lead to systematic errors is hard to say — in principle you can push an adaptive integral on a sphere in spherical polar coordinates to convergence, but in practice it is remarkably difficult and prone to numerical error (again, simple fact as this is stuff I’ve had to do in the past — numerically integrating on a sphere is a PITA).

    But anyway, I’d suggest one last time that EVERYBODY who loves to criticize GCMs take a moment and look over at least this one model. Nothing is used as a model input that might be construed as “total forcing”, nothing is used internally PERIOD that could be called “climate sensitivity” or “feedback” as an externally imposed parametric process. The PDEs being solved implicitly contain all “forcing” and feedback, where I include quotes on the term forcing to indicate that this term shouldn’t really be used anywhere in a GCM. There are no “forcings”, there are just couplings — multivariate dynamical terms in the enormously complex PDE being solved. Some of these terms describe energy transport into parcels or energy transfer between the surface and a parcel. That doesn’t make them “forcings” — the term has no real meaning in physics.

    rgb

  56. The GCMs have undoubted strengths. However, they also have undoubted weaknesses. The weaknesses include:
    * insusceptibility to being either falsified or validated by the evidence and,
    * failure by them to provide policy makers with information about the outcomes from their policy decisions.
    Their weaknesses disqualify the GCMs from providing a basis for the regulation of greenhouse gas emissions. In representing that the GCMs are suitable for this purpose, climatologists have done us a grave disservice.

  57. Nic Lewis says:

    Nick Stokes:
    “Because N is relatively small, with longterm zero mean, Willis is in effect inverting the calc of Forster, and so naturally regenerates the output almost perfectly.”

    That’s not accurate, I’m afraid. N should have a zero mean in equilibrium, but it won’t have a zero mean otherwise. And although 0.7 W/m2 may be small in absolute terms, it is substantial in relation to α ΔT for the same recent period, which is about 1.0 W/m2. So N accounts for about 40% of the derived total forcing.
    The accuracy of Willis’s temperature matching reflects not only the 60% of ΔT that depends on F but also the fact that N is related to ΔT – in a simple way if you assume a 2-box ocean model.

  58. Nic Lewis says:

    rgbatduke on December 3, 2013 at 6:44 am:

    Thank you for taking the trouble to make your excellent, very valid, comment. IMO it deserves to be elevated to a full post in its own right. As you say, there is much misunderstanding by readers of WUWT as to how GCMs work.

  59. Svend Ferdinandsen says:

    Please correct me if I am wrong:
    The GCM’s are somehow tuned with aerosols and CO2 to replicate the past climate (temperature), which they do pretty well.
    If that’s the case, you cannot use the same GCM’s to prove that CO2 will increase the temperature. It would be a circular proof. And you cannot use the fact that they simulate past temperatures as validation of the models.

  60. rgbatduke says:

    The GCM’s are somehow tuned with aerosols and CO2 to replicate the past climate (temperature), which they do pretty well.

    I think this is correct, as far as it goes. Past temperatures over some finite interval, and not exactly “replicate” but rather “produce results that you could squint a bit and convince yourself are generally the same”. However, AFAIK, no GCMs can replicate the last 1000 years at any meaningful accuracy, especially the most prominent features (Medieval WP, LIA, Modern WP). They really can only replicate the Modern WP, and then only if you don’t insist that they get certain details right. But part of the problem there is that we don’t have actual measurements of e.g. aerosols globally for much of even the last 160 years. We don’t have particularly precise land/atmospheric measurements of anything until perhaps 50 or 60 years ago (being generous, I personally would say 30 to 40 years ago) and we don’t have particularly good oceanic measurements until as little as ten years ago, again perhaps stretchable to 30 or 40 in at least some places. Since the Earth was monotonically warming for much of that stretch, it made it pretty easy to “replicate” — any sort of linearization will have parameters that can reproduce it.

    The hard part is the places where the temperature is not monotonic and approximately linear. The GCMs suck over the last 15-20 years when temperatures had one sudden jump in 1997/1998, a bounce, and then have been basically flat ever since within normal climate/weather noise. There are further problems when one considers the details of GCM predictions, not just the average surface temperature anomaly. They get many of these details wrong.

    I don’t know why it is so difficult for people to doubt that the GCMs are working correctly, or so easy for at least some climate scientists to believe that sooner or later the climate is going to behave the way that they are predicting in spite of their poor performance once one gets outside of the training set where the correspondence you refer to was a requirement to be taken seriously in the first place. If the models did not fit past data prior to 1998 or thereabouts particularly well, we wouldn’t expect them to work particularly well in the future. Why is it that not working well in the future (of 1998) isn’t a similar flag that they aren’t working well in general, and can reasonably be doubted?

    rgb

  61. Willis Eschenbach says:

    Robert, thank you as always for your detailed, cogent, and clear exposition of your always interesting and valid views. Much good stuff. One comment:

    rgbatduke says:
    December 3, 2013 at 6:44 am

    … Indeed, if you look at their complexity it would be remarkably difficult to “fix” any of the models to obtain some particular result by messing with their internals, just as it is very likely rather easy to unconsciously tune them parametrically to “work” against some training data according to one’s biases, rather than (necessarily) correctly.

    All of the models, without exception, are evolutionarily tuned to hindcast at least sorta-kinda close to the historical record. The ones that couldn’t do it died.

    However, there are ways to “obtain some particular result by messing with their internals”. Take this quote from Gavin Schmidt et al, for example:

    The net albedo and TOA radiation balance are to some extent tuned for, and so it should be no surprise that they are similar across models and to observations.

    or this one, op. cit.

    The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within 0.5 W/m2 of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.

    This is a most curious admission … it implies that (for the models at least) the main variable that affects the TOA radiation imbalance is not CO2.

    Instead, it is the humidity threshold for cloud formation. Now, they are clearly using that humidity threshold as a constant … but things in nature rarely work like that. Humidity which is adequate for cloud formation in one time and place will be inadequate for cloud formation in another time and place.

    And indeed, given the effects of aerosols on clouds, which are known to alter the clouds’ thresholds for formation via a variety of physical and chemical processes, there may be a secular drift in this all-important variable. Dang … another totally unexplored avenue for explaining slow drift in planetary temperatures, the mystery variable U00. Gotta love the name, sounds like something James Bond would use to temporarily disable an enemy …
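
    A minimal sketch of what that kind of single-parameter tuning looks like, using a made-up toy imbalance function (the lowercase u00 and all numbers below are purely illustrative, not the GISS code): bisect on the threshold until the control-run TOA imbalance sits within 0.5 W/m2 of zero.

        # Toy single-parameter tuning loop; the "model" is a made-up linear diagnostic.
        def toa_imbalance(u00):
            """Stand-in for a control-run diagnostic: imbalance in W/m2 as a made-up
            decreasing function of the cloud-formation humidity threshold."""
            return 3.0 - 4.5 * u00

        lo, hi = 0.0, 1.0                   # bounds on a relative-humidity threshold
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if toa_imbalance(mid) > 0.0:    # still gaining energy in the toy -> raise threshold
                lo = mid
            else:
                hi = mid
            if abs(toa_imbalance(mid)) < 0.5:
                break

        print("tuned u00 = %.3f, TOA imbalance = %+.3f W/m2" % (mid, toa_imbalance(mid)))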

    All the best,

    w.

  62. Nick Stokes says:

    Nic Lewis says: December 3, 2013 at 9:44 am
    “That’s not accurate, I’m afraid. N should have a zero mean in equilibrium, but it won’t have a zero mean otherwise.”

    The mean would be basically the amount of heat that goes into the oceans divided by the time interval. And since one is bounded and the other not, that is pretty much zero longterm.

    But anyway, the math goes like this. We start with output ΔT. Forster gets
    ΔF = α ΔT + N
    Willis then gets λ ΔT = ΔF by regression, applying some delay via exponential smoothing, and says that it is a very good fit. But that is just λ ΔT = α ΔT + N, so the discrepancy tells us something about N, but nothing about the models. And I guess what it says about N is that it is fairly well aligned with ΔT, so a difference in regression parameters can accommodate it fairly well, and any remaining effect fades under Willis’ smoothing.
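
    A toy numerical illustration of that algebra, with made-up series and constants rather than the CMIP5 data: build ΔF from the model’s own ΔT the Forster way, then regress, and a near-perfect fit falls out by construction.

        import numpy as np

        rng = np.random.default_rng(42)
        years = 140
        dT = np.cumsum(0.01 + 0.1 * rng.standard_normal(years))   # fake GCM temperature output

        alpha = 1.3                                      # W/m2 per K, illustrative
        N = 0.5 * dT + 0.2 * rng.standard_normal(years)  # TOA imbalance, loosely tied to dT
        dF = alpha * dT + N                              # "diagnosed" forcing, Forster-style

        lam = np.polyfit(dF, dT, 1)[0]                   # one-parameter regression
        r = np.corrcoef(dT, lam * dF)[0, 1]
        print("correlation between model dT and lambda*dF emulation: %.3f" % r)   # ~0.99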

  63. Svend Ferdinandsen:

    The GCMs are insusceptible to being validated as the underlying population is required but does not exist. They are “evaluated,” a process that can take place in the absence of the underlying population.

  64. Nick Stokes says:

    rgbatduke says: December 3, 2013 at 12:33 pm

    A good summary. I often quote CAM 3 too, and it’s very convenient to link to. They are up to CAM 5 now.

    As you say, the code implementation of the radiative effect goes into the energy fixer, because it doesn’t do anything to interact with the multi-minute timescale of the dynamic core. But I think their description of the physics in sec 4.9 is interesting, though they have mostly delegated discussion of CO2 to the papers.

  65. rgb:

    I’m sorry to find you claiming once again that the GCM’s predict. That’s not something that they do.

  66. Willis Eschenbach says:
    December 3, 2013 at 12:40 pm

    “Instead, it is the humidity threshold for cloud formation. Now, they are clearly using that humidity threshold as a constant … but things in nature rarely work like that. Humidity which is adequate for cloud formation in one time and place will be inadequate for cloud formation in another time and place.”

    After reading a revealing, if off the cuff remark, by a modeller, I have been worrying about it ever since:

    “The reason most climate modelers will tell you that atmosphere models are worthless junk, is partly the non-equilibrium behavior of water vapor. (Take two masses of air, at the same temperature and pressure, with the same water content. One will have clouds, the other won’t.) The other problem is that it is just too computationally expensive to run large scale simulations with data at every thousand feet–and even those won’t catch some of water’s trickier behavior(sic).”

    And:

    “Finally, just to drive you up the wall–clouds can remove CO2 from the air, and of course, release some of it when it rains. Since the level (feet or meters from the surface) of CO2 determines whether it traps or releases heat, good models try to deal with this as well.”

    It’s not just that clouds may be overlooked. It’s the sheer complexity and the size of the effect of just a single cloud that bothers me. Just a cursory look at what is going on shows that it dwarfs all the other problems with the models. I’m not convinced that enough thought is given to the way CO2 is handled by cloud formation. Carbonic acid and bicarbonate ions, for example, are things you would think are significant issues. That puffy white cloud in the sky doesn’t represent relative humidity or atmospheric composition. It is more like a tank of water than a cloud of gas!

  68. rgbatduke says:

    However, there are ways to “obtain some particular result by messing with their internals”.

    I think this is a matter of my not being clear as to what I meant. The models definitely have parameters and thresholds. Some of those parameters have virtually no play and are fixed a priori from reliable observations that nobody will argue with — for example, the Earth’s diurnal and orbital parameters. Others are not so tightly fixed, but are nevertheless pretty well constrained by experiment and external knowledge — the bulk modulus of air as a function of temperature, for example. Still others are not well determined at all or are entirely absent — aerosols, soot.

    There are definitely ways of monkeying with parameters (given a fixed model) to get the model to reproduce some desired result. When building statistical models, at least, this “desired result” is called the training set, and some set of model parameters are optimized against the training set. The success of the model, thus optimized, is often then further tested against a (supposedly separate/independent) trial set of data before releasing the model into the wild and “trusting” its predictions, although in reality every new prediction compared to all new experience as it comes in extends the trial set and a model can fail at any time no matter how well it has done up to that point. When building semi-empirical physical models I’m sure the nomenclature is somewhat different but the game is still the same — the model has to reproduce some known data to be taken seriously, but then has to continue to reproduce future data in order to continue to be taken seriously. If rocks start falling up tomorrow, we’ll have to reconsider the law of gravitation, no matter how well it has done up to now. (For example: In the brief time available while we are being blown out into space by the rapidly expanding atmosphere, “reconsidering” might look like “Oh shiiiit, gravity stopped working….” before our blood boils and we die. :-)
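
    A minimal sketch of that train/trial procedure with a generic one-parameter model and synthetic data (nothing to do with any particular GCM):

        import numpy as np

        rng = np.random.default_rng(7)
        x = np.linspace(0, 10, 200)
        y = 0.8 * x + rng.standard_normal(200)          # synthetic "observations"

        train, trial = slice(0, 100), slice(100, 200)   # first half trains, second half tests

        # Tune the single free parameter against the training set only (least squares through 0).
        k = np.sum(x[train] * y[train]) / np.sum(x[train] ** 2)

        rmse_train = np.sqrt(np.mean((k * x[train] - y[train]) ** 2))
        rmse_trial = np.sqrt(np.mean((k * x[trial] - y[trial]) ** 2))
        print("fitted k = %.3f" % k)
        print("RMSE on training data: %.3f" % rmse_train)
        print("RMSE on held-out data: %.3f (the model can still fail here, or later)" % rmse_trial)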

    What I meant to say is that the internal computational core itself — the dynamical equations being solved that describe the various transport and diffusive processes — isn’t particularly easy to monkey with in such a way that the result works out some particular way. For example, adding a subroutine called “CreateWarming()” that takes all temperatures in each timestep and adds 1/100,000 C (to get a few degrees net warming, one presumes, over enough timesteps) would a): be really pretty obvious and all sorts of honest people working on the code would object; and b) not work, because the dynamics would just eliminate the warming once it got above some very, very small threshold. It’s not easy to see how one could deliberately monkey with the theory of transport and its method of numerical solution to favor a particular outcome, and even heavy handed insertions would be as likely as not to have nonphysical, obvious consequences as to produce the desired result.

    So sure, parametric tuning goes without saying. Tuning the computational core, however, is more a matter of writing code that implements defensible algorithms in a testably reliable way. Errors can easily be made either in the algorithms themselves or in the code that implements them, but just as in nature most mutations are lethal, in complex code a lot of mutations are similarly lethal. Trying to create an “error” that produces a specific desired result (without killing the host or depending on some manifestly nonphysical setting of some parameters) is really rather difficult.

    It isn’t quite impossible, of course. Omitting some relevant physics altogether (deliberately or not) can do it. So can getting some relevant physics wrong and believing too strongly in the incorrect output instead of nature where the two differ. Making an algorithmic error that produces a monotonic displacement of some result in a desired direction is one of the most difficult errors to discover or prove, again independent of whether the error is deliberate or accidental. An example of the latter might be truncating an adaptive computation at some granularity that happens to produce results you “like”, believing it to be fully converged (or unwilling to continue to find OUT if you are fully converged while you are getting favorable results).

    In the old days, I used to jokingly cite “The Fundamental Theorem of Diagrammatic Physical Theory”, which one would hear invoked as many as 8 times in 10 talks in certain venues: “All of the diagrams that are not included in this particular computation do not significantly contribute.” Never mind that the included diagrams were all of the ones they could afford to compute, or all of the ones they could algebraically work out well enough to compute, never mind if one could fairly easily prove that the diagrammatic theory in question was asymptotic and wouldn’t converge if one summed all the diagrams in any accessible order — if you listened for it, this “theorem” would be invoked in the introduction somewhere and one would then be treated to a half hour of mind numbing discussion of one loop and two loop diagrams, of ladder diagrams, of diagrams containing so and so many vertices, leading to a result that might or might not be in good correspondence with measurement. I myself did a long computation where the result appeared to be converging to a “nice” number, so close that I started to really believe that it was that number as an exact result, until I did the computation still more precisely (and more expensively) and it finally stopped converging on that number and ended up differing, stably, in the fifth significant digit and beyond.

    The point is that semi-empirical computational models by definition have adjustable parameters, and I don’t think anybody would assert that GCMs are ab initio, with every bit of theory and every physical parameter and constant fixed by highly certain prior knowledge. The physics itself is somewhat provisional, the dynamical engine is approximate, the spatiotemporal granularity is suspect and to some extent untested (or untestable, at least at this time). There are many parameters, and many of those parameters, even though they are treated as fixed (albeit loosely known) values may actually be dynamical variables themselves — solar output power, for example — with imprecise or no models regulating them. There is without doubt a fair bit of semi-empirical parametric tuning, because without it the model would probably egregiously fail. There is less tuning at the algorithmic or physical level — there one has to justify the equations used somewhere, somehow. One can’t just make stuff up and throw it in to get some result to come out the way you want it to, at least not in a theory you are claiming is demonstrably correct rather than a theoretical/numerical exploration of a hypothetical cause or effect. Simply working within real world limitations of computer memory and time and systematic numerical error already introduces plenty of problems.

    What is amazing about these particular dancing bears isn’t how gracefully they dance but that they dance at all — that they produce a facsimile of a “climate future”. If it weren’t for the multitrillion dollar bet and all of the vested interests involved, nobody would for a minute claim that the bears are dancing gracefully — yet. One day they might, but so far the dance is pretty ugly compared to the dance of nature itself. In the meantime, you can try all you like to get the bears to do better by means of bribery or threats, but what you really need is to rebuild the bears into Russian ballerinas for them to be able to do a decent rendition of Swan Lake.

    rgb

  69. rgbatduke says:

    rgb:

    I’m sorry to find you claiming once again that the GCM’s predict. That’s not something that they do.

    Terry:

    I’m sorry to find you once again objecting to my calling a number produced by a putatively well-founded, physics-based computational model, and referring to a measurable future value, a prediction.

    rgb

  70. cd says:

    rgb, appreciate your earlier post. I agree with its sentiment.

    Can I add, however, that they may be physics-based, but the implementation of the physics is very poor. Another point, as Willis has mentioned: they are not totally deterministic. They have stochastic components – their projections are highly dependent on starting conditions and starting assumptions. In short, each run is a non-unique solution.

  71. Nick Stokes says:

    Scott Wilmot Bennett says: December 3, 2013 at 1:50 pm
    “The reason most climate modelers will tell you that atmosphere models are worthless junk, is partly the non-equilibrium behavior of water vapor. (Take two masses of air, at the same temperature and pressure, with the same water content. One will have clouds, the other won’t.) The other problem is that it is just too computationally expensive to run large scale simulations with data at every thousand feet–and even those won’t catch some of water’s trickier behavior(sic).”

    All of those criticisms apply to the similar programs used for numerical weather forecasting. Yet they are not worthless. They are heavily relied on by people who really want to know about the coming weather.

  72. Derek Alker says:

    In short, and for the layman, what Willis appears to be saying is that in the IPCC’s GCM climate models,
    CO2 input = temperature output.

    If correct, then this is exactly what Murry Salby has been saying, but just explained by Willis
    (to be honest) in a far more complicated, if not obtuse, way.

  73. rgb:

    Use of “prediction” in the manner that you suggest serves to obscure pathological features of global warming climatology that I prefer to expose. You can get an understanding of the mechanism by which these features are obscured by reading my peer-reviewed article at http://wmbriggs.com/blog/?p=7923 . In brief, this mechanism is application of the equivocation fallacy.

  74. Leo Morgan says:

    Dear Willis,
    Thanks for your exemplary demonstration of how to take correction.
    Your behaviour is in marked contrast to that of the hockey team, who deny, dissemble and engage in every sophistry. It’s that difference that makes you credible, and them not.
    Sure, none of us like being caught out having made an error. Especially when those who disagree with us are keen to go “Har Har, look at what that bonehead did.” It’s embarrassing, and it takes courage to get back on that horse again.
    Without willingness to speak one’s mind despite the certainty of making mistakes along the way, there is no progress.

  75. Willis Eschenbach says:

    Leo Morgan says:
    December 3, 2013 at 6:15 pm

    Dear Willis,
    Thanks for your exemplary demonstration of how to take correction.
    Your behaviour is in marked contrast to that of the hockey team, who deny, dissemble and engage in every sophistry. It’s that difference that makes you credible, and them not.

    Thanks, Leo. Falsification is the heart and soul of science, and it works best when the person both notices and publicly acknowledges that their work has been falsified. Don’t like it when it happens to me … but the fastest way onwards is to admit it and move forward.

    Plus, of course, I always learn the most when my work is falsified. I learn things that way I’d never learn if I didn’t have this marvelous peer review system called WUWT, to keep me from continuing some unproductive line of inquiry, and to clarify and correct my thinking.

    My best to you,

    w.

  76. cd says:

    Nick

    All of those criticisms apply to the similar programs used for numerical weather forecasting.

    That’s a bit naughty. The spatial resolution of short-term weather models is far, far finer than that of climate models because of the shorter time period. The higher resolution also means that short-term atmospheric models can deal with sensitivity to starting conditions in a far more detailed and accurate manner.

  77. Nick Stokes says:

    cd says: December 4, 2013 at 1:41 am
    “That’s a bit naughty. The spatial resolution of short-term weather models is far, far finer than climate models because of the shorter time period.”

    Not really. The stated criticisms referred to the variable behaviour of water vapor/clouds, and the failure to achieve 1000ft resolution to do water “properly”. That applies to both climate models and NWP. The NCDC Global forecasting system (GFS) uses 28 km cells. GCMs tend to use 100 km or so for cost reasons, but that doesn’t make them worthless.

  78. cd says:

    Nick

    The up-scaling required in order to simulate energy transfer is obviously proportional to the size of your cells. It has huge implications; to say that it doesn’t would make one wonder why they bother using ever more powerful computers for the express purpose of increasing discretised volumetric resolution.

  79. rgbatduke says:

    Can I add, however, that they may be physics-based, but the implementation of the physics is very poor. Another point, as Willis has mentioned: they are not totally deterministic. They have stochastic components – their projections are highly dependent on starting conditions and starting assumptions. In short, each run is a non-unique solution.

    Sure, but that’s a feature, not a bug. Weather is chaotic, and these things work by literally integrating the weather forward in time (in e.g. five minute timesteps). A butterfly-wing-flap change in the initial conditions creates an entirely different pattern of both weather and climate five or ten years into the future. The idea is that probable climate is determined by averaging over a sampled distribution of possible weather futures. Sadly, this is not actually the case — what is determined by the sampled distribution of the possible weather futures is precisely that — the distribution of possible weather futures conditioned on the assumptions built into the model program.
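
    A tiny demonstration of that butterfly-wing sensitivity, using the classic Lorenz-63 system as a stand-in for “weather” rather than anything from a GCM: two runs differing by 1e-10 in one coordinate end up completely decorrelated.

        import numpy as np

        def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
            """One forward-Euler step of the Lorenz-63 equations."""
            x, y, z = state
            return state + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

        run_a = np.array([1.0, 1.0, 20.0])
        run_b = run_a + np.array([1e-10, 0.0, 0.0])    # butterfly-wing perturbation

        for _ in range(3000):                          # ~30 time units of "weather"
            run_a, run_b = lorenz_step(run_a), lorenz_step(run_b)

        print("separation after 3000 steps:", np.linalg.norm(run_a - run_b))
        # The separation grows to the size of the attractor itself.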

    The real world, on the other hand, follows a single trajectory that (as we can see by comparing the actual data to the actual model outputs) may not resemble any of the sampled trajectories and may not even be within the envelope of the distribution of possible weather futures for any given model. Indeed, I’m pretty sure that for many of the CMIP5 models, the actual weather/climate trajectory for the last 15-20 years is not within the envelope of possible weather futures for the models, which is why I advocate rejecting those models on the basis of hypothesis testing.

    As to whether or not the physics implementation is “poor” — that’s very difficult to judge. Again the issue is in part chaos — if one writes nonlinear ODEs for a chaotic oscillator, the implementation of the physics can be identical on two different systems as far as the definition of the ODEs themselves is concerned, but if you either feed the exact same initial conditions into two different differential equation solvers or infinitesimally different initial conditions into a single differential equation solver or the same initial conditions into a single differential equation solver but with slightly different settings for the tolerance and error, you may well find that the solution goes to completely different places quite rapidly. The same is true for the entire class of “stiff” differential systems, or systems where the derivatives are highly sensitive to small numerical errors (e.g. ones involving subtracting two big numbers to get a small number to get the slopes for the next time step). In some cases there IS NO “good” physics implementation, at least not one that we can (yet) compute.

    Ordinarily one would judge the quality of the implementation of a large, multi-component problem by seeing how it works, by comparing its output to the real world. But failure doesn’t really tell you whether it is the physics per se that is or isn’t well done, or if the failure is numerical, or if it is produced by simple bugs in the code, or if the model omits important physics, or if the model simply cannot be run at the requisite granularity to get the right answer. If you try integrating ANY complex system with a non-adaptive ODE solver, you basically are betting that the errors produced by e.g. Euler’s method (or any more sophisticated scheme) at the granularity you can afford to reach can be somehow regularized or renormalized so that they don’t accumulate systematically. There are plenty of problems for which this assumption is simply incorrect — plenty of very SIMPLE problems for which it is incorrect. Try integrating a planetary orbit with Euler’s method with “large” timesteps, or simply use two different small timesteps. You rapidly get into trouble — the very kind of troubles that the “mass fixer” and “energy fixer” components of CAM are trying to ameliorate. But “fixing” the mass and energy (or energy and angular momentum) ex-post-facto in a numerical integration — which might work adequately for a “simple” two body planetary orbit — is unlikely to work well in a nonlinear Navier-Stokes problem because the system can easily have completely distinct solution classes — different attractors, as it were — that a true integration will jump between while a constrained integration may find itself locked to a single attractor. It is more or less impossible to prove that any given renormalization like this or truncation of an adaptive method will preserve the important gross FEATURES of the actual trajectory.
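
    For the planetary-orbit example, a quick check (forward Euler on a circular two-body orbit, GM = 1, illustrative only) of the energy drift that “fixer” steps are meant to patch up after the fact:

        import numpy as np

        def euler_orbit_energy_drift(dt, n_steps):
            r = np.array([1.0, 0.0])        # position: circular orbit of radius 1
            v = np.array([0.0, 1.0])        # velocity
            e0 = 0.5 * v @ v - 1.0 / np.linalg.norm(r)
            for _ in range(n_steps):
                a = -r / np.linalg.norm(r) ** 3
                r, v = r + dt * v, v + dt * a
            return (0.5 * v @ v - 1.0 / np.linalg.norm(r)) - e0

        for dt in (1e-2, 5e-3):
            steps = int(10 * 2 * np.pi / dt)           # ten nominal orbital periods
            print("dt = %g: energy drift after 10 orbits = %+.4f"
                  % (dt, euler_orbit_energy_drift(dt, steps)))
        # The drift shrinks with the timestep but never vanishes; it accumulates systematically.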

    rgb

  80. Bart says:

    Nick Stokes says:
    December 4, 2013 at 5:41 am

    This is OT, but I believe you said some time ago that you had published a method for estimation of the Laplace transform of a system given I/O data. I would like to learn more about your formulation, and would appreciate a citation/link of some sort. Thanks.

  81. rgbatduke says:

    rgb:

    Use of “prediction” in the manner that you suggest serves to obscure pathological features of global warming climatology that I prefer to expose. You can get an understanding of the mechanism by which these features are obscured by reading my peer-reviewed article at http://wmbriggs.com/blog/?p=7923 . In brief, this mechanism is application of the equivocation fallacy.

    Dear Terry,

    As far as I’ve been able to tell, over our several exchanges on this subject, nobody really cares if you think that the equivocation fallacy is relevant to the problems with GCMs. Whether they predict, project, prophesy, or pretend, they solve physics-based model systems of equations that are tuned to the past, initialized in the present, and compute a putative system state into the future. Name that future state any way that you prefer, but do not pretend that the name you use has anything whatsoever to do with the single relevant question — is that future state — be it a prediction, a projection, a prophecy, or a complete fiction invented to sell snake oil to gullible natives — a reliable representation of the actual future of the system being modelled?

    The only point in correcting language is when there is a serious misunderstanding associated with the usage and when the general population of language users agree that the correction is relevant. Both are necessary — lie and lay and good and well are both excellent examples of cases where literal application of terms could lead to serious misunderstandings — “I’m doing good” as colloquially used does not, in fact, literally mean “I’m doing well”, in spite of the fact that all of us who do indeed understand the difference between a noun and an adverb still understand those who use the former phrase perfectly — errm – good — as intending the latter phrase.:-) We rarely correct adults who use it incorrectly (especially in the south, where the incorrect usage might even be the norm, and hence according to the true rules of language, no longer be incorrect usage), however, as they might well be offended and knock out our teeth in response and besides, communication is well-established even with the malapropism.

    Just a thought. It might help if you stop acting as the WUWT grammar/usage nazi and instead concentrate on the relevant issues, which are not linguistic, they are numerical and statistical. Not that this suggestion will have any impact on your future behavior — if you were capable of responding to it you would have already done so.

    rgb

  82. rgbatduke says:

    This is OT, but I believe you said some time ago that you had published a method for estimation of the Laplace transform of a system given I/O data. I would like to learn more about your formulation, and would appreciate a citation/link of some sort. Thanks.

    What are you looking for, Bart? Matlab and Mathematica have Laplace transforms built in. The GSL and many other numerical packages have numerical integration tools galore. And the transform itself is basically a real integral evaluated on a grid — there are two or three methods for solving it efficiently that you can google up pretty easily, and you can probably find C/C++ source (at least) in the open source world. For data on a fixed grid, you can probably get by with a simple summation, F(s_j) = Σ_i f(x_i) exp(−s_j x_i) (for s_j in a list and x_i in some normalized interval). If the data is on a variable grid, you’ll have to weight the sum with the variable interval. If you want to get fancier, you can take the data and e.g. spline it and then run a Laplace transform with an actual quadrature routine on the spline, but I doubt that is going to improve your result and it makes assumptions about the smoothness of the data. Finally, you can probably use an FFT to do an LT as the two convolutions are basically the same with the LT a special case of the FFT, although I haven’t ever done it.
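
    A minimal version of that fixed-grid summation, written out as a sketch (brute force, per the description above, not Nick’s inverse-transform method), with a sanity check against a known transform pair:

        import numpy as np

        def laplace_transform(f_samples, x, s_values):
            """Approximate F(s) = integral f(x) exp(-s x) dx by a weighted sum on the grid x."""
            dx = np.gradient(x)                       # handles a variable grid via local spacing
            return np.array([np.sum(f_samples * np.exp(-s * x) * dx) for s in s_values])

        # Sanity check against a known pair: f(x) = exp(-2x)  ->  F(s) = 1 / (s + 2).
        x = np.linspace(0.0, 20.0, 4001)
        f = np.exp(-2.0 * x)
        s = np.array([0.5, 1.0, 3.0])
        print(laplace_transform(f, x, s))             # approximately [0.4, 0.3333, 0.2]
        print(1.0 / (s + 2.0))                        # exact values for comparison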

    But Matlab, and maybe Octave too (haven’t looked), can do it out of the box, if not for a data list then for a function that interpolates a data list.

    rgb

  83. Bart says:

    rgbatduke says:
    December 4, 2013 at 9:42 am

    Thanks, RGB. Mainly, I wanted to see if Nick had a particularly robust and efficient (in both senses) formulation.

    When dealing with real world stochastic data, the processing is not as straightforward as you might imagine. For example, using the FFT directly for PSD estimation is not efficient in a statistical sense. The variance does not decrease with longer record length, and you must perform additional processing to trade off bias and variance to obtain a good result. Choosing how to balance bias and variance is the classic conundrum in estimation theory.

    Also, the convolution of an LT (or Z-transform, as it would become for discrete time data) is significantly different from an FFT. The FFT makes use of periodicities to drastically reduce the number of mathematical operations. When your kernel is non-unitary, you lose that advantage.
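
    A small numerical illustration of that variance point, for white noise (a sketch only, not the full bias/variance story): the per-bin scatter of a raw periodogram does not shrink as the record gets longer, whereas averaging periodograms over segments does reduce it.

        import numpy as np

        rng = np.random.default_rng(3)

        def periodogram(x):
            X = np.fft.rfft(x)
            return (np.abs(X) ** 2) / len(x)

        short = periodogram(rng.standard_normal(1024))
        long_ = periodogram(rng.standard_normal(16 * 1024))
        welch = np.mean([periodogram(seg) for seg in rng.standard_normal((16, 1024))], axis=0)

        for name, p in [("1k-sample periodogram", short),
                        ("16k-sample periodogram", long_),
                        ("16-segment average", welch)]:
            print("%-24s relative scatter of bins: %.2f" % (name, p[1:-1].std() / p[1:-1].mean()))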

  84. Nick Stokes says:

    Bart says: December 4, 2013 at 8:46 am
    Bart,
    Yes, the paper is here. Unfortunately I can’t find a non-paywall version.

  85. Nick Stokes says:

    rgbatduke says: December 4, 2013 at 9:42 am
    “What are you looking for, Bart? Matlab and Mathematica have laplace transforms built in.”

    It’s an inverse Laplace Transform method. But it is in Matlab.

  86. Nick Stokes says:

    I’ve put a copy of the inverse Laplace paper here. There’s a recent review article here which explains and compares various methods, including ours. It has now been published in Numerical Algorithms.

  87. Bart says:

    Nick Stokes says:
    December 4, 2013 at 1:29 pm

    Thanks, Nick. Not exactly what I was expecting, but it might be useful.

  88. Bart says:

    Which is to say, I am sure it is useful, but it might be useful to me.

  89. rgb:

    Thank you for taking the time to reply. By your speculation that “nobody really cares if you think that the equivocation fallacy is relevant to the problems with GCMs,” you make an ad hominem argument. As most of us know, an ad hominem argument is illogical.

    A fact of global warming climatology, which you steadfastly refuse to address, is that no statistical population underlies the IPCC climate models. What say you about the absence of this population?

  90. Willis Eschenbach:

    I’m sorry to have missed the excellent comments that you left on my testimony to an EPA hearing. I learned that Anthony had published my submission only yesterday and after the opportunity to respond had passed. If you’d like and are still tuning in, I’ll respond in this thread.
