The Global Climate Model clique feedback loop

Elevated from a WUWT comment by Dr. Robert G. Brown, Duke University

Frank K. says: You are spot on with your assessment of ECIMs/GCMs. Unfortunately, those who believe in their ability to predict future climate really don’t want to talk about the differential equations, numerical methods or initial/boundary conditions which comprise these codes. That’s where the real problems are…

Well, let’s be careful how you state this. Those who believe in their ability to predict future climate but aren’t in the business don’t want to talk about all of this, and those in the business who aren’t expert in predictive modeling and statistics in general would prefer in many cases not to have a detailed discussion of the difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.

However, most of the GCMs and ECIMs are well, and reasonably publicly, documented. It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or physics, or initialization, or methods, or the ODE solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…

Look, I actually have a Ph.D. in theoretical physics. I’ve completed something like six graduate level math classes (mostly as an undergraduate, but a couple as a physics grad student). I’ve taught (and written a textbook on) graduate level electrodynamics, which is basically a thinly disguised course in elliptic and hyperbolic PDEs. I’ve written a book on large scale cluster computing that people still use when setting up compute clusters, and have several gigabytes’ worth of code in my personal subversion tree, and cannot keep count of how many languages I either know well or have written at least one program in, dating back to code written on paper tape. I’ve co-founded two companies on advanced predictive modelling on the basis of code I’ve written and a process for doing indirect Bayesian inference across privacy or other data boundaries that was for a long time patent pending before trying to defend a method patent grew too expensive and cumbersome to continue; the second company is still extant and making substantial progress towards perhaps one day making me rich. I’ve done advanced importance-sampling Monte Carlo simulation as my primary research for around 15 years before quitting that as well. I’ve learned a fair bit of climate science. I basically lack a detailed knowledge and experience of only computational fluid dynamics in the list above (and understand the concepts there pretty well, but that isn’t the same thing as direct experience), and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented to the point where just getting it to build correctly requires dedication and a week or two of effort.

Oh, and did I mention that I’m also an experienced systems/network programmer and administrator? So I actually understand the underlying tools REQUIRED for it to build pretty well…

If I have a hard time getting to where I can — for example — simply build an openly published code base and run it on a personal multicore system to watch the whole thing actually run through to a conclusion, let alone start to reorganize the code, replace underlying components such as its absurd lat/long gridding on the surface of a sphere with rescalable symmetric tessellations to make the code adaptive, isolate the various contributing physics subsystems so that they can be easily modified or replaced without affecting other parts of the computation, and so on, you can bet that there aren’t but a handful of people worldwide who are going to be able to do this and willing to do this without a paycheck and substantial support. How does one get the paycheck, the support, the access to supercomputing-scale resources to enable the process? By writing grants (and having enough time to do the work, in an environment capable of providing the required support in exchange for indirect cost money at fixed rates, with the implicit support of the department you work for) and getting grant money to do so.

And who controls who, of the tiny handful of people broadly enough competent in the list above to have a good chance of being able to manage the whole project on the basis of their own directly implemented knowledge and skills AND who has the time and indirect support etc, gets funded? Who reviews the grants?

Why, the very people you would be competing with, who all have a number of vested interests in there being an emergency, because without an emergency the US government might fund two or even three distinct efforts to write a functioning climate model, but they’d never fund forty or fifty such efforts. It is in nobody’s best interests in this group to admit outsiders — all of those groups have grad students they need to place, jobs they need to have materialize for the ones that won’t continue in research, and themselves depend on not antagonizing their friends and colleagues. As AR5 directly remarks — of the 36 or so named components of CMIP5, there aren’t anything LIKE 36 independent models — the models, data, methods, code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of a newly funded program at a new school or institution.

IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing, and any such rewrite would only make the problem at the poles worse — quadrature on a spherical surface using a rectilinear lat/long grid is long known to be enormously difficult and to give rise to artifacts and nearly uncontrollable error estimates.
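
One way to see the lat/long problem concretely is to compute the grid-cell areas: on a uniform 1° grid the cells next to the poles are more than a hundred times smaller than those at the equator, so the effective resolution (and, in an explicit scheme, the stable timestep) varies wildly across the sphere. The sketch below is purely illustrative and is not taken from any GCM.

```python
import numpy as np

# Cell areas on a uniform 1-degree latitude/longitude grid (illustrative only,
# not code from any GCM). Exact area of a lat/long cell on a sphere:
#   A = R^2 * (sin(lat_top) - sin(lat_bottom)) * dlon
R = 6.371e6                                # Earth radius in metres
nlat, nlon = 180, 360
lat_edges = np.radians(np.linspace(-90.0, 90.0, nlat + 1))
dlon = 2.0 * np.pi / nlon
cell_area = R**2 * np.diff(np.sin(lat_edges)) * dlon    # one value per latitude band

print(f"cell area in the equatorial band:  {cell_area[nlat // 2]:.3e} m^2")
print(f"cell area in the band at the pole: {cell_area[0]:.3e} m^2")
print(f"equator-to-pole area ratio:        {cell_area[nlat // 2] / cell_area[0]:.0f} to 1")
```

That hundred-to-one spread in cell area is what drives the pole artifacts and the hard-to-control quadrature errors described above; quasi-uniform meshes such as icosahedral or cubed-sphere grids are the usual way around it.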

But until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors — which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the `confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis” — the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
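
To make the iid point concrete, here is a toy Monte Carlo (a sketch of the general statistical issue, not of AR5's actual procedure): if the ensemble members share a common error component, averaging more of them stops reducing the error of the ensemble mean long before the 1/sqrt(N) improvement an iid assumption would predict.

```python
import numpy as np

# Toy illustration: N "models" whose errors share a common component.
# Under an iid assumption the ensemble-mean error should fall like 1/sqrt(N);
# with a shared component it levels off at the size of the shared part.
rng = np.random.default_rng(0)
n_trials = 20000
shared_sd, individual_sd = 1.0, 1.0

for n_models in (1, 4, 16, 64):
    shared = rng.normal(0.0, shared_sd, size=n_trials)               # error common to all models
    indiv = rng.normal(0.0, individual_sd, size=(n_trials, n_models))
    ens_mean_err = shared + indiv.mean(axis=1)                       # error of the ensemble mean
    iid_prediction = np.hypot(shared_sd, individual_sd) / np.sqrt(n_models)
    print(f"N = {n_models:2d}   rms error of ensemble mean = {ens_mean_err.std():.3f}"
          f"   (iid prediction: {iid_prediction:.3f})")
```

With a shared error as large as the individual scatter, the 64-member ensemble mean is only modestly better than a single model, while the iid calculation claims an eight-fold improvement; that gap is precisely the kind of overstated confidence at issue.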

219 Comments
Charles Nelson
May 7, 2014 3:41 am

Warmists often ask me just how such a giant conspiracy could exist amongst so many scientists…this is one very good particular example of the kind of structures that sustain CAGW.
Thanks for the insight!

Roy Spencer
May 7, 2014 3:54 am

I exchanged a few emails with mathematician Chris Essex recently who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-time scale integrations without destroying the underlying physics. Mass and energy don’t get conserved. Then they try to fix the problem with energy “flux adjustments”, which is just a band aid covering up the problem.
We spent many months trying to run the ARPS cloud-resolving model in climate mode, and it has precisely these problems.

dearieme
May 7, 2014 3:56 am

“inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere”: that reminds me vividly of my first reaction, years ago, when I started to look into “Global Warming”. It was apparent to me that much of the modelling was “insanely stupid” – that it was being done by people who were, by the general standards of the physical sciences, duds.
My qualification to make such a judgement included considerable experience of the modelling of physicochemical systems, starting in 1967. And unlike many of the “climate scientists” I also had plenty of experience in temperature measurement.
My analysis has long been that they started off with hubris and incompetence; the lying started later as they desperately tried to defend their rubbish.

Man Bearpig
May 7, 2014 4:00 am

Well said that man.
Even the most basic (sounding) computer models very soon become more complex. I would in no way be described as a computer programmer, but I did a course many years ago in VB. Part of it involved writing a computer ‘model’ or simulation of a scenario that went something like this.
In your city you have 10,000 parking machines; on average 500 break down every day. It takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?
Simple? Not on your nelly!
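
For what it is worth, the bare arithmetic of the exercise (a back-of-envelope sketch that ignores queueing effects, shift patterns and travel clustering, all of which make the real answer worse) already shows why the “simple” model isn’t:

```python
# Back-of-envelope for the parking-machine exercise, using only the stated
# assumptions; real staffing needs queueing theory on top of this.
breakdowns_per_day = 500
minutes_per_job = 30 + 20          # repair time plus travel time
shift_minutes = 8 * 60

workload_minutes = breakdowns_per_day * minutes_per_job
min_staff = workload_minutes / shift_minutes
print(f"daily workload: {workload_minutes / 60:.0f} hours")
print(f"bare minimum staff at 100% utilisation: {min_staff:.1f}")
# Roughly 52 people just to keep pace on average, before randomness in
# arrivals and job times forces utilisation well below 100%.
```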

May 7, 2014 4:01 am

Thank you Dr Robert G Brown. You have written exactly what I suspected all along. To put it bluntly the climate models cannot be relied upon. The input data being organised on lat/long grids “gives rise to nearly uncontrollable errors”. I have asked actuaries who have been using this data similar questions about its inherent biases but they didn’t reply. I will be very interested in what Bob Tisdale has to say in this post.
I seem to have a similar history to you as I started programming almost 50 years ago initially using plug board machines and have kept it up as peripheral to my current job as an actuary. I figured the failings of climate models a long time ago but lacked your ability and experience to express it so eloquently.

Man Bearpig
May 7, 2014 4:05 am

“Charles Nelson says:
May 7, 2014 at 3:41 am
Warmists often ask me just how such a giant conspiracy could exist amongst so many scientists…this is one very good particular example of the kind of structures that sustain CAGW.
Thanks for the insight!”

I think that it is not that there is a conspiracy, it is more that they do not understand basic scientific principles and ‘believe’ that what they are doing is for the good of mankind (and their pockets).

Jos
May 7, 2014 4:07 am

There are actually people doing research on how independent GCMs really are.
An example is this paper by Pennell and Reichler [2011], but there are a few more.
http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3814.1
http://www.inscc.utah.edu/~reichler/publications/papers/Pennell_Reichler_JC_2010.pdf
They conclude, among other things, that “For the full 24 member ensemble, this leads to an Meff that, depending on method, lies only between 7.5 and 9 when we control for the multi-model error (MME). These results are quantitatively consistent with that from Jun. et al. (2008a, 2008b), who also found that CMIP3 cannot be treated as a collection of independent models.”
(Meff = effective number of models)
So, according to them, to get an estimate of how many independent models there are, one should divide the actual number of models by three …
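
One common way to express that kind of result is through an effective-sample-size formula for equally correlated ensemble members. The sketch below uses that generic formula, which is not necessarily the Meff estimator Pennell and Reichler actually employ:

```python
# Effective number of independent members of an M-member ensemble whose
# errors all share a mean pairwise correlation rho_bar (generic formula,
# not necessarily the estimator used by Pennell and Reichler):
#   M_eff = M / (1 + (M - 1) * rho_bar)
M = 24
for rho_bar in (0.0, 0.1, 0.3, 0.5):
    m_eff = M / (1 + (M - 1) * rho_bar)
    print(f"mean inter-model error correlation {rho_bar:.1f} -> M_eff = {m_eff:.1f}")
```

Even a modest mean error correlation of about 0.1 is enough to collapse a nominal 24-member ensemble to roughly the 7-9 effectively independent models they report.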

May 7, 2014 4:09 am

Are they designed to actually be predictive in the hard science tradition or are these models actually just artefacts illustrating a social science theory of changing practices in political structures and social governance and economic planning and human behavior? My reading of USGCRP, especially the explicit declaration to use K-12 education to force belief in the models and the inculcation of new values to act on the models is that this is social science or the related system science of the type Ervin Laszlo has pushed for decades.
It’s no different from the Club of Rome Limits to Growth or World Order Models Projects of the 70s that admitted in fine print that they were not modelling physical reality. They were simply trying to gain the political power and taxpayer money to alter physical reality. That is also my reading of the Chapter in yesterday’s report on Adaptation.
I keep hyping this same point because by keeping the focus just on the lack of congruence with the hard science facts, we are missing where the real threats from these reports are coming from. They justify government action to remake social rules without any kind of vote, notice, or citizen consent. It makes us the governed in a despot’s dream utopia that will not end well.

Bloke down the pub
May 7, 2014 4:11 am

Now doesn’t it feel better with that off your chest? By the way, when you say ‘It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself’, you weren’t thinking of someone who can’t use Excel, like Phil Jones, were you?

Bloke down the pub
May 7, 2014 4:15 am

Man Bearpig says:
May 7, 2014 at 4:00 am
The answer is one. The sooner they are all broken the better.

May 7, 2014 4:22 am

Dr. Brown is looking at a real solution. But the modelers are not. They cannot be. They use garbage as input which means regardless of the model, the output will always be garbage. Until they stop monkeying with past temperatures, they have no hope of modeling future climate.

emsnews
May 7, 2014 4:29 am

The real battle is over energy. Note how almost all our wars are against fossil fuel energy exporting nations. Today’s target is Russia.
Unlike previous victims, Russia has a large nuclear armed force that can destroy the US!
The US is pushing for a pipeline for ‘dirty oil’ from Canada while at the same time telling citizens that oil is evil and the US has lots of coal and saying coal is evil.
The rulers want to do this because they know these things are LIMITED. Yes, we are at the Hubbert Oil Peak which isn’t one year or ten years but it is real. Energy is harder to find and more expensive to process.
To preserve this bounty of fossil fuels, the government has to price it out of the reach of the peasants which are the vast bulk of the population. This is why our media is all amazed and loving tiny houses while the elites build bigger and bigger palaces.

HomeBrewer
May 7, 2014 4:30 am

“have several gigabytes worth of code in my personal subversion tree”
I hope it’s not several gigabytes of your code only, or else you are a really bad programmer (which often is the case, since everybody who’s taken some kind of university course considers themselves fit for computer programming).

May 7, 2014 4:53 am

Very good exposé; it tallies nicely with the statement that GCMs are complex re-packaged opinions.

Doug Huffman
May 7, 2014 4:53 am

While clique may be appropriate, CLAQUE is then more appropriate – a paid clique. The Magliozzi grace themselves by not being truthful – Clique and Claque, the Racket Brothers.

Tom in Florida
May 7, 2014 4:54 am

Man Bearpig says:
May 7, 2014 at 4:00 am
“In your city you have 10,000 parking machines, on average 500 break down every day. it takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?”
It doesn’t matter how many you NEED, it’s government work so figure how many they hire.
Don’t forget that each engineer needs a driver (union rules apply) plus you will need dispatchers, administrative personnel and auditors. Each of those sections would need supervisors who will need supervisors themselves. The agency would end up costing more than the income from the parking meters so they will then need to be removed, most likely at a cost that is higher than the installation cost, most likely from the same contractor who installed them and who is no doubt related to someone in the administration. All in all just a typical government operation.
But back to the subject matter by Dr Brown. Most of the people to whom I speak about this haven’t the faintest clue that models are even involved. They believe that it is all based on proven science with hard data. When models are discussed the answer is very often “they must know what they are doing or the government wouldn’t fund them”.
It’s still going to be a long, uphill battle against a very good propaganda machine.

sleeping bear dunes
May 7, 2014 4:55 am

I always, absolutely always enjoy Dr. Brown’s comments. There are times when I have doubts about my skepticism. And then I read some of his perspectives and go to sleep that night feeling that I am not all that dumb after all. It is comforting that there are some real pros out there thinking about this stuff.

kadaka (KD Knoebel)
May 7, 2014 4:59 am

From Man Bearpig on May 7, 2014 at 4:00 am:

In your city you have 10,000 parking machines, on average 500 break down every day. it takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?

None. Engineers don’t fix parking meters, repair techs do.
And at that failure rate, it’s moronic to field repair. Have about an extra thousand working units on hand, drive around in sweeps and swap out the nonworking heads, then repair the broken ones centrally in batches in steps (empty all coin boxes, then open all cases, then check all…). Which throws out your initial assumptions on times.
However at an average 10,000 broken units every 20 days, you might need an engineer or three. As expert witnesses when you sue the supplier and/or manufacturer who stuck you with such obviously mal-engineered inherently inadequate equipment. With an average failure rate of eighteen times a year per unit, anyone selling those deserves to be sued.
Isn’t it wonderful how models and assumptions fall apart from a simple introductory reality check?

May 7, 2014 4:59 am

“My analysis has long been that they started off with hubris and incompetence; the lying started later as they desperately tried to defend their rubbish.”
It has always been interesting witnessing how much ‘faith’ the AGW hypothesists, dreamers and theorists placed in those models, knowing full well just how defective they really were and are.

Richard Drake
May 7, 2014 5:01 am

It’s good to hear Roy Spencer mention Chris Essex’s critique of climate models, as I don’t think anyone has uncovered the reality of the GCM challenge – technical and political – for me as much as Robert Brown, since I read Essex and McKitrick’s Taken by Storm way back.
Just one word of warning to commenters like dearieme: the ‘insanely stupid’ referred to something in what we tend to call the architecture of a software system. The current GCMs don’t have the ability to scale the grid size and that’s insane. The people chosen to program the next variant of such a flawed base system are, as Brown says, taken from “the tiny handful of people broadly enough competent in the list”. They aren’t stupid by any conventional measure but the overall system, IPCC and all, most certainly is. Thanks Dr Brown – this seems to me one of the most important posts I’ve ever seen on WUWT.

Nick Stokes
May 7, 2014 5:02 am

To understand GCMs and how they fit in, you first need to understand and acknowledge their origin in numerical weather prediction. That’s where the program structure originates. Codes like GFDL are really dual-use.
Numerical Weather Prediction is big. It has been around for forty or so years, and has high-stakes uses. Performance is heavily scrutinised, and it works. It would be unwise for GCM writers to get too far away from their structure. We know there is a core of dynamics that works.
NWP has limited time range because of what people call chaos. Many things are imperfectly known, and stuff just gets out of phase. The weather predicted increasingly does not correspond to the initial state. It is in effect random weather but still responding to climate forcings. GCMs are NWPs run beyond their predictive range.
GCMs are traditionally run with a long wind-back period. That is to ensure that there is indeed no dependence on initial conditions. The reason is that these are not only imperfectly known, especially way back, but are likely (because of that) to cause strange initial behaviour, which needs time to work its way out of the system.
So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Niñas, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behaviour, but they can’t expect to be in phase with any particular realisation.
Their purpose is not to predict decadal weather but to generate climate statistics reflecting forcing. That’s why when they do ensembles, they aren’t looking for the model that does best on say a decadal basis. That would be like investing in the fund that did best last month. It’s just chance.
I understand there are now hybrid models that do try and synchronise with current states to get a shorter term predictive capability. This is related to the development of reanalysis programs, which are also a sort of hybrid. I haven’t kept up with this very well.
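
The “stuff just gets out of phase” point is ordinary sensitive dependence on initial conditions, easiest to see in the Lorenz (1963) toy system rather than in a real NWP code. The sketch below demonstrates only that generic behaviour and is not a weather model: two runs start one part in a million apart and end up completely decorrelated.

```python
import numpy as np

# Lorenz (1963) system: two runs whose initial states differ by 1e-6.
# A simple Euler step is accurate enough for a demonstration of divergence.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-6, 0.0, 0.0])
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```

Well before the end of the run the two trajectories are as far apart as two randomly chosen states on the attractor, which is why long climate runs are read as statistics of the forced system rather than as forecasts of particular years.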

jaffa
May 7, 2014 5:02 am

I have a computer model that suggests the planet will be attacked by wave after wave of space aliens and it’s hopeless because every time we destroy one incoming wave the next is stronger and faster, the consequences are inevitable – we will lose and we will all be killed – the computer model proves it. Oh wait – I’ve just realised I was playing space invaders. Panic over.

May 7, 2014 5:03 am

Thank you, Dr Brown, for an excellent and informative comment. For me, the GCMs are clearly the worst bit of science in the game, mainly because they are so difficult for anyone to understand and hence relatively easy to defend in detail. Of course, once the black box results are examined their uselessness is easy to see, but their supporters continue to lap up tales of their ‘sophistication’.

DC Cowboy
Editor
May 7, 2014 5:05 am

Dr Brown,
Thank you for this explanation. It is interesting to note that the scenario you lay out means that there is no over-arching ‘conspiracy’ among the modelers. Their efforts are a natural social phenomenon, sort of like a ‘self-organizing’ organic compound. I think there are many examples of this sort of ‘group dynamic’ in other fields.
I do have some experience (limited) in construction of feedback systems from when I started preliminary coursework toward pursuing a Ph.D. I quickly became frustrated with the economic models, as the first thing that was always stated was, “in order to make this model mathematically tractable, the following is assumed” – the assumptions that followed essentially assumed away the problem. I suspect it is the same with the GCMs.
Could you explain to the uninitiated why the use of “non-rescalable gridding of a sphere” is “insanely stupid”? I have no doubt you are correct, but, when talking with my AGW friends I would like to be able to offer an explanation rather than simply make the statement that it is ‘stupid’.
Thanks.

R2Dtoo
May 7, 2014 5:06 am

An interesting spinoff from this post would be an analysis of just how big this system has grown over the last three/four decades. I did a cursory search a while back and came up with 39 Earth System Models in 21 centres in 12 countries, but got lost in the details and inter-connections of the massive network. I did my PhD at Boulder from 1967-70. At that time NCAR was separate from the campus and virtually nothing trickled down through most programs. I now see many new offshoots, institutes and programs that didn’t exist 45 years ago. Many of these have to be on “soft money” and their continuation ensured only through grants. The university/government interface has to be a complex of state/federal agreements that is so complex as to be difficult to decipher. The university logically extracts huge sums of money from the grants as “overhead”, monies that become essential to the campus for survival. Perhaps such a study has been done. If so, it would be instructive to see just how many “scientists” and programs actually do survive only on federal money, and are solely dependent for continuation on buying into what government wants. The fact that the academies worldwide also espouse global warming/climate change shows that they recognize that their members are dependent on the government. It is a sad, but deeply entrenched, system. Can you imagine the scramble if senior governments decided to reduce the dispersed network down to a half-dozen major locations, keep the best, and shed the rest?

John W. Garrett
May 7, 2014 5:08 am

Anybody who knows anything about modeling complex, dynamic, multivariate, non-linear systems knows full well that John von Neumann was absolutely right when he said:
Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.
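
The quip is about overfitting: with enough adjustable parameters any finite record can be matched, and the fit says nothing about behaviour outside the fitting interval. A minimal illustration of that general point (a toy example, unrelated to any GCM):

```python
import numpy as np

# Fit noisy samples of a smooth curve with polynomials of increasing order.
# In-sample error always improves; error outside the fitting interval does not.
rng = np.random.default_rng(1)

def truth(x):
    return np.sin(2 * np.pi * x)

x_fit = np.linspace(0.0, 1.0, 12)
x_test = np.linspace(1.0, 1.3, 6)                # outside the "reference interval"
y_fit = truth(x_fit) + rng.normal(0.0, 0.1, x_fit.size)

for n_params in (2, 4, 6, 10):
    coeffs = np.polyfit(x_fit, y_fit, n_params - 1)
    in_err = np.std(np.polyval(coeffs, x_fit) - y_fit)
    out_err = np.std(np.polyval(coeffs, x_test) - truth(x_test))
    print(f"{n_params:2d} parameters: in-sample rms {in_err:.3f}, out-of-sample rms {out_err:10.3f}")
```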

May 7, 2014 5:14 am

Reblogged this on gottadobetterthanthis and commented:
Qualified? Yes, Dr. RGB is supremely qualified. And as has been said, his science-fu is good.

Editor
May 7, 2014 5:16 am

Jos says: “An example is this paper by Pennell and Reichler [2011]…”
Thanks for the link.

Nick Stokes
May 7, 2014 5:18 am

Roy Spencer says: May 7, 2014 at 3:54 am
“I exchanged a few emails with mathematician Chris Essex recently who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-time scale integrations without destroying the underlying physics. Mass and energy don’t get conserved.”

GCMs generally don’t use finite difference methods for the key dynamics of horizontal pressure-velocity. They use spectral methods, for speed mainly, but the conservation issues are different. For other dynamics they use either finite difference or finite volume. If conservation was a problem they would use finite volume with a conservative formulation.
GCMs are not more vulnerable to conservation loss than standard CFD. And that has been a standard engineering tool for a long while now.
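
The point about conservative formulations is easy to demonstrate in one dimension: a finite-volume scheme updates each cell with the fluxes through its faces, so whatever leaves one cell enters its neighbour and the domain total is conserved to round-off, however inaccurate the solution is otherwise. A minimal sketch (periodic 1D advection with first-order upwinding, nothing like a real dynamical core):

```python
import numpy as np

# Periodic 1D advection with a first-order upwind finite-volume scheme.
# Each face flux is added to one cell and subtracted from its neighbour,
# so the domain total is conserved to machine precision.
n, u = 200, 1.0
dx = 1.0 / n
dt = 0.4 * dx / u                     # Courant number 0.4, stable for upwinding
x = np.linspace(0.0, 1.0, n, endpoint=False)
q = np.exp(-((x - 0.3) / 0.05) ** 2)  # initial blob

total_start = q.sum() * dx
for _ in range(2000):
    flux = u * q                      # upwind flux through the right face of each cell
    q = q - dt / dx * (flux - np.roll(flux, 1))
total_end = q.sum() * dx
print(f"relative change in total after 2000 steps: {abs(total_end - total_start) / total_start:.2e}")
```

A non-conservative discretization of the same equation can instead gain or lose the advected quantity steadily over a long integration, which is the kind of drift Roy Spencer describes above.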

Geoff Sherrington
May 7, 2014 5:18 am

As I suggested (on CA, IIRC) on seeing an ensemble with a coloured envelope of ‘confidence’ some years ago, a more appropriate population for its derivation would be all model runs by all participants, not a cherry picked subset.
Thank you, Dr Brown.
The treatment of error derivation can be a handy litmus test of the quality of the paper – and of its authors.

Bob Tisdale
Editor
May 7, 2014 5:21 am

Nick Stokes says: “So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Nina’s, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behavior…”
“ENSO behavior” is well-phrased. In other words, climate models create noise in the tropical Pacific but that noise has no relationship with actual ENSO processes, because climate models cannot simulate ENSO processes.

Editor
May 7, 2014 5:25 am

Thanks, rgbatduke, it was well-written and understandable.
And thanks, Anthony, for promoting the comment to a post.

Jack Cowper
May 7, 2014 5:27 am

Thank you Dr Brown,
Very much enjoyed and have learned much from this comment.

May 7, 2014 5:31 am

Many years ago I was employed in a newsroom automation project for the Chicago Tribune, during the course of which I acquired and used extensively a distribution of RATFOR and assorted UNIX-like tools from U. of Arizona to run on a DEC PDP-10. “RATFOR”, for those who believe programming started with Java, is RATional FORtran — see Software Tools, by Brian W. Kernighan and P. J. Plauger.
Anyway during the course of this project I attended a RATFOR conference and heard various presentations on new tools, proposed extensions to the RATFOR preprocessor and even some suggested extensions to FORTRAN itself. After one such talk a participant from one of the big research labs (either Lawrence Berkeley or Lawrence Livermore) got up and told the speaker something like (I’m paraphrasing here):

You can’t change FORTRAN; you can’t ever change FORTRAN. I support the research computing environment for the lab and they use a number of locally written subroutines where each one is several hundred thousand lines of code. They’ve been worked on by dozens of people over several decades and nobody understands completely how any of them work. You can’t ever change FORTRAN.

I don’t recall if it was ever mentioned what these huge subroutines were used for. Wouldn’t it be a cheerful thought if they were used in reactor safety design?
Your description of GCM code sounds a lot like those old FORTRAN subroutines: so large and complex and worked on by dozens of people over a decade or more that nobody really understands exactly what they do any more.
All I can say is it’s a good thing we don’t use vendor-proprietary, trade secret computer software to record election votes .. oh, wait we do now in Georgia.

richard
May 7, 2014 5:36 am

Nick Stokes says:
May 7, 2014 at 5:18 am
Roy Spencer says: May 7, 2014 at 3:54 am
“I exchanged a few emails with mathematician Chris Essex
————————
Christoper Essex in full swing on climate models, great fun to watch.

Dagfinn
May 7, 2014 5:36 am

By the way: “However, the Bray and von Storch survey also reveals that very few of these scientists trust climate models — which form the basis of claims that human activity could have a dangerous effect on the global climate. Fewer than 3 or 4 percent said they “strongly agree” that computer models produce reliable predictions of future temperatures, precipitation, or other weather events. More scientists rated climate models “very poor” than “very good” on a long list of important matters, including the ability to model temperatures, precipitation, sea level, and extreme weather events.” http://pjmedia.com/blog/bast-understanding-the-global-warming-delusion-pjm-exclusive/
That seems to be the scientific consensus about climate models.

richard
May 7, 2014 5:36 am

christopher

harkin
May 7, 2014 5:37 am

If the hypothetical “parking machines” were as reliable as the climate models, you’d have 9,700 broken machines and 300 working ones.
The term “good enough for government work” seems apt.

Dave Yaussy
May 7, 2014 5:44 am

I always appreciate Dr. Brown’s perspective. He writes in a way that laymen like me can understand, without being condescending.

Alex Hamilton
May 7, 2014 5:48 am

Roy Spencer referred above to models being doomed, but the real reason they are doomed has its foundation in an incorrect understanding of physics. There is no understanding among climatologists that, whereas Pierrehumbert wrote about “convective equilibrium,” there can only be one state of equilibrium according to the Second Law of Thermodynamics, and that state is thermodynamic equilibrium. The two are the same.
Think about it, Dr Spencer, in relation to your incorrect assumption of isothermal conditions, which is also inherent in the models. Likewise the models incorrectly apply Kirchhoff’s Law of Radiation to thin atmospheric layers. The absorbed solar radiation in the troposphere does not have a Planck function distribution, so Kirchhoff’s Law is inapplicable.
Also, because the models completely overlook the fact that the state of thermodynamic equilibrium (that is the same as “convective equilibrium”) has no net energy flows, and is isentropic without unbalanced energy potentials (as physics tells us), they don’t recognize the fact that the resultant thermal profile has a non-zero gradient. Water vapor and other radiating gases like carbon dioxide do not make that gradient steeper, as we know. It is already too steep for comfort, and so fortunately we have water vapor to lower the gradient, thus reducing the surface temperature.

Nick Stokes
May 7, 2014 5:50 am

Bob Tisdale says: May 7, 2014 at 5:21 am
“In other words, climate models create noise in the tropical Pacific but that noise has no relationship with actual ENSO processes, because climate models cannot simulate ENSO processes.”

No, they get the dynamics right. They will oscillate in the right way. But they don’t match the Earth’s phases. Even with all the knowledge we can assemble, we can’t predict ENSO.
It’s like you can make a bell, with all the right sound quality. But you can’t predict when it will ring.

cd
May 7, 2014 5:52 am

Dr. Robert Brown
This is a brilliant piece and complements a talk I heard by Dr Essex on the same issue. He argued, like you, that to model energy transfer properly you need to solve the Navier-Stokes equations, and this would require a cellular grid with a resolution of c. 1 cubic mm. He also added that due to numerical instabilities solutions tend to diverge rather than converge. Now I’m not pretending to be an expert on these issues, and I may even sound confused, but I appreciate the issues and why they are such fundamental problems for those who claim GCMs are based on fundamental physics. But even if we assume this claim to be completely true, the implementation of those fundamental physics in code is far from perfect.
…run at a finer resolution without basically rewriting the whole thing…
This is news to me. I heard a talk where the speaker, and perhaps I picked her up wrong, suggested that they can run their models at higher spatial resolutions but at the expense of temporal scale (shorter time periods) due to limits in computer power; so if she were correct, then your statement doesn’t hold. My experience in this type of modelling would suggest that, if you’re correct, they’ve made a serious design error.
Dr Essex’s talk:

Nick Stokes
May 7, 2014 5:58 am

Richard Drake says: May 7, 2014 at 5:01 am
“The current GCMs don’t have the ability to scale the grid size and that’s insane.”

It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.
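
Rough numbers for the Courant argument (a back-of-envelope sketch using a nominal sound speed; real GCMs use hydrostatic or semi-implicit formulations that relax the acoustic limit, but the scaling is the point):

```python
# Explicit-timestep scaling sketch with nominal numbers, not any specific GCM.
c_sound = 340.0                                   # m/s
seconds_per_year = 365.25 * 24 * 3600
for dx_km in (200, 100, 50, 25):
    dt = 0.5 * dx_km * 1e3 / c_sound              # Courant number 0.5
    steps = seconds_per_year / dt
    print(f"dx = {dx_km:3d} km -> dt ~ {dt:4.0f} s, about {steps:,.0f} steps per simulated year")
```

Halving the grid spacing quadruples the number of columns and halves the allowable timestep, so each refinement costs roughly a factor of eight in an explicit scheme, which is the “blows up computing times” above.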

Nick Stokes
May 7, 2014 6:03 am

cd says: May 7, 2014 at 5:52 am
“He argued like you, that to model energy transfer properly you need to solve Navier-Stokes equations, and this would require a cellular grid with a resolution of c. 1 cubic mm.”

You could say exactly the same of computational fluid dynamics, or numerical weather forecasting. They all work.

Mark Hladik
May 7, 2014 6:03 am

My opinion:
This is the reason you have the ‘blackadderthe4th’-like trolls. All they can do is parrot their mind-numbing Richard Alley Facetube videos, which reinforce their beliefs. Get down to brass tacks, the hard reality of number-crunching, and their eyes just glaze over.
I have a strong Math background, Physics background, and Geology background, and can follow most discussions up to a point. Bottom line is, as Dr. Brown so eloquently states, numerical models are worthless, and any “policy” based upon them will ultimately fail.
A well-written examination of the cold, hard, facts, Dr. Brown!
Mark H.
Mods: any chance for cross-post at Jo?

Berényi Péter
May 7, 2014 6:06 am

With the terrestrial climate system we have a single run of unique physical instance which is way too large to fit into the lab, so no one can perform controlled experiments on it. However, it is a member of a wide class, that of irreproducible* quasi stationary non equilibrium thermodynamic systems, which do have members one could perform experiments on with as many experimental runs as one wishes.
Therefore we should abandon climate modelling for a while and construct computational models of experimental setups of this kind, which can be validated using standard lab procedures.
Trouble is, of course, there is no general theory available for said class, which makes the endeavor risky, but also interesting at the same time. By the way, it also shows how fatuous it is to venture into climate with no proper preparations.
* A thermodynamic system is irreproducible if microstates belonging to the same macrostate can evolve into different macrostates in a short time, which is certainly the case with chaotic systems. It makes the very definition of entropy difficult, let alone rate of entropy production. And that’s the (theoretical) challenge.

cd
May 7, 2014 6:11 am

Nick
You could say exactly the same of computational fluid dynamics, or numerical weather forecasting. They all work.
Well, they start diverging from observations straight away (as is to be expected); eventually, after a relatively short period of time, the divergence is so great that they’re next to useless. In short, the rate of divergence tells you how good they are. Short-term atmospheric models are good for 1-2 days, after which the uncertainty increases dramatically; by day 5 they’re no better than a guess.

Nick Stokes
May 7, 2014 6:11 am

For those who think that despite CFD, NWP etc the Navier-Stokes equations are basically insoluble, here is something to explain:
[youtube http://www.youtube.com/watch?v=DEhtx0atcFM]

Merrick
May 7, 2014 6:12 am

So, correct me if I’m wrong, but since there are a number of us around here who have a number of these skills (sadly, the computational fluid dynamics experience that Dr. Brown is lacking I also lack, but like Dr. Brown I have the necessary physics to understand them in principle but just lack the experience), don’t we just need to take (some of) the actual code as a (very poor) template, turn it into a true Open Source project, and all start biting off chunks of it to wrestle with and properly organize it, document it, and start also to establish and qualify (document and error bound) initialization conditions, boundary conditions, etc.?
It could take a couple of years and probably two or three out of 30 or 40 who jump in will do the lion’s share of the work, but at the end of the day wouldn’t we have code, and wouldn’t it be interesting to know what it said?
I’m already imagining a very modular approach where we can test and validate code against known problems. For instance, the differential equations which describe dynamics in the atmosphere (and ocean, and between the atmosphere and ocean) are identical to the equations which describe well known and well studied dynamics on smaller scales – so validation of the fluid dynamics code could be independently and reproducibly verified. And, in addition, so relevant to one of Dr. Brown’s main themes, its predictive abilities and absolute predictive error could be analyzed as a function of SCALING to real world problems in order to inform how scaling/rescaling calculations on the sphere impact error.
Anyways – just a thought…
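
As a sketch of what the modular, test-against-known-problems approach could look like (a purely hypothetical structure, not drawn from any existing GCM code), each physics component would expose a narrow step() interface and ship with a test against an analytic solution:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical component interface: each physics module advances its own piece
# of the state and can be validated in isolation against a known answer.
@dataclass
class NewtonianCooling:
    """Toy 'radiation' module relaxing temperature toward t_eq on timescale tau."""
    t_eq: float
    tau: float

    def step(self, temperature: np.ndarray, dt: float) -> np.ndarray:
        return temperature + dt * (self.t_eq - temperature) / self.tau

def test_against_analytic_solution():
    # Exact solution: T(t) = t_eq + (T0 - t_eq) * exp(-t / tau)
    module = NewtonianCooling(t_eq=255.0, tau=5.0)
    temp = np.array([300.0])
    dt, n_steps = 0.01, 1000
    for _ in range(n_steps):
        temp = module.step(temp, dt)
    exact = 255.0 + 45.0 * np.exp(-n_steps * dt / 5.0)
    assert abs(temp[0] - exact) < 0.05, (temp[0], exact)

test_against_analytic_solution()
print("toy cooling module matches the analytic solution")
```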

Richard Drake
May 7, 2014 6:18 am

Nick Stokes:

It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.

I don’t think you’ve understood what I was driving at – and what I took Robert Brown to be driving at. I fully understand the blowing up of compute times as grid size and thus timesteps are reduced. But as Moore’s Law continues to hold (or something close) one might well wish to reduce both in a new generation of models and model runs. Not to be able to do this – and do it easily – is, in Brown’s words, insanely stupid. That’s why I characterised it as a software architecture problem.
I’m open to correction on what Dr Brown meant but that’s what I took from his words.

cd
May 7, 2014 6:25 am

Nick
Again, I’m not an expert here so I’m not going to pretend that I am your peer on these issues, but…
Navier-Stokes equations are basically insoluble
1) Is there a unique solution?
2) It’s a suite of PDEs(?) and like most PDEs they have to be solved numerically using a finite-element approach.
If the answer to 1 is no, then how do you know that your solution is appropriate/correct.
If the answer to 2 is yes and yes, then you’ve got a host of implementation issues.

M Seward
May 7, 2014 6:26 am

I am only a B Eng but have probably done enough for the equivalent of a master’s. I have used CFD software and have a good understanding of its limitations and the effects on accuracy of the grid mesh size vs the fluctuations in the phenomena one is modelling. My experience was that an overly coarse mesh led to convergence to a solution on the high side. The inclusion of turbulence would require a mesh orders of magnitude finer and therefore require computational effort many orders of magnitude greater. In other words there are significant practical problems with accurate modelling of the Navier-Stokes equations including the terms for turbulence and such effects. It is little wonder to me that the ‘models’ do not reflect the measurements, and I shudder to imagine what the difference would be if they started the model runs say a century ago instead of 30 years or so.
How to even explain this to Joe Public or his elected representative I do not really know given the bizarre preferences of the msm.

Crispin in Waterloo but really in Yogyakarta
May 7, 2014 6:32 am

Loved it Dr Brown. It is so similar to what I am facing it crosses my mind that inane mathematics in service of political goals is a lot more common than uncommon. If the manipulation of national statistics is considered, there is ‘a lot going on’.
The inanity of what I am trying to do is to reduce the variability of calculated results that arise from 4 sets of potential variation, where the variability of A + B + C + D has to be within a prescribed range. Politically, ‘A’ is not to be examined, as such a review will show that the output is non-linear for a linear change in the input, due to a comical set of conceptual and systematic errors. No amount of fiddling with B + C + D will correct the problems inherent in A. So much has been invested in the content of A that correcting it will (apparently) cause the Earth to stop turning.
Modeling systems within meaningfully tight ranges is hard. The climate is bewilderingly complex. Willis has shown that all of the climate models’ predictions can be reproduced by a simple ratio, so are we getting anything, ever, that is worth the investment? It doesn’t look like it.

Frank K.
May 7, 2014 6:35 am

Nick Stokes says:
May 7, 2014 at 5:58 am
Richard Drake says: May 7, 2014 at 5:01 am
“The current GCMs don’t have the ability to scale the grid size and that’s insane.”
It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.

“Basically, sound waves are how dynamic pressure is transmitted”
Really? Dynamic pressure = 1/2*rho*V^2. Please explain this.
” Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability.”
Really?? So the GCMs are limited to Courant < 1. That's NOT what I've read – many use multistep methods to promote temporal stability. However, given the near total lack of documentation on these and other important issues, it wouldn't surprise me if they were stability limited to Courant < 1.
"So contracting grid size means contracting timestep in proportion, which blows up computing times."
Yes this is true – time step goes with cell size. The problem goes beyond this simplistic analysis, since most GCMs have very strong source terms and extensive model coupling (e.g. ocean models coupled with atmosphere dynamics models coupled with radiation physics). This likely imposes even more stringent constraints on stability (and ultimately accuracy).

DirkH
May 7, 2014 6:36 am

Nick Stokes says:
May 7, 2014 at 5:02 am
“So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Niñas, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behaviour, but they can’t expect to be in phase with any particular realisation.
Their purpose is not to predict decadal weather but to generate climate statistics reflecting forcing. That’s why when they do ensembles, they aren’t looking for the model that does best on say a decadal basis. That would be like investing in the fund that did best last month. It’s just chance.”
You do understand what that means given the fact that they try to simulate a CHAOTIC system, don’t you?
Well I don’t think you do because you effectively just said, there is no way they can predict ANYTHING. Because chaotic systems are characterized by iterative amplification of small disturbances. If you get the state wrong now, the next state will be wronger…

May 7, 2014 6:38 am

We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing,

This is really a bit surprising. One of the first things that one would try, to ascertain whether the spatial resolution is fine enough, is to use finer or coarser grids and see whether or not this has a major effect on the trajectories produced by the model. Hard-coding a single resolution seems really amateurish.
It might be a quixotic undertaking, but it seems to me that here on WUWT there would be enough senior gents with some time on their hands and the required skills to collaboratively write a better climate model with a proper, modular structure, one that could be adapted and extended in a sane way as new empirical knowledge about the climate system arrives.
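
The convergence check described here has a standard form (a generic sketch on a toy problem, not something the hard-coded grids currently allow): run the same problem at a sequence of resolutions and verify that the answer converges at the scheme’s nominal order.

```python
import numpy as np

# Generic grid-convergence check: solve the same problem at several resolutions
# and confirm the error shrinks at the expected order (here, second order).
# Toy problem: trapezoidal quadrature of sin(x) on [0, pi]; exact answer is 2.
def solve(n_cells: int) -> float:
    x = np.linspace(0.0, np.pi, n_cells + 1)
    y = np.sin(x)
    h = x[1] - x[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

prev_err = None
for n in (20, 40, 80, 160):
    err = abs(solve(n) - 2.0)
    order = np.log2(prev_err / err) if prev_err else float("nan")
    print(f"n = {n:4d}   error = {err:.3e}   observed order = {order:.2f}")
    prev_err = err
```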

son of mulder
May 7, 2014 6:43 am

“and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented to the point where just getting it to build correctly requires dedication and a week or two of effort.”
And when it builds and runs how do you know it has no bugs in it? How do you know it is calculating what it is meant to calculate? Just because results look like they could be real doesn’t mean they are correct. I spent 30 years involved with commercial systems development and at least there was a specification (usually) which the system had to meet and subtle bugs would still arise. But I also feel confident now, having read what sort of courses need to have been taken, that all the time I’ve considered climate modelling to be “snake oil”, I am well qualified to do so.
When the tracks of subatomic particles in say QED are calculated or Einstein’s equation is programmed in to predict say, the perihelion of Mercury, or the bending of light, at least physical results can be measured, compared to model results and the experiment repeated.
As for averaging a large number of distinct models’ outputs and trying to sell the result as in some way real and a guide for what to expect in 100 years, well that is “(snake oil)^2”. Only a crazy person could expect the average of a large number of such wrong answers to be correct except by accident. Maybe the punks feel lucky. And I haven’t even mentioned the impossibility of dealing with mathematical chaos in the basic dynamical equations of climate modelling. Oh and they also say that the behaviour of clouds is not understood. Sometimes, even the difference between the models’ outputs and reality is called a travesty.

J. Philip Peterson
May 7, 2014 6:45 am

Sorry, for the life of me I cannot find what the acronym ECIM stands for.
I assume it has something to do with climate or physics or mathematics or computer science.
I assume GCMs are General Circulation Models or Global Climate Models.

Ken Hall
May 7, 2014 6:46 am

As far as I am concerned, this input from Dr Brown is very welcome and adds to my scepticism over the climate models, as I had not realised that even in their very limited actual use, that they were so badly configured.
Regardless, even IF these computer models were more robust and accurate, we still would not get away from the fact that none of those models are a substitute for measuring the actual climate. They are inaccurate by necessity, as we do not know how all the elements and interactions within the actual climate work, and so the computer models are incomplete (we cannot model what we do not know) and are based on a lot of assumptions, many of which may be wrong.
All the simulation runs are based on a variant of the CAGW hypothesis, with varying degrees of Equilibrium Climate Sensitivity as but one of the variables. So the models cannot even be claimed to be providing real-world evidence to support the CAGW hypothesis. They are nothing more than a model of the hypothesis, which merely describes the hypothesis, and does nothing to TEST the hypothesis to ascertain IF the hypothesis is valid or not.
Therefore all the predictive uses of these models tell us nothing about what WILL happen. Therefore all the alarm which comes from computer climate modelling is entirely and completely baseless and should be dismissed.

May 7, 2014 6:47 am

A different, non-modeling approach must be used for forecasting. Forecasts of the timing and amount of a possible coming cooling based on the 60 and 1000 year natural quasi-periodicities in the temperature and using the neutron count and 10Be record as the best proxy for solar activity are presented in several posts at
http://climatesense-norpag.blogspot.com
During the last eighteen months I have laid out an analysis of the basic climate data and of methods used in climate prediction and from these have developed a simple, rational and transparent forecast of the likely coming cooling.
For details see the pertinent posts listed below.
10/30/12. Hurricane Sandy-Extreme Events and Global Cooling
11/18/12 Global Cooling Climate and Weather Forecasting
1/22/13 Global Cooling Timing and Amount
2/18/13 Its the Sun Stupid – the Minor Significance of CO2
4/2/13 Global Cooling Methods and Testable Decadal Predictions.
5/14/13 Climate Forecasting for Britain’s Seven Alarmist Scientists and for UK Politicians.
7/30/13 Skillful (so far) Thirty year Climate Forecast- 3 year update and Latest Cooling Estimate.
10/9/13 Commonsense Climate Science and Forecasting after AR5 and the Coming Cooling.
The capacity of the establishment IPCC contributing modelers and the academic science community in general to avoid the blindingly obvious natural periodicities in the temperature record is truly mind blowing.
It is very obvious – simply by eyeballing the last 150 years of temperature data – that there is a 60 year natural quasi-periodicity at work. Sophisticated statistical analysis actually doesn’t add much to eyeballing the time series. The underlying trend can easily be attributed to the 1000 year quasi-periodicity. See Figs 3 and 4 at
http://climatesense-norpag.blogspot.com/2013/10/commonsense-climate-science-and.html
The 1000 year period looks pretty good at 10000, 9000, 8000, 7000, 2000, 1000, and 0.
This would look interesting, I’m sure, on a wavelet analysis, with the peak fading out from 7000–3000.
The same link also provides an estimate of the timing and extent of possible future cooling using the recent peak as a synchronous peak in both the 60 and 1000 year cycles and the neutron count as supporting evidence of a coming cooling trend as it appears the best proxy for solar “activity” while remaining agnostic as to the processes involved.
I suppose the problem for the academic establishment is that this method really only requires a handful of people with some insight ,understanding and the necessary background of knowledge and experience as opposed to the army of computer supported modelers who have dominated the forecasting process until now.
There has been no net warming for 16 years and the earth entered a cooling trend in about 2003 which will last for another 20 years and perhaps for hundreds of years beyond that. see
ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.ocean.90S.90N.df_1901-2000mean.dat
The current weather patterns in the UK and USA are typical of those developed by the more meridional path of the jet stream on a cooling earth. The Fagan book “The Little Ice Age ” is a useful guide from the past to the future. The frequency of these weather patterns, e.g. for the USA the PDO related drought in California and the Polar Vortex excursions to the South will increase as cooling continues
The views of the establishment scientists in the USA and of the UK’s CSA and Met Office’s leaders in this matter post AR5 reveal their continued refusal to recognize and admit the total failure of the climate models in the face of the empirical data of the last 16 years. It is past time for the climate community to move to another approach based on pattern recognition in the temperature and driver data and also on the recognition of the different frequencies of different regional weather patterns on a cooling (more meridional jet stream) and warming (more latitudinal jet stream) world.
All of the warming since the LIA can easily be accommodated within the 1000 year natural cycle without any significant contribution from anthropogenic CO2.
The whole UNFCCC travelling circus has no empirical basis for its operations, and indeed for its existence, depending as it does on the predictions of the inherently useless climate models. The climate is much too complex to model but can be predicted by simply knowing where we are in the natural quasi-cycles.

Ken Hall
May 7, 2014 6:48 am

J. Philip Peterson says:
May 7, 2014 at 6:45 am
Sorry, for the life of me I cannot find what the acronym ECIM stands for.
I assume it has something to do with climate or physics or mathematics or computer science.
I assume GCMs are General Circulation Models or Global Climate Models.
===============================================
Enhanced Climate Integration Model

May 7, 2014 6:49 am

Richard Drake says:
May 7, 2014 at 5:01 am
The people chosen to program the next variant of such a flawed base system are, as Brown says, taken from “the tiny handful of people broadly enough competent in the list”. They aren’t stupid by any conventional measure but the overall system, IPCC and all, most certainly is.

Yes, extremely competent, intelligent people can sometimes produce something which is not befitting either their competence or intellect. Unfortunately, if they alone are deemed competent and intelligent enough to resolve this issue, one can surmise that a resolution may not be forthcoming.

RomanM
May 7, 2014 6:49 am

Nick (May 7, 2014 at 5:50 am):

No, they get the dynamics right. They will oscillate in the right way. But they don’t match the Earth’s phases. Even with all the knowledge we can assemble, we can’t predict ENSO.
It’s like you can make a bell, with all the right sound quality. But you can’t predict when it will ring.

This also means that the model can have absolutely no ability to scientifically determine any resultant effect of possible temperature and/or other climate changes on the frequency and intensity of future ENSO events.
ENSO emulation is just more lipstick on the model pig…

beng
May 7, 2014 6:51 am

It takes the most sophisticated models running on supercomputers to “model” a modern aircraft. Those presumably actually work.
Now try modeling “global” climate. It’s the difference between modeling a DNA molecule and modeling a whale.

David A
May 7, 2014 6:54 am

Nick can hand wave all he wants. (However, my model predicts that due to weight and lack of lift in his arms, neither he nor his arguments will get off the ground.)
However, it is not so complicated. The GCMs are informative! They are extremely informative. They all run wrong (not very close to the observations) in the SAME direction (too warm)!
This very likely means that one of the primary factors “wiggling the elephant” is given more power than is warranted. One could look for dozens of errors, but likely one or two of the dominant forcings are overrated. If changing the forcing power of just one factor moves ALL the models closer to the real-world observations, that likely indicates that that dominant factor is overrated.
Does anyone care to guess which single forcing factor, lowered in power, would cause all the models to move closer to real world observations?
BTW, for anyone to take the model mean of a group of GCMs which all run wrong in one direction, and then base public policy on the modeled projected harms, is dishonest, and in my view morally evil.

May 7, 2014 6:55 am

ECIM – Thank you moderator – And I would change “I’ve did advanced importance-sampling…” to I’ve done advanced importance-sampling…

May 7, 2014 6:55 am

It’s an age-old human response to believe that folks who are clearly more knowledgeable about the intricate innards of a climate model can be trusted when they say the models work.
This holds true as long as the model’s predictions seem to be coming to pass. But when reality and predictions diverge sharply, one doesn’t have to know anything about how the model is constructed in order to know that it isn’t worth anything. There is certainly no logical reason for non-climate scientists to continue believing in the models’ predictions.

Bill Illis
May 7, 2014 6:57 am

If you built a new climate model (after reusing large parts of the code from other models, obviously) and you ran it, and it kept coming back with just a 1.0C temperature increase by 2100 …
… what would happen then?
Well, you can’t get invited to all the great global warming parties with your new 1.0C climate model. You are not only uninvited, you are drummed out of the business. Obviously, the model gets a tweak or two or hundreds until it is in the nice safe range of 2.2C to 4.0C (exactly the range of acceptable models prescribed by the IPCC AR4 team – you couldn’t submit a forecast unless the model had an underlying sensitivity within that range).

Nick Stokes
May 7, 2014 7:00 am

Frank K. says: May 7, 2014 at 6:35 am
“Really?? So the GCMs are limited to Courant < 1.”

Usually run at less. Here is WMO explaining:
” For example, a model with a 100 km horizontal resolution and 20 vertical levels, would typically use a time-step of 10–20 minutes. A one-year simulation with this configuration would need to process the data for each of the 2.5 million grid points more than 27 000 times – hence the necessity for supercomputers. In fact it can take several months just to complete a 50 year projection.”
At 334 m/s, 100 km corresponds to about 5 minutes. They can’t push too close. Of course, the implementation is usually spectral, but the basic limitation is there.
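A quick back-of-the-envelope check of those numbers, purely as a sketch using the 334 m/s sound speed and 100 km spacing quoted above (not anything taken from an actual model configuration):

    # Acoustic Courant number for the WMO example above: 100 km grid,
    # 10-20 minute time steps, sound speed ~334 m/s (the figure quoted).
    c_sound = 334.0      # m/s
    dx = 100e3           # m
    crossing = dx / c_sound
    print(f"acoustic crossing time: {crossing:.0f} s (~{crossing/60:.1f} min)")
    for dt_min in (10, 20):
        C = c_sound * (dt_min * 60.0) / dx   # C = c*dt/dx
        print(f"dt = {dt_min} min -> acoustic Courant number C = {C:.1f}")

It prints a crossing time of about 5 minutes and acoustic Courant numbers of roughly 2 to 4 for the quoted time steps.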

Frank K.
May 7, 2014 7:02 am

Nick Stokes says:
May 7, 2014 at 5:02 am
“GCMs are traditionally run with a long wind-back period. That is to ensure that there is indeed no dependence on initial conditions. The reason is that these are not only imperfectly known, especially way back, but are likely (because of that) to cause strange initial behaviour, which needs time to work its way out of the system.”
So the initial condition which is finally reached after the arbitrary “wind up” period is a unique and accurate initial condition for the start time of the simulation? Please explain in more detail how this is achieved.
All systems of PDEs are dependent on their initial/boundary conditions. There has to be an initial condition to begin any analysis, and the solution will depend on this. You really can’t have “no dependence”…

Matt Skaggs
May 7, 2014 7:04 am

Thanks to Nick Stokes for commenting on this one. No quibbles with Dr. Brown’s assessment, but Nick’s comments greatly enrich the thread in terms of the readers being able to get the big picture.

ossqss
May 7, 2014 7:05 am

Thank you Dr. Brown for the interesting and enlightening view from any given climate modeler’s keyboard. The qualifications for such create a unique subset of scientists, for certain.
I have often pondered how many concatenated items would be necessary to achieve climate modeling accuracy for a given period of time. Tens of trillions or more? I don’t believe anyone really knows in the end. That thought brings forth the question as to how these modelers can claim climate accuracy for 100 years out. I believe that is impossible based upon historic performance and continuously discovered variables not accounted for to begin with. Don’t get me wrong, numeric modeling has had success in weather prediction in the short range. Go out more than 7-10 days, not so much success.
I would say without hesitation that the climate modeler is the most powerful person on the planet. Just look at the results of their work. They have been setting energy and social policy for decades without being seen or heard from. They are truly the people behind the curtain if you will.
So tell me who are these beings?
What are they like in the real world?
Would they make good neighbors?
Would you trust them with your children?
I think we have the right to know since they are impacting everyone on this planet in one way or another, every minute of your day.
It seems to me that we have changed the dynamic of science in the climate community. We used to build up to a conclusion by virtue of hard work and theory validation. It seems we have inverted that pyramid of function in climate modeling and strive to backfill from the top down based upon a preconceived end result. Inclusive to this is the obvious adjustment of the historic observational data to fit the expectation of the models.
I ask why?
I have viewed some entities post here that climate is like engineering when it comes to CO2. The observations simply prove them wrong as they are obviously missing important parts of their equation.
In climate science, Cloud Computing has a whole different meaning.
Regards Ed

Steve Keohane
May 7, 2014 7:08 am

Thank you Dr. Brown.

Cold in Wisconsin
May 7, 2014 7:09 am

This is all a very intelligent discussion of an extremely technical nature, but can I return to Middle School for a moment and ask how we can call these models “Science” in the first place? If the experiment is “let’s build a predictive model and see if we can predict the future climate or temperature of the earth”, then I think it is clear that the experiment has failed miserably many times over. We certainly can’t expect the projections to improve over time if they diverge from reality so early in the experiment. The models may be based upon scientific laws and theories, but that doesn’t make the models science. If it did, then every time I get in my car and drive it, I am doing science. There’s lots of science under the hood, but I am just running the car, not doing an experiment. Also, if my husband (a Food Scientist) makes me some toast, is that science?
We need to stop referring to these models in the same sentence as the word science.

Latitude
May 7, 2014 7:09 am

You guys are a hoot….you’re talking about clouds, ENSO, chaotic systems, etc like these computer games have any hope at all…..
Here’s the take home…..
“difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.”
The real bottom line is temperature reconstructions…and they are constantly changing them
These games will never be right when they are constantly “adjusting” the official temperature history these guys use to validate these computer games….
They adjusted past temps down and present temps up to show a faster warming trend than is real…
…based on that garbage….the computer games are showing that same trend
They will never be right………

rogerknights
May 7, 2014 7:11 am

Charles Nelson says:
May 7, 2014 at 3:41 am
Warmists often ask me just how such a giant conspiracy could exist amongst so many scientists…this is one very good particular example of the kind of structures that sustain CAGW.

Here’s a relevant quotation:

Paul Vaughan (07:53:54) :
norris hall (05:10:11) “[…] it is possible that this is just a big conspiracy by climate scientist around the world to boost their cause and make themselves more important. Though I find it hard to believe that thousands of scientists […] all agreed to promote bogus science. […] Pretty hard to do without being discovered.”
Actually not so hard.
Personal anecdote:
Last spring when I was shopping around for a new source of funding, after having my funding slashed to zero 15 days after going public with a finding about natural climate variations, I kept running into funding application instructions of the following variety:
Successful candidates will:
1) Demonstrate AGW.
2) Demonstrate the catastrophic consequences of AGW.
3) Explore policy implications stemming from 1 & 2.
Follow the money — perhaps a conspiracy is unnecessary where a carrot will suffice.

May 7, 2014 7:16 am

Dr. Brown
This post I did with respect to modeling might be useful for this discussion
What Are Climate Models? What Do They Do?
http://pielkeclimatesci.wordpress.com/2005/07/15/what-are-climate-models-what-do-they-do/
On the serious flaws with respect to these models when run for multi-decadal climate predictions (projections) even for current climate [in hindcasts where they can actually be tested against real world data], see, for example, the Preface http://pielkeclimatesci.files.wordpress.com/2013/05/b-18preface.pdf
to
Pielke Sr, R.A., Editor in Chief., 2013: Climate Vulnerability, Understanding and Addressing Threats to Essential Resources, 1st Edition. J. Adegoke, F. Hossain, G. Kallos, D. Niyoki, T. Seastedt, K. Suding, C. Wright, Eds., Academic Press, 1570 pp.
Best Regards
Roger Sr.

ferd berple
May 7, 2014 7:21 am

Dr Brown
Isn’t it more likely that climate models are predicting what climate modelers BELIEVE the future climate will be, rather than predicting future climate?
Here is my reasoning: since we don’t know the future climate, it is not possible to validate the models, except against what we believe the future to be. Given two models that hindcast with the same approximate level of accuracy, the model builder will subconsciously select the model that forecasts closer to their own belief, believing this model to be “more correct”.
As every non-trivial piece of code has “bugs”, this process over time will select for code in which the errors skew the results in the direction of the developers’ beliefs as to what is the correct answer. In the case of a community of developers, the model will also be selected based on the community belief system, especially where grant money is involved. Otherwise, those models that do not conform will not get funding and will be eliminated.
In effect, climate models are undergoing “survival of the fittest”, in which the fittest model is the one that the climate community believes is delivering the “best” answer, regardless of how accurately it is able to actually predict the future. Over time, the “best answer” is the answer that attracts the most funding, not the answer that delivers the most accuracy.

May 7, 2014 7:21 am

Oops, thanks Ken Hall for the ECIM definition.
And Dr. Brown, this is one of the best reads on WUWT that I have seen, even though I had only 1 year of college physics, I do get the gist of it. Great post…

mpainter
May 7, 2014 7:23 am

Many thanks to Dr. Robert G. Brown for these insights into the GCMs. For me, the final analysis is in the differing results of the various models. It seems a truism that the model does what the modeler wants it to do.

Frank K.
May 7, 2014 7:24 am

Stokes
“Basically, sound waves are how dynamic pressure is transmitted”
Could you please answer my question about this statement? I would like to know where I can find out more information about how dynamic pressure is transmitted by sound waves. Thanks.

ripshin
Editor
May 7, 2014 7:28 am

jaffa – “space invaders” – LMAO!

John McClure
May 7, 2014 7:32 am

So the code is poorly documented, the equations suspect, and it apparently lacks modular design for testing. Is this an example of featherbedding or the need to assign the coding to an independent group?
“all of the `confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis”
If memory serves, Lord May refers to the scientific opinions as consensus regarding educated guesses.

May 7, 2014 7:37 am

Robert G. Brown,
A stimulating read. Thank you.
I think the modelers will just shrug their shoulders at their model’s non-real behavior and exclaim with non-scientific self-righteousness something like => Hey, our climate models are a work in progress and to be on the safe side for humanity (and our grandchildren) we purposely made them dangerously warm until (sometime in the far future) we get our models working right.
I see no other position they can take except they warmed the models for the precautionary principle. They aren’t doing science.
The question is, is what they did lying?
John

Matt
May 7, 2014 7:41 am

I too am a physicist with considerable experience collecting data to initialize and validate models. It occurs to me that one of the biggest challenges for saving the planet from this warming trend is that organizations funded by hydrocarbon billionaires are confusing the public by spreading disinformation on climate change. I think the solution is obvious. Now that 99.999 percent of the scientists agree, our climate model-builders should turn their massive skills to predicting financial markets. We should be able to quickly amass huge amounts of cash to spend on lobbyists, political campaigns, and public education. Then we can buy up all the carbon-capture patents and keep that technology secret, and go on to destroy the evil coal, oil and natural gas industries. Problem solved.

rogerknights
May 7, 2014 7:41 am

JohnWho says:
May 7, 2014 at 6:49 am

Richard Drake says:
May 7, 2014 at 5:01 am
The people chosen to program the next variant of such a flawed base system are, as Brown says, taken from “the tiny handful of people broadly enough competent in the list”. They aren’t stupid by any conventional measure but the overall system, IPCC and all, most certainly is.

Yes, extremely competent, intelligent people can sometimes produce something which is not befitting either their competence or intellect. Unfortunately, if they are the only ones competent and intelligent enough to resolve this issue, one can surmise that a resolution may not be forthcoming.

Edward de Bono wrote about how intelligent people lead themselves astray and won’t retrace their steps. He called it “the intelligence trap.” (Google for it.)

bonanzapilot
May 7, 2014 7:42 am

Sir:
Can you at least tell us what the price of Tesla Motors will be a year from now?
Thank you in advance for your reply…

David A
May 7, 2014 7:42 am

Sun et al. (2012) found that
“… in global climate models, [t]he radiation sampling error due to infrequent radiation calculations is investigated … It is found that … errors are very large, exceeding 800 W/m² at many non-radiation time steps due to ignoring the effects of clouds …”
===================================
Watts a silly little watt between friends.

A. E. Soledad
May 7, 2014 7:43 am

Modern climatology doesn’t even remember that the atmosphere’s temperature is governed by the Ideal Gas Law today, just as it was when James Hansen began weighting the atmospheric temperature according to trace gas composition.
It’s fairy physics. The atmosphere obeys the Ideal Gas Law. If your model doesn’t explicitly obey it, then your model’s wrong.
NO modern GCM since James Hansen tried to re-invent computer modeling of the atmosphere (after NASA had already properly modeled it, using the proper law, for the orbital calculations of Mercury and Apollo) has the atmosphere obeying the Ideal Gas Law.
That’s why those of you who swear you’re as smart as anybody in the room regarding this are laughingstocks to the real working scientists, who’ve laughed in all your faces for the past ten to fifteen years as you’ve sworn you believe trace gas controls atmospheric temperature.
It’s as ridiculous today as it was when Hansen was lying to Congress about it even being potentially real. Just because you believed it doesn’t mean it was maybe right.
Just because you’ve been wrong for going on twenty years, as we in the actual, working scientific world have laughed you to shame, doesn’t mean there might have actually been something to it after all.
It’s only the people who don’t go out and actually make things fly and land and communicate who ever believed it. The working scientists who actually make the solar system probes, and the rovers, and the satellite communications networks, and the avionics people, and the submariners: none of them ever believed Hansen’s lunacy. You who believed it have had 40 N.A.S.A. retirees telling you for the past several years that they’re ashamed of Hansen and his sorry science.
You wouldn’t listen to them either.
But there’s one thing the working scientists of this world have been telling you, all of you who believed it, for nearly a quarter of a century.
There’s no such thing as a Greenhouse Gas Law governing atmospheric temperature.
There is such a thing as the Ideal Gas Law governing atmospheric temperature.

May 7, 2014 7:49 am

Dr Brown, you have good company.
“If you live with models for 10 to 20 years, you start to believe in them…A model is such a fascinating toy that you fall in love with your creation.”
– Freeman Dyson (At the age of five he calculated the number of atoms in the sun.)

MikeUK
May 7, 2014 7:51 am

I read somewhere that just the radiative-interactions part of the NASA GCM has N thousand lines of code (not sure of the value of N). This code is being maintained and updated by a series of grad students and postdocs. Anyone who knows about such monsters KNOWS that many bugs and errors will exist.
Formal validations should be presented for any GCM used in climate research that has policy implications.

Roy Spencer
May 7, 2014 7:51 am

Alex, I have no idea what “isothermal assumption” you think I have attached to my name. Also, your points are so riddled with misunderstanding, I don’t even know where to start.

Harry Passfield
May 7, 2014 8:01 am

RGB says: “…and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented…”
I just wonder what his reaction was to the Harry_read_me code from CG1 – which I doubt could be called ‘open source’.

Nick Stokes
May 7, 2014 8:04 am

Frank K. says: May 7, 2014 at 7:24 am
“Could you please answer my question about this statement? I would like to know where I can find out more information about how dynamic pressure is transmitted by sound waves. Thanks.”

OK. But first a correction – it’s late here. I said that they run at Courant number less than 1. In the example I cited, 5 min vs 10-20, they are running at >1. That’s possible to a point, using devices like you mentioned.
Now on sound and pressure, it’s there in the Navier-Stokes equations. I’ve given more details here. But you have the momentum equation
\rho \frac{D\vec{v}}{Dt} = -\vec{\nabla} P + \ldots
and the conservation of mass equation:
\frac{\partial \rho}{\partial t} = -\rho \vec{\nabla}\cdot\vec{v} + \ldots
and a state equation: \frac{dP}{d\rho} = c^2
Put those together and you have an acoustic wave equation, buried in the Navier-Stokes equation. And you’re limited by its Courant Number.

David L. Hagen
May 7, 2014 8:23 am

Glimpse into Type B Errors
Dr. Robert Brown concisely observes:
Re: “IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. . . . .the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.”
That is a brief glimpse into contributions to major “Type B errors” – which the IPCC totally ignores, and yet which are so obvious with 96% of GCM 34 year projections exceeding temperature reality.
See BIPM’s GUM: Guide to the Expression of Uncertainty in Measurement 2008 and
NIST’s Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results 1994

May 7, 2014 8:26 am

@RGB

It’s just that unless you have a Ph.D. in (say) [a bunch of degrees in many independent, interdisciplinary, and complementary science, mathematics, programming fields] you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or physics, or initialization, or methods, or the ode solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…

Dr. Brown, your post at top was delicious. But I take small issue with the blockquote above. You make it sound as if, unless you are some post-doc octopus, you are not worthy to comment on the GCM code. Quite the contrary: only a post-doc octopus may be capable of writing such a model, but normal one-fisted experts ought to be able to point out flaws in any part of the process.
Building a house requires the skills of soils and foundation engineering, masonry, structural engineering, carpentry, HVAC, electrical, plumbing, roofing, insulation, cabinetry… etc. Any single element of that endeavor should be, and usually is, reviewed and critiqued by independent masters of the craft. That a plumbing inspector might not know roofing doesn’t disqualify him from red-tagging a drain pipe with insufficient gradient.
Your point is that GCMs are horribly complex dynamic processes where ALL the skills are necessary to write a good one. All true. But since all these processes must work correctly in isolation and in dynamic cooperation, EACH process can and should be able to withstand criticism by any expert, indeed anyone with knowledge, in that field. The burden of complexity is on the authors of the GCM, not the peer reviewers.
The human body is horribly complex, but you don’t need the skills of a neurosurgeon to recognize the patient has a broken leg, dislocated shoulder, or is dehydrated.
I am sure you agree with this in principle, because you close with the hope that some ordinary

people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors….

For here our ordinary statisticians need to make inferences and question not just one GCM, but the entire genus of GCMs.

Taphonomic
May 7, 2014 8:37 am

“ode solvers” in the article appears to be a typo.
Unless it’s something like fixing problems dealing with the History Channel’s “Vikings” series take on Ragnar Lothbrok.

May 7, 2014 8:38 am

L. Hagen at 8:23 am
I am familiar with Type 1 and Type 2 error. I’m familiar with false positives and false negatives.
Thanks to Lord Monckton, I am familiar with a dozen styles of invalid reasoning.
But I am not familiar with “Type B Error”, at least by that name. What is it? And what is Type A Error?
(Google wasn’t much help: http://www.west.net/~ger/math_errors.html, not bad, but not on point.)

basicstats
May 7, 2014 8:39 am

Nick Stokes
The unsolved Navier-Stokes problem involves variously the solution or the breakdown of these equations in 3 dimensions. The point is that so little is known.
More generally, how far do numerical solutions in GCMs reproduce the dynamics of the actual PDE? Stephen Wolfram knows much more about this than I (or most others, I suspect). He observes: “.. it has become increasingly common to see numerical results given far into the turbulent regime – leading sometimes to the assumption that turbulence has somehow been derived from the Navier-Stokes equations. But just what such numerical results actually have to do with detailed solutions to the Navier-Stokes equations is not clear”
This also goes to the often-cited issue of chaotic climate dynamics. It is likely numerical solutions will exhibit chaotic features, but does this represent the so-called ‘fundamental physics’? Reference to climate being a chaotic system seems often to rely on the very simplified version of N-S studied by Lorenz. Of this Wolfram comments: “The Lorenz equations represent a first-order approximation to certain Navier-Stokes-like equations, in which viscosity is ignored. And when one goes to higher orders progressively more account is taken of viscosity, but the chaos phenomenon becomes progressively weaker. I suspect that in the limit where viscosity is fully included most details of initial conditions will simply be damped out, as physical intuition suggests”. It seems a reasonable conjecture that these observations may also apply to GCM solutions.
Anyway, it would seem prudent to know more about the ‘fine print’ of these models before tasking them with ‘saving mankind’?

bones
May 7, 2014 8:40 am

Nick Stokes says: “So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Nina’s, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behavior…”
So if ENSO is an emergent phenomenon in the models, one might expect AMO and PDO features to also be emergent. Show us the models that produce these.

Martin C
May 7, 2014 8:41 am

Roy Spencer says:
May 7, 2014 at 7:51 am
Dr. Spencer, from reading ‘Alex Hamilton’s words, they seemed a lot like D. C. – I think you know who that is . . . !

rgbatduke
May 7, 2014 8:42 am

I exchanged a few emails with mathematician Chris Essex recently who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-time scale integrations without destroying the underlying physics. Mass and energy don’t get conserved. Then they try to fix the problem with energy “flux adjustments”, which is just a band aid covering up the problem.
We spent many months trying to run the ARPS cloud resolving models in climate mode, and it has precisely these problems.

I’ve spent a fair bit of time solving open stochastic differential equations (or, arguably, integrodifferential equations) — things like coupled Langevin equations — back when I did around five or six years of work on a microscopic statistical simulation of quantum optics, as well as LOTS of work numerically solving systems of coupled (de facto partial) differential equations (the system detailed in my Ph.D. dissertation on an exact single-electron band theory). Even before one hits the level of stiff equations — quantum bound states — where tiny errors are magnified due to the fact that numerically, eigensolutions are always numerically unstable past the classical turning point, problems with drift and normalization and accumulation of error are commonplace in all of these sorts of systems. And yes, one of the consequences of drift is that conservation laws aren’t satisfied by the numerical solution.
A common solution for closed systems is to renormalize per time step (or N time steps) to ensure that the solution is projected back to the conserved subspace instead of being allowed to drift away. This prevents the accumulation of error and ensures that the local dynamics remains MOSTLY on the tangent bundle of the conserved hypersurface/subspace, if you like, with only small/second order deviation that is immediately projected away. This is, however, computationally expensive, and it isn’t the way e.g. stiff systems are solved (they use special backwards ODE solvers).
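As a toy illustration of that per-step renormalization idea (a minimal sketch of my own, not anything from a GCM): forward-Euler integration of a harmonic oscillator steadily inflates the conserved “energy”, while projecting the state back onto the constant-energy surface each step removes that drift, though it does nothing for the accumulating phase error:

    import math

    # Toy example of renormalizing back onto a conserved subspace each step:
    # forward Euler on a unit harmonic oscillator, whose "energy"
    # E = (x^2 + v^2)/2 otherwise grows by a factor (1 + dt^2) per step.
    def integrate(steps, dt, renormalize):
        x, v = 1.0, 0.0
        e0 = 0.5 * (x * x + v * v)
        for _ in range(steps):
            x, v = x + dt * v, v - dt * x            # forward Euler step
            if renormalize:
                s = math.sqrt(e0 / (0.5 * (x * x + v * v)))
                x, v = x * s, v * s                  # project back to E = e0
        return 0.5 * (x * x + v * v)

    dt, steps = 0.01, 100_000
    print("final energy without renormalization:", integrate(steps, dt, False))
    print("final energy with renormalization:   ", integrate(steps, dt, True))

The unrenormalized run ends up with an energy several orders of magnitude too large; the renormalized run stays at 0.5, but its trajectory has still drifted in phase, which is exactly the distinction between conserving a quantity and getting the trajectory right.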
For open systems doing mass/energy transport, however, there is a real problem — energy ISN’T conserved locally ANYWHERE, and most cells can both gain or lose energy directly out of the whole system. In the case of climate models, in some sense this is “the point” — sunlight can and does warm the entire column from the TOA to the lower bound surface of light transport in the model, outgoing radiation can and does cool the entire column from the TOA to the lower bound surface of light transport for the model, AND each cell can exchange energy not only with nearest neighbors but with neighbors as far away as light can travel from the cell boundaries and remain inside the system laterally and vertically combined. That is, radiation from a ground level atmospheric cell can exchange energy with a cell two levels up (assuming that the model has multiple vertical slabs) and at least one cell over PAST any nearest-neighbor, same level cell. One can model this as a nearest-neighbor interaction/exchange system, but it’s not, and in physics systems with long range interactions often have startlingly different behavior from systems that only have short range interactions. Radiation is a long range coupling and direct source and sink for energy.
The consequence of this is that enforcing energy conservation per se is impossible. All one can do is try to solve a system of cell dynamics that is conservative at the differential level, basically implementing the First Law per cell — the energy crossing the borders of the cell has to balance the change in internal energy of the cell plus the work done by the cell. Errors in the integration of those ODE/PDEs cannot be corrected; they simply accumulate. To the extent that the system has negative feedbacks or dynamical processes that limit growth, this doesn’t mean that the energy will diverge, only that the trajectory of the system will rapidly, irreversibly and uncorrectably diverge from the true trajectory. If the negative feedbacks are not correctly implemented, of course, the system can diverge — the addition of a series of randomly drawn -1, +1 numbers diverges like the square root of the number of steps (a so-called “drunkard’s walk”). In the climate system the issue isn’t so much a divergence as the possibility of bias in the errors. Even if you have some sort of damping/global conservation principle forcing you back to zero, if the selection of +/- 1 steps is NOT random but (say) biased two +1’s for every -1, you will not be driven back to the correct equilibrium energy content. This sort of thing can easily happen in numerical code as a simple artifact of things like rounding rules or truncation rules — setting an integer from a floating point number and then using the integer as if it were the float.
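A minimal drunkard’s-walk sketch of that last point (my own illustration, with an arbitrary 2:1 bias):

    import math, random

    # Unbiased +/-1 errors accumulate like sqrt(N); a small bias (here two
    # +1's for every -1) accumulates linearly and swamps the random part.
    random.seed(1)
    N = 100_000
    unbiased = sum(random.choice((-1, +1)) for _ in range(N))
    biased = sum(random.choice((+1, +1, -1)) for _ in range(N))  # P(+1) = 2/3
    print(f"sqrt(N) ~ {math.sqrt(N):.0f}")
    print(f"unbiased walk ends near {unbiased:+d}")
    print(f"biased walk ends near   {biased:+d} (expected drift ~ N/3 = {N // 3})")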
It can also happen as an artifact of the method used to extrapolate when solving differential equations or finite difference equations, or because of using interpolation on an inadequate grid while trying to solve dynamics with significant short term variation. Interpolation basically cuts off extremes, but in a nonlinear model the contribution of the excluded extremes will not be symmetric in the deviation, specifically when the deviation is normally distributed around the interpolation. As a trivial, but highly relevant example, if one has a transport process that (say) is proportional to T^4, and uses interpolated T’s on an inadequate grid, de facto assuming unbiased noise around the interpolation, one will strictly underestimate the transport.
In fact, if one creates a field that has a fixed mean and a purely normal distribution around the mean, and uses the mean temperature as an estimate of the (say, radiative) transport, one will strictly underestimate the actual transport, because the places where the temperature is warmer than the mean lose energy (relative to the mean) faster than the places where the temperature is cooler do. If you like: T_0^4 < \frac{1}{2} (T_0 + \Delta T)^4 + \frac{1}{2}(T_0 - \Delta T)^4 for any nonzero \Delta T.
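A two-line numerical check of that inequality (using 288 K as a representative mean temperature; the spread values are arbitrary):

    # Emission computed at the mean temperature underestimates the mean of
    # the T^4 emission, and the bias grows with the spread.
    T0 = 288.0                         # K, a representative mean temperature
    for dT in (1.0, 5.0, 20.0):
        mean_T4 = 0.5 * (T0 + dT) ** 4 + 0.5 * (T0 - dT) ** 4
        print(f"dT = {dT:5.1f} K: mean(T^4) / T0^4 = {mean_T4 / T0**4:.6f}")

Analytically the ratio is 1 + 6(\Delta T/T_0)^2 + (\Delta T/T_0)^4, always greater than one for nonzero \Delta T.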
Mass transport is even harder to deal with. Well, not really, but it is more expensive to deal with. In the Earth system, one can probably assume that total mass of atmosphere plus ocean plus land (including the highly variable humidity/water/ice distribution that can swing all three ways) is constant. Yes, there is a small exchange across the “boundary” from thermal outgassing and infalling meteor and other matter but it is a very, very small (net) number compared to the total mass in question and probably is irrelevant on less than geological time (in geological time that is not clear!) But GCMs don’t work over geological time so we can assume that the Earth’s mass is basically conserved.
Back a few paragraphs, you’ll note that one has to implement the first law per cell in the model. This is nontrivial, because cells not only receive energy transport across their boundaries in the form of radiation and conduction at the boundary “surfaces”, but they can receive mass across those boundaries and the mass itself carries energy. Worse, the energy carried by the mass isn’t a simple matter of multiplying its temperature by some specific heat and the mass itself; in the case of pesky old water it also carries latent heat, heat that shows up or disappears from the cell relative to its temperature as water in the cell changes phase. Finally, each cell can do work.
This last thing is a real problem. It is left as an exercise to see that a cell model with fixed boundaries cannot directly/internally compute work, because work is a force through a distance and fixed boundaries do not move. If a cell expands into its neighbors it clearly does work (and all things being equal, cools) but “expanding into neighbors” means that the neighbors get smaller and cell boundaries move. One cannot compute the work done at any single boundary from mass transport alone, because no work is done by the uniform motion of mass through a set of cells — a constant wind, as it were. One cannot compute work done by any SIMPLE rule. One has to basically look at net flux of mass into/out of a constant volume cell and instead of evaluating P\Delta V (work) evaluate V \Delta P and try to infer work from this plus a knowledge of the cell temperature, plus a knowledge of the cell’s heat capacity plus fond hopes concerning the rates of phase transitions occurring inside the cell. Yet without this, the model cannot work because ultimately this is the source of convective forces and the cause of things like the wind and global mass-energy transport. Not to worry, this is what the Navier-Stokes equation is all about:
http://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations
In fact, the NS equations do even better. They account for the fact that the motion of the transport is accompanied by the moral equivalent of friction — drag forces that exert shear stress across the fluid, a.k.a. viscosity. They account for the fact that the motion occurs in a gravitational field, so that downward transport of parcels is accompanied by the gain of total energy while upward transport costs total energy (the kind of thing that gives rise to lapse rates). They can be modified to account for “pseudoforces” that appear in a rotating frame, e.g. Coriolis forces, so that mass transported south is deflected spinward (East) in the northern hemisphere, mass transported upwards is deflected antispinward (West) in both hemispheres, by “forces” that depend in detail on where you are on the globe and in which direction you are moving. However, when solving the system (as this article notes) a statement of mass conservation is necessary, for example the continuity equation:
\frac{\partial \rho}{\partial t} + \vec{\nabla} \cdot (\rho\vec{v}) = 0
Recall, though, that the continuity equation in climate science is not so simple. The ocean evaporates. Rain falls. Ice melts. Water itself also transports itself nontrivially around the globe according to a separate, coupled NS equation with its own substantial complexity. Not only does this mean mass transport is difficult to enforce as a constraint, it means that one has to account for enormous, nonlinear variations in cell transport dynamics according to the state of a substantial fraction of the mass in any given cell. Where by “substantial” I don’t mean that it is ever a particularly large fraction — most of the dry atmosphere is nitrogen and oxygen and argon — but water averages 0.25% of the atmosphere and locally can be as much as 5%!
This is not negligible in any sense of the word when integrating a long, long time series!
So just as was the case when considering energy, one has to consider mass transport across cell boundaries and the forces that drive this transport, where one is not tracking parcels of mass as they move around and interact with other parcels of mass, but rather tracking what enters and leaves fixed volume, fixed location cells. One is better off because one only has to worry about nearest neighbor cells. One is far worse off in that one has to worry not only about dry air, but about the fact that in any given timestep, an entirely non-negligible fraction of the fluid mass in a cell can “appear” or “disappear” as water in the cell changes state, and worse, does so in perfect consonance with the appearance and disappearance of energy in the cell as measured by things like the “temperature” of the atmosphere in the cell.
Maintaining normalization in circumstances like this in a purely numerical computation is enormously difficult. One could in principle do it by integrating the total mass of the system in each timestep and using the result to renormalize it to a constant value, but this is safe to do only if the system is sufficiently detailed that it can be considered closed. That is, the solution has to track in detail the water in inland lakes, the water locked up as ice and snow, the water that evaporates from the surface of the ocean, the total water content of the ocean (cell by cell, all the way to whatever layer you want to consider an “unchanging boundary”). Otherwise errors in your treatment of water as a contribution to total cell mass will bleed over into errors in the total dry atmospheric mass, and the system will drift slowly away into the nonphysical regime as the integration proceeds.
Finally, one has the same issues one had with energy and granularity, only worse. Considerably worse on a lat/long grid on a sphere. At the poles, one can literally step from one cell to the next — in fact at the north pole itself one can put one’s foot down and have it be in dozens of cells at once. Dynamics based on approximately rectilinear cells at the equator, where cells are hundreds of kilometers across and one might be forgiven for neglecting non-nearest neighbor cell coupling, are totally wrong at the poles, where a fire in one cell can heat somebody’s hands when they are standing four cells away. Mass transport dynamics is also skewed — a tiny storm that would be literally invisible at the equator in the middle of a cell might span many cells at the poles, just as “Antarctica” stretches all the way across a rectangular map with linear latitude and longitude, apparently contributing a 12th or so of the total area of the globe in spite of it being only a modest-sized continent that is far less than 1/12 of the land area, where land area is only 30% of the globe in the first place. Siberia, Canada and Greenland are all similarly distorted into appearing comparatively huge. One can of course compensate — in a sense — for this distortion by multiplying by a suitable trig factor in the integrals (the Jacobian), but this doesn’t correct the dynamical algorithms themselves!
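That cos(latitude) distortion is easy to see numerically; a small sketch for a nominal 3-degree grid on a 6371 km sphere (numbers chosen purely for illustration):

    import math

    # Physical extent and area of uniform 3-degree lat/long cells at various
    # latitudes; the east-west width carries the cos(latitude) Jacobian factor.
    R = 6371.0                # km
    dlat = dlon = 3.0         # degrees
    for lat in (0, 30, 60, 85, 89):
        ns = R * math.radians(dlat)                                # north-south, km
        ew = R * math.cos(math.radians(lat)) * math.radians(dlon)  # east-west, km
        print(f"lat {lat:2d}: ~{ns:4.0f} km x {ew:6.1f} km, area ~{ns * ew:9.0f} km^2")

An equatorial cell comes out roughly 334 km on a side; at 89 degrees the same nominal cell is only about 6 km wide, a factor of nearly sixty in area.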
This I do have direct experience with. In my band theory problem, I had to perform projective integrals over the surfaces of spheres. Ordinarily, a 2D integral with controllable error would be simple — just use two 1D adaptive quadratures, or without too much effort, write a 2D rectilinear adaptive quadrature routine to use directly. But on a sphere, using a gridding of the spherical polar angles \phi, \theta this does not work. Or rather, it works, but enormously inefficiently and with nearly uncontrollable error estimates. By the time one has a grid that integrates the equator accurately, one has a grid that enormously oversamples the poles. Using a differential adaptive routine can help, but it still doesn’t account for the non-rectangular nature of the cells at the poles and hence one’s error estimates there are still sketchy.
Finally, note well my use of the word adaptive. Even when solving simple problems with ordinary quadrature (let alone trying to solve a nonlinear, chaotic, partial differential equation that cannot even be proven to have general solutions) one cannot just say “hey, I’m going to use a fixed grid/step size of x, I’m sure that will be all right”. Errors grow at a rate that directly depends on x. Whether or not the error growth rate for any given problem can be neglected can only rarely be known a priori, and in this problem any such assertion of a priori knowledge would truly be absurd. That is why even ordinary, boring numerical integration routines with controllable error specification do things like:
* Pick a grid size. Do the integral.
* Divide the grid size by some number, say two. Do the integral again, comparing the answers. (Some methods will do the integral a third time, or will avoid factors of two as being too likely to produce a spurious/accidentally accepted result that is still badly wrong.)
* If the two answers agree within the requested tolerance, accept the second (probably more accurate) one.
* If not, throw away the first answer, replace the first by the second, divide the grid size by two again, and repeat until the two answers do agree within the accepted tolerance.
Again, there are more subtle/efficient variants of this — sometimes adapting only over part of the range where the function itself is rapidly varying (e.g. applying the sort of process observed above directly to the subdivisions until they separately converge). All adaptive quadrature routines can, however, be fooled by just the right function into giving precisely the wrong answer: for example, a harmonic function with zero integral will give a nonzero result if the initial gridding is some large, even integer multiple of the period, as even two or three divisions by 2 will of course give you the same nonzero value and the routine will converge without discovering the short-period variation.
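Here is a minimal sketch of that halving scheme (midpoint rule, my own toy code), including the failure mode just described, where a sine whose period evenly divides the initial grid spacing “converges” to the wrong answer:

    import math

    # Step-halving quadrature with the midpoint rule: halve the grid until
    # two successive estimates agree to a tolerance.
    def midpoint(f, a, b, n):
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    def halving_quadrature(f, a, b, tol=1e-8, n0=4, max_halvings=24):
        prev, n = midpoint(f, a, b, n0), n0
        for _ in range(max_halvings):
            n *= 2
            cur = midpoint(f, a, b, n)
            if abs(cur - prev) < tol:
                return cur
            prev = cur
        return prev   # not converged; return the last estimate anyway

    # Well-behaved: integral of x^2 on [0, 1] is 1/3.
    print(halving_quadrature(lambda x: x * x, 0.0, 1.0))

    # Fooled: sin(64*pi*x + 0.3) integrates to zero on [0, 1], but with n0 = 4
    # the spacing and its first halving are even multiples of the period, so
    # every sample sits at the same phase and the test "converges" to sin(0.3).
    print(halving_quadrature(lambda x: math.sin(64 * math.pi * x + 0.3), 0.0, 1.0))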
Solving differential equations has exactly this problem, only — you guessed it — worse. The accurate knowledge of the integral to be done for each step depends on the accuracy of the integral done in the step before. Or, in many methods, the steps before. Even if the error in that step was well within a very small tolerance, integrating over many steps can — drunkard’s walk fashion — cause the cumulated error to grow, even grow rapidly, outside of the requested tolerance for the ODE solution. If there is any sort of bias in the cumulated error, it can grow much faster than the square root of the number of steps, and even if the differential equations themselves have intrinsic analytic conservation laws built in, the numerical solution will not.
Certain differential systems can still be fairly reliably integrated over quite long time periods with controllable errors by a good, adaptive code. These systems are called (generically) “non-stiff” systems, because they have an internal stability such that the solutions one jumps around between due to small uncontrollable errors in the integration tend to move together so that even as you drift away you drift away slowly, and can usually make the step size small enough to make that drift “slow enough” to achieve a given probable tolerance or absolute error requirement.
Others — yes, the ones that are called “stiff” — cannot. These are systems where neighboring trajectories rapidly (usually exponentially or faster) diverge from one another, even when started with very small perturbations in their initial conditions. In these systems, simply using different integration routines from precisely the same initial condition, or changing a single least-significant digit in the initialization will, over time, lead to completely different solutions.
Guess which kind of system the climate is. Not just the climate, but even comparatively simple systems described by the Navier-Stokes equation. One doesn’t even usually describe such systems as being merely stiff, as the methods that will work adequately to integrate stiff systems with a simple exponential divergence will generally not work for them. We call these systems chaotic, and deterministic chaos was discovered in the context of weather prediction. Not only do neighboring solutions diverge, they diverge in ways described by a set of Lyapunov exponents:
http://en.wikipedia.org/wiki/Lyapunov_exponent
Note well that the Maximal Lyapunov Exponent (MLE) is considered to be a direct measure of the predictability of any given chaotic system (where the “phase space compactness” requirement mentioned in the first paragraph is precisely the requirement that e.g. mass-energy be conserved by the dynamics, allowing for the ins and outs of energy in the energy-open system). The differential equations of quantum eigensolutions aren’t chaotic even though they diverge “badly” and produce non-normalizable solutions (violating the axioms of quantum theory, which is why only eigensolutions that are normalizable are allowed); they are merely stiff. I’m not certain, but I’m guessing that the MLE for climate dynamics is such that system stability extends only out to pretty much the limits of weather prediction, decades away from any sort of ability to predict climate.
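For what it is worth, the standard two-trajectory estimate of the MLE is simple enough to sketch for the Lorenz-63 toy system mentioned upthread (classic parameters; the step size, run length and renormalization interval here are arbitrary choices, not anything tuned):

    import math

    # Estimate the maximal Lyapunov exponent of Lorenz-63 by following two
    # nearby trajectories and renormalizing their separation periodically.
    SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

    def lorenz(s):
        x, y, z = s
        return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

    def rk4_step(s, dt):
        k1 = lorenz(s)
        k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
        return tuple(si + dt * (a1 + 2 * a2 + 2 * a3 + a4) / 6.0
                     for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

    dt, d0 = 0.01, 1e-8
    a = (1.0, 1.0, 1.0)
    for _ in range(1000):                  # settle onto the attractor first
        a = rk4_step(a, dt)
    b = (a[0] + d0, a[1], a[2])            # perturbed twin trajectory

    log_sum, renorm_every, n_renorms = 0.0, 10, 5000
    for _ in range(n_renorms):
        for _ in range(renorm_every):
            a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        log_sum += math.log(d / d0)
        b = tuple(x + (y - x) * d0 / d for x, y in zip(a, b))   # rescale back to d0

    mle = log_sum / (n_renorms * renorm_every * dt)
    print(f"estimated maximal Lyapunov exponent ~ {mle:.2f} (literature value ~ 0.9)")

The reciprocal of that exponent sets the e-folding time for the growth of initial-condition error, which is the sense in which the MLE bounds predictability.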
Without an adaptive solution, one literally cannot validate any given solution to the system for a given, a priori determined spatiotemporal gridding. One cannot even say if the solution one obtains is like the actual solution. All one can do is solve the differential system lots of times and hope that the result is e.g. normally distributed with respect to the actual solution, or that the actual solution has some reasonable probability of being “like” one of the solutions obtained. One cannot even estimate that probability, because one cannot verify that the distribution of solutions obtained is stationary as one further subdivides the gridding or improves or merely alters the algorithm used to solve the differential system.
The really sad thing is that we know that there are numerous small scale weather phenomena that easily fit inside of an equatorial cell in any of the current gridding schemes and that we know will have a large, local effect on the cell’s dynamics. For example, thunderstorms. Tornadoes. Mere rainfall. Winds. The weather system is not homogeneous on a scale of 3 degree lat/long blocks.
What this in turn means is that we know that the cell dynamics are fundamentally wrong. If we put a thunderstorm into a cell that has one, it is too big. If we don’t put a thunderstorm into a cell that has one, we miss a rapidly varying mass fraction, latent heat exchange, variation of albedo, and transport of bulk matter all over the place vertically and horizontally, carrying variations in energy density with it. There isn’t the slightest reason to believe that dynamics carried out with thunderstorms that are always a hundred kilometers across or more will in any way have a long term integral that matches the integral of a system with thunderstorms that can be as small as one kilometer across (not a crazy estimate for the lateral size of a summer thunderhead), or that either extreme of assigning thunderstormness plus some interpolatory scheme for “mean effect of a cell thunderstorm” will integrate out to the right result either.
So here’s the conclusion of the rather long article above. In my opinion — which one is obviously free to reject or criticize as you see fit — using a lat/long grid in climate science, as appears to be pretty much universally done, is a critical mistake, one that is preventing a proper, rescalable, dynamically adaptive climate model from being built. There are unbiased, rescalable tessellations of the sphere — triangular ones, or my personal favorite/suggestion, the icosahedral tessellation. There are probably tessellations still undiscovered that can even be rescaled N-fold for some N and still preserve projective cell boundaries (some variation of a triangular tessellation, for example). These tessellations do not treat the poles any differently from the equator, and one can (sigh) still use lat/long coordinates to locate the centers, corners and sides of the tessera. Yes, it is a pain in the ass to write a stable, rescalable quadrature routine over the tessera, but one only has to do it once, and that’s the thing federal grant money should be used for — to fund eminently practical applied computational mathematics to facilitate the accurate, rescalable solution of many problems that involve quadrature on hyperspherical surfaces (which is a nontrivial problem, as I’ve worked on it and written (well, stolen and adapted) an algorithm/routine for it myself). It happens all the time in physics, not just for NS equations on the globe in climate science.
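To make the icosahedral suggestion concrete, here is a small sketch (mine, not any production grid generator) that builds a geodesic tessellation by repeatedly splitting the icosahedron’s spherical triangles at projected edge midpoints and then compares cell areas; the point is that, unlike lat/long cells, the areas stay within a modest factor of one another at every refinement level:

    import itertools, math

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    def midpoint(a, b):
        return normalize(tuple((x + y) / 2.0 for x, y in zip(a, b)))

    def spherical_area(a, b, c):
        # Solid angle of a spherical triangle (Van Oosterom & Strackee formula).
        triple = (a[0] * (b[1] * c[2] - b[2] * c[1])
                  - a[1] * (b[0] * c[2] - b[2] * c[0])
                  + a[2] * (b[0] * c[1] - b[1] * c[0]))
        dots = 1.0 + sum(x * y for x, y in zip(a, b)) \
                   + sum(x * y for x, y in zip(b, c)) \
                   + sum(x * y for x, y in zip(c, a))
        return abs(2.0 * math.atan2(abs(triple), dots))

    # Icosahedron vertices from the golden ratio; its 20 faces are exactly the
    # triples of mutually nearest vertices (edge length 2 before normalization).
    phi = (1 + math.sqrt(5)) / 2
    raw = [(0, s1, s2 * phi) for s1 in (-1, 1) for s2 in (-1, 1)]
    raw += [(s1, s2 * phi, 0) for s1 in (-1, 1) for s2 in (-1, 1)]
    raw += [(s1 * phi, 0, s2) for s1 in (-1, 1) for s2 in (-1, 1)]
    def dist2(a, b): return sum((x - y) ** 2 for x, y in zip(a, b))
    faces = [tuple(normalize(p) for p in tri)
             for tri in itertools.combinations(raw, 3)
             if all(abs(dist2(p, q) - 4.0) < 1e-9
                    for p, q in itertools.combinations(tri, 2))]
    assert len(faces) == 20

    for level in range(4):
        areas = [spherical_area(*t) for t in faces]
        print(f"level {level}: {len(faces):5d} cells, "
              f"max/min area ratio = {max(areas) / min(areas):.3f}")
        faces = [child for a, b, c in faces
                 for child in ((a, midpoint(a, b), midpoint(c, a)),
                               (midpoint(a, b), b, midpoint(b, c)),
                               (midpoint(c, a), midpoint(b, c), c),
                               (midpoint(a, b), midpoint(b, c), midpoint(c, a)))]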
Given a solid, trivially rescalable grid, many questions concerning the climate models could directly be addressed, at some considerable expense in computer time. For example, the routine could simply be asked to adaptively rescale until the model converged to some fairly demanding tolerance over as long a time series as possible, and then the features it produces could be compared to real weather features to see if it is getting the dynamics right with the many per-cell approximations being made. Many of those approximations can probably be eliminated, as they exist only because cells right now are absurdly large compared to nearly all known local weather features and only manage large scale, broad transport processes while accumulating errors with an unknown bias in each and every cell due to the approximation of everything else. The MLE of the models could be computed, and used to determine the probable predictivity of the model on various time scales. The dynamically adaptive distributions of final trajectories could be computed and compared to see if even this is converging. And finally, one wouldn’t ever again have to completely rewrite a climate model to make it higher resolution. One would simply have to alter a single parameter in the program input to set the scale size of the cell, and a single parameter to set the scale size of the time step, with a third parameter used to control whether or not to let the program itself determine if these initial settings are adequate or need to be globally or locally subdivided to resolve key dynamics.
I’m guessing that predictivity will still suck, because hey, the climate is chaotic and highly nonlinear. But at least such a program might be able to answer metaquestions like “just how chaotic and nonlinear is that, anyway, when one can freely increase resolution or run the model to some sort of adaptive convergence”. Even if solving the model to some reasonable tolerance proved to be impossibly expensive — as I strongly suspect is the case — we would actually know this, and would know not to take climate models claiming to predict fifty years out seriously. Hey, there are problems Man was Not Meant to Solve (so far), like building a starship within the bounds of the current knowledge of the laws of physics, solving NP-complete problems in P time (oh, wait, that could actually be what this problem is), building a direct recursion relation capable of systematically sieving out all of the primes, generating a truly random number using an algorithm, and, who knows, maybe long term climate modelling.
rgb

Lady in Red
May 7, 2014 8:43 am

As Robin put it, “They [GCMs] justify government action to remake social rules without any kind of vote, notice, or citizen consent. It makes us the governed in a despot’s dream utopia that will not end well.”
Sadly, that, boys and girls, is the really important take-away. The back-and-forth about science is a Punch and Judy show distraction, like the games politicians play. We tend to believe they are real, important, meaningful. They are not; they are only shadows we are watching. Real world reality is taking place behind us, where we aren’t looking. …..Lady in Red

May 7, 2014 8:43 am

Again, it goes to show what a colossal waste of time, energy, money and manpower the whole CAGW scam has been – and continues to be.
The climate system is chaotic. Only a complete imbecile would try to model it. Surely?

Frank K.
May 7, 2014 8:49 am

Nick Stokes says:
May 7, 2014 at 8:04 am
OK. You can derive the acoustic wave equation this way, but this really has nothing to do with “dynamic pressure” directly. Please be more careful with your wording in the future.
As for the Courant number restriction, it depends on your wave speeds. If your equations exclude/filter sound waves, then the convective velocity becomes the characteristic velocity for stability (Courant # = V*dt/dx <= 1). However, stability for non-linear systems is never guaranteed, and particularly for COUPLED non-linear systems. Which is why I commented above that the whole stability issue goes way beyond a simplistic Von Neumann stability analysis.
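For readers following along, that textbook von Neumann analysis is easy to reproduce for the 1-D linear advection equation with a first-order upwind scheme (a minimal, purely illustrative sketch; real GCM dynamical cores are far beyond this):

    import cmath

    # Von Neumann analysis of first-order upwind for u_t + V u_x = 0.
    # Per-step amplification factor: G(theta) = 1 - C*(1 - exp(-i*theta)),
    # theta = k*dx.  Stable (|G| <= 1 for all wavenumbers) iff C = V*dt/dx <= 1.
    def max_amplification(C, samples=720):
        return max(abs(1 - C * (1 - cmath.exp(-1j * 2 * cmath.pi * k / samples)))
                   for k in range(samples))

    for C in (0.5, 0.9, 1.0, 1.1, 2.0):
        g = max_amplification(C)
        print(f"C = {C:3.1f}: max |G| = {g:.3f} ({'stable' if g <= 1 + 1e-12 else 'unstable'})")

And of course this linear, scalar guarantee says nothing about a coupled nonlinear system; it is only the first sanity check.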

May 7, 2014 8:51 am

Man Bearpig: You need NO engineers, just a sh*t load of mechanics.

May 7, 2014 8:53 am

Stephen Rasey: “You make it sound that if you are not some post-doc octopus, you are not worthy to make comments on the GCM code. Quite the contrary, only a post-doc octopus may be capable of writing such a model, but normal one-fisted experts ought to be able to point out flaws in any part of the process.”
I heartily agree. If Geordi Laforge tells me that because the Enterprise has 27 flux capacitors in its cargo deck and 33 flux capacitors in its WT shack there are no more than 50 flux capacitors total on the starship, I am entitled to question his conclusion even though I’m not a warp-drive engineer and in fact wouldn’t know a flux capacitor from a toaster.
But Dr. Brown’s main point is well taken: the system is so arranged that almost no one who doesn’t have a vested interest in perpetuating the system has the wherewithal to inventory the cargo deck and WT shack and thereby scrutinize raw data the high priesthood doesn’t deign to divulge.

May 7, 2014 8:56 am

Too many mistakes in this post to correct
If you want to build a GCM, use this one or get somebody who is competent to help you.
http://mitgcm.org
there is a support list and plenty of people to help you.
resolution? play with this:
http://oceans.mit.edu/featured-stories/mitgcm
Interesting approach to gridding
http://mitgcm.org/projects/cubedsphere/
REPLY: So Mr. Mosher, please show us the climate model that YOU designed.
I thought so. – Anthony

Dr. Thorsten Ottosen
May 7, 2014 9:00 am

Dr Brown & Others,
Do we know how much of the FORTRAN code that makes up the models is devoted to unit tests? From a software-quality perspective, there ought to be a substantial amount of unit tests for a code base of, say, a million lines of code. Also, out of curiosity, where can one download these code bases?
-Thorsten

May 7, 2014 9:04 am

Um, Nick, explain this then: http://en.wikipedia.org/wiki/Blast_wave. Are you really going to tell me that a pressure wave cannot be propagated faster than a sound wave? What do you think lightning does in a thunderstorm then? Can a numerical weather model do thunderstorms without tricks?

commieBob
May 7, 2014 9:14 am

kadaka (KD Knoebel) says:
May 7, 2014 at 4:59 am
From Man Bearpig on May 7, 2014 at 4:00 am:
… None. Engineers don’t fix parking meters, repair techs do.
… Isn’t it wonderful how models and assumptions fall apart from a simple introductory reality check?

In England the guy who fixes something is called an engineer.
The underlying question behind the problem is “how long are we willing for a parking meter to be out of service and what percent of the time are we willing for that to happen.” It’s very similar to the problem of “how many telephone lines do we need between city A and city B?” The parking meter problem is complicated by the fact that we can’t just use the average driving time.
It is very easy to come up with a “simple reality check” and that often works … as long as you’re willing to be out by an order of magnitude in your calculations.
The whole point of Dr. Robert G. Brown’s post is that “it’s complicated folks”. GCMs have been over-sold. The pathetic thing is that the ‘folks in charge’ should know better. Chaos theory is a result of climate modelling. Judith Curry has a wonderful post based on the opinion of Edward Lorenz, climate modeller and the father of chaos theory. Using different reasoning, he corroborates Dr. Brown.
The really pathetic thing here is that Kevin E. Trenberth was a doctoral student of Lorenz. He, of all people, should know better. The problem is so wicked, evil, and complicated that nobody should pretend to have any confidence in GCMs.

May 7, 2014 9:15 am

@Steven Ramey, unless it has changed from my student days, Type I errors could also be called false positives or Type A errors. Type II errors could also be called false negatives or Type B errors.

rgbatduke
May 7, 2014 9:16 am

“have several gigabytes worth of code in my personal subversion tree”
I hope it’s not several gigabytes of your code only or else you are a really bad programmer (which often is the case since everybody who’s taken some kind of university course considers themselves fit for computer programming).

Rest assured, my man. Even though I type like the wind itself and really do write a lot of code, there are only 3.1×10^7 seconds/year, and typing a letter (byte) of code per second I would have to write code for 31 years to write a single gigabyte, and I’m not quite 2×31 years old.
I, like any sane programmer, liberally steal other people’s (open source, legal) code and at every opportunity clone my own code rather than retype it. Also, not all of that is (honestly) code, but the ephemera of code — makefiles, comments, documentation, probably in some cases leftover object files. Then again, subversion saves differences and isn’t linearly efficient — the size of a program in subversion includes the sizes of all of the diff-able variations of the code I’ve checked in as well as the size of the code tree itself.
Still, I really do have a shit-pile of code that I have written myself all the way back to entire boxes of fortran on punch cards to do e.g. the complete resolution of angular momentum coupling of nuclear targets and incident particles in nuclear scattering theory back in 1977-1978, where I had to write my OWN code to generate 3j, 6j, 9j symbols as well as a huge pile of nested loops summing over angular momentum indices and phases. And I never throw anything away — that’s why God created version control systems as a means of versioned archival storage, to permit the reuse and resurrection of old code.
I still regret the code I’ve lost. For example, back in the early 80s I wrote a Mastermind emulator in APL on an IBM 5100. For a couple of decades, I had the QIC tape with the program on it in my office, but the 5100 was a brief flash in the pan and — although I think I’ve rewritten Mastermind three times so far, in APL, in BASICA for the IBM PC, and in C for the IBM PC — I could never get the code off of the tape and so it is lost in the mists of the past (ditto, actually, the BASICA and C versions, sigh).
rgb

Pete Brown
May 7, 2014 9:24 am

Yes I see.

May 7, 2014 9:26 am

Robert Brown:

I’m guessing that predictivity will still suck, because hey, the climate is chaotic and highly nonlinear. But at least such a program might be able to answer metaquestions like “just how chaotic and nonlinear is that, anyway, when one can freely increase resolution or run the model to some sort of adaptive convergence”.

Thank you for this most instructive follow-up, from which I’ve learned in far more detail what is meant by adaptive convergence. And if I may myself go back and diverge slightly …

In the Earth system, one can probably assume that total mass of atmosphere plus ocean plus land (including the highly variable humidity/water/ice distribution that can swing all three ways) is constant. Yes, there is a small exchange across the “boundary” from thermal outgassing and infalling meteoric and other matter, but it is a very, very small (net) number compared to the total mass in question and probably is irrelevant on less than geological time (in geological time that is not clear!)

Yep, formation of our moon through a major planetary collision would I guess play havoc with the conservation assumptions here. And that reminds me of another mind-bending piece this week – by Matt Ridley in The Times two days ago:

We may be unique and alone in the Universe, not because we are special but because we are lucky. By “we”, I mean not just the human race, but intelligent life itself. A fascinating book published last week has changed my mind about this mighty question, and I would like to change yours. The key argument concerns the Moon, which makes it an appropriate topic for a bank holiday Moonday.
David Waltham, of Royal Holloway, University of London, is the author of the very readable Lucky Planet, which argues that the Earth is probably rare, perhaps even unique, as planets go. He is also a self-confessed “moon bore” who has made important discoveries about how the Moon formed.
Ever since Copernicus, the “mediocrity principle” has been scientific orthodoxy: that our planet is not the centre of the Universe; it’s just one of (as we now estimate) a thousand billion billion spherical objects of similar size orbiting fiery suns just like ours.
But in that case, as the nuclear physicist Enrico Fermi famously asked, where is everybody?

That’s paywall protected so I guess I shouldn’t quote more. But well worth a read, if one gets the chance, and a reflection how much our moon has made possible the miracle of climate allowing the evolution of intelligent life. Gaia couldn’t go the distance but Goldilocks has.

Pete Brown
May 7, 2014 9:26 am

Nope. That’s too hard.
There are some very clever people here.

commieBob
May 7, 2014 9:30 am

Missing link:
Judith Curry’s post on Lorenz is here: http://judithcurry.com/2013/10/13/words-of-wisdom-from-ed-lorenz/

Tetragrammaton
May 7, 2014 9:31 am

Dr. Brown is too modest. In addition to the “climatist qualifications” boxes he justifiably checks off, he is a superb writer and presumably can check other relevant boxes regarding team-building and project management. Most of all, he has the perspective to see things as they are and the courage to speak about them.
The spectrum of large-scale science projects, as many of us know too well, can range from “unacceptable nonsense” at one end, through “acceptable nonsense”, “plausibly insightful”, all the way to the other end of the spectrum where we may find “robust, repeatable, tested body of knowledge”. Dr. Brown convincingly positions climate models, and by inference much of the other “global warming” work, clearly at the “unacceptable nonsense” end of the spectrum. Not even the redoubtable Dr. Stokes can nick or pick apart much of Dr. Brown’s narrative (though some may be amused by his attempts).
What the public relations geniuses have managed to do over the past thirty years is to reposition climate science, in the minds of the public and the press, from the unacceptable side of the spectrum to the robust side. Meanwhile the science itself has made no such journey and is arguably not far along the road.
One can understand how awkward this is for honest scientists. As Dr. Brown points out, only a very small number of individuals actually can grasp the full structure and disciplines involved in building these climate models. But, just as one needn’t be a playwright in order to be a competent drama critic, it isn’t necessary to be able to check all the quantum physics, statistics, thermodynamics and other boxes in order to have a pretty good idea of what is actually going on. Are there really so many senior scientific professionals unwilling to spill the beans and admit to the wobbly underpinnings of their “climate science”?

May 7, 2014 9:47 am

@ Stephen Ramey, please excuse my misspelling your name. Sorry. Mea culpa, mea maxima culpa.

richard
May 7, 2014 9:48 am

I feel sure that if CO2 caused any additional warming then the Chilbolton facilities would be able to show the definitive proof.
http://www.stfc.ac.uk/Chilbolton/facilities/24807.aspx
Here is what they have to say about climate models
“Formation of precipitation in boundary-layer clouds is critical to understanding their microphysical and dynamical evolution, and the impact such clouds have on the earth’s radiation balance
and hydrological cycle. Climate models have so far been unable to realistically simulate this “warm-rain” process”
http://www.nerc.ac.uk/research/sites/facilities/list/ar_cfarr.pdf

richard
May 7, 2014 9:51 am

Ah, I should add the end of the piece:
“…Applying the new scheme based on the CFARR data dramatically improved the agreement with observations in these regions, solving a long-standing model error.”

Theo Goodwin
May 7, 2014 9:58 am

“Given a solid, trivially rescalable grid, many questions concerning the climate models could directly be addressed, at some considerable expense in computer time. For example, the routine could simply be asked to adaptively rescale until the model converged to some fairly demanding tolerance over as long a time series as possible, and then the features it produces could be compared to real weather features to see if it is getting the dynamics right with the many per-cell approximations being made.”
Nailed it. Comparison between models and “real weather features” is the heart and soul of working with models. It rests on knowledge that is inherently practical and empirical. Imagination is required. Please note that it is not possible to do this kind of work until an organization has one or more “experts” who can provide his/her practical and empirical knowledge and who can make judgments about the future that will be severely criticized. Models cannot substitute for empirical science that does not exist.

Michael Gordon
May 7, 2014 10:00 am

Brent Walker says: May 7, 2014 at 4:01 am “I started programming almost 50 years ago initially using plug board machines…”
IBM 407 accounting machine. What a beast (over a ton if I remember right).
First I learned to program it and then I became a repairman of such things, and I am still amazed by its engineering. Too bad Charles Babbage didn’t live to see it.
It had hundreds of timing cams and thousands of wire-contact relays and if I remember right a 3/4 horsepower induction motor to drive it. The master timing wheel was calibrated to 1/4 of a degree and so were most of its hundreds of cams.
Weather forecasting was a big consumer of early computers. I am amazed that anyone can forecast the weather 3 days from now with about 50 percent confidence.

rgbatduke
May 7, 2014 10:05 am

I’m watching the Chris Essex video above now, and it is interesting to see that (so far) we are saying exactly the same things. I mean, it’s scary. One by one, he is laying out the points I made in the post just above on the grid. Except (so far) the tessellation problem. I’m waiting for it.
I do have to say, though, that he is much funnier.
I strongly urge people who have a hard time understanding what I’m saying about the numerical complexity of the problem and the absurdity of the claims of predictivity to watch this video, especially Nick Stokes (who I am perfectly happy to agree with, but not today). Or rather, perhaps I DO agree with him today. Yes, GCMs are basically weather prediction programs repurposed to climate prediction, based on fake physics (Chris Essex’s words, but I don’t disagree) that works well enough to predict weather some moderately short distance out and implemented on an absurd grid that arose because some graduate student in the extremely remote past had never heard of tessellation but had heard of spherical polar coordinates or latitude and longitude and used them as “rectangular” grid coordinates in the very first implementation of a weather prediction problem. Everything since then is mere code inheritance. As Chris (and I) point out, however, that gives us absolutely no good reason to believe that they will work for long term climate prediction in a nonlinear complex system whose climate dynamics is known to be strongly governed by what amount to structured, self-organized, named, macroscopic enormously nonlinear quasi-particle dynamics. ENSO. The PDO. Hadley cells. The NAO. Monsoon. We don’t even really know the sign of the changes in macroscopic climate descriptors likely to be consequential on an increase in atmospheric CO_2 forcing. All we have is fond hopes, belief that the fake physics on an absurd grid at a ludicrous resolution will integrate 50 years into the future in some meaningful way, a hope that the climate models themselves do not support as one examines the full envelope of their integrated solutions.
rgb

philincalifornia
May 7, 2014 10:11 am

John Whitman says:
May 7, 2014 at 7:37 am
I see no other position they can take except they warmed the models for the precautionary principle. They aren’t doing science.
The question is, is what they did lying?
————————————————-
I’m all for the precautionary principle. Can we please defund them before they destroy the economy that our children and our grandchildren will inherit?

David Young
May 7, 2014 10:35 am

Roy is right about discrete conservation. The usual fix for this is to use finite element discretization methods. I was surprised some years ago when I learned that climate models still used finite difference methods.

May 7, 2014 10:40 am

RGB I have been saying for some years that the IPCC models are useless for forecasting and that a new approach to forecasting must be adopted. I would appreciate a comment from you on my post at 6:47 above and the methods used and the forecasts made in the posts linked there at http://climatesense-norpag.blogspot.com
The 1000-year quasi-periodicity in the temperatures is the key to forecasting. Models cannot be tuned unless they could be run backwards for 2,000-3,000 years using different parameters and processes on each run for comparison purposes. This is simply not possible.
Yet even the skeptics seem unable to break away from essentially running bits of the models over short time frames as a basis for discussion.
Perhaps it would help psychologically if people thought of the temperature record as the output of a virtual computer which properly integrated all the component processes.
Let us not waste more time flogging the IPCC dead horse and look at other methods.

Speed
May 7, 2014 10:46 am

Robert G. Brown,
Thanks for spending time on this.
I have been told that weather forecasting is an initial conditions problem while climate forecasting is a boundary conditions problem. Is this (to your knowledge) a true statement? It seems to be a statement of the type, “Go away little boy. You’re bothering me. You wouldn’t understand anyway.”

Editor
May 7, 2014 10:47 am

Roy Spencer says:
May 7, 2014 at 3:54 am

I exchanged a few emails with mathematician Chris Essex recently who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-time scale integrations without destroying the underlying physics. Mass and energy don’t get conserved. Then they try to fix the problem with energy “flux adjustments”, which is just a band aid covering up the problem.
We spent many months trying to run the ARPS cloud resolving models in climate mode, and it has precisely these problems.

Thanks, Dr. Roy, I had an online discussion about ten years ago with Gavin Schmidt on the lack of energy conservation in the GISS model. What happened was, as I was reading the GISS model code I noticed that at the end of each time step, they sweep up any excess or deficit of energy, and they sprinkle it evenly over the whole globe to keep the balance correct. Energy conservation at its finest!
I was, as they say, gobsmacked. I wondered how big an amount of energy this was. So I asked Gavin.
After hemming and hawing for a while, Gavin claimed that the number was small and of no consequence. It took a while, but finally he admitted that no, they didn’t monitor that number, so he couldn’t say exactly what the average loss was other than “small”, a tenth of a W/m2 or so. He couldn’t say if it changed over time, or increased over the length of the run, or anything …
More surprisingly, he didn’t have a “Murphy gauge” monitoring the value of the energy imbalance. (“Murphy Gauges” are great. They are a kind of physical meter with a built-in alarm, used extensively on ships to monitor critical systems.)
A Murphy gauge not only displays a value for some variable. It tells you when Murphy’s Law comes into effect and your variable has gone into the danger zone. I have used a variety of these both in the real world and in the programs that I write. If that were my model, when the energy being lost/gained went outside certain limits, the process would shut down immediately to prevent further damage to irreplaceable electrons and I’d take a look at why it stopped.
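A software version of such a “Murphy gauge” is only a few lines; the following is a minimal illustrative sketch (the 0.1 W/m2 threshold and the commented-out helper names are placeholders, not anything from the GISS code):

```python
# A minimal software "Murphy gauge": log the global energy imbalance every
# step and halt if it leaves an allowed band. Purely illustrative -- the
# threshold and the helper names below are placeholders, not GISS code.

class MurphyGauge:
    def __init__(self, limit_wm2=0.1):
        self.limit = limit_wm2
        self.history = []          # (step, imbalance) pairs, so drift can be examined later

    def check(self, step, imbalance_wm2):
        self.history.append((step, imbalance_wm2))
        if abs(imbalance_wm2) > self.limit:
            raise RuntimeError(
                f"step {step}: energy imbalance {imbalance_wm2:+.3f} W/m^2 "
                f"outside +/-{self.limit} W/m^2 -- halting for inspection")

# Hypothetical use inside a model loop:
# gauge = MurphyGauge(limit_wm2=0.1)
# for step in range(n_steps):
#     gauge.check(step, global_energy_in(step) - global_energy_out(step))
```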
That was the point at which I lost all confidence in the models. It was early in my involvement in the climate question, a decade or so ago, and my jaw dropped to the floor. Here were these scientists claiming that their model was “physics based”, and then sprinkling excess energy like pixie dust all over the globe. Not only that, but they didn’t have any idea if the system was net gaining energy, or net losing energy, or both at different times. They just swept it under the rug without monitoring it at all, vanished it all nice and even, leaving no lumps in the rug at all.
Not only that, I didn’t understand even theoretically what their justification was for the procedure. I mean, why sprinkle it evenly over the earth? Wouldn’t you want to return it to the gridcell(s) where the error is? Their procedure leads to curious outcomes.
For example, as Dr. Brown pointed out in the head post, calculating gridcells on a Mercator projection gets very inaccurate at the poles. As a result, it is a reasonable assumption that the poles either inaccurately gain or lose more than the equator. For the sake of discussion, let’s say that the model ends up with excess energy at the poles.
If we simply sprinkle it evenly over the globe, it sets up an entirely fictitious flow of energy equatorwards from the poles. And yes, generally the numbers would be small … but that’s the problem with iterative models. Each step is built on the last. As a result, it is not only quite possible but quite common for a small error to accumulate over time in an iterative model.
At a half-hour per model timestep, in a model year, there are almost 20,000 model timesteps. If you have an ongoing error of say 0.1 W/m2 per timestep, at the end of the year you have an error of 2,000 W/m2 … you can see why some procedure to deal with the inaccuracy is necessary.
There is another problem with Gavin’s claim that the total energy imbalance is small. This is that it is small when averaged out over the surface of the globe.
But now suppose that the majority of the error is coming from a few gridcells, where some oddity of the local topography/currents/insolation/advection is such that the model is miscalculating the results. To put some numbers on it, let’s assume that the overall model error in energy conservation per half-hour time step is say a tenth of a watt per square metre. That was the order of magnitude Gavin mentioned at the time, and indeed, it is small.
From memory, the GISS model gridcell is 2.5° x 2.5°, so there are (180 / 2.5) × (360 / 2.5) = 10,368 gridcells on the earth’s surface. So if the overwhelming majority of that 0.1 W/m2 error is coming from say a hundred gridcells, that means that the error in those hundred gridcells is on the order of 10 W/m2 …
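The arithmetic behind those two worries is easy to check; a small sketch, using only the illustrative figures from the paragraphs above (half-hour steps, 0.1 W/m2, a hundred “bad” cells):

```python
# Rough arithmetic for the two points above, using only the illustrative
# numbers from this comment (not measured model diagnostics).

steps_per_year = 365 * 24 * 2                 # half-hour timesteps -> 17,520
ncells = int(180 / 2.5) * int(360 / 2.5)      # 72 * 144 = 10,368 cells on a 2.5 deg grid

mean_error = 0.1                              # W/m^2, global-mean imbalance
bad_cells = 100                               # suppose it all comes from this many cells
local_error = mean_error * ncells / bad_cells

print("timesteps per model year :", steps_per_year)
print("2.5 x 2.5 degree cells   :", ncells)
print(f"same imbalance concentrated in {bad_cells} cells: ~{local_error:.0f} W/m^2 each")
```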
I fear mainstream climate modelers lost most of their credibility with me right then and there.
If I ran the zoo, I’d a) put Murphy gauges all over the model, b) track the value of the error over time, c) do my best to track the errors back to the responsible gridcells, and d) try to fix the problem rather than living with it. It’s a devilish kind of error to track down, but hey, that’s what they signed up for.
There’s only one model out there that I pay the slightest attention to, the GATOR-GCMOM model of Mark Jacobson over at Stanford. It actually does most of the stuff that Dr. Robert listed in his head post, including using a reasonable grid instead of Mercator and being scalable and nestable. All the rest of the climate models aren’t worth a bucket of warm spit.
Not that I’m saying Jacobson’s model is right. I’m saying it’s the only one I’ve seen that stands a chance of being right on a given day. A description of the GATOR model is here. My 2011 post on the Jacobson model (which I’d completely forgotten I’d written) is called “The Alligator Model”.
I live my life in part by rules of thumb. And I’ve written a number of computer models of various systems. One of my rules of thumb about my own computer models is:
• A computer model is a solid, believable, and occasionally valuable representation of the exact state of the programmer’s misconceptions … plus bugs.
w.

Tim Obrien
May 7, 2014 10:56 am

The real problem is the vast bulk of the public and the politicians hear “computer model” and they think “Star Trek” and “Iron Man” and think it’s REAL. They don’t understand that humans have to build them with a lot of guesses, suppositions and windage.

rgbatduke
May 7, 2014 11:01 am

It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.
What is insane is writing code where one cannot rescale the spatiotemporal grid any way other than basically rewriting the entire program, including the fake physics on the grid cells. Or pretending that the “total instability” you speak of isn’t still there, but now reflected as an error in the physics on all of the smaller/neglected timescales!
You can’t take a function with nontrivial structure all the way down to the millimeter length scale and try to forward integrate it on a 100 kilometer grid and pretend that none of the nontrivial structure inside of the 100 kilometers matters. That’s an assertion that one simply cannot support with any mathematical theory I’m aware of, probably because there are literally uncountably many counterexamples. As I pointed out, adaptive quadrature can easily enough be fooled if it is poorly written and the function being integrated has nontrivial structure at smaller scales than the starting scale — considerable effort goes into designing algorithms that minimize the space of “smooth functions” for which the algorithms will not actually adapt and converge. This approach makes a mockery of nearly everything that is known about integrating complex differential systems.
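To make the adaptive-quadrature point concrete, here is a small sketch in which a standard adaptive integrator, started on too coarse an interval, confidently misses a narrow spike; the spike width and location are contrived purely for illustration:

```python
# An adaptive integrator can report a confident, wrong answer when the
# integrand has structure far below the scale of its initial sampling.
# The narrow spike below is contrived purely to illustrate the point.
import numpy as np
from scipy.integrate import quad

def f(x, width=1e-6):
    # smooth background plus a very narrow Gaussian spike at x = 0.37
    return 1.0 + np.exp(-((x - 0.37) / width) ** 2)

result, reported_err = quad(f, 0.0, 1.0)      # default sampling usually never sees the spike
true_value = 1.0 + 1e-6 * np.sqrt(np.pi)      # analytic area: background plus spike

print(f"adaptive result : {result:.8f}  (reported error {reported_err:.1e})")
print(f"true integral   : {true_value:.8f}")
```

The reported error estimate is typically orders of magnitude smaller than the actual error, because every sample point happens to see only the smooth background.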
I don’t really have a problem with this as long as it is being done as a research question with an open acknowledgement that your computation has about the same chance of being right as a dart thrown at a dartboard covered with future possible climates, blindfolded, at least without a few hundred years of computation and observation to empirically determine what, if any, predictivity the model has. I have an enormous problem with the results of these computations being used to divert hundreds of billions of dollars of global wealth into highly specific pockets under the assumptions that these computations are better than such a dart board even as they are actively diverging from the actual climate.
This is one of the things Chris did not say in his otherwise excellent video above. He indicated at the end that he thought that the sociology of GCMs had about run its course and that they would now proceed to fade away. What he didn’t point out is that the fundamental reason that they will fade away is that their use is predicated on the assumption that they can somehow beat random chance or correctly show long term behavior, and there is literally no reason at all to think that this is the case. He presented a lot of excellent reasons to think that this is not the case generically, and there are even better reasons if one examines one model at a time. His slide on the power spectrum is particularly damning. The climate models are completely incapable of exhibiting the long term trend variations observed in the real climate without CO_2 variation. They all have basically flat power spectra on intervals greater than a few hundred months when run completely unforced, but Nature manages things like the MWP and LIA and Dalton minimum and the recovery and the early twentieth century warming on timescales ranging from decades to centuries with a very non-flat power spectrum. This is clearly visible on figure 9.8a of AR5 — the climate models smooth straight over the natural variations (invariably running too hot) everywhere but the nearly monotonic increase in the reference period, which they incorrectly extend post 1998 ad infinitum.
They literally cannot do anything else as they omit the physics responsible for those variations entirely — they have basically no unforced variability on any multidecadal time scale. The real climate does. That all by itself is the end of the story as far as their ability to predict long term behavior, especially when the models are built against the reference period in such a way that makes essentially arbitrary, biased assumptions about the balance between natural and anthropogenic factors in even the short time scale dynamics in the single 15 year period in the late 20th century where it happened to substantially warm, that just happened to be the reference period.
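For readers who want to see what “flat” versus “non-flat” low-frequency power looks like, here is a small illustrative sketch comparing white noise with a strongly persistent AR(1) series (synthetic data only; no model output or observations are involved):

```python
# Synthetic illustration only: white noise has roughly flat low-frequency power,
# while a persistent ("red") AR(1) series piles power into the lowest frequencies.
import numpy as np

rng = np.random.default_rng(42)
n = 2048                                   # e.g. "months"

white = rng.standard_normal(n)
red = np.zeros(n)                          # AR(1) with strong persistence
for t in range(1, n):
    red[t] = 0.98 * red[t - 1] + rng.standard_normal()

def low_freq_ratio(x, bins=20):
    """Mean power in the lowest `bins` nonzero frequencies, relative to the overall mean."""
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return p[1:bins + 1].mean() / p[1:].mean()

print("white noise  low-frequency power ratio ~", round(low_freq_ratio(white), 1))
print("AR(1), 0.98  low-frequency power ratio ~", round(low_freq_ratio(red), 1))
```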
I also would draw watcher’s attention to Chris’s comment that back in the early 1980’s he was recruited by climate scientists to help them with their code, not to solve the climate problem, but to make their code provide the smoking gun proving that CO_2 would cause runaway warming.
If he is honestly reporting the facts, and these words or any equivalent words were used, there is really no need to go any farther. It is one thing to try to determine whether or not jelly beans cause acne. It is another to set out to prove that jelly beans cause acne. The former is good science, if a bit unlikely as an assertion. The latter is dangerous, scary science, the kind of science that leads to cherrypicking, data dredging, confirmation bias, pseudoscience, and of course plays into the hands of people who want to invest in jellybean futures as long as they can control the flow of angry papers from your research lab.
God invented double blind, placebo controlled, experiments for a good reason — because we cannot trust ourselves and our greedy pattern matching cognitive algorithms and our tendency to be open to the long con, convincingly presented. God taught us to be very skeptical of grand theoretical claims based on poorly implemented computations attempting to solve a grand challenge mathematical problem numerically, with a literally incomprehensibly coarse spatiotemporal grid given our knowledge of the Kolmogorov scale of the atmosphere, out not five minutes into the future but out fifty years into the future, unless and until those computations are empirically shown to be worth more than a bag full of darts and a blindfold.
At the moment, the bag is winning. In the past (pre-reference period) the bag wins. On longer terms, the bag is the only contender — no power is no power. There is no good reason at this very moment to think that the GCMs collectively and most of the GCMs individually are anything more than amplified, carefully tuned noise, literally white in the long term power spectrum, superimposed on a single variable monotonic function that is the only possible contributor on all long time scales.
Excuse me?
rgb

Phil R
May 7, 2014 11:09 am

Man Bearpig says:
May 7, 2014 at 4:00 am
In your city you have 10,000 parking machines, on average 500 break down every day. it takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?
Simples. None. Technicians or mechanics should be fixing the machines, not engineers. 🙂
As a side note, any engineer who designs a machine where 100% of them break down every 20 days should be fired.

Editor
May 7, 2014 11:14 am

Dr. Robert, I just had to say I truly loved this line:

… and try to infer work from this plus a knowledge of the cell temperature, plus a knowledge of the cell’s heat capacity plus fond hopes concerning the rates of phase transitions occurring inside the cell.

I laughed out loud.
Both the head post and your long comments are devastating to the models. Well done.
w.

Phil R
May 7, 2014 11:16 am

kadaka (KD Knoebel) says:
May 7, 2014 at 4:59 am
None. Engineers don’t fix parking meters, repair techs do.
Dang, should have read further.

May 7, 2014 11:25 am

Eee. When ah were young ah had it tough…
http://www.youtube.com/watch?v=VKHFZBUTA4k

May 7, 2014 11:33 am

rgb,
Phew, the last time I did computer programming was as a Junior at university in 1970. The university had a CDC 6400 (Control Data Corp) with a whole bunch of IBM 360s as peripherals. I did my programming projects on punch cards using Fortran IV (if I recall correctly).
Humorous memory => There was a terminal in my dormitory basement. There was a huge printer at the terminal that made loud sounds as it printed; I mean it was like 8 ft wide, 5 ft tall and 6 ft deep. Late at night some nerds would run programs that made the printer mimic music as it was printing. One of my friends would make the printer mimic the song ‘Anchors Away’ as it printed gibberish. : ) I also liked another nerd who would make the printer mimic ‘Whiter Shade of Pale’.
John

David Merillat
May 7, 2014 11:34 am

Nick Stokes: “It is in effect random weather but still responding to climate forcings.”
That is, still responding to climate forcings based on the assumptions of the model authors, since most of those forcings don’t apply on the short timescales used by NWPs, and thus can’t be validated by the success of NWPs.
Nick Stokes: “GCMs are NWPs run beyond their predictive range.”
That would be the problem, now, wouldn’t it? The only way to validate a GCM is long-term observation (against a fixed or pre-calculated GCM). As McIntyre has pointed out, that hasn’t worked out so well. The IPCC has to keep revising the old model results to keep the temperature observations within the “envelope” of the model spaghetti-graph.

David A
May 7, 2014 11:35 am

Man Bearpig says:
May 7, 2014 at 4:00 am
In your city you have 10,000 parking machines, on average 500 break down every day. it takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?
========================================================
One good one, to build a better meter.
However, perhaps your point is that the question does not provide nearly the needed data. (Just like in Climate Science.) Much was said already but, among much other missing info: how many hours do you want the repair crews working? Do they take weekends off?
All production issues come down to the four “P”s of production. If there is a production problem, it is in one of those four categories. They are “Paint” (the job itself), “Paint brush” (the tools to do the job), “Picture” (the blueprint, work order, etc. showing what needs to be done), and the fourth P, “People” (the personnel doing the job).
Your picture neglected to point out start and complete times for doing your 400 hours of work.
On every work order, the available time and the complete-by time are critical.

richardscourtney
May 7, 2014 12:10 pm

Dr Brown:
Your excellent post upgraded to the above article together with your comments in this thread have generated what I think to be the best thread ever on WUWT.
The thread contains much serious information and debate which deserves study by everyone who has reason to discern what climate science can and cannot indicate. Thank you.
Richard

RACookPE1978
Editor
May 7, 2014 12:34 pm

rgbatduke says:
May 7, 2014 at 11:01 am (replying to an earlier quote )

It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times

…..
You can’t take a function with nontrivial structure all the way down to the millimeter length scale and try to forward integrate it on a 100 kilometer grid and pretend that none of the nontrivial structure inside of the 100 kilometers matters.

Hmmmmn.
So, for the “physics” to work reliably (for more than one iteration, that is) across a GCM grid (er, cube), a pressure signal travelling at the speed of sound must not cross more than one cube in a single timestep. OK. Let us assume that physical requirement is correct.
Now, the atmosphere is only “one variable” in the GCM of course, therefore, the MAXIMUM cube must be less than the height of the atmosphere – and that only IF that atmosphere were to be simplified and modeled as even a single cube only “one atmosphere” high, right?
So, from sea level to “top of atmosphere” is ???
Let us first try to define what sea level to “bottom of atmosphere” needs to be. Then, maybe one can go from bottom of atmosphere to top of atmosphere later.
Well, you can define all sorts of criteria: Clearly, the multi-million square kilometers of the Gobi desert have “weather” and that “weather” affects tens of thousands of kilometers around the Gobi, so at a minimum one would need the smallest cube to allow for “land” (heat and radiation balances to land) varying from 0.0 meters to the Gobi’s 0.900 km to 1.500 km high. So the LOWEST cube size MUST be smaller than 1.0 km, if the world is to be modeled from sea level to bottom of atmosphere. Central Greenland, central Antarctica, and very large other areas of the earth are higher than 1,000 meters (1.000 km): thus one would at a minimum need “land” cubes no LARGER than 1.0 km vertically to even approximate ground level winds. Or do they assume “sea level” ground everywhere when the models are “run” continuously for thousands of cycles to “set initial conditions” as Nick Stokes requires they be?
The stratosphere is commonly defined to start past 50,000-56,000 ft.
The troposphere (from Wiki) is:

The troposphere is one of the lowest layers of the Earth’s atmosphere; it is located right above the planetary boundary layer, and is the layer in which most weather phenomena take place. The troposphere extends upwards from right above the boundary layer, and ranges in height from an average of 9 km (5.6 mi; 30,000 ft) at the poles, to 17 km (11 mi; 56,000 ft) at the Equator.

SO, does a GCM need to include the stratosphere? Again, from the Wiki

At moderate latitudes the stratosphere is situated between about 10–13 km (33,000–43,000 ft; 6.2–8.1 mi) and 50 km (160,000 ft; 31 mi) altitude above the surface, while at the poles it starts at about 8 km (26,000 ft; 5.0 mi) altitude, and near the equator it may start at altitudes as high as 18 km (59,000 ft; 11 mi).

Hmmn. Sounds like not only does the “ground level” of GCM need to include a function (presence or absence of a single ground level cube) varying to require a cube no larger than 1.0 km to include the “land” that is being radiated by the sun, but the “total cube” arrangement across a spherical earth that RGB discusses in his first paragraphs above MUST ALSO include the changes in absolute atmosphere thicknesses that vary from the equator to the pole.
If so, then the MINIMUM “cube stack” of these 1 km cubes needs to be at least 50 cubes high to include all of the “atmosphere” that is being studied.
Now, could one simplify the atmospheric changes by “allowing” the arithmetic to reset the pressures and humidities and temperatures everywhere in every cube as part of Nick Stokes’ “runs”?
Sure. Then, before ANY changes in the forcing functions, one would first need to verify that every model in every one of its cubes DID duplicate the earth’s initial conditions, right?
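Just to put a number on the cube count implied by those assumptions (1 km columns, roughly 50 levels), a quick illustrative tally:

```python
# Cube-count tally under the assumptions sketched above (1 km x 1 km columns,
# ~50 vertical levels). Illustrative arithmetic only.
import math

earth_surface_km2 = 4 * math.pi * 6371.0 ** 2   # ~5.1e8 km^2
columns = earth_surface_km2 / 1.0               # one column per square km
levels = 50
cells = columns * levels

print(f"surface columns  : {columns:.2e}")
print(f"total 1 km cubes : {cells:.2e}")
```

That is on the order of 2.5×10^10 cells, roughly four orders of magnitude more than the few million grid points quoted for current models further down the thread.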

Gary Hladik
May 7, 2014 12:45 pm

rgbatduke says (May 7, 2014 at 8:42 am): “In my opinion — which one is obviously free to reject or criticize as you see fit — using a lat/long grid in climate science, as appears to be pretty much universally done, is a critical mistake, one that is preventing a proper, rescalable, dynamically adaptive climate model from being built. There are unbiased, rescalable tessellations of the sphere — triangular ones, or my personal favorite/suggestion, the icosahedral tessellation.”
Having played in my youth some games that require 20-sided dice, upon reading of the absurdity of the lat/long grid in GCMs, I immediately wondered if similar tessellations could be used in the models (that was in fact about the only point I actually understood in the original article and subsequent discussion).
And my wife said I was wasting my time! Ha! 🙂
(BTW I still have the dice.)

Billy Liar
May 7, 2014 12:46 pm

Nick Stokes says:
May 7, 2014 at 6:11 am
Hollywood fake physics – no explanation required. If it was paired up with some satellite data with temperature similarly codified it might have got my attention.

kadaka (KD Knoebel)
May 7, 2014 12:59 pm

From commieBob on May 7, 2014 at 9:14 am:

In England the guy who fixes something is called an engineer.

And the UK’s government is a constitutional monarchy, with the monarch supposedly having no real power, but can “kick out all the bums” and dissolve Parliament. We have an elected President who just ignores Congress and writes his own laws.
Over there they also refer to a directed safe light source using batteries and a bulb by the same term as a fire on a stick used for general illumination and igniting flammable substances like rubbish piles and witches.
Offhand I’d think the operator of a modern diesel locomotive would be surprised they’re expected to fix those beasts as well. There are likely union rules forbidding it. IPCC head Rajendra K. Pachauri doesn’t seem the kind to have practiced repair work while he practiced railway engineering; he’d only have industrial lubricants covering his hands if he was researching for writing a novel.
We can’t expect to bring England up to modern standards overnight, but perhaps we can teach them the proper use of at least one word.

F. Ross
May 7, 2014 1:14 pm

Thank you Dr. Brown. Excellent exposition.
Your posts are always a breath of fresh air, even though one sometimes gets a bit dizzy breathing that fresh air.

Ian W
May 7, 2014 1:29 pm

The climate models are extremely successful.
They have fully met the functional requirements of their funding sponsors.
They were required to produce evidence of a ‘climate catastrophe’ should anthropogenic carbon dioxide emissions continue at even a low rate. At higher rates of emissions there would be unstoppable catastrophes and runaway warming. The models were meant to be impenetrably complex (as described by RGB) and poorly documented and, from a software industry viewpoint, to show no quality control and to be impossible to maintain, with no sensible structure. This lack of quality management ensures that external groups cannot audit the software. This meets the same requirement as not allowing external groups to audit the data, or even the emails about the systems and data.
The world has moved on now – the funding sponsors of the models are (incorrectly) using ensembles of the model output forecasts/predictions/projections to justify: closure of generation plant and entire industries; increases in taxation; and more political power for the sponsors. Thus validating the functional requirement of the models.

george e smith
May 7, 2014 1:39 pm

Well I was not able to immediately discern the gist of Prof rgb’s post; but I gather he says coding and reading other people’s code is very difficult. I would agree with that and I wouldn’t try to unravel someone else’s code.
I’m sure some people can, including WUWT posters. One reason, I don’t try, is that I’m more interested in the specific equations being solved, than I am in the code that might do that. But I understand, that lots of problems, you just can’t churn out a closed form (rigorous) solution for, so some sort of “finite element” like analysis, with small steps of arithmetical solutions need to be done, in some iterative process.
I do that sort of thing myself in a much simplified form, to develop some graphable form of output, in, say, an Excel spreadsheet.
But I always worry about the issue that Dr. Spencer raised, in that you might overstep some bounds of physics , when doing that.
For people who do the same computation repeatedly, with different data, then obviously a coded computer routine simplifies that process (once you have the code, and debugged it) To which one might add; “and properly commented and annotated it so you yourself can follow it.”
Even with the much simpler systems, I deal with, I sometimes find a “neat trick” (well I think so) while I’m working on it.
I’d like a dollar, for each time I have come back, and then asked; “why the hell did I do this step here ?”
Think about that short term memory fog, when contemplating conversations with ET even just a few light years away.
I can only conclude, when I see planet earth refusing to follow all 57 varieties of GCM; or even one of them; that somehow they are ignoring all the butterflies that there are in Brazil.
If in a chaotic system, a small perturbation pops up unannounced; a “butterfly”, or it could be a Pinatubo, how the hell can any model allow for such events, which we know throw the system for a loop ( minor maybe) but it has to move off on a different path, from the one that it was on before the glitch.
I’ve watched those compound coupled pendulum demos, in “science” museums, and it always amazes me that minor perturbations in the start conditions, can morph into such havoc.
So how the hell do we expect to model systems with unpredictable butterflies ??
But I’m glad that Dr. Roy, and Prof Christy are on top of that; and that Prof rgb is advising students on such follies. Izzat Red, Green, Blue ??

Txomin
May 7, 2014 1:54 pm

There being so few exceptions, it can be safely said that science is always messy. It is the reason why all data/methodology must be publicly shared so that it can be contrasted, reevaluated, etc. It is also the reason why CW is a political and not a scientific issue for most people, no matter how much they yell.

Martin A
May 7, 2014 2:13 pm

“Climate models have a sound physical basis and mature, domain-specific software development processes” (“Engineering the Software for Understanding Climate Change”, Steve M. Easterbrook and Timothy C. Johns)

Editor
May 7, 2014 2:34 pm

Nick Stokes (May 7 5:02am) says “Numerical Weather Prediction is big. It has been around for forty or so years, and has high-stakes uses. Performance is heavily scrutinised, and it works. It would be unwise for GCM writers to get too far away from their structure. We know there is a core of dynamics that works.“.
The weather models do a pretty good job, on the whole, at predicting weather up to a few days ahead. Weeks ahead, not much use. Months ahead, no chance. Years ahead, don’t be daft. But for climate, we need years ahead. Mathematically, a model that operates on the interactions between little time-slices of little notional boxes in the atmosphere must quickly and exponentially diverge from reality, so it cannot possibly give us years ahead. That means that the weather model structure is useless for climate. Instead of weather models we need climate models. We don’t have any.

Nick Stokes
May 7, 2014 2:35 pm

RACookPE1978 says: May 7, 2014 at 12:34 pm
“So, the maximum grid size for “physics” to work reliably (for more than one iteration that is) across a GCM grid (er, cube) requires the cube be SMALLER than the speed of sound (a pressure signal) to cross the smallest distance across the cube”

I think you are on to something there. But it is an old problem, and long resolved.
As I’ve said, there is an acoustic wave equation embedded in the N-S equations (as there must be). And resolving sound waves is the key numerical problem in N-S solution. Traditionally, a semi-implicit formulation was used. The Pressure Poisson Equation that is central to those is the implicit solution of the acoustics.
But GCMs are explicit in the horizontal directions. Then they use an implicit method in the vertical, which is hydrostatic balance. That is, only the balance
0 = -(1/ρ) ∂p/∂z - g
of the momentum equation is retained in the vertical. Since the acceleration has gone, the wave equation is disabled. It works because there is very little large-scale vertical acceleration in the atmosphere.
As with any stepwise method, that doesn’t mean that everything else is assumed zero. You have to define a process which gets the main effects right. Then you can catch up, with iterations if necessary. Of course, there are situations where vertical acceleration is important – thunderstorms etc. So they have vertical updraft models. The point of the splitting is to solve for the physics you want without letting the N-S introduce spurious sound waves.
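As a concrete picture of what retaining only hydrostatic balance in the vertical means, here is a minimal sketch integrating that balance for an isothermal column; the temperature and layer thickness are illustrative choices, not anything a GCM actually uses in detail:

```python
# Minimal hydrostatic column: with vertical acceleration dropped, the balance
# 0 = -(1/rho) dp/dz - g together with the ideal gas law rho = p / (R_d * T)
# fixes the pressure profile. Isothermal T and the layer thickness are
# illustrative simplifications only.

g, R_d, T = 9.81, 287.0, 250.0     # gravity (m/s^2), dry-air gas constant, temperature (K)
dz = 100.0                         # 100 m layers
p = 101325.0                       # surface pressure (Pa)

for z in range(0, 20001, int(dz)):
    if z % 5000 == 0:
        print(f"z = {z / 1000:4.0f} km   p = {p / 100:7.1f} hPa")
    rho = p / (R_d * T)            # ideal gas law
    p -= rho * g * dz              # dp = -rho * g * dz
```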

Nick Stokes
May 7, 2014 2:59 pm

Mike Jonas says: May 7, 2014 at 2:34 pm
“But for climate, we need years ahead.”

You don’t need a weather prediction years ahead. Climate models won’t tell you about rain on May 7, 2050. But they will tell you quite a lot about that day. It won’t be 100°C, for example. They are getting something right. And given what some here say about the catastrophe of N-S solution, that is an achievement.
But that is the key point I’ve been making. They are NWP programs working in a mode where they can’t predict weather. They generate random weather. But because all the physics of NWP is in there, the random weather responds correctly to changes in forcing, so you can see by simulation how average climate changes, even though the weather isn’t right.

May 7, 2014 3:06 pm

george e smith says:
May 7, 2014 at 1:39 pm
…..I always worry about the issue that Dr. Spencer raised, …that somehow they are ignoring all the butterflies that there are in Brazil….
Regional temperature is a weasel; the so-called global is Mosher’s unicorn.
If one of Dr. Spencer’s aliens landed in good old England (as it is often reported by some of the local papers), in the later half of 2012, observing the CET for about four months, he/she/it would justifiably conclude that the CET is directly controlled by the solar rotation with the daily sunspot numbers moving the daily maximum temperature by about 4 to 5 C pp.
Here are the observational data (‘the true arbiter’- Feynman). http://www.vukcevic.talktalk.net/SSN-CETd.htm
Butterflies, weasels, unicorns and aliens, regretfully are not adequately represented in the climate models.

May 7, 2014 3:10 pm

I again thank rgb for stimulating this discussion.
A root cause analysis integrated with a case study would be useful on what turned out to be an inappropriate priority and an unreasonable expectation placed on GCMs (and later ECIMs) by academia, gov’t science bodies and scientific societies, including why the IPCC was encouraged to focus on GCMs.
I think there is need to explain it in the context of science; with externalities like politics, environmental ideology and economics treated as imports / exports from the science aspect.
John

May 7, 2014 3:11 pm

How far into the future can Global Climate Models predict?

Lloyd Martin Hendaye
May 7, 2014 3:15 pm

Recall Goldbach’s Conjecture, stated 1742: “Every even integer greater than 2 can be expressed as the sum of two primes.”
The story goes that preeminent mathematician Mark Kac once received a lengthy proof of this hypothesis, brimming with “approbation and testimonial plaudits”. After some few minutes, Kac scribbled his reply: “Dear Sir: Your first error, which invalidates all subsequent deductions, occurs in Paragraph 3, Page 1.”
In logic, this is termed a Fallacy of the First Order, meaning that an invalid premise necessarily negates all reasoning that follows. Leaping from crag-to-crag in pursuit of grant monies, deviant Warmists accordingly prefer never to examine GCMs’ primary assumptions. But anyone not wracked with organic brain disease will readily see this tripe for what it is, and act accordingly.

rgbatduke
May 7, 2014 3:31 pm

But that is the key point I’ve been making. They are NWP programs working in a mode where they can’t predict weather. They generate random weather. But because all the physics of NWP is in there, the random weather responds correctly to changes in forcing, so you can see by simulation how average climate changes, even though the weather isn’t right.
Except for the fact that when run with no forcings at all, they completely fail to reproduce the actual variability of the climate on any decadal or longer timescale. If you run them with the CO_2 forcing, they only reproduce the observed climate variation within the reference (training) period. This is not impressive evidence that the models are getting the climate right.
One is tempted to claim that it is direct evidence that the models are not getting the split between natural variability and CO_2 forcing right, greatly underestimating the [former] and greatly overestimating the latter. But even that cannot really be said with any real confidence.
rgb

RC Saumarez
May 7, 2014 4:39 pm

Thank you Professor Brown for this.
It strikes a chord with my work, which is, by GCM standards, trivial: the understanding of how lethal arrhythmias arise in the heart.
Mathematical models of cardiac activation depend on coupled PDEs, which describe current flow within the heart, and ODEs, which describe the nature of current sources within cells. The latter depend exquisitely on parameterisation – at least 53 coupled ODEs that depend on parameters measured under very artificial conditions.
When I was told by a professor of mathematics at a major institution “If that is what the equations say, that is what is happening”, I came to a conclusion similar to your own.
The upshot of “Mathematical Cardiac Physiology” is that there are supposed “spiral waves” of activation in ventricular fibrillation. This has found its way into major textbooks (including one from MIT). When presenting data that refutes this, I have been told that my data must be wrong because they do not conform to current models!
The problem is that nobody has ever demonstrated the presence of “spiral waves” in animal preparations, and such data as exist from intact human hearts do not support their existence.
My conclusion is simply that mathematical modellers should perform experiments to test their assertions. It is, after all, the scientific method!

Alex Hamilton
May 7, 2014 4:58 pm

[SNIP – DOUG COTTON, STILL BANNED ]

MaxLD
May 7, 2014 5:12 pm

Nick Stokes says:
May 7, 2014 at 5:18 am
GCMs generally don’t use finite difference methods for the key dynamics of horizontal pressure-velocity. They use spectral methods…
I believe the US GFS model uses a spectral method. The Canadian Global Environmental Multiscale Model (GEM) spatial discretization is a Galerkin (finite element) grid-point formulation in the horizontal. Finite element is different from finite difference. I have programmed both methods in various models.
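For readers unfamiliar with the distinction, here is a tiny sketch contrasting a spectral (FFT-based) derivative with a second-order finite difference on a periodic toy field; it is purely illustrative and not a fragment of GFS, GEM, or any other operational model:

```python
# Toy comparison of a spectral (FFT) derivative with a second-order centred
# finite difference on a smooth periodic field. Illustrative only.
import numpy as np

n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = np.sin(3 * x)
exact = 3 * np.cos(3 * x)

# spectral derivative: multiply Fourier coefficients by i*k
k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)          # integer wavenumbers 0..n/2
spectral = np.fft.irfft(1j * k * np.fft.rfft(f), n)

# second-order centred finite difference on the same periodic grid
finite = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

print("max error, spectral          :", np.abs(spectral - exact).max())
print("max error, finite difference :", np.abs(finite - exact).max())
```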

ROM
May 7, 2014 5:12 pm

Even at a pure layman’s level such as myself, this is one of the best posts I have seen on WUWT, particularly the first third of the posts but most decidedly not at all decrying a number of other really excellent and very readable, lay man language and technically orientated highly explanatory posts throughout the whole list.
And the hoi polloi like myself have kept our heads down and our many inane comments, like this one, right out of it, to the immense benefit of a significant increase in our knowledge of just how seriously bad the model-driven climate science is that underlies the colossal waste and destruction of now close to a trillion dollars worth of global treasure, the winding down of once vigorous and viable national economies, and the completely avoidable deaths of tens of thousands through energy deprivation created by the inflated and increasingly unaffordable cost of energy, the heat-or-eat syndrome.
We see once again that the claims made by the climate modellers are being made in the full knowledge that they are being thoroughly and publicly untruthful in making their quite spurious claims about the supposed unchallengeable veracity and predictive capabilities of these same climate models.

Nick Stokes
May 7, 2014 5:12 pm

RC Saumarez says: May 7, 2014 at 4:39 pm
“My conclusion is simply that mathematical modellers should perform experiments to test their assertions. It is, after all, the scientific method!”

Numerical weather prediction programs undergo a stringent experimental test every day.
rgbatduke says: May 7, 2014 at 3:31 pm
“Except for the fact that when run with no forcings at all, they completely fail to reproduce the actual variability of the climate on any decadal or longer timescale.”

Completely fail? I’d like to see that quantified.

May 7, 2014 5:31 pm

May 7, 2014 at 5:50 am | Nick Stokes says:

It’s like you can make a bell, with all the right sound quality. But you can’t predict when it will ring.

Inasmuch as I thoroughly enjoy Dr Brown’s writing to the point of actively seeking out his work, so in some perverse way do I enjoy Nick Stokes’ brilliant contortions. Scientifically speaking, I’m not even in the same universe as these two gentlemen … but I’d like to help Nick out here with my layman science:
“It’s like you can make a bell, with all the right notes and sound quality. But you can’t predict when it will ring and what the note will be.”

davidmhoffer
May 7, 2014 5:37 pm

Wow what a thread! Devastating commentary, RGB.
Willis;
As I was reading the GISS model code, I noticed that at the end of each time step they sweep up any excess or deficit of energy, and they sprinkle it evenly over the whole globe to keep the balance correct.
This is new info for me, and I am…. gobsmacked? thunderstruck? need-to-invent-a-new-word? Wow. They’re taking calculations they KNOW to be wrong and averaging them out across a system that is highly variable, cyclical, and comprised of parameters that do not vary linearly. Wow, just wow.

North of 43 and south of 44
May 7, 2014 5:37 pm

Nick Stokes says:
May 7, 2014 at 7:00 am
Frank K. says: May 7, 2014 at 6:35 am
“Really?? So the GCMs are limited to Courant < 1.”
Usually run at less. Here is WMO explaining:
” For example, a model with a 100 km horizontal resolution and 20 vertical levels, would typically use a time-step of 10–20 minutes. A one-year simulation with this configuration would need to process the data for each of the 2.5 million grid points more than 27 000 times – hence the necessity for supercomputers. In fact it can take several months just to complete a 50 year projection.”
At 334 m/s, 100 km corresponds to 5 minutes. They can’t push too close. Of course, implementation is usually spectral, but the basic limitation is there.
_______________________________________________________________
And how do they handle the format conversion, truncation, and rounding issues in such a number of calculations?
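For readers wondering what that kind of rounding drift can look like over millions of operations, here is a minimal illustrative sketch (my own toy example, not anything taken from a GCM) comparing naive single-precision accumulation with compensated (Kahan) summation:

```python
import numpy as np

def naive_sum(x, n):
    """Accumulate a constant increment n times in single precision."""
    total = np.float32(0.0)
    inc = np.float32(x)
    for _ in range(n):
        total += inc
    return float(total)

def kahan_sum(x, n):
    """Compensated summation: carry the rounding error forward explicitly."""
    total = np.float32(0.0)
    comp = np.float32(0.0)   # running compensation for lost low-order bits
    inc = np.float32(x)
    for _ in range(n):
        y = inc - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return float(total)

# Exact answer is 100000.0; the naive float32 sum drifts visibly away from it,
# while the compensated sum stays essentially exact.
print(naive_sum(0.1, 1_000_000), kahan_sum(0.1, 1_000_000))
```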

MaxLD
May 7, 2014 5:38 pm

Willis Eschenbach says:
May 7, 2014 at 10:47 am
I had an online discussion about ten years ago with Gavin Schmidt on the lack of energy conservation in the GISS model. What happened was, as I was reading the GISS model code I noticed that at the end of each time step, they sweep up any excess or deficit of energy,…
I pretty much grew up with the Numerical Weather Prediction (NWP) models. We saw them evolve from the old barotropic model, which was basically an advection model. Energy conservation has always been a problem and I believe it still is (I can be corrected on that if some magical procedure has been found). A lot of it arises because the equations have to be solved on a discontinuous grid. I remember when there would be constant updates to the models and then new versions would come out. One time I remember well is when they went down to a much smaller grid, about 30 km at the time I believe. This started to generate all sorts of spurious waves and noise, so much so that we could not even use the model. So then they had to introduce more smoothing to get rid of the noise. As far as I know there is still smoothing in all the NWP models, otherwise they would blow up fairly quickly.

May 7, 2014 5:38 pm

RC Saumarez says: May 7, 2014 at 4:39 pm
“My conclusion is simply that mathematical modellers should perform experiments to test their assertions. It is after all the scientific method!”
May 7, 2014 at 5:12 pm | Nick Stokes says:
Numerical weather prediction programs undergo a stringent experimental test every day.
—–
Nick, you’d know this … what was the name of that cyclone a few weeks back in north Queensland that the BoM predicted to penetrate hundreds of km inland and cause untold damage … only to wimp out on landfall? Many amateurs correctly commented on its impending demise as it had little energy to feed on.

F. Ross
May 7, 2014 5:39 pm

Moderators. Not really important but wondering if a post I made earlier today got sent to the
“bit bucket”?
Thanks.
[nothing in the spam filter – may have simply been an error in submission – it happens -mod]

F. Ross
May 7, 2014 5:41 pm

Found it; please forget and forgive my 5:39 pm post.

Nick Stokes
May 7, 2014 6:03 pm

Willis Eschenbach says: May 7, 2014 at 10:47 am
“If we simply sprinkle it evenly over the globe, it sets up an entirely fictitious flow of energy equatorwards from the poles. And yes, generally the numbers would be small … but that’s the problem with iterative models. Each step is built on the last. As a result, it is not only quite possible but quite common for a small error to accumulate over time in an iterative model.”

That’s a complete misunderstanding. You never give numbers. How small? Too small to notice. That’s the point.
This process sets up a negative feedback for a particular type of potentially destructive oscillation. As I mentioned above, the big numerical issue with explicit Navier-Stokes is sound waves. What limits the timestep is nominally a Courant condition, but the arithmetic actually reflects the numerical treatment of sound waves of wavelength comparable to grid size. Sound waves should be propagated with conservation of energy, but if inadequately resolved, they can go into exponentially growing modes, from an infinitesimal beginning. You can detect this by a breach of conservation of energy.
The process Gavin describes damps this at the outset, automatically. It takes the energy out of the mode. The reason he couldn’t give you a number is that it is intended to damp the effect before you can notice. And the amount of energy redistributed is negligible.
It’s like stabilising your sound amplifier with negative feedback. You’re feeding back amplified signal from the output to the input. That can’t be right. Gobsmacked?
No, because the instability modes that the feedback damps never grow. So you are not destroying your fine amplifier with spurious output signal.
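For concreteness, here is a toy sketch (invented for illustration, not actual GISS code) of what such an end-of-step conservation fix might look like: measure the global residual and spread it uniformly, area-weighted, over the grid.

```python
import numpy as np

def rebalance_energy(energy, area, expected_total):
    """Toy end-of-step conservation fix (hypothetical, not any real GCM):
    measure the global residual and spread it uniformly, area-weighted."""
    residual = expected_total - np.sum(energy * area)   # imbalance this step
    energy = energy + residual / np.sum(area)           # same addition everywhere
    return energy, residual

# Hypothetical 10 x 20 grid with made-up cell areas and energy densities
rng = np.random.default_rng(0)
area = rng.uniform(0.5, 1.5, size=(10, 20))
energy = rng.uniform(90.0, 110.0, size=(10, 20))
target = np.sum(energy * area) + 0.37    # pretend the step "leaked" 0.37 units

energy, residual = rebalance_energy(energy, area, target)
print(residual, np.sum(energy * area) - target)   # residual known; balance now ~0
```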

ferd berple
May 7, 2014 6:22 pm

There are unbiased, rescalable tessellations of the sphere — triangular ones
===========
Triangular tessellation is the obvious choice. Lat/long cells at the poles form triangles, and rectangles can always be divided into triangles. However, even this approach quickly runs into trouble, as we discovered years ago when writing code to “unfold” complex 3D objects into 2D patterns.
The unfolding process itself is rather simple. You cover an object in triangles, then solve using Pythagoras, and lay all the resulting shapes out on a flat sheet. Using a fast computer you shrink the triangles until, even though the surface is curved, their edges are very close to straight lines, and lay these out on the flat sheet.
And when you cut out the resulting flat sheet and try to rebuild your 3D object, you end up with a hopeless mess that looks very little like the original.
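A back-of-the-envelope sketch (my own, not the unfolding code described above) of why the rebuild fails: the triangle angles meeting at each vertex of a curved surface never quite add up to 360 degrees, and the total shortfall cannot be refined away.

```python
# At a vertex of a triangulated curved surface, the triangle angles meeting
# there fall short of 360 degrees. For an icosahedral approximation of a
# sphere, five equilateral triangles meet at each of the 12 vertices.
angles_at_vertex = [60.0] * 5
defect = 360.0 - sum(angles_at_vertex)
print(defect)          # 60 degrees of "missing" angle per vertex
print(12 * defect)     # 720 degrees total (Descartes' theorem)

# Laid flat, each vertex leaves a gap. Refining the mesh shrinks the defect
# per vertex but multiplies the vertex count, and the 720-degree total never
# goes away -- which is why the flattened pattern will not fold back into the
# original surface exactly.
```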

Nick Stokes
May 7, 2014 6:25 pm

Streetcred says: May 7, 2014 at 5:38 pm
“Nick, you’d know this … what was the name of that cyclone a few weeks back in north Queensland that the BoM predicted to penetrate 100’s km’s inland and cause untold damage”

Ita. I don’t think BoM did predict that. What they did predict, when it was 700 km offshore, was that it would come ashore near Cooktown. Which it duly did. Not bad for a program which, we’re told, will fall in a heap because of insoluble Navier-Stokes.

rogerknights
May 7, 2014 6:40 pm

Willis Eschenbach says:
May 7, 2014 at 11:14 am
Dr. Robert, I just had to say I truly loved this line:
“… and try to infer work from this plus a knowledge of the cell temperature, plus a knowledge of the cell’s heat capacity plus fond hopes concerning the rates of phase transitions occurring inside the cell.”
I laughed out loud.
Both the head post and your long comments are devastating to the models. Well done.

This is why a sister site to WUWT, or a reference tab, should contain a “Best of WUWT” selection. For certain authors all their threads and/or comments (for rgb) would be included.

george e. smith
May 7, 2014 7:14 pm

“””””…..Lloyd Martin Hendaye says:
May 7, 2014 at 3:15 pm
Recall Goldbach’s Conjecture, stated 1742: “Every even integer greater than 2 can be expressed as the sum of two primes.”…..”””””
So what did (2) do to be excluded?? Is it not the sum of two integers, whose only factors are (1) and (itself) ??
(2) is an even integer that is the sum of two primes; (1) and (1).
Well except in common core math, where (1) +(1) might be (3) , so long as you use the right process to arrive at that answer.

ferd berple
May 7, 2014 7:25 pm

Nick Stokes says:
May 7, 2014 at 6:03 pm
And the amount of energy redistributed is negligible.
============
That is an assumption. You cannot say that it has no effect, because no one knows. Even the IPCC mentions the butterfly effect.
I’m reminded of a kinetic gas model we worked on. All it took was the smallest energy leak to turn an isothermal atmosphere in the absence of GHG into an atmosphere with a temperature gradient.

Gary Palmgren
May 7, 2014 7:27 pm

I know Willis Eschenbach has mentioned in several posts that he can duplicate the output of the GCM’s by a linear response to the forcings plus a lag such as here:
http://wattsupwiththat.com/2013/06/25/the-thousand-year-model/
“all that the climate models do to forecast the global average surface temperature is to lag and resize the forcing.”
If Willis is correct, then I can build a simulation of the entire climate as described in the computer models in the lab. All I need is a hot plate, a thermometer, an ice bath and a brick. I put one end of the brick in the ice bath and heat the other end with the hot plate. I put the thermometer in a hole in the middle of the brick. When I make a step change to the setting of the hot plate, there will be a lag for the thermometer to respond due to the limited thermal conductivity of the brick. The fixed temperature at the cold end will make sure that the temperature response is linear.
Thus the climate models treat the entire atmosphere, solar heating, oceans, evaporation, rain, snow, clouds, storms, etc., the same as a solid brick.
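If Willis’s description is right, the “brick” can be written down in a few lines. Here is a minimal sketch of such a lag-and-resize emulator (hypothetical parameter values, my own code, not any actual GCM or Willis’s exact script):

```python
import numpy as np

def one_box_emulator(forcing, lam=0.5, tau=30.0):
    """Minimal 'lag and resize the forcing' emulator (illustrative only).
    lam: assumed sensitivity in K per (W/m^2); tau: assumed lag in steps."""
    temp = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        equilibrium = lam * forcing[i]                          # where T would settle
        temp[i] = temp[i - 1] + (equilibrium - temp[i - 1]) / tau  # relax toward it
    return temp

# Hypothetical forcing: a slow ramp plus a brief volcanic dip around year 80
years = np.arange(150)
forcing = 0.03 * years - 2.0 * np.exp(-0.5 * (years - 80) ** 2)
print(one_box_emulator(forcing)[::25])   # a lagged, resized copy of the forcing
```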

davidmhoffer
May 7, 2014 7:33 pm

Nick Stokes;
The reason he couldn’t give you a number is that it is intended to damp the effect before you can notice. And the amount of energy redistributed is negligible.
>>>>>>>>>>>>>>>>>>>
Read those two sentences again Nick.
Gobsmacked.

Nick Stokes
May 7, 2014 7:44 pm

davidmhoffer says: May 7, 2014 at 7:33 pm
“Gobsmacked.”

OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
“What did you do?”
“O, I sent filtered feedback from output to input. Only RF, no audio”
“Good. How much energy are you sending back?”
“Well, I can’t notice any RF at all?”
“Gobsmacked. How can it work if you feed back no energy?”
“Do you want me to take it out?”

davidmhoffer
May 7, 2014 8:20 pm

Nick Stokes;
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
>>>>>>>>>>>>>>>>>
1. Negative feedback in an amplifier does change the sound. Transient distortion in particular rather than harmonic. That amplifiers are now good enough that the human ear cannot for the most part detect the distortion by no means suggests that it doesn’t exist, nor that it isn’t significant under certain conditions.
2. Your analogy has nothing to do with the physics at hand. Perhaps I misunderstood the explanation, but if I didn’t, the bottom line is pretty simple. The models are calculating a number known to be wrong, and spreading it out across the planet surface on the assumption that the error is insignificant. Since you don’t know WHY the number is wrong (if you did, the code would be corrected to fix it instead of using some bizarre method of damping it out) you ALSO don’t know by how much. You don’t know, for example, whether correcting it would reveal still another error that is being masked by the first one. You may even find out that there is a second problem that is bigger than the first but of opposite sign. It gets worse from there. Your damping method might be fine. But since you don’t know WHY the original calc is wrong in the first place, for all you know the fix is introducing cumulative error larger than the error you were trying to fix in the first place.
The models are increasingly running warmer than reality; even the IPCC has grudgingly admitted that. Give your head a shake. The models are wrong, nobody knows exactly why, and here you are defending the practice of handling energy balance in a completely unrealistic fashion. No wonder the models don’t work well when they are underpinned by reasoning such as yours.

Richard Petschauer
May 7, 2014 8:55 pm

So we drop the GCMs and improve the global energy balance models, concentrating on the role of CO2 and the feedbacks, supported with spectral absorption tools, and calibrate the model with data such as how evaporation and absolute surface humidity vary with temperature. As an early example see http://wattsupwiththat.com/2014/04/15/major-errors-apparent-in-climate-model-evaporation-estimates/

Nick Stokes
May 7, 2014 8:59 pm

davidmhoffer says: May 7, 2014 at 8:20 pm
“Negative feedback in an amplifier does change the sound.”

No, the point here is that the oscillation is RF – radio frequency. You can feed that component back, filtering out audio, and it won’t change the sound. In fact it usually won’t do anything, because in normal operation there is no RF.
Same here. The analogue is relatively high frequency sound (wavelength 100 km, say) that is distorted by poor resolution, and can create a growing mode (like the RF). It shouldn’t be there, or at least, you can do without it. If one starts, its spurious energy will show up as a discrepancy and be dissipated. How much? As with the RF, if it works, you won’t see any. I’m sure they flag and deal with any substantial energy discrepancy.

davidmhoffer
May 7, 2014 9:16 pm

Nick Stokes;
No, the point here is that the oscillation is RF – radio frequency. You can feed that component back, filtering out audio, and it won’t change the sound. In fact it usually won’t do anything, because in normal operation there is no RF.
>>>>>>>>>>>>>>>.
In this scenario you are taking two signals, both from the REAL world, and using a technique to eliminate ONE of them. The technique ABSOLUTELY affects the other one. Which is beside the point.
There is no “real signal” in a model. You’ve got an artificial signal generated by the model itself. So you’ve got bad math creating a problem and you’re using more math to eliminate it. Band aids upon band aids.
Nick Stokes;
It shouldn’t be there, or at least, you can do without it.
>>>>>>>>>>>>
You’re right, it shouldn’t be there. The fix is to not create it in the first place. There’s no analog signal coming into the model that has to be taken out. You’re creating an excuse for erasing something the model created, and maintaining that erasing it by spreading it out across the modeled earth surface is a valid way to do it. Bull.
Nick Stokes;
If one starts its spurious energy will show up as a discrepancy and be dissipated. How much? As with the RF, if it works, you won’t see any. I’m sure they flag and deal with any substantial energy discrepancy.
>>>>>>>>>>>>>>>.
You’re “sure”? So you don’t know? IF it works you shouldn’t notice it? How do you know it works Nick? If the models were right, you would have a leg to stand on. But they aren’t and you don’t.

May 7, 2014 9:52 pm

Argh I did it again. Apologies, Stephen Rasey.

May 7, 2014 10:40 pm

May 7, 2014 at 6:25 pm | Nick Stokes says:

Ita. I don’t think BoM did predict that. What they did predict, when it was 700 km offshore, was that it would come ashore near Cooktown. Which it duly did. Not bad for a program which, we’re told, will fall in a heap because of insoluble Navier-Stokes.

Sure, Nick … but where did the lame stream media get all of its catastrophic forecasts? BoM published maps that indicated Ita coming ashore up near Cape Melville and spearing inshore … I notice that they’ve now taken down these maps … “IDQ65002 is not available” … 😉 There’s a fair distance between Cape Melville and Cooktown (great pub there) … and it trickled down the coast before falling into a wimpish wet heap north of Townsville. Seems that the ‘program’ is really only good for 24hr forecasting done at 3 hr intervals.

May 7, 2014 10:44 pm

Climate models in need of review
http://notrickszone.com/#sthash.9Hpx0txZ.dpuf

Another aspect swaying Bengtsson to speak up has been the abject failure of climate models. In the interview Bengtsson maintains that he’s a friend of climate prognoses, but that they are now due for major review in order “to secure their credibility“. He says the IPCC has not been critical enough over the models:
“It is frustrating that climate science is not able to validate their simulations correctly. The warming of the Earth has been much weaker since the end of the 20th century compared to what climate models show.”

Might be useful to index the growing number of quotes from eminent climate scientists regarding the failure of the GCM’s.

Martin A
May 8, 2014 1:14 am

Nick Stokes says:
May 7, 2014 at 7:44 pm
davidmhoffer says: May 7, 2014 at 7:33 pm
“Gobsmacked.”
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
“What did you do?”
“O, I sent filtered feedback from output to input. Only RF, no audio”
“Good. How much energy are you sending back?”
“Well, I can’t notice any RF at all?”
“Gobsmacked. How can it work if you feed back no energy?”
“Do you want me to take it out?”

If you have an amplifier oscillating at rf, you find out why and stop it. [I’ve been doing just that, with a Quad 405 bought for ‘spares or repair’].
To think that you can reliably sort out such a problem by applying feedback from output to input via a high-pass filter is a delusion. It would be a miracle if such a procedure worked.
I’d imagine the same conclusion applies to the procedure of ‘correcting’ energy balance errors in GCMs.

richardscourtney
May 8, 2014 4:23 am

Friends:
Many good – and important – details are being discussed, but the thread’s discussion seems to have lost sight of a point which I think is more important than such details.
In his above article Robert G Brown writes

As AR5 directly remarks — of the 36 or so named components of CMIP5, there aren’t anything LIKE 36 independent models — the models, data, methods, code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of newly funded program at a new school or institution.

That is true (i.e. the models are not independent), but despite that, each model emulates a different climate system from every other model.
Long ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W/m^2) versus aerosol forcing (W/m^2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
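A toy calculation (with invented numbers, not Kiehl’s) illustrates the compensation: two models whose transient responses differ by a factor of two can each reproduce the same observed warming provided each is given its own aerosol offset.

```python
# Illustrative arithmetic only; the numbers are invented for the sketch and
# are not Kiehl's values.
observed_warming = 0.7      # K of 20th-century warming to be matched
ghg_forcing = 3.0           # W/m^2, assumed common to both "models"

for name, response in [("model A", 0.5), ("model B", 0.25)]:   # K per (W/m^2)
    required_total = observed_warming / response                # W/m^2 needed
    aerosol_forcing = required_total - ghg_forcing              # the tuning knob
    print(name, round(required_total, 2), round(aerosol_forcing, 2))

# model A: total ~1.4 W/m^2, aerosol ~ -1.6 W/m^2
# model B: total ~2.8 W/m^2, aerosol ~ -0.2 W/m^2
# The more sensitive model needs the stronger aerosol cooling, which is
# exactly the compensation Kiehl describes.
```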
The outputs of the models cannot be validly averaged because the models are not independent, and any obtained average must be wrong because an average of wrong is wrong.
Richard

Bill Illis
May 8, 2014 5:54 am

richardscourtney says:
May 8, 2014 at 4:23 am
—————–
Richard, everyone can use any of my charts at any time, without attribution. To me, it is just information that should be available to all.

Merrick
May 8, 2014 6:09 am

In response to HomeBrew, rgbatduke says:
“have several gigabytes worth of code in my personal subversion tree”
I hope it’s not several gigabytes of your code only, or else you are a really bad programmer (which often is the case, since everybody who’s taken some kind of university course considers themselves fit for computer programming).
Well, let’s not forget, either, that the point of a subversion tree is to (potentially) have many branches of code for different development aspects. Internally, a well implemented (i.e., modern) implementation of subversion means that (mostly) only differences in code are stored between versions, but that’s totally internal. To the end user, though a lot of it might be VERY redundant, there can easily be MANY gigabytes of code visible for a few complex projects with many branches.

rgbatduke
May 8, 2014 6:19 am

Mathematical models of cardiac activation depend on coupled PDEs, which describe current flow within the heart, and ODEs, which describe the nature of current sources within cells. The latter depend exquisitely on parameterisation – at least 53 coupled ODEs that depend on parameters measured under very artificial conditions.
When I was told by a professor of mathematics at a major institution “If that is what the equations say, that is what is happening”, I came to a conclusion similar to your own.
The upshot of “Mathematical Cardiac Physiology” is that there are supposed “spiral waves” of activation in ventricular fibrillation. This has found its way into major textbooks (including one from MIT). When presenting data that refutes this, I have been told that my data must be wrong because they do not conform to current models!
The problem is that nobody has ever demonstrated the presence of “spiral waves” in animal preparations, and such data as exist from intact human hearts do not support their existence.

Yeah, I know of some people at Duke who have worked on this. Interesting (and difficult) problem. My recollection of the discussion is that (however it is modelled) the oscillator is supposedly sufficiently nonlinear that when its structure degrades in certain ways or is desynchronized in certain ways, the heart’s beat becomes chaotic — period doubling occurs and it descends into the branches of a Feigenbaum tree with both spatial and temporal chaos. I don’t know how much of this is related to a specific e.g. spiral model (not having ever looked at the heart models at all) but the idea is an appealing one and — I thought — had support from ECG traces and so on. Not to open another can of worms on WUWT, of course. I’m surprised that they have elevated the models to the point of truth without any direct experimental confirmation and observation, though. Do the models have ANY predictive value that makes them appealing even if internal details are wrong?
rgb

Walt The Physicist
May 8, 2014 7:00 am

Climate models predicting some hundreds of years out… It seems there’s no problem; these fellows can model the Universe, all 13 billion years of it. I wonder what Chris Essex would say. As usual, very interesting info is in the Acknowledgements section – all the funding agencies allocating our taxes so wisely.
See: Nature 509, 177–182 (08 May 2014) doi:10.1038/nature13316
Properties of galaxies reproduced by a hydrodynamic simulation
M. Vogelsberger, S. Genel, V. Springel, P. Torrey, D. Sijacki, D. Xu, G. Snyder, S. Bird, D. Nelson & L. Hernquist
Previous simulations of the growth of cosmic structures have broadly reproduced the ‘cosmic web’ of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models. Moreover, they were unable to track the small-scale evolution of gas and stars to the present epoch within a representative portion of the Universe. Here we report a simulation that starts 12 million years after the Big Bang, and traces 13 billion years of cosmic evolution with 12 billion resolution elements in a cube of 106.5 megaparsecs a side. It yields a reasonable population of ellipticals and spirals, reproduces the observed distribution of galaxies in clusters and characteristics of hydrogen on large scales, and at the same time matches the ‘metal’ and hydrogen content of galaxies on small scales.

rgbatduke
May 8, 2014 7:16 am

Completely fail? I’d like to see that quantified.
Well, there are two places and ways you can. You can go get chapter 9 of AR5, which I’ve been referring to for months now, and look at figure 9.8a. Eyeball the collective variability of the MME mean compared to the actual temperature (accepting uncritically for the moment that HADCRUT4 as portrayed is the actual temperature). You will note that the MME mean skates smoothly over the entire global temperature variation of the first half of the 20th century, smoothing it out of existence. You will note that the MME mean smoothly diverges from the temperature post 2000. Note carefully the term “over”: count the stretches where the MME mean exceeds the actual temperature versus those where it is lower, and bear in mind that if the MME mean were actually tracking the physics, the probabilities of it being higher or lower than the actual temperature outside of the reference interval should be about equal. If you compute the probability of the observed data — even with a crude estimate — and turn it into a p-value, I think you’ll conclude that the MME mean soundly fails a hypothesis test with the null hypothesis “the MME mean accurately predicts the temperature within statistical noise”.
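For the crude estimate, a simple sign test is enough. The sketch below uses made-up counts purely for illustration, and it ignores autocorrelation, which a careful test would have to address:

```python
from math import comb

def sign_test_p(n_high, n_low):
    """Two-sided binomial sign test: if the model mean really tracked the data,
    years above and below it should be roughly 50/50 outside the tuning period.
    Illustrative sketch only; the counts passed in below are invented."""
    n = n_high + n_low
    k = max(n_high, n_low)
    # probability of an imbalance at least this extreme under p = 0.5
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical example: 50 years outside the reference interval, with the
# model mean above the observations in 40 of them.
print(sign_test_p(40, 10))   # far below 0.05: the null hypothesis fails
```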
If you look at the per-model traces (hard to do in the spaghetti graph presented) you will observe that the individual models — in spite of themselves being PPE averages of tens to hundreds of traces — have truly absurd variability compared to the actual climate. Again, everywhere but the reference period, their variance is easily 2-3 times the variance of the climate, where the climate is a single trace and the models are the averages of many, many traces. Surely you are aware of the fluctuation-dissipation theorem. Surely you understand that this is direct evidence that the models, having the wrong dissipation — or if you prefer, susceptibility — are deeply broken. They do not correctly capture the physics of the dissipation of global energy, and incorrectly balance gain terms with loss terms to manage to hold even close to the mean. Since the balance is delicate, they consistently overshoot before negative feedback drives the system back towards the mean, and do so (rather amazingly) across multiple runs. I shudder to think of what individual runs might look like compared to the actual trace — one expects that the variance is at least 3 to 10 times even what is portrayed in 9.8a.
The second place you can look is that you can, as I suggested earlier, watch Chris Essex’s video (linked above). He’s not an idiot — he’s a mathematician, AFAICT — and in the video he explains in some detail why the models are very unlikely to be able to succeed in what they are attempting to do, beginning with a quote from AR3 that states basically that “the models are very unlikely to be able to succeed in what they are attempting to do”. That is, everybody knows that predicting a chaotic system at inadequate resolution over very long times is impossible, and knew it fifteen years ago. I knew it then. You knew it then. We all know it. So why do we persist in pretending that in the field of climate science we can do it, when not only do we fail everywhere else, but when solving this is a million dollar grand challenge math problem that thus far has defied solution? It was very amusing to watch the video and watch him make almost precisely the same points I’d already written down in text above, one after the other.
Except for one, the one you should watch the video for, as it answers your question. He presents a graph — from AR4 IIRC — of the Fourier transform of the temperature series generated by the GCMs at that time. My recollection was that the FT was for unforced climate models — CO_2 increases turned off. As he points out, the climate scientists present it in terms of period, not frequency (as Willis has been doing), but that is unimportant. The point is that the FT starts off noisy and low, rises to a substantial peak on the order of a few years in, and then drops off to flat — flatty flat flat, as he waggishly puts it, outside of around 100-200 months. It is then an absolute wasteland of flatness beyond.
This is unsurprising — the GCMs are (incorrectly) balanced so that there is a single relevant knob — anthropogenic CO_2 — that determines the climate set point, and so that the other things like volcanic aerosols that have any observable effect have a lifetime of at most a few years after a major event. A cursory examination of the data reveals that they have too high a “Q” value — their effective mass and spring constant are too large relative to their damping (fluctuation-dissipation, again) — so that they oscillate over many times the correct range, but at the same time are tightly bound to a fixed equilibrium with little room for any sort of naturally cumulating variation. The data presented in this curve is proof of this. There is no long term variability in the GCMs, yet the temperature anomaly itself varies by amounts ranging from 0.2 to 0.4 C per decade in numerous places in the climate record, and cumulates temperature differences of up to a bit more than 1 C over a century or two in numerous places. The spectrum of the actual climate is not flat and the actual climate has long term natural trends that are not driven by CO_2 — as we have discussed in numerous threads on WUWT.
So there are two sources of support for my statement — you can lay an eyeball on the actual data traces in AR5 and see for yourself that the models have a mean behavior that is not accurate EITHER in the past OR the future of the reference period (plus note in passing the direct evidence that the individual models, viewed as damped driven oscillators with a slowly varying driving force, generally have the wrong Q — a really, really wrong Q), or you can visit Essex’ video and look at the AR4 FT (or, probably, look at it directly in AR4 but I don’t have any idea what chapter or page it is in). I’d strongly suggest that you at LEAST do the former — you are a physicist and a smart guy and can read graphs at least as well as I can — you look at figure 9.8a and you tell me that the figure is strong evidence that the models are correct. They work well only in the reference period.
Of course they work well in the reference period. Nobody cares. Anybody can fit a nearly monotonic function over a 25 year period, especially when the models incorporate the major volcanic events with a significant short term effect. What they don’t do is damp correctly, even from these events. They don’t exhibit any tendency to wander around slowly, cumulating natural variability. They do exhibit a tendency to spike way up in temperature, then zoom straight down in temperature, then zoom way up in temperature again — and this is a mean behavior already averaging over many runs, persistent over long time integrals!
How you, or anybody, can look at this behavior and conclude that “the models are working” is beyond me. I look at it and go “back to the drawing board”, or “not ready for prime time”. But Chris Essex’s astounding assertion at the beginning of his talk explains a lot. He was briefly recruited by a climate group to help them find the smoking gun condemning CO_2 by building GCMs that showed strong warming. I have to say that it sounded like he was quoting them.
Not since I read through the climategate letters have I encountered anything so appalling in science.
How much confidence would you place in a study that hired somebody to “find the smoking gun” connecting the eating of wheat bran to infertility rates in women which then found it? To “find the smoking gun” relating being black to having lowered intelligence? To “find the smoking gun” relating the consumption of jelly beans to acne:
http://xkcd.com/882/
To “find the smoking gun” linking, well, anything to anything.
That is truly a recipe for disaster in science, an open invitation to its corruption. People hired for the express purpose of “finding something” in science either find it or else, well, they become unemployed and their spouses get angry, their children have to change schools, they have to dip into their retirement, they fail to get tenure and have to become car salesmen or Wal Mart greeters.
rgb

May 8, 2014 10:06 am

This is powerful stuff -and I agree, worth archiving for that long distant posterity that will look back and say ‘daddy, where were you when all this climate modelling was rampant across all science academies, all governments, the EU, the UN, all environmental NGOs…..and the whole of the left-liberal-green coalition, their press and most of the media?’
I can’t follow the code criticism – well, not much of it. But I know good critical stuff when I see it. But there is something missing. Here’s a little bit of history that will illustrate the problem – it is about the creation of alternative models (which is a very tricky business) and how to use them. Maybe there is a lesson here that someone can take up.
In the early part of the nuclear industry in Britain and the US, computer simulation was used to assess the environmental impact of reactor accidents – including melt-down of the core. But this was all kept secret. The reactors were licensed – and ‘safety’ standards set without public involvement, parliamentary oversight or needless to say, critical input from outsiders. Toward the end of that decade, the American Physical Society forced the issue, and the first reports were made public on the consequences of melt-downs. In the UK, this prompted a Royal Commission report (1976). Of course, many reactors had already been built by then.
The consequences of aerial releases were however obfuscated in the way the models presented ‘probabilistic’ combinations of failure rates, amount released, radiotoxicity and even wind direction, to arrive at an individual risk component – that was very, very small. The societal risk was not computed – and of course, failure rates were unvalidated because of the complexity of the systems – and did not include such factors as terrorist attack or aircraft impact. Concerned outsiders, such as myself, wanted to know ‘what if’ the event had happened and the wind was in my direction – but the models could not do that (or rather – in the hands of the nuclear labs, they would not do that).
So – we (my colleagues in an environmental science research group and I) petitioned for the release of the models. We succeeded. Next we had to understand them and modify the parameters to make them deterministic. This was during the time of punch cards on computers that you could walk around inside of! I knew nothing myself – but others came to our aid – even from within the nuclear labs – and the programmes were decoded and rerun – all with alternative but justifiable parameters – such as factoring out the probability of failure (and assuming it has happened), fixing the wind direction, but accepting all the downwind consequence part of the analysis in order to limit objections on methodology.
We succeeded in lifting the obscurity and focussing the consequence side of the equation not on personal individual risk of cancer (tiny beyond 10 miles) but on the societal impact of contaminated land, relocation, loss of production, etc. These reports were then fed into the policy process at all levels of government decision making and in the EU (with some small successes in affecting emergency planning procedures).
Later in the 1980s, we had gained the trust of that particular aerial modelling community. At one point, my research group even commissioned a model-run from a UK national laboratory to input to an inquiry.
Could it be done today? That is – take one of the better models and modify the key parameters of, say, the lambda factor in the RF equations or aerosol fudging factors, and add a coming Maunder Minimum in solar activity? I did ask our MetOffice Hadley Centre if I could have access to their simpler model – the one they used to predict that a coming Maunder Minimum would not significantly slow down global warming – on the assumption that I could find some help to modify and run it. But they ignored the request (Julia Slingo was otherwise polite). In the old days, such requests could not be ignored because we had a battery of big guns on our side from the NGOs, the press, and even the trades unions representing emergency workers (who funded the study we commissioned). Now, of course, we have no friends or allies – well, at least, none we would care to embrace. And actually, it is not much of a ‘we’ over here in the UK.
But could it be done in the USA? The expertise is there, from what this thread shows.
I would be interested to know……. peter.taylor(at)ethos-uk.com
and to participate.

May 8, 2014 10:46 am

RGB, your latest post shows that you are completely convinced that the IPCC model outputs provide no basis for forecasting future climate. Is it not time to move on? I repeat an earlier comment which I hope you can find time to respond to.
“RGB I have been saying for some years that the IPCC models are useless for forecasting and that a new approach to forecasting must be adopted. I would appreciate a comment from you on my post at 5/7/6:47 am above and the methods used and the forecasts made in the posts linked there at http://climatesense-norpag.blogspot.com
The 1000 year quasi-periodicity in the temperatures is the key to forecasting. Models cannot be tuned unless they could be run backwards for 2,000–3,000 years using different parameters and processes on each run for comparison purposes. This is simply not possible.
Yet even the skeptics seem unable to break away from essentially running bits of the models over short time frames as a basis for discussion.
Perhaps it would help psychologically if people thought of the temperature record as the output of a virtual computer which properly integrated all the component processes.
Let us not waste more time flogging the IPCC modeling dead horse and look at other methods.”

Michael Gordon
May 8, 2014 10:59 am

I will add some noise to this conversation with Nick Stokes and davidmhoffer. For context:
Nick Stokes says: May 7, 2014 at 7:44 pm
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
“What did you do?”
“O, I sent filtered feedback from output to input. Only RF, no audio”
“Good. How much energy are you sending back?”
“Well, I can’t notice any RF at all?”
“Gobsmacked. How can it work if you feed back no energy?”
“Do you want me to take it out?”
I (*) can tell you how much energy I am feeding back by direct measurement (oscilloscope or spectrum analyzer, the latter probably better for this purpose).
Your argument seems to be that negative feedback removes the RF entirely and thus there is nothing left to measure. This is incorrect. RF energy ALWAYS exists as broadband thermal noise. That’s what starts the oscillator in the first place. Even when damped it is still there; the idea is to reduce it such that at RF frequencies the gain of the amplifier is less than 1. It is still there and can be measured with a spectrum analyser (or, in a pinch, an oscilloscope with no audio input signal).
In a climate model, which is a computer program, noise cannot exist at any frequency unless the model puts it there. The argument seems to be dealing with “noise” through filtering versus simply not adding noise in the first place.
I hope we can all agree that a model with no noise also cannot vary. All runs of a model, in fact all models using the same code and data, must necessarily produce exactly the same output every time. That was my criticism of Michael Mann’s offer of a MatLab model and data — to see if you get the same result.
Duh, of course I will get the same result. It’s a computer program. It does exactly what it was told to do. I pointed this out on the Scientific American website and was banned shortly thereafter 🙂 (Another groupthink echo chamber bites the dust)
I recognize that a model could be extremely sensitive to input conditions. Hashing algorithms are an example; changing a single byte of input to MD5, SHA(1,2) or even CRC produces a completely different output in a way that is supposed to not be predictable. This is not noise, it is still deterministic as is any computer program, but it is hard to PREDICT. That’s okay.
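A trivial sketch of both points, determinism and sensitivity, using only the standard library (illustrative only):

```python
import hashlib

# The same input always gives the same digest (deterministic), while a
# one-byte change gives an unrelated-looking digest (hard to predict, yet
# still perfectly deterministic).
msg1 = b"model input data version 1"
msg2 = b"model input data version 2"    # differs by a single byte

print(hashlib.md5(msg1).hexdigest() == hashlib.md5(msg1).hexdigest())  # True, always
print(hashlib.md5(msg1).hexdigest())
print(hashlib.md5(msg2).hexdigest())    # completely different digest
```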

Michael Gordon
May 8, 2014 11:06 am

Whups, forgot to explain the (*). Qualifications: former holder of a First Class FCC radiotelephone license and current holder of an Amateur Extra class radio license. Of these the First Class FCC was considerably more difficult and included questions related to this subtopic. I’m also a part-time computer programmer, so I know that programs are deterministic. They will always produce the same output for the same input, but the output might not be easily predictable or reversible (as in hashing algorithms).
Hashing algorithms use feedback internally and so do weather systems, seems to me.

Michael Gordon
May 8, 2014 11:16 am

“The process Gavin describes damps this at the outset, automatically. It takes the energy out of the mode. The reason he couldn’t give you a number is that it is intended to damp the effect before you can notice. And the amount of energy redistributed is negligible.”
And yet measurable. If you can mathematically damp it then you already know its magnitude, and it can be reported. All such computations can be logged to a file if you wish, so you know with precision every instance of adjustment and their cumulative effect.
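A minimal sketch of what such logging might look like (entirely hypothetical, not code from any model):

```python
import csv

class EnergyFixLogger:
    """Hypothetical sketch: if a model damps an energy residual each step,
    nothing stops it writing that residual out, so the cumulative adjustment
    is known exactly."""
    def __init__(self, path):
        self.f = open(path, "w", newline="")
        self.writer = csv.writer(self.f)
        self.writer.writerow(["step", "residual", "cumulative"])
        self.cumulative = 0.0

    def log(self, step, residual):
        self.cumulative += residual
        self.writer.writerow([step, residual, self.cumulative])

    def close(self):
        self.f.close()

logger = EnergyFixLogger("energy_fixes.csv")
for step, residual in enumerate([0.12, -0.05, 0.31]):   # made-up residuals
    logger.log(step, residual)
logger.close()
```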

Nick Stokes
May 8, 2014 12:01 pm

Michael Gordon says: May 8, 2014 at 10:59 am
“In a climate model, which is a computer program, noise cannot exist at any frequency unless the model puts it there. The argument seems to be dealing with “noise” through filtering versus simply not adding noise in the first place.”

No, the situation is similar to the amplifier. Like RF, the noise is always there; the aim is to stop it getting out of hand. It is, literally, noise; acoustic oscillation, though in the milliHz range. Wind is noisy, as we know, and models reflect that. It does no harm unless some limitation (resolution) causes it to appear as something else which has a high loop gain.
In an amplifier you would limit RF gain, possibly through selective negative feedback, as in my example. The objective of the forced dissipation of spurious energy is similar.

davidmhoffer
May 8, 2014 12:43 pm

Nick Stokes;
No, the situation is similar to the amplifier.
>>>>>>>>>>>>>>
An amplifier is a physical analog device. A climate model is a completely digital construct whose link to reality is entirely devised by programmers. Drawing an analogy between the two is utterly absurd.
That said, I agree with richardscourtney. The central take away for readers of this thread should be the devastating critiques by RGB.

AJ
May 8, 2014 12:51 pm

Dr. Brown,
Thanks for your interesting comments. I have an observation to add to yours regarding dissipation. A couple of years ago I was comparing ARGO data vs. the model outputs for AR4. Specifically I was looking at the variance in ocean temperatures at different depths and latitudes. Given the short time span examined I assume the annual cycle dominated the signal.
In general, my results indicated that below the thermocline the dissipation of energy was greater in the models than observations would support. Given that the observed variances were small to begin with, perhaps it could be argued that this “didn’t matter”. On the other hand, I didn’t get a warm and fuzzy feeling that the models could adequately integrate heat uptake over decades.
If you’re interested here are my findings.
https://sites.google.com/site/climateadj/ocean_variance

Nick Stokes
May 8, 2014 1:09 pm

rgbatduke says: May 8, 2014 at 7:16 am
“Eyeball the collective variability of the MME mean compared to the actual temperature (accepting uncritically for the moment that HADCRUT4 as portrayed is the actual temperature). You will note that the MME mean skates smoothly over the entire global temperature variation of the first half of the 20th century, smoothing it out of existence.”

Well, of course the multi-model mean is smoother. Its smoothness can be increased as you wish by including more models in the mean. But the individual runs, whose variability could reasonably be compared with Earth’s, are not smooth, as Fig 9.8a shows.
As I’ve said above, models are not expected to predict weather, even decadal weather. They are not provided with the information to do so. They generate random weather, with hopefully correct climate statistics in response to forcing. It isn’t clear what caused the early 20C warming, but if it wasn’t a forcing supplied to the models, they won’t show it. They have no information to cause them to do so.
Then you say that individual runs are too variable. That’s hard to deduce from a spaghetti graph. There are some big spikes, but a lot of runs. I’d like to see that quantified too. But it’s possible; I can imagine that the models have more trouble getting the variability right than the expected value. But it’s the EV we really want.
Incidentally, I think there is one good reason for greater variability. HADCRUT, like all indices, uses SST as a proxy for air temperature. Models return the actual air temperature in the boundary cells.
I tried to watch the Essex video, but it’s hopelessly waffly – I wish people could just write down what they want to say.
As to why they work well in the reference period – that’s not to do with model performance. Reference period just means the period during which they are set to a common mean. Any set of squiggly curves will show more concordance if you make that requirement. Setting the mean minimises SS variation. That’s just statistics, not PDE modelling. You can see it in this plot of paleo proxies that have been baselined 4500-5500 BP.
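The purely statistical point is easy to check with made-up random walks rather than model output; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
walks = np.cumsum(rng.normal(0, 0.1, size=(20, 200)), axis=1)  # 20 random "runs"

ref = slice(80, 120)                                  # the "reference period"
baselined = walks - walks[:, ref].mean(axis=1, keepdims=True)  # common mean there

spread_in_ref = baselined[:, ref].std(axis=0).mean()
spread_elsewhere = np.delete(baselined, np.arange(80, 120), axis=1).std(axis=0).mean()
print(spread_in_ref, spread_elsewhere)
# The cross-run spread is visibly smaller inside the baselining window than
# outside it, with no physics involved at all.
```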

May 8, 2014 1:33 pm

davidmhoffer:

That said, I agree with richardscourtney. The central take away for readers of this thread should be the devastating critiques by RGB.

Me too. And if I was awarding grants I’d look kindly on Brown’s generic software component for ‘stable, rescalable quadrature … over the tessera’ – to allow much more flexible global climate models and all kinds of other goodies. For these highly educational posts haven’t primarily been destructive (of current GCMs) but constructive. One can imagine an immensely powerful open source community in the future with software objects like this available to them. Even if, as Brown, Essex and many of us suspect, we discover through them that we can’t ever model the spatio-temporal chaos of climate much better.

Mark
May 8, 2014 3:11 pm

David A says:
The GCMs are informative! They are extremely informative. They all run wrong (not very close to the observations) in the SAME direction! (Too warm.)
This would be less remarkable if there were actually only a very small number of GCMs.
Which is something the OP was claiming.
A bit like the way you can have lots of “brands” of a product in a supermarket, but on close examination you find many have things in common.

Mark
May 8, 2014 3:22 pm

beng says:
It takes the most sophisticated models running on supercomputers to “model” a modern aircraft. Those presumably actually work.
Would Boeing, EADS, etc actually build an aircraft purely on the basis of such models or would they still put physical models into wind tunnels at some point?

john robertson
May 8, 2014 4:52 pm

Another fine comment.
Thank you Robert Brown.
So climatology by computer model is a faith based racket.
The shaman and High Priest types who fell from civic dominance with the Reformation and acceptance of the scientific method, are back.
And still lusting to lord it over all.
The climate models are a modern substitute for the incantation and gobbledygook of the dominant state religion.
A nastier state religion would be hard to imagine.
Attacks productive citizens.
Rewards parasitic activity.
Attacks the foundations of civil society.
Seeks active destruction of poor brown persons.
Robs the many to reward the criminal few.
Sure to end well?
These schemes and activities have always been social intelligence tests.
But it can be a mistake to pass the test if the fools and bandits have the guns.
I keep running into this thought; Is it a form of insanity to need to feel yourself morally superior to other people and insist you alone are fit to rule?
Of the nature of: “No, I am Napoleon.”
Is this the nature of the persons who gravitate toward the UN?

Nick Stokes
May 8, 2014 5:14 pm

Mark says:May 8, 2014 at 3:22 pm
“Would Boeing, EADS, etc actually build an aircraft purely on the basis of such models or would they still put physical models into wind tunnels at some point?”

Both, but CFD, solving the same Navier-Stokes that Essex says is impossible, is a big part. Here is an interesting Boeing presentation on CFD and the 787.

May 8, 2014 5:26 pm

I think Frank K has been correct in his comments responding to others who attempted to explain various aspects of the numerical solution methods applied to the discrete approximations to some of the PDEs used in GCMs.
Generally, an initial analysis objective is to determine the type of the PDEs: parabolic, hyperbolic, or elliptic. This analysis is frequently conducted based on the quasi-linear approximations of the PDEs. The type of equation is determined by the roots of the characteristic equation that is given by the determinant of the Jacobian of the equation system. The characteristics determine the locations and content of the information that must be specified at the boundaries of the solution domain of the independent variables.
Stability properties of the finite-difference approximations to the PDEs are generally based on completely linearized forms of the equation system. Additionally, the analyses generally require that the initial state of the equations be a uniform state with an absence of all temporal and spatial gradients. The stability properties depend on the time level at which each of the terms in the approximations is evaluated. Stability properties are usually summarized in a relationship between the temporal and spatial increments plus geometric descriptions of the discrete grid.
Numerical solution methods can be unconditionally unstable, conditionally stable, or unconditionally stable. The last of these usually arise when ‘fully implicit’ solution methods are used, and is generally interpreted to mean that large discrete temporal increments can be used in applications.
One of the more familiar limitations for conditionally stable methods is the Courant-Friedrichs-Lewy condition, which relates to the time required for a signal to traverse a single spatial increment. The condition is usually encountered in the case of numerical solution of hyperbolic PDEs. The signal can be either the pressure, or the transport of a convected dependent variable.
If the pressure is treated in an explicit manner, i.e. evaluated at the previous time level, the CFL condition leads to the necessity of using small discrete temporal increments, because the speed of sound is large.
The transport of temperature or energy when treated explicitly leads to conditional stability relating to the time required for the bulk motion to traverse a discrete grid.
Implicit handling of the pressure and transported quantity will eliminate the conditional stability and lead to unconditional stability.
It is very important to note the following:
1. Stability is only one aspect of numerical solutions of systems of algebraic equations that arise from discrete approximations to PDEs. The discrete approximations are also required to be consistent with the PDEs. Stability and consistency give convergence.
2. Stability analyses are also usually conducted with linearized forms of the algebraic approximations. Additionally, as in the case of characteristics analyses, a uniform initial state for the complete system is assumed. As Frank K noted above, the model equations used in GCMs are partial differential equations containing algebraic expressions for mass, momentum, and energy exchanges at the boundaries of the constituents. These exchanges must be assumed to be zero in order for uniform initial states to be obtained.
Thus the algebraic coupling terms do not enter the stability analyses. The time constants for some of these terms can easily be more restrictive than the CFL criterion, or other stability criterion associated with the numerical approach. Generally, all possible initial states that could be realized when the interaction terms are included cannot be covered in stability analyses.
In general, all aspects of all the equations in the total model must be investigated for stability requirements.
3. The model equations used in GCMs are generally not hyperbolic; second-order diffusion terms in the momentum balances, for example, lead to parabolic PDEs for the transient case, and elliptic systems for the steady-state case. Additionally, some momentum model equations have been modified to include fourth-order derivatives beyond the second-order physical diffusion terms.
4. The CFL criterion is not a ‘guideline’. It is instead a hard and fast limit. If a non-linear analysis is conducted, the effects of the terms not completely handled in the linear analyses can be determined. The same goes for the algebraic coupling terms, although that is especially messy.
5. Characteristics and stability analyses are math problems. One does not need to have actually worked with CFD codes to understand the ramifications of the analyses.
In summary, throwing out simplistic concepts which are very likely to be completely unrelated to any actual GCM does not lead to an understanding of the numerical solution aspects of the equation systems used in GCMs.
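The practical meaning of the CFL limit is easy to demonstrate with a textbook one-dimensional example (a sketch only, unrelated to any actual GCM): an explicit upwind advection scheme is well behaved at a Courant number below one and blows up above it.

```python
import numpy as np

def advect(courant, steps=200, n=100):
    """Explicit first-order upwind advection of a bump on a periodic grid;
    a standard textbook illustration of the CFL limit."""
    u = np.exp(-0.5 * ((np.arange(n) - n / 4) / 5.0) ** 2)   # initial bump
    for _ in range(steps):
        u = u - courant * (u - np.roll(u, 1))                # upwind difference
    return np.max(np.abs(u))

print(advect(0.9))   # stable: stays of order one (the bump just diffuses a bit)
print(advect(1.1))   # unstable: the solution has grown by many orders of magnitude
```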

May 8, 2014 7:28 pm

Dr. Page:
“The whole UNFCCC travelling circus has no empirical basis for its operations and indeed for its existence, depending as it does on the predictions of the inherently useless climate models. The climate is much too complex to model but can be predicted by simply knowing where we are in the natural quasi-cycles.”
Of course the UNFCCC has no empirical basis for its operations. It was established as a political entity from the get-go, with political aims and a totally political organization. It was sold in the UN as a method of extracting taxes from more developed countries to be paid as damages to less developed countries. Human-caused climate change was a basic assumption. Global warming with negative effects was simply the first iteration. It also assumed that all other weather, climate, and environmental changes were human-caused with negative consequences, and that they were the fault of the more developed countries.
No number of scientific arguments, models, or improvements will have any effect. It takes political operations to change a political organization. We are starting to see some of that now that it is becoming more obvious of the UNFCCC and IPCC failings at useful analysis of the climate and useful predictions combined with astronomical recommended remediation costs that won’t remediate anything. Even just a few more years of cooling will very likely put an end to global warming and hopefully climate change as useful political tools.

May 8, 2014 8:00 pm

Calculation limits, just a general observation- the Boeing presentation is a slick marketing tool, but what the heck is a “Boise Sled” on the landing gear? Maybe for landing during a snowstorm in Boise, Idaho?
A good friend of mine has made a career of using various CFD programs in aviation. His bottom-line observations are that they can be very useful in designing the wind tunnel models used for validation, and that all of Boeing’s claims, despite the blooper, are true. Current CFD is more than adequate for significantly improving aerodynamics. But he also pointed out that they all have limits where they simply fall apart, and figuring out that the CFD failed, what exactly is causing the failure, and whether it means anything for what you are trying to do is not always a simple matter. He gave an example in which a friend of his lost his life in a plane crash caused by pushing the limits of laminar flow on a critical airfoil. CFD couldn’t give an accurate prediction of the effects of off-center gusts on the airflow. A gust well within general aviation limits caused one wing to stall abruptly and flip the plane into the ground. This was, as usual, a cascade of failures that could have been avoided by more careful procedures.
You’ll notice that Boeing is probably using the most advanced software available, but they still include two stages of wind tunnel testing to verify the results, and some features undoubtedly undergo even more WT tests. Verification is almost totally absent from climate modelling. I wonder whether any of the climate modelers have rigorously tested the limits of the models, and what the consequences would be of discovering major failures in the models long after the unaffordable remediations required by their forecasts have been started?

May 8, 2014 9:14 pm

Dan Hughes says: May 8, 2014 at 5:26 pm
“The model equations used in GCMs are generally not hyperbolic”

As I demonstrated to Frank K above, the Navier Stokes equations have the acoustic wave equation embedded within. All sound satisfies the Navier-Stokes equations (it must), and sound waves are always a possible solution (and you get them).
“The CFL criterion is not a ‘guideline’. It is instead a hard and fast limit.”
It is the limit, and I’m very familiar with what happens when you get close. Waves of the maximum frequency that the grid can represent (Nyquist) start to switch to a growing mode. Checkerboard instabilities etc. You can do things to prevent that growth, so in that sense the limit is not hard. However, as you go beyond, there are more and more possible modes, so it is an effective limit.
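For what it’s worth, the textbook von Neumann analysis of the simplest explicit upwind advection scheme (my example, not anything taken from a GCM) shows exactly that: the amplification factor of the Nyquist (two-grid-point) mode crosses one when the Courant number does.

import numpy as np

# Von Neumann amplification factor for explicit upwind advection of
# u_t + a*u_x = 0:  G(theta) = 1 - C*(1 - exp(-i*theta)),  C = a*dt/dx.
# theta = pi is the Nyquist mode, the two-grid-point "checkerboard" wave.

def amplification(C, theta):
    return abs(1.0 - C * (1.0 - np.exp(-1j * theta)))

for C in (0.5, 0.9, 1.0, 1.1, 2.0):
    g = amplification(C, np.pi)
    verdict = "grows" if g > 1.0 else "damped or neutral"
    print(f"C = {C:3.1f}:  |G(Nyquist)| = {g:.2f}  ({verdict})")

Real schemes, filters, and damping terms change the details, which is presumably the “things you can do to prevent that growth” mentioned above, but the crossover itself is what the Courant number is measuring.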
“One does not need to have actually worked with CFD codes to understand the ramifications of the analyses.”
It helps. At least you don’t forget that, for all the theoretical hand-wringing, they do actually work, and the results are used in high-stakes applications (not just GCMs).

Martin A
May 9, 2014 12:59 am

Are computer models reliable?
Yes. Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes.
Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.
But computer models cannot predict the future exactly. They depend, for example, on assumptions made about the levels of future greenhouse gas emissions.

UK Met Office publication “Warming A guide to climate change”, 2011

tty
May 9, 2014 3:06 am

Nick Stokes says:
“It isn’t clear what caused the early 20C warming”
So what is the probability that this “X” factor caused the late 20C warming as well?

Nick Stokes
May 9, 2014 3:58 am

tty says: May 9, 2014 at 3:06 am
“So what is the probability that this “X” factor caused the late 20C warming as well?”

I simply said it isn’t clear. GHGs did rise significantly.
There’s every reason to expect the Earth to warm in response to forcing. But other things happen too, as they do in model runs. And there’s no reason to expect model runs to synchronise with those unforced changes on Earth. Or with each other (unforced), for that matter.

E.M.Smith
Editor
May 9, 2014 11:07 am

Very well put.
I got GISTemp to run. Royal PITA. It’s a hodgepodge of code written over several decades by different hands, in a couple of different computer languages, with no source code control system and no versioning. It looks like Topsy. “It just growed”…
In just one example, I found where an F-to-C conversion was done in the “simple but wrong” way and introduced a 1/10 C warming in about 1/10th of the records. Based on just ONE obvious line of code in just ONE program of the set. How many more such? Who knows. Subtle faults of the same order are very hard to spot, and as near as I can tell nobody but me even bothered to look.
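For illustration only: this is NOT the GISTemp code and I have not reproduced the actual line; it is a generic sketch, assuming for the sake of argument that the “simple but wrong” way amounts to truncating rather than rounding when converting records from tenths of a degree F to tenths of a degree C, of how a one-line conversion choice can systematically shift a fraction of records by 0.1 C.

def f_to_c_tenths_rounded(f_tenths):
    # F in tenths of a degree -> C in tenths, rounded to the nearest tenth
    return round((f_tenths - 320) * 5.0 / 9.0)

def f_to_c_tenths_truncated(f_tenths):
    # same conversion, but truncated toward zero ("simple but wrong" stand-in)
    return int((f_tenths - 320) * 5.0 / 9.0)

records = range(-200, 1101)            # -20.0 F .. 110.0 F, in tenths
diffs = [f_to_c_tenths_rounded(f) - f_to_c_tenths_truncated(f) for f in records]
shifted = sum(1 for d in diffs if d != 0)
print(f"{shifted} of {len(diffs)} values differ by 0.1 C; "
      f"mean difference {0.1 * sum(diffs) / len(diffs):+.3f} C")

The particular fraction and sign depend on the actual code and data; the point is only that a single conversion line can quietly move tenth-of-a-degree amounts through a large share of the records.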
Oh, and they regularly ignore things like the simple fact that an average of temperatures is NOT a temperature. (NO, they do not convert to anomalies first, then do all the math. The temperatures are carried AS temperatures until the very last step, then converted to grid-cell anomalies…)
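A toy example (made-up numbers, two stations, nothing like the real GISTemp processing chain) of why that ordering matters: if a cool station drops out of the record, averaging absolute temperatures manufactures a warming step, while converting each station to its own anomaly first does not.

import numpy as np

# Two flat (trend-free) station records; the cooler station stops reporting in 1960.
years = np.arange(1950, 1970)
warm = np.full(years.size, 25.0)           # flat 25 C record
cool = np.full(years.size, 5.0)            # flat 5 C record
cool_reports = years < 1960                # cool station drops out after 1959

# Average the temperatures themselves: the change in station mix shows up
# as an apparent 10 C jump even though neither station warmed.
avg_temp = np.where(cool_reports, (warm + cool) / 2.0, warm)

# Anomalies first: each station relative to its own 1950-1959 mean, then average.
warm_anom = warm - warm[:10].mean()
cool_anom = cool - cool[:10].mean()
avg_anom = np.where(cool_reports, (warm_anom + cool_anom) / 2.0, warm_anom)

print("temperature average: 1950s =", avg_temp[:10].mean(), " 1960s =", avg_temp[10:].mean())
print("anomaly average:     1950s =", avg_anom[:10].mean(), " 1960s =", avg_anom[10:].mean())

The real processing is far more involved than this, but it shows why the order of averaging versus anomaly-taking is not a cosmetic detail.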
The whole thing is just a computer fantasy run wild, IMHO. Dancing in the error bands of the data and doing it badly. That’s my opinion as someone who has been a professional programmer / DBA / sysadmin / computer project manager / Director of IT etc. for about 36 years.
I’ve also looked at the GCMs’ code. Somewhat better, but substantially as described above. Not gotten one to run yet. It’s on my “someday list”… Maybe when I get a grant /sarc?

Dagfinn
May 9, 2014 11:21 am

Martin A says:
May 7, 2014 at 2:13 pm
“Climate models have a sound physical basis and mature, domain-specific software development processes”
(“Engineering the Software for Understanding Climate Change”, Steve M. Easterbrook and Timothy C. Johns)
———————–
E.M.Smith says:
May 9, 2014 at 11:07 am
Very well put.
I got GISTemp to run. Royal PITA. It’s a hodgepodge of code written over several decades by different hands, in a couple of different computer languages, with no source code control system and no versioning. It looks like Topsy. “It just growed”…
——————-
The Easterbrook and Johns paper has one occurrence of the word quality: “Overall code quality is hard to assess.”
http://www.cs.toronto.edu/~sme/papers/2008/Easterbrook-Johns-2008.pdf

May 9, 2014 1:41 pm

“As I demonstrated to Frank K above, the Navier Stokes equations have the acoustic wave equation embedded within. All sound satisfies the Navier-Stokes equations (it must), and sound waves are always a possible solution (and you get them).”
The important issues are not associated with the essentially uncountable number of fluid flows that are captured by the continuous formulation of the Navier-Stokes equations: that is, all flows that meet the continuum requirement, for all fluids that obey the linear rate-of-strain / stress model. There is absolutely nothing useful in pointing out just a single example of these flows. Frank K is well aware of this aspect of these equations.
The critically important issues are those associated with (1) the modifications and limitations of the continuous formulation of the model equation systems used in GCMs (generally the fluid-flow model equations are not the Navier-Stokes equations), and this applies to all the equations used in the GCM, (2) the exact transformation of all the continuous equation formulations into discrete approximations, (3) the critically important properties and characteristics of the numerical solution methods used to solve the discrete approximations, (4) the limitations introduced at run time for each type of application and the effects of these on the response functions of interest for the application, and (5) the expertise and experience of users of the GCMs for each application area.
Discussions of specific aspects of the above issues cannot be usefully conducted until the details are known and the issues at hand are identified in the code documentation. All else is hand-waving.
The CFD situation is in no way an analogy for the GCM situation. Many CFD applications, those that do not resolve the fundamental scales of the unaltered Navier-Stokes equations, involve a few parameters. Additionally, some of these parameters have a basis in fundamental aspects of fluid flows. The GCMs, on the other hand, involve a multitude of parameters for phenomena and processes that occur at temporal and spatial scales orders of magnitude smaller than the scales resolved by the discrete numerical-methods grid. It is the parameterizations that carry the heavy load of fidelity between the model results and the real-world physical phenomena and processes. The parameterizations are descriptions of states that the materials of interest have previously attained. They are not properties of the materials.
Generally, CFD applications that potentially involve the health and safety of the public are subjected to a degree of Verification, Validation, and Uncertainty Quantification that the GCMs can only dream about. Decades of detailed experimental testing and associated model/code/calculational-procedure Validation for each response function of interest continue to be carried out for such applications.
There is no correspondence to the GCM case at all. Absolutely. None. Whatsoever.
It is getting to be a joke whenever some fundamental approach to the description of material behavior is invoked as an analogy to the status of Climate Science. That reminds me of the good ol’ days when “realizations” using GCMs were equated to actual realizations of the Navier-Stokes equations to investigate the basic nature of turbulence. And the times that statistical mechanics is similarly invoked.
There is a singular, critically important difference between those proven descriptions of materials and the status of Climate Science. The proven fundamental laws will not ever, as in never, incorporate descriptions of states that the materials have previously attained. Never. Instead, the proven fundamental laws will always solely contain descriptions of properties of the materials.
The GCMs are based on approximate models of some parts of some fundamental equations, plus a multitude of empirical descriptions of states that the materials in the system have previously attained. Even some of the approximate models contain parameters that represent previous states of the materials, not material properties. Many of the empirical descriptions are somewhat, or completely, ad hoc (for this case only). The multitude of parameterizations do all the heavy lifting relative to the fidelity of the results of the model to physical reality.
A “realization” by a GCM is a “realization” of the processes captured by the descriptions of those previous states. Such “realizations” are not in any way actual realizations of the materials that make up the Earth’s climate systems.
The distinction between descriptions based on material properties and empirical estimates of previous states (the latter are characterized as ‘process models’) is so fundamental that it must always be kept in mind. Climate Science seems to completely ignore this distinction and continues to invoke false analogies.
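A deliberately trivial sketch of the distinction (my own toy numbers and a made-up bulk-flux formula, not any actual GCM parameterization): a material property such as the specific heat of dry air is fixed by the material itself, whereas a ‘process model’ coefficient is whatever number makes a crude formula reproduce a previously observed state, and it changes when the state does.

import numpy as np

# Material property: approximately constant, measured on the material itself.
CP_DRY_AIR = 1004.0   # J/(kg K), approximate specific heat of dry air

# Toy "process model": flux = K * dT, with K tuned to reproduce fluxes
# "observed" under particular past conditions (made-up numbers).
def fit_K(flux, dT):
    # least-squares slope through the origin
    return float(np.sum(dT * flux) / np.sum(dT * dT))

dT = np.array([1.0, 2.0, 3.0, 4.0])
flux_calm  = 8.0 * dT + np.array([0.5, -0.3, 0.2, -0.4])    # calm period
flux_windy = 15.0 * dT + np.array([-0.6, 0.4, -0.2, 0.3])   # windy period

print("K tuned to the calm data :", round(fit_K(flux_calm, dT), 2))
print("K tuned to the windy data:", round(fit_K(flux_windy, dT), 2))
print("cp of dry air either way :", CP_DRY_AIR)

The tuned K describes states the system has been in; cp describes the air.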

David Young
May 14, 2014 11:00 pm

Nick Stokes brings up the problem of sound waves for Navier-Stokes solvers. This is indeed a problem, since the speed of sound is far greater than the propagation speeds of the weather-scale motions that are of interest. Indeed, I recall from when I was in graduate school a seminar at NCAR by Gerry Browning on the method he and Kreiss devised to filter out sound waves from weather models to allow much larger time steps. So for Nick, sound waves are in fact an unwanted feature of the Navier-Stokes equations that must be filtered out for effective weather simulations.
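Rough numbers (my own illustrative grid spacing and wind speed, not from any particular model) showing why filtering the acoustic modes buys so much under an advective, CFL-style limit dt <= dx / (fastest signal speed):

# Illustrative time-step arithmetic only; real models use semi-implicit or
# filtered equation sets rather than this crude estimate.
dx = 100.0e3          # horizontal grid spacing, m (assumed 100 km)
u = 50.0              # strong jet-level wind speed, m/s (assumed)
c_sound = 340.0       # speed of sound, m/s (approximate)

dt_with_sound = dx / (u + c_sound)   # acoustic waves set the limit
dt_filtered = dx / u                 # advection alone sets the limit

print(f"dt limited by acoustic waves: ~{dt_with_sound:.0f} s")
print(f"dt with sound filtered out  : ~{dt_filtered:.0f} s")
print(f"gain                        : ~{dt_filtered / dt_with_sound:.1f}x")

The constraint is harsher still in the vertical, where the grid spacing is much smaller, which is part of why filtered or implicitly treated equation sets were so attractive.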
I also think the idea that “weather models work, therefore your criticism of climate models is wrong” is just silly. Weather models just barely “work”, as Browning and others have revealed. Numerically, Navier-Stokes simulations are very difficult and subject to all the problems associated with nonlinear systems.
The real question here is why one would expect a weather model run on a very coarse grid with a huge time step to yield anything meaningful whatsoever, given that it is terrible as a weather model even in the short term. As climate of doom asserted in another venue, the answer climate modelers give is “every time I run the model, I get a reasonable climate.” That is just colorful fluid dynamics and not a scientific argument.

David Young
May 15, 2014 6:11 pm

Nick Stokes references a presentation on CFD and the 787. Be very careful here. Boeing people can also be CFD salesmen, and that is what we have here; it’s an internal advocate. More accurate is an excellent paper in the Journal of Mathematics in Industry, December 2012 I believe, by two Airbus specialists. CFD is postdictive and only occasionally predictive, especially outside the vast range of past testing.

May 16, 2014 5:50 am

Use of CFD as an analog for GCMs is not valid so long as the issues discussed in the citations listed below have not been addressed for each model, numerical solution method, code, piece of software, application procedure, system response function, and user.
Additionally, GCMs as process models must also address the issues.
There is now a very large, robust, and directly applicable literature that has been accepted by many organizations that develop engineering and scientific models, methods, and software for a multitude of diverse applications. The Climate Science community remains the singular exception. It is now an undeniable fact that the Climate Science community continues to avoid even the mention of these matters, preferring instead hand-waving dismissal of both the matters themselves and those who dare to mention them.
Fundamentals of Verification and Validation by Patrick J. Roache
Verification and Validation in Computational Science and Engineering by Patrick J. Roache
Verification and Validation in Scientific Computing by William L. Oberkampf and Christopher J. Roy
Additional references are listed here

David Young
May 17, 2014 10:24 am

Dan Hughes, I basically agree with you. Navier-Stokes is infinitely easy compared to climate. Even NS, though, has serious issues which are often swept under the rug.