The Global Climate Model clique feedback loop

Elevated from a WUWT comment by Dr. Robert G. Brown, Duke University

Frank K. says: You are spot on with your assessment of ECIMs/GCMs. Unfortunately, those who believe in their ability to predict future climate really don’t want to talk about the differential equations, numerical methods or initial/boundary conditions which comprise these codes. That’s where the real problems are…

Well, let’s be careful how you state this. Those who believe in their ability to predict future climate but aren’t in the business don’t want to talk about any of this, and those in the business who aren’t expert in predictive modeling and statistics in general would in many cases prefer not to have a detailed discussion of the difficulty of properly validating a predictive model — a process that basically never ends as new data come in.

However, most of the GCMs and ECIMs are well, and reasonably publicly, documented. It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or the physics, or the initialization, or the methods, or the ODE solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…

Look, I actually have a Ph.D. in theoretical physics. I’ve completed something like six graduate-level math classes (mostly as an undergraduate, but a couple as a physics grad student). I’ve taught (and written a textbook on) graduate-level electrodynamics, which is basically a thinly disguised course in elliptic and hyperbolic PDEs. I’ve written a book on large-scale cluster computing that people still use when setting up compute clusters, I have several gigabytes’ worth of code in my personal Subversion tree, and I cannot keep count of how many languages I either know well or have written at least one program in, dating back to code written on paper tape. I’ve co-founded two companies on advanced predictive modelling on the basis of code I’ve written and a process for doing indirect Bayesian inference across privacy or other data boundaries that was for a long time patent pending, before trying to defend a method patent grew too expensive and cumbersome to continue; the second company is still extant and making substantial progress towards perhaps one day making me rich. I did advanced importance-sampling Monte Carlo simulation as my primary research for around 15 years before quitting that as well. I’ve learned a fair bit of climate science. Of the list above, the only thing I lack is detailed knowledge of and experience in computational fluid dynamics (and I understand the concepts there pretty well, but that isn’t the same thing as direct experience), and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented, to the point where just getting it to build correctly requires dedication and a week or two of effort.

Oh, and did I mention that I’m also an experienced systems/network programmer and administrator? So I actually understand the underlying tools REQUIRED for it to build pretty well…

If I have a hard time getting to where I can — for example — simply build an openly published code base and run it on a personal multicore system to watch the whole thing actually run through to a conclusion, let alone start to reorganize the code, replace underlying components such as its absurd lat/long gridding on the surface of a sphere with rescalable symmetric tessellations to make the code adaptive, isolate the various contributing physics subsystems so that they can be easily modified or replaced without affecting other parts of the computation, and so on, you can bet that there aren’t but a handful of people worldwide who are going to be able and willing to do this without a paycheck and substantial support. How does one get the paycheck, the support, the access to supercomputing-scale resources to enable the process? By writing grants (and having enough time to do the work, in an environment capable of providing the required support in exchange for indirect cost money at fixed rates, with the implicit support of the department you work for) and getting grant money to do so.

And who controls who, of the tiny handful of people broadly enough competent in the list above to have a good chance of being able to manage the whole project on the basis of their own directly implemented knowledge and skills AND who has the time and indirect support etc, gets funded? Who reviews the grants?

Why, the very people you would be competing with, who all have a number of vested interests in there being an emergency, because without an emergency the US government might fund two or even three distinct efforts to write a functioning climate model, but they’d never fund forty or fifty such efforts. It is in nobody’s best interests in this group to admit outsiders — all of those groups have grad students they need to place, jobs they need to have materialize for the ones that won’t continue in research, and themselves depend on not antagonizing their friends and colleagues. As AR5 directly remarks — of the 36 or so named components of CMIP5, there aren’t anything LIKE 36 independent models — the models, data, methods, and code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of a newly funded program at a new school or institution.

IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing, and any such rewrite would only make the problem at the poles worse — quadrature on a spherical surface using a rectilinear lat/long grid is long known to be enormously difficult and to give rise to artifacts and nearly uncontrollable error estimates.
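To make the lat/long quadrature point concrete, here is a toy back-of-envelope calculation (nothing GCM-specific, just spherical geometry) showing how badly a uniform angular grid behaves near the poles: cell areas and east-west widths collapse, which is exactly what drives the artifacts and error-control problems just mentioned.

```python
# Rough illustration of the "pole problem" on a uniform lat/long grid:
# cells shrink drastically toward the poles, so a fixed angular resolution
# gives wildly non-uniform physical resolution.
import numpy as np

R = 6.371e6                     # Earth radius in metres
dlon = np.radians(1.0)          # a hypothetical 1 degree x 1 degree grid

lat_edges = np.radians(np.arange(-90, 91, 1.0))
# exact spherical area of each 1-degree latitude band slice
areas = R**2 * dlon * np.diff(np.sin(lat_edges))
# east-west cell width at each band centre
ew_width = R * np.cos(np.radians(np.arange(-89.5, 90, 1.0))) * dlon

print(f"equatorial cell area : {areas[90]/1e6:10.1f} km^2")
print(f"polar cell area      : {areas[0]/1e6:10.1f} km^2")
print(f"area ratio (eq/pole) : {areas[90]/areas[0]:.0f}x")
print(f"east-west width: {ew_width[-1]/1e3:.1f} km at 89.5N vs {ew_width[90]/1e3:.1f} km at the equator")
```

A 1° cell near the pole has roughly a hundredth the area of an equatorial cell, so a resolution that is marginal in the tropics is simultaneously wastefully fine, and numerically troublesome, at the poles.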

But until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors — which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the `confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis” — the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
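For readers who want to see why the iid treatment matters, here is a minimal synthetic sketch (toy numbers, not AR5’s actual procedure): if the ensemble members share a common error component, the naive 1/√M standard error badly understates the real uncertainty of the multi-model mean.

```python
# Minimal sketch of why treating M correlated model runs as independent
# samples understates uncertainty: the naive standard error of the
# multi-model mean shrinks like 1/sqrt(M), but a shared bias does not
# average away.
import numpy as np

rng = np.random.default_rng(0)
M, trials = 36, 20000
shared = rng.normal(0.0, 1.0, size=trials)           # error component common to all "models"
indep  = rng.normal(0.0, 1.0, size=(trials, M))      # genuinely independent component
models = 0.8 * shared[:, None] + 0.6 * indep         # each model = shared bias + own noise

naive_se  = models.std(axis=1, ddof=1).mean() / np.sqrt(M)   # what the iid assumption reports
actual_sd = models.mean(axis=1).std(ddof=1)                  # true spread of the ensemble mean

print(f"naive iid standard error of the mean : {naive_se:.3f}")
print(f"actual spread of the ensemble mean   : {actual_sd:.3f}")
```

With 36 “models” sharing most of their error, the naive calculation claims an uncertainty several times smaller than the true spread of the ensemble mean — which is the statistical sleight of hand being objected to here.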

Charles Nelson

Warmists often ask me just how such a giant conspiracy could exist amongst so many scientists…this is one very good particular example of the kind of structures that sustain CAGW.
Thanks for the insight!

Roy Spencer

I exchanged a few emails with mathematician Chris Essex recently who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-time scale integrations without destroying the underlying physics. Mass and energy don’t get conserved. Then they try to fix this with energy “flux adjustments”, which are just a band-aid covering up the problem.
We spent many months trying to run the ARPS cloud-resolving model in climate mode, and it has precisely these problems.
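The conservation issue can be illustrated with a toy 1-D advection problem (a sketch only, not ARPS or any GCM’s actual numerics): a flux-form, finite-volume-style update conserves the tracer total to round-off, while a non-conservative advective-form finite difference with a spatially varying wind drifts over a long integration — the sort of drift that “flux adjustments” then have to paper over.

```python
# Toy 1-D advection with a spatially varying, positive "wind":
# flux form conserves total mass exactly; advective form does not.
import numpy as np

n, dx, dt, steps = 200, 1.0, 0.2, 20000
x = np.arange(n) * dx
u = 1.0 + 0.5 * np.sin(2 * np.pi * x / (n * dx))   # smooth, positive, varying wind
c0 = np.exp(-((x - 50.0) / 10.0) ** 2)             # initial tracer blob

def step_flux_form(c):
    # upwind flux through each cell face; periodic boundaries
    flux = u * c                                   # u > 0 everywhere, so upwind = the cell itself
    return c - dt / dx * (flux - np.roll(flux, 1))

def step_advective_form(c):
    # non-conservative form: u * dc/dx with an upwind difference
    return c - dt / dx * u * (c - np.roll(c, 1))

ca, cb = c0.copy(), c0.copy()
for _ in range(steps):
    ca, cb = step_flux_form(ca), step_advective_form(cb)

print(f"initial total mass       : {c0.sum():.6f}")
print(f"flux-form total mass     : {ca.sum():.6f}")
print(f"advective-form total mass: {cb.sum():.6f}")
```

Both updates satisfy the Courant condition here; only the formulation differs, and only the flux form keeps the global budget closed.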

dearieme

“inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere”: that reminds me vividly of my first reaction, years ago, when I started to look into “Global Warming”. It was apparent to me that much of the modelling was “insanely stupid” – that it was being done by people who were, by the general standards of the physical sciences, duds.
My qualification to make such a judgement included considerable experience of the modelling of physicochemical systems, starting in 1967. And unlike many of the “climate scientists” I also had plenty of experience in temperature measurement.
My analysis has long been that they started off with hubris and incompetence; the lying started later as they desperately tried to defend their rubbish.

Man Bearpig

Well said that man.
Even the most basic-sounding computer models very soon become more complex. I would in no way be described as a computer programmer, but I did a course many years ago in VB. Part of it involved writing a computer ‘model’ or simulation of a scenario that went something like this.
In your city you have 10,000 parking machines, on average 500 break down every day. It takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?
Simple? Not on your nelly!
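For the curious, the naive arithmetic the exercise invites goes something like this (a sketch only, under the assumptions exactly as stated — every fault fixed in the field, one visit each):

```python
# Back-of-envelope arithmetic for the parking-machine exercise.
breakdowns_per_day = 500
minutes_per_fault  = 30 + 20          # repair time plus travel time
shift_minutes      = 8 * 60           # one 8-hour shift per engineer per day

workload  = breakdowns_per_day * minutes_per_fault   # 25,000 minutes of work per day
engineers = -(-workload // shift_minutes)            # ceiling division

print(f"daily workload: {workload} minutes")
print(f"engineers needed at 100% utilisation, no queueing: {engineers}")
```

The catch is everything the naive sum leaves out — queueing, travel batching, shift coverage, spares — which is the point: even a “simple” model is mostly hidden assumptions.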

Thank you Dr Robert G Brown. You have written exactly what I suspected all along. To put it bluntly, the climate models cannot be relied upon. The input data being organised on lat/long grids “gives rise to nearly uncontrollable errors”. I have asked actuaries who have been using this data similar questions about its inherent biases, but they didn’t reply. I will be very interested in what Bob Tisdale has to say in this post.
I seem to have a similar history to you, as I started programming almost 50 years ago, initially using plug-board machines, and have kept it up as something peripheral to my current job as an actuary. I figured out the failings of climate models a long time ago but lacked your ability and experience to express it so eloquently.

Man Bearpig

” Charles Nelson says:
May 7, 2014 at 3:41 am
Warmists often ask me just how such a giant conspiracy could exist amongst so many scientists…this is one very good particular example of the kind of structures that sustain CAGW.
Thanks for the insight!

I think that it is not that there is a conspiracy; it is more that they do not understand basic scientific principles and ‘believe’ that what they are doing is for the good of mankind (and their pockets).

Jos

There are actually people doing research on how independent GCMs really are.
An example is this paper by Pennell and Reichler [2011], but there are a few more.
http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3814.1
http://www.inscc.utah.edu/~reichler/publications/papers/Pennell_Reichler_JC_2010.pdf
They conclude, among other things, that “For the full 24 member ensemble, this leads to an Meff that, depending on method, lies only between 7.5 and 9 when we control for the multi-model error (MME). These results are quantitatively consistent with that from Jun. et al. (2008a, 2008b), who also found that CMIP3 cannot be treated as a collection of independent models.”
(Meff = effective number of models)
So, according to them, to get an estimate about how many independent models there are one should divide the actual number of models by three …
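For anyone who wants to play with the idea, here is a minimal synthetic sketch (illustrative only — not Pennell and Reichler’s actual method or data) of how an “effective number of models” can come out much smaller than the nominal ensemble size when model errors are correlated; the estimator used here is a simple eigenvalue participation ratio of the inter-model error correlation matrix.

```python
# Toy estimate of an "effective number of models" from correlated errors.
import numpy as np

rng = np.random.default_rng(1)
M, npts = 24, 500
shared = rng.normal(size=(npts, 1))                        # error pattern shared across models
errors = 0.55 * shared + 0.84 * rng.normal(size=(npts, M)) # each model's error field

C = np.corrcoef(errors, rowvar=False)      # M x M inter-model error correlation matrix
eig = np.linalg.eigvalsh(C)
meff = eig.sum() ** 2 / (eig ** 2).sum()   # participation-ratio estimate of effective size

print(f"nominal ensemble size : {M}")
print(f"effective size Meff   : {meff:.1f}")
```

The stronger the shared error, the smaller Meff falls relative to the 24 nominal members.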

Are they designed to actually be predictive in the hard science tradition or are these models actually just artefacts illustrating a social science theory of changing practices in political structures and social governance and economic planning and human behavior? My reading of USGCRP, especially the explicit declaration to use K-12 education to force belief in the models and the inculcation of new values to act on the models is that this is social science or the related system science of the type Ervin Laszlo has pushed for decades.
It’s no different from the Club of Rome Limits to Growth or World Order Models Projects of the 70s that admitted in fine print that they were not modelling physical reality. They were simply trying to gain the political power and taxpayer money to alter physical reality. That is also my reading of the Chapter in yesterday’s report on Adaptation.
I keep hyping this same point because, by keeping the focus just on the lack of congruence with the hard science facts, we are missing where the real threats from these reports are coming from. They justify government action to remake social rules without any kind of vote, notice, or citizen consent. It makes us the governed in a despot’s dream utopia that will not end well.

Bloke down the pub

Now doesn’t it feel better with that off your chest? By the way, when you say ‘It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself’, you weren’t thinking of someone who can’t use Excel, like Phil Jones, were you?

Bloke down the pub

Man Bearpig says:
May 7, 2014 at 4:00 am
The answer is one. The sooner they are all broken the better.

Dr. Brown is looking at a real solution. But the modelers are not. They cannot be. They use garbage as input, which means that regardless of the model, the output will always be garbage. Until they stop monkeying with past temperatures, they have no hope of modeling future climate.

emsnews

The real battle is over energy. Note how almost all our wars are against fossil fuel energy exporting nations. Today’s target is Russia.
Unlike previous victims, Russia has a large nuclear armed force that can destroy the US!
The US is pushing for a pipeline for ‘dirty oil’ from Canada while at the same time telling citizens that oil is evil, and the US has lots of coal while saying coal is evil.
The rulers want to do this because they know these things are LIMITED. Yes, we are at the Hubbert Oil Peak, which isn’t a matter of one year or ten years, but it is real. Energy is harder to find and more expensive to process.
To preserve this bounty of fossil fuels, the government has to price it out of the reach of the peasants, who are the vast bulk of the population. This is why our media are all amazed by and in love with tiny houses while the elites build bigger and bigger palaces.

HomeBrewer

“have several gigabytes worth of code in my personal subversion tree”
I hope it’s not several gigabytes of your code only, or else you are a really bad programmer (which often is the case, since everybody who’s taken some kind of university course considers themselves fit for computer programming).

Very good exposé; it tallies nicely with the statement that GCMs are complex repackaged opinions.

While clique may be appropriate, CLAQUE is then more appropriate – a paid clique. The Magliozzi grace themselves by not being truthful – Clique and Claque, the Racket Brothers.

Tom in Florida

Man Bearpig says:
May 7, 2014 at 4:00 am
“In your city you have 10,000 parking machines, on average 500 break down every day. it takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?”
It doesn’t matter how many you NEED, it’s government work so figure how many they hire.
Don’t forget that each engineer needs a driver (union rules apply) plus you will need dispatchers, administrative personnel and auditors. Each of those sections would need supervisors who will need supervisors themselves. The agency would end up costing more than the income from the parking meters so they will then need to be removed, most likely at a cost that is higher than the installation cost, most likely from the same contractor who installed them and who is no doubt related to someone in the administration. All in all just a typical government operation.
But back to the subject matter by Dr Brown. Most of the people to whom I speak about this haven’t the faintest clue that models are even involved. They believe that it is all based on proven science with hard data. When models are discussed the answer is very often “they must know what they are doing or the government wouldn’t fund them”.
It’s still going to be a long, uphill battle against a very good propaganda machine.

sleeping bear dunes

I always, absolutely always enjoy Dr. Brown’s comments. There are times when I have doubts about my skepticism. And then I read some of his perspectives and go to sleep that night feeling that I am not all that dumb after all. It is comforting that there are some real pros out there thinking about this stuff.

kadaka (KD Knoebel)

From Man Bearpig on May 7, 2014 at 4:00 am:

In your city you have 10,000 parking machines, on average 500 break down every day. It takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?

None. Engineers don’t fix parking meters, repair techs do.
And at that failure rate, it’s moronic to do field repair. Have about an extra thousand working units on hand, drive around in sweeps and swap out the nonworking heads, then repair the broken ones centrally, in batches, in steps (empty all coin boxes, then open all cases, then check all…). Which throws out your initial assumptions on times.
However, at an average of 10,000 broken units every 20 days, you might need an engineer or three. As expert witnesses, when you sue the supplier and/or manufacturer who stuck you with such obviously mal-engineered, inherently inadequate equipment. With an average failure rate of eighteen times a year per unit, anyone selling those deserves to be sued.
Isn’t it wonderful how models and assumptions fall apart from a simple introductory reality check?

“My analysis has long been that they started off with hubris and incompetence; the lying started later as they desperately tried to defend their rubbish.”
It has always been interesting witnessing how much ‘faith’ the AGW hypothesists, dreamers and theorists placed in those models, knowing full well just how defective they really were and are.

It’s good to hear Roy Spencer mention Chris Essex’s critique of climate models, as I don’t think anyone has uncovered the reality of the GCM challenge – technical and political – for me as much as Robert Brown, since I read Essex and McKitrick’s Taken by Storm way back.
Just one word of warning to commenters like dearieme: the ‘insanely stupid’ referred to something in what we tend to call the architecture of a software system. The current GCMs don’t have the ability to scale the grid size and that’s insane. The people chosen to program the next variant of such a flawed base system are, as Brown says, taken from “the tiny handful of people broadly enough competent in the list”. They aren’t stupid by any conventional measure but the overall system, IPCC and all, most certainly is. Thanks Dr Brown – this seems to me one of the most important posts I’ve ever seen on WUWT.

To understand GCMs and how they fit in, you first need to understand and acknowledge their origin in numerical weather prediction. That’s where the program structure originates. Codes like GFDL are really dual-use.
Numerical Weather Prediction is big. It has been around for forty or so years, and has high-stakes uses. Performance is heavily scrutinised, and it works. It would be unwise for GCM writers to get too far away from their structure. We know there is a core of dynamics that works.
NWP has limited time range because of what people call chaos. Many things are imperfectly known, and stuff just gets out of phase. The weather predicted increasingly does not correspond to the initial state. It is in effect random weather but still responding to climate forcings. GCMs are NWPs run beyond their predictive range.
GCMs are traditionally run with a long wind-back period. That is to ensure that there is indeed no dependence on initial conditions. The reason is that these are not only imperfectly known, especially way back, but are likely (because of that) to cause strange initial behaviour, which needs time to work its way out of the system.
So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Niñas, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behaviour, but they can’t expect to be in phase with any particular realisation.
Their purpose is not to predict decadal weather but to generate climate statistics reflecting forcing. That’s why when they do ensembles, they aren’t looking for the model that does best on say a decadal basis. That would be like investing in the fund that did best last month. It’s just chance.
I understand there are now hybrid models that do try and synchronise with current states to get a shorter term predictive capability. This is related to the development of reanalysis programs, which are also a sort of hybrid. I haven’t kept up with this very well.
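A minimal illustration of the initial-condition point (Lorenz’s 1963 toy system with a crude Euler integrator, not a weather model): two states that differ by one part in a million end up on completely different trajectories within a few tens of time units, even though the equations being integrated are exact.

```python
# Sensitive dependence on initial conditions in the Lorenz-63 system.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-6, 0.0, 0.0])      # tiny initial-condition error

for step in range(1, 4001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
```

That is the sense in which NWP has a limited range, and a GCM run beyond that range is, as described above, producing weather that is random in phase but still responds to forcing.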

jaffa68

I have a computer model that suggests the planet will be attacked by wave after wave of space aliens, and it’s hopeless because every time we destroy one incoming wave the next is stronger and faster; the consequences are inevitable – we will lose and we will all be killed – the computer model proves it. Oh wait – I’ve just realised I was playing Space Invaders. Panic over.

Thank you, Dr Brown, for an excellent and informative comment. For me, the GCMs are clearly the worst bit of science in the game, mainly because they are so difficult for anyone to understand and hence relatively easy to defend in detail. Of course, once the black box results are examined their uselessness is easy to see, but their supporters continue to lap up tales of their ‘sophistication’.

dccowboy

Dr Brown,
Thank you for this explanation. It is interesting to note that the scenario you lay out means that there is no over-arching ‘conspiracy’ among the modelers. Their efforts are a natural social phenomenon, sort of like a ‘self-organizing’ organic compound. I think there are many examples of this sort of ‘group dynamic’ in other fields.
I do have some (limited) experience in the construction of feedback systems from when I started preliminary coursework toward pursuing a PhD. I quickly became frustrated with the economic models, as the first thing that was always stated was, “in order to make this model mathematically tractable, the following is assumed” – and the assumptions that followed essentially assumed away the problem. I suspect it is the same with the GCMs.
Could you explain to the uninitiated why the use of “non-rescalable gridding of a sphere” is “insanely stupid”? I have no doubt you are correct, but, when talking with my AGW friends I would like to be able to offer an explanation rather than simply make the statement that it is ‘stupid’.
Thanks.

R2Dtoo

An interesting spinoff from this post would be an analysis of just how big this system has grown over the last three or four decades. I did a cursory search a while back and came up with 39 Earth System Models in 21 centres in 12 countries, but got lost in the details and inter-connections of the massive network. I did my PhD at Boulder from 1967-70. At that time NCAR was separate from the campus and virtually nothing trickled down through most programs. I now see many new offshoots, institutes and programs that didn’t exist 45 years ago. Many of these have to be on “soft money”, with their continuation ensured only through grants. The university/government interface has to be a web of state/federal agreements so complex as to be difficult to decipher. The university logically extracts huge sums of money from the grants as “overhead”, monies that become essential to the campus for survival. Perhaps such a study has been done. If so, it would be instructive to see just how many “scientists” and programs actually do survive only on federal money, and are solely dependent for continuation on buying into what government wants. The fact that the academies worldwide also espouse global warming/climate change shows that they recognize that their members are dependent on the government. It is a sad, but deeply entrenched, system. Can you imagine the scramble if senior governments decided to reduce the dispersed network down to a half-dozen major locations, keep the best, and shed the rest?

John W. Garrett

Anybody who knows anything about modeling complex, dynamic, multivariate, non-linear systems knows full well that John von Neumann was absolutely right when he said:
Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.
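Von Neumann’s quip is easy to demonstrate in a few lines (a toy polynomial fit, not the famous four-complex-parameter elephant drawing): give a “model” as many free parameters as data points and it will reproduce the data exactly, while telling you nothing about anything outside the fitted range.

```python
# With 4 parameters and 4 data points, the fit is exact regardless of
# whether the model captures any real physics.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.3, 2.1, 1.7, 3.9])          # arbitrary made-up "observations"

coeffs = np.polyfit(x, y, deg=3)            # 4 parameters for 4 points
fit = np.polyval(coeffs, x)

print("residuals inside the fit range:", np.round(y - fit, 12))
print("extrapolation to x = 6        :", np.polyval(coeffs, 6.0))
```

Which is why fit quality over a reference interval, by itself, says very little about predictive skill.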

Reblogged this on gottadobetterthanthis and commented:
Qualified? Yes, Dr. RGB is supremely qualified. And as has been said, his science-fu is good.

Jos says: “An example is this paper by Pennell and Reichler [2011]…”
Thanks for the link.

Roy Spencer says: May 7, 2014 at 3:54 am
“I exchanged a few emails with mathematician Chris Essex recently who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-time scale integrations without destroying the underlying physics. Mass and energy don’t get conserved.”

GCMs generally don’t use finite difference methods for the key dynamics of horizontal pressure-velocity. They use spectral methods, for speed mainly, but the conservation issues are different. For other dynamics they use either finite difference or finite volume. If conservation was a problem they would use finite volume with a conservative formulation.
GCMs are not more vulnerable to conservation loss than standard CFD. And that has been a standard engineering tool for a long while now.

Geoff Sherrington

As I suggested (on CA, IIRC) on seeing an ensemble with a coloured envelope of ‘confidence’ some years ago, a more appropriate population for its derivation would be all model runs by all participants, not a cherry picked subset.
Thank you, Dr Brown.
The treatment of error derivation can be a handy litmus test of the quality of the paper – and of its authors.

Nick Stokes says: “So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Niñas, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behavior…”
“ENSO behavior” is well-phrased. In other words, climate models create noise in the tropical Pacific but that noise has no relationship with actual ENSO processes, because climate models cannot simulate ENSO processes.

Thanks, rgbatduke, it was well-written and understandable.
And thanks, Anthony, for promoting the comment to a post.

Jack Cowper

Thank you Dr Brown,
Very much enjoyed and have learned much from this comment.

Alan Watt, Climate Denialist Level 7

Many years ago I was employed in a newsroom automation project for the Chicago Tribune, during the course of which I acquired and used extensively a distribution of RATFOR and assorted UNIX-like tools from U. of Arizona to run on a DEC PDP-10. “RATFOR”, for those who believe programming started with Java, is RATional FORtran — see Software Tools, Brian W. Kernighan and P. J. Plauger.
Anyway during the course of this project I attended a RATFOR conference and heard various presentations on new tools, proposed extensions to the RATFOR preprocessor and even some suggested extensions to FORTRAN itself. After one such talk a participant from one of the big research labs (either Lawrence Berkeley or Lawrence Livermore) got up and told the speaker something like (I’m paraphrasing here):

You can’t change FORTRAN; you can’t ever change FORTRAN. I support the research computing environment for the lab and they use a number of locally written subroutines where each one is several hundred thousand lines of code. They’ve been worked on by dozens of people over several decades and nobody understands completely how any of them work. You can’t ever change FORTRAN.

I don’t recall if it was ever mentioned what these huge subroutines were used for. Wouldn’t it be a cheerful thought if they were used in reactor safety design?
Your description of GCM code sounds a lot like those old FORTRAN subroutines: so large and complex and worked on by dozens of people over a decade or more that nobody really understands exactly what they do any more.
All I can say is it’s a good thing we don’t use vendor-proprietary, trade-secret computer software to record election votes .. oh, wait, we do now in Georgia.

richard

Nick Stokes says:
May 7, 2014 at 5:18 am
Roy Spencer says: May 7, 2014 at 3:54 am
“I exchanged a few emails with mathematician Chris Essex
————————
Christopher Essex in full swing on climate models, great fun to watch.

Dagfinn

By the way: “However, the Bray and von Storch survey also reveals that very few of these scientists trust climate models — which form the basis of claims that human activity could have a dangerous effect on the global climate. Fewer than 3 or 4 percent said they “strongly agree” that computer models produce reliable predictions of future temperatures, precipitation, or other weather events. More scientists rated climate models “very poor” than “very good” on a long list of important matters, including the ability to model temperatures, precipitation, sea level, and extreme weather events.” http://pjmedia.com/blog/bast-understanding-the-global-warming-delusion-pjm-exclusive/
That seems to be the scientific consensus about climate models.

harkin

If the hypothetical “parking machines” were as reliable as the climate models, you’d have 9,700 broken machines and 300 working ones.
The term “good enough for government work” seems apt.

Dave Yaussy

I always appreciate Dr. Brown’s perspective. He writes in a way that laymen like me can understand, without being condescending.

Alex Hamilton

Roy Spencer referred above to models being doomed, but the real reason they are doomed has its foundation in an incorrect understanding of physics. There is no understanding among climatologists that, whereas Pierrehumbert wrote about “convective equilibrium,” there can only be one state of equilibrium according to the Second Law of Thermodynamics, and that state is thermodynamic equilibrium. The two are the same.
Think about it, Dr Spencer, in relation to your incorrect assumption of isothermal conditions, which is also inherent in the models. Likewise, the models incorrectly apply Kirchhoff’s Law of Radiation to thin atmospheric layers. The absorbed solar radiation in the troposphere does not have a Planck function distribution, so Kirchhoff’s Law is inapplicable.
Also, because the models completely overlook the fact that the state of thermodynamic equilibrium (which is the same as “convective equilibrium”) has no net energy flows, and is isentropic without unbalanced energy potentials (as physics tells us), they don’t recognize the fact that the resultant thermal profile has a non-zero gradient. Water vapor and other radiating gases like carbon dioxide do not make that gradient steeper, as we know. It is already too steep for comfort, and so fortunately we have water vapor to lower the gradient, thus reducing the surface temperature.

Bob Tisdale says: May 7, 2014 at 5:21 am
“In other words, climate models create noise in the tropical Pacific but that noise has no relationship with actual ENSO processes, because climate models cannot simulate ENSO processes.”

No, they get the dynamics right. They will oscillate in the right way. But they don’t match the Earth’s phases. Even with all the knowledge we can assemble, we can’t predict ENSO.
It’s like you can make a bell, with all the right sound quality. But you can’t predict when it will ring.

cd

Dr. Robert Brown
This is a brilliant piece, and it complements a talk I heard by Dr Essex on the same issue. He argued, like you, that to model energy transfer properly you need to solve the Navier-Stokes equations, and that this would require a cellular grid with a resolution of c. 1 cubic mm. He also added that due to numerical instabilities solutions tend to diverge rather than converge. Now I’m not pretending to be an expert on these issues, and I may even sound confused, but I appreciate the issues and why they are such fundamental problems for those who claim GCMs are based on fundamental physics. But even if we assume this claim to be completely true, the implementation of that fundamental physics in code is far from perfect.
…run at a finer resolution without basically rewriting the whole thing…
This is news to me. I heard a talk where the speaker, and perhaps I picked her up wrong, suggested that they can run their models at higher volumetric resolutions but at the expense of temporal scale (shorter time periods) due to limits in computer power; so, if she were correct, your statement doesn’t hold. My experience in this type of modelling would suggest that, if you’re correct, they’ve made a serious design error.
Dr Essex’s talk:
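To put the c. 1 cubic mm figure above in perspective, the back-of-envelope cell count for a direct simulation at that resolution is (rough numbers only):

```python
# Rough arithmetic: number of 1 mm^3 cells needed to cover the lower atmosphere.
surface_area_m2 = 5.1e14          # Earth's surface area
depth_m         = 1.0e4           # ~10 km, most of the troposphere
cell_volume_m3  = 1.0e-9          # 1 cubic millimetre

cells = surface_area_m2 * depth_m / cell_volume_m3
print(f"grid cells required: {cells:.1e}")
```

Roughly 5 × 10^27 cells before a single timestep is taken, which is why every practical model, in CFD as in climate, solves averaged equations with sub-grid parameterizations instead.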

Richard Drake says: May 7, 2014 at 5:01 am
“The current GCMs don’t have the ability to scale the grid size and that’s insane.”

It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.
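A quick numerical sketch of that constraint (illustrative numbers only — real models filter fast waves in various ways, but the scaling is the point): the maximum stable timestep shrinks with the grid spacing, so each halving of the spacing costs roughly 2^3 more cells times 2 more timesteps in 3-D, or 2^2 × 2 if only the horizontal grid is refined.

```python
# Courant-type scaling sketch: dt_max ~ dx / c_sound, and total cost grows as
# (refinement factor)^3 in cells times (refinement factor) in timesteps.
sound_speed = 340.0                  # m/s, order of magnitude
base_dx_km = 200
for dx_km in (200, 100, 50, 25):
    dt_max = dx_km * 1000.0 / sound_speed               # seconds, at a Courant number of 1
    cost = (base_dx_km / dx_km) ** 3 * (base_dx_km / dx_km)
    print(f"dx = {dx_km:3d} km -> dt_max ~ {dt_max:5.0f} s, relative cost ~ {cost:5.0f}x")
```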

cd says: May 7, 2014 at 5:52 am
“He argued like you, that to model energy transfer properly you need to solve Navier-Stokes equations, and this would require a cellular grid with a resolution of c. 1 cubic mm.”

You could say exactly the same of computational fluid dynamics, or numerical weather forecasting. They all work.

Mark Hladik

My opinion:
This is the reason you have the ‘blackadderthe4th’-like trolls. All they can do is parrot their mind-numbing Richard Alley Facetube videos, which reinforce their beliefs. Get down to brass tacks, the hard reality of number-crunching, and their eyes just glaze over.
I have a strong Math background, Physics background, and Geology background, and can follow most discussions up to a point. Bottom line is, as Dr. Brown so eloquently states, numerical models are worthless, and any “policy” based upon them will ultimately fail.
A well-written examination of the cold, hard, facts, Dr. Brown!
Mark H.
Mods: any chance for cross-post at Jo?

Berényi Péter

With the terrestrial climate system we have a single run of a unique physical instance which is way too large to fit into the lab, so no one can perform controlled experiments on it. However, it is a member of a wide class, that of irreproducible* quasi-stationary non-equilibrium thermodynamic systems, which does have members one could perform experiments on, with as many experimental runs as one wishes.
Therefore we should abandon climate modelling for a while and construct computational models of experimental setups of this kind, which can be validated using standard lab procedures.
Trouble is, of course, there is no general theory available for said class, which makes the endeavor risky, but also interesting at the same time. By the way, it also shows how fatuous it is to venture into climate with no proper preparations.
* A thermodynamic system is irreproducible if microstates belonging to the same macrostate can evolve into different macrostates in a short time, which is certainly the case with chaotic systems. It makes the very definition of entropy difficult, let alone rate of entropy production. And that’s the (theoretical) challenge.

cd

Nick
“You could say exactly the same of computational fluid dynamics, or numerical weather forecasting. They all work.”
Well, they start diverging from observations straight away (as is to be expected); eventually, after a relatively short period of time, the divergence is so great that they’re next to useless. In short, the rate of divergence tells you how good they are. Short-term atmospheric models are good for 1-2 days, after which the uncertainty increases dramatically; by day 5 they’re no better than a guess.

For those who think that, despite CFD, NWP etc., the Navier-Stokes equations are basically insoluble, here is something to explain:
[youtube http://www.youtube.com/watch?v=DEhtx0atcFM]

Merrick

So, correct me if I’m wrong, but since there are a number of us around here who have a number of these skills (sadly, the computational fluid dynamics experience that Dr. Brown is lacking I also lack, but like Dr. Brown I have the necessary physics to understand it in principle, just not the experience), don’t we just need to take the (or some) actual code as a (very poor) template, turn it into a true Open Source project, and all start biting off chunks of it to wrestle with — properly organize it, document it, and also start to establish and qualify (document and error-bound) initialization conditions, boundary conditions, etc.?
It could take a couple of years, and probably two or three out of the 30 or 40 who jump in will do the lion’s share of the work, but at the end of the day wouldn’t we have code, and wouldn’t it be interesting to know what it said?
I’m already imagining a very modular approach where we can test and validate code against known problems. For instance, the differential equations which describe dynamics in the atmosphere (and ocean, and between the atmosphere and ocean) are identical to the equations which describe well-known and well-studied dynamics on smaller scales – so validation of the fluid dynamics code could be independently and reproducibly verified. And, in addition, so relevant to one of Dr. Brown’s main themes, its predictive abilities and absolute predictive error could be analyzed as a function of SCALING to real-world problems in order to inform how scaling/rescaling calculations on the sphere impact error.
Anyways – just a thought…

Nick Stokes:

It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.

I don’t think you’ve understood what I was driving at – and what I took Robert Brown to be driving at. I fully understand the blowing up of compute times as grid size and thus timesteps are reduced. But as Moore’s Law continues to hold (or something close) one might well wish to reduce both in a new generation of models and model runs. Not to be able to do this – and do it easily – is, in Brown’s words, insanely stupid. That’s why I characterised it as a software architecture problem.
I’m open to correction on what Dr Brown meant, but that’s what I took from his words.