Elevated from a WUWT comment by Dr. Robert G. Brown, Duke University
Frank K. says: You are spot on with your assessment of ECIMs/GCMs. Unfortunately, those who believe in their ability to predict future climate really don’t want to talk about the differential equations, numerical methods or initial/boundary conditions which comprise these codes. That’s where the real problems are…
Well, let’s be careful how you state this. Those who believe in their ability to predict future climate who aren’t in the business don’t want to talk about all of this, and those who aren’t expert in predictive modeling and statistics in general in the business would prefer in many cases not to have a detailed discussion of the difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.
However, most of the GCMs and ECIMs are well, and reasonably publicly, documented. It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or physics, or initialization, or methods, or the ODE solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…
Look, I actually have a Ph.D. in theoretical physics. I’ve completed something like six graduate level math classes (mostly as an undergraduate, but a couple as a physics grad student). I’ve taught (and written a textbook on) graduate level electrodynamics, which is basically a thinly disguised course in elliptical and hyperbolic PDEs. I’ve written a book on large scale cluster computing that people still use when setting up compute clusters, and have several gigabytes worth of code in my personal subversion tree and cannot keep count of how many languages I either know well or have written at least one program in, dating back to code written on paper tape. I’ve co-founded two companies on advanced predictive modelling on the basis of code I’ve written and a process for doing indirect Bayesian inference across privacy or other data boundaries that was for a long time patent pending before trying to defend a method patent grew too expensive and cumbersome to continue; the second company is still extant and making substantial progress towards perhaps one day making me rich. I did advanced importance-sampling Monte Carlo simulation as my primary research for around 15 years before quitting that as well. I’ve learned a fair bit of climate science. I basically lack a detailed knowledge and experience of only computational fluid dynamics in the list above (and understand the concepts there pretty well, but that isn’t the same thing as direct experience), and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented, to the point where just getting it to build correctly requires dedication and a week or two of effort.
Oh, and did I mention that I’m also an experienced systems/network programmer and administrator? So I actually understand the underlying tools REQUIRED for it to build pretty well…
If I have a hard time getting to where I can — for example — simply build an openly published code base and run it on a personal multicore system to watch the whole thing actually run through to a conclusion, let alone start to reorganize the code, replace underlying components such as its absurd lat/long gridding on the surface of a sphere with rescalable symmetric tesselations to make the code adaptive, isolate the various contributing physics subsystems so that they can be easily modified or replaced without affecting other parts of the computation, and so on, you can bet that there aren’t but a handful of people worldwide who are going to be able to do this and willing to do this without a paycheck and substantial support. How does one get the paycheck, the support, the access to supercomputing-scale resources to enable the process? By writing grants (and having enough time to do the work, in an environment capable of providing the required support in exchange for indirect cost money at fixed rates, with the implicit support of the department you work for) and getting grant money to do so.
And who controls who, of the tiny handful of people broadly enough competent in the list above to have a good chance of being able to manage the whole project on the basis of their own directly implemented knowledge and skills AND who has the time and indirect support etc, gets funded? Who reviews the grants?
Why, the very people you would be competing with, who all have a number of vested interests in there being an emergency, because without an emergency the US government might fund two or even three distinct efforts to write a functioning climate model, but they’d never fund forty or fifty such efforts. It is in nobody’s best interests in this group to admit outsiders — all of those groups have grad students they need to place, jobs they need to have materialize for the ones that won’t continue in research, and themselves depend on not antagonizing their friends and colleagues. As AR5 directly remarks — of the 36 or so named components of CMIP5, there aren’t anything LIKE 36 independent models — the models, data, methods, code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of a newly funded program at a new school or institution.
IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing, and any such rewrite would only make the problem at the poles worse — quadrature on a spherical surface using a rectilinear lat/long grid is long known to be enormously difficult and to give rise to artifacts and nearly uncontrollable error estimates.
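To put a rough number on the pole problem with a rectilinear lat/long grid, here is a minimal Python sketch (my own illustration; nothing below comes from any actual model code). It computes the exact spherical areas and east-west widths of the cells of a uniform 1-degree grid:

```python
import numpy as np

# Minimal sketch (not from any GCM): cell areas on a regular 1-degree
# latitude/longitude grid, to show how cells shrink toward the poles.
R = 6.371e6                       # Earth radius in metres
dlon = np.radians(1.0)            # 1-degree grid spacing
lat_edges = np.radians(np.arange(-90.0, 90.0 + 1.0, 1.0))

# Exact spherical cell area between latitude edges: R^2 * dlon * (sin(lat2) - sin(lat1))
areas = R**2 * dlon * np.diff(np.sin(lat_edges))            # one value per latitude band
zonal_width = R * np.cos(np.radians(np.arange(-89.5, 90.0, 1.0))) * dlon  # east-west cell width

print(f"equator cell area / polar cell area ~ {areas.max() / areas.min():.0f}")
print(f"east-west cell width: {zonal_width.max()/1e3:.0f} km at the equator, "
      f"{zonal_width.min()/1e3:.1f} km next to the pole")
```

On a 1-degree grid the polar cells come out roughly a hundred times smaller in area than the equatorial ones, and the east-west spacing collapses from about 111 km to under 1 km, which is one reason explicit schemes on such grids need polar filtering or tiny timesteps, and why the gridding cannot simply be refined.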
But until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors — which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the `confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis” — the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
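The point about treating the CMIP5 members as independent, identically distributed samples can be illustrated with a toy calculation (mine, not anything from AR5). If N ensemble members share code ancestry so that their errors are pairwise correlated with coefficient rho, the spread of the ensemble-mean error is sqrt((1 + (N-1)*rho)/N) times sigma rather than the naive sigma/sqrt(N):

```python
import numpy as np

# Toy illustration (not from AR5): how correlated model errors inflate the
# uncertainty of an ensemble mean relative to the naive iid assumption.
rng = np.random.default_rng(0)
N, sigma, rho, trials = 36, 1.0, 0.6, 200_000   # assumed values, purely illustrative

# Build N error series sharing a common component so that pairwise correlation ~ rho.
common = rng.normal(size=trials) * np.sqrt(rho)
indep  = rng.normal(size=(trials, N)) * np.sqrt(1.0 - rho)
errors = sigma * (common[:, None] + indep)       # shape (trials, N)

ens_mean_sd = errors.mean(axis=1).std()
print(f"naive iid standard error: {sigma/np.sqrt(N):.3f}")
print(f"actual spread of ensemble-mean error: {ens_mean_sd:.3f}")
print(f"theory sqrt((1+(N-1)*rho)/N): {sigma*np.sqrt((1+(N-1)*rho)/N):.3f}")
```

With 36 members and even moderate shared-code correlation, the real uncertainty of the multi-model mean is several times the naive iid estimate, which is the sense in which the ensemble is nothing like 36 independent draws.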
Mike Jonas says: May 7, 2014 at 2:34 pm
“But for climate, we need years ahead.”
You don’t need a weather prediction years ahead. Climate models won’t tell you about rain on May 7, 2050. But they will tell you quite a lot about that day. It won’t be 100°C, for example. They are getting something right. And given what some here say about the catastrophe of N-S solution, that is an achievement.
But that is the key point I’ve been making. They are NWP programs working in a mode where they can’t predict weather. They generate random weather. But because all the physics of NWP is in there, the random weather responds correctly to changes in forcing, so you can see by simulation how average climate changes, even though the weather isn’t right.
george e smith says:
May 7, 2014 at 1:39 pm
…..I always worry about the issue that Dr. Spencer raised, …that somehow they are ignoring all the butterflies that there are in Brazil….
Regional temperature is a weasel; the so-called global temperature is Mosher’s unicorn.
If one of Dr. Spencer’s aliens landed in good old England (as is often reported by some of the local papers) in the latter half of 2012 and observed the CET for about four months, he/she/it would justifiably conclude that the CET is directly controlled by the solar rotation, with the daily sunspot numbers moving the daily maximum temperature by about 4 to 5 C pp.
Here are the observational data (‘the true arbiter’- Feynman). http://www.vukcevic.talktalk.net/SSN-CETd.htm
Butterflies, weasels, unicorns and aliens, regretfully are not adequately represented in the climate models.
I again thank rgb for stimulating this discussion.
A root cause analysis integrated with a case study would be useful on what turned out to be an inappropriate priority placed on, and unreasonable expectations of, GCMs (and later ECIMs) by academia, government science bodies and scientific societies, including the encouragement of the IPCC to focus on GCMs.
I think there is a need to explain it in the context of science, with externalities like politics, environmental ideology and economics treated as imports to / exports from the science aspect.
John
How far into the future can Global Climate Models predict?
Recall Goldbach’s Conjecture, stated 1742: “Every even integer greater than 2 can be expressed as the sum of two primes.”
The story goes that the preeminent mathematician Mark Kac once received a lengthy proof of this hypothesis, brimming with “approbation and testimonial plaudits”. After a few minutes, Kac scribbled his reply: “Dear Sir: Your first error, which invalidates all subsequent deductions, occurs in Paragraph 3, Page 1.”
In logic, this is termed a Fallacy of the First Order, meaning that an invalid premise necessarily negates all reasoning that follows. Leaping from crag-to-crag in pursuit of grant monies, deviant Warmists accordingly prefer never to examine GCMs’ primary assumptions. But anyone not wracked with organic brain disease will readily see this tripe for what it is, and act accordingly.
But that is the key point I’ve been making. They are NWP programs working in a mode where they can’t predict weather. They generate random weather. But because all the physics of NWP is in there, the random weather responds correctly to changes in forcing, so you can see by simulation how average climate changes, even though the weather isn’t right.
Except for the fact that when run with no forcings at all, they completely fail to reproduce the actual variability of the climate on any decadal or longer timescale. If you run them with the CO_2 forcing, they only reproduce the observed climate variation within the reference (training) period. This is not impressive evidence that the models are getting the climate right.
One is tempted to claim that it is direct evidence that the models are not getting the split between natural variability and CO_2 forcing right, greatly underestimating the [former] and greatly overestimating the latter. But even that cannot really be said with any real confidence.
rgb
Thank you Professor Brown for this.
It strikes a chord with my work, which is, by GCM standards, trivial: the understanding of how lethal arrhythmias arise in the heart.
Mathematical models of cardiac activation depend on coupled PDEs, which describe current flow within the heart, and ODEs, which describe the nature of current sources within cells. The latter depend exquisitely on parameterisation – at least 53 coupled ODEs that depend on parameters measured under very artificial conditions.
When I was told by a professor of mathematics at a major institution “If that is what the equations say, that is what is happening”, I came to a conclusion similar to your own.
The upshot of “Mathematical Cardiac Physiology” is that there are supposed “spiral waves” of activation in ventricular fibrillation. This has found its way into major textbooks (including one from MIT). When presenting data that refute this, I have been told that my data must be wrong because they do not conform to current models!
The problem is that nobody has ever demonstrated the presence of “spiral waves” in animal preparations, and such data as exist from intact human hearts do not support their existence.
My conclusion is simply that mathematical modellers should perform experiments to test their assertions. It is, after all, the scientific method!
[SNIP – DOUG COTTON, STILL BANNED ]
Nick Stokes says:
May 7, 2014 at 5:18 am
“GCMs generally don’t use finite difference methods for the key dynamics of horizontal pressure-velocity. They use spectral methods…”
I believe the US GFS model uses a spectral method. The Canadian Global Environmental Multiscale Model (GEM) spatial discretization is a Galerkin (finite element) grid-point formulation in the horizontal. Finite element is different from finite difference. I have programmed both methods in various models.
Even at a pure layman’s level such as mine, this is one of the best posts I have seen on WUWT, particularly the first third of the posts, though that is most decidedly not at all to decry a number of other really excellent, very readable, layman-language and technically oriented, highly explanatory posts throughout the whole list.
And the hoi polloi like myself have kept our heads down and our many inane comments, like this one, right out of it, to the immense benefit of a significant increase in our knowledge of just how seriously bad the model-driven climate science is that underlies the colossal waste and destruction of now close to a trillion dollars’ worth of global treasure, the winding down of once vigorous and viable national economies, and the completely avoidable deaths of tens of thousands through energy deprivation created by the inflated and increasingly unaffordable cost of energy: the heat-or-eat syndrome.
We see once again that the claims made by the climate modellers are made in the full knowledge that they are being thoroughly and publicly untruthful in their quite spurious claims about the supposedly unchallengeable veracity and predictive capabilities of these same climate models.
RC Saumarez says: May 7, 2014 at 4:39 pm
“My conclusion is simply that mathematical modellers should perform experiments to test their assertions. It is, after all, the scientific method!”
Numerical weather prediction programs undergo a stringent experimental test every day.
rgbatduke says: May 7, 2014 at 3:31 pm
“Except for the fact that when run with no forcings at all, they completely fail to reproduce the actual variability of the climate on any decadal or longer timescale.”
Completely fail? I’d like to see that quantified.
May 7, 2014 at 5:50 am | Nick Stokes says:
In as much as I thoroughly enjoy Dr Brown’s writing to the point of actively seeking out his work, so in some perverse way do I enjoy Nick Stokes’ brilliant contortions. Science speaking, I’m not even in the same universe as these two gentlemen … but I’d like to help Nick out here with my layman science:
“It’s like you can make a bell, with all the right notes and sound quality. But you can’t predict when it will ring and what the note will be.”
Wow what a thread! Devastating commentary, RGB.
Willis;
As I was reading the GISS model code, I noticed that at the end of each time step they sweep up any excess or deficit of energy, and they sprinkle it evenly over the whole globe to keep the balance correct.
This is new info for me, and I am…. gobsmacked? thunderstruck? need-to-invent-a-new-word? Wow. They’re taking calculations they KNOW to be wrong and averaging them out across a system that is highly variable, cyclical, and comprised of parameters that do not vary linearly. Wow, just wow.
Nick Stokes says:
May 7, 2014 at 7:00 am
Frank K. says: May 7, 2014 at 6:35 am
“Really?? So the GCMs are limited to Courant < 1.”
Usually run at less. Here is WMO explaining:
” For example, a model with a 100 km horizontal resolution and 20 vertical levels, would typically use a time-step of 10–20 minutes. A one-year simulation with this configuration would need to process the data for each of the 2.5 million grid points more than 27 000 times – hence the necessity for supercomputers. In fact it can take several months just to complete a 50 year projection.”
At 334 m/s, 100 km corresponds to 5 minutes. They can’t push too close. Of course, implementation is usually spectral, but the basic limitation is there.
_______________________________________________________________
And how do they handle the format conversion, truncation, and rounding issues in such a number of calculations?
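As a quick check of the Courant arithmetic quoted above (a sketch only, using just the 334 m/s sound speed and the 100 km spacing from the WMO example):

```python
# Minimal arithmetic check of the Courant limit quoted above (illustrative only).
c = 334.0      # sound speed in m/s, the figure used above
dx = 100e3     # 100 km horizontal grid spacing, from the WMO example

dt_at_C1 = dx / c                  # largest explicit timestep at Courant number 1
print(f"dx/c = {dt_at_C1:.0f} s  (about {dt_at_C1/60:.1f} minutes)")

# A practical explicit run would stay below this, e.g. at Courant number 0.5:
print(f"at C = 0.5: {0.5*dt_at_C1/60:.1f} minutes")
```

An explicit treatment of the fast waves on that grid is therefore stuck near five-minute steps; as the comment above notes, the implementation is usually spectral, but the basic limitation is there.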
Willis Eschenbach says:
May 7, 2014 at 10:47 am
“ I had an online discussion about ten years ago with Gavin Schmidt on the lack of energy conservation in the GISS model. What happened was, as I was reading the GISS model code I noticed that at the end of each time step, they sweep up any excess or deficit of energy,…”
I pretty much grew up with the Numerical Weather Prediction (NWP) models. We saw them evolve from the old barotropic model, which was basically an advection model. Energy conservation has always been a problem and I believe it still is (I can be corrected on that if some magical procedure has been found). A lot of it arises because the equations have to be solved on a discontinuous grid. I remember when there would be constant updates to the models and then new versions would come out. One time I remember well is when they went down to a much smaller grid, about 30 km at the time I believe. This started to generate all sorts of spurious waves and noise, so much so that we could not even use the model. So then they had to introduce more smoothing to get rid of the noise. As far as I know there is still smoothing in all the NWP models, otherwise they would blow up fairly quickly.
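To illustrate the kind of smoothing being described, here is a generic 1-2-1 (Shapiro-type) filter sketch in Python; it is not the operator used in any particular NWP model, just a demonstration that one smoothing pass wipes out the shortest (2Δx) grid-scale noise while barely touching a well-resolved wave:

```python
import numpy as np

# Generic illustration of grid-scale smoothing (a 1-2-1 "Shapiro" filter);
# not the actual operator used in any particular NWP model.
def shapiro_121(field, passes=1):
    """Apply a simple 1-2-1 smoother with periodic boundaries."""
    f = field.copy()
    for _ in range(passes):
        f = 0.25 * np.roll(f, 1) + 0.5 * f + 0.25 * np.roll(f, -1)
    return f

n = 64
x = np.arange(n)
long_wave = np.sin(2 * np.pi * x / n)          # well-resolved signal
grid_noise = 0.5 * (-1.0) ** x                 # 2*dx noise, the shortest wave the grid holds

smoothed = shapiro_121(long_wave + grid_noise, passes=1)

# The 2*dx component is removed entirely by one pass; the long wave is barely touched.
print("residual grid-scale amplitude:", np.abs(smoothed - shapiro_121(long_wave)).max())
print("long-wave amplitude retained: ", np.abs(shapiro_121(long_wave)).max())
```

The price, of course, is that repeated passes also erode real small-scale features, which is the trade-off behind the noise problems described above.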
RC Saumarez says: May 7, 2014 at 4:39 pm
“My conclusion is simply that mathematical modellers should perform experiments to test their assertions. It is, after all, the scientific method!”
Nick Stokes says: May 7, 2014 at 5:12 pm
Numerical weather prediction programs undergo a stringent experimental test every day.
—–
Nick, you’d know this … what was the name of that cyclone a few weeks back in north Queensland that the BoM predicted would penetrate hundreds of km inland and cause untold damage … only to wimp out on landfall? Many amateurs correctly commented on its impending demise, as it had little energy to feed on.
Willis Eschenbach says: May 7, 2014 at 10:47 am
“If we simply sprinkle it evenly over the globe, it sets up an entirely fictitious flow of energy equatorwards from the poles. And yes, generally the numbers would be small … but that’s the problem with iterative models. Each step is built on the last. As a result, it is not only quite possible but quite common for a small error to accumulate over time in an iterative model.”
That’s a complete misunderstanding. You never give numbers. How small? Too small to notice. That’s the point.
This process sets up a negative feedback for a particular type of potentially destructive oscillation. As I mentioned above, the big numerical issue with explicit Navier-Stokes is sound waves. What limits the timestep is nominally a Courant condition, but the arithmetic actually reflects the numerical treatment of sound waves of wavelength comparable to grid size. Sound waves should be propagated with conservation of energy, but if inadequately resolved, they can go into exponentially growing modes, from an infinitesimal beginning. You can detect this by a breach of conservation of energy.
The process Gavin describes damps this at the outset, automatically. It takes the energy out of the mode. The reason he couldn’t give you a number is that it is intended to damp the effect before you can notice. And the amount of energy redistributed is negligible.
It’s like stabilising your sound amplifier with negative feedback. You’re feeding back amplified signal from the output to the input. That can’t be right. Gobsmacked?
No, because the instability modes that the feedback damps never grow. So you are not destroying your fine amplifier with spurious output signal.
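For concreteness, here is a toy sketch of the kind of end-of-step global energy “fixer” being argued about in this exchange. The function name, the field values and the size of the leak are all invented for illustration; this is not the GISS code.

```python
import numpy as np

# Toy sketch of a global energy "fixer" of the kind discussed above.
# Field names and numbers are invented for illustration; this is not GISS code.
def energy_fixer(energy, target_total, area_weights):
    """Spread the global energy residual uniformly (per unit area) over all cells."""
    residual = target_total - np.sum(energy * area_weights)
    return energy + residual / np.sum(area_weights), residual

rng = np.random.default_rng(1)
ncells = 10_000
w = rng.uniform(0.5, 1.5, ncells)               # stand-in cell area weights
E = rng.normal(1.0e9, 1.0e7, ncells)            # stand-in column energies (J/m^2)

target = np.sum(E * w) * (1.0 + 1.0e-7)         # pretend the step leaked a tiny amount
E_fixed, residual = energy_fixer(E, target, w)

print(f"residual swept up: {residual:.3e} (fraction {residual/target:.1e})")
print(f"per-cell adjustment: {residual/np.sum(w):.3e} J/m^2")
```

Whether such a uniform per-area sprinkle is genuinely negligible, or whether it quietly sets up a fictitious transport as Willis suggests, is exactly the question the two sides above are arguing.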
There are unbiased, rescalable tessellations of the sphere — triangular ones
===========
Triangular tessellation is the obvious choice. Lat/long cells at the poles form triangles, and rectangles can always be divided into triangles. However, even this approach quickly runs into trouble, as we discovered years ago when writing code to “unfold” complex 3D objects into 2D patterns.
The unfolding process itself is rather simple. You cover an object in triangles, solve using Pythagoras, and lay all the resulting shapes out on a flat sheet. Using a fast computer you shrink the size of the triangles until, even though the surface is curved, the triangles themselves are very nearly flat, and lay these out on the flat sheet.
And when you cut out the resulting flat sheet and try to rebuild your 3D object, you end up with a hopeless mess that looks very little like the original.
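For what it is worth, here is a sketch of one rescalable triangular tessellation of the sphere of the kind rgb mentions above: recursively split the faces of an octahedron into four and push the new vertices out to the unit sphere. The construction and the spherical-triangle area formula are standard, but the code is purely illustrative; real geodesic, icosahedral or cubed-sphere model grids differ in detail.

```python
import numpy as np
from itertools import product

# Sketch of a rescalable triangular tessellation of the sphere: start from an
# octahedron, repeatedly split each triangle into four, and push the new
# vertices out to the unit sphere. Purely illustrative.

def normalize(v):
    return v / np.linalg.norm(v)

def subdivide(tri):
    """Split one spherical triangle (3 unit vectors) into four."""
    a, b, c = tri
    ab, bc, ca = normalize(a + b), normalize(b + c), normalize(c + a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def spherical_area(tri):
    """Solid angle of a spherical triangle (Van Oosterom & Strackee formula)."""
    a, b, c = tri
    num = abs(np.dot(a, np.cross(b, c)))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

# The eight octahedron faces: one triangle per octant.
ex, ey, ez = np.eye(3)
tris = [(sx * ex, sy * ey, sz * ez) for sx, sy, sz in product((1, -1), repeat=3)]

for level in range(5):
    areas = np.array([spherical_area(t) for t in tris])
    print(f"level {level}: {len(tris):5d} triangles, "
          f"max/min area ratio {areas.max()/areas.min():.2f}")
    tris = [t for tri in tris for t in subdivide(tri)]
```

Unlike lat/long cells, whose equator-to-pole area ratio grows without bound as the grid is refined, the max/min ratio here settles near a constant, though, as the comment above suggests, near-uniform triangles by themselves do not make the numerics easy.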
Streetcred says: May 7, 2014 at 5:38 pm
“Nick, you’d know this … what was the name of that cyclone a few weeks back in north Queensland that the BoM predicted to penetrate 100’s km’s inland and cause untold damage”
Ita. I don’t think BoM did predict that. What they did predict, when it was 700 km offshore, was that it would come ashore near Cooktown. Which it duly did. Not bad for a program which, we’re told, will fall in a heap because of insoluble Navier-Stokes.
This is why a sister site to WUWT, or a reference tab, should contain a “Best of WUWT” selection. For certain authors all their threads and/or comments (for rgb) would be included.
“””””…..Lloyd Martin Hendaye says:
May 7, 2014 at 3:15 pm
Recall Goldbach’s Conjecture, stated 1742: “Every even integer greater than 2 can be expressed as the sum of two primes.”…..”””””
So what did (2) do to be excluded?? Is it not the sum of two integers, whose only factors are (1) and (itself) ??
(2) is an even integer that is the sum of two primes; (1) and (1).
Well except in common core math, where (1) + (1) might be (3), so long as you use the right process to arrive at that answer.
Nick Stokes says:
May 7, 2014 at 6:03 pm
And the amount of energy redistributed is negligible.
============
That is an assumption. You cannot say that it has no effect, because no one knows. Even the IPCC mentions the butterfly effect.
I’m reminded of a kinetic gas model we worked on. All it took was the smallest energy leak to turn an isothermal atmosphere in the absence of GHG into an atmosphere with a temperature gradient.
I know Willis Eschenbach has mentioned in several posts that he can duplicate the output of the GCMs by a linear response to the forcings plus a lag, such as here:
http://wattsupwiththat.com/2013/06/25/the-thousand-year-model/
“all that the climate models do to forecast the global average surface temperature is to lag and resize the forcing.”
If Willis is correct, then I can build a simulation of the entire climate as described in the computer models in the lab. All I need is a hot plate, a thermometer, an ice bath and a brick. I put one end of the brick in the ice bath and heat the other end with the hot plate. I put the thermometer in a hole in the middle of the brick. When I make a step change to the setting of the hot plate, there will be a lag for the thermometer to respond due to the limited thermal conductivity of the brick. The fixed temperature at the cold end will make sure that the temperature response is linear.
Thus the climate models treat the entire atmosphere, solar heating, oceans, evaporation, rain, snow, clouds, storms, etc., the same as a solid brick.
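Willis’s “lag and resize the forcing” description (linked above) is essentially a one-box model, T(n+1) = T(n) + (dt/tau)·(lambda·F(n) − T(n)), which is also what the brick-and-hot-plate analogy describes. Here is a minimal sketch; the sensitivity, lag and forcing values are invented for illustration and are not fitted to any model output.

```python
import numpy as np

# Minimal one-box "lag and resize the forcing" emulator of the kind Willis
# describes (linked above). Parameter values are invented for illustration.
def one_box(forcing, lam=0.5, tau=10.0, dt=1.0):
    """T responds to forcing F with sensitivity lam (K per W/m^2) and lag tau (years)."""
    T = np.zeros(len(forcing))
    for n in range(len(forcing) - 1):
        T[n + 1] = T[n] + (dt / tau) * (lam * forcing[n] - T[n])
    return T

years = np.arange(1900, 2101)
forcing = np.where(years < 2000, 0.0, 3.7)       # step change in forcing, like the hot plate
T = one_box(forcing)

print(f"temperature 10 years after the step:  {T[years == 2010][0]:.2f} K")
print(f"temperature 100 years after the step: {T[years == 2100][0]:.2f} K")
print(f"equilibrium response lam*F:           {0.5 * 3.7:.2f} K")
```

A step change in the hot-plate setting produces exactly the lagged, resized response the brick analogy predicts: the thermometer creeps toward lambda·F with time constant tau.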