Elevated from a WUWT comment by Dr. Robert G. Brown, Duke University
Frank K. says: You are spot on with your assessment of ECIMs/GCMs. Unfortunately, those who believe in their ability to predict future climate really don’t want to talk about the differential equations, numerical methods or initial/boundary conditions which comprise these codes. That’s where the real problems are…
Well, let’s be careful how you state this. Those who believe in their ability to predict future climate but aren’t in the business don’t want to talk about any of this, and those in the business who aren’t expert in predictive modeling and statistics in general would in many cases prefer not to have a detailed discussion of the difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.
However, most of the GCMs and ECIMs are well, and reasonably publicly, documented. It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or physics, or initialization, or methods, or the ODE solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…
Look, I actually have a Ph.D. in theoretical physics. I’ve completed something like six graduate-level math classes (mostly as an undergraduate, but a couple as a physics grad student). I’ve taught (and written a textbook on) graduate-level electrodynamics, which is basically a thinly disguised course in elliptic and hyperbolic PDEs. I’ve written a book on large-scale cluster computing that people still use when setting up compute clusters, and have several gigabytes’ worth of code in my personal subversion tree and cannot keep count of how many languages I either know well or have written at least one program in, dating back to code written on paper tape. I’ve co-founded two companies on advanced predictive modelling on the basis of code I’ve written and a process for doing indirect Bayesian inference across privacy or other data boundaries that was for a long time patent pending, before trying to defend a method patent grew too expensive and cumbersome to continue; the second company is still extant and making substantial progress towards perhaps one day making me rich. I’ve done advanced importance-sampling Monte Carlo simulation as my primary research for around 15 years before quitting that as well. I’ve learned a fair bit of climate science. Of the list above, the only thing I lack detailed knowledge of and direct experience with is computational fluid dynamics (I understand the concepts there pretty well, but that isn’t the same thing as direct experience), and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented to the point where just getting it to build correctly requires dedication and a week or two of effort.
Oh, and did I mention that I’m also an experienced systems/network programmer and administrator? So I actually understand the underlying tools REQUIRED for it to build pretty well…
If I have a hard time getting to where I can, for example, simply build an openly published code base and run it on a personal multicore system to watch the whole thing actually run through to a conclusion, let alone start to reorganize the code, replace underlying components such as its absurd lat/long gridding on the surface of a sphere with rescalable symmetric tessellations to make the code adaptive, isolate the various contributing physics subsystems so that they can be easily modified or replaced without affecting other parts of the computation, and so on, you can bet that there are only a handful of people worldwide who are going to be able to do this, and willing to do this without a paycheck and substantial support. How does one get the paycheck, the support, the access to supercomputing-scale resources to enable the process? By writing grants (and having enough time to do the work, in an environment capable of providing the required support in exchange for indirect-cost money at fixed rates, with the implicit support of the department you work for) and getting grant money to do so.
And of the tiny handful of people who are broadly enough competent in the list above to have a good chance of managing the whole project on the basis of their own directly implemented knowledge and skills, AND who have the time and indirect support and so on, who controls which of them gets funded? Who reviews the grants?
Why, the very people you would be competing with, who all have a number of vested interests in there being an emergency, because without an emergency the US government might fund two or even three distinct efforts to write a functioning climate model, but they’d never fund forty or fifty such efforts. It is in nobody’s best interest in this group to admit outsiders: all of those groups have grad students they need to place, jobs they need to have materialize for the ones that won’t continue in research, and they themselves depend on not antagonizing their friends and colleagues. As AR5 directly remarks, of the 36 or so named components of CMIP5 there aren’t anything LIKE 36 independent models; the models, data, methods, and code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of a newly funded program at a new school or institution.
IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing, and any such rewrite would only make the problem at the poles worse — quadrature on a spherical surface using a rectilinear lat/long grid is long known to be enormously difficult and to give rise to artifacts and nearly uncontrollable error estimates.
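To make the gridding complaint concrete, here is a minimal back-of-the-envelope sketch (plain Python, illustrative only, not taken from any GCM) of how badly uniform lat/long cells behave near the poles:

import math

# Minimal sketch (illustrative only, not from any GCM): cell areas on a
# uniform 1-degree lat/long grid.  East-west cell extent shrinks with
# cos(latitude), so polar cells are tiny slivers compared with
# equatorial ones, which wrecks both quadrature weights and the stable
# timestep near the poles.
R = 6.371e6                      # Earth radius in metres
dlat = dlon = math.radians(1.0)  # 1-degree cells

def cell_area(lat_deg):
    """Exact spherical area of a 1x1 degree cell centred at lat_deg (m^2)."""
    lat = math.radians(lat_deg)
    return R**2 * dlon * (math.sin(lat + dlat / 2) - math.sin(lat - dlat / 2))

equator = cell_area(0.5)
pole = cell_area(89.5)
print(f"equatorial cell: {equator / 1e6:9.0f} km^2")
print(f"polar cell:      {pole / 1e6:9.0f} km^2")
print(f"ratio:           {equator / pole:9.0f} : 1")

A roughly hundred-fold spread in cell size within a single grid is exactly the sort of thing that makes quadrature weights and stable timesteps misbehave, and is why rescalable tessellations (icosahedral or cubed-sphere style grids) look attractive by comparison.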
But until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it were an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors (which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the ‘confidence’ given to the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis”), the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
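As a hedged illustration of the statistical point only (made-up numbers, not CMIP5 output): if the ensemble members share a common error, the usual mean-plus-standard-error summary looks reassuringly tight while being centred in the wrong place.

# Made-up numbers, purely to illustrate the statistical point: if the
# ensemble members share a common bias, quoting the multi-model mean
# plus a standard error as if the models were independent, identically
# distributed draws gives a reassuringly tight interval centred in the
# wrong place -- the shared bias never averages away.
import random
import statistics

random.seed(1)
true_value = 0.10    # hypothetical true quantity (units irrelevant here)
shared_bias = 0.08   # error common to every model lineage
members = [true_value + shared_bias + random.gauss(0.0, 0.03) for _ in range(36)]

mean = statistics.mean(members)
sem = statistics.stdev(members) / len(members) ** 0.5
print(f"naive ensemble estimate: {mean:.3f} +/- {sem:.3f}")
print(f"hypothetical true value: {true_value:.3f} (well outside that interval)")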
Nick
Again, I’m not an expert here so I’m not going to pretend that I am your peer on these issues, but…
The Navier-Stokes equations are basically insoluble:
1) Is there a unique solution?
2) It’s a system of PDEs(?) and, like most PDEs, they have to be solved numerically using a finite-element (or similar) approach.
If the answer to 1 is no, then how do you know that your solution is appropriate/correct?
If the answers to 2 are yes and yes, then you’ve got a host of implementation issues.
I am only a B.Eng. but have probably done enough for the equivalent of a master’s. I have used CFD software and have a good understanding of its limitations and of the effects on accuracy of the grid mesh size versus the fluctuations in the phenomena one is modelling. My experience was that an overly coarse mesh led to convergence to a solution on the high side. The inclusion of turbulence would require a mesh orders of magnitude finer and therefore computational effort many orders of magnitude greater. In other words, there are significant practical problems with accurately modelling the Navier-Stokes equations including the terms for turbulence and such effects. It is little wonder to me that the ‘models’ do not reflect the measurements, and I shudder to imagine what the difference would be if they started the model runs, say, a century ago instead of 30 years or so.
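For a rough sense of the scale involved, a back-of-the-envelope sketch using textbook Kolmogorov scaling and purely hypothetical Reynolds numbers (not a statement about any particular GCM):

# Back-of-the-envelope sketch (textbook Kolmogorov scaling, hypothetical
# Reynolds numbers): grid points needed to resolve turbulence directly
# grow roughly as Re**(9/4), so explicit turbulence is out of reach and
# the important behaviour ends up inside sub-grid parameterisations.
for Re in (1e4, 1e6, 1e8):
    points = Re ** 2.25
    print(f"Re = {Re:.0e}: roughly {points:.1e} grid points for direct simulation")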
How to even explain this to Joe Public or his elected representative I do not really know given the bizarre preferences of the msm.
Loved it Dr Brown. It is so similar to what I am facing it crosses my mind that inane mathematics in service of political goals is a lot more common than uncommon. If the manipulation of national statistics is considered, there is ‘a lot going on’.
The inane thing I am trying to do is reduce the variability of calculated results that arise from 4 sets of potential variation, where the variability of A + B + C + D has to be within a prescribed range. Politically, ‘A’ is not to be examined, as such a review will show that the output is non-linear for a linear change in the input, due to a comical set of conceptual and systematic errors. No amount of fiddling with B + C + D will correct the problems inherent in A. So much has been invested in the content of A that correcting it will (apparently) cause the Earth to stop turning.
Modeling systems within meaningfully tight ranges is hard. The climate is bewilderingly complex. Willis has shown that all of the climate models’ predictions can be reproduced by a simple ratio, so are we getting anything, ever, that is worth the investment? It doesn’t look like it.
Nick Stokes says:
May 7, 2014 at 5:58 am
Richard Drake says: May 7, 2014 at 5:01 am
“The current GCMs don’t have the ability to scale the grid size and that’s insane.”
It’s not insane. It’s a hard physical limitation. A Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability. So contracting grid size means contracting timestep in proportion, which blows up computing times.
—
“Basically, sound waves are how dynamic pressure is transmitted”
Really? Dynamic pressure = 1/2*rho*V^2. Please explain this.
” Timestepping programs relate properties in a cell to the properties in that and neighboring cells in the previous timestep. If pressure can cross more than one cell in a timestep (at speed of sound), there is total instability.”
Really?? So the GCMs are limited to Courant < 1. That's NOT what I've read – many use multistep methods to promote temporal stability. However, given the near total lack of documentation on these and other important issues, it wouldn't surprise me if they were stability limited to Courant < 1.
"So contracting grid size means contracting timestep in proportion, which blows up computing times."
Yes, this is true – the time step goes with the cell size. But the problem goes beyond this simplistic analysis, since most GCMs have very strong source terms and extensive model coupling (e.g. ocean models coupled with atmosphere dynamics models coupled with radiation physics). This likely imposes even more stringent constraints on stability (and ultimately accuracy).
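For readers following the Courant argument, a minimal sketch of the arithmetic, assuming sound at roughly 340 m/s is the fastest signal an explicit scheme must resolve (illustrative only; as noted above, real models use semi-implicit and spectral formulations to relax the strict limit, but the basic scaling survives):

# Minimal Courant (CFL) sketch, assuming acoustic waves at ~340 m/s are
# the fastest signal an explicit timestepping scheme has to resolve.
# Illustrative numbers only, not taken from any particular GCM.
c_sound = 340.0  # m/s, rough near-surface speed of sound

def max_timestep(dx_km, courant=1.0):
    """Largest stable explicit timestep (seconds) for a cell of dx_km."""
    return courant * dx_km * 1000.0 / c_sound

for dx_km in (200, 100, 50, 25):
    dt = max_timestep(dx_km)
    print(f"{dx_km:4d} km cells -> dt <= {dt / 60:.1f} min at Courant = 1")

Halving the horizontal cell size quadruples the number of columns and halves the timestep, so the cost of a run grows roughly eightfold per refinement, before the vertical grid and the coupled sub-models are even considered.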
Nick Stokes says:
May 7, 2014 at 5:02 am
“So there is generally no attempt to make the models predict from, say, 2014 conditions. They couldn’t predict our current run of La Niñas, because they were not given information that would allow them to do so. Attempting to do so will introduce artefacts. They have ENSO behaviour, but they can’t expect to be in phase with any particular realisation.
Their purpose is not to predict decadal weather but to generate climate statistics reflecting forcing. That’s why when they do ensembles, they aren’t looking for the model that does best on say a decadal basis. That would be like investing in the fund that did best last month. It’s just chance.”
You do understand what that means given the fact that they try to simulate a CHAOTIC system, don’t you?
Well, I don’t think you do, because you effectively just said there is no way they can predict ANYTHING. Chaotic systems are characterized by iterative amplification of small disturbances. If you get the state wrong now, the next state will be wronger…
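The standard toy illustration of that amplification is the Lorenz-63 system; a minimal sketch (textbook parameters, crude forward Euler, nothing to do with any actual GCM):

# Minimal Lorenz-63 sketch (textbook parameters, crude forward Euler):
# two runs that agree to one part in a million at the start end up in
# completely different states.  Illustrative only.
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

state_a = (1.0, 1.0, 20.0)
state_b = (1.000001, 1.0, 20.0)  # perturbed in the sixth decimal place
for step in range(3001):
    if step % 1000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(state_a, state_b)) ** 0.5
        print(f"step {step:4d}: separation = {sep:.3e}")
    state_a = lorenz_step(*state_a)
    state_b = lorenz_step(*state_b)

Two states that agree to six decimal places end up on completely different parts of the attractor within a few thousand steps; at best the statistics of the attractor, not the trajectory, remain predictable.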
We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing,
—
This is really a bit surprising. One of the first things one would try, to ascertain whether the spatial resolution is fine enough, is to use finer or coarser grids and see whether or not this has a major effect on the trajectories produced by the model. Hard-coding a single resolution seems really amateurish.
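For what it’s worth, the check being described is the standard grid-convergence study. A sketch of the pattern, with a hypothetical run_model() interface (no real GCM exposes anything this simple):

# Sketch of a grid-convergence check (generic pattern; run_model() is a
# hypothetical stand-in): run the same experiment at several resolutions
# and see whether the diagnostic of interest stops changing as the grid
# is refined.
def convergence_check(run_model, resolutions_km=(400, 200, 100, 50)):
    results = {dx: run_model(resolution_km=dx) for dx in resolutions_km}
    for coarse, fine in zip(resolutions_km, resolutions_km[1:]):
        change = abs(results[fine] - results[coarse])
        print(f"{coarse} km -> {fine} km: diagnostic changed by {change:.3g}")
    # If each refinement keeps moving the answer, the coarse result is
    # not resolution-converged and part of its output is a grid artefact.

# Toy stand-in so the sketch runs: an answer that drifts with resolution.
convergence_check(lambda resolution_km: 3.0 + 100.0 / resolution_km)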
It might be a quixotic undertaking, but it seems to me that here on WUWT there would be enough senior gents with some time on their hands and the required skills to collaboratively write a better climate model with a proper, modular structure, one that could be adapted and extended in a sane way as new empirical knowledge about the climate system arrives.
“and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented to the point where just getting it to build correctly requires dedication and a week or two of effort.”
And when it builds and runs, how do you know it has no bugs in it? How do you know it is calculating what it is meant to calculate? Just because results look like they could be real doesn’t mean they are correct. I spent 30 years involved with commercial systems development, and at least there was (usually) a specification which the system had to meet, and subtle bugs would still arise. But having read what sort of courses need to have been taken, I also now feel confident that, in all the time I’ve considered climate modelling to be “snake oil”, I was well qualified to do so.
When the tracks of subatomic particles in, say, QED are calculated, or Einstein’s equations are programmed in to predict, say, the perihelion precession of Mercury or the bending of light, at least physical results can be measured, compared to model results, and the experiment repeated.
As for averaging a large number of distinct models’ outputs and trying to sell the result as in some way real and a guide for what to expect in 100 years, well that is “(snake oil)^2”. Only a crazy person could expect the average of a large number of such wrong answers to be correct except by accident. Maybe the punks feel lucky. And I haven’t even mentioned the impossibility of dealing with mathematical chaos in the basic dynamical equations of climate modelling. Oh and they also say that the behaviour of clouds is not understood. Sometimes, even the difference between the models’ outputs and reality is called a travesty.
Sorry, for the life of me I cannot find what the acronym ECIM stands for.
I assume it has something to do with climate or physics or mathematics or computer science.
I assume GCMs are General Circulation Models or Global Climate Models.
As far as I am concerned, this input from Dr Brown is very welcome and adds to my scepticism over the climate models, as I had not realised that, even in their very limited actual use, they were so badly configured.
Regardless, even IF these computer models were more robust and accurate, we still would not get away from the fact that none of those models is a substitute for measuring the actual climate. They are inaccurate by necessity, as we do not know how all the elements and interactions within the actual climate work, and so the computer models are incomplete (we cannot model what we do not know) and are based on a lot of assumptions, many of which may be wrong.
All the simulation runs are based on a variant of the CAGW hypothesis, with varying degrees of Equilibrium Climate Sensitivity as but one of the variables. So the models cannot even be claimed to be providing real-world evidence to support the CAGW hypothesis. They are nothing more than a model of the hypothesis, which merely describes the hypothesis and does nothing to TEST the hypothesis to ascertain IF it is valid or not.
Therefore all the predictive uses of these models tell us nothing about what WILL happen. Therefore all the alarm which comes from computer climate modelling is entirely and completely baseless and should be dismissed.
A different, non-modelling approach must be used for forecasting. Forecasts of the timing and amount of a possible coming cooling, based on the 60- and 1000-year natural quasi-periodicities in the temperature record and using the neutron count and 10Be record as the best proxy for solar activity, are presented in several posts at
http://climatesense-norpag.blogspot.com
During the last eighteen months I have laid out an analysis of the basic climate data and of methods used in climate prediction and from these have developed a simple, rational and transparent forecast of the likely coming cooling.
For details see the pertinent posts listed below.
10/30/12. Hurricane Sandy-Extreme Events and Global Cooling
11/18/12 Global Cooling Climate and Weather Forecasting
1/22/13 Global Cooling Timing and Amount
2/18/13 Its the Sun Stupid – the Minor Significance of CO2
4/2/13 Global Cooling Methods and Testable Decadal Predictions.
5/14/13 Climate Forecasting for Britain’s Seven Alarmist Scientists and for UK Politicians.
7/30/13 Skillful (so far) Thirty year Climate Forecast- 3 year update and Latest Cooling Estimate.
10/9/13 Commonsense Climate Science and Forecasting after AR5 and the Coming Cooling.
The capacity of the establishment IPCC contributing modelers and the academic science community in general to avoid the blindingly obvious natural periodicities in the temperature record is truly mind blowing.
It is very obvious, simply by eyeballing the last 150 years of temperature data, that there is a 60-year natural quasi-periodicity at work. Sophisticated statistical analysis actually doesn’t add much to eyeballing the time series. The underlying trend can easily be attributed to the 1000-year quasi-periodicity. See Figs 3 and 4 at
http://climatesense-norpag.blogspot.com/2013/10/commonsense-climate-science-and.html
The 1000-year period looks pretty good at 10000, 9000, 8000, 7000, 2000, 1000 and 0.
This would look interesting, I’m sure, on a wavelet analysis, with the peak fading out from 7000 to 3000.
The same link also provides an estimate of the timing and extent of possible future cooling, treating the recent peak as a synchronous peak in both the 60- and 1000-year cycles, and using the neutron count as supporting evidence of a coming cooling trend, since it appears to be the best proxy for solar “activity”, while remaining agnostic as to the processes involved.
I suppose the problem for the academic establishment is that this method really only requires a handful of people with some insight, understanding and the necessary background of knowledge and experience, as opposed to the army of computer-supported modelers who have dominated the forecasting process until now.
There has been no net warming for 16 years, and the earth entered a cooling trend in about 2003 which will last for another 20 years and perhaps for hundreds of years beyond that. See
ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.ocean.90S.90N.df_1901-2000mean.dat
The current weather patterns in the UK and USA are typical of those developed by the more meridional path of the jet stream on a cooling earth. The Fagan book “The Little Ice Age” is a useful guide from the past to the future. The frequency of these weather patterns (e.g., for the USA, the PDO-related drought in California and the Polar Vortex excursions to the south) will increase as cooling continues.
The views of the establishment scientists in the USA, and of the UK’s CSA and the Met Office’s leaders, in this matter post-AR5 reveal their continued refusal to recognize and admit the total failure of the climate models in the face of the empirical data of the last 16 years. It is past time for the climate community to move to another approach based on pattern recognition in the temperature and driver data, and also on the recognition of the different frequencies of different regional weather patterns in a cooling (more meridional jet stream) and warming (more latitudinal jet stream) world.
All of the warming since the LIA can easily be accommodated within the 1000 year natural cycle without any significant contribution from anthropogenic CO2.
The whole UNFCCC travelling circus has no empirical basis for its operations, and indeed for its existence, depending as it does on the predictions of the inherently useless climate models. The climate is much too complex to model, but it can be predicted by simply knowing where we are in the natural quasi-cycles.
J. Philip Peterson says:
May 7, 2014 at 6:45 am
Sorry, for the life of me I cannot find what the acronym ECIM stands for.
I assume it has something to do with climate or physics or mathematics or computer science.
I assume GCMs are General Circulation Models or Global Climate Models.
===============================================
Enhanced Climate Integration Model
Richard Drake says:
May 7, 2014 at 5:01 am
The people chosen to program the next variant of such a flawed base system are, as Brown says, taken from “the tiny handful of people broadly enough competent in the list”. They aren’t stupid by any conventional measure but the overall system, IPCC and all, most certainly is.
Yes, extremely competent, intelligent people can sometimes produce something which is not befitting either their competence or their intellect. Unfortunately, if they are the only ones competent and intelligent enough to resolve this issue, one can surmise that a resolution may not be forthcoming.
Nick (May 7, 2014 at 5:50 am):
This also means that the model can have absolutely no ability to scientifically determine any resultant effect of possible temperature and/or other climate changes on the frequency and intensity of future ENSO events.
ENSO emulation is just more lipstick on the model pig…
It takes the most sophisticated models running on supercomputers to “model” a modern aircraft. Those presumably actually work.
Now try modeling “global” climate. It’s like comparing modeling a DNA molecule with modeling a whale.
Nick can hand-wave all he wants. (However, my model predicts that, due to weight and lack of lift in his arms, neither he nor his arguments will get off the ground.)
However, it is not so complicated. The GCMs are informative! They are extremely informative. They all run wrong (not very close to the observations) in the SAME direction! (Too warm.)
This very likely means that one of the primary factors “wiggling the elephant” is given more power than is warranted. One could look for dozens of errors, but likely one or two of the dominant forcings are overrated. If changing the forcing power of just one factor moves ALL the models closer to the real-world observations, that may well indicate that that dominant factor is overrated.
Does anyone care to guess which single forcing factor, lowered in power, would cause all the models to move closer to real world observations?
BTW, for anyone to take the model mean of a group of GCMs which all run wrong in one direction, and then base public policy on the modeled projected harms, is dishonest, and in my view morally evil.
ECIM – Thank you moderator – And I would change “I’ve did advanced importance-sampling…” to I’ve done advanced importance-sampling…
It’s an age-old human response to believe that folks who are clearly more knowledgeable about the intricate innards of a climate model can be trusted when they say the models work. This holds true as long as the models’ predictions seem to be coming to pass. But when reality and predictions diverge sharply, one doesn’t have to know anything about how the model is constructed in order to know that it isn’t worth anything. There is certainly no logical reason for non-climate scientists to continue believing in the models’ predictions.
If you built a new climate model (after using large parts of the code from other models obviously) and you ran it and it kept coming back with just 1.0C temperature increase by 2100, …
… what would happen then?
Well, you can’t get invited to all the great global warming parties with your new 1.0C climate model. You are not only uninvited, you are drummed out of the business. Obviously, the model gets a tweak or two or hundreds until it is in the nice safe range of 2.2C to 4.0C (exactly the range of acceptable models prescribed by the IPCC AR4 team – you couldn’t submit a forecast unless the model had an underlying sensitivity within that range).
Frank K. says: May 7, 2014 at 6:35 am
“Really?? So the GCMs are limited to Courant < 1.”
Usually run at less. Here is WMO explaining:
” For example, a model with a 100 km horizontal resolution and 20 vertical levels, would typically use a time-step of 10–20 minutes. A one-year simulation with this configuration would need to process the data for each of the 2.5 million grid points more than 27 000 times – hence the necessity for supercomputers. In fact it can take several months just to complete a 50 year projection.”
At 334 m/s, 100 km corresponds to about 5 minutes. They can’t push too close. Of course, the implementation is usually spectral, but the basic limitation is there.
Nick Stokes says:
May 7, 2014 at 5:02 am
“GCMs are traditionally run with a long wind-back period. That is to ensure that there is indeed no dependence on initial conditions. The reason is that these are not only imperfectly known, especially way back, but are likely (because of that) to cause strange initial behaviour, which needs time to work its way out of the system.”
So the initial condition which is finally reached after the arbitrary “wind-back” period is a unique and accurate initial condition for the start time of the simulation? Please explain in more detail how this is achieved.
All systems of PDEs are dependent on their initial/boundary conditions. There has to be an initial condition to begin any analysis, and the solution will depend on this. You really can’t have “no dependence”…
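A toy picture of what the spin-up argument assumes (not any GCM’s actual procedure): in a strongly damped, forced system every start relaxes to the same forced state, so the initial condition really is forgotten; the dispute above is whether a weakly damped, chaotic system forgets its start in anything but a statistical sense.

# Toy illustration of the spin-up assumption (hypothetical toy system,
# not any GCM's actual procedure): for dx/dt = F - k*x, every starting
# state relaxes to the same forced equilibrium F/k, so the particular
# initial condition is genuinely forgotten after enough "wind-back".
def spin_up(x0, forcing=15.0, damping=0.5, dt=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x += dt * (forcing - damping * x)
    return x

for x0 in (-50.0, 0.0, 100.0):
    print(f"start at {x0:7.1f} -> after spin-up: {spin_up(x0):.4f}")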
Thanks to Nick Stokes for commenting on this one. No quibbles with Dr. Brown’s assessment, but Nick’s comments greatly enrich the thread in terms of the readers being able to get the big picture.
Thank you, Dr. Brown, for the interesting and enlightening view from any given climate modeler’s keyboard. The qualifications for such create a unique subset of scientists, for certain.
I have often pondered how many concatenated items would be necessary to achieve climate modeling accuracy for a given period of time. Tens of trillions or more? I don’t believe anyone really knows in the end. That thought brings forth the question of how these modelers can claim climate accuracy for 100 years out. I believe that is impossible, based upon historic performance and continuously discovered variables not accounted for to begin with. Don’t get me wrong, numeric modeling has had success in weather prediction in the short range. Go out more than 7-10 days, not so much success.
I would say without hesitation that the climate modeler is the most powerful person on the planet. Just look at the results of their work. They have been setting energy and social policy for decades without being seen or heard from. They are truly the people behind the curtain if you will.
So tell me who are these beings?
What are they like in the real world?
Would they make good neighbors?
Would you trust them with your children?
I think we have the right to know since they are impacting everyone on this planet in one way or another, every minute of your day.
It seems to me that we have changed the dynamic of science in the climate community. We used to build up to a conclusion by virtue of hard work and theory validation. It seems we have inverted that pyramid of function in climate modeling and strive to backfill from the top down based upon a preconceived end result. Inclusive to this is the obvious adjustment of the historic observational data to fit the expectation of the models.
I ask why?
I have seen some commenters post here that climate is like engineering when it comes to CO2. The observations simply prove them wrong, as they are obviously missing important parts of their equation.
In climate science, Cloud Computing, has a whole different meaning.
Regards Ed
Thank you Dr. Brown.
This is all a very intelligent discussion of an extremely technical nature, but can I return to Middle School for a moment and ask how we can call these models “Science” in the first place? If the experiment is “let’s build a predictive model and see if we can predict the future climate or temperature of the earth” then I think it clear that the experiment has failed miserably many times over. We certainly can’t expect the projections to improve over time if they diverge from reality so early in the experiment. The models may be based upon scientific laws and theories, but that doesn’t make the models science. If it did, then every time I get in my car and drive it, I am doing science. There’s lots of science under the hood, but I am just running the car, not doing an experiment. Also, if my husband (a Food Scientist) makes me some toast, is that science?
We need to stop referring to these models in the same sentence as the word science.
You guys are a hoot….you’re talking about clouds, ENSO, chaotic systems, etc like these computer games have any hope at all…..
Here’s the take home…..
“difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.”
The real bottom line is temperature reconstructions…and they are constantly changing them
These games will never be right when they are constantly “adjusting” the official temperature history these guys use to validate these computer games….
They adjusted past temps down and present temps up to show a faster warming trend than is real…
…based on that garbage….the computer games are showing that same trend
They will never be right………