Elevated from a WUWT comment by Dr. Robert G. Brown, Duke University
Frank K. says: You are spot on with your assessment of ECIMs/GCMs. Unfortunately, those who believe in their ability to predict future climate really don’t want to talk about the differential equations, numerical methods or initial/boundary conditions which comprise these codes. That’s where the real problems are…
Well, let’s be careful how you state this. Those who believe in their ability to predict future climate who aren’t in the business don’t want to talk about all of this, and those who aren’t expert in predictive modeling and statistics in general in the business would prefer in many cases not to have a detailed discussion of the difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.
However, most of the GCMs and ECIMs are well, and reasonably publicly, documented. It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or physics, or initialization, or methods, or the ODE solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…
Look, I actually have a Ph.D. in theoretical physics. I’ve completed something like six graduate level math classes (mostly as an undergraduate, but a couple as a physics grad student). I’ve taught (and written a textbook on) graduate level electrodynamics, which is basically a thinly disguised course in elliptic and hyperbolic PDEs. I’ve written a book on large scale cluster computing that people still use when setting up compute clusters, and have several gigabytes’ worth of code in my personal subversion tree, and cannot keep count of how many languages I either know well or have written at least one program in, dating back to code written on paper tape. I’ve co-founded two companies on advanced predictive modelling on the basis of code I’ve written and a process for doing indirect Bayesian inference across privacy or other data boundaries that was for a long time patent pending before trying to defend a method patent grew too expensive and cumbersome to continue; the second company is still extant and making substantial progress towards perhaps one day making me rich. I did advanced importance-sampling Monte Carlo simulation as my primary research for around 15 years before quitting that as well. I’ve learned a fair bit of climate science. I basically lack detailed knowledge and experience of only computational fluid dynamics in the list above (and understand the concepts there pretty well, but that isn’t the same thing as direct experience), and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented, to the point where just getting it to build correctly requires dedication and a week or two of effort.
Oh, and did I mention that I’m also an experienced systems/network programmer and administrator? So I actually understand the underlying tools REQUIRED for it to build pretty well…
If I have a hard time getting to where I can — for example — simply build an openly published code base and run it on a personal multicore system to watch the whole thing actually run through to a conclusion, let alone start to reorganize the code, replace underlying components such as its absurd lat/long gridding on the surface of a sphere with rescalable symmetric tesselations to make the code adaptive, isolate the various contributing physics subsystems so that they can be easily modified or replaced without affecting other parts of the computation, and so on, you can bet that there aren’t but a handful of people worldwide who are going to be able to do this and willing to do this without a paycheck and substantial support. How does one get the paycheck, the support, the access to supercomputing-scale resources to enable the process? By writing grants (and having enough time to do the work, in an environment capable of providing the required support in exchange for indirect cost money at fixed rates, with the implicit support of the department you work for) and getting grant money to do so.
And who controls who, of the tiny handful of people broadly enough competent in the list above to have a good chance of being able to manage the whole project on the basis of their own directly implemented knowledge and skills AND who has the time and indirect support etc, gets funded? Who reviews the grants?
Why, the very people you would be competing with, who all have a number of vested interests in there being an emergency, because without an emergency the US government might fund two or even three distinct efforts to write a functioning climate model, but they’d never fund forty or fifty such efforts. It is in nobody’s best interests in this group to admit outsiders — all of those groups have grad students they need to place, jobs they need to have materialize for the ones that won’t continue in research, and themselves depend on not antagonizing their friends and colleagues. As AR5 directly remarks — of the 36 or so named components of CMIP5, there aren’t anything LIKE 36 independent models — the models, data, methods, and code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of a newly funded program at a new school or institution.
IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing, and any such rewrite would only make the problem at the poles worse — quadrature on a spherical surface using a rectilinear lat/long grid is long known to be enormously difficult and to give rise to artifacts and nearly uncontrollable error estimates.
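To put a rough number on that distortion, here is a back-of-envelope sketch (illustrative only, drawn from no actual GCM code, and using a 2.5 degree spacing purely for the sake of the example) of how a rectilinear lat/long cell’s area collapses toward the poles:

# Back-of-envelope only: the area of a 2.5 x 2.5 degree lat/long cell shrinks
# with the cosine of latitude, so near-polar cells are a tiny fraction of the
# equatorial ones on exactly the same "grid".
import math

R = 6.371e6                 # Earth radius in metres
d = math.radians(2.5)       # 2.5 degree spacing, purely for illustration

def cell_area(lat_deg):
    return R * R * d * d * math.cos(math.radians(lat_deg))

print(cell_area(0.0) / cell_area(88.75))   # an equatorial cell is ~45x a near-polar cell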
But until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors — which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the `confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis” — the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
Roy is right about discrete conservation. The usual fix for this is to use finite element discretization methods. I was surprised some years ago when I learned that climate models still used finite difference methods.
RGB I have been saying for some years that the IPCC models are useless for forecasting and that a new approach to forecasting must be adopted. I would appreciate a comment from you on my post at 6:47 above and the methods used and the forecasts made in the posts linked there at http://climatesense-norpag.blogspot.com
The 1000 year quasi-periodicity in the temperatures is the key to forecasting. Models cannot be tuned unless they could run backwards for 2-3000 years using different parameters and processes on each run for comparison purposes. This is simply not possible.
Yet even the skeptics seem unable to break away from essentially running bits of the models over short time frames as a basis for discussion.
Perhaps it would help psychologically if people thought of the temperature record as the output of a virtual computer which properly integrated all the component processes.
Let us not waste more time flogging the IPCC dead horse and look at other methods.
Robert G. Brown,
Thanks for spending time on this.
I have been told that weather forecasting is an initial conditions problem while climate forecasting is a boundary conditions problem. Is this (to your knowledge) a true statement? It seems to be a statement of the type, “Go away little boy. You’re bothering me. You wouldn’t understand anyway.”
Roy Spencer says:
May 7, 2014 at 3:54 am
Thanks, Dr. Roy, I had an online discussion about ten years ago with Gavin Schmidt on the lack of energy conservation in the GISS model. What happened was, as I was reading the GISS model code I noticed that at the end of each time step, they sweep up any excess or deficit of energy, and they sprinkle it evenly over the whole globe to keep the balance correct. Energy conservation at its finest!
I was, as they say, gobsmacked. I wondered how big an amount of energy this was. So I asked Gavin.
After hemming and hawing for a while, Gavin claimed that the number was small and of no consequence. It took a while, but finally he admitted that no, they didn’t monitor that number, so he couldn’t say exactly what the average loss was other than “small”, a tenth of a W/m2 or so. He couldn’t say if it changed over time, or increased over the length of the run, or anything …
More surprisingly, he didn’t have a “Murphy gauge” monitoring the value of the energy imbalance. (“Murphy gauges” are great. They are a kind of physical meter with a built-in alarm, used extensively on ships to monitor critical systems.)
A Murphy gauge not only displays a value for some variable. It tells you when Murphy’s Law comes into effect and your variable has gone into the danger zone. I have used a variety of these both in the real world and in the programs that I write. If that were my model, when the energy being lost/gained went outside certain limits, the process would shut down immediately to prevent further damage to irreplaceable electrons and I’d take a look at why it stopped.
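In software terms the same idea is trivial to bolt on. Here’s a minimal sketch (Python rather than model Fortran, with a made-up tolerance and a stand-in for the model step, nothing taken from any actual GCM):

# Sketch of a software "Murphy gauge": record the per-step energy imbalance
# and halt the run the moment it leaves an acceptable band.
# The tolerance and the dummy model step are illustrative assumptions only.

ENERGY_TOLERANCE = 0.01    # W/m2 -- an arbitrary example limit, not a GISS number

def model_timestep():
    # Stand-in for one real model timestep; a real model would return its
    # global-mean energy residual here.
    return 0.0

def murphy_gauge(step, imbalance, log):
    log.append((step, imbalance))
    if abs(imbalance) > ENERGY_TOLERANCE:
        raise RuntimeError(
            "Energy imbalance %+.4f W/m2 at step %d exceeds %.2f W/m2 "
            "-- stopping the run to investigate." % (imbalance, step, ENERGY_TOLERANCE))

log = []
for step in range(17520):          # roughly one model year of half-hour steps
    murphy_gauge(step, model_timestep(), log)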
That was the point at which I lost all confidence in the models. It was early in my involvement in the climate question, a decade or so ago, and my jaw dropped to the floor. Here were these scientists claiming that their model was “physics based”, and then sprinkling excess energy like pixie dust all over the globe. Not only that, but they didn’t have any idea if the system was net gaining energy, or net losing energy, or both at different times. They just swept it under the rug without monitoring it at all, vanished it all nice and even, leaving no lumps in the rug at all.
Not only that, I didn’t understand even theoretically what their justification was for the procedure. I mean, why sprinkle it evenly over the earth? Wouldn’t you want to return it to the gridcell(s) where the error is? Their procedure leads to curious outcomes.
For example, as Dr. Brown pointed out in the head post, calculating gridcells on a Mercator projection gets very inaccurate at the poles. As a result, it is a reasonable assumption that the poles either inaccurately gain or lose more than the equator. For the sake of discussion, let’s say that the model ends up with excess energy at the poles.
If we simply sprinkle it evenly over the globe, it sets up an entirely fictitious flow of energy equatorwards from the poles. And yes, generally the numbers would be small … but that’s the problem with iterative models. Each step is built on the last. As a result, it is not only quite possible but quite common for a small error to accumulate over time in an iterative model.
At a half-hour per model timestep, in a model year, there are almost 20,000 model timesteps. If you have an ongoing error of say 0.1 W/m2 per timestep, at the end of the year you have an error of 2,000 W/m2 … you can see why some procedure to deal with the inaccuracy is necessary.
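Spelled out (toy arithmetic only, using the exact count of half-hour steps):

# Toy arithmetic only: a small uncorrected residual, compounded over a model year.
steps_per_year = 48 * 365              # 17,520 half-hour steps in a model year
error_per_step = 0.1                   # W/m2, the order of magnitude Gavin mentioned
print(steps_per_year * error_per_step) # ~1,750 W/m2 if it simply accumulated uncorrected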
There is another problem with Gavin’s claim that the total energy imbalance is small. This is that it is small when averaged out over the surface of the globe.
But now suppose that the majority of the error is coming from a few gridcells, where some oddity of the local topography/currents/insolation/advection is such that the model is miscalculating the results. To put some numbers on it, let’s assume that the overall model error in energy conservation per half-hour time step is say a tenth of a watt per square metre. That was the order of magnitude Gavin mentioned at the time, and indeed, it is small.
From memory, the GISS model gridcell is 2.5° x 2.5°, so there are 180 / 2.5 * 360 / 2.5 = 10,368 gridcells on the earth’s surface. So if the overwhelming majority of that 0.1 W/m2 error is coming from say a hundred gridcells, that means that the error in those hundred gridcells is on the order of 10 W/m2 …
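Or in code form (the same eyeball estimate, ignoring area weighting of the cells):

# Same eyeball estimate as the text above: a "small" global-mean residual packed
# into a handful of cells is not small in those cells (area weighting ignored).
cells = int(180 / 2.5) * int(360 / 2.5)        # 72 * 144 = 10,368 gridcells
global_mean_error = 0.1                        # W/m2 averaged over every cell
hot_cells = 100                                # suppose the error lives here
print(global_mean_error * cells / hot_cells)   # ~10 W/m2 in each of those cells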
I fear mainstream climate modelers lost most of their credibility with me right then and there.
If I ran the zoo, I’d a) put Murphy gauges all over the model, b) track the value of the error over time, c) do my best to track the errors back to the responsible gridcells, and d) try to fix the problem rather than living with it. It’s a devilish kind of error to track down, but hey, that’s what they signed up for.
There’s only one model out there that I pay the slightest attention to, the GATOR-GCMOM model of Mark Jacobson over at Stanford. It actually does most of the stuff that Dr. Robert listed in his head post, including using a reasonable grid instead of Mercator and being scalable and nestable. All the rest of the climate models aren’t worth a bucket of warm spit.
Not that I’m saying Jacobson’s model is right. I’m saying it’s the only one I’ve seen that stands a chance of being right on a given day. A description of the GATOR model is here. My 2011 post on the Jacobson model (which I’d completely forgotten I’d written) is called “The Alligator Model”.
I live my life in part by rules of thumb. And I’ve written a number of computer models of various systems. One of my rules of thumb about my own computer models is:
• A computer model is a solid, believable, and occasionally valuable representation of the exact state of the programmer’s misconceptions … plus bugs.
w.
The real problem is the vast bulk of the public and the politicians hear “computer model” and they think “Star Trek” and “Iron Man” and think it’s REAL. They don’t understand that humans have to build them with a lot of guesses, suppositions and windage.
It’s not insane. It’s a hard physical limitation: a Courant condition on the speed of sound. Basically, sound waves are how dynamic pressure is transmitted. Timestepping programs relate properties in a cell to the properties in that cell and its neighbors in the previous timestep. If a pressure signal can cross more than one cell in a timestep (at the speed of sound), there is total instability. So contracting the grid size means contracting the timestep in proportion, which blows up computing times.
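A quick sketch of that constraint with illustrative numbers (the raw acoustic Courant limit only; real models use tricks such as semi-implicit schemes to relax it):

# Rough Courant-type limit: a sound wave must not cross more than one cell per
# timestep. Numbers are illustrative only.
def max_stable_timestep(cell_size_m, sound_speed_ms=340.0):
    return cell_size_m / sound_speed_ms

for dx_km in (100, 10, 1):
    print("%4d km cells -> dt <= %6.1f s" % (dx_km, max_stable_timestep(dx_km * 1000.0)))
# Halving the spacing in 3D gives 8x the cells and half the allowed timestep,
# so each halving costs roughly 16x the work.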
What is insane is writing code where one cannot rescale the spatiotemporal grid any way other than basically rewriting the entire program, including the fake physics on the grid cells. Or pretending that the “total instability” you speak of isn’t still there, but now reflected as an error in the physics on all of the smaller/neglected timescales!
You can’t take a function with nontrivial structure all the way down to the millimeter length scale and try to forward integrate it on a 100 kilometer grid and pretend that none of the nontrivial structure inside of the 100 kilometers matters. That’s an assertion that one simply cannot support with any mathematical theory I’m aware of, probably because there are literally uncountably many counterexamples. As I pointed out, adaptive quadrature can easily enough be fooled if it is poorly written and the function being integrated has nontrivial structure at smaller scales than the starting scale — considerable effort goes into designing algorithms that minimize the space of “smooth functions” for which the algorithms will not actually adapt and converge. This approach makes a mockery of nearly everything that is known about integrating complex differential systems.
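Here is a deliberately naive toy of that failure mode (a bare-bones adaptive Simpson rule, nothing like production quadrature code): a narrow spike that falls between the first few sample points is never seen, and the routine “converges” to the wrong answer.

# A naive adaptive Simpson integrator that accepts its estimate when refinement
# stops changing the answer. Structure narrower than the initial sampling is
# invisible to it, so it happily returns the wrong integral.
def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def adaptive(f, a, b, tol=1e-8):
    m = (a + b) / 2.0
    whole, left, right = simpson(f, a, b), simpson(f, a, m), simpson(f, m, b)
    if abs(left + right - whole) < tol:
        return left + right
    return adaptive(f, a, m, tol / 2.0) + adaptive(f, m, b, tol / 2.0)

def spike(x):
    # crude stand-in for fine-scale structure: a narrow bump of true area 0.001
    return 1.0 if 0.2995 < x < 0.3005 else 0.0

print(adaptive(spike, 0.0, 1.0))   # prints 0.0; the true integral is 0.001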
I don’t really have a problem with this as long as it is being done as a research question with an open acknowledgement that your computation has about the same chance of being right as a dart thrown at a dartboard covered with future possible climates, blindfolded, at least without a few hundred years of computation and observation to empirically determine what, if any, predictivity the model has. I have an enormous problem with the results of these computations being used to divert hundreds of billions of dollars of global wealth into highly specific pockets under the assumptions that these computations are better than such a dart board even as they are actively diverging from the actual climate.
This is one of the things Chris did not say in his otherwise excellent video above. He indicated at the end that he thought that the sociology of GCMs had about run its course and that they would now proceed to fade away. What he didn’t point out is that the fundamental reason that they will fade away is that their use is predicated on the assumption that they can somehow beat random chance or correctly show long term behavior, and there is literally no reason at all to think that this is the case. He presented a lot of excellent reasons to think that this is not the case generically, and there are even better reasons if one examines one model at a time. His slide on the power spectrum is particularly damning. The climate models are completely incapable of exhibiting the long term trend variations observed in the real climate without CO_2 variation. They all have basically flat power spectra on intervals greater than a few hundred months when run completely unforced, but Nature manages things like the MWP and LIA and Dalton minimum and the recovery and the early twentieth century warming on timescales ranging from decades to centuries with a very non-flat power spectrum. This is clearly visible on figure 9.8a of AR5 — the climate models smooth straight over the natural variations (invariably running too hot) everywhere but the nearly monotonic increase in the reference period, which they incorrectly extend post 1998 ad infinitum.
They literally cannot do anything else as they omit the physics responsible for those variations entirely — they have basically no unforced variability on any multidecadal time scale. The real climate does. That all by itself is the end of the story as far as their ability to predict long term behavior, especially when the models are built against the reference period in such a way that makes essentially arbitrary, biased assumptions about the balance between natural and anthropogenic factors in even the short time scale dynamics in the single 15 year period in the late 20th century where it happened to substantially warm, that just happened to be the reference period.
I also would draw watcher’s attention to Chris’s comment that back in the early 1980’s he was recruited by climate scientists to help them with their code, not to solve the climate problem, but to make their code provide the smoking gun proving that CO_2 would cause runaway warming.
If he is honestly reporting the facts, and these words or any equivalent words were used, there is really no need to go any farther. It is one thing to try to determine whether or not jelly beans cause acne. It is another to set out to prove that jelly beans cause acne. The former is good science, if a bit unlikely as an assertion. The latter is dangerous, scary science, the kind of science that leads to cherrypicking, data dredging, confirmation bias, pseudoscience, and of course plays into the hands of people who want to invest in jellybean futures as long as they can control the flow of angry papers from your research lab.
God invented double-blind, placebo-controlled experiments for a good reason — because we cannot trust ourselves and our greedy pattern matching cognitive algorithms and our tendency to be open to the long con, convincingly presented. God taught us to be very skeptical of grand theoretical claims based on poorly implemented computations attempting to solve a grand challenge mathematical problem numerically, with a literally incomprehensibly coarse spatiotemporal grid given our knowledge of the Kolmogorov scale of the atmosphere, out not five minutes into the future but out fifty years into the future, unless and until those computations are empirically shown to be worth more than a bag full of darts and a blindfold.
At the moment, the bag is winning. In the past (pre-reference period) the bag wins. On longer terms, the bag is the only contender — no power is no power. There is no good reason at this very moment to think that the GCMs collectively and most of the GCMs individually are anything more than amplified, carefully tuned noise, literally white in the long term power spectrum, superimposed on a single variable monotonic function that is the only possible contributor on all long time scales.
Excuse me?
rgb
Man Bearpig says:
May 7, 2014 at 4:00 am
In your city you have 10,000 parking machines, and on average 500 break down every day. It takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?
Simples. None. Technicians or mechanics should be fixing the machines, not engineers. 🙂
As a side note, any engineer who designs a machine where 100% of them break down every 200 days should be fired.
Dr. Robert, I just had to say I truly loved this line:
I laughed out loud.
Both the head post and your long comments are devastating to the models. Well done.
w.
kadaka (KD Knoebel) says:
May 7, 2014 at 4:59 am
None. Engineers don’t fix parking meters, repair techs do.
Dang, should have read further.
Eee. When ah were young ah had it tough…
http://www.youtube.com/watch?v=VKHFZBUTA4k
rgb,
Phew, the last time I did computer programming was as a Junior at university in 1970. The university had a CDC 6400 (Control Data Corp) with a whole bunch of IBM 360s as peripherals. I did my programming projects on punch cards using Fortran IV (if I recall correctly).
Humorous memory => There was a terminal in my dormitory basement. There was a huge printer at the terminal that made loud sounds as it printed; I mean it was like 8 ft wide, 5 ft tall and 6 ft deep. Late at night some nerds would run programs that made the printer mimic music as it was printing. One of my friends would make the printer mimic the song ‘Anchors Away’ as it printed gibberish. : ) I also liked another nerd who would make the printer mimic ‘Whiter Shade of Pale’.
John
Nick Stokes: “It is in effect random weather but still responding to climate forcings.”
That is, still responding to climate forcings based on the assumptions of the model authors, since most of those forcings don’t apply on the short timescales used by NWPs, and thus can’t be validated by the success of NWPs.
Nick Stokes: “GCMs are NWPs run beyond their predictive range.”
That would be the problem, now, wouldn’t it? The only way to validate a GCM is long-term observation (against a fixed or pre-calculated GCM). As McIntyre has pointed out, that hasn’t worked out so well. The IPCC has to keep revising the old model results to keep the temperature observations within the “envelope” of the model spaghetti-graph.
Man Bearpig says:
May 7, 2014 at 4:00 am
In your city you have 10,000 parking machines, and on average 500 break down every day. It takes on average 30 minutes to fix each one and 20 minutes to drive to the location. How many engineers do you need?
========================================================
One good one, to build a better meter.
However, perhaps your point is that the question does not provide nearly the needed data. (Just like in Climate Science.) Much was said already but, among much other missing info, how many hours do you want the repair crews working? Do they take weekends off?
All production issues come down to the four “P”s of production. If there is a production problem, it is in one of those four categories. They are “Paint” (the job itself), “Paint brush” (the tools to do the job), “Picture” (the blueprint, work order, etc. showing what needs to be done), and the fourth P, “People” (the personnel doing the job).
Your picture neglected to point out start and complete times for doing your 400 hours of work.
On every work order, the area-available time and the complete-by time are critical.
Dr Brown:
Your excellent post upgraded to the above article together with your comments in this thread have generated what I think to be the best thread ever on WUWT.
The thread contains much serious information and debate which deserves study by everyone who has reason to discern what climate science can and cannot indicate. Thank you.
Richard
Hmmmmn.
So, the maximum grid size (and timestep) for “physics” to work reliably (for more than one iteration, that is) across a GCM grid (er, cube) requires that a pressure signal (travelling at the speed of sound) cannot cross more than the smallest distance across the cube in one timestep. OK. Let us assume that physical requirement is correct.
Now, the atmosphere is only “one variable” in the GCM of course; therefore, the MAXIMUM cube must be smaller than the height of the atmosphere – and that only IF the atmosphere were to be simplified and modeled as a single cube just “one atmosphere” high, right?
So, from sea level to “top of atmosphere” is ???
Let us first try to define what sea level to “bottom of atmosphere” needs to be. Then, maybe one can go from bottom of atmosphere to top of atmosphere later.
Well, you can define all sorts of criteria: Clearly, the multi-million square kilometers of the Gobi desert have “weather” and that “weather” affects tens of thousands of kilometers around the Gobi, so at a minimum one would need the smallest cube to allow for “land” (heat and radiation balances to land) varying from 0.0 meters up to the Gobi’s 0.900 km to 1.500 km elevation. So the LOWEST cube size MUST be smaller than 1.0 km, if the world is to be modeled from sea level to the bottom of the atmosphere. Central Greenland, central Antarctica, and very large other areas of the earth are higher than 1,000 meters (1.000 km): thus one would at a minimum need “land” cubes no LARGER than 1.0 km vertically to even approximate ground level winds. Or do they assume “sea level” ground everywhere when the models are “run” continuously for thousands of cycles to “set initial conditions” as Nick Stokes requires they be?
The stratosphere is commonly defined to start past 50,000-56,000 ft.
The troposphere (from Wiki) is:
SO, does a GCM need to include the stratosphere? Again, from the Wiki
Hmmn. Sounds like not only does the “ground level” of a GCM need to include a function (presence or absence of a single ground-level cube), requiring a cube no larger than 1.0 km to include the “land” that is being radiated by the sun, but the “total cube” arrangement across a spherical earth that RGB discusses in his first paragraphs above MUST ALSO include the changes in absolute atmosphere thickness that vary from the equator to the poles.
If so, then the MINIMUM “cube stack” of these 1 km cubes needs to be at least 50 cubes high to include all of the “atmosphere” that is being studied.
Now, could one simplify the atmospheric changes by “allowing” the arithmetic to reset the pressures and humidities and temperatures everywhere in every cube as part of Nick Stokes’ “runs”?
Sure. Then, before ANY changes in the forcing functions, one would first need to verify that every model in every one of its cubes DID duplicate the earth’s initial conditions, right?
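Some rough counting (illustrative numbers only) shows why nobody runs at anything like 1 km cubes:

# Rough counting only: cells and timesteps for a global grid of 1 km cubes,
# 50 levels deep, with the timestep set by the sound-speed limit above.
import math

earth_surface_km2 = 4.0 * math.pi * 6371.0 ** 2      # ~5.1e8 km^2
levels = 50                                          # ~50 km of atmosphere in 1 km cubes
cells = earth_surface_km2 * levels                   # ~2.6e10 cells
dt_s = 1000.0 / 340.0                                # ~3 s per step from the acoustic limit
steps_per_century = 100 * 365.25 * 86400.0 / dt_s    # ~1.1e12 timesteps
print("%.1e cells, %.1e timesteps per century" % (cells, steps_per_century))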
rgbatduke says (May 7, 2014 at 8:42 am): “In my opinion — which one is obviously free to reject or criticize as you see fit — using a lat/long grid in climate science, as appears to be pretty much universally done, is a critical mistake, one that is preventing a proper, rescalable, dynamically adaptive climate model from being built. There are unbiased, rescalable tessellations of the sphere — triangular ones, or my personal favorite/suggestion, the icosahedral tessellation.”
Having played in my youth some games that require 20-sided dice, upon reading of the absurdity of the lat/long grid in GCMs, I immediately wondered if similar tessellations could be used in the models (that was in fact about the only point I actually understood in the original article and subsequent discussion).
And my wife said I was wasting my time! Ha! 🙂
(BTW I still have the dice.)
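For the curious, the basic construction is simple enough to sketch (a toy refinement of the d20 onto the sphere; this assumes nothing about how a real model would actually use such a grid):

# Toy icosahedral geodesic grid: start from the 12 vertices of an icosahedron,
# split each triangular face into four by its edge midpoints, and push the new
# points out to the unit sphere. Repeating gives a nearly uniform mesh with no
# polar pinch, unlike a lat/long grid.
import math

phi = (1.0 + math.sqrt(5.0)) / 2.0
verts = [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
         (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
         (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def midpoint_on_sphere(a, b):
    return normalize(tuple((x + y) / 2.0 for x, y in zip(a, b)))

verts = [normalize(v) for v in verts]
# One face of the icosahedron (vertices 0, 1, 5); a full refinement would do
# this for all 20 faces, replacing each triangle with four smaller ones.
a, b, c = verts[0], verts[1], verts[5]
ab, bc, ca = (midpoint_on_sphere(a, b), midpoint_on_sphere(b, c),
              midpoint_on_sphere(c, a))
print(ab, bc, ca)   # three new grid points, all exactly on the unit sphere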
Nick Stokes says:
May 7, 2014 at 6:11 am
Hollywood fake physics – no explanation required. If it was paired up with some satellite data with temperature similarly codified it might have got my attention.
From commieBob on May 7, 2014 at 9:14 am:
And the UK’s government is a constitutional monarchy, with the monarch supposedly having no real power but able to “kick out all the bums” and dissolve Parliament. We have an elected President who just ignores Congress and writes his own laws.
Over there they also refer to a directed safe light source using batteries and a bulb by the same term as a fire on a stick used for general illumination and igniting flammable substances like rubbish piles and witches.
Offhand I’d think the operator of a modern diesel locomotive would be surprised they’re expected to fix those beasts as well. There are likely union rules forbidding it. IPCC head Rajendra K. Pachauri doesn’t seem the kind to practice repair work while he practiced railway engineering; he’d only have industrial lubricants covering his hands if he was researching for writing a novel.
We can’t expect to bring England up to modern standards overnight, but perhaps we can teach them the proper use of at least one word.
Thank you Dr. Brown. Excellent exposition.
Your posts are always a breath of fresh air, even though one sometimes gets a bit dizzy breathing that fresh air.
The climate models are extremely successful.
They have fully met the functional requirements of their funding sponsors.
They were required to produce evidence of a ‘climate catastrophe’ should anthropogenic carbon dioxide emissions continue at even a low rate. At higher rates of emissions there would be unstoppable catastrophes and runaway warming. The models were meant to be impenetrably complex (as described by RGB) and poorly documented, and, from a software industry viewpoint, to show no quality control and be impossible to maintain, with no sensible structure. This lack of quality management ensures that external groups cannot audit the software. This meets the same requirement as not allowing external groups to audit the data, or even the emails about the systems and data.
The world has moved on now – the funding sponsors of the models are (incorrectly) using ensembles of the model output forecasts/predictions/projections to justify: closure of generation plant and entire industries; increases in taxation; and more political power for the sponsors. Thus validating the functional requirement of the models.
Well I was not able to immediately discern the gist of Prof rgb’s post; but I gather he says coding and reading other people’s code is very difficult. I would agree with that and I wouldn’t try to unravel someone else’s code.
I’m sure some people can, including WUWT posters. One reason I don’t try is that I’m more interested in the specific equations being solved than I am in the code that might do that. But I understand that for lots of problems you just can’t churn out a closed form (rigorous) solution, so some sort of “finite element”-like analysis, with small steps of arithmetical solutions, needs to be done in some iterative process.
I do that sort of thing myself in a much simplified form, to develop some graphable form of output, in say an excel spread sheet form.
But I always worry about the issue that Dr. Spencer raised, in that you might overstep some bounds of physics , when doing that.
For people who do the same computation repeatedly, with different data, then obviously a coded computer routine simplifies that process (once you have the code and debugged it). To which one might add: “and properly commented and annotated it so you yourself can follow it.”
Even with the much simpler systems, I deal with, I sometimes find a “neat trick” (well I think so) while I’m working on it.
I’d like a dollar for each time I have come back and then asked, “Why the hell did I do this step here?”
Think about that short term memory fog, when contemplating conversations with ET even just a few light years away.
I can only conclude, when I see planet earth refusing to follow all 57 varieties of GCM, or even one of them, that somehow they are ignoring all the butterflies there are in Brazil.
If, in a chaotic system, a small perturbation pops up unannounced (a “butterfly”, or it could be a Pinatubo), how the hell can any model allow for such events? We know they throw the system for a loop (minor, maybe), but it has to move off on a different path from the one that it was on before the glitch.
I’ve watched those compound coupled pendulum demos, in “science” museums, and it always amazes me that minor perturbations in the start conditions, can morph into such havoc.
So how the hell do we expect to model systems with unpredictable butterflies ??
But I’m glad that Dr. Roy, and Prof Christy are on top of that; and that Prof rgb is advising students on such follies. Izzat Red, Green, Blue ??
There being so few exceptions, it can be safely said that science is always messy. It is the reason why all data/methodology must be publicly shared so that it can be contrasted, reevaluated, etc. It is also the reason why CW is a political and not a scientific issue for most people, no matter how much they yell.
“Climate models have a sound physical basis and mature, domain-specific software development processes” (“Engineering the Software for Understanding Climate Change”, Steve M. Easterbrook and Timothy C. Johns)
Nick Stokes (May 7 5:02am) says “Numerical Weather Prediction is big. It has been around for forty or so years, and has high-stakes uses. Performance is heavily scrutinised, and it works. It would be unwise for GCM writers to get too far away from their structure. We know there is a core of dynamics that works.“.
The weather models do a pretty good job, on the whole, at predicting weather up to a few days ahead. Weeks ahead, not much use. Months ahead, no chance. Years ahead, don’t be daft. But for climate, we need years ahead. Mathematically, a model that operates on the interactions between little time-slices of little notional boxes in the atmosphere must quickly and exponentially diverge from reality, so it cannot possibly give us years ahead. That means that the weather model structure is useless for climate. Instead of weather models we need climate models. We don’t have any.
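A toy illustration of that divergence, using the classic Lorenz-63 system (nothing to do with any actual weather model): two runs that differ by one part in a billion in their starting point part company after a few thousand steps.

# Toy illustration of sensitive dependence on initial conditions: two runs of a
# crude forward-Euler Lorenz-63 integration, started one part in a billion apart,
# diverge until the difference is as large as the attractor itself.
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)
for i in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)
    if i % 500 == 499:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print("step %4d   separation %.3e" % (i + 1, gap))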
RACookPE1978 says: May 7, 2014 at 12:34 pm
“So, the maximum grid size (and timestep) for “physics” to work reliably (for more than one iteration, that is) across a GCM grid (er, cube) requires that a pressure signal (travelling at the speed of sound) cannot cross more than the smallest distance across the cube in one timestep”
I think you are on to something there. But it is an old problem, and long resolved.
As I’ve said, there is an acoustic wave equation embedded in the N-S equations (as there must be). And resolving sound waves is the key numerical problem in N-S solution. Traditionally, a semi-implicit formulation was used. The Pressure Poisson Equation that is central to those is the implicit solution of the acoustics.
But GCMs are explicit in the horizontal directions. Then they use an implicit method in the vertical, which is hydrostatic balance. That is, only the vertical component
0 = -(1/ρ) ∂p/∂z - g
of the momentum equation is retained. Since the acceleration term has gone, the wave equation is disabled. It works because there is very little large-scale vertical acceleration in the atmosphere.
As with any stepwise method, that doesn’t mean that everything else is assumed zero. You have to define a process which gets the main effects right. Then you can catch up, with iterations if necessary. Of course, there are situations where vertical acceleration is important – thunderstorms etc. So they have vertical updraft models. The point of the splitting is to solve for the physics you want without letting the N-S introduce spurious sound waves.
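A minimal sketch of what retaining only hydrostatic balance in the vertical amounts to numerically (an isothermal toy column, nothing taken from any GCM):

# Toy hydrostatic column: with vertical acceleration dropped, the vertical
# momentum equation reduces to dp/dz = -rho * g, which can be integrated level
# by level instead of being solved as part of a wave equation. Isothermal air
# and 1 km layers are assumed purely for illustration.
g, R_d, T = 9.81, 287.0, 250.0       # gravity, dry-air gas constant, fixed temperature
p = 101325.0                         # surface pressure, Pa
dz = 1000.0                          # layer thickness, m

for level in range(20):
    rho = p / (R_d * T)              # ideal gas density at the current level
    p = p - rho * g * dz             # hydrostatic step up to the next level
    print("z = %5d m   p = %8.0f Pa" % (int((level + 1) * dz), p))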