Elevated from a WUWT comment by Dr. Robert G. Brown, Duke University
Frank K. says: You are spot on with your assessment of ECIMs/GCMs. Unfortunately, those who believe in their ability to predict future climate really don’t want to talk about the differential equations, numerical methods or initial/boundary conditions which comprise these codes. That’s where the real problems are…
Well, let’s be careful how you state this. Those who believe in their ability to predict future climate but aren’t in the business don’t want to talk about all of this, and those in the business who aren’t expert in predictive modeling and statistics in general would in many cases prefer not to have a detailed discussion of the difficulty of properly validating a predictive model — a process which basically never ends as new data comes in.
However, most of the GCMs and ECIMs are well, and reasonably publicly, documented. It’s just that unless you have a Ph.D. in (say) physics, a knowledge of general mathematics and statistics and computer science and numerical computing that would suffice to earn you at least a master’s degree in each of those subjects if acquired in the context of an academic program, plus substantial subspecialization knowledge in the general fields of computational fluid dynamics and climate science, you don’t know enough to intelligently comment on the code itself. You can only comment on it as a black box, or comment on one tiny fragment of the code, or physics, or initialization, or methods, or the ode solvers, or the dynamical engines, or the averaging, or the spatiotemporal resolution, or…
Look, I actually have a Ph.D. in theoretical physics. I’ve completed something like six graduate level math classes (mostly as an undergraduate, but a couple as a physics grad student). I’ve taught (and written a textbook on) graduate level electrodynamics, which is basically a thinly disguised course in elliptic and hyperbolic PDEs. I’ve written a book on large scale cluster computing that people still use when setting up compute clusters, and have several gigabytes worth of code in my personal subversion tree and cannot keep count of how many languages I either know well or have written at least one program in dating back to code written on paper tape. I’ve co-founded two companies on advanced predictive modelling on the basis of code I’ve written and a process for doing indirect Bayesian inference across privacy or other data boundaries that was for a long time patent pending before trying to defend a method patent grew too expensive and cumbersome to continue; the second company is still extant and making substantial progress towards perhaps one day making me rich. I did advanced importance-sampling Monte Carlo simulation as my primary research for around 15 years before quitting that as well. I’ve learned a fair bit of climate science. I basically lack a detailed knowledge and experience of only computational fluid dynamics in the list above (and understand the concepts there pretty well, but that isn’t the same thing as direct experience) and I still have a hard time working through e.g. the CAM 3.1 documentation, and an even harder time working through the open source code, partly because the code is terribly organized and poorly internally documented to the point where just getting it to build correctly requires dedication and a week or two of effort.
Oh, and did I mention that I’m also an experienced systems/network programmer and administrator? So I actually understand the underlying tools REQUIRED for it to build pretty well…
If I have a hard time getting to where I can — for example — simply build an openly published code base and run it on a personal multicore system to watch the whole thing actually run through to a conclusion, let alone start to reorganize the code, replace underlying components such as its absurd lat/long gridding on the surface of a sphere with rescalable symmetric tessellations to make the code adaptive, isolate the various contributing physics subsystems so that they can be easily modified or replaced without affecting other parts of the computation, and so on, you can bet that there aren’t but a handful of people worldwide who are going to be able to do this and willing to do this without a paycheck and substantial support. How does one get the paycheck, the support, the access to supercomputing-scale resources to enable the process? By writing grants (and having enough time to do the work, in an environment capable of providing the required support in exchange for indirect cost money at fixed rates, with the implicit support of the department you work for) and getting grant money to do so.
And who controls who, of the tiny handful of people broadly enough competent in the list above to have a good chance of being able to manage the whole project on the basis of their own directly implemented knowledge and skills AND who has the time and indirect support etc, gets funded? Who reviews the grants?
Why, the very people you would be competing with, who all have a number of vested interests in there being an emergency, because without an emergency the US government might fund two or even three distinct efforts to write a functioning climate model, but they’d never fund forty or fifty such efforts. It is in nobody’s best interests in this group to admit outsiders — all of those groups have grad students they need to place, jobs they need to have materialize for the ones that won’t continue in research, and themselves depend on not antagonizing their friends and colleagues. As AR5 directly remarks — of the 36 or so named components of CMIP5, there aren’t anything LIKE 36 independent models — the models, data, methods, code are all variants of a mere handful of “memetic” code lines, split off on precisely the basis of grad student X starting his or her own version of the code they used in school as part of a newly funded program at a new school or institution.
IMO, solving the problem the GCMs are trying to solve is a grand challenge problem in computer science. It isn’t at all surprising that the solutions so far don’t work very well. It would rather be surprising if they did. We don’t even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere. So the programs literally cannot be made to run at a finer resolution without basically rewriting the whole thing, and any such rewrite would only make the problem at the poles worse — quadrature on a spherical surface using a rectilinear lat/long grid is long known to be enormously difficult and to give rise to artifacts and nearly uncontrollable error estimates.
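To illustrate the gridding problem, here is a minimal sketch (assuming a uniform 64 x 128 lat/long grid, not code taken from any actual GCM) of how cell areas collapse toward the poles on a rectilinear lat/long mesh:

```python
# A minimal sketch (not taken from any GCM) of why a uniform lat/long grid is
# awkward on a sphere: cell areas collapse toward the poles, so resolution is
# wildly non-uniform and the tiny polar cells force very small time steps.
import numpy as np

R_EARTH = 6.371e6          # mean Earth radius in metres
nlat, nlon = 64, 128       # illustrative grid, roughly T42-like resolution

lat_edges = np.linspace(-90.0, 90.0, nlat + 1)   # degrees
dlon = 2.0 * np.pi / nlon                        # longitude width in radians

# Area of a lat/long cell: R^2 * dlon * (sin(lat_top) - sin(lat_bottom))
sin_edges = np.sin(np.radians(lat_edges))
cell_areas = R_EARTH**2 * dlon * np.diff(sin_edges)   # one area per latitude band

print(f"equatorial cell: {cell_areas[nlat // 2]:.3e} m^2")
print(f"polar cell:      {cell_areas[0]:.3e} m^2")
print(f"area ratio:      {cell_areas[nlat // 2] / cell_areas[0]:.1f}")
```

Even at this coarse resolution the equatorial cells are roughly forty times the area of the polar ones, which is one source of the artifacts and error-control problems mentioned above.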
But until the people doing “statistics” on the output of the GCMs come to their senses and stop treating each GCM as if it is an independent and identically distributed sample drawn from a distribution of perfectly written GCM codes plus unknown but unbiased internal errors — which is precisely what AR5 does, as is explicitly acknowledged in section 9.2 in precisely two paragraphs hidden neatly in the middle that more or less add up to “all of the `confidence’ given the estimates listed at the beginning of chapter 9 is basically human opinion bullshit, not something that can be backed up by any sort of axiomatically correct statistical analysis” — the public will be safely protected from any “dangerous” knowledge of the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
Nick Stokes;
The reason he couldn’t give you a number is that it is intended to damp the effect before you can notice. And the amount of energy redistributed is negligible.
>>>>>>>>>>>>>>>>>>>
Read those two sentences again Nick.
Gobsmacked.
davidmhoffer says: May 7, 2014 at 7:33 pm
“Gobsmacked.”
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
“What did you do?”
“O, I sent filtered feedback from output to input. Only RF, no audio”
“Good. How much energy are you sending back?”
“Well, I can’t notice any RF at all?”
“Gobsmacked. How can it work if you feed back no energy?”
“Do you want me to take it out?”
Nick Stokes;
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
>>>>>>>>>>>>>>>>>
1. Negative feedback in an amplifier does change the sound. Transient distortion in particular rather than harmonic. That amplifiers are now good enough that the human ear cannot for the most part detect the distortion by no means suggests that it doesn’t exist, nor that it isn’t significant under certain conditions.
2. Your analogy has nothing to do with the physics at hand. Perhaps I understood the explanation incorrectly, but if I did, the bottom line is pretty simple. The models are calculating a number known to be wrong, and spreading it out across the planet surface on the assumption that the error is insignificant. Since you don’t know WHY the number is wrong (if you did, the code would be corrected to fix it instead of some bizarre method of damping it out) you ALSO don’t know by how much. You don’t know for example if correcting it would reveal still another error that is being masked by the first one. You may even find out that there is a second problem that is bigger than the first but of opposite sign. It gets worse from there. Your damping method might be fine. But since you don’t know WHY the original calc is wrong in the first place, for all you know the fix is introducing cumulative error larger than the error you were trying to fix in the first place.
The models are increasingly running warmer than reality; even the IPCC has grudgingly admitted that. Give your head a shake. The models are wrong, nobody knows exactly why, and here you are defending the practice of handling energy balance in a completely unrealistic fashion. No wonder the models don’t work well when they are underpinned by reasoning such as yours.
So we drop the GCMs and improve the global energy balance models concentrating on the role of CO2 and the feedbacks supported with spectral absorption tools and calibrate the model with data such as how evaporation and absolute surface humidity vary with temperature. As an early example see http://wattsupwiththat.com/2014/04/15/major-errors-apparent-in-climate-model-evaporation-estimates/
davidmhoffer says: May 7, 2014 at 8:20 pm
“Negative feedback in an amplifier does change the sound.”
No, the point here is that the oscillation is RF – radio frequency. You can feed that component back, filtering out audio, and it won’t change the sound. In fact it usually won’t do anything, because in normal operation there is no RF.
Same here. The analogue is relatively high frequency sound (wavelength 100 km, say) that is distorted by poor resolution, and can create a growing mode (like the RF). It shouldn’t be there, or at least, you can do without it. If one starts, its spurious energy will show up as a discrepancy and be dissipated. How much? As with the RF, if it works, you won’t see any. I’m sure they flag and deal with any substantial energy discrepancy.
Nick Stokes;
No, the point here is that the oscillation is RF – radio frequency. You can feed that component back, filtering out audio, and it won’t change the sound. In fact it usually won’t do anything, because in normal operation there is no RF.
>>>>>>>>>>>>>>>.
In this scenario you are taking two signals, both from the REAL world, and using a technique to eliminate ONE of them. The technique ABSOLUTELY affects the other one. Which is beside the point.
There is no “real signal” in a model. You’ve got an artificial signal generated by the model itself. So you’ve got bad math creating a problem and you’re using more math to eliminate it. Band aids upon band aids.
Nick Stokes;
It shouldn’t be there, or at least, you can do without it.
>>>>>>>>>>>>
You’re right, it shouldn’t be there. The fix is to not create it in the first place. There’s no analog signal coming into the model that has to be taken out. You’re creating an excuse for erasing something the model created, and maintaining that erasing it by spreading it out across the modeled earth surface is a valid way to do it. Bull.
Nick Stokes;
If one starts its spurious energy will show up as a discrepancy and be dissipated. How much? As with the RF, if it works, you won’t see any. I’m sure they flag and deal with any substantial energy discrepancy.
>>>>>>>>>>>>>>>.
You’re “sure”? So you don’t know? IF it works you shouldn’t notice it? How do you know it works Nick? If the models were right, you would have a leg to stand on. But they aren’t and you don’t.
Argh I did it again. Apologies, Stephen Rasey.
May 7, 2014 at 6:25 pm | Nick Stokes says:
Sure, Nick … but where did the lame stream media get all of its catastrophic forecasts? BoM published maps that indicated Ita coming ashore up near Cape Melville and spearing inshore … I notice that they’ve now taken down these maps … “IDQ65002 is not available” … 😉 There’s a fair distance between Cape Melville and Cooktown (great pub there) … and it trickled down the coast before falling into a wimpish wet heap north of Townsville. Seems that the ‘program’ is really only good for 24hr forecasting done at 3 hr intervals.
Climate models in need of review
http://notrickszone.com/#sthash.9Hpx0txZ.dpuf
Might be useful to index the growing number of quotes from eminent climate scientists regarding the failure of the GCM’s.
Nick Stokes says:
May 7, 2014 at 7:44 pm
davidmhoffer says: May 7, 2014 at 7:33 pm
“Gobsmacked.”
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
“What did you do?”
“O, I sent filtered feedback from output to input. Only RF, no audio”
“Good. How much energy are you sending back?”
“Well, I can’t notice any RF at all?”
“Gobsmacked. How can it work if you feed back no energy?”
“Do you want me to take it out?”
If you have an amplifier oscillating at RF, you find out why and stop it. [I’ve been doing just that, with a Quad 405 bought for ‘spares or repair’].
To think that you can reliably sort out such a problem by applying feedback from output to input via a high-pass filter is a delusion. It would be a miracle if such a procedure worked.
I’d imagine the same conclusion applies to the procedure of ‘correcting’ energy balance errors in GCMs.
Friends:
Many good – and important – details are being discussed, but the thread’s discussion seems to have lost sight of a point which I think is more important than such details.
In his above article Robert G Brown writes
That is true (i.e. the models are not independent) but, despite that, each of the models emulates a climate system different from that of every other model.
Long ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
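The ratios quoted here can be checked with a couple of lines of arithmetic, using only the ranges stated above as read off Kiehl’s Figure 2:

```python
# Quick arithmetic check of the quoted ratios, using the ranges stated in the text.
total_forcing = (0.80, 2.02)       # W/m^2, range of "Total anthropogenic forcing"
aerosol_forcing = (-1.42, -0.60)   # W/m^2, range of "Aerosol forcing"

print(f"total forcing ratio:   {total_forcing[1] / total_forcing[0]:.2f}")    # ~2.53
print(f"aerosol forcing ratio: {aerosol_forcing[0] / aerosol_forcing[1]:.2f}")  # ~2.37
```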
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
The outputs of the models cannot be validly averaged because the models are not independent, and any obtained average must be wrong because an average of wrong answers is wrong.
Richard
richardscourtney says:
May 8, 2014 at 4:23 am
—————–
Richard, everyone can use any of my charts at any time, without attribution. To me, it is just information that should be available to all.
In response to HomeBrew, rgbatduke says:
“have several gigabytes worth of code in my personal subversion tree”
I hope it’s not several gigabytes of your code only or else you are a really bad programmer (which often is the case since everybody who’s taken some kind of university course considers themselves fit for computer programming).
Well, let’s not forget, either, that the point of a subversion tree is to (potentially) have many branches of code for different development aspects. Internally, a well-designed (i.e., modern) implementation of subversion means that (mostly) only differences in code are stored between versions, but that’s totally internal. To the end user, though a lot of it might be VERY redundant, there can easily be MANY gigabytes of code visible for a few complex projects with many branches.
Mathematical models of cardiac activation depend on coupled PDEs, which describe current flow within the heart, and ODEs which describe the nature of current sources within cells. The latter depend exquisitely on parameterisation – at least 53 coupled ODEs that depend on parameters measured under very artificial conditions.
When I was told by a professor of mathematics at a major institution “If that is what the equations say, that is what is happening”, I came to a conclusion similar to your own.
The upshot of “Mathematical Cardiac Physiology” is that there are supposed “spiral waves” of activation in ventricular fibrillation. This has found its way into major textbooks (including one from MIT). When presenting data that refutes this, I have been told that my data must be wrong because they do not conform to current models!
The problem is that nobody has ever demonstrated the presence of “spiral waves” in animal preparations, and such data as exist from intact human hearts do not support their existence.
Yeah, I know of some people at Duke who have worked on this. Interesting (and difficult) problem. My recollection of the discussion is that (however it is modelled) the oscillator is supposedly sufficiently nonlinear that when its structure degrades in certain ways or is desynchronized in certain ways, the heart’s beat becomes chaotic — period doubling occurs and it descends into the branches of a Feigenbaum tree with both spatial and temporal chaos. I don’t know how much of this is related to a specific e.g. spiral model (not having ever looked at the heart models at all) but the idea is an appealing one and — I thought — had support from ECG traces and so on. Not to open another can of worms on WUWT, of course. I’m surprised that they have elevated the models to the point of truth without any direct experimental confirmation and observation, though. Do the models have ANY predictive value that makes them appealing even if internal details are wrong?
rgb
Climate models predicting some 100s of years… It seems there’s no problem; these fellows can model the Universe. All 13 billion years of it. I wonder what Chris Essex says? As usual, very interesting info is in the Acknowledgements section – all the funding agencies allocating our taxes so wisely.
See: Nature 509, 177–182 (08 May 2014) doi:10.1038/nature13316
Properties of galaxies reproduced by a hydrodynamic simulation
M. Vogelsberger, S. Genel, V. Springel, P. Torrey, D. Sijacki, D. Xu, G. Snyder, S. Bird, D. Nelson & L. Hernquist
Previous simulations of the growth of cosmic structures have broadly reproduced the ‘cosmic web’ of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models. Moreover, they were unable to track the small-scale evolution of gas and stars to the present epoch within a representative portion of the Universe. Here we report a simulation that starts 12 million years after the Big Bang, and traces 13 billion years of cosmic evolution with 12 billion resolution elements in a cube of 106.5 megaparsecs a side. It yields a reasonable population of ellipticals and spirals, reproduces the observed distribution of galaxies in clusters and characteristics of hydrogen on large scales, and at the same time matches the ‘metal’ and hydrogen content of galaxies on small scales.
Completely fail? I’d like to see that quantified.
Well, there are two places and ways you can. You can go get chapter 9 of AR5, which I’ve been referring to for months now, and look at figure 9.8a. Eyeball the collective variability of the MME mean compared to the actual temperature (accepting uncritically for the moment that HADCRUT4 as portrayed is the actual temperature). You will note that the MME mean skates smoothly over the entire global temperature variation of the first half of the 20th century, smoothing it out of existence. You will note that the MME mean smoothly diverges from the temperature post 2000. You will note carefully the term “over” — count the collective extent where the MME mean exceeds the actual temperature versus the places where it is lower (and imagine that the MME mean is actually tracking the physics, so that the probabilities of it being higher than the actual temperature and lower than the actual temperature outside of the reference interval should be about equal). If you then compute the probability of the observed data — even with a crude estimate — and turn it into a p-value, I think you’ll conclude that the MME mean soundly fails a hypothesis test with the null hypothesis “the MME mean accurately predicts the temperature within statistical noise”.
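The kind of crude test being described might look like the following sketch. The over/under counts below are placeholders for illustration, not values digitised from Figure 9.8a:

```python
# A crude sketch of the sign test described above: if the MME mean were an
# unbiased predictor, years where it runs above the observed anomaly and years
# where it runs below should be roughly 50/50 outside the reference interval.
from math import comb

def sign_test_p(n_above, n_below):
    """Two-sided exact binomial test against p = 0.5 (no SciPy required)."""
    n = n_above + n_below
    k = max(n_above, n_below)
    # probability of a split at least this lopsided under the null hypothesis
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    return min(1.0, 2.0 * tail)

# Hypothetical example: model mean above observations in 28 of 30 years
print(f"p-value = {sign_test_p(28, 2):.2e}")
```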
If you look at the per-model traces (hard to do in the spaghetti graph presented) you will observe that the individual models — in spite of themselves being PPE averages of tens to hundreds of traces — have truly absurd variability compared to the actual climate. Again, everywhere but the reference period, their variance is easily 2-3 times the variance of the climate, where the climate is a single trace and the models are the averages of many, many traces. Surely you are aware of the fluctuation-dissipation theorem. Surely you understand that this is direct evidence that the models, having the wrong dissipation — or if you prefer, susceptibility — are deeply broken. They do not correctly capture the physics of the dissipation of global energy, and incorrectly balance gain terms with loss terms to manage to hold even close to the mean. Since the balance is delicate, they consistently overshoot before negative feedback drives the system back towards the mean, and do so (rather amazingly) across multiple runs. I shudder to think of what individual runs might look like compared to the actual trace — one expects that the variance is at least 3 to 10 times even what is portrayed in 9.8a.
The second place you can look is, as I suggested earlier, Chris Essex’s video (linked above). He’s not an idiot — he’s a mathematician, AFAICT — and in the video he explains in some detail why the models are very unlikely to be able to succeed in what they are attempting to do, beginning with a quote from AR3 that states basically that “the models are very unlikely to be able to succeed in what they are attempting to do”. That is, everybody knows that predicting a chaotic system at inadequate resolution over very long times is impossible, and knew it fifteen years ago. I knew it then. You knew it then. We all know it. So why do we persist in pretending that in the field of climate science we can do it, when not only do we fail everywhere else, but when solving this is a million dollar grand challenge math problem that thus far has defied solution? It was very amusing to watch the video and watch him make almost precisely the same points I’d already written down in text above, one after the other.
Except for one, the one you should watch the video for as it answers your question. He presents a graph — from AR4 IIRC — of the fourier transform of the temperature series generated by the GCMs at that time. My recollection was that the FT was for unforced climate models — CO_2 increases turned off. As he points out, the climate scientists present it in terms of period, not frequency (as Willis has been doing) but that is unimportant. The point is that the FT starts off noisy and low, rises to a substantial peak on the order of a few years in, and then drops off to flat — flatty flat flat, as he waggishly puts it — outside of around 100-200 months. It is an absolute wasteland of flatness beyond that.
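For readers who want to see what such a spectrum looks like mechanically, here is a minimal sketch that computes a power spectrum versus period from a monthly anomaly series. The series here is synthetic red noise used purely for illustration, not GCM output or observations:

```python
# A minimal sketch of the kind of spectrum described above: the power spectrum
# of a monthly temperature-anomaly series, expressed against period rather than
# frequency. The series is synthetic AR(1) "red" noise, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_months = 1200                       # 100 years of monthly anomalies

anom = np.zeros(n_months)
for t in range(1, n_months):
    anom[t] = 0.9 * anom[t - 1] + rng.normal(scale=0.1)

spectrum = np.abs(np.fft.rfft(anom - anom.mean()))**2
freqs = np.fft.rfftfreq(n_months, d=1.0)     # cycles per month
periods = 1.0 / freqs[1:]                    # months; skip the zero frequency

# A spectrum that is flat beyond ~100-200 months implies no long-period variability
for p, s in zip(periods[:5], spectrum[1:6]):
    print(f"period {p:7.1f} months: power {s:10.2f}")
```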
This is unsurprising — the GCMs are (incorrectly) balanced so that there is a single relevant knob — anthropogenic CO_2 — that determines the climate set point, and so that the other things like volcanic aerosols that have any observable effect have a lifetime of at most a few years post a major event. A cursory examination of the data reveals that they have too high a “Q” value — their effective mass and spring constant are too large relative to their damping (fluctuation-dissipation, again) — so that they oscillate over many times the correct range, but at the same time are tightly bound to a fixed equilibrium with little room for any sort of naturally cumulating variation. The data presented in this curve is proof of this. There is no long term variability in the GCMs, yet the temperature anomaly itself varies by amounts ranging from 0.2 to 0.4 C per decade in numerous places in the climate record, and cumulates temperature differences of up to a bit more than 1 C over a century or two in numerous places. The spectrum of the actual climate is not flat and the actual climate has long term natural trends that are not driven by CO_2 — as we have discussed in numerous threads on WUWT.
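A toy calculation illustrates the "wrong Q" point: a damped driven oscillator with too little dissipation overshoots its equilibrium and rings, while a properly damped one settles smoothly onto it. This is only an illustration of the physics being invoked, not a climate model of any kind:

```python
# A toy illustration (not a climate model) of the "wrong Q" point: a damped
# driven oscillator with too little dissipation overshoots and rings around its
# equilibrium, while a well-damped one settles smoothly.
import numpy as np

def step_response(Q, omega0=1.0, dt=0.01, t_max=60.0):
    """Integrate x'' + (omega0/Q) x' + omega0^2 x = omega0^2 (unit step forcing)."""
    n = int(t_max / dt)
    x, v = 0.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        a = omega0**2 * (1.0 - x) - (omega0 / Q) * v
        v += a * dt                 # semi-implicit Euler update
        x += v * dt
        trace[i] = x
    return trace

for Q in (0.7, 10.0):               # well damped vs. high-Q
    trace = step_response(Q)
    print(f"Q = {Q:4.1f}: peak overshoot = {trace.max():.2f} (equilibrium = 1.00)")
```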
So there are two sources of support for my statement — you can lay an eyeball on the actual data traces in AR5 and see for yourself that the models have a mean behavior that is not accurate EITHER in the past OR the future of the reference period (plus note in passing the direct evidence that the individual models, viewed as damped driven oscillators with a slowly varying driving force, generally have the wrong Q — a really, really wrong Q), or you can visit Essex’s video and look at the AR4 FT (or, probably, look at it directly in AR4, but I don’t have any idea what chapter or page it is in). I’d strongly suggest that you at LEAST do the former — you are a physicist and a smart guy and can read graphs at least as well as I can — look at figure 9.8a and tell me that the figure is strong evidence that the models are correct. They work well only in the reference period.
Of course they work well in the reference period. Nobody cares. Anybody can fit a nearly monotonic function over a 25 year period, especially when the models incorporate the major volcanic events with a significant short term effect. What they don’t do is damp correctly, even from these events. They don’t exhibit any tendency to wander around slowly, cumulating natural variability. They do exhibit a tendency to spike way up in temperature, then zoom straight down in temperature, then zoom way up in temperature again — and this is a mean behavior already averaging over many runs, persistent over long time integrals!
How you, or anybody, can look at this behavior and conclude that “the models are working” is beyond me. I look at it and go “back to the drawing board”, or “not ready for prime time”. But Chris Essex’s astounding assertion at the beginning of his talk explains a lot. He was briefly recruited by a climate group to help them find the smoking gun condemning CO_2 by building GCMs that showed strong warming. I have to say that it sounded like he was quoting them.
Not since I read through the climategate letters have I encountered anything so appalling in science.
How much confidence would you place in a study that hired somebody to “find the smoking gun” connecting the eating of wheat bran to infertility rates in women which then found it? To “find the smoking gun” relating being black to having lowered intelligence? To “find the smoking gun” relating the consumption of jelly beans to acne:
http://xkcd.com/882/
To “find the smoking gun” linking, well, anything to anything.
That is truly a recipe for disaster in science, an open invitation to its corruption. People hired for the express purpose of “finding something” in science either find it or else, well, they become unemployed and their spouses get angry, their children have to change schools, they have to dip into their retirement, they fail to get tenure and have to become car salesmen or Wal Mart greeters.
rgb
This is powerful stuff -and I agree, worth archiving for that long distant posterity that will look back and say ‘daddy, where were you when all this climate modelling was rampant across all science academies, all governments, the EU, the UN, all environmental NGOs…..and the whole of the left-liberal-green coalition, their press and most of the media?’
I can’t follow the code critique – well, not much of it. But I know good critical stuff when I see it. But there is something missing. Here’s a little bit of history that will illustrate the problem – it is about the creation of alternative models (which is a very tricky business) and how to use them. Maybe there is a lesson here that someone can take up.
In the early part of the nuclear industry in Britain and the US, computer simulation was used to assess the environmental impact of reactor accidents – including melt-down of the core. But this was all kept secret. The reactors were licensed – and ‘safety’ standards set without public involvement, parliamentary oversight or needless to say, critical input from outsiders. Toward the end of that decade, the American Physical Society forced the issue, and the first reports were made public on the consequences of melt-downs. In the UK, this prompted a Royal Commission report (1976). Of course, many reactors had already been built by then.
The consequences of aerial releases were however obfuscated in the way the models presented ‘probabilistic’ combinations of failure rates, amount released, radiotoxicity and even wind direction, to arrive at an individual risk component – that was very, very small. The societal risk was not computed – and of course, failure rates were unvalidated because of the complexity of the systems – and did not include such factors as terrorist attack or aircraft impact. Concerned outsiders, such as myself, wanted to know ‘what if’ the event has happened and the wind is in my direction – but the models could not do that (or rather – in the hands of the nuclear labs, they would not do that).
So – we (my colleagues in an environmental science research group and I) petitioned for the release of the models. We succeeded. Next we had to understand them and modify the parameters to make them deterministic. This was during the time of punch cards on computers that you could walk around inside of! I knew nothing myself – but others came to our aid – even from within the nuclear labs – and the programmes were decoded and rerun – all with alternative but justifiable parameters – such as factoring out the probability of failure (and assuming it has happened), fixing the wind direction, but accepting all the downwind consequence part of the analysis in order to limit objections on methodology.
We succeeded in lifting the obscurity and focussing the consequence side of the equation not on personal individual risk of cancer (tiny beyond 10 miles) but the societal impact of contaminated land, relocation, loss of production, etc. These reports were then fed into the policy process at all levels of government decision making and in the EU (with some small successes in affecting emergency planning procedures).
Later in the 1980s, we had gained the trust of that particular aerial modelling community. At one point, my research group even commissioned a model-run from a UK national laboratory to input to an inquiry.
Could it be done today? That is – take one of the better models and modify the key parameters of, say, the lambda factor in the RF equations or aerosol fudging factors, and add a coming Maunder Minimum in solar activity? I did ask our MetOffice Hadley Centre if I could have access to their simpler model – the one they used to predict that a coming Maunder Minimum would not significantly slow down global warming – on the assumption that I could find some help to modify and run it. But they ignored the request (Julia Slingo was otherwise polite). In the old days, such requests could not be ignored because we had a battery of big guns on our side from the NGOs, the press, and even the trades unions representing emergency workers (who funded the study we commissioned). Now, of course, we have no friends or allies – well, at least, none we would care to embrace. And actually, it is not much of a ‘we’ over here in the UK.
But could it be done in the USA? The expertise is there, from what this thread shows.
I would be interested to know……. peter.taylor(at)ethos-uk.com
and to participate.
RGB Your latest post shows that you are completely convinced that the IPCC model outputs provide no basis for forecasting future climate. Is it not time to move on? I repeat an earlier comment which I hope you can find time to respond to.
“RGB I have been saying for some years that the IPCC models are useless for forecasting and that a new approach to forecasting must be adopted. I would appreciate a comment from you on my post at 5/7/6:47 am above and the methods used and the forecasts made in the posts linked there at http://climatesense-norpag.blogspot.com
The 1000-year quasi-periodicity in the temperatures is the key to forecasting. Models cannot be tuned unless they could run backwards for 2-3000 years using different parameters and processes on each run for comparison purposes. This is simply not possible.
Yet even the skeptics seem unable to break away from essentially running bits of the models over short time frames as a basis for discussion.
Perhaps it would help psychologically if people thought of the temperature record as the output of a virtual computer which properly integrated all the component processes.
Let us not waste more time flogging the IPCC modeling dead horse and look at other methods.”
I will add some noise to this conversation with Nick Stokes and davidmhoffer. For context:
Nick Stokes says: May 7, 2014 at 7:44 pm
OK. You have a wonderful audio amplifier. But sometimes it goes haywire with RF oscillations. You tell your EE – fix it, but don’t change the sound. He does.
“What did you do?”
“O, I sent filtered feedback from output to input. Only RF, no audio”
“Good. How much energy are you sending back?”
“Well, I can’t notice any RF at all?”
“Gobsmacked. How can it work if you feed back no energy?”
“Do you want me to take it out?”
I (*) can tell you how much energy I am feeding back by direct measurement (oscilloscope or spectrum analyzer, the latter probably better for this purpose).
Your argument seems to be that negative feedback removes the RF entirely and thus there is nothing left to measure. This is incorrect. RF energy ALWAYS exists as broadband thermal noise. That’s what starts the oscillator in the first place. Even when damped it is still there; the idea is to reduce it such that at RF frequencies the gain of the amplifier is less than 1. It is still there and can be measured with a spectrum analyser (or, in a pinch, an oscilloscope with no audio input signal).
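As a toy calculation of the feedback arithmetic being described (all component values are invented for illustration, nothing here describes a real amplifier): with closed-loop gain A / (1 + A*beta) and a high-pass feedback path, the gain at RF is pulled down toward unity while the audio gain is barely touched:

```python
# A rough sketch of frequency-selective negative feedback: closed-loop gain is
# A / (1 + A*beta). A high-pass feedback network makes beta negligible at audio
# and large at RF, so the RF gain collapses while audio is nearly untouched.
# All values are invented for illustration.
import numpy as np

A_open = 100.0            # assumed open-loop voltage gain, taken as flat for simplicity
f_corner = 1e7            # Hz; hypothetical corner of the high-pass feedback path

def closed_loop_gain(f):
    # First-order high-pass feedback fraction: ~0 at audio, ~1 well above f_corner
    beta = (f / f_corner) / np.sqrt(1.0 + (f / f_corner)**2)
    return A_open / (1.0 + A_open * beta)

for f in (1e2, 1e3, 1e7, 1e8):    # two audio frequencies, two RF frequencies
    print(f"{f:12.0f} Hz: closed-loop gain = {closed_loop_gain(f):8.3f}")
```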
In a climate model, which is a computer program, noise cannot exist at any frequency unless the model puts it there. The argument seems to be dealing with “noise” through filtering versus simply not adding noise in the first place.
I hope we can all agree that a model with no noise also cannot vary. All runs of a model, in fact all models using the same code and data, must necessarily produce exactly the same output every time. That was my criticism of Michael Mann’s offer of a MatLab model and data — to see if you get the same result.
Duh, of course I will get the same result. It’s a computer program. It does exactly what it was told to do. I pointed this out on the Scientific American website and was banned shortly thereafter 🙂 (Another groupthink echo chamber bites the dust)
I recognize that a model could be extremely sensitive to input conditions. Hashing algorithms are an example; changing a single byte of input to MD5, SHA-1, SHA-2 or even CRC produces a completely different output in a way that is not supposed to be predictable. This is not noise, it is still deterministic as is any computer program, but it is hard to PREDICT. That’s okay.
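A quick illustration of that avalanche behaviour, using Python's standard hashlib (the input strings are arbitrary examples):

```python
# Flipping a single byte of input changes the MD5 and SHA-256 digests completely,
# yet the computation is perfectly deterministic: run it twice, get the same output.
import hashlib

a = b"climate model input 1"
b = b"climate model input 2"        # differs from `a` by one byte

for name in ("md5", "sha256"):
    print(f"{name}({a!r}) = {hashlib.new(name, a).hexdigest()}")
    print(f"{name}({b!r}) = {hashlib.new(name, b).hexdigest()}")
```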
Whups, forgot to explain the (*). Qualifications: former holder of a First Class FCC radiotelephone license and current holder of an Amateur Extra class radio license. Of these the First Class FCC was considerably more difficult and included questions related to this subtopic. I’m also a part-time computer programmer so I know that programs are deterministic. They will always produce the same output for the same input, but the output might not be easily predictable or reversible (as in hashing algorithms).
Hashing algorithms use feedback internally and so do weather systems, seems to me.
“The process Gavin describes damps this at the outset, automatically. It takes the energy out of the mode. The reason he couldn’t give you a number is that it is intended to damp the effect before you can notice. And the amount of energy redistributed is negligible.”
And yet measurable. If you can mathematically damp it then you already know its magnitude and it can be reported. All such computations can be logged to a file if you wish, so you can know with precision every instance of adjustment and its cumulative effect.
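A sketch of what such logging might look like, using a hypothetical "energy fixer" with placeholder imbalance values rather than anything taken from a real GCM:

```python
# A hypothetical "energy fixer" that spreads a known global imbalance over the
# grid could trivially log every correction and its running total. The imbalance
# values here are random placeholders, not output from any real model.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000
cumulative = 0.0

with open("energy_fixer_log.txt", "w") as log:
    for step in range(5):
        # placeholder for the spurious global energy imbalance at this step (W/m^2)
        imbalance = rng.normal(scale=0.05)
        per_cell = -imbalance / n_cells      # correction spread uniformly over cells
        cumulative += imbalance
        line = (f"step {step}: imbalance {imbalance:+.4f} W/m^2, "
                f"per-cell correction {per_cell:+.2e}, cumulative {cumulative:+.4f}")
        log.write(line + "\n")
        print(line)
```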
Michael Gordon says: May 8, 2014 at 10:59 am
“In a climate model, which is a computer program, noise cannot exist at any frequency unless the model puts it there. The argument seems to be dealing with “noise” through filtering versus simply not adding noise in the first place.”
No, the situation is similar to the amplifier. Like RF, the noise is always there; the aim is to stop it getting out of hand. It is, literally, noise; acoustic oscillation, though in the milliHz range. Wind is noisy, as we know, and models reflect that. It does no harm unless some limitation (resolution) causes it to appear as something else which has a high loop gain.
In an amplifier you would limit RF gain, possibly through selective negative feedback, as in my example. The objective of the forced dissipation of spurious energy is similar.
Nick Stokes;
No, the situation is similar to the amplifier.
>>>>>>>>>>>>>>
An amplifier is a physical analog device. A climate model is a completely digital construct whose link to reality is entirely devised by programmers. Drawing an analogy between the two is utterly absurd.
That said, I agree with richardscourtney. The central take away for readers of this thread should be the devastating critiques by RGB.
Dr. Brown,
Thanks for your interesting comments. I have an observation to add to yours regarding dissipation. A couple of years ago I was comparing ARGO data vs. the model outputs for AR4. Specifically I was looking at the variance in ocean temperatures at different depths and latitudes. Given the short time span examined I assume the annual cycle dominated the signal.
In general, my results indicated that below the thermocline the dissipation of energy was greater in the models than observations would support. Given that the observed variances were small to begin with, perhaps it could be argued that this “didn’t matter”. On the other hand, I didn’t get a warm and fuzzy feeling that the models could adequately integrate heat uptake over decades.
If you’re interested here are my findings.
https://sites.google.com/site/climateadj/ocean_variance
rgbatduke says: May 8, 2014 at 7:16 am
“Eyeball the collective variability of the MME mean compared to the actual temperature (accepting uncritically for the moment that HADCRUT4 as portrayed is the actual temperature). You will note that the MME mean skates smoothly over the entire global temperature variation of the first half of the 20th century, smoothing it out of existence.”
Well, of course the multi-model mean is smoother. Its smoothness can be increased as you wish by including more models in the mean. But the individual runs, whose variability could reasonably be compared with Earth, are not smooth, as Fig 9.8a shows.
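That statistical point is easy to check with a toy calculation: averaging N independent noisy traces shrinks the variance of the mean roughly as 1/N, whatever the traces represent (the traces below are plain white noise, purely for illustration):

```python
# Averaging N independent noisy traces reduces the variance of the mean roughly
# as 1/N, so a multi-model mean is guaranteed to be smoother than any single run
# regardless of model quality. The traces here are white noise stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_steps = 1000

for n_models in (1, 5, 36):
    runs = rng.normal(size=(n_models, n_steps))        # stand-in model traces
    mme_mean = runs.mean(axis=0)
    print(f"{n_models:3d} models: variance of mean = {mme_mean.var():.3f} "
          f"(single run = {runs[0].var():.3f})")
```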
As I’ve said above, models are not expected to predict weather, even decadal weather. They are not provided with the information to do so. They generate random weather, with hopefully correct climate statistics in response to forcing. It isn’t clear what caused the early 20C warming, but if it wasn’t a forcing supplied to the models, they won’t show it. They have no information to cause them to do so.
Then you say that individual runs are too variable. That’s hard to deduce from a spaghetti graph. There are some big spikes, but a lot of runs. I’d like to see that quantified too. But it’s possible; I can imagine that the models have more trouble getting the variability right than the expected value. But it’s the EV we really want.
Incidentally, I think there is one good reason for greater variability. HADCRUT, like all indices, uses SST as a proxy for air temperature. Models return the actual air temperature in the boundary cells.
I tried to watch the Essex video, but it’s hopelessly waffly – I wish people could just write down what they want to say.
As to why they work well in the reference period – that’s not to do with model performance. Reference period just means the period during which they are set to a common mean. Any set of squiggly curves will show more concordance if you make that requirement. Setting the mean minimises SS variation. That’s just statistics, not PDE modelling. You can see it in this plot of paleo proxies that have been baselined 4500-5500 BP.