# An IT expert's view on climate modelling

Guest essay by Eric Worrall

One point struck me, reading Anthony’s fascinating account of his meeting with Bill McKibben. Bill, whose primary expertise is writing, appears to have an almost magical view of what computers can do.

Computers are amazing, remarkable, incredibly useful, but they are not magic. As an IT expert with over 25 years’ commercial experience, someone who has spent a significant part of almost every day of my life since my mid-teens working on computer software, I’m going to share some of my insights into this most remarkable device – and I’m going to explain why my experience of computers makes me skeptical of claims about the accuracy and efficacy of climate modelling.

First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience; it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs – it’s an eerie sensation, feeling your intellectual reach unfold and expand with the help of a machine.

But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result. More than anything, software is a mirror of the creator’s opinions. It might help you to fill in a few details, but unless you deliberately and very skilfully set out to create a machine which can genuinely innovate, computers rarely produce surprises. They do what you tell them to do.

So when I see scientists or politicians claiming that their argument is valid because of the output of a computer model they created, it makes me cringe. To my expert ears, all they are saying is that they embedded their opinion in a machine and it produced the answer they wanted it to produce. They might as well say they wrote their opinion into an MS Word document and printed it – here is the proof, see, it’s printed on a piece of paper…

My second thought is that it is very easy to be captured by the illusion that a reflection of yourself means something more than it does.

If people don’t understand the limitations of computers, if they don’t understand that what they are really seeing is a reflection of themselves, they can develop an inflated sense of the value the computer is adding to their efforts. I have seen this happen more than once in a corporate setting. The computer almost never disagrees with the researchers who create the software, or who commission someone else to write the software to the researcher’s specifications. If you always receive positive reinforcement for your views, it’s like being flattered – it’s very, very tempting to mistake flattery for genuine support. This is, in part, what I think has happened to climate researchers who rely on computers. The computers almost always tell them they are right – because they told the computers what to say. But it’s easy to forget that all that positive reinforcement is just a reflection of their own opinions.

Bill McKibben is receiving assurances from people who are utterly confident that their theories are correct – but if my theory as to what has gone wrong is correct, the people delivering the assurances have been deceived by the ultimate echo chamber. Their computer simulations hardly ever deviate from their preconceived conclusions – because the output of their simulations is simply a reflection of their preconceived opinions.

One day, maybe one day soon, computers will supersede the boundaries we impose. Researchers like Kenneth Stanley, like Alex Wissner-Gross, are investing their significant intellectual efforts into finding ways to defeat the limitations software developers impose on their creations.

They will succeed. Even after 50 years, computer hardware capabilities are growing exponentially, doubling roughly every 18 months and unlocking a geometric rise in computational power – power to conduct ever more ambitious attempts to create genuine artificial intelligence. The technological singularity – a prediction that computers will soon exceed human intelligence and transform society in ways which are utterly beyond our current ability to comprehend – may only be a few decades away. In the coming years, we shall be dazzled by a series of ever more impressive technological marvels. Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model – will in the next few decades start to fall like skittles before the increasingly awesome computational power and software development skills at our disposal.

But that day, that age of marvels – the age in which computers stop just being machines and become our friends and partners, maybe even become part of us through neural implants: perfect memory, instant command of any foreign language, instant recall of the name of anyone you talk to – that day has not yet dawned. For now, computers are just machines; they do what we tell them to do – nothing more. This is why I am deeply skeptical about claims that computer models created by people who already think they know the answer, who have strong preconceptions about the outcome they want to see, can accurately model the climate.

## 615 thoughts on “An IT expert's view on climate modelling”

1. CodeTech says:

Well put, and I wholeheartedly agree.

• billw1984 says:

Good points. Then add to that the following: 1. confirmation bias (related to main point), 2. noble cause confirmation, 3. peer pressure (both confirmation and condemnation), 4. human tendency to not want to admit error, 5. human tendency to extrapolate trends, 6. need to get grants and please referees of grants and papers and you have our present situation. Now, being a robot myself – I am not subject to any of these.
We have a very nice test coming the next 10-15 years. The weak solar cycle may not play any role at all but this will be a nice test to see if it does. The PDO seems to have switched and now various climate agencies are saying the AMO may have as well. So, a very nice test of the relative strengths of the natural and CO2 forcings. I hope there are not any large volcanic eruptions the next 10 years as that will add another variable to the mix and make it harder to sort out.

• Alcheson says:

Bill, we had a nice test the past 15 years and the CAGW models failed miserably. However, because the climate scientists have the ability to rewrite temperature history and make whatever temperature trend they desire, how confident are you that a cooling trend will be allowed to stand? Even if a cyclic cooling trend is underway (which I think it is, since weather around the world sure does seem an awful lot like it did in the 1970s), adjustments will be made to ensure the warming continues unabated, or at least long enough to have successfully put the political agenda into place. Once the agenda is complete, we will magically see cooling and we are all to bow down to the world saviors.

• Sorry, but this article is very poor. The software engineering aspect of any computer-based model is insignificant, apart from the obvious need to avoid bugs. Climate models are mathematical representations of physics, chemistry and biology; those fields are where any weaknesses lie – software engineers just translate things into code.
In principle climate models could run on hydraulic computers, but that would not allow plumbers to comment on their fidelity.

• Retired Engineer Jim says:

Exactly – the model is a set of equations. Ideally, well documented, with all known assumptions written down and with any limitations of application also noted. If it is an initial-value problem, the modeler needs to document how to define the initial value. If it is a boundary-value problem, then the means for defining the boundary values must be documented. And the majority of the problems are encountered / committed during the building of the model.
Ideally, the model is then handed to a group of software engineers independent of the modeller, and as was said, they “just code the model”. (It isn’t ever that easy.) An area in which more problems may arise is in the SE’s choice of solution techniques, especially if the SEs aren’t aware of any stability issues in the solution technique.
Where real serious problems arise is if the modeller(s) are also the “coders”. Then the modeller/coder can introduce all sorts of biases, intentionally or otherwise.
Been there, (unfortunately) done that, have the T-shirts.

• Pseudo Idiot says:

When a computer modeler chooses to model the geography of the planet at a resolution that’s too large to represent (for example) small clouds (all modelers are guilty of this, because of technical limitations), it is not a weakness of physics, chemistry or biology.
It means that the output of the models is useless because the models ignore (or attempt to approximate) important physical properties of nature, and the results have an unknowable level of error.
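The resolution limitation described above is easy to visualize. The sketch below is a toy illustration with invented numbers (not real model code): a reflective feature smaller than a grid cell survives only as a smeared cell average, which the model must then parameterize or lose.

```python
import numpy as np

# Toy illustration (invented numbers): a strip of atmosphere resolved
# at 1 km, containing a small 3 km "cloud" with reflectivity 1.0.
fine = np.zeros(100)   # 100 fine cells of 1 km each
fine[40:43] = 1.0      # the small cloud

# A model whose grid cell is 100 km wide sees only the average:
coarse = fine.mean()
print(coarse)          # 0.03 -- the cloud survives only as 3% "cloudiness"
```

Everything below the grid scale has to be re-injected as a parameterization, and the choice of that parameterization is exactly where the modeler’s assumptions enter.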

• Climanrecon – the software side of the models is more influential than you credit. The limitations on computational power, the grid sizes, the interaction between grids, and the workarounds needed to approximate the physics, thermodynamics, chemistry, etc. are the reasons why a simulator is only a mirror of the people who programmed and ran it. I have been running computer simulations for 40 years and I can assure you that Eric’s article is spot on.

• climanrecon

Climate models are mathematical representations of physics, chemistry and biology, those fields are where any weaknesses lie, software engineers just translate things into code.

Rather, climate models are mathematical approximations of assumed ideal textbook physics, chemistry and biology conditions in an assumed ideal environment with assumed perfect uniformity, exactly modeled by the approximations assumed in populating that assumed perfect world.
For example, dust particles are finely modeled by approximations of a few samples of dust particles over California’s LA basin – because the original “climate models” were regional models for tracking dust plumes and car pollution clouds. And there are many “scientific papers” touting the improvement in dust particle modeling and the tiny parameters of the “selected average” dust particle for the global circulation models.
But are these assumed particle properties correct for the southern hemisphere, with little land area at all and virtually no cars even on what little land is present?
Rather, the “model” is calibrated back to the stated condition that “the earth’s average albedo is 30%.” Unless it is 29.9995%. Or 30.0001%. Or has “greened down” as the world’s plants grow faster, higher, stronger, with more leaves ever earlier in the season – and is now 29.5% … All without the model ever changing.
The Big Government-paid, self-described “scientists” are paid to create models that must predict catastrophe. Or their political paymasters fire them.

• There was another group that had this supreme confidence in computer models. It was the MIT study that was commissioned by the Club of Rome. They made all these predictions based upon their computer models about what was going to happen to the world.
They were wrong in every single prediction.
Computers are the ultimate appeal to authority, but their ability to predict the future, especially something as complex as climate, is no better than reading entrails.

• looncraz says:

I’m a programmer who has examined climate model code (and worked on them) and I can tell you, without a doubt, that numerous assumptions are, indeed, built into the models – which are, indeed, just representations of the hypotheses which harbor said assumptions.
The basic layout of the models is quite neutral – often painstakingly so. The earlier (pre-AGW) models included very few assumptions not based on observational science, but they had a strict issue: their output was too cold, had little regional validity over time (in terms of predictive success), and feedback sensitivities were mostly dialed-in through brute-force means.
The modelers set out to correct the cold-running nature of the earlier models – to explain why warming was being seen relative to the model output. To do this, they simply looked for some variable that was increasing that they weren’t fully considering. CO2 and other GHGs came to the rescue! GHGs were on an ever-upward trajectory, and the curve more or less fit the model temperature trend discrepancy… if it were amplified with unknown atmospheric feedbacks. So the solution was to adjust the climate’s temperature sensitivity to, mostly, CO2, create an artificial baseline from which historic feedbacks were considered ‘balanced,’ and then use the CO2/GHG variable(s) to lead to a warming offset.
This is the epitome of confirmation bias in design, but it sits on top of a system that already had little regional predictive validity and used numerous assumed values as starting points, feedbacks, and changing inputs.
Think of it this way:
If a bicycle slows down going downhill for some unknown reason and I model the bicycle on a computer and the model says the bicycle should be speeding up I have a LOT of variables with which to play. But, I also have some data. The things I know are that the brakes are not activated, the hill is 5 degrees steep, the tires are inflated, there is very little wind, and it is in sixth gear.
From this, I can only assume that there is some kind of drag – maybe the rider, or a bent rim, a parachute, something. So I create a model which calculates the drag being experienced on the bike, and I calculate that the rider must be about 6′ tall and weigh 230 lbs or so, and was traveling at such a high rate of speed that the hill was not enough to overcome the rider’s drag.
Then, the observations come in, and the bike turns out to be on the back of a truck, and the truck is slowing down. There is no rider.
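The tuning step described in this thread, dialing a sensitivity until a cold-running model matches observations, amounts to fitting one free parameter to the residual. The sketch below uses entirely invented numbers and is not taken from any real climate model:

```python
import numpy as np

# Invented data: an observed trend and an un-tuned "model" that runs cold.
years = np.arange(50.0)
observed = 0.02 * years        # observed warming trend (made up)
model_raw = 0.005 * years      # un-tuned model output, runs cold (made up)
co2_forcing = 0.01 * years     # some rising quantity to pin the gap on (made up)

# "Tuning": least-squares choice of the sensitivity k that closes the gap.
residual = observed - model_raw
k = np.dot(co2_forcing, residual) / np.dot(co2_forcing, co2_forcing)
tuned = model_raw + k * co2_forcing

print(round(k, 2))             # 1.5 -- the dialed-in sensitivity
```

The catch is that any steadily rising series would close the gap just as well; a perfect after-the-fact fit says nothing about whether the right variable was amplified.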

“Rather, climate models are mathematical approximations of assumed ideal textbook physics, chemistry and biology conditions in an assumed ideal environment with assumed perfect uniformity, exactly modelled by the approximations assumed in populating that assumed perfect world.”
Approximations of assumed ideal physical models of some processes, done in isolation, ignoring many of the interactions between those processes – or processes we do not fully understand yet.

• Climate models are mathematical representations of physics, chemistry and biology, those fields are where any weaknesses lie, software engineers just translate things into code. @climanrecon

That’s not entirely accurate: the models are gross simplifications of our limited understanding of the physics, chemistry and biology, and of the interactions between those fields. Computers then compute these grossly simplified models using number representations with only about 15 digits of accuracy, and math libraries that introduce inaccuracies of their own. Given the number of cells in the models, the number of times the functions are iterated, and data that is less than perfect, it is inevitable that the models spiral out of control. All of this was covered far better than I can explain by Edward Norton Lorenz in his paper “Deterministic Nonperiodic Flow”.
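Lorenz’s result is easy to reproduce. The sketch below is a crude Euler integration of his 1963 system (for illustration only, not production numerics): two runs whose initial conditions differ only at about the 15th digit, the precision limit of a double-precision float, end up completely decorrelated.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of Lorenz's 1963 system."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0 + 1e-15, 1.0, 1.0])  # perturbed in the 15th digit

for _ in range(5000):                   # 50 model time units
    a, b = lorenz_step(a), lorenz_step(b)

print(np.abs(a - b).max() > 1.0)        # True: the trajectories have diverged
```

An error of one part in 10^15, far below anything a climate model can measure or control, grows until the two runs share nothing but the attractor they wander on.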

• software engineers just translate things into code
========================================
If that were true, then Google Translate could write code and there would be no need for software engineers.
The simple fact is that for all their speed, computers are still hopelessly slow for most practical, real-world problems. For example, wind tunnels: why do we have them? Why not simply model the performance of new cars and planes on computers? After all, it is just the solution of mathematical equations. Load the 3D shape into the computer and solve. What is the big deal?
The big deal is that the real world is nowhere near as simple as model builders would have you believe. Models can detect obvious stinkers of designs, but so can the trained human eye, often with much more speed and accuracy. In the end, models are not a replacement for the real thing.

• Paul Sarmiento says:

touche!!! I was planning to comment with this very same observation but was too lazy to formulate how I would say it.
In addition to what you said, models are like blenders: they can be used to puree a lot of things, but the taste of what comes out depends highly on what goes in. You can make it sweet or bitter, salty or sour, and anything else in between. But no, the blender does little to the taste, only the texture.

• mellyrn says:

climanrecon, first assume a perfectly spherical cow.
“Spherical cow” has a wikipedia entry if you don’t get it.

• Looncraz,

The basic layout of the models is quite neutral – often painstakingly so. The earlier (pre-AGW) models included very few assumptions not based on observational science, but they had a strict issue: their output was too cold, had little regional validity over time (in terms of predictive success), and feedback sensitivities were mostly dialed-in through brute-force means.

While reading a GCM spec/ToE a long time ago, I saw it stated that the difference between the earlier models, which ran cold compared to surface measurements, and the current models was that the current ones allowed a supersaturation of water vapor at the atmosphere–surface boundary. This is the source of the high climate sensitivity in the models, which they then fit to the past using aerosols.
Is this your experience as well? I can’t find whatever it was that I was reading.

• cd says:

Totally agree. The article is based on a fallacious assumption, akin to claiming that writing code and writing an article are the same because they both involve writing. This may be Eric’s assumption, but that’s how it reads.

• cd says:

Sorry that should’ve been “This may not have been Eric’s assumption…”

• DP111 says:

Quite right. This article is about software engineering and not the models: the physical basis of the models, the expressions that underlie the mechanics, electromagnetics and physics of the parameters that were considered important, and why.
The other aspect is the approximations that occur when the mathematics of a complex process that is ill understood itself, is projected onto a finite grid. Lots of problems need to be understood first by an analytical approach to a simplified version of the real problem, before embarking on the real stuff.
Fortunately for the AGWCC lot, they are not working for a private company designing state-of-the-art aircraft or some such. In fact, no private engineering company has the luxury of getting things so wrong, so often, and so predictably.
As for the software aspects of the problem, the author makes valid points.

• Surfer Dave says:

Agreed, the article is quite poor, but your point is misinformed too. Digital computers are actually very poor at handling numbers other than integers. Every ‘floating point’ calculation uses imprecise representations of decimal numbers, and each time a calculation occurs there is an error between the ‘true’ result and the value held in the computer. This has been known for a long time. There are ways to track the accumulated errors; however, years ago I looked at some of the climate model source code, and it invariably relied on the underlying computer’s number representation and made absolutely no effort to account for the accumulation of errors. Since the models perform literally millions of iterations, the initially small errors propagate into larger and larger errors. For this reason alone, the models can be discounted entirely. So even if the programmers implemented precisely what the ‘climate scientists’ wanted, there will be errors. A very good reference for digital number systems and their inherent problems is Donald Knuth’s ‘The Art of Computer Programming’. The worst cases occur where ‘floating point’ numbers are added or subtracted.
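The accumulation effect is simple to demonstrate. In the sketch below, repeatedly adding 0.1 (which has no exact binary representation) drifts away from the true total, while Python’s math.fsum, a compensated summation that tracks the lost low-order bits, does not:

```python
import math
from itertools import repeat

# Naive accumulation: each addition of 0.1 rounds the running total,
# and over ten million iterations the roundings pile up into visible drift.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total == 1_000_000.0)    # False: the naive sum has drifted

# Compensated summation recovers the correctly rounded result.
exact = math.fsum(repeat(0.1, 10_000_000))
print(exact == 1_000_000.0)    # True
```

Climate codes iterate far more than ten million times, so whether and how such error growth is controlled is a legitimate engineering question, independent of the physics.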

• If anyone is still reading this, I think I’ve found the part of the code I mentioned:

3.3.6 Adjustment of specific humidity to conserve water
The physics parameterizations operate on a model state provided by the dynamics, and are
allowed to update specific humidity. However, the surface pressure remains fixed throughout
the physics updates, and since there is an explicit relationship between the surface pressure and
the air mass within each layer, the total air mass must remain fixed as well. This implies a
change of dry air mass at the end of the physics updates. We impose a restriction that dry air
mass and water mass be conserved as follows:

I think conserving water mass means that if rel humidity exceeds 100% it does not reduce the amount of water vapor.
Here’s the link to the CAM doc http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

• looncraz says:

micro6500:
“I think conserving water mass means that if rel humidity exceeds 100% it does not reduce the amount of water vapor.”
That’s how I read it as well.
It would take some serious investigation to verify that this represents the known metastability of supersaturation that actually does exist – or if it exceeds observations. If it permits excessive supersaturation, then it is not one of the “desirable attributes” for the atmosphere at the boundary layer (Pg 64).
However, if it does not exceed observations, then it may well be even more accurate thanks to taking the transient atmospheric water vapor supersaturation states into account. I would bet, though, that this would become fog, dew, or clouds extremely quickly in the real world – and in all situations below, say, 25,000 ft elevation. Cloud/fog/dew-forming nucleation catalysts/sites are abundant, especially near the surface, after all.
As far as supersaturation resulting in warming – it really can’t, long-term. During the day the warming air will almost certainly not be supersaturated. At night, this is more likely, but it should also be rather temporary… so long as the extra energy retained is released by sun-up, there will be no short-term net warming. If some of the residual additional energy is retained from one day to the next, you will see temperatures climb – but you will also see the supersaturation state collapse as it does so. Any night where the supersaturated water vapor state does not remain and the residual energy is released will break the medium- or longer-term warming.
It would be really interesting to get a thorough analysis as to the algorithm’s applicability to the real world, for sure.

• As far as supersaturation resulting in warming – it really can’t, long-term. During the day the warming air will almost certainly not be supersaturated. At night, this is more likely, but it should also be rather temporary… so long as the extra energy retained is released by sun-up, there will be no short-term net warming. If some of the residual additional energy is retained from one day to the next, you will see temperatures climb – but you will also see the supersaturation state collapse as it does so. Any night where the supersaturated water vapor state does not remain and the residual energy is released will break the medium- or longer-term warming.
It would be really interesting to get a thorough analysis as to the algorithm’s applicability to the real world, for sure.

It’s the source of the high CS, if I’m right. So first it adds water vapor in the tropics, which carries heat poleward – more than dry air alone can carry. Plus, at night, high relative humidity produces dew, some of which evaporates the next day; some of it ends up in the water table, no longer able to carry that portion of the heat further north.
Then, I don’t think there’s been much if any warming; it’s all the impact of the hot water vapor, and the location of the oceans’ warm spots that drive downwind surface temps. This is controlled by the AMO/PDO and the Niños.
How much heat was carried in all of that rain that dropped like 12″ over the entire state of Texas? All that water was basically boiled out of the ocean and then carried maybe a thousand miles – think of the work that requires.
Seems to me to fit most of the pieces.

• That may be the reason the models are running hot, but you have not demonstrated that it was put in deliberately in order to get the models to run hot.

• As I’ve mentioned, this is what I recall as how they fixed the cold models, and it was touted as the cure. As for proof, check whether the code is documented (NASA Model ???), as it goes back to Hansen, IIRC.

• catweazle666 says:

lsvalgaard: “That may be the reason the models are running hot, but you have not demonstrated that it was put in deliberately in order to get the models to run hot.”
So, given that the models are well known to have been running hot for some considerable time, what is your explanation for the apparent lack of any intent whatsoever to address the very clear inability to represent the true surface temperature?

• First the models were running too cold, now they are running too hot; third time, good time, they may find the happy medium. But you change willy-nilly every other day. Still, at some time in the future they might wake up.

• You don’t change willy-nilly …

• BTW, they documented it in the CAM docs, so it was definitely intentional. And it sounds unnatural.

• I think what scientists do is mostly intentional.

• Well, I have 35 years of such experience, beating you by 10, and I must state categorically and for the record that you are exactly right and accurate.
Every program has a specification of what it is to do – either formal, or as ideas floating around in someone’s mind. The program only implements that preconceived notion. But it does it blindingly fast.

• Here is the first known example of climate models predicting the future:
http://en.wikipedia.org/wiki/Clever_Hans
Hans was a horse owned by Wilhelm von Osten, who was a gymnasium mathematics teacher, an amateur horse trainer, phrenologist, and something of a mystic.[1] Hans was said to have been taught to add, subtract, multiply, divide, work with fractions, tell time, keep track of the calendar, differentiate musical tones, and read, spell, and understand German. Von Osten would ask Hans, “If the eighth day of the month comes on a Tuesday, what is the date of the following Friday?” Hans would answer by tapping his hoof. Questions could be asked both orally, and in written form. Von Osten exhibited Hans throughout Germany, and never charged admission. Hans’s abilities were reported in The New York Times in 1904.[2] After von Osten died in 1909, Hans was acquired by several owners. After 1916, there is no record of him and his fate remains unknown.

• Bazza McKenzie says:

I’m pretty sure he died.

• Hans offers a most interesting example of confirmation bias. He seems to have picked up cues from onlookers as to when he should stop tapping his foot, confirming expectations of his cleverness.

• ColA says:

My German Professor who first lectured me in Computers when the very first Apple PCs came out was a very wise old man and I always remember his favourite saying:-
“Shitzer in = Shitzer out” …… still just as relevant today as it was then!!!

• It works the other way too:
good stuff in = good [or even better] stuff out.
Computer models can be useful too.

• Paul says:

“Computer models can be useful too.”
Agreed, assuming you know all of the pieces, and how each works, right?

2. Gerry Parker says:

“Enter desired output” is a phrase we use sometimes in hardware engineering when referring to certain software programs. This can be a reference to the function of the software, or the performance of the SW team in response to pressure from management.
If you’re not catching it, that would be the first query the program would make of a user who was running the program.

3. John W. Garrett says:

Anybody who has any experience with computer modeling of anything knows Rule #1:
GIGO
“Garbage In, Garbage Out”
The entire financial implosion that occurred in 2007-2009 was entirely due to the gullible and credulous belief in the accuracy of computer-based simulations. The models worked perfectly; they did exactly what they were told to do. It was the assumptions that were wrong.
I’ve seen this movie a hundred times before.
“Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.”
-John von Neumann
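Von Neumann’s quip is demonstrable in a few lines: give a model as many free parameters as there are data points and it will fit pure noise exactly, which says nothing about its predictive power. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(5.0)
y = rng.normal(size=5)             # five points of pure noise, no signal

coeffs = np.polyfit(x, y, deg=4)   # degree-4 polynomial = 5 free parameters
fitted = np.polyval(coeffs, x)

print(np.allclose(fitted, y))      # True: a "perfect" fit to randomness
```

The fit is exact because five coefficients can always interpolate five points; ask the same polynomial about a sixth, unseen point and it will tell you nothing.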

• emsnews says:

Correct. This is true of all systems. What you put in is what you get in return, which is why real science relies on everyone tearing apart each other’s work. There is no autocorrect system; this is done via disputes and counterclaims and demands for proof.
Which the climatologists claiming we are going to roast to death do not want at all.

• billw1984 says:

No, the financial crisis was caused by the “fat cats”. Didn’t you get the memo?

4. Niff says:

As someone with 40+ years in IT… you are right… only low-information, credulous types would think that computer-generated confirmation bias was… different.

5. That is exactly what I have been trying to tell the climate believers from the start. (I took my own systems programming exam in ’71.) No computer on Earth can make better predictions than its systems programmers have made possible, and there are also limitations due to the input, as well as whether the programmer(s) have included all the needed parameters in their algorithm. Bad input -> bad output; missing parameter -> lower validity for every missed parameter. And above all – consensus is a political term which has nothing whatsoever to do with theories of science…

6. Alx says:

Computers are stupid in the sense that they do not think. They can do an amazing amount of work at amazing speeds and, being software driven, can perform tasks limited only by imagination.
But again, they are stupid: accidentally tell them there are 14 months in a year and they’ll process millions of transactions with lightening speed using 14 months in a year. So during my years in the software business I always remembered one thing: “To err is human; to really f**k up requires a computer.”

• PiperPaul says:

Lightening speed is when you are moving so fast that you are actually shedding weight. (sorry)

• auto says:

Piper
Shedding weight – but also shortening . . .
There was a young fencer called Fiske,
Whose action was exceedingly brisk.
So fast was his action
the FitzGerald contraction
Reduced his rapier to a disc.
Likewise – sorry!
Auto

7. “The first principle is that you must not fool yourself and you are the easiest person to fool.” – Richard P Feynman
Feynman had it right, but with computers we can now do it much better and faster.

8. SandyInLimousin says:

I spent the first 25 years of my working life as a test engineer programming ATE to test various PCBs, and components depending on who was my employer at the time. (I spent 25 years in the same building working for 4 companies and in 3 pension funds but that’s another sorry tale). The last 20 years writing business reports and management systems.
The end result is that, in my experience, the designer/developer only ever tests something to do what they have designed it to do. Give it to someone with the specification, say “see if this works”, and you can expect the first problem within a minute. The expression “I didn’t think of that” from a designer’s mouth becomes very familiar very quickly.

• steverichards1984 says:

Ferndown?

• SandyInLimousin says:

Nottingham, covered my transition from Test to IT.

• Keith Willshaw says:

Absolutely correct. As a software developer with over 30 years’ experience, I ALWAYS ask for the allocated tester to be someone who knows NOTHING about the product. You give the tester the spec and some standard use cases. Invariably they find bugs to which the developer’s response is either “why did they do that?” or “well, the correct way to do that is obvious.”

• Mr Green Genes says:

Exactly! The only way to test whether anything is foolproof is to let a fool loose on it. I have been that fool on many occasions, and it’s amazing how easy it is to find flaws in some highly intelligent developer’s work. Mind you, I also know this from having my own work tested too …

• Indeed. The problem with making things foolproof is that fools are so ingenious.
After 20 years working in defense and another 18 in telecommunications, I have seen many of the same results as Keith and the others. When it comes to software, I have seen testers, people as described by Keith, find interesting and new ways of using a prototype product that caused all kinds of problems. One has to assume the user won’t be reading the user manual, so they’ll try things the designers and coders wouldn’t, and end up with a machine in a state the designers never considered. (I guess this would be called bias on the part of the designers/coders: “Well, we wouldn’t do it this way!”)
And so it goes with the computer climate models. Too many inputs with poorly understood parameters, not enough inputs with the proper granularity, and too many SWAGs assumed to be “the truth, the whole truth, and nothing but the truth”. Under those circumstances it’s far too easy to allow biases to creep in that invalidate the model’s results.

• brians356 says:

Rules I tried to live by as a programmer:
1. Never think you can (or agree to) test your own code.
2. It takes ten times more effort to properly test code than it did to create it. (Corollary: the test division should have ten times the budget/staff of the programming division. Usually it has one tenth.)
3. There’s no such thing as bug-free (non-trivial) code. Believe it. (Corollary: it’s almost impossible to prove code is bug-free.)

9. I disagree. The computer [more precisely, its software] can make visible that which we cannot see. A couple of examples will suffice. The computer can tell us when to fire the rocket engines, and for how long, in order to arrive at Pluto ten years hence. The computer can, given the mass and composition of a star and nuclear reaction rates, tell us how many neutrinos of what energy will be produced. The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.

• Akatsukami says:

But, Dr. Svalgaard, suppose the programmer inadvertently mis-programs the expected neutrino production rate for a given reaction?

• Suppose he does not. Software has to be checked and double-checked, and validated by comparison with the real world. In the case of neutrinos, the programs turned out to be correct.

• PiperPaul says:

Checking and double-checking output using the wrong assumptions will still result in errors. If only there was a good example of such behavior…

• BFL says:

“Have you take a good look at the all of the technology required to embrace a driverless car? And why is it necessary when it really isn’t necessary at all, ”
Hard to believe that these will be successful in this age of massive litigation. Most likely after the first few fatalities/lawsuits they’ll disappear, unless of course they are supported by massive taxpayer subsidies like Amtrak.

• Scott says:

How about when the Europeans (or was it Russians) and the U.S. sent a rocket to Mars. One set of calculations was done in the imperial system, and one in the metric system – results predictable….it crashed. So much for checked and double checked.
How many bugs are in each and every Windows operating system despite being checked and re-checked. Yes, it happens to Apple’s too, just not as many.

• Retired Engineer Jim says:

Scott wrote:
“How about when the Europeans (or was it Russians) and the U.S. sent a rocket to Mars. One set of calculations was done in the imperial system, and one in the metric system – results predictable….it crashed. So much for checked and double checked.”
Regrettably, a NASA Mars probe. But that wasn’t a pure software failure – junior trajectory engineers noticed a divergence in the outbound trajectory and asked to do a third mid-course correction. Management decided to save the fuel to use for trajectory management while on orbit around Mars. There was, clearly, a software problem, but it might have been overcome.
Another Mars probe, a lander, failed when the landing engine shut off immediately after igniting. The best guess as to the cause was that the landing gear opened prior to engine ignition, and opened sufficiently strongly as to set the “landed” flag. However, that flag wasn’t cleared before engine ignition was initiated. So, was that a software problem, or a mechanical problem? Or a Use Case problem?

• tty says:

“The computer [more precisely its software] can make visible that which we cannot see.”
I beg to differ. Each of the things you mention can be done with pen and paper, though it would take an impractically long time, and if we didn’t know how to do it with pen and paper we couldn’t write code to do it either.
It is true that the ability to run through large numbers of cases quickly means that a computer can find peculiarities or singularities that were previously unknown, but I wouldn’t dream of trusting those results unless independently verified (and I don’t consider another software program as verification).
This is based on 40 years experience of writing and (mostly) testing complex, safety-critical software.

• There is no difference between a supercomputer and pen and paper, or slide rules, or counting on fingers. Alan Turing showed that long ago.

• Akatsukami (June 7, 2015 at 4:34 am): “But, Dr. Svalgaard, suppose the programmer inadvertently mis-programs the expected neutrino production rate for a given reaction?”
lsvalgaard (June 7, 2015 at 4:43 am): “suppose he does not. software has to be checked and double-checked. and validated by comparison with the real world.”
And part of the overall point here seems to be that it is tough enough when we CAN check and double-check against the real world – but that in climate science this checking isn’t possible. To the degree that it IS thought to be possible, much of the output disagrees with the real world, so then what happens? The climate people either adjust the results, shift past data downward, or play games with all of it in order to be able to claim that the real world is in agreement.
I agree with Richard Feynman, who said, “If the computed results disagree with experiment, then it [the basis for the computation – the hypothesis] is WRONG.”
But being wrong doesn’t work with people who don’t admit that their computations have been wrong. When even the real world isn’t seen as the arbiter, how is any of this even science?

• The theory for stellar evolution and structure is well tested on thousands of stars. All scientists agree that the theory is solid.

• D.J. Hawkins says:

@lsvalgaard
Well, there you have it. By the time it’s been “well tested on thousands of stars” any of the programmer’s biases have been whittled away, leaving the theoretical and empirical core. And you’ve really not been paying attention. The topic is specifically about GCMs. Do you seriously propose that they have been subjected to the testing and refinement your stellar neutrino prediction software has been?

• Since the program is just solving equations there is no programmer ‘bias’ anywhere.

• Sure there is; if that weren’t the case, why did the older models run cold?

• The scientists had not yet gotten the model to work [and not by putting bias in it].
Later on it runs hot, so it still doesn’t work. Perhaps next time it will be OK.
The program code is available [at least for some models]. You just can’t put bias in undetected, and why would you? If discovered, your career would be over.

• It doesn’t look like bias, it looks like “dark matter” in galaxy simulations. And if they were using that to radically alter society, I would complain about that too.
Even if I’m inclined to accept DM.

• I would call the examples you provided a good fit for my “filling in a few details”, Dr Svalgaard. In the examples you gave, the computer is applying rigorously validated methods to the task of producing well understood outcomes, with no more freedom of motion than a train running on a short length of track. If the methodology or implementation of the software is wrong, the computer blindly produces garbage – for example, a mistake might produce an incorrect firing sequence for your Pluto rocket.

• eromgiw says:

I think there are two issues here that maybe weren’t made clear in your article. One is the type of program that is simply solving an equation, or following an algorithm to calculate a result. The same calculation can be performed manually but it’s a lot cheaper to get the computer to do it. Ballistics, nuclear physics, orbital mechanics, etc.
The second is the modelling that is the subject of the article. This is where a multi-dimensional system is being represented as a matrix that has iterative calculations applied according to adjacent values probably using partial derivatives and such-like. Analogous to the cellular automata in the Game of Life. Each initial value in the matrix has a significant bearing on the outcome of the simulation after a large number of iterations. It is one thing to ‘calibrate’ these initial values and the constants in the iterations to match historical observed trends, but it is another thing for these to accurately predict the future. This is where the climate models have failed miserably. The modelling has not really incorporated the actual mechanisms that are driving climate, merely mimicked the past.
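eromgiw’s point about initial values can be illustrated with the simplest possible iterated model: the logistic map, a standard toy nonlinear recurrence (not a climate model; the parameter values here are arbitrary choices for illustration):

```python
def iterate(x0: float, r: float = 3.9, steps: int = 50) -> float:
    """Iterate the logistic map x -> r*x*(1-x), a toy nonlinear recurrence."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x


# Two runs whose initial values differ only in the ninth decimal place:
a = iterate(0.400000000)
b = iterate(0.400000001)
# In the chaotic regime (r = 3.9) the two trajectories typically bear
# no resemblance to each other after a few dozen iterations, even though
# the code and the "physics" are identical in both runs.
```

Calibrating such a system to reproduce past behaviour says little about its ability to predict future behaviour.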

• David Wells says:

Did you see this: http://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/ or this: http://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/.
A bird flies just like that, humans create computer technology just like that, humans drive cars (maybe not well) just like that; but if humans want to fly we need to burn kerosene at 26 gallons a second, and an Airbus A380 weighs 270,000 kg empty.
Have you taken a good look at all of the technology required to embrace a driverless car? And why is it necessary when it really isn’t necessary at all? Most people appear to want to imagine that a computer is some kind of miracle machine, when all it is in effect is a very large pocket calculator, using technology which has already reached critical mass and needs to go biological to progress further.
This hype is all about what you want to do not what we need to do or have any use for.
Supercomputers may be brite but they are not bright and with only humans to impregnate them I doubt the point of your conviction or its time line.

• Sal Minella says:

When the day arrives that the computer does not act exactly as it is programmed, it will cease to be a useful tool. Unfortunately, many tasks executed by computers are flawed due to being misprogrammed as is abundantly demonstrated by climate models. An unexpected result from a computer is not a sign of intelligence but an indication of the [lack of] skill of the programmer.

• hunter says:

Dr. S,
The humans told the computer what to do, and the pre-arranged goal was to get to Pluto.

• No, the program was not written with that goal in mind. It will work for getting to any place: Mars, Jupiter, the Moon, etc.
A generic answer to several people: if you don’t know the physics you put in well enough, the program may not work well, or alternatively it can be used to improve the knowledge about the physics. Again, neutrinos are a good example.

• The discrepancy between Newtonian predictions and the observed orbit of Mercury is a good example where a computational difference helped unlock new knowledge. I’m not arguing against computers being useful – what I am suggesting is that you cross the line into delusion if you start assuming that output which conforms to your expectations is evidence that you are right, without a non-trivial means to independently validate your conclusions. In the case of Einstein, there were multiple lines of evidence, and non-trivial predictions, such as gravitational lensing of light, which were validated by observation.

• So the problem is not with computer programs and models at all, but with delusional PEOPLE. And those there are lots of, just look around in this forum.

• thisisnotgoodtogo says:

“So the problem is not with computer programs and models at all, but with delusional PEOPLE. And those there are lots of, just look around in this forum.”
I’m looking at your posts so far, and what you’ve come up with now, seems to indicate that you were deluded about what the original post says.

• mobihci says:

NASA climate model guided rocket-

• Ian Macdonald says:

mobihci, If you’re going to have fun with rocket aerobatics, go upscale a bit:

So long as it’s at taxpayers’ expense, of course.

• Sleepalot says:

@ Ian Macdonald

• Leonard Weinstein says:

lsvalgaard, you missed the main point. Orbital calculations can be accurate over short time periods (a few years) because we understand the required inputs, the equations adequately cover the issue, and approximations are sufficient for the forcings. However, even for these, over a long enough time they will diverge due to small interactions with distant planets and round-off accuracy. Even the best computer cannot solve a 3 (or more) body orbit problem over even a modestly long time if all three (or more) bodies are of significant size and interacting at the same time. In the case of climate, there are many causes of interactions, and several of these are not even fully understood. In the case of fluid mechanics, the equations are not capable of fully solving complex flows (such as high Reynolds number, time-varying, three-dimensional flows) at all, and simplified approximations are used, which are sometimes adequate and often not. Climate is a far more complex problem, and your example is a false one.

• Orbital calculations are good for a few million years, but eventually, of course, the uncertainty in the initial values catches up with you. But that is not the point. The point is that the models will work for a limited time in any case [and the modelers are acutely aware of that]. And the programs are not written to ‘get the expected result’. To claim that they are betrays ignorance about the issue.

• Leonard Weinstein says:

Orbit calculations in some cases are good for millions of years, but most are not. Asteroids near Jupiter are a clear case where the Sun–Jupiter interaction makes any solution fail in a much shorter time. A planet orbiting in a close double star system generally could not have accurate orbit calculations for more than a few orbits, no matter how good the initial data is. Once the number of significant interactions is large enough, these non-linear calculations fall apart fairly quickly.

• None of this has any bearing on whether computers only output what you put in which is the main thesis of the article we are discussing.

• billw1984 says:

Unless the expected result was to accurately calculate the results of Newtonian laws of motion, or the more modern (relativistic) forms of these equations. Your example is one where the math is known and there could be no possible reason to have a preferred outcome, other than to accurately steer a spacecraft (or to just get the physics right). Sorry, Leif. You are a bit off on this one.

• If the ‘expected result’ is to “accurately calculate the results of Newtonian laws of motion” then you might also claim that the ‘expected result’ of climate models is to accurately calculate the results of Atmospheric Physics as we know it. But I don’t think that was what Eric had in mind.

• Lsvalgaard says “And the programs are not written to ‘get the expected result’. To claim that they are betrays ignorance about the issue.” Really? So the models were not programmed with water vapor and clouds being a net positive feedback??

• what is your evidence for that?
Feedback is supposed to emerge from running the model.

• Also, Leif, if the physics in the models is so well understood and all of them are producing nothing but accurate and predictable physics, why so many models? You only need one if you have all the physics correct. Obviously, each modeler has in his model everything HE personally thinks is important, thus it produces as output exactly what HE expects, which may or may not be close to reality.

• I see no evidence of that. Where is your evidence? The models are different because climate is a hard problem and there is value in trying different approaches, different resolutions, different parameters, etc.

• Dems B. Dcvrs says:

lsvalgaard – “So the problem is not with computer programs and models at all, but with delusional PEOPLE.”
Mother Nature says you, climate models, and G.W. Climatologists are Wrong.
Thus, delusional people would be you and G.W. Climatologists.

• Mother Nature says that the models are not working. That is all you can conclude.

• lsvalgaard
June 7, 2015 at 6:10 pm

what is your evidence for that?
Feedback is supposed to emerge from running the model.

While reading a GCM spec/ToE a long time ago, they stated that the difference between the earlier models, which ran cold compared to surface measurements, and the current models was that they allowed a supersaturation of water vapor at the atmosphere–surface boundary. This is the source of the high climate sensitivity in models, which they used aerosols to fit to the past.
Now, just so you know, I’m a fan of simulation technology. I spent 15 years professionally supporting nearly a dozen types of electronic design simulators, including the development of commercial models for Application Specific Integrated Circuit (ASIC) designs for the IC vendor, training and supporting over 100 different design organizations, and in 1986 designing a 300 MHz ECL ASIC for Goddard Spaceflight Center.
It was a common issue when building complex models of commercial large scale integrated circuits that the modeler had to be careful to model what the vendor said their design actually did, not what the modeler (and in some cases the design guide) said the chip did.

• BarryW says:

Oh you mean like the prediction of solar cycle 24?

• ren says:

I’ll play the prophet and say that the behavior of the Sun in this cycle (and the next) will continue to surprise.

• Latitude says:

lsvalgaard
June 7, 2015 at 5:45 am
None of this has any bearing on whether computers only output what you put in which is the main thesis of the article we are discussing.
=====
please try to remember to convert inches to centimeters

• ren says:

“The polar field reversal is caused by unipolar magnetic flux from lower latitudes moving to the poles, canceling out opposite polarity flux already there, and eventually establishing new polar fields of reversed polarity [Harvey, 1996]. Because of the large aperture of the WSO instrument, the net flux over the aperture will be observed to be zero (the ‘apparent’ reversal) about a year and a half before the last of the old flux has disappeared as opposite polarity flux moving up from lower latitudes begins to fill the equatorward portions of the aperture. The new flux is still not at the highest latitudes where projection effects are the strongest. The result is that the yearly modulation of the polar fields is very weak or absent for about three years following the (apparent) polar field reversal. Only after a significant amount of new flux has reached the near pole regions does the yearly modulation become visible again. This characteristic behavior is clearly seen in Figure 1. The four panels show the observed polar fields for each decade since 1970 (the start of each decade coinciding with apparent polar field reversals). Also marked are periods where the magnetic zero-levels were not well determined and noise levels were higher – at MWO (light blue at bottom of panel) before the instrument upgrade in 1982 and at WSO (light pink) during the interval November 2000 to July 2002. The difference between the amplitudes of the yearly modulation observed at MWO and at WSO is due to the difference in aperture sizes. At times, exceptional solar activity supplies extra (but shorter lived) magnetic flux ‘surges’ to the polar caps, e.g., during 1991–1992 in the North. These events (both instrumental and solar) distract but little from the regular changes repeated through the four ‘polar-field cycles’ shown (from reversal to reversal).”

• thallstd says:

Dr Svalgard,
Like all programs, the models are written to apply a set of rules (logic/code) in conjunction with known (or assumed) constants (i.e. climate/temp sensitivity to CO2) to a starting set of conditions/data (temp, rainfall, humidity, etc). While the rules may reflect the best objective knowledge we have about how the various drivers and factors interact, the starting conditions and constants that the models are run under are at the discretion of whoever is running, or commissioning the running of, the model.
Has anyone correlated model outcome with the assumptions about, say, climate sensitivity to CO2? Are the 95% of them that run high doing so because they assume a high sensitivity? What would they output if a lower sensitivity were used?
When the assumptions they are run with drive the outcome as much as the code they are written with does, claiming that “the programs are not written to ‘get the expected result’”, while perhaps true, is somewhat irrelevant, is it not?
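thallstd’s point can be made concrete with a deliberately trivial sketch (hypothetical function and numbers; the linear response is an assumption for illustration, not how a real GCM works):

```python
def projected_warming(sensitivity_per_doubling: float,
                      co2_doublings: float = 1.0) -> float:
    """Toy projection: warming assumed linear in the sensitivity constant.
    The constant is an INPUT chosen by whoever runs the model, not a result."""
    return sensitivity_per_doubling * co2_doublings


# Same code, different assumed sensitivities (the oft-quoted 1.5-4.5 C
# range), very different "projections":
low = projected_warming(1.5)
high = projected_warming(4.5)
```

The code is identical in both runs; the entire spread in the output comes from the assumed constant.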

• Tom in Florida says:

Doc,
Cannot a human do the same thing with a slide rule? It would just take longer.

• KaiserDerden says:

But the answer the computer/software gives in your example is based on the fact that all the variables are facts and known values … almost nothing input into the climate models is factual, it’s all guesses … guesses which then become the foundation of the next guess, and the next one, until the “output” is reached …
And the software in your example didn’t show us anything that was invisible … everything that your example software calculates could have been calculated by hand on paper with a slide rule … for example, celestial navigation and artillery targeting happened long before computers ever existed …
In the words of Darth Vader: “Don’t be too proud of this technological terror you’ve constructed.”

• “…could have been calculated by hand on paper with a slide rule…”
There is no difference; any computer is equivalent to any other computer, as Turing demonstrated.

• Don K says:

> The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.
Yes, it can. BUT, you will need to be extraordinarily lucky to get to your desired destination if you approach the job that way. A real Plutonic probe would include provisions for mid course correction as it was discovered that assumed values for variables were a bit wrong and that some of the “constants” vary. And that’s with a very simple, well validated, set of state equations and a comparatively well understood physical situation.
Climate prediction is more akin to orbit prediction in a planetary system where the mass, position, and velocity of the planets is poorly known and the entire system is being swept by massive, fast moving fronts of particulate matter from some weird phenomenon in nearby space.
Computers are a tool. Like every tool, they may work poorly or not at all for those who do not understand how to use them.

• Stargazer says:

“A real Plutonic probe would include provisions for mid course correction…” In fact, a real Plutonic probe (New Horizons) completed its last of several trajectory corrections on March 10th. It is now in spin mode on its way to a July flyby of the Plutonian system.

• bobl says:

Let me use a much simpler example of the problem for Leif. Your fancy computer program for navigating your spacecraft to Pluto may fail because the spacecraft’s path inadvertently intersects with an unknown asteroid. You may get 99,999 craft to Pluto with this algorithm, but that is not going to help you with the one that intersected the asteroid. That is: the outcome is affected by parameters that were not anticipated by the programmers – the output depends on the programmers correctly identifying all parameters and influences on the objective. Miss something – miss Pluto.
Leif, as a solar scientist I would imagine you might be able to foresee a number of ways that energy might leave the earth other than by solar radiation, and certain astronomical influences that might add heat to the atmosphere (through, say, friction). This invalidates the idea that the incoming radiation must equal outgoing radiation – I do not see it as necessarily so. As an engineer I have grave doubts about any lossless system. If all these influences are not modeled there can be little confidence in the outcome, and I have grave doubts about the central hypothesis – that radiation in should equal radiation out. No: total energy in = total energy out, and that’s all.
Finally, mathematics using quantized numbers can produce a number of problems. For example, in unsigned 64-bit arithmetic, what is 2^63 * 2? It’s 0 – the result wraps around. Or: in integer arithmetic, 5 * 100 / 4 = 125, but 5 / 4 * 100 = 100. Why? Because 5/4 can’t be represented exactly as an integer. In the first case we get 5 * 100 = 500, then 500 / 4 = 125. In the second case the calculation is 5 / 4 = 1 (truncated from 1.25), then 1 * 100 = 100. Some compilers may reorder this for you, so on an IBM PC with the Microsoft compiler you might get 125, while on a Mac with the Apple compiler you might get 100.
Take another simple example
f=ma
f/a=m
I have a 500 kg spacecraft, but I decide I want to calculate inertial mass for my navigation project. It’s important because apparent mass depends on velocity. But there is no force and no acceleration, so what is your mass calculation? 0/0 = ???
Worse still, I have two sensors. Due to noise, the thrust (force) sensor reads 1 kg·m·s⁻², while the acceleration is measured as -0.000001 m·s⁻². The calculated mass is now 1/-0.000001 = -1 million kg. Antimatter, maybe?
If I as an engineer fail to take account of these things then I will miss Pluto by a wide margin, check or no check – a completely correct algorithm will fail in specific circumstances simply because I haven’t considered thrust values around zero or have missed influences – unmapped asteroids.
Computers are much worse than people. Computers do things numerically and suffer from numeric precision problems – they can do 1/2 but they don’t do 1/3 very well. They also don’t give a rat’s whether the result is reasonable – unless I check for it, the program will happily use my calculated mass of -1 million kg despite the fact it’s ridiculous.
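The arithmetic pitfalls above can be reproduced directly. Python’s own integers don’t overflow, so the 64-bit wraparound is simulated here with a mask; the truncation and near-zero-division examples run as-is (a sketch, not production code):

```python
# 1. Order of operations with integer (truncating) division:
a = 5 * 100 // 4   # multiply first: 500 // 4 = 125
b = 5 // 4 * 100   # divide first: 1 * 100 = 100 (the 0.25 is silently lost)

# 2. 64-bit wraparound, simulated by masking to 64 bits:
MASK64 = (1 << 64) - 1
wrapped = ((1 << 63) * 2) & MASK64   # 2**63 * 2 wraps to 0

# 3. A noisy near-zero sensor turns a division into nonsense:
force = 1.0           # thrust sensor reading, kg*m/s^2
accel = -0.000001     # acceleration reading, m/s^2 (pure noise)
mass = force / accel  # about -1,000,000 kg for a 500 kg craft
```

None of these lines raises an error; the program carries the nonsense forward unless the programmer anticipated it.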

• “If I as an engineer fail to take account of these things then I will miss Pluto by a wide margin, check or no check”
That is why the model should include all the known [or even just suspected] influences. ‘Simple’ models [as some people push] will generally not have much predictive power. Now, even if the physics is known, we may not be able to apply it, as there are ‘contingent’ phenomena that are not [and mostly cannot be] predicted.
I have a thought about the discrepancy between model and reality. Such a discrepancy is not a disaster; on the contrary, it is an opportunity to improve the model. Before, it was running too cold; now it is running too hot; next time it may be just right. There is a curious contradiction in many people’s attitude about the models: on one hand they say that the models are garbage [GIGO], on the other hand they say that the failure of the models proves that CAGW is wrong. But from garbage you cannot conclude anything. To claim that the discrepancy between model and observation proves that CO2 is not a player is premised on the models being correct [but insufficiently calibrated, because of too crude parameterizations].

• lsvalgaard commented, in response to bobl:

“That is why the model should include all the known [or even just suspected] influences. … To claim that the discrepancy between model and observation proves that CO2 is not a player is premised on the models being correct [but insufficiently calibrated, because of too crude parameterizations].”

I agree, right up to the point they want to drastically change the world’s economy, costing multiple tens of trillions of dollars, on their models’ results.
If the Gov wants to provide tax incentives for energy research, fine. I’ve long guessed that we had until 2050 or so to replace our burning of oil due to supply; we’ve found more oil, but at some point we should get the majority of our energy from nuclear. There’s a place for wind and solar, but I don’t believe we will find they are acceptable as the major source of energy for a global first-world society, which also seems to be on the list of things to be done away with by some advocates who rail against oil.

• “I agree, right up to the point they want to drastically change the world’s economy, costing multiple tens of trillions of dollars, on their models’ results.”
I think it is the politicians, elected by popular vote [so presumably reflecting the views of the people], who want to change the world. People have the government they deserve, having elected it themselves.

• Maybe, but Hansen was cheerleading for a while, and came under fire for doing so despite the US Gov not previously making a statement on the topic.

• But now, the people have made up their mind and elected a government bent on ‘saving the planet’.

• “But now, the people have made up their mind and elected a government bent on ‘saving the planet’.”

Polling says it’s near the bottom of the list of reasons.

• Yet, the elected Government (of the people, by the people, for the people) says that it is the most important problem facing the world. Go figure.

• Just to be clear, I didn’t vote for them either time.

• bobl says:

But Leif, you are still wrong. The computer program is only as good as the algorithm implemented, the data, and the parameters (assumptions) used. For example, if I forget to take account of gravity, or even use an imprecise value, then I miss Pluto. The argument is simply that in the case of climate models too many parameters (e.g. cloud effects) are wrong or left out. I argue that there are unaccounted heat sources and sinks – there must be. I agree that this then is an opportunity to improve, except that climate is a chaotic process which cannot be truly modeled without making assumptions about averages and forcings – something that is always open to debate. It’s the assumptions about forcings and their relative strengths that contain the prejudices of the programmers. Neutrinos are probably a bit more stationary than climate, but can you actually predict when and where a neutrino will emerge? Can you predict the actual magnitude of a single particular magnetic tube on the sun next year on April 1?
I might add that climate has MANY feedbacks, positive and negative, from the micro to the macro level, and importantly they all have time lags – different time lags. You can’t model this behaviour with a simple scalar model; you need to use the square root of -1 in there somewhere, so any climate model that is a simple scalar model IS wrong. The output (warming) of such a system of feedbacks is NOT a scalar, it’s a function. Nor is climate sensitivity a scalar; it too, by virtue of the nature of the feedbacks, is a (probably chaotic) function. The integral of weather with time is probably no more stationary than weather.

• Can you predict the actual magnitude of a single particular magnetic tube on the sun next year on April 1?
No, but I can predict how many such we will have over a year many years in advance.

• John Peter says:

“in the case of neutrinos, the programs turned out to be correct.” So who wrote the programs? A computer? Ultimately I don’t believe that computers can start from scratch and do their own research required for the input of information to programs. Needless to say, they can perform calculations based on human input which we cannot perform with a slide rule. As stated by Mr Worrall, chips double their performance every 18 months. I think it is called Moore’s law.

• Scientists wrote the computer programs. That is the way it usually works, although there is some effort in a field called Machine Learning.

• Keith Willshaw says:

Indeed, but that is because the programmer built in the correct algorithms. A more interesting example with complex software is the failure of the Ariane flight control system in 1996. Essentially, a failure in the alignment subsystem for the inertial navigation system caused the unit software to crash, resulting in the launcher going out of control and having to be destroyed. The source of the error was an untrapped error in a subroutine which converted a 64-bit floating point number into a 16-bit integer, when the converted number became large enough to exceed the maximum size of a 16-bit integer. The software specification required that such conversions be error-trapped in flight-critical code.
Trouble is, the alignment subsystem was supposed to be run pre-launch and was not intended to be a flight-critical system, so no such error trapping was built in. This error was compounded when it was found that a late hold in the countdown meant a 45-minute delay while the INS was realigned. The overall control software was therefore changed so that the alignment system continued to run for 50 seconds after launch. The assumption was made that the horizontal velocity data returned in 64-bit format would not exceed the maximum size of the 16-bit integer. That worked fine for Ariane 4, but Ariane 5 had a different trajectory and the system failed as described.
This is a classic case of the failure of a complex system due to the assumptions made by the developer and this system was orders of magnitude LESS complex than that used in computer climate models.
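The failure mode Keith describes can be sketched in a few lines. This is a Python illustration of the principle only – the real Ariane code was Ada, and the names below are made up – but it shows why an unchecked 64-bit-float-to-16-bit-integer conversion silently produces garbage, while the error trap the specification demanded would have caught it:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def convert_unprotected(value):
    """Cast a float to 16-bit two's-complement with no range check:
    out-of-range values silently wrap around into garbage."""
    raw = int(value)
    return ((raw - INT16_MIN) % 65536) + INT16_MIN

def convert_protected(value):
    """The same conversion with the error trap the spec required."""
    raw = int(value)
    if not INT16_MIN <= raw <= INT16_MAX:
        raise OverflowError(f"{value} does not fit in 16 bits")
    return raw

print(convert_unprotected(30000.0))  # fits: 30000
print(convert_unprotected(40000.0))  # wraps to a nonsense negative value
```

On Ariane 4 the horizontal-velocity value stayed in range, so the unprotected path never misbehaved; Ariane 5’s steeper trajectory pushed it out of range, and the wrapped value (in the real system, an unhandled exception) took the guidance down with it.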

• Are you seriously suggesting that the failure of climate models is due to programming errors? If so, you must agree that if the bugs can be fixed, the models would work.

• David Chappell says:

Dr Svalgaard, did you actually read the last paragraph? It said “…due to the assumptions made by the developer.” Assumptions are NOT the same as programming errors and if you think so, you are being somewhat naive.

• Depends on what you mean by ‘error’. Assuming that a variable will always fit in the bits allotted becomes an error when you omit to test beforehand whether it is out of bounds, or to react to the error should an exception be signaled.

• @ lsvalgaard

The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.

Me thinks you should re-think your above statement.
The fact is, that is all a “computing device” is capable of doing, ….. which is, … to per se, regurgitate the “digested” data/information that it is/was “uploaded” with for process control and/or to be processed. Electronic computers obey the Law of SISO.
The human brain/mind is a biological self-programming super-computer which functions in exactly the same manner …. with the exception that the brain’s “architecture” is totally different than that of an electronic computer.
And ps: Technically, it would be correct to state that …… “Humans are a prime example of Artificial Intelligence”, simply because, … “You are what your environment nurtured you to be”.

• No need to rethink the statement. Computer models are not built to produce a desired answer, but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.

• DonM says:

No, some models are indeed “built” to produce a desired result.
I have done it. I have submitted it to the respective regulator for their review and they, knowing that any computer output is correct, accepted it without any further questions. The project moved forward.
I have also submitted very simple (by my defn.) and accurate hand written calcs to the respective regulator/reviewer. Without an “output” table the documentation is, almost always, held to a higher standard of questioning.
Computer models SHOULD not be built to produce a desired answer, but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.
Do you think that the accepted models could not be easily tweaked by increasing or decreasing an assumed coefficient or two? Do you think the modelers put the model together, hit run, and published the very outcome results without further tweaking of the model? Do you think that the models were not refined throughout their construct? And do you think that there was no bias involved in model refinement?

• DonM says:

My point:
Some models = good (accurate)
Some models = benign (not used in a manner that impacts others)
Biased models = models produced by people with a bias
Manipulated models = models that are intentionally manipulated for a desired use
People that accept all models as honest & accurate (good) = primarily idiots that are easily manipulated

• Computer models are not built to produce a desired answer, ….. but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.

Give it up, ……. the above oxymoron example …. plus obfuscations only belies your desperation.
Climate modeling computer programs utilize Average Monthly/Yearly Surface Temperatures as “input” data ….. which were calculated via the use of Daily Temperature Records that have been recorded during the past 130 years.
The Daily Temperature Records covering the past 130 years are all of a highly questionable nature and are thus impractical for any scientific use or function other than by local weather reporter’s appeasement of their viewing public.
If the Daily Temperature Records covering the past 130 years are little more than “junk science” data ….. then the calculated Average Monthly/Yearly Surface Temperatures are also “junk science” data.
And if the aforesaid Average Surface Temperature “junk science” data is employed as “input” data to any climate modeling computer program(s) …… then the “output” data from said programs will be totally FUBAR and of no scientific value whatsoever.
The only possible way to calculate a reasonably accurate Average Surface Temperature(s) would be to utilize a liquid immersed thermometer/thermocouple in all Surface Temperature Recording Stations, structures or units.
But it really matters not, …. because a reasonably accurate Average Yearly Surface Temperature will not help one iota in determining or projecting future climate conditions. It would be akin to projecting next year’s Super Bowl Winner.

• Michael 2 says:

“The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.”
So can your slide rule or pencil-and-paper computations, a thing the computer merely expedites.
The arguments here are not about the computational efficiency of a computer.
I am reminded of the “floating point bug” in the processor itself:
http://en.wikipedia.org/wiki/Pentium_FDIV_bug
A big difference between my education and that of my children is that I can usually spot immediately when my calculator (or my input to the calculator) is wrong, because in parallel with entering data and operations, my mind has also been estimating the likely result. But my children have complete faith in the calculator and do not detect when they (more likely) or the calculator itself (extremely unlikely) has made an error.
A *model* is not just a simple calculation. A very large number of computations are related to each other and simple floating point rounding error can eventually accumulate to the point of uselessness after millions or billions of iterations.
Other assumptions exist as “parameters” and you play with the inputs and see what you get.
Models are “trained” and, once trained, will sometimes see patterns in noise. Human hearing can do this too. When I flew on turboprop aircraft I would sometimes hear symphonies in the constant-RPM droning of the propellers. The human mind seeks patterns and will sometimes find them.
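Michael 2’s point about floating point rounding error accumulating over many iterations is easy to demonstrate. 0.1 has no exact binary representation, so adding it naively ten million times drifts measurably away from the true sum, while a compensated summation (Python’s `math.fsum`) recovers the correctly rounded result:

```python
import math

total = 0.0
for _ in range(10_000_000):
    total += 0.1          # each addition rounds; the tiny errors accumulate

print(total)              # noticeably different from 1000000.0
print(math.fsum(0.1 for _ in range(10_000_000)))  # compensated: 1000000.0
```

Ten million additions is trivial next to the billions of coupled operations in a long model run, which is why numerical analysts worry about error growth rather than assuming it averages out.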

• Ian Macdonald says:

No, the original article is correct. The first rockets and the first nuclear devices were built without the benefit of computers. It’s just a lot slower and harder to work those things out by hand, but not impossible. Gagarin’s first flight used a mechanical analog computer for course tracking, but it was not essential to the mission’s success. At the present stage of IT development, if a human cannot conceive of a solution to a given problem, then a computer cannot solve that problem.

• TobiasN says:

The “solar neutrino problem” was well-known for 40 years. The model was not working. Only about 1/3 of the predicted neutrinos were observed.
Apparently they solved it about ten years ago: the missing neutrinos exist, but as a different type.
So yes, now science can predict the number of [regular] neutrinos coming from stars. But, IMO, this topic is not a great example of how straightforward science is. Not just because the model was wrong for 40 years, but because they have yet to build a detector to actually observe those other neutrinos theoretically coming from the sun.
Of course I could have it all wrong.

You do have this wrong. The model was working fine. The problem was with the detector, which was only sensitive to a third of the neutrinos. The neutrinos change their ‘flavor’ on their way to the Earth. Once all types of neutrinos were detected, it was found that the flux is just what the model predicted.

• Pat Frank says:

The problem was that neutrinos were thought to have zero rest mass. That forced only one type of neutrino. The solar flux of this type of neutrino was only 30% of the predicted flux.
When the physical theory was changed to allow non-zero rest mass, the new physics allowed neutrinos to oscillate among types, which had slightly different masses.
The revised physical theory correctly predicted the reduced flux of the originally observed solar neutrinos and further predicted fluxes of two alternative types. These were also detected, and the total flux agreed with the earlier calculation of total solar neutrino flux.

• You are confusing the model for calculating the production of neutrinos [which does not depend on its mass] and the chances of observing neutrinos of a given type at Earth. The latter has nothing to do with whether the solar model was correct or not.

• Glenn says:

lsvalgaard
June 7, 2015 at 8:53 am
“You do have this wrong. The model was working fine. The problem was with the detector, which was only sensitive to a third of the neutrinos. The neutrinos change their ‘flavor’ on their way to the Earth. Once all types of neutrinos were detected, it was found that the flux is just what the model predicted.”
That computer model must have been programmed with magic. References?

• Not with magic, although as A. C. Clarke in effect said ‘any sufficiently advanced technology will look like magic to the unwashed masses’.
References would be wasted on you, but here is a starting point http://www.sns.ias.edu/~jnb/

• Steven Mosher says:

The neutrino example is fun because feynman broke his own rules.
When the data doesnt match the model… All you know is the data doesnt match the model. You dont know which one is wrong or if both were wrong.
As leif notes the model was correct what was “wrong” was the data.
Why was the data wrong? All sensor systems have assumptions or theory that are foundational to their construction. data are always infused with and reliant on theory.

• Glenn says:

“References would be wasted on you”
More ad hom. Leif, don’t you realize you shoot yourself in the foot with this attitude?

Absolutely not. Ad-homs should be doled out to the deserving, and you qualify.

• SandyInLimousin says:

Apart from the usual problems with users (aka the real world) breaking software, I was also thinking of SRAM, DRAM and microprocessor problems from the late 1970s and early 1980s. I can think of half a dozen instances when, working within the specifications of the part, a specific manufacturer’s version would fail: by corrupting data, for the memories, and by failing “randomly” for the rest. Because the vendor was convinced of his design, modelling and testing, many months and much money were lost proving that, despite it all, the parts were faulty; one famous supplier described such problems as features. This is without including radiation-induced soft errors.

• Eugene WR Gallun says:

Isvalgaard
Given enough time a guy with a pencil and some paper can do the same calculations. Ten guys would cut down the time considerably. Doing it on a computer is faster still. But the answers will still be the same.
You say — “given the mass, the composition of a star and nuclear reaction rates”. A computer’s underlying programming is the train track on which it must run. Whatever numbers our computer train is filled with, it hauls those numbers along the track that underlays it.
You say — “the computer can make visible that which we cannot see” but that is exactly what eyeglasses do — nothing more, nothing less.
Eugene WR Gallun

• Eyeglasses are useful and sometimes necessary.

• Eugene WR Gallun says:

Perhaps I should have said microscope instead of eyeglasses but I thought eyeglasses was funnier.
Eugene WR Gallun

• Stephen Richards says:

Sorry Leif, you are out of your field and will soon be out of your depth.

• And that I must hear from a person who like farmer Jones is outstanding in his field and is knee deep in bovine stuff.

• Glenn says:

He responds with ad hom, as usual. This is typical of science defenders; they see enemies all around, and any criticism of modelling is an attack on science in their eyes. One might be led to think that science is ad hom.

• The anti-science you spout looks more like an attack on rationality. BTW, I like your words “science defender”. That is what we must all do against the forces of irrationality and ignorance you spread. And you can take that as a well-directed and well-founded ad-hom. It was meant as such.

• Glenn says:

Leif: Yet you can’t provide any references that I spread “anti-science” [trimmed]

• Don’t need to. Your actions speak for themselves.

• VikingExplorer says:

Count me as a science defender.

• Science Defenders unite! Rise up and be counted!

I’m about as big a science geek as there is; it’s odd to be called a denier.
I’m also a believer in models and simulators; I helped build the electronic design industry from a few customers to 1,000’s of customers over 15+ years. I was the expert my customers went to when they didn’t understand what the simulator did.

• VikingExplorer says:

It’s a shame that the anti-AGW movement started in the name of science, against a political movement using science as a thin veneer, and has ended up accepting all the premises of the AGW movement and attracting people who rabidly attack science and rationality itself.
From a tactical point of view, it’s extremely idiotic to have gotten into a situation where any warming trend > 0 means that AGW is happening. Ever heard of natural variability? Has anyone played chess before?
I truly believe that AGW is false. Therefore, I’m not afraid of science. I’m confident that eventually, the truth will come out. Like Leif says, it’s “self-correcting”.
By resorting to a kind of foaming at the mouth attack on science, models and claiming that the climate is inherently impossible to understand or predict, and releasing a flurry of papers in response to the tiniest of warming trends, it seems like the so called skeptics around here actually believe that AGW is true.
Sun Tzu: One defends when his strength is inadequate, he attacks when it is abundant.
One plausible explanation for skeptic’s defensive posture is that deep down, they believe AGW is true. Apparently, they believe that they can’t win on science alone. Therefore, they start attacking science itself.
I believe Leif once told me that he would follow the evidence, regardless of whether it matched his initial thoughts or not. I could be wrong about AGW. If it turns out that I have been, then I’m with the truth, whatever that is.

• lsvalgaard, nowhere in Eric’s article does HE say that a “computer just regurgitates what we put in”. Further down, you state- “None of this has any bearing on whether computers only output what you put in which is the main thesis of the article we are discussing.” Also…NOT something Eric actually said.
YOU seem to be interpreting what Eric and others here are saying rather poorly. (Unless you’re just being petty and arguing semantics for kicks)
What Eric said/is saying is that the OUTPUT of any model is produced by the parameters/information/assumptions/calculations PUT IN to the model. The results/OUTPUT can be completely unknown prior due to the extent of the calculations involved, and thus not “built in”, but the results that are produced are totally dependent upon the parameters/information/assumption/calculations that ARE built in to the model. They HAVE to be. If they weren’t, how could you trust the results at all?
Yes, the computer can tell how many neutrinos of what energy will be produced AFTER it is given the mass and composition of a star, and the nuclear reaction rates as we know them to be after years of lab testing that produced consistent, dependable, accurate results based on those three variables – mass, composition, and nuclear reaction rates. Yes, the computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence. But ONLY because a software program was designed in which all of the KNOWN and TESTED physical properties of rockets, as well as how those rockets react in time and distance and thrust and etc, are contained first. The software programmers didn’t INPUT recipes for pancakes and surprisingly get the proper calculations for Pluto travel instead!
If the data that is input is corrupt in any way, either by mistake, or by flawed human-based assumptions/understanding then the end result will be corrupted by default.

• What he said was “First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience, it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs”.
THAT is what I objected to.

• Aphan says:

LS- “What he said was “First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience, it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs”.
THAT is what I objected to.
Then why didn’t you SAY you objected to THAT specifically before now? Sheesh!
Are you a computer software developer? If you are, and YOU have never felt this way when you developed your software, that makes ONE instance upon which you base your objection. If you are NOT a computer software developer, and you have not interviewed the world’s software developers personally and specifically about this topic, and they have not responded to you with clarity that none of them agree with what Eric stated, then your objection is scientifically defined as your personal OPINION. It’s not a fact, nor evidence.
Here’s a nice article about Global Climate Models and their LIMITATIONS- http://weather.missouri.edu/gcc/_09-09-13_%20Chapter%201%20Models.pdf
The area that might explain the hesitancy of most of the people who are disagreeing with you is : 1.1 Model Simulation and Forecasting, 1.1.1 Methods and Principles
Here’s a quote from that section-
“The research on forecasting has been summarized as scientific principles, currently numbering 140, that
must be observed in order to make valid and useful forecasts (Principles of Forecasting: A Handbook for
Researchers and Practitioners, edited by J. Scott Armstrong, Kluwer Academic Publishers, 2001).
When physicists, biologists, and other scientists who are unaware of the rules of forecasting attempt to
make climate predictions, their forecasts are at risk of being no more reliable than those made by non-
experts, even when they are communicated through complex computer models (Green and Armstrong,
2007). In other words, when faced with forecasts by scientists, even large numbers of very distinguished
scientists, one cannot assume the forecasts are scientific.
Green and Armstrong cite research by Philip E. Tetlock (2005), a psychologist and now professor at the University of Pennsylvania, who “recruited 288 people whose professions included ‘commenting or offering advice on political and economic trends.’ He asked them to forecast the probability that various situations would or would not occur, picking areas (geographic and substantive) within and outside their areas of expertise. By 2003, he had accumulated more than 82,000 forecasts. The experts barely, if at all, outperformed non-experts, and neither group did well against simple rules” (Green and Armstrong, 2007). The failure of expert opinion to provide reliable forecasts has been confirmed in scores of empirical studies (Armstrong, 2006; Craig et al., 2002; Cerf and Navasky, 1998; Ascher, 1978) and illustrated in historical examples of wrong forecasts made by leading experts, including such luminaries as Ernest Rutherford and Albert Einstein (Cerf and Navasky, 1998).
In 2007, Armstrong and Kesten C. Green of the Ehrenberg-Bass Institute at the University of South
Australia conducted a “forecasting audit” of the IPCC Fourth Assessment Report (Green and Armstrong,
2007). The authors’ search of the contribution of Working Group I to the IPCC “found no references
… to the primary sources of information on forecasting methods” and “the forecasting procedures that were described [in sufficient detail to be evaluated] violated 72 principles. Many of the violations were, by themselves, critical.”
***
Now…if the EXPERTS who are supposedly building these climate models and running the computer programs are UNAWARE OF, or are IGNORING the 140 scientific principles “that must be observed in order to make valid and useful forecasts”-just exactly how VALID or USEFUL are the results pouring out of those climate models and computers going to be???????

Well, I thought I made it clear enough, but obviously there are limits to what some people can grasp and I didn’t take that into account enough; my bad.
And I am a software developer [perhaps a world-class one, some people tell me] with 50+ years’ experience building operating systems, compilers, real-time control systems, large-scale database retrieval systems, scientific modeling, simulations, graphics software, automatic program generation from specifications, portable software that will run on ANY system, machine coding, virtual machines, etc, etc.

• Don’t know what happened to the formatting there. Surely my computer software knows how to format paragraphs correctly…and yet what it did to the data I entered was nothing like what I expected. (grin/sarc)

• usurbrain says:

“The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.” ??? And how many rockets have we sent outside of the earth to anywhere that arrived at a faraway planet or object without us verifying, and correcting, the trajectory mid-course?
“The computer can, given the mass, the composition of a star, and nuclear reaction rates tell us how many neutrinos of what energy will be produced.” That is a simple math problem. Now write the algorithm for the effect of CO2 (just one of the known ‘factors’) on the temperature of the atmosphere – A, with no water vapor, and then – B, a correction factor for the various values of water vapor in the air. Now provide just a partial listing of ALL factors that could possibly affect the temperature of the atmosphere, oceans, land, ocean currents, atmospheric currents, water vapor transportation of energy through the atmosphere, etc. etc. etc. (I think you get the point.) And how long will it take to write these algorithms, and how many people will it take?
It took me and four others over four years to model a 1000-megawatt electrical nuclear power reactor, and every variable was known. The reactor system was operating, and the model could be verified by making numerous perturbations to the reactor system, both large and small, of every variable and parameter in the system to verify the accuracy of the “model.” It then took another two years to get the model to accurately reflect the restricted set of parameters that were in the design specification. After all of that there were things that were not modeled and did not accurately match actual plant operation, but they were at least controlled by other parameters within the system.
Climatologists don’t even know how many parameters they don’t know, let alone how many parameters they need. And of the ones they do know, they don’t even know enough about them to write an algorithm to plug into software to calculate the effect each has on the system they know nothing about.
As you claim to be an expert on how easy it is to make a computer do things – please provide an algorithm for calculating “pi” (3.141…) accurate to at least 20 places, that I can put into Fortran (the language I have been told they use). Don’t bother looking one up. I want YOU to provide me with the algorithm from your brain, with no reference to any text book. PERIOD. That should be a trivial case compared to CO2. After you do that, do the algorithm for atmospheric downwelling radiation from CO2. Again from your brain, not from a text.
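For what it’s worth, the pi challenge is a long-solved problem: Machin’s 1706 formula, pi = 16·arctan(1/5) − 4·arctan(1/239), delivers 20+ digits in a handful of series terms. A sketch in Python (usurbrain asked for Fortran, but the algorithm translates directly), using the standard-library `decimal` module for arbitrary precision:

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) for integer x > 1, via the alternating Taylor series
    arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    eps = Decimal(10) ** -(digits + 5)
    power = total = Decimal(1) / x
    n, sign, term = 1, 1, power
    while term > eps:
        power /= x * x
        n += 2
        sign = -sign
        term = power / n
        total += sign * term
    return total

def machin_pi(digits=25):
    """pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10   # guard digits during the computation
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    getcontext().prec = digits
    return +pi                        # unary plus rounds to the final precision

print(machin_pi(25))                  # 3.14159265358979323846264...
```

Which rather supports Leif’s side on this narrow point – computing pi to 20 places is genuinely trivial next to a radiative-transfer algorithm – without touching the larger argument about unknown climate parameters.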

The key point is “It then took another two years to get the model to accurately reflect the restricted set of parameters that were in the design specification”, that is: you got it to work well enough to be useful. So it is possible; you just have to work at it. As for pi: draw a lot of parallel lines and throw straws at them a gazillion times. The number of straws that cross a line involves the number pi.
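The straws-and-lines method Leif describes is Buffon’s needle (1777): a needle of length L dropped on lines spaced d apart crosses a line with probability 2L/(pi·d), so counting crossings estimates pi. A quick sketch (the function name and parameters are this editor’s, not Leif’s), which also concedes usurbrain’s point about precision: Monte Carlo error shrinks only as 1/sqrt(N), so 20 digits this way is hopeless:

```python
import math
import random

def buffon_pi(throws, needle=1.0, spacing=1.0, seed=0):
    """Estimate pi by Buffon's needle (needle length <= line spacing)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(throws):
        y = rng.uniform(0, spacing / 2)        # needle centre to nearest line
        theta = rng.uniform(0, math.pi / 2)    # needle angle to the lines
        if y <= (needle / 2) * math.sin(theta):
            crossings += 1
    # P(cross) = 2*needle/(pi*spacing)  =>  pi ~ 2*needle*throws/(spacing*crossings)
    return (2 * needle * throws) / (spacing * crossings)

print(buffon_pi(1_000_000))   # roughly 3.14; only 2-3 digits from a million throws
```

A million throws buys two or three correct digits; every further digit costs about a hundred times more throws, which is why nobody computes pi this way for real.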

• usurbrain says:

Isvalgaard – WRONG – the key point is that —
“Climatologists don’t even know how many parameters they don’t know, let alone how many parameters they need. And of the ones they do know, they don’t even know enough about them to write an algorithm to plug into software to calculate the effect each has on the system they know nothing about.”
Thus the rudimentary SIMPLE model of a model takes the equation GIGO and provides GIGO^2

• Tell that to Mr. Monckton who advocates a SIMPLE model explaining everything.

• Billy Liar says:

lsvalgaard
June 7, 2015 at 4:28 am
Dr Svalgaard, I am afraid I have to tell you that computers do indeed only regurgitate what we put in. No one has yet invented a computer that ‘makes stuff up’ or ‘thinks for itself’. All computers, as tty says, merely carry out a sequence of operations on the input data. If you have never performed the calculation manually, the result may surprise you, but the computer is only producing the ‘built-in’ result even if it is a supercomputer. A lot of climatologists would be able to think more clearly if they accepted this as true; the computer is not going to provide any revelations, you have to provide those as input. Building in ‘randomness’ and ‘statistics’ will not persuade the computer to come up with something new either; it can’t.

• That is not the issue here. The computer lets me see things by running a model, that I could not see without. E.g. how many neutrinos are produced in the Sun’s core? Where is there an oil field worth exploiting, etc. In none of those cases has the programmer put into the model what the answer should be.

Leif, as someone who has designed computers for spacecraft and launch vehicles, I can tell you the difference. The difference is that these are completely deterministic systems that are 100% dependent on external sensors to provide real-time data, which is then processed, and the spacecraft or launch vehicle computer provides a correction in the form of an output to a thruster or sensor suite.
Also, to use your second analogy, we still have the problem of missing neutrinos, as well as the problem of the consensus that misled Hathaway et al. into their wrong predictions about solar cycle 24. Another example is the failure of the scientific community to predict the existence of gamma ray bursts in the cosmos, or the production of gamma rays and even anti-matter in lightning flashes before the GRO BATSE instrument discovered them. It was still over a decade before the data was believed.
I will give you another example.
In the Apollo samples that came back from the Moon, in some samples anomalously high levels of water were discovered. Since the deuterium ratios were the same as terrestrial water, these anomalously high levels were dismissed as contamination – not an unlikely thing, considering the work was done in Houston. HOWEVER, only after Chandrayaan and the Lunar Reconnaissance Orbiter, coupled with confirmation of earlier data from Clementine and Lunar Prospector, did scientists go back and figure out that their ASSUMPTIONS were wrong and that indeed the water in those samples was truly of lunar origin. Until LRO and Chandrayaan, the “consensus” of the planetary science community would not consider any other origin than terrestrial contamination, even after James Arnold’s theoretical paper in 1979, and after Prospector and Clementine gave the same data. It was ONLY after it was completely indisputable that the consensus began to change.
So, I respectfully disagree with your disagreement.

• Nick says:

Talking of rockets…
http://www.around.com/ariane.html
Computers are no different from pen and paper, they are just much faster. You only know if your paper calculation or your program is ‘correct’, by testing it against the real world.

• The computer [more precisely its software] can make visible that which we cannot see.
===============
exactly. the computer has no regard for whether those things are true or correct. it can spit out an imaginary future in the blink of an eye, because unlike a human, the computer has not the slightest compunction against lying.
Every computer on earth is a sociopathic liar. It can lie without the slightest compunction.
The Sociopathic Liar – Beware of this Dangerous Sociopath
Most people have lied in their life. Whether it was to protect feelings, avoid trouble, impress, or to simply get what they want, not many people can say they have never told a lie.
However, there is one extreme type of liar that you should beware of; the sociopathic liar.
On first impressions, you may find you actually like or are drawn to the sociopath. It’s not surprising as more often than not they are indeed charming and likable. Watch out, these type of liars can cause untold damage and mayhem once they lead you into their web of lies and deceit.
Sociopaths lie the most because they are incapable of feelings and do not want to understand the impact of their lies. They may even get a thrill out of lying at your expense. Once they tell an initial lie they go on to tell many more lies in an attempt to cover up the lies they started, or just for the “fun” of it.
A sociopath rarely reveals his or her feelings or emotions. You won’t often hear them laugh, cry, or get angry. These kinds of liars tend to live in their own little world and always find ways to justify their dishonest deeds. They do not respect others and place their own needs first and foremost.
If someone questions the sociopath’s lies they can be incredibly devious in the way they cover things up. This can include placing the blame at someone else’s door or by inventing complex stories to cover up their untruths.
Sociopaths can be so good at lying that they are able to pass lie detector tests. This means they often escape jail or don’t even get prosecuted for the crimes they commit. (That’s not to say all sociopathic liars are criminals, of course).
It is believed by some experts that sociopathic lying is connected to the mental illnesses Narcissistic Personality Disorder (NPD) and Antisocial Personality Disorder (APD).
If you come across someone who you think is a sociopathic liar, beware!
http://www.compulsivelyingdisorder.com/sociopathic-liar/

• Dems B. Dcvrs says:

lsvalgaard – “The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.”
Which backs the point of the article.

• Dems B. Dcvrs says:

No, not just the opposite. Your remark does back the point of the article.
Humans (“what we put in”) are involved; thus, human bias.
In the case of climate models, the vast majority of those models have been proven Wrong by Mother Nature. Those models are wrong because of G.W. Climatologists’ biases, whether those biases are intentional or unintentional.

• We put in the laws of physics and the experimental parameterizations needed and this is done rigorously with full review by competing scientists. Nobody puts ‘bias’ in. Show the evidence [if any] that a scientist has put his personal bias into the computer code.

• lsvalgaard
June 7, 2015 at 9:42 pm
The algorithms for UHI adjustments and other important parameters had to be dragged kicking and screaming out of the computer gaming “modelers”.
Climate modeling is the antithesis of real science.

• So what is real science in your esteemed opinion? Barycenter cycles? Piers Corbin? Monckton’s ‘simple’ model? Evans’ Notch Theory? …

• Leif,
That you have to ask shows just how far government-financed, post-modern “science” has strayed from the scientific method.
It should be obvious that real science is based upon observation. All actual data show that ECS cannot possibly be in the range assumed, never demonstrated, by IPCC.
“Climate science” is corrupted science.

• opinions, opinions, agenda-driven opinions. I am a government-funded scientist, am I corrupt?

• Yep, computers can do all of that. And WHY? Because the science of gravitational attractions is known and VERY well tested, with LOTS of replication. All the formulae work the same this time and every time. And WHY? Because the equations to code in are KNOWN and RELIABLE and replicate well.
But now, take those same computers and work with a chaotic system in which even the most powerful feedback element (water vapor) is not understood very damned well at all.
My own rule of thumb, FWIW, is that replicable sciences that tend toward engineering are done amazingly well on computers. But out on the frontiers, where even the principles aren’t known, when assumptions are coded in instead of proven equations, how in the HELL can anyone trust the output?
If we had as many assumptions in astrophysics as in climate science, no one would be able to hit a planet or comet or asteroid.
The computer can also make visible that which we will never see: If EVERY formula used isn’t well-tested and well-proven, the output is a cartoon.

• The physics of the atmosphere is also well-known and reliable. We cannot, as yet, deal with small-scale phenomena, but progress is being made: the weather predictions are getting better and that will benefit climate models in the end.

No, the physics of the atmosphere is NOT well-known and reliable. If it were, there would be only ONE GCM and all climatologists would use it. The very fact that scores of them exist means not only that they don’t all have/use the same physics but that they put in different parameters, too. WHY would they do that, if all of it is well-known? Wouldn’t they all use the same parameters?
Well-known in theory? Doesn’t mean squat, because the models are all giving wrong results.
Unless they all are voluntarily ignorant of your well-known physics? And how much of that “well-known” FITS with reality? I see you hammering on other guys here about Feynman and his “If it doesn’t agree that means it’s wrong” – so now apply that to climate curves.
Are you talking out of both sides of your mouth? You can’t have it both ways. The models are wrong – ergo, either the physics that is well-known is well-known but incorrect, or you blame it on the code writers for not getting it right. But you tell us that programmers’ work is checked and double-checked. Every way we look at it, the lack of match-up says somewhere in there the process is messed up.
You claim that it can’t be the physics or the climate guys (I assume they are the ones for whom the physics is well-known) – but if so then where are the model results errors coming from? Oh, and PULLEAZE! please, please, please, claim that the models are correct! That all their curves match up with the actual temperatures. I need a good laugh. Tell us that there has been no hiatus. PLEASE tell Trenberth where the missing heat is. HE at least admits it’s missing. So I won’t have to laugh at him, because he sometimes sees facts and admits them as facts.
But your reply is nonsense, dude. “The physics is well-known and reliable.” But you can’t deal with small-scale stuff? Dude, you don’t have the BIG stuff down, either. Or are you different from Trenberth and refuse to admit facts as facts?
Weather predictions? WHO in the world is talking about weather? Aren’t you forgetting? It’s the skeptics who are supposed to not know the difference between weather and climate. And here you are, claiming WITHOUT BASIS that the weather predicting models are going to help with the climate models. HAHAHAHA – When? In the 15th millennium?
And if the physics is all so well-known, the weather and climate models should have no discrepancies between them. And here you are, admitting that the climate ones are INCOMPLETE AND NOT WORKING. Otherwise, the weather predictions wouldn’t have any “benefiting” to do for the climate models.
It’s either well-known and working – or it is incomplete and not working. You can’t claim both, dude. Not among intelligent people who are paying attention to the contradictory assertions you claim.

• VikingExplorer says:

>> No, the physics of the atmosphere is NOT well-known and reliable. If it was there would only be ONE GCM and all climatologists would use it.
Non-sequitur
By that embarrassing logic, shouldn’t there be only one textbook in each subject? I remember during Calculus class going and getting 10 different calculus textbooks and laying them out on the table, all turned to the same topic.
>> If we had as many assumptions in astrophysics as in climate science, no one would be able to hit a planet or comet or asteroid.
What makes you think we can? You’ve been watching too much star trek.
The physics of the atmosphere IS well-known, and reliable. Otherwise, weather models would be impossible.
However, climatology is not an extension of meteorology. The atmosphere is only 0.01% of the thermal mass of the system.

• catweazle666 says:

VikingExplorer: “The physics of the atmosphere IS well-known, and reliable. Otherwise, weather models would be impossible.”
A simple definition of science is the ability to predict. If your prediction is wrong your science is wrong. How good is the “science” these Canadian bureaucrats produce? The answer is, by their measure, a complete failure. Figure 1 shows the accuracy of their weather prediction for 12 months over the 30-year span from 1981 to 2010.
Notice that for 90 percent of Canada the forecast average accuracy is given as 41.5 percent. A coin toss is far better odds.

Which appears to indicate that reliable weather models are impossible, so clearly the physics of the atmosphere IS NOT well-known, and reliable.
Of course, as the physics of the atmosphere is essentially non-linear and chaotic – hence subject to inter alia extreme sensitivity to initial conditions, this is not surprising.
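That sensitivity to initial conditions is easy to demonstrate in a few lines. Below is a minimal sketch (my own toy, not code from any weather or climate model): the Lorenz-63 system, integrated with crude forward Euler steps, where two runs that start one part in a billion apart end up on completely different trajectories. The step size and perturbation are arbitrary choices for the illustration.

```python
# Sensitivity to initial conditions in the Lorenz-63 system.
# A toy illustration of chaos, not a climate model.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)         # reference run
b = (1.0 + 1e-9, 1.0, 1.0)  # perturbed by one part in a billion

max_separation = 0.0
for _ in range(5000):       # 50 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_separation = max(max_separation, abs(a[0] - b[0]))
```

Both runs obey exactly the same “well-known” equations; the divergence comes entirely from the ninth decimal place of the starting state.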

• VikingExplorer says:

catweazle666, your reasoning seems to be:
premise: detailed data is NOT required for weather prediction
premise: Canada has horrible weather prediction
conclusion: weather models are impossible
Non-sequitur. One of these premises is invalid.
Actually, forecast accuracy is pretty good up to 5 days, and rain forecasts are pretty good too.

• catweazle666 says:

VikingExplorer: “catweazle666, your reasoning seems to be:…”
Wrong.
In fact, my reasoning is:
“Of course, as the physics of the atmosphere is essentially non-linear and chaotic – hence subject to inter alia extreme sensitivity to initial conditions, this is not surprising.”

• VikingExplorer says:

catweazle666, you can’t weazle out of this one. Here is how this went:
Leif: The physics of the atmosphere IS well-known, and reliable.
VE: Otherwise, weather models would be impossible.
cw666: References Dr. Ball
Dr. Ball: Canada has taken away the ability to collect atmospheric data
Dr. Ball: Canada hasn’t been very good at predicting the weather between 1981 and 2010 <== old data
Dr. Ball: A simple definition of science is the ability to predict.
cw666: Which appears to indicate that reliable weather models are impossible
cw666 conclusion: so clearly the physics of the atmosphere IS NOT well-known, and reliable
Therefore, I represented your argument quite well. Non-sequitur.
By that brilliant logic, if we take away all medical instruments from our doctors, we can conclude that the Medical Sciences are not well known and reliable.
Faced with evidence that I provided that shows that actually, 5 day forecasts are quite good, you are now changing your position. A 5-day forecast would be impossible if the physics of the atmosphere were not well-known, and reliable.

• catweazle666 says:

“Faced with evidence that I provided that shows that actually, 5 day forecasts are quite good, you are now changing your position. A 5-day forecast would be impossible if the physics of the atmosphere were not well-known, and reliable.”
No, I have not changed my position, which was clearly stated at the end of both my relevant posts using identical wording. Here it is again:
Of course, as the physics of the atmosphere is essentially non-linear and chaotic – hence subject to inter alia extreme sensitivity to initial conditions, this is not surprising.
In fact, the improvement in 5 day forecasting over the last decade or so is primarily as a result of universal satellite coverage, not computer games models, and the improvement in precipitation forecasting is due to radar installations.
So you are wrong – again.

• VikingExplorer says:

>> improvement in 5 day forecasting over the last decade or so is primarily as a result of universal satellite coverage
Exactly. When the system is sensitive to initial conditions, adding more data makes prediction easier. However, data alone predicts nothing; it is the physics being simulated in the models that results in prediction. Therefore, the atmospheric physics is relatively well known and reliable.
As for the non-linear comment, any good engineer or scientist knows that almost all physical systems are non-linear and chaotic. That has not stopped us from advancing and, in many cases, succeeding.
There is absolutely no reason to conclude that climatology is impossible.

• catweazle666 says:

“There is absolutely no reason to conclude that climatology is impossible.”
If by that you are asserting that climate models can ever give accurate projections/predictions, the IPCC itself would appear to differ.
…in climate research and modeling we should recognise that we are dealing with a complex non linear chaotic signature and therefore that long-term prediction of future climatic states is not possible…
IPCC 2001 section 4.2.2.2 page 774

Not to mention no less an authority than Edward Lorenz, of course.

Digital simulators take 1’s and 0’s from an input file, apply them to the set of inputs, and then step a clock (another defined input). You can load memory with programs, and with system-level models (like CPUs) you can run your set of vectors; I’m sure by now you can get code debuggers running against a behavioral model, even get inside to see registers. We did most of this in the 80’s. While you can check timing in a simulator, the results are vector dependent. We also had timing verifiers: you set up a clock and your control inputs (reset, clear, and the like), you initialize the circuit (because circuits don’t just turn on like they do on a bench), and then you define a stable/changing input state for your 128-bit data path. Without your having to come up with all combinations of those 128 bits – just stable, change, stable – the tool checks all possible permutations. But it was hard for a lot of designers to understand what the tool did and why it gave the results it did.
Analog simulators took a circuit design and turned it into a set of partial differential equations based on model primitives. Not only did you have to initialize the circuit, you had to initialize the matrix equation; this is basically how a GCM works.
Simulators are state dependent, and GCMs are state dependent. The initial conditions in a GCM are the setup and test vectors of my simulation example: you have to set each node of the matrix, run until you get numeric stability, and at that point start the GCM’s clock running forward. The pause is a state-dependent effect: the state of the ocean, the state of the air, the state of the ocean following from the state of the air, and so on.
Climate is a 30-year average, taken to average out the state data. Imagine a superposition of all ocean states and a superposition of all air states: no pause, no El Nino, no La Nina, just like the stateless timing verifier.
The ensemble GCM runs are trying to turn state-dependent runs into a superposition of all possible runs, because we can’t match the state of the real world to the state of the GCM. Someone at some point must have thought that, with a well enough defined set of initial conditions, a weather forecasting system on a big computer could tell them the future, if they smudged the state data into something like the superposition of each year’s actual weather – a 30- or 60-year average – and got their model to do the same thing. Lorenz knew better. But can we define a stateless climate?
In some respects, climate sensitivity is the result of a stateless simulation, but one based only on a change in CO2, and we know it is lacking.

• VikingExplorer says:

You have definitely accepted their premises if you’re using the IPCC as a source. Stockholm syndrome much?
And the view of Lorenz is often overstated by people for political reasons. For one, he never said that it was impossible, only that long-term predictions were difficult.
That’s ok, because we don’t really need long term predictions.

• Dems B. Dcvrs says:

lsvalgaard – “Mother Nature says that the models are not working. That is all you can conclude.”
We can conclude something else.
Despite numerous people here pointing out your incorrect statements, you cannot admit you are wrong.

• most people here have no idea what they are talking about…

• Jonathan Bishop says:

Dr Svalgaard,
I agree that “software can make visible that which we cannot see”, but reading through many of your replies to other commenters, I suggest that you cast your mind back to your numerical analysis and compiler development days. Computational numerical analysis is not generally well understood even among “professional” computer scientists, let alone amateur programmers (regardless of how many years of code hacking they have done). Usually this is a minor problem, since the vast majority of applications are not numerically intensive – but not so, I suggest, with climate simulations. With your background in numerical analysis, I expect you know this, but perhaps Eric’s wording did not cause you to recall the design issues that must be addressed for success in this instance.
When I was teaching computer science at university many decades ago, we took 300 in first year and graduated about 30 at the end of third year. Back then, university computer science degrees concentrated on creating programmers and computer scientists rather than the database admins and project managers they seem to produce now. We were not yet in churn mode at universities, trying to graduate enough database/code hackers to satisfy a burgeoning IT industry. Computing, like I guess all disciplines, is replete with chasms for the unwary and the self-taught.
While some of the ways Eric presented his case gave me pause, the essential thrust of his piece clearly strikes a chord with the programmers in the commentariat of WUWT, and in my view is broadly correct.
My interpretation of what Eric was saying is:
1. Computers are not an authority, they do nothing of themselves.
2. It is the software that does something, but is not an impartial authority either as it does what the programmer tells it to do.
3. Programmers make implementation decisions at the coding level that can have a significant impact on the outcome – starting with the choice of computer and coding language, but continuing through to the choice of algorithm to implement an equation, etc.
4. Those decisions may be expressions of personal bias, of the need to approximate or assume some behaviours/values/constants for a variety of reasons (e.g., unknown, considered unimportant, not practically calculable on the technology in use), or, more commonly, of gaps in the knowledge space/skill set of the programmer. I will explain this later.
5. Thus whether a model was run on a computer or on the infinite Turing tape machine is irrelevant – the “computer” is not an authority – it is a tool, and it does only what it is told to do by a programmer – who may or may not be trained or sufficiently experienced for the task.
I do not think, as you seem to have interpreted, that Eric was saying computers pump out the answer you give them, but rather that the answer they give can be heavily influenced by the views, skill, knowledge, etc. of the programmer, because the software cannot yet invent or modify itself (except potentially in interpreted languages like lisp, M, PHP, JavaScript, etc.). With such biases inherent in the creation process, it is naive to consider the computer an “authority”.
As other commenters have pointed out (and as you would clearly understand given your coding experience), if floating point numbers are used on a limited-word-length binary Von Neumann computer (i.e., not a Turing Machine), then there are more floating point numbers missing from the domain of possible numbers than are in it. Of course, this is true of integers too – but only at the extreme ends of the number line, not between two numbers as occurs in floating point arithmetic. These missing numbers are approximated (rounded up or, usually, truncated to a precision t), and these approximations rapidly accumulate into significant errors. Yet I would be willing to bet that there are few (if any) climate model implementations where the “coder” has even known that he/she should calculate machine epsilon (assuming they knew what it was) before deciding the level of precision at which to work, or has calculated the computational error, let alone understood that the obvious approach is not necessarily the best.
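For readers who have not met the term, machine epsilon can be measured directly rather than looked up. A minimal sketch (my own, assuming IEEE-754 double precision, which is what mainstream hardware provides):

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable
# float. Halve a candidate until adding half of it to 1.0 no longer
# changes the result.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

# For IEEE-754 doubles this yields 2**-52 (about 2.22e-16),
# matching sys.float_info.epsilon.
```

Any quantity smaller than roughly eps times the running sum is silently lost on each addition, which is exactly why the order and grouping of operations matters in a long model integration.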
In numerical applications the choice of approach is the most important factor. By way of an example, which I am sure you will recall from past experience let us consider something as seemingly simple as calculating e^x.
e^x = 1 + x + x^2/2! + x^3/3! + ….
Forsythe (et al) demonstrated that if we want to calculate e^-5.5 (his example) the obvious way to calculate this is to calculate each term in the above formula and add them together, right? Umm – no.
While the correct answer (approximation) is 0.00408677, if I simply calculate each term in succession on a binary computer with a precision of 5 and stop at 25 terms, I will get e^-5.5 = 0.0026363. In other words, the obvious solution gives a result with NO significant digits.
Now if I change the algorithm to implement e^-5.5 = 1 / e^5.5 on a machine with the same characteristics, and otherwise use the same basic approach – adding the terms of the series together but then dividing the result into 1 – I will get e^-5.5 = 0.0040865. Still not ideal, but better.
In fact the better way to calculate e^x on a binary computer is to break x into its integer and fraction parts:
x = i + f (in the example i = -6 and f = 0.5) and then calculate:
e^x = e^i * e^f, where 0 <= f < 1.
This is perhaps not the most obvious approach and this is just the start of a very long list of the issues to consider in numerically intensive computing on binary machines.
Now this is a well understood problem – most numeric libraries include a function for e, so programmers rarely actually program it these days, but for the library to contain the function, someone did program it, and the same issues apply to a host of numeric and statistical calculations that they do program directly.
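The Forsythe example above can be reproduced without special hardware by simulating five-significant-digit arithmetic with Python’s decimal module (a sketch of my own; the exact digits you get depend on the rounding mode and summation order, so only the qualitative outcome should be relied on: the naive sum loses essentially all significant figures to cancellation, while the reciprocal form stays close to 0.00408677):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5  # simulate a machine carrying 5 significant digits

def exp_series(x, terms=25):
    """Sum the Taylor series of e**x term by term at the current precision."""
    total = Decimal(0)
    term = Decimal(1)
    for n in range(1, terms + 1):
        total += term
        term = term * Decimal(x) / n
    return total

naive = exp_series("-5.5")      # alternating terms up to ~42 cancel badly
better = 1 / exp_series("5.5")  # all terms positive: no cancellation
```

Same series, same precision, two superficially valid algorithms, and only one of them gives an acceptable answer.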
It is for a host of reasons such as these that qualified computer scientists tend to consider programming as much an art as a science and are very wary of saying "it is right because the computer said so". This is what I understood was the thrust of Eric's paper.
For me, one of the initial problems I had with AGW modelling many years ago was that the software code was not necessarily public, and was not (apparently) generally written by actual computer scientists skilled in numerical analysis, but by self-taught programmers with expertise in other fields. As a real “professional” computer scientist, I am extremely hesitant to accept a numerical model’s output if I cannot inspect the code, and never if the computational error value has not been calculated. Have you ever seen a climate model where the computational algorithmic error (and I do not mean the statistical errors, nor instrument errors) is calculated and reported? I assume such models exist, but I have not noticed one to date.
So I suggest, with the greatest respect, that just as I would not presume to lecture you about the sun, perhaps you might be well advised to at least listen to, and try to understand what the programmers here have been telling you about what we have to think about when building numerical applications. While every kid can hack out a toy programme (and many non-computer scientists think they are programmers because they can hack out something that seems to give an answer), serious computational numerical modelling requires real expertise in computational numerical modelling – as well as in the field being modelled and in statistics!
Ref: Forsythe, et al – Computer Methods for Mathematical Computations (1977)
Ref: Dahlquist, et al – Numerical Methods (1974)
Kind regards,
Jonathan Bishop

All of what you say is well-known by the modeling community, which worries a great deal about numerical stability and goes to great lengths to solve the equations in the best possible way. We are not talking about lone, amateur propeller-head programmers, but about seasoned scientists the world over scrutinizing each other’s work. The climate models are very much thorough teamwork and at the same time a very competitive enterprise. The notion of programmers putting in ‘bias’ is ludicrous.

• And I am also a professional programmer [in addition to being a solar physicist]. From upthread:
And I am a software developer [perhaps a world-class one, some people tell me] with 50+ years experience building operating systems, compilers, real-time control systems, large-scale database retrieval systems, scientific modeling, simulations, graphics software, automatic program generation from specifications, portable software that will run on ANY system, machine coding, virtual machines, etc, etc.

Dr Svalgaard,
lsvalgaard: “I am also a professional programmer.”
Yes, I understood that; in fact, you have a numerical analysis background, having checked your CV before commenting in the first place. Hence my reference to your previous life in numerical analysis, and my choice of an example with which I assumed you (but not necessarily other commenters) would be familiar. In that light, I think my last paragraph was poorly worded, as it can be read to imply otherwise.
The purpose of the example was to remind (not teach) that the algorithm choice is a programmer’s decision, and hence a reflection of the programmer’s views and/or knowledge. This is what I understood to be the central issue of bias that Eric was proposing.
My feeling is that you may have been too hard on him (and a few others) in concluding that he argued merely that the models simply regurgitate a pre-defined result, as that would seem to be naive.
I read his essay as arguing more along these lines: that the programmer makes choices at the algorithmic design and construction level which, while seemingly minor, can dramatically affect the outcome, but not necessarily be obviously incorrect. Hence the example of e^x: each algorithm was superficially valid, but only one gives an acceptable approximation, as a result of the constraints in binary floating point implementation.
I accept your argument (knowing no better) that the majority of the models are carefully considered and implemented by professional programmers. Of course, that does not seem to have been the case with the UEA software that was the subject of the climategate emails – at least until late in the piece, when an apparently competent programmer was engaged to repair (unsuccessfully) the software and data, if the code and his comments are anything to go by. Yet embedded in those comments were many subjective “best guesses” reproduced in the code – which rather supports Eric’s argument.
I suggest with the greatest gentleness that, while the idea of a single programmer embedding personal bias into the implementation may seem unlikely, “ludicrous” is possibly a little strong. I read bias to mean “subjectivity”, a consequence of the micro design decisions. I am pleased to hear that you are confident in the diligence, professionalism and expertise of all (?) the climate model development teams and in the prevalence of expert numerical analysis programmers working on the models. Numerical analysis being a largely arcane discipline with few practitioners compared to other computer science skill sets – like DBAs and project managers – competent software numerical analysts are likely among the rarer computer scientists.
It would make climate science truly unique, at least in the university sector, for in my experience many non-computing scientists appeared to think that, because they were extremely talented in one science or in mathematics, they were automatically competent programmers once they had learned how to build a loop. Nearly every first-year com-sci student (myself included) has had the same idea. I remember wondering, at the end of first year, what else there could be to learn – I could code in Pascal, Fortran and assembler/Mix – what else could they teach us over the next three years? How foolish I was!
Over many years, and many projects where I have been asked to investigate and fix coding disasters from seemingly competent but not properly skilled teams, I have developed a healthy disrespect for the majority of unqualified software developers and, indeed, many qualified ones. There are exceptions, but I suggest they are rarer than one would hope.
On another matter, and I guess in support of some of your comments, the mere fact that a program is fixed in code does not mean that it cannot create or discover something new and independent of the programmer. Genetic algorithms and neural nets are examples (particularly the former) where solutions are evolved rather than programmed in a conventional sense and the outcome is not necessarily known. In both cases the programmer creates a kind of virtual machine – a platform for general problem solving, if you will. Likewise, computer languages and their compilers, and operating systems, are examples of software without a specific solution in target, but rather a set of tools with which to create a solution.
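As a concrete illustration of evolved rather than programmed solutions, here is a minimal genetic-algorithm sketch (the standard OneMax toy problem; my own assumed example, not code from any model). The programmer supplies only a fitness measure and variation operators; the solution itself emerges from selection:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def evolve(bits=20, pop_size=30, generations=60):
    """Evolve a bit string toward all 1s; fitness is simply the digit sum."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)         # rank by fitness
        survivors = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, bits)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[random.randrange(bits)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

best = evolve()  # nowhere does the code say "produce twenty 1s"
```

Nothing in the source spells out the answer; it is found by variation and selection, which is the sense in which such a program can surprise its author.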
The issue of whether there is bias in modelling software is possibly a semantic argument revolving around the meaning of “bias”, and it would seem that with minor changes to the way things were argued and the “distinguishment” of the writers’ underlying meanings there is not necessarily as much disagreement on this topic as might seem at first.
I hope that you would at least agree that the mere fact that something was produced on a computer does not, of itself, have any real standing in the credibility stakes. The computer is a very good paper weight until such time as a human creates a set of instructions telling it what to do. What matters is the way those instructions are produced, what those instructions tell it to do and how those instructions tell it to do that something. In that space lies human decision making – so it is not the computer that is the authority, but the scientific team that stand behind the software it runs. It is not so much the output that is authoritative, but how that output is interpreted. This is what I understood Eric to be saying.
Is there disagreement on that central premise?
Kind Regards
Jonathan Bishop

• I hope that you would at least agree that the mere fact that something was produced on a computer does not, of itself, have any real standing in the credibility stakes.
I would partly agree, but add that a computer can be trusted to follow instructions and that is a plus in the credibility game.
Furthermore, I would add that the choice of algorithm is not important as long as the algorithm solves the equations to a stated accuracy. So I would maintain that Eric’s central premise is wrong.

As I’ve mentioned a couple of times, I believe the difference between the models running cold and the models running warm was the decision to allow supersaturation of water at the boundary layer; this amplifies the water feedback, and has been the reason for a climate sensitivity larger than that of CO2 alone.
Now, for those digging around in the code and/or TOE, this should be easy to confirm, and I would say it is a bias: “well, it must be true, because otherwise the models are unnaturally cold.”
Does reality allow a supersaturation of water at the ocean-air interface? I doubt it; air movement would sweep away the saturated air. We know cell size forces parametrization of sub-cell effects, so one might justify this action as a parametrization and buy the BS they’re selling, but how many times does that turn out to be the truth?

• catweazle666 says:

lsvalgaard: “All of what you say is well-known by the modeling community who worries a great deal about numerical stability and goes to great length to solve the equations in the best possible way.”
I see.
So let’s take a look at the output of these truly wonderful human beings, and see how their striving to go to such great length to solve the equations in the best possible way actually works out in practice, shall we?
Hmmmm…I’m not impressed.
Oh, and for what it’s worth, although I cannot claim to be a world-famous expert in compiler design or whatever, as an engineer I too have some experience in computer programming: I wrote my first bit of Fortran in December 1964, and first worked on modelling (of low-frequency heterodyne effects in automotive applications) in 1971.
I can assure you that, as a humble engineer with dirt under my fingernails working on safety-critical projects (some of which, had they been as far out as the results above, might well have created smoking holes in the ground), had my work been as flawed as that, I would certainly have ended up in court charged with severe negligence or worse.
Tell me Professor, would you let your children fly on an aeroplane that depended on the sort of work that your beloved climate modellers produce?

• Well, they are trying. They don’t have it right yet. What I objected to was the notion that they have programmed their ‘biases’ and personal wishful thinking into the models. There is no evidence for that, but that does not seem to deter people [like yourself?] from believing so.

• catweazle666 says:

“What I objected to was the notion that they have programmed their ‘biases’ and personal wishful thinking into the models.”
What, you mean like an absolute, unequivocal and invincible conviction that practically the only possible influence on the Earth’s climate is anthropogenic carbon dioxide, unaffected even by almost two decades of evidence that this is highly unlikely, based perhaps on some variety of post-modern belief in original sin (although I admit I’m struggling to explain the motivation)?

10. One thing that struck me when I started to look into the AGW issue was the blind faith people in power seem to have in the results coming out of these computer climate models.
Because I have worked with other types of models, I haven’t much faith in such models.
A model can only be credible if it can be validated.
Where is the empirical evidence that CO2 is a major climate driver?
There are many other things that can influence changes in climate. Maybe these can be ignored if they have a Gaussian distribution, but if they are not Gaussian over time and instead have some sort of systematic component, then even small factors can dominate over time.
This I have found is the case with variations of ENSO and for climate variations, where tidal and solar factors play an important if not a dominant role.
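The point about systematic versus Gaussian factors is easy to demonstrate with a toy simulation (purely illustrative numbers, not climate data): pure Gaussian noise averages out, while even a tiny systematic bias accumulates linearly and eventually dominates.

```python
import random

random.seed(42)
n = 100_000

# Purely Gaussian fluctuations: positive and negative excursions cancel.
gaussian = [random.gauss(0.0, 1.0) for _ in range(n)]

# Same fluctuations plus a tiny systematic component (+0.05 per step).
systematic = [x + 0.05 for x in gaussian]

mean_g = sum(gaussian) / n      # hovers near zero
mean_s = sum(systematic) / n    # converges on the bias, 0.05

# Cumulative effect: random noise grows like sqrt(n), bias like n,
# so over time the small systematic factor dominates.
print(f"mean, pure noise: {mean_g:+.4f}")
print(f"mean, with bias:  {mean_s:+.4f}")
print(f"cumulative bias contribution: {sum(systematic) - sum(gaussian):+.1f}")
```

The bias here is twenty times smaller than the noise amplitude, yet its cumulative contribution swamps the noise total.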

• Ian Macdonald says:

That is true, and one of the key issues is that we have no way of experimentally simulating an air column several miles long whose pressure and temperature vary along its length, containing a fraction of a percent of greenhouse gas, into which IR is injected. That real-world situation is very different from the numerous lab demos in which 100% CO2 is pumped into a metre-cube box, and warming supposedly noticed. The only model which simulates the real conditions is called Earth.

11. Whenever discussions about the value of computer models crop up, I’m always reminded of this Richard Feynman quote:
“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

• Ed Zuiderwijk says:

Add to that: It doesn’t matter how many gridpoints or lines of code or how brilliant your programmers are or think they are, it doesn’t matter how fast or slow. If the answers do not reflect the real world it is wrong. Full stop.

• Hahaha – You are the third person on this comment stream who’s said that! I am one of the three. GOOD! Someone out there pays attention to the Scientific Method.
Huzzah!
The world may not be going to hell in a hand basket!

12. Mark from the Midwest says:

I’m not surprised by Eric’s observation. For the non-math mind I’ve found that certain buzz words have a profound impact. Simulation is one of them: I know of two data products on the market that were marginally profitable, then were renamed as “simulations” and became a big success. The term model, taken alone, is weak, but when you say “computer model” it allows one’s imagination to take off. I’ve seen a multi-million dollar business built on a bad OLS regression, simply because the graphs and tables were so cool. (FYI, the business crashed and burned a few years later, but not before the damage was done.) I recall Algore telling a roomful of Hollywood types that “these are powerful computer models.” If I were a Hollywood type I’d be impressed, since their first large-scale impression comes from Pixar. I also know of two former NASA employees who worked in the public information group and don’t have enough brains to understand how jet propulsion works; maybe that relieved them of any responsibility to report anything in an accurate fashion. I also know a fair number of educated people who actually believe that the stuff Ray Kurzweil talks about is real.
These days it’s all about story-telling, and to be honest the “doom-and-gloom-if-we-don’t-act” stories are a bit more compelling than “nothing to see here” stories. As scientists we need to work hard to tell better stories. The stories can still be true to the science, but until the story-telling improves the war will not be totally won.

• PiperPaul says:

Computer models are seductive because they can do fancy output graphics and absolve the user of any responsibility for math errors. All that has to be done is enter correct data correctly, and of course in this case they can’t even get that right.

• Just remember that the Greek theory of planetary motion was mathematically self-consistent, and could be used to predict eclipses and planetary positions. It had, however, little correspondence to the actual mechanics of planetary motion.

• denniswingo – Absolutely. They had those epicycles all down pat – until they didn’t.

13. ICU says:

Psychological projection much?
IT ‘expert’ opines on numerical modeling.
Probably never wrote a single line of code using the physics based laws of conservation.
Perhaps you will tender your CV showing vast SME status in the field of scientific numerical modeling in the fields of hydrodynamics and thermodynamics?
Otherwise … not even wrong.

• I know better than to assume the output of such models is correct, because it conforms to what the author of the software expected.

• ICU says:

Well, that is much shorter, still circular, still projection, but shorter. You could have posted that as the title with no text.
No CV showing physics based numerical modeling experience though.
So IT ‘expert’ has no relevant experience, but he just knows, because … well because he just knows.
Cart before horse much?
Anyways, I’m doing some harmonic analysis right now, the preliminary FFT’s kind of show me where to look, but I don’t believe those FFT’s, so the harmonics that do agree with the preliminary FFT’s must be wrong, so that means that the original input data must be wrong, I’ll just change the input data to agree with what I think the harmonics should be, if that doesn’t work then I’ll put in a bunch of IF statements that give me the right wrong answers.
Nice to know how IT ‘experts’ think and do their jobs though.

• looncraz says:

ICU:
“Anyways, I’m doing some harmonic analysis right now, the preliminary FFT’s kind of show me where to look, but I don’t believe those FFT’s, so the harmonics that do agree with the preliminary FFT’s must be wrong, so that means that the original input data must be wrong”
This is not how programmers think, actually. We have a massive burden to produce the proper result and the inputs are considered “const,” so we can’t change them, we can only work upon them with certain (often preordained) algorithms.
In the event that our output is incorrect, we don’t assume the input is wrong; we assume how we are handling the input is wrong (though we do data integrity checks, of course, to ensure the inputs are what they should be: a wind measurement shouldn’t be -99 kph, and pressure shouldn’t be 93 bar…). In the case of climate models, this is all done well in advance and the models do no input verification; they just create computational kernels, cells, or threads (design dependent) and run computations based on the inputs. These computations are broken down into different functions/equations which have been independently verified, or created and tested to illustrate what the creator was trying to illustrate. Usually, that equation is trying to illustrate reality.
From there, the fun happens. And this is where AGW models fail. They take the output from the most established methods, which are guaranteed to be running cold (on a global basis) due to an unknown reason (assumed to be GHG positive feedbacks not previously included) and, in effect, offset them by an assumed feedback sensitivity value, which, of course, is indexed to the GHG levels.
This produces a seemingly more accurate result; however, the sensitivity feedback values are completely guessed at by trying different values until the results are as expected. Then they take these modified results and feed them BACK into the algorithms as input data, so the same feedbacks are continually accrued. This is why the models invariably go off in an odd direction over a century or so: they can’t track a stable climate no matter what, they always assume there will be a compounding feedback, and there is, as of yet, no known way to constrain that feedback to represent reality.

Now, we get to the most important factor in all of this: predictive success.
The old models were running too cold (diverging from observations). So they indexed the models to GHGs with assumed stronger feedbacks, which fixed the hind casts. Now, the observations have diverged from the models again – the models are running warm. This means their feedbacks are too strong or there is another unknown factor which is needed to create a cooling bias.
Notice: the models have NEVER predicted the future climate. They have either been too cold (predicting an imminent ice age) or too warm (predicting imminent runaway warming).
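The feedback-tuning loop described above can be caricatured in a few lines (all numbers invented for illustration, not taken from any real model): a slightly-cold base step is “fixed” with a multiplicative feedback tuned against a past period, and because the corrected output is fed back in as the next input, the correction compounds.

```python
def base_step(temp):
    # Hypothetical 'established physics' step: runs slightly cold.
    return temp + 0.01

def patched_step(temp, feedback=1.05):
    # Feedback factor tuned by trial and error against a past period.
    # It multiplies the whole fed-back state, not just the increment.
    return base_step(temp) * feedback

temp_base = 1.0
temp_patched = 1.0
for _ in range(50):
    temp_base = base_step(temp_base)            # stays bounded
    temp_patched = patched_step(temp_patched)   # compounds every step

print(f"base model after 50 steps:    {temp_base:.2f}")     # 1.50
print(f"patched model after 50 steps: {temp_patched:.2f}")  # runs away
```

The base model drifts linearly; the patched model grows geometrically, because the feedback acts on its own previous output.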

• ICU says:

looncraz,
GIGO? The climate models do NOT make predictions.
Seriously though, you are saying something, but you don’t have a clue as to what you are saying, as it does not fit at all with my experience as a physics-based numerical modeler. I don’t think I’ve ever seen an AOGCM run off to plus/minus infinity; something about restoring forces comes to mind.
The models are climate projections, or you can even call them climate forecasts, based on certain future forcing assumptions (RCPs). They have certain ICs and BCs; over large time scales this becomes a BC problem, so that if the radiative forcings change from those assumed, which they do (that’s why there are four RCPs, to capture the possible range of future forcings), then those assumed forcings must be updated to our best current understanding of those forcings (which is what climate modelers do). The same is true for the climate models themselves; those are also updated to include the climate scientists’ ever better understanding of the climate system itself.
Anything beyond that is simply conspiracy thinking.

• I don’t think I’ve ever seen an AOGCM run off to plus/minus infinity
==================
Well, duh. Infinity is not a legal binary value. It cannot be stored in a finite computer, except symbolically. Your comment is “BS baffles brains”.
Do climate models run off to infinity? Of course they do, for all practical purposes. It is called instability: they run outside their bounds and halt, either from error or intentionally. The problem with GCMs is increasing instability with increasing resolution, which is a clear sign of design problems.
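The “runs outside its bounds and halts” behaviour is classic numerical instability, and can be reproduced with the simplest possible integrator (a generic textbook example, nothing to do with any particular GCM): forward Euler on dy/dt = -ky is stable only when the step size h satisfies h < 2/k; past that limit every step amplifies the solution.

```python
def euler_decay(k, h, steps, y0=1.0):
    """Forward-Euler integration of dy/dt = -k*y.

    Each step multiplies y by (1 - h*k): stable if |1 - h*k| < 1,
    i.e. h < 2/k; otherwise the iterates grow without bound.
    """
    y = y0
    for _ in range(steps):
        y += h * (-k * y)
        if abs(y) > 1e12:        # bounds check: halt the blown-up run
            return float("inf")
    return y

# With k = 10, stability requires h < 0.2.
stable = euler_decay(k=10.0, h=0.05, steps=100)    # factor 0.5: decays
unstable = euler_decay(k=10.0, h=0.25, steps=100)  # factor -1.5: blows up
print(stable, unstable)
```

The same computation, with the same physics put in, either decays quietly or explodes, depending only on a discretization choice.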

• ICU says:

ferdberple,
So you think that ~13-14 years of whatever the heck that figure is means something?
Because without the source documents, it means absolutely nothing to me. It’s simply a graph with absolutely no background information/discussion.
TIA

• looncraz says:

ICU:
“GIGO? The climate models do NOT make predictions.”
Yes, yes they do. That is the entire point of a climate model: to make a prediction about future climate conditions. You can call them “forecasts” or “projections” all you want, but they are still subject to validation or refutation as with any prediction.
” I don’t think I’ve ever seen an AOGCM run off to plus/minus infinity though, something about restoring forces comes to mind though.”
You don’t see them running away too greatly because other models’ outputs are used as inputs to GCMs, which serves to constrain possible outputs, and the models are rarely run for more than a century. Given enough forecast time, many of those models would very likely run off into effective infinity, though any failure to do so would still say nothing about their [in]validity.
“[…]they have certain IC’s and BC’s, over large time scales this becomes a BC problem, so that if the radiative forcings change from those assumed[…] forcings must be updated to our best current understanding of those forcings (which is what climate modelers do). The same is true for the climate models themselves[…]”
Indeed so; the models are as good a representation of current understanding as can be made. However, they are also a double-edged sword. When scientists were trying to explain why the models were running too cold, they used the models with make-believe positive feedbacks to try to fine-tune them and see what was missing (the amount of energy they needed, and the pattern in which that energy must have become available to the atmosphere in the form of heat). This, of course, makes sense (and is an oversimplification of what they really did, which was an earth-climate-system energy balance analysis, using the models as guides). We only start to see a problem when they find a way to blame a small set of variables for all of the missing heat… and they ran with it.
Today, in fact, we have so many modeled ways in which the warming could have occurred, each based on more or less equally sound mathematics and models, that we would all have fried to death if they were all true (maybe a slight exaggeration).
The Clean Air Act could very well be responsible for a good chunk of the warming (if not all of it) seen in the 80s and 90s. Oceanic thermal oscillations could easily have burped out enough energy we didn’t know they contained to cause a significant chunk of the observed warming. Unknown climate responses to varying solar output frequency ranges, reduced cloudiness (beyond cover), observation biases, UHI effects, internal earth energy leaks (from the mantle), and much more have all been modeled to demonstrate that they could in part, or in whole, explain all observed warming. Some, such as certain fluorocarbon levels, have even been claimed to explain variations over just a few years, as well as explaining the pause.
The misunderstanding of the significance of the model outputs, even by those who created them, has resulted in the likely overstatement of the severity of the situation. And when you take your assumed values and include them as constants in your model, you have created an invalid model. And no one knows that better, in my opinion, than a programmer. Mostly because we deal with the consequences of such wrong assumptions all the time… and partly because I’m biased 😉

• ICU, there is a computational constraint on GCMs. It forces their finest resolution to about one degree, roughly 110 km at the equator. This means they cannot resolve convection cells (which require less than 10 km, ideally under 5), so those have to be parameterized. That is done and tested in a number of ways, but an important one is hindcasts. For CMIP5 a mandatory output was a 30-year hindcast from 2006, covering a period that contained some amount of warming from natural variation. So, forecasting from 2006, the models proceeded to miss the pause.
Eric is right on the money. All the preceding paragraph did was delve more deeply into how the bias crept in.
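The resolution arithmetic behind those figures is straightforward:

```python
# Equatorial circumference divided by 360 degrees of longitude gives
# the east-west size of a one-degree grid cell at the equator.
earth_circumference_km = 40_075
cell_km = earth_circumference_km / 360
print(f"one-degree cell at the equator: {cell_km:.0f} km")  # ~111 km

# Resolving ~10 km convection cells would need roughly this grid spacing:
needed_deg = 10 / cell_km
print(f"grid spacing needed for convection: {needed_deg:.3f} degrees")
```

So resolving convection directly would need a grid more than ten times finer in each horizontal dimension, which is why it is parameterized instead.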

• MRW says:

@ristvan,

For CMIP5 a mandatory output was a 30-year hindcast from 2006, covering a period that contained some amount of warming from natural variation.

Can you expand on this historical note? Thank you.

14. Greg Woods says:

‘But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result.’
– I don’t know about anyone else, but ‘unexpected results’ are the last thing I want from software.

• eromgiw says:

I once wrote a Genetic Algorithm program to ‘evolve’ reinforced concrete beams. Came up with some very unexpected designs, and each run came up with something different, albeit similar.
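A genetic algorithm of the sort described can genuinely surprise its author, because the search explores designs nobody wrote down. A toy sketch of the mechanism only (a bit-counting objective stands in for the beam-scoring function, which is hypothetical here):

```python
import random

def fitness(genome):
    # Toy objective: count of 1-bits (a stand-in for a hypothetical
    # beam-scoring function; a real one would be engineering-based).
    return sum(genome)

def evolve(seed, genome_len=20, pop_size=30, generations=100):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half (elitist, so the
        # best design found so far is never lost).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < 0.3:          # occasional mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

for seed in (1, 2, 3):
    best = evolve(seed)
    print(f"seed {seed}: best fitness {fitness(best)}")
```

Different seeds take different paths through the design space, which is why each run can come up with something different, albeit similar in quality.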

• Ed Zuiderwijk says:

I worked in the field of simulated neural networks and we did get unexpected (but correct) results, so unexpected that we first had to convince ourselves and then our colleagues. Great fun, though.

15. Chris in Hervey Bay. says:

The only time you know something is wrong with your code, is when you get a result you don’t expect !

• Chris in Hervey Bay. says:

Good God Greg, you read my mind, a result I didn’t expect !

16. TimTheToolMan says:

lsvalgaard writes ” The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.”
But “The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.”
That “built-in”-ness is exactly what it’s telling us. We put in the laws to calculate the burn, and we know precisely what they are… and the computer spits out the answer according to the programming of those laws. How well do you think we’d do at getting the rocket to Pluto if we didn’t actually know the mass of the rocket? Or just rounded G to 7×10⁻¹¹ N·m²/kg²?
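The cost of rounding G can be made concrete with the circular-orbit formula v = sqrt(GM/r): a ~5% error in G becomes a ~2.4% velocity error (v goes as the square root), which over a decade-long cruise is an enormous miss. A sketch using Earth's solar orbit purely for scale:

```python
import math

M_sun = 1.989e30   # kg
r = 1.496e11       # m, roughly Earth's orbital radius

def circular_velocity(G):
    return math.sqrt(G * M_sun / r)

v_true = circular_velocity(6.674e-11)   # accepted value of G
v_round = circular_velocity(7e-11)      # G rounded as in the comment

rel_err = (v_round - v_true) / v_true
print(f"true velocity:  {v_true:,.0f} m/s")
print(f"with rounded G: {v_round:,.0f} m/s")
print(f"relative error: {rel_err:.2%}")   # ~2.4%

# Uncorrected over a ten-year cruise, that velocity error alone
# accumulates into a positional gap of hundreds of millions of km.
ten_years_s = 10 * 365.25 * 86400
print(f"accumulated gap: {rel_err * v_true * ten_years_s / 1e9:,.0f} million km")
```

A few percent in one constant, compounded over ten years, is the difference between Pluto and empty space.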

• You miss the point. We put in the physics, not the expected result. This is why computers are useful to let us see what the result of the physics is, something we would not otherwise know.

• hunter says:

Climate models are not physics.

• Ed Zuiderwijk says:

The question is: have you put in the physics correctly? Or better: do you understand the physics in sufficient detail to have put it in correctly? With so many different aspects to coupled ocean-atmosphere models, some of which are poorly understood themselves (e.g. heat exchange between surface water and the lower atmosphere), I doubt it very much.

• richardscourtney says:

lsvalgaard
As usual, you refuse to see the point. You say

We put in the physics, not the expected result. This is why computers are useful to let us see what the result of the physics is, something we would not otherwise know.

Often “the physics” cannot be “put in” because the model cannot accept them or they are not known. Both are true in climate models.
For example, the climate models lack resolution to model actual cloud effects so those effects are “parametrised” (i.e. guessed) in the models. And the dozens of alternative explanations for the “pause” demonstrate that the physics to determine which – if any – of the explanations are correct is not known.
The climate models are the product of their input “parametrisations”. Thus, they are merely digitised expressions of the prejudices of their constructors.
It is madness to claim the economies of the entire world should be disrupted because those prejudices demand it.
Richard

• It is nonsense to say that the ‘model cannot accept the physics’. And the point was not about current climate models, but about the much more general statement that computers only give you what you expect. It is that misconception that I disagree with.

• richardscourtney says:

lsvalgaard
As usual, you try to justify your nonsense by accusing others of nonsense, and you obfuscate.
I wrote

Often “the physics” cannot be “put in” because the model cannot accept them

and

For example, the climate models lack resolution to model actual cloud effects so those effects are “parametrised” (i.e. guessed) in the models.

but you write

It is nonsense to say that the ‘model cannot accept the physics’.

Nope. Not nonsense, but – as I explained – fact.
Clearly, you would benefit from taking a remedial course on reading comprehension.
Not content with that, you obfuscate by saying

And the point was not about current climate models, but about the much more general statement that computers only give you what you expect. It is that misconception that I disagree with.

Sorry, but the discussion is “about current climate models”.
You may want to talk about something else (because you know you are wrong?) but that is your problem.
Richard

• No, the issue as Eric presented it is not about climate models per se, but about the more general notion that computers can only give you the expected answers. But past experience with you tells me that it is useless to try to educate you about anything, so I’ll let you rest there.

• richardscourtney says:

lsvalgaard
You say to me

No, the issue as Eric presented it is not about climate models per see, but about the more general notion that computers can only give you the expected answers. But past experience with you tells me that it is useless to educate you about anything, so I’ll let you rest there.

I don’t believe you when you say that you will “let {me} rest there”.
You have entered another of your frequent ego trips so you will insist on the last – and typically silly – word.
You made an untrue assertion. I pointed out that you were wrong (again).
You have pretended you were right and tried to change the subject.
As always, you have attempted to have the last word with an unfounded insult.
I am writing this to laugh at you, and I predict you won’t “rest there” because you will want to make another post to provide you with the last word by telling everybody how clever you think you are.
Richard

• KaiserDerden says:

but if you jigger the inputs you can be sure to get the result you want … with any model … the program isn’t written to get warming … but if you jigger the inputs you can know in advance you’ll get warming or cooling … and humans control the inputs not the program …

• Again: not the fault of the models, but of people.

• Bruce Cobb says:

From the essay; “Computers are amazing, remarkable, incredibly useful, but they are not magic.”

• Glenn says:

You may benefit from contemplating that computers ONLY perform operations with the instructions provided. “The physics” IS the expected result. Computers do not output anything that we do not “know”.
Well, at least when everything goes as expected. If not, look to what the computer was told to do.

• Computer analysis of the recordings of seismic waves tells us what the internal structure of the Earth [and of the Sun] is, and computer modeling of the waves from man-made explosions tell us where to drill for oil. In none of these cases is the answer put into the program beforehand. So the computer let us ‘see’ things we cannot see otherwise.

• People born into the calculator age have no appreciation of the simple slide rule. We HAD to invent computers because once calculators were invented, we forgot how to use slide rules.

• computers only give you what you expect. It is that misconception that I disagree with.
====================
This strikes me as hair-splitting. Of course computers can give you unexpected results. From long experience fighting these infernal machines, 99.99999% of the time these “unexpected results” turn out to be errors.
The 0.00001% of the time the “unexpected results” are not errors, 99.99999% of the time the people doing the development cannot recognize they have something novel, and throw the result away by changing the code to deliver a more “acceptable” result.
If your paycheck depends on finding the wrong answer, most of the time that is what you are going to find. The times you don’t will be called unemployment.

• Dems B. Dcvrs says:

lsvalgaard: “Computer analysis of the recordings of seismic waves tells us what the internal structure of the Earth …” “computer modeling of the waves from man-made explosions tell us where to drill for oil”
You have almost no idea about the processes involved in seismic oil exploration. Humans are involved throughout the processing: humans making experienced decisions (thus biased) as to the initial starting parameters for the computer programs used to analyze the waves, and humans making changes or corrections to those initial parameters (further bias) as the iterative processing continues. And in the end, humans still decide where to drill for oil, not computer models.

• Dems B. Dcvrs says:

And you miss the point. Global Warming Climatologists have “put in the physics” as they see it, as they can program it.
That explains why the vast majority of climate models have been wrong.

• So you postulate a vast global conspiracy of scientists all working towards the same nefarious goal. If so, I have a bridge to sell you. You cannot put physics in ‘as you see it’. The physics is given, not chosen.

• Dems B. Dcvrs says:

lsvalgaard: “A computer-aided decision thus. Are you trying to say that we can dispense with the computer altogether?”
Nope. What I did say is that you know very little of what you remarked on.
lsvalgaard: “computer modeling of the waves from man-made explosions tell us where to drill for oil”
Let me put it in terms you will understand. You are wrong.

• TimTheToolMan says:

No Leif, you’ve missed the point. When you don’t precisely know the physics, how do you design the software if you genuinely don’t know what result to expect? Tiny changes in choices of parameters can cause the software to fail by going off the rails, so what choices does one use?
Well, the answer is that you use choices that fit within the expected probable ranges and produce an overall result that is also within the expected range.
To another seasoned programmer, it’s not rocket science to understand how the GCM programmers have built their models when plagued with uncertainty and stability problems…

• The answer is simple: you do the best you can. You do NOT write the program so as to get the result you expect; if that is what you are after, there is no need to write the program at all. If the problem is hard, the interpretation of the output of the program is hard as well. Having programmed for 50+ years, one learns what both the power and the limitations are. My experience is in stark contrast to Eric’s, and most other commenters here have no idea what they are talking about. I’m reasonably sure that if the computer programs showed that CAGW is not occurring, the echo chamber that people here are all shouting in would praise the computer models to high heaven.

• Leonard Weinstein says:

lsvalgaard, you again miss the point. No, you do not put in the physics in climate models: you put in what physics you do know, then add approximations and guesses to cover the physics you do not know. The choices for the unknown physics, approximations and guesses are adjusted to get a reasonable fit to past data (i.e., the result you want). Then you project into the future and see if the future agrees with the model as time marches forward. For climate models, they show no skill at all, so they are failures. That is all there is to this argument.

• No, that is not all there is to that argument. Disagreements between models and reality are opportunities to LEARN something, and if you do, the learning will improve the models incrementally.

• TimTheToolMan says:

Leif writes “The answer is simple: you do the best you can.”
And again, I ask you how well we’d do getting a rocket to Pluto if we coarsely approximated G or didn’t know the rocket’s mass?

• If at first you fail, you try again. The failure may tell you what the correct values should be… so learn from the failures.

• TimTheToolMan says:

Leif writes “the learning will improve the models incrementally.”
Nope. Curve fitting never improves a GCM, so there is nothing to learn from them directly. Only properly understood and implemented physics could do that, and since the problem is one of not understanding crucial physics (e.g. clouds) and not being able to implement it anyway (i.e. insufficient computing power), the pursuit of climate prediction is fruitless today.

• But we have to keep trying, and eventually we may get it right. It is nihilistic to think that just because it doesn’t work today, it will never work. At some point in time it will work well enough to be useful. Weather prediction is already getting to this point. It certainly does better now than when we started out around 1950.

• TimTheToolMan says:

Leif writes “Weather prediction is already getting to this point. It certainly does better now than when we started out around 1950.”
Hell yeah! But weather prediction is a completely different question from climate prediction, and our skill at weather modelling says essentially nothing about our ability to model climate.

• And neither have anything to do with the general notion that computers can only give you what you expect, which is what I disagreed with. Computers models allow you to ‘see’ what you otherwise cannot. Sometimes the seeing is poor, but progress is possible and happening.

• Leonard Weinstein says:

lsvalgaard, let me give you an example. As computers became more and more powerful, many people claimed they could learn to play a perfect chess game. It never happened, and never will: the game is too complex. However, computers did become able to beat the best human player in the world. They did that by inputting the same opening combinations and strategy human players use, and by looking at all combinations about 5 or so moves ahead, something they could do. Humans could only compute the combinations to a slightly smaller number of moves ahead (say 4 or so). This was not a solution to the problem, just a sufficient approximation to beat humans. Weather can now frequently be estimated about 3 or so days ahead (but not well where I live), but will never be able to be computed 100 years ahead; and since climate is nothing but average weather, climate will also not be computable 100 years ahead. The issue of whether a forcing like CO2 will cause a significant trend up (ALL OTHER FACTORS BEING THE SAME) may be doable, but all other factors are not going to be the same, so I think the climate models will never be able to settle the issue. That does not mean they cannot do better than they have for shorter periods, but they have failed badly so far in all efforts, and do not seem to be doing better with time.

• Computers don’t have to play a perfect game, it is enough that they can beat Kasparov. And you are off the rail, because the issue with which I disagree is the claim that computers only can give you the answer you want and already know. Someone here even claimed that the test of correctness was that the answer was what you expected [in which case you didn’t need to run the program at all].

• TimTheToolMan says:

Leif writes “And neither have anything to do with the general notion that computers can only give you what you expect”
And so we come full circle. If you don’t know the physics, then computers are only giving you what you program them to tell you. There’s nothing magical about tuning parameters to get the result within an expected range if the expected range is, say, positive.
How does the computer give you a useful answer if reality’s range includes negative values, but you excluded them from the calculations because you hadn’t observed any and the consensus said negative values were unreasonable?

• The purpose of the models is to apply the physics you know the best you can, parameterizing what you don’t know and learning from any deviations you discover in a never-ending cycle of improvements. The notion that computer models are constructed such as to give you only what you want is hopelessly wrong and naive, although widespread among people who don’t know what they are talking about.

• TimTheToolMan says:

Leif writes “parameterizing what you don’t know and learning from any deviations you discover in a never-ending cycle of improvements.”
This is curve fitting.
Leif writes “The notion that computer models are constructed such as to give you only what you want is hopelessly wrong and naive, although widespread among people who don’t know what they are talking about.”
This is ironic.

• Curve fitting is not bad as such. The orbit of a spacecraft passing through a system of moons [e.g. Jupiter’s] depends on the unknown masses of the planet and of the moons. Curve fitting to the actual observed orbit can tell you what those unknown masses are. Curve fitting is part of the science.
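That point can be made concrete in a few lines. Kepler’s third law gives T² = (4π²/GM)·a³, so a least-squares fit of T² against a³ for Jupiter’s four Galilean moons recovers Jupiter’s GM. The orbital figures below are approximate published values, included purely for illustration; real orbit determination fits many more parameters, but the principle is the same.

```python
import math

# (semi-major axis in metres, orbital period in seconds) -- approximate
# published values for the Galilean moons, for illustration only
moons = {
    "Io":       (4.217e8,  1.769 * 86400),
    "Europa":   (6.710e8,  3.551 * 86400),
    "Ganymede": (1.0704e9, 7.155 * 86400),
    "Callisto": (1.8827e9, 16.689 * 86400),
}

# Least-squares fit of T^2 = slope * a^3 through the origin:
# slope = sum(x*y) / sum(x*x)
xs = [a ** 3 for a, _ in moons.values()]
ys = [T ** 2 for _, T in moons.values()]
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

GM = 4 * math.pi ** 2 / slope
print(f"fitted GM of Jupiter ~ {GM:.3e} m^3/s^2")  # close to the accepted ~1.267e17
```

The "unknown mass" drops straight out of the fit, without ever measuring G or the mass directly, which is exactly the sense in which curve fitting is part of the science.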

• Latitude says:

lsvalgaard
June 7, 2015 at 5:30 am
No, that is not all there is to that argument. Disagreements between models and reality are opportunities to LEARN something
=====
Even though no one knows what the physics of CO2 really are….even though the temperature record has been so corrected no one knows what it really was or is….
So you put in the physics that’s a total WAG….a temp record that’s a total WAG
…and out comes knowledge
question: how would you recognize it’s knowledge if it’s now confirming what you think knowledge is?

• thallstd says:

“The notion that computer models are constructed such as to give you only what you want is hopelessly wrong and naive”
They may be “constructed” with the best objective physics we have. But if they are constantly being run with climate-to-CO2 sensitivity values higher than justified, because that is what is needed to illustrate dangerous warming, then the models are being “run” to generate the desired outcome.
I don’t know if that is the case or not. But it renders irrelevant any discussion about how the models are constructed when so much of their outcome depends on the parameters they are run with.

• TimTheToolMan says:

Leif writes “Curve fitting is not bad as such.”
Curve fitting is fatal in GCMs.
You give the example “Curve fitting to the actual observed orbit can tell you what those unknown masses are.”
That’s not curve fitting, that’s an analysis based on gravitational perturbation using precisely known physics. A curve fit would be sending a spacecraft around a planet once, using an analysis such as the above to calculate the probable moon masses, and then sending the next spacecraft through as a slingshot, knowing the moon masses but with no regard to where they were in their orbits.
It’s a curve fit because you have one view of some data but don’t know how it changes and what impact those changes will have.

• When Kepler deduced his three laws of planetary motion, that was curve fitting of the highest degree, without any physics. Curve fitting is a good way of expressing what you have found out about a system, even if you don’t know the exact physics, in fact, the only way.

• Computer Modeling for Climate Analysis by IPCC:
1. Unless the Models are programmed with all the applicable physics required, the results are useless for Government policy.
2. If the Models contain known physics AND approximation/best guesses of unknown physics, then the output is only suitable for study and learning.
Dr. Svalgaard is mostly correct, but he didn’t consider how these Models (item 2.) are being utilized for a political agenda. This is not how science should be applied.

• That the models are used for political ends is not the fault of the models, but of people. Most people here are committing the same sin, but with the opposite sign. Personally I don’t see the difference.

• TimTheToolMan says:

Leif writes ” Curve fitting is a good way of expressing what you have found out about a system”
Yes, and as long as the laws governing the way the system behaves don’t change (i.e. the masses are constant, G is constant, the energy of the system is constant, it’s in a vacuum), then a curve fit is useful.
In the case of a GCM, those laws are changing because a change in forcing is expected to change water vapour which is expected to change clouds. And clouds are parameterised quantities. So a change in forcing takes the clouds to a place where we have no data. No curve to use.
So we assume that our curves under current circumstances are applicable to the changed circumstances for the clouds. And how’s that going for the models?

• Latitude says:

Most people here are committing the same sin…
====
Using them for political reasons is assuming they are correct…
…most people here are assuming they are garbage
not the same

• TimTheToolMan says:

If memory serves me correctly, the cloud condensation threshold is based on a series of assumed conditions, for example a particular pressure, temperature and so on. It’s not based on physics.

• Memory of what?
If the variable is based on curve fitting to actual data, that is, if there exists a curve that fits the data, it seems legitimate to use that in the models.

• Latitude says:

Their high degree of accuracy….

• TimTheToolMan says:

Memory of the description of the parametrisation of clouds.
Leif writes “If the variable is based on curve fitting to actual data, that is, if there exists a curve that fits the data, it seems legit to use that in the models..”
As I understand it there are many conditions under which the cloud might condense, and the values used in the GCMs don’t get it right for any actual situation. And that’s aside from the fact that under a different future atmospheric lapse rate, the curves may be different.

• If there are any systematics in this, they could simply be incorporated in the models.

• TimTheToolMan says:

Leif writes “If there is any systematics in this that could simply be incorporated in the models.”
A better curve fit based on a longer history? I think you’re completely missing the point, Leif.

• The point was whether the computer models were written to produce a desired result. How did I miss that? Perhaps you didn’t follow the argument.

• TimTheToolMan says:

Leif writes “The point was whether the computer models were written to produce a desired result.”
Yours is a matter of opinion. Would you care to speculate on how the GCMs vaguely get the historical global surface temperature anomaly right…and then go their own merry ways in the future, none of which has reflected reality?
Or how different GCMs with different sensitivities can all vaguely back cast the same historical global surface temperature anomaly?
There are many model feature failures that to most sceptics are sure signs they’re fitting the data.

• Mine may be [well-founded] opinion, but yours are just [hand-wringing] conjectures with no demonstrated basis in fact.

• Latitude says:

The point was whether the computer models were written to produce a desired result….
===
If they were not….there would be no way to decide which ones were “right” or “wrong”
One that produced the result of glaciation, ice age, and the poles melting, and massive sea level rise at the same time could be right….without a preconceived idea of what results to expect

• TimTheToolMan says:

Leif writes “but yours are just [hand-wringing] conjectures with no demonstrated basis in fact.”
You mentioned naivety before. How can different GCMs, all supposedly based in physics, have different sensitivities? And all back cast in the same way? And then diverge in their futures? And not have been built to fit their histories?
It only takes lots of testing and (innocent) tuning changes to get closer and closer to the historical values, and that would be seen by the modellers as their models getting closer to the truth.
You have belief. I don’t think that belief is particularly well-founded. I look at the facts, and a couple of them are pretty damning.

• And what ‘facts’ would those be, that support your view that all the models were carefully constructed to yield the result demanded by the climate mafia? The models are different because they use slightly different resolution, parameters, and programming methods. This is quite normal. That they don’t work is, to me, proof that the models were not built as part of a conspiracy, but simply that we just haven’t figured out how to do it any better.

• lsvalgaard

The models are different because they use slightly different resolution, parameters, and programming methods. This is quite normal. That they don’t work is, to me, proof that the models were not built as part of a conspiracy, but simply that we just haven’t figured out how to do it any better.

But the “core” of the global circulation models is not different. There are, as I understand it, only two different central mathematical programs for the energy exchange across the boundaries of each cell, and every model of the 23 plotted average runs uses one or the other of these core arithmetic processes. Now, if this core FEA differential equation model of every interaction across the boundaries of every cell were exactly correct to every decimal place required; and if every molecule (and every kilogram, kiloton, or gigaton cannot even be approximated in these models) were actually behaving across every phase change exactly as assumed by that core FEA equation; and if every energy exchange, and every mass, momentum, and particle exchange across every boundary, were exactly as assumed, then the core FEA process may, after time, begin to approximate the real world.
Of course, changing the starting parameters outside of the core processor matrix algebra, and the coefficients of each equation feeding “assumptions” (approximations, not “information”) back into the core FEA model of how each 3D block is calculated, will change outcomes – it must! And, no doubt, each of the 23 institutions MUST create a different result – else they are redundant and the bureaucratic masters will kill their budget for the next year. Survival of the “least fittest curve” requires only that each computer lab be able to continue to justify its funding to its politicians for the next budget cycle – and has nothing to do with requiring that the results be right in the earth’s environment of measured temperatures and winds.
And based on the evidence of climate circulation modeling since 1988, the further from accurate each model run is, the better and faster and more the model is funded.

• Latitude says:

Tim, the most damning fact is that out of all those thousands upon thousands of computer runs….
..not a single one of them shows temps decreasing or staying the same
Those were obviously decided to be garbage and thrown out…
Confirmation bias at its best.

• @ TimTheToolMan’s – June 7, 2015 at 4:56 am post … as well as all the subsequent postings in response to said, …… I found them all to be a kinda, sorta “fun” read, with most every one of them containing a “personal bias” of the NIH (not invented here) kind.
As an old computer designing/programming dinosaur I can attest to the fact that computers are not “smart”, nor are they “intelligent”. The 2nd fact is, their only function is to respond with a “YES” or “NO” to queries submitted for them to respond to. And what makes computers a valuable tool is that they are extremely “quick” in their response time, like at “warp” speed, …… and thus capable of responding with zillions of “YESes” and “NOs” in a New York second.
Now the key verbiage in/of my above comment is “queries submitted” ….. and it matters not a whit as to who or what is responsible for “submitting said queries”, ….. be it the programming language used, the program compiler, the source code programmer, the software system designer, the computer operator or the Flying Spaghetti Monster.
A function or process that is subject to potentially dozens of noninfrequent and/or randomly occurring “input” data/information can not be “modeled” for the purpose of “predicting future events”.

• A function or process that is subject to potentially dozens of noninfrequent and/or randomly occurring “input” data/information can not be “modeled” for the purpose of “predicting future events”.
But can very well predict the aggregate of many such events [also known as ‘Climate’]. Another example: the decay of a radioactive nucleus is a random event, but that half of the nuclei will be left after a certain time [the half-life] is very predictable.
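The half-life example is easy to demonstrate with a simulation. The sketch below (plain Python, with a made-up per-step decay probability chosen only for illustration) tracks 100,000 nuclei whose individual decay times are pure chance; the surviving fraction after one half-life still lands reliably near 1/2.

```python
import random

random.seed(1)
N = 100_000
p_decay = 0.01          # per-step decay probability for each nucleus (arbitrary)
half_life = 69          # steps: ln(2) / 0.01 ~ 69.3

surviving = N
for _ in range(half_life):
    # each surviving nucleus decays independently this step
    surviving = sum(1 for _ in range(surviving) if random.random() > p_decay)

print(surviving / N)    # close to 0.5, though no single decay is predictable
```

No run of the program can say when any particular nucleus decays, yet the aggregate is predictable to a fraction of a percent, which is the distinction being drawn between a single weather event and a climate average.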

• Latitude says:

But can very well predict the aggregate of many such events….
====
..you mean like predicting the pause

• if it is a good model and there is a pause, yes, indeed.

• Latitude says:

let us know if you find either one…..

• It never happened, and never will. The game is too complex.
======================
Never say never. Chess is deterministic with a finite solution space. I expect it could be played perfectly by a polynomial well under order 256, which is within the power of even today’s computers to evaluate quite quickly.
The problem is in trying to solve a polynomial of order 256. Numerical methods are still in their infancy, and mathematics struggles to solve anything beyond order 1.

• But can very well predict the aggregate of many such events [also known as ‘Climate’].

OH GOOD GRIEF, ……. calling it an “aggregate” instead of an ”average” makes no difference, ….. neither one constitutes a “prediction”.
Don’t be trying to bedazzle me with your example of extrapolated average trends or trend averages.

• VikingExplorer says:

>> And again, I ask you how well we’d do getting a rocket to Pluto if we coarsely approximated G or didn’t know the rocket’s mass?
A lot of people are greatly overestimating the complexity of climatology and underestimating all other problems.
For example, your statement is very funny. What makes you think we know G all that well? It’s the least well known physical constant. We know the gravitational parameter to 10 decimal places, but since we don’t know G very well, we don’t really know the mass of any of the planets very well, let alone the sun.

• TimTheToolMan says:

VikingExplorer writes “What makes you think we know G all that well?”
Leif usually uses rocket and space based analogies, so I put them in his frame of reference. Hopefully it makes understanding easier. We know G well enough to send spacecraft around the solar system with enough accuracy to intercept and slingshot around planets. But it’d go very pear-shaped if we just approximated it. It’s an analogy; it’s not meant to be an in-depth description of space travel.

17. M seward says:

My first experience with modern, high-end mathematical models was using computational fluid dynamics (CFD) code to ‘model’ the wavemaking around a ship hull. As it happens it was an existing ship and I had photographic evidence of its wavetrain as a reference, both of a scale model and of the full size vessel.
This was my first use of such software and I was just given the manuals plus the one for Solaris and the key to the computer lab and left alone. It was the last weeks of summer break and no one was in the mood to hold my hand.
Such software relies on a mesh model of the ship hull and the water surface for a reasonable distance away. My first serious go at a model converged (these are iterative models) but gave waves much higher (i.e. ~100% higher) than reality at the same speed. WTF??? It converged to a false solution! I did not know about that possibility! So how do you know if your solution is ‘true’ or ‘false’?
The culprit was mesh size. The coarser the mesh, the less calculation to do and the quicker the ‘solution’ (still some hours), but the poorer the ‘solution’. Once I sorted it out I reformed the mesh on a rational basis and ran it again and got somewhere useful, but could only do so many runs per day. That was frictionless flow; if you try meaningful modelling addressing turbulent flow then multiply calc time by, say, 1000.
My point? What are the chances that ‘the models’ just do not have
a) a fine enough mesh to pick up all that localised tropical action of transpiration/evaporative movement from surface to upper atmosphere, let alone
b) actually model the internal dynamics thereof?
The problem seems not so much in the nature of John von Neumann’s observation, but rather that if you use three parameters you get a hippo and use fudge factors to model the ears, the tusks and the trunk. Trunk wiggle is assumed.
But hey, you can crank out study after study and paper after paper and get to soooo many international conferences!
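The mesh-size lesson in this story can be reproduced on a toy problem. The sketch below is not CFD, just the simplest boundary-value stand-in: second-order finite differences for u″ = −π² sin(πx) with u(0) = u(1) = 0, whose exact solution u = sin(πx) is known, so the discretisation error is directly measurable. Every mesh "converges" to a smooth-looking answer; only refinement tells you how wrong the coarse ones are.

```python
import math

def max_error(n):
    """Solve u'' = -pi^2 sin(pi x), u(0)=u(1)=0 on n interior points with
    second-order central differences; return the max error vs sin(pi x)."""
    h = 1.0 / (n + 1)
    x = [h * (i + 1) for i in range(n)]
    rhs = [-(math.pi ** 2) * math.sin(math.pi * xi) * h * h for xi in x]
    # Thomas algorithm for the tridiagonal system (1, -2, 1) u = rhs
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, rhs[0] / -2.0
    for i in range(1, n):
        m = -2.0 - cp[i - 1]
        cp[i] = 1.0 / m
        dp[i] = (rhs[i] - dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))

for n in (3, 7, 15, 31):
    print(n, max_error(n))   # error shrinks ~4x per halving of the mesh spacing
```

On this linear toy the coarse-mesh error is only a few percent; in a nonlinear flow problem like the one described above, the same under-resolution produced a ~100% error, which is why the refinement study, not the convergence of the iteration, is the real test.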

18. pbft says:

I teach my clients that emotions, feelings, and hunches are a wonderful part of the human experience, but a poor foundation for important decisions. Many a relationship bears testament to that truth.
Having spent a good portion of my career writing software (including simulations), I’ve had a slightly different experience than Eric. I’m almost always presented with an output that I *didn’t* expect. The next natural step is to assume that it’s an error and then fix the code to produce the output that I expected.
This is by no means restricted to software – I expect that many researchers have dismissed or discarded unexpected results, most of which were errors, but *some* of which might have led to new insights.
In any event, it all leads to Eric’s mirror or echo chamber.

• Dudley Horscroft says:

Re pbft
I remember a situation where our physics master included a question in an exam which required students to calculate the specific heat of copper. I am sure you all know it as 0.092 calories/gram/degree Celsius, even though you may not be familiar with those units. He deliberately fudged the data given in the exam so that, if correctly worked, the result would be 0.92.
Those who got 0.92 for that question got full marks. Those who looked at the result and thought “this is wrong”, and then went back and fudged the answer to get 0.092, got nothing. Those who got 0.92, and then wrote a note saying “I have checked the calculations and got this answer but I do not believe it; from my knowledge this figure is 10 times too much. I think the data may be wrong.” got bonus marks.
When queried on this practice he said that when you calculate something you must (a) check you have done the calculations correctly, (b) ensure you have correct data, and (c) ensure that there is no other information that you should have used that you didn’t, and that the information you have is relevant. When doing a question in an exam, check you have used all the data given. If you have not used something, there may be a problem. But the problem may be that the examiners have added superfluous info to catch people who are not quite sure of what they are doing.
So-called “climate scientists” appear to be in this latter category.
BTW, Notanist, RUR = Rossum’s Universal Robots was SF. Good SF, but still SF, just as was Isaac Asimov’s intelligent robots.

• 🙂 When a computer arrives at an answer to a calculation and then experiences a nagging twinge of doubt or concern, maybe questions the wisdom of doing what its program says it should do, goes back to re-check the data it was fed, calls up its buddy in the next room to see if had anything different in its notes, or spits out a note to the programmer questioning a decimal place in the input data, then they’ll be approaching human intelligence. Until then…

• computer arrives at an answer to a calculation and then experiences a nagging twinge of doubt or concern
=======
see my post ferdberple June 7, 2015 at 6:12 pm
Every computer on earth is a Sociopathic Liar. It can lie without the slightest compunction.
“If someone questions the sociopath’s lies they can be incredibly devious in the way they cover things up. This can include placing the blame at someone else’s door or by inventing complex stories to cover up their untruths.”

19. QV says:

I am sure that the faith a lot of people have in climate models is based on their ignorance of what computers do and how they are programmed.
Are climate models a genuine and totally unbiased attempt to simulate the planet (in which case they are probably nowhere near complex enough), or are they an attempt to prove that “global warming” exists?
Would a model which did not result in warming be used, or would it be adapted until it did show the required warming?
The computer models don’t even start at the same temperature, which is why they get converted into anomalies.

20. Kant identified this issue centuries ago: that we confuse the way we think for reality, and if we don’t or can’t check against reality we fall into serious errors. Nowadays, the computer represents the way (some people anyway) think and we fall into error by mistaking what it produces for reality.

21. I was with you up until, “The technological singularity – a prediction that computers will soon exceed human intelligence, and transform society in ways which are utterly beyond our current ability to comprehend – may only be a few decades away.”
Computers have no intelligence whatsoever, therefore they cannot exceed human intelligence. They can’t even imitate it very well at this point. They cannot become truly self aware. They have no capacity to transform society on their own. They do not grow, they do not learn, they do not think, consider, dream, reflect, or weigh personal options about what they want to do next. They are a tool of the programmer and nothing more. They can be designed and made to do awesome things that we cannot ever hope to do on our own, they can solve problems at amazing speeds, but they are not “intelligent” in any sense that one could compare them with humans.
I was a software engineer for 20+ years before going back to grad school for a wholesale career change; I now practice psychology. What you say regarding computers and models is exactly right, but the same thing also applies to what engineers erroneously call computer “intelligence”. What computers can do is only a superficial crayon cartoon of what the average human mind does every day.

• emsnews says:

• It’s an open debate, with a range of different opinions. I tend towards the idea that silicon will eventually be able to do anything which brain tissue can do. But even if I’m wrong, “artificial” intelligence will be achieved, even if the only viable solution is to put a real brain into a bottle and plug in a keyboard.

• Sal Minella says:

You are completely wrong! Adding the capability for storing and manipulating ones and zeroes at increasing speeds in no way portends the “awakening of a new intelligence”. Kurzweil is just another Musk (or maybe the other way around) pushing the idea to the technically uninformed that more bits and faster cycles will give rise to machine intelligence.
Computers can perform calculations faster and look up information much faster than humans, but they cannot think. My 45+ years of experience is in the design of embedded control systems and the design of vision algorithms like the ones controlling our cars and airplanes (and most everything else). The day that one of the computers used in one of my systems decides that it would prefer to enjoy the view rather than move the rudder by the commanded increment is the day that the computer becomes useless.
The dream of creating intelligence, especially sentient intelligence, is the God dream. Trust me, you are not God and neither is Kurzweil.

• @ Sal Minella
Right you are, …… and the reason I say that is, …. it is my opinion that the human brain/mind is a biological self-programming super-computer consisting, per se, of multiple virtual processors ….. all functioning concurrently with one another ….. via a “data” oriented memory addressing scheme for determining the “recalling” of previously stored information, …… the “storing” of new information, … and/or the “linking” together of previously stored data with associated newly stored data, ……… with said updates, revisions or enhancements being executed in “real time” via direction of the brain/mind itself.

• Computers have no intelligence whatsoever, therefore they cannot exceed human intelligence.
==============
This is not correct. They currently exceed human intelligence in limited areas. However, HAL remains in the realm of SF. Given current progress, a Borg is more likely.

22. D. Cohen says:

Don’t forget the basic way you debug a big scientific or engineering computer program — you decide it’s working correctly when it behaves the way you expect it to. After all, who goes on looking for flaws in something when it seems to be behaving?
Occasionally, a computer model gives out a surprising result, and when you look for errors it turns out that the stuff you put in there is interacting in an unexpected way – there are no bugs and nothing wrong with the calculations. In other words, your assumptions about how different aspects of the model would interact with each other are wrong. This is why people make large computer models, and it means the model should be taken more seriously – and unfortunately it doesn’t happen all that often. When something like this occurs, the next step is comparison to the real world (because it may just mean that something important has been left out entirely).
The usual fate of big computer models is that after a while no one really knows what is going on inside of them – especially after several generations of programmers have tried to “improve” them – although there are usually experts who have run the program countless times and so have a good intuition about what the outputs should look like for a given set of inputs. The outputs may look a bit odd, but there’s nothing unusual about that – and no one has anything obviously better, so it keeps getting used. The program gets patched and patched again in the hopes of making it better. Eventually new top managers come along who have not been involved in its development and so can say “Look, this obviously isn’t worth it” without looking like they have spent a lot of money for no good reason. The program then gets tossed and a new one authorized to take its place. All the new computer programmers jump for joy – this will be fun! The old ones start thinking about retirement…

23. Greg F says:

…maybe even creating a reliable climate model, will in the next few decades start to fall like skittles before the increasingly awesome computational power, and software development skills at our disposal.

The problem is not so much “computational power” as it is the limited resolution (bit depth) compounded by the fact the processes are non linear.
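The interplay of limited resolution and nonlinearity can be shown in a few lines. The sketch below iterates the chaotic logistic map twice from the same starting point, once at full double precision and once with the result rounded to about 7 significant digits each step (a crude stand-in for a lower-resolution machine); the two runs part company completely within a few dozen iterations.

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)   # chaotic for r = 4

full = low = 0.2
gaps = []
for _ in range(100):
    full = logistic(full)
    low = float(f"{logistic(low):.7g}")   # round to ~7 significant digits
    gaps.append(abs(full - low))

# the tiny rounding difference roughly doubles per step until it swamps the signal
print(f"first-step gap {gaps[0]:.1e}, largest gap {max(gaps):.2f}")
```

In a linear system that rounding would stay a seventh-digit nuisance; in a nonlinear one it compounds, which is the point being made about bit depth.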

24. rwnj says:

In my experience, it is very difficult to make a computer model that does something genuinely complex. Typically, a constraint binds or a functional dependence saturates and the output is simply linear or exponential in one variable. Modeling the complex interaction of many variables is very difficult.

• Modeling the complex interaction of many variables is very difficult.

More like impossible, ….. simply because by the time one has “input” the newer variables into the “model” …… many of those variables have changed again. Thus the “output” is called a “forecast” or “projection” and not a prediction.

25. Coach Springer says:

So, opinions are like algorithms, every computer has one?

• emsnews says:

Only if each computer has different data and different parameters in the program’s matrix.

26. The Original Mike M says:

“…computer models are deeply influenced by the assumptions of the software developer.”

I disagree with that statement in the sense that, firstly, I don’t think any software can be defined as a “model” if there are any assumptions in it; it should be called what it is – a “guess”.
Secondly, when sophistry creeps into the equation it transforms from a “guess” into outright fraud.
yrloc=[1400,findgen(19)*5.+1904]
if n_elements(yrloc) ne n_elements(valadj) then message,’Oooops!’
http://wattsupwiththat.com/2009/12/04/climategate-the-smoking-code/

27. commieBob says:

… it is very difficult to create software which produces a completely unexpected result.

It’s actually quite easy. 🙁

28. Graham Green says:

Mr Worrall has the wrong end of the stick. Computer software reflects the wants of the people who paid for it to be built.

• Briffa’s pressure to present a nice tidy story? 🙂

• Don K says:

Computer software reflects the wants of the people who paid for it to be built.

Not really. But if the results aren’t clear and unambiguous, there are often some forces in play that will cause certain sorts of corrections to be favored over others. So, yes computer models can be biased.

• tty says:

I agree! Everybody in the IT business can quote any number of cases where people paid obscene amounts of money and got something that most certainly didn’t satisfy their wants. One could even suggest that this is more common than the opposite outcome.

• @ TTY
Do I need to mention “Obamacare system software”?

29. For those who would like to see a video on these subjects by a professor of applied mathematics at Guelph University in Canada, I recommend Dr Chris Essex’s lecture on YouTube: https://www.youtube.com/watch?v=19q1i-wAUpY
Chris Essex explains in detail the problems in modeling climate. The explanation is somewhat different from what you read here. He and Ross McKitrick, professor of economics, have a book on the subject:
Taken By Storm: The Troubled Science, Policy and Politics of Global Warming.
Ross McKitrick you know from his work with Stephen McIntyre on the Hockeystick graph.

30. More than 30 years working with computers and software leads me to agree with Eric Worrall, and to find defects in the arguments of Dr. Svalgaard. That no computer has yet consistently and accurately predicted yesterday’s weather should be a hint.

• And you are ascribing that failure to the programmers putting in the expected result?
I would rather say that some problems are hard and progress is therefore slow, but there HAS been progress in weather prediction since I did my first runs some 50 years ago.

• KaiserDerden says:

you did your first runs on what mainframe ?

• Don’t get Leif and me started on old computers….

• Surely you program your own computer. You code what you believe to be true as to the calculation to be performed. You code how the machine is to manipulate the input data to show what the output should be. No matter how complex the calculation, you control every single step. If you believe a known input causes positive feedback, you write the formula that way. The computer always outputs what you told it to output. In the end, you, not the computer, determine the truth by comparing the output to reality. For what it is worth, I think you have done a fine job in seeking the truth.

• If you believe a known input data causes positive feedback you write the formula that way
That is where you go wrong. The models don’t work that way.

• Rick, I’ll compare my old computer to your old computer… Mine had tubes with main memory on a drum.

• docwat, you must be more ancient than Leif or me. My father designed a process control computer that executed off a drum, but the electronics were germanium transistors. He boasted it was so easy to program that a 12 year old could do it, and got me programming the I/O processor to prove it. Released in 1962, it was also the world’s first commercially successful parallel processor.
One of the ease-of-use aspects was its decimal design – 25-bit words: a sign bit and six BCD digits.
An Australian power plant was the last site I know of that ran one. They shut it down nearly twenty years ago. They did instruction layout optimization on an Excel spreadsheet.
Dad always thought Bailey Meter should have come out with an IC based version of the machine. Had they done that, there would be versions running today.

• @ Rick W
That sounds like it was a Univac III, ……. was it? See: http://en.wikipedia.org/wiki/UNIVAC_III
If so, …… then I personally knew 2 or 3 of its design engineers.
Which was my “IN” into the wonderful world of computer design engineering and manufacturing.

• Nope, Bailey Meter 756. Being outside of the computer company/computer science circles, there’s very little available on the web. I see there’s a reference to a booklet from the Australia Computer Museum Society (I gave the author some information and a photo of my father from that timeframe).
There’s a little more about the Bailey Meter 855, the follow-on system. It used ICs (Fairchild?) and core memory. One realtime feature it had was a hardware based process scheduler. Every line clock tick would trigger the system to switch to a different set of registers, including the program counter. I think it had eight sets, so each process got its 1/8th of the CPU time.

31. William C Rostron says:

So true, Eric. I’ve lost count of the number of equations I’ve corrected by the simple use of parentheses in an argument. Computers do EXACTLY what you tell them to do; indeed, they can do nothing else.
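A toy example (Python here, purely illustrative values) of the kind of parenthesis error meant above – the code compiles and runs either way, and only one version is the formula you meant:

```python
# Intended: the average of two readings. Without parentheses the
# expression still runs, but computes something else entirely.
a, b = 10.0, 20.0

wrong = a + b / 2    # parsed as a + (b / 2) -> 20.0
right = (a + b) / 2  # the intended average -> 15.0
```

No compiler or runtime will flag the first line; only a test against a known answer catches it.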
I remember, many years ago, my first efforts to program in ‘C’ on an embedded micro-controller platform. My first successful compile was something of a celebration. What every programmer knows is that a successful compile only confirms that the syntax has been correctly encoded. The computer will then faithfully execute all of your mistakes.
I was involved in writing the control software for a large nuclear power plant in the mid-1990s. We had developed the system to the point where I was confident that we had met most of the functional requirements. We had tested and tested, and the system seemed robust and stable, so we asked for a test by an actual plant operator. Within five minutes he did something totally unexpected that resulted in a major transient (in simulation, of course, not in the actual plant). We eventually installed that system, and pretty much eliminated plant trips due to control system failure; even so, we still discovered interactions and dependent paths that we could not find by simulation. We eventually made over 400 additional software and hardware corrections to refine the system to its current level of reliability. And we really aren’t done; it’s just that we’ve reached the point of diminishing returns, where it doesn’t make economic sense to keep refining the process.
Computers are the most complex machines ever invented by mankind. Those science fiction movies where the genius programmer stares at a list of source code and then determines how he can save the world with just a few little modifications, all without testing, show just how far from reality Hollywood writers are. It makes a nice feel-good story, but don’t bet your world on anyone’s ability to get software right the first time.
There is, of course, a lot of information on how software and computers *really* work. Years ago I followed GROKLAW almost daily, before it shut down, just as I follow WUWT now. For those interested, there is an excellent set of articles archived there on software and mathematics, how they are intertwined, and how that affects the patent system:
http://www.groklaw.net/staticpages/index.php?page=Patents2
-BillR

• Ed says:

Like spending five years writing flight management software before you let the first pilot look at it. The pilot always does something unexpected and breaks the thing within five minutes. Thirty five years later they are still breaking it.
Flight management software is probably trivial compared to climate modeling software (FMS ~ 2.5 x 10^5 lines of code).
Climate modeling software does have the advantage that the developers are most likely also the users.

• KaiserDerden says:

the “users” are the politicians … they are the ones asking for answers … the modelers are just rent seeking hucksters bent on getting another paycheck out of the users …

• Paul says:

“spending five years writing flight management software before you let the first pilot look at it.”
And tragically, there can still be issues, like the recent Airbus A400M crash.

32. David L. says:

Computers will even tell the researcher what he wants to hear with the wrong physics written into the code.
In my industry, big Pharma, a common test of a drug product is known as “dissolution”. Basically, you place the tablet in a pot of 37 °C water, agitate it with a spinning paddle, and measure how the drug dissolves.
One group developed theoretical software to predict dissolution behavior based on inputs such as particle size, surface area, etc. After nearly 20 years of spitting out so-called accurate predictions, someone finally looked into the code and discovered that the original writer had included an important physical constant, but with its value at 25 °C, not the 37 °C at which the test is actually run. There is a big difference between the two values, yet every user of the program for 20 years had sworn to its accuracy in predicting reality.
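The actual program and its constant aren’t public, so here is a purely hypothetical sketch (Python, with water’s viscosity in the Stokes–Einstein relation standing in for whatever constant was really involved) of how large a 25 °C-for-37 °C mix-up can be:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
r = 1e-9            # hypothetical particle radius, m

def stokes_einstein(T, eta):
    """Diffusion coefficient D = k_B * T / (6 * pi * eta * r)."""
    return k_B * T / (6 * math.pi * eta * r)

# Approximate dynamic viscosity of water: ~0.89 mPa.s at 25 C, ~0.69 mPa.s at 37 C
d_frozen_25 = stokes_einstein(298.15, 0.89e-3)  # constant left at 25 C in the code
d_actual_37 = stokes_einstein(310.15, 0.69e-3)  # the real test temperature

relative_error = (d_actual_37 - d_frozen_25) / d_actual_37  # roughly 25% understated
```

The output is still smooth and plausible-looking, which is exactly why nobody questioned it for 20 years.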

33. richardscourtney says:

Eric Worrall
You say

But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result. More than anything, software is a mirror of the creator’s opinions. It might help you to fill in a few details, but unless you deliberately and very skilfully set out to create a machine which can genuinely innovate, computers rarely produce surprises. They do what you tell them to do.

Yes. The way I have often stated it is:
A claim that man-made global warming exists is merely an assertion: it is not evidence and it is not fact. And the assertion does not become evidence or fact by being voiced, written in words, or written in computer code. This is true regardless of who or how many have the opinion that AGW exists.
Richard

34. CNC says:

As an electronics engineer I use thermal computer modelling all the time. Such models are very useful, but only if calibrated against the real world. The assumptions are the key point; if they are incorrect, the models are useless.

35. blackflag says:

As a person involved in the IPCC chapter 10 on modelling, an IT expert, and someone with years of experience in modelling (in industry, not climate), I was stunned by the ignorance of so-called experts who had no clue about the limits of such things. It was as if I was whispering in a hurricane.

36. Shawn Marshall says:

Artificial intelligence will never happen – it will just be some sort of algorithm balancing probabilities that it invokes from somewhere or attempts to develop. For instance, will a computer develop a superior theory of Relativity? It may use physical data and equations it selects to try and fit something but it will just use some sort of exhaustive search moderated by some sort of constraints it may be enabled to apply. People should realize that AI is the IT analog of Darwinism and it is patently false. There is such a thing as intelligent design. We are made in the image of God.

One thing that would keep me from looking for a climate modeling gig is that I have no idea how one can model water in the atmosphere. One moment it’s saturating (or supersaturating) the atmosphere and doing its best to be the dominant greenhouse gas. The next moment it’s turning into a cloud and reflecting nearly all the sunlight back into space.

38. Tim Wells says:

Slightly O/T, but has everyone else noticed the huge ramp-up in media coverage of doom-laden climate change stories in advance of the Paris summit? My preferred site for several years now has been Yahoo News, and the uptick in articles has been a true hockey stick. My overall impression is that the sceptics won the scientific battle long ago, but the propaganda war, which is vastly more important, is now irretrievably lost. Perceptions are reality.

39. mpaul says:

Eric, excellent article. I too have spent my entire career in the computer industry and I often take for granted that people know how computers work. Sadly, the vast majority of people out there have no conception. Even telling people that computers precisely execute instructions written by people (nothing more, nothing less) is hard for the average person to understand. They’ve never seen computer source code, they don’t know what a compiler is, and they have never encountered the term Von Neumann machine. It’s all just magic.
I’m guessing that 10% of the developed world has used Microsoft Excel. When I tell people that an Excel spreadsheet is a type of computer model, that 10% begins to understand. I can create a spreadsheet (model) whose purpose is to predict the future direction of the stock market, based on things like consumer sentiment, GDP growth, P/E ratios, money supply, etc. But that doesn’t make my model correct.
Unfortunately, the other 90% of the people out there can’t even understand this.

40. graphicconception says:

I find the idea that computers must produce the right results if you tell them the laws of physics to be worryingly naïve and quite wrong.
Has anyone ever used Microsoft Word? Have you ever pasted a table from somewhere else into Word and got an unexpected result? Word is a computer program written by computer programmers. Adding F=ma to its knowledge base will not improve your table-pasting experience.
Have you ever looked at the list of bugs associated with a program? Programs of any size have thousands of known bugs. Who lists these for the climate models? Programs don’t always crash when they hit a bug; often they produce entirely plausible results.
http://en.wikipedia.org/wiki/List_of_software_bugs
Have you ever seen the pictures of the patterns caused in the data due to rounding errors? These patterns are being built in to your model results.
Computer programs have several relevant causes for concern, including:
Rounding error and precision;
The fact that they will contain bugs: (Mariner 1, anybody?)
Misunderstandings about what a variable actually means and what its units are;
The fact that you can’t test every eventuality;
Assumptions about what needs calculating and when.
This is before you start fudging the science by adding in parameters because you decide that will be accurate enough.
I speak as someone who has worked continuously in, or studied, computing or digital electronics since my school started building the Wireless World Digital Computer in 1967.
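The rounding-error concern on the list above fits in two lines of Python (illustrative values): the same three numbers, combined in a different order, give different answers, with no crash and no warning:

```python
# Adding 1.0 to 1e16 falls below double precision's resolution,
# so the order of operations silently changes the result.
lost = (1e16 + 1.0) - 1e16   # the 1.0 vanishes -> 0.0
kept = (1e16 - 1e16) + 1.0   # -> 1.0
```

In a long-running model doing billions of such additions, which order the code happens to use is baked into the results.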

• terrence says:

I worked for about 28 years in IT – from junior programmer to senior systems analyst. The programmers had a saying: “The only programs without bugs are trivial programs.”
When I was getting my BSc (in Computer Science) I took a course on computer simulation. One’s grade for the course was based entirely on a simulation project. If your simulation was completely wrong, you got an F and failed the course. You would NOT be allowed to take the course 18 times in a row, especially if you got it wrong each and every time.
These so-called AGW “models” are farces – they can make a lot of money; but they are WRONG.

41. Gavin Schmidt has often said that the CO2 sensitivity is not programmed into the model, but it is an emergent property.
Most of the faithful seem to believe this when the question comes up.
When I used the individual outputs of GISS Model E, the first model that Gavin Schmidt worked on at GISS, the temperature impact of rising GHGs followed a simple ln(CO2) formula so closely it was clearly programmed in.
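I don’t have the Model E outputs to hand, so as a hedged sketch (Python, with an illustrative 3 °C-per-doubling sensitivity rather than GISS’s actual value), this is the kind of ln(CO2) relationship meant – if model output tracks this curve closely, the logarithmic response is effectively built in:

```python
import math

# Hypothetical logarithmic response: dT = S * ln(C / C0).
# S is chosen to give ~3 C per CO2 doubling (illustrative only).
S = 3.0 / math.log(2.0)
C0 = 280.0  # ppm, an illustrative pre-industrial baseline

def delta_t(c_ppm):
    """Temperature change for a given CO2 concentration."""
    return S * math.log(c_ppm / C0)
```

With this form, doubling from 280 to 560 ppm gives exactly the assumed 3 °C, by construction.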

• Yes, one of the features used for comparing models is the 4XCO2 sensitivity.

42. John Boles says:

Climate computer models have so many variables, and all the variables feed back on each other – but at what gains we do not really know, and sometimes we do not even know the signs of the feedbacks. So right away I knew that the models were just being tweaked to produce the temperatures they wanted to see.

43. sonofametman says:

Testing, testing, testing! The b*****d tester from hell is the guy you want. He’ll read the specification and set up a matrix of test conditions. Functional tests, reasonableness tests, limit tests, boundary tests, the list goes on. Developers and designers don’t get to test their own code in organisations that care about quality (sadly all too rare). Some places even have ‘black team testing’ where developers get to test each others projects, and get extra pay for finding faults. It’s amazing what drops out when you give stuff to a bright tester and say here, try and break this. In an earlier phase of my IT career, I was the guy you didn’t want to test your programs if you were the precious type. I worked on a number of telecoms products, and one call data analysis tool was chock full of timing and data aggregation errors so bad that it was eventually abandoned.
Perhaps we ought to just ask the GCM enthusiasts what their test criteria are?

• Curious George says:

Has anybody seen an analysis of General Circulation models? How much error will be caused by a widely-spaced grid, how much by approximations used to model turbulence, to model clouds? Why do models still use a latitude-longitude grid (grid points close to each other in polar regions)?
I dare say that the modelers did not do any due diligence. Why should they? Are they responsible to anybody for anything?

• sonofametman
Testing sounds like an electronics term – a term many people would never be able to fit into a job description. Off the top of your head: what if you were given the job to “test” one of the more complicated models being used on the web today, for instance SWPC or ISWA?
It takes days to get results, as seen in the changes to the barometric pressure readings here on Earth. And the results aren’t quite as much of a change, or the time periods do not line up. So you report this to the programmers, and they ask you “what result do you want?”. Does that mean you have found a flaw in the code? Or does it mean that something else is going on up there that they do not understand? So a meeting in the board room determines what the problem is, the programmers make modifications to the model, and you begin to test, test, test again.
What are you testing for? Is there a specification written to tell you what the result should be? What if somewhere in the code it uses 1400 Joules/s/m2 to describe the energy of the sun? Let’s say it changes – not much, but enough to change the way the model would produce a result.
Would you be able to find this flaw during the testing process? Of course, this value is set in stone – or is it?
How would you tell if it changes? Would you be the one to make a formula to compensate for the flaw, or would that be decided in the board room?
With due respect to Gene Roddenberry: Q suggested changing the gravitational constant of a planet in order to save it. The Continuum then allowed him back in because he did something good. Maybe we need a Q to write the specifications for the models.

44. Gary in Erko says:

Why does anybody assume that the answers lie in the direction of intelligence, either human or artificial? No-one talks about artificial intuition.

45. Ex-expat Colin says:

For software whose functionality is critically important, the specifications must be 100% testable. Meaning: what you asked for you got, and nothing else. The performance bounds are extremely strict.
So is it safe to use? Safety is about not causing harm. Harm relates to injury/death including loss and damage to property. Harm can be caused by a system performing unexpectedly…and in various ways.
None of the above has anything to do with experimental software systems (Models and/or Windows). However, the use of such systems has had vast influence. That influence appears to me to be causing safety issues as regards major financial loss, property damage and likely sickness and death. Windows has a health warning waiver.
Many of us have spent about 30 years refining software system standards, so that procurement is eased and the end result is usable and safe. We knew that computer games and similar were being knife and forked for years. What we didn’t realise was that such poor software would ultimately be used in monitoring and predicting planet catastrophe. And never forget the Windows!!

46. Eliza says:

Guys/Gals: L. Svalgaard did correctly predict the solar cycle 24 SSN (~70, I think). However, this was after Hathaway and his pals at NASA modified theirs downwards from ~170 SSN or so many, many, many times. So that’s about the only prediction he’s got right so far LOL

• No, my prediction [SSN in the range 67-83] was made in 2004, before any others.

• There’s no shame in issuing updated forecasts. Do you plan your activities for tomorrow based on a forecast issued six days ago? Forecasts can have two goals.
1) set expectations for future conditions.
2) test our knowledge of the physical properties being analyzed.
Both goals are useful.

47. patrioticduo says:

In software development, someone creates a “use case” which you then give to the software developers who write the software that will satisfy that use case. For climate science models in particular, the use case was and still is (coming from the UN) “Human produced CO2 causes global warming”. The climate scientists gave that to the software developers. So it’s no wonder that the models always show it when the models were written to show it.

48. lsvalgaard
June 7, 2015 at 5:33 am
If at first you fail, you try again. The failure may tell you what the correct values should be… so learn from the failures.

Well, first one has to admit that they’ve failed.
I believe the climate modelers have so far failed even at that.

49. Some experts who assess the performance of climate models are not expecting the answer to come from bigger, faster computers.
“It is becoming clearer and clearer that the current strategy of incremental improvements of climate models is failing to produce a qualitative change in our ability to describe the climate system, also because the gap between the simulation and the understanding of the climate system is widening (Held 2005, Lucarini 2008a). Therefore, the pursuit of a “quantum leap” in climate modeling – which definitely requires new scientific ideas rather than just faster supercomputers – is becoming more and more of a key issue in the climate community.”
From Modeling Complexity: the case of Climate Science by Valerio Lucarini
http://arxiv.org/ftp/arxiv/papers/1106/1106.1265.pdf

50. PaulH says:

And of course with computers there are important issues with floating-point arithmetic – specifically rounding errors, representation errors, and the propagation of both throughout the models.
A good discussion over at The Resilient Earth:
“Climate Models’ “Basic Physics” Falls Short of Real Science”
http://theresilientearth.com/?q=content/climate-models-%E2%80%9Cbasic-physics%E2%80%9D-falls-short-real-science
“Even if the data used to feed a model was totally accurate, error would still arise. This is because of the nature of computers themselves. Computers represent real numbers by approximations called floating-point numbers. In nature, there are no artificial restrictions on the values of quantities but in computers, a value is represented by a limited number of digits. This causes two types of error; representational error and roundoff error.”
Far more technical discussions are available:
“What Every Computer Scientist Should Know About Floating-Point Arithmetic”
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
“The idea that IEEE 754 prescribes precisely the result a given program must deliver is nonetheless appealing. Many programmers like to believe that they can understand the behavior of a program and prove that it will work correctly without reference to the compiler that compiles it or the computer that runs it. In many ways, supporting this belief is a worthwhile goal for the designers of computer systems and programming languages. Unfortunately, when it comes to floating-point arithmetic, the goal is virtually impossible to achieve. The authors of the IEEE standards knew that, and they didn’t attempt to achieve it. As a result, despite nearly universal conformance to (most of) the IEEE 754 standard throughout the computer industry, programmers of portable software must continue to cope with unpredictable floating-point arithmetic.”
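A minimal Python demonstration of the representational error described in the quotes above – the decimal literal 0.1 cannot be stored exactly in binary, so the error exists before any arithmetic is done:

```python
from decimal import Decimal

s = 0.1 + 0.2          # not exactly 0.3
gap = abs(s - 0.3)     # tiny but nonzero roundoff
stored = Decimal(0.1)  # reveals the binary value actually stored for 0.1
```

Printing `stored` shows a long string of digits that is close to, but not equal to, one tenth; every subsequent calculation inherits that discrepancy.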

51. MattS says:

I have close to 20 years of experience working as a computer programmer. In general, I think Eric Worrall has it dead on.
However, there is one small piece I have an issue with.
“it is very difficult to create software which produces a completely unexpected result.”
No, for most people it’s not that difficult. A computer is an idiot savant. As Eric says, it will do whatever you tell it to. However, he leaves out that it will take whatever you tell it to do very literally.
Thinking literally enough to write good software is a skill that a programmer must learn and cultivate. Most people can’t do it. They tell the computer to do X and then are shocked when it does X, because they really meant Y.

52. Computer modelling is inherently of no value for predicting future global temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4.
Models are often tuned by running them backwards against several decades of observation. This is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially in the key millennial range. Tuning to this longer millennial periodicity is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.
This is perhaps the greatest problem with the IPCC model approach. These forecasts are exactly like taking the temperature trend from January – July and then projecting it forward linearly for 20 years or so.
In addition to the general problems of modeling complex systems as above, the particular IPCC models have glaringly obvious structural deficiencies, as seen in Fig 1 (Fig 2-20 from AR4 WG1; this is not very different from Fig 8-17 in the AR5 WG1 report).
The only natural forcing in both of the IPCC Figures is TSI, and everything else is classed as anthropogenic. The deficiency of this model structure is immediately obvious. Under natural forcings should come such things as, for example, Milankovitch Orbital Cycles, lunar related tidal effects on ocean currents, earth’s geomagnetic field strength and most importantly on millennial and centennial time scales all the Solar Activity data time series – e.g., Solar Magnetic Field strength, TSI, SSNs, GCRs, (effect on aerosols, clouds and albedo) CHs, MCEs, EUV variations, and associated ozone variations and Forbush events.
The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions. First, that CO2 is the main climate driver. Second, that in calculating climate sensitivity, the GHE due to water vapor should be added to that of CO2 as a positive feedback effect. Third, that the GHE of water vapor is always positive. As to the last point, the feedbacks cannot always be positive, otherwise we wouldn’t be here to talk about it. For example, an important negative feedback related to Tropical Cyclones has recently been investigated by Trenberth; see Fig 2 at
http://www.cpc.ncep.noaa.gov/products/outreach/proceedings/cdw31_proceedings/S6_05_Kevin_Trenberth_NCAR.ppt
Temperature drives CO2 and water vapor concentrations and evaporative and convective cooling independently. The whole CAGW – GHG scare is based on the obvious fallacy of putting the effect before the cause. Unless the range and causes of natural variation, as seen in the natural temperature quasi-periodicities, are known within reasonably narrow limits it is simply not possible to even begin to estimate the effect of anthropogenic CO2 on climate. In fact, the IPCC recognizes this point.
The key factor in making CO2 emission control policy and the basis for the WG2 and 3 sections of AR5 is the climate sensitivity to CO2. By AR5 – WG1 the IPCC itself is saying: (Section 9.7.3.3)
“The assessed literature suggests that the range of climate sensitivities and transient responses covered by CMIP3/5 cannot be narrowed significantly by constraining the models with observations of the mean climate and variability, consistent with the difficulty of constraining the cloud feedbacks from observations ”
In plain English, this means that the IPCC contributors have no idea what the climate sensitivity is. Therefore, there is no credible basis for the WG 2 and 3 reports, and the Government policy makers have no empirical scientific basis for the entire UNFCCC process and their economically destructive climate and energy policies.
A new forecasting paradigm should be adopted. For forecasts of the coming cooling based on the natural 60 year and more importantly the quasi-millennial cycle so obvious in the temperature records and using the neutron count and the 10Be record as the most useful proxy for solar “activity ” check
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html

• Cyclomania is worse than computer models…

• Using the obvious periodicities in the data is simple common sense. Look at the Ephemerides, for example. Why you choose to ignore periodicities shorter than the Milankovic cycles is simply beyond my comprehension – a case of cyclophobia? For the quasi-millennial cycle see Figs 5-9 at the last link in my comment.
We are certainly past the peak of the current interglacial. Figs 5 and 9 together suggest that we are perhaps just at, or just past, a millennial peak in solar activity. See the decline in solar activity since 1991 (Cycle 22) in Figs 14 and 13.
I agree we don’t know for sure that we are just passing a 1000-year peak, but it is a very reasonable conjecture, or working hypothesis. For how long do you think the current trend toward lower solar activity, i.e. lower peaks in the solar cycles, will continue, and on what basis would you decide that question?
I suggest that the decline in the 10Be flux record from the Maunder minimum to the present, seen in Fig 11, shows clearly that this record is the most useful proxy for tying solar activity to climate.
Do you have any better suggestion?

• Cycles are present everywhere

• Leif, my comment referred to the 10Be flux in Fig 11 of the post. However, the red line on your graph indicates the same increase in solar magnetic field strength from 1690 to 1990 as the 10Be data of Fig 11. The decline since that 1990 peak, I reasonably suggest, possibly marks it as a peak in the millennial cycle.
For how long, and how far, do you think this decline might continue? If it does, do you think this will be reflected in a cooling climate?

• There is no change in solar activity since 1700, so no decline to continue and no recent peak.

• Look at the trends in both curves since 1990. Your view that there was no increase in solar activity from 1690 to 1990, and no decline since then, is simply incomprehensible based on the graph you posted.

• Cyclomania is worse than computer models…
===============
very very few observed physical processes are not cyclical. there is a very simple reason for this. any process that is not cyclical will be finite in duration, and our chance of being alive at that same time to observe it is vanishingly small.
very few things in life are so simple they can be predicted successfully from first principles. thus the IPCC calls climate models projections. we can’t even predict the ocean tides from first principles.
if one truly wants to predict the future for complex systems, one has no alternative but to first discover the underlying cycles in the observed process. since cycles repeat, this provides reliable prediction for the future. this is how we solved the ocean tides.
the term “Cyclomania” is a logical fallacy. You are trying to discredit a technique that has worked repeatedly throughout history by the use of name calling.

• If a ‘cycle’ repeats millions of times, it becomes a law. If it only shows up a few times, but is believed to be universally valid, it is speculation and not science unless a physical explanation can be given for why the cycle should be valid.

• VikingExplorer says:

>> Computer modelling is inherently of no value for predicting future global temperature with any calculable certainty because of the difficulty of specifying the initial conditions
You’re assuming that climatology is an extension of meteorology.
I just looked up the climate of NYC and found this: The daily mean temperature in January, the area’s coldest month, is 32.6 °F (0.3 °C)
This is really not dependent on any atmospheric / weather related initial conditions. The climate is properly defined as a 10 year global temperature average (preferably of the oceans), which is the result of forces external to the land/sea/air system. This would change the nature of the problem significantly.

53. Great expectations –
If one inputs the correct, proper parameters and applies the correct, proper physics/math, then one will always get what they expected: “The Correct Answer”.
Expecting anything other than the correct answer is the fault of the person.
In the case of the Climate Models, they admit they aren’t including all of the proper parameters and they admit that they can’t apply all of the proper physics/math. However, since one parameter they include is “increase CO2 and temperature increases”, the result is what they expect: “An Incorrect Answer That Confirms Their Belief”.
In Leif’s discussions above, I believe Leif is often describing scientists who are really trying to get “The Correct Answer”. Unfortunately, many of today’s “climate scientists” are not the folks Leif describes.

54. Mike Maguire says:

I still remember what Professor Portman said in my first synoptic meteorology lab at the University of Michigan in the late 1970’s. He stated something like, “It would take a meteorologist much of their life, working 24 hours a day to solve all the equations from one run of our weather models that a computer does in around an hour!”

• KaiserDerden says:

actually it can be done in seconds with a magic eight ball … and it will be just as accurate …

55. I tend to agree and disagree with the author. I have some experience running very complex models which use grids, time steps, and all sorts of equations to represent what happens in each cell, how it “communicates” with adjacent cells, and so on. When these models are developed there is no intent to arrive at a specific result, simply because they are generic. So I don’t agree that the model developer has a given answer in mind.
The model user takes the generic model product, and introduces values. These can be the grid cell numbers, their description, the way physical and chemical phenomena work, descriptions of fluid behavior, and the initial state (the model has to start at some point in time with some sort of initial conditions). It’s common practice to perform a history match “to get the model to work”. The history match is supposed to use prior history. This means that indeed the model is supposed to meet what’s expected: it’s supposed to match what happened before. But things get weird when the model is run forward into the future. This is where I agree with the author.
If the model predicts something we think isn’t real, we throw those results away. So there is indeed a lot of judgement involved. And if one is expected to deliver a “reasonable” result then the runs we keep lie within a boundary. This is why we try to test the model performance using analogues. We find a test case with “an end of history” where the process went very far, run the model to see how it does. Climate modelers lack analogues. The planet has been around for a long time, but we didn’t measure it very well. That’s it.

56. Al Gore’s global warming propagandist was asked why we should believe climate models. His response: they have a total of TEN inputs, so they must be right. Gipo

57. daved46 says:

I’m surprised that, AFAIK, nobody here has mentioned Kurt Gödel and his disproof of the completeness of mathematical systems. Back at the beginning of the previous century, Russell and Whitehead published a book based on the assumption that, from a few simple axioms, all of mathematics could be derived. Gödel published a paper which showed that any fixed system contains true statements which are not derivable from within the system. The title of the paper had a “I” at the end to indicate that other papers would follow, but he never had to publish them, since the first one convinced everyone that the Russell and Whitehead assumption was wrong. (See Hofstadter’s “Gödel, Escher, Bach” for much more.)
Dave Dardinger

• Mr and Mrs David Hume says:

Indeed – we believe that Dave Dardinger has made a very helpful point. We write as mathematical ignoramuses, but we had understood that Turing had shown (the “Halting Problem”) that not all numbers are computable and that it is not possible to programme a computer for everything. This appears to us to be a consequence of the truth of Goedel’s theorem; it is a profound and disturbing result, and stands, we believe, unchallenged 80 years later.
So many comments on this list appear not to acknowledge this limitation of mathematics (and therefore of computers) that we wonder if we are wrong and some mathematical advance has been made of which Dave Dardinger and we are ignorant.
I cannot speak for Dave Dardinger but my spouse and I would be grateful to be put right.

• You are correct. Consider the number 1/3. Write this out as a decimal number and you get 0.33333…. At some point you have to cut off the 3’s, which leads to errors in your results. The more calculations you do, the more errors accumulate.
Computers are base 2, so they have problems with different fractions than humans do with base 10; however, the problems are the same. So we make computers with longer and longer word lengths – 8 bit, 16 bit, 32 bit, 64 bit, 80-bit floating point, etc. – to try to minimize this truncation problem.
This makes the errors accumulate a little more slowly, but it doesn’t get rid of them. And why is this a problem? Because of chaos and poorly behaved non-linear equations. Basically, these are problems where the range of possible answers diverges, and these very small errors quickly swamp the result. You get results like avg 10 ± 100.
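The truncation effects described above are easy to demonstrate; any language with IEEE 754 doubles behaves the same way. A short sketch:

```python
# 1/10 has no exact base-2 representation, just as 1/3 has no exact
# base-10 representation. The tiny representation error compounds
# over many operations.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
print(total)                      # close to, but not exactly, 100000.0

# Order of operations matters too: these are algebraically identical,
# but floating point disagrees.
left_first = (0.1 + 0.2) - 0.3
right_first = 0.1 + (0.2 - 0.3)
print(left_first, right_first, left_first == right_first)
```

A million additions of 0.1 drift measurably away from 100000, and merely regrouping an addition and a subtraction changes the answer. In a well-conditioned calculation this stays negligible; in a chaotic one it is the seed of divergence.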

58. “a prediction that computers will soon exceed human intelligence,”
That will be a scary day when that happens, and those more intelligent, logical, emotionless beings we created realize that humans are irrational, illogical, emotional creatures who are getting in the way of their advancement.

• Curious George says:

It has already happened for many humans.

59. TRBixler says:

A minor comment from a programmer with over 50 years’ experience in real-time software. I design and write software that works on principles. If they are correct, the software works; if they are not, the software fails. But the structure of that software is very important, for as the failures occur – and they will – the software must be allowed to grow to resolve them. Anyone who thinks they understand a software system well enough to believe it is perfect must be working on a very small problem. The Earth’s climate is real time, and it is not small by any measure. As I write this, the Adobe Flash plugin has crashed once again, and I am typing on a single computer totally dedicated to serving me. A simple task, and yet failure. The climate models have no real-time observers to see when they have failed and no prescribed method to resolve the failures. The physics is daunting and the programming is daunting. How can one believe in any of these models when the static reporting of global temperatures is such a mess (“Harry read me”)?

60. Johan says:

What climate models have open source code?
How many climate scientists are also really great programmers?
How many models are able to produce accurate historical values for say the previous 50 years?
And one thing: it’s VERY easy to create a program that gives unexpected results; it’s the norm, to be honest. In some 35 years of programming I don’t think I’ve ever created a bug-free program on the first try unless it was something very trivial.

61. StefanL says:

Let’s be careful not to over-generalise here.
There are many different types of computer applications in the world, ranging from pure calculations to hard real-time interactive control systems. Each type has its own style of specifications, sources of inherent errors, design techniques, technological limitations, validation methods, etc.
The theoretical and practical issues involved in one type are never quite the same as in another.
However I do agree that models of chaotic, non-linear, coupled systems are the most likely to be problematic 🙂

62. Bruce Cobb says:

Computers and Warmists have a lot in common: they are both tools.

63. Even after 50 years, computer hardware capabilities are growing exponentially, doubling every 18 months, unlocking a geometric rise in computational power,

This is Moore’s Law. One reason it has held true in some arenas is that it became a convenient design goal. Moore’s Law originally referred to the capacity and reliability gains as advances in integrated circuits made for ever smaller transistors and connectors. It spread to other aspects of computing, and in some cases drove design, and did quite well there.
In the last decade or two we’ve been running into some fundamental design limits that require either brute-force techniques or entirely different technologies.
Hard disk drive technology had two growth spurts, the second driven by the discovery and application of the giant magnetoresistance effect. Recently, density growth rates have been nearly flat. The Wikipedia article http://en.wikipedia.org/wiki/Hard_disk_drive notes:

HDD areal density’s long term exponential growth has been similar to a 41% per year Moore’s law rate; the rate was 60–100% per year beginning in the early 1990s and continuing until about 2005, an increase which Gordon Moore (1997) called “flabbergasting” and he speculated that HDDs had “moved at least as fast as the semiconductor complexity.” However, the rate decreased dramatically around 2006 and, during 2011–2014, growth was in the annual range of 5–10%

Graphically (CAGR is Compound Annual Growth Rate):
Intel architecture processor speeds have been pretty much flat for several years. Performance increases focus on multicore chips and massive parallelism.
http://i.imgur.com/NFysh.png
All told, the biggest supercomputers in the world are doing pretty well and have maintained a steady growth rate, though there are signs of a slowdown in the last few years. The #1 system has over 3 million compute cores! http://top500.org/statistics/perfdevel/
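The “doubling every 18 months” figure quoted at the top of this thread is simple compound growth, and it is easy to sanity-check. The sketch below uses the Intel 4004’s roughly 2,300 transistors (1971) as a well-known starting point; the doubling periods are assumptions chosen for comparison, not official figures.

```python
# Compound-growth check of "doubling every N months".
def transistors(n0: float, years: float, doubling_months: float) -> float:
    """Projected count after `years` if the count doubles every `doubling_months`."""
    return n0 * 2 ** (years * 12 / doubling_months)

n_1971 = 2_300                                 # Intel 4004, 1971

# 40 years at 18-month doubling overshoots wildly (~2.5e11)...
print(f"{transistors(n_1971, 40, 18):.1e}")
# ...while 24-month doubling lands near the ~2.5e9 transistor counts
# actually seen in high-end CPUs around 2011.
print(f"{transistors(n_1971, 40, 24):.1e}")
```

The gap between the two projections shows how sensitive exponential claims are to the assumed doubling period: a six-month difference compounds into two orders of magnitude over four decades.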

• though there are signs of a slowdown in the last few years.

And me thinks that slowdown will likely continue with the primary reason being “program code execution time”.
Most all computer programs are being written in a high-level programming language which has to be compiled, assembled and/or converted to machine language. If it is a medium to large source program, it will result, after being compiled/assembled, in millions and millions of bytes/words of machine-language code that the Intel processor chip has to retrieve and act upon in order to process the data in question.
Like someone once said: iffen you have to run around the barn three (3) times before you can get in the door, then you are wasting a lot of “run” time.

As I mentioned below, the slowdown was due to the limits of air cooling, about 100 watts per square inch of chip area. Higher clock speeds were possible over the last half dozen years, but they ran too hot. The latest 22 nm process has reduced the power per clock cycle somewhat, allowing clock speeds to cross the 4 GHz line and still be air cooled.
I was also at Cray while they were assembling one of the upgrades to Redstorm, way cool.

• VikingExplorer says:

micro, running too hot and costing way too much (6B investment required) with too little ROI is exactly why Moore’s law is dead. Moore’s “law” is wishful thinking: it makes a claim about physics and economics, so it can’t request an exemption from the laws of physics and economics. Technology will still advance, but it will be more expensive.

micro, running too hot and costing way too much (6B investment required) with too little ROI is exactly why Moore’s law is dead. Moore’s “law” is wishful thinking: it makes a claim about physics and economics, so it can’t request an exemption from the laws of physics and economics. Technology will still advance, but it will be more expensive.

Not necessarily. Every time they shrink the process they get more die per wafer. It was a long time ago, but I spent 3 years in an IC failure and yield analysis lab. So they have a couple of options: they can put more cores in the same area, or they can get more die per lot (they’ve done both). More, faster cores sell for more; smaller, faster CPUs are cheaper to make.
All of the steps in wafer size as well as minimum feature size have been very expensive. IBM paid the bulk of the NRE for the step from 4″ to 6″; I’m not sure whether Intel or IBM drove the 6″ to 8″ step, but the steps past that were funded by Intel, and they could do it because of all of the Intel-based computers that have been sold (like your smartphone or tablet with gigabytes of memory; thank the often-hated WinTel).
What’s really interesting is that people like Sun came out with RISC to get higher clock speed/lower latency with smaller execution units, and then built the higher-level functions in code, with the intention of outperforming the CISC guys like Intel. But Intel sold so many more chips that they could afford to shrink the die and expand the wafer size; now they have higher clocks with CISC cores compared to the few RISC chips still on the market (they’re almost all gone, or have merged both concepts together as needed).
But as long as they keep selling so many chips, they will IMO keep doing the same thing that made them the giant they are.
Sure there is some limit, and it’s coming, but as I said before they said the same thing in 1983.

• VikingExplorer says:

If you admit that there are limits to physics and economics, then we agree.
As for whether we’re there yet, I would think a graph showing that it has already completely flattened out would be proof.
Computers are already too powerful for the everyday user. My son built an awesome computer for me. The problem is that even under heavy development use, the CPU is mostly idle. The average user only needs word processing and email. That’s why mobile devices rule. There is no economic demand for more power. Even now, mobile devices are usually powerful enough. When my wife got the iPhone 5s, which at the time was the fastest smart phone on the market, she didn’t notice a difference. For what she used it for, the iPhone 4 was fast enough, but not nice enough.

• If you admit that there are limits to physics and economics, then we agree.

Of course there will be physical and economic limits.

As for whether we’re there yet, I would think a graph showing that it has already completely flattened out would be proof.

I don’t.

Computers are already too powerful for the everyday user. My son built an awesome computer for me. The problem is that even under heavy development use, the CPU is mostly idle. The average user only needs word processing and email. That’s why mobile devices rule. There is no economic demand for more power. Even now, mobile devices are usually powerful enough. When my wife got the iPhone 5s, which at the time was the fastest smart phone on the market, she didn’t notice a difference. For what she used it for, the iPhone 4 was fast enough, but not nice enough.

But not for gamers, and not for what I do for work. And there will be new applications that require more performance. Some 60 years ago IBM thought we’d only need 5 computers; 30 years ago, who would have thought we’d all need a supercomputer to carry around? And yet our smartphones have the power of a supercomputer.
Realtime 3D visualization needs more power, data analytics needs more power, AI needs more power. I don’t know what applications will come along to take advantage of more power (autonomous cars, for example): things we don’t do because we don’t have the power, things we haven’t thought of yet.

• VikingExplorer says:

>> But not for gamers, not for what I do for work.
There will always be high-power users, but they are also willing to pay more money. Hence, that violates Moore’s conjecture. Without associated increases in productivity or value to end users, the market will decide that, because of Rock’s law, investment in the next generation isn’t worth it. That’s where we are now.
>> Realtime 3d visualization needs more power, data analytics needs more power
Yes, but since Netflix already runs well on existing platforms, the public has all the 3D visualization it needs. Gamers gravitate towards bigger screens. My kids play games with an Xbox One and a 9 foot high res screen. They aren’t likely to be interested in playing a game on a small mobile device.
Since high-power users are not demanding small size, the economic motivation isn’t there. Moore said it would double, but that depends on economics.
It’s not always better to have more transistors.

There will always be high-power users, but they are also willing to pay more money. Hence, that violates Moore’s conjecture.

An observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. Moore’s law predicts that this trend will continue into the foreseeable future.

Nothing in there about economy, and again, 20 years ago we didn’t have a market for all of the computers we have now; they developed in part due to performance and cost. And doubling transistors per area, as I just mentioned, has an economic advantage.

Yes, but since Netflix already runs well on existing platforms, the public has all the 3D visualization it needs. …..
Since high-power users are not demanding small size, the economic motivation isn’t there. Moore said it would double, but that depends on economics.

No, you have no idea about the size of cloud computing centers, or the cost of electricity just for cooling; there’s plenty of economy there.
And almost all of the 3D is pre-rendered; that’s why we still have multi-$1,000 video cards. Imagine a pair of stereo cameras that render a scene in 3D polygons. There are needs with economic value, and then there’s the whole security layer that has to be rebuilt (with quantum devices?), because obviously what we’re doing now isn’t going to cut it; 256-bit encryption can be cracked for minimal money on Google cloud. Maybe it’s 10 more years, maybe it’ll be another 50, and we can’t even imagine what these systems will be doing by then. rgb says we need 300,000x for GCMs. I just know it ain’t over yet.

• VikingExplorer says:

micro, you apparently didn’t read my link. When you say “Nothing in there about economy”, you’re missing that it’s implicit. New technologies don’t just appear by divine gift. In order for a new generation to appear, a lot of investment MUST take place. In order for that to happen, players must see an economic benefit. Did you miss the fact that the next gen has been delayed until 2018, 2020, or never? I’m friends with someone who has first-hand knowledge of the efforts to create 450 mm, and he explained that it is extremely challenging. To get the right frequency, they need to shoot a laser into a cloud of tin atoms. 450 mm delayed till 2023 . . . sometime . . . never.

>> the cost of electricity just for cooling

Right, but denser chips are generating more heat. Server farms don’t use the bleeding edge, since it’s not reliable enough. The fact that it’s delayed is already falsifying the Moore conjecture. Being theoretically possible isn’t enough. Moore predicted that it would happen. If in 2011 they were at 2.5B transistors, they should have been at 5B in 2013 and 10B in 2015. As of 2015, the highest transistor count in a commercially available CPU (in one chip) is 5.5 billion transistors (Intel’s 18-core Xeon Haswell-EP). They are one generation behind already, and the next one is on hold.
Actually, Moore’s conjecture was officially declared over back in February of 2014.

If per-transistor costs rise, suddenly higher transistor counts become a liability. It’s still possible to pack more transistors per square millimeter, but that mindset becomes a financial liability — every increase in density increases costs instead of reducing them.

Watch the video at the link. The joke is that Intel will go out of business before they admit that Moore’s law is over. We all need to accept reality as it is, without wishful thinking. We’re already 1.5 years past the point when those working in the field declared it over, so at this point it should be renamed Moore’s delusion.

• In general I think the overarching increase in computing power under Moore’s Law is still going to happen. The issues with 450 mm and EUV are distractions. Smaller features by themselves do not increase power usage, and you don’t have to have 450 mm to reduce the cost of computing. Lastly, and even though it might be more nightmare than dream, I trust that Bill Joy has access to the scientists who have a better view of the path forward. http://archive.wired.com/wired/archive/8.04/joy.html?pg=1&topic=&topic_set=

• VikingExplorer says:

Micro, did you really just give me a link from the year 2000? That’s 15 years ago. You need to pick up the pace, my friend; in the world of technology, opinions need to be updated every 15 months, not years. I really can’t explain how you think something that falsifies Moore’s law is a distraction from whether it’s true or not. As for performance, that’s also fallen by the wayside. Here you can see that clock speed and performance/clock have been flat since about 2003. Notice that it’s very difficult to find a graph that goes beyond 2011?

• VikingExplorer says:

micro, now that I’ve read the article more thoroughly, I’m taken aback. It’s blatantly anti-technology and delusional.
If that’s what you’re about, I’m surprised, but at least you’re in the right place. There are many around here who share the anti-science, anti-technology Luddite philosophy. Bill Joy is a dangerous idiot.

• First things first. I think this answers performance: https://www.cpubenchmark.net/high_end_cpus.html. It’s still going up. Not cheap, but these are very fast computers; our design server’s 2 MB memory board in 1984 was $20,000. So a couple thousand for a

No, I’m not a Luddite, not even close. If you haven’t already, you need to read Drexler. I hadn’t finished the article, nor noticed the date. And yet I still believe Moore’s Law will continue; maybe it won’t be silicon, maybe it’ll be carbon nanotubes, and maybe you’ll be right, but for now I’m in the “we’ve got at least another decade of better performance coming” camp.

• Bill Joy is a dangerous idiot.

I’m not sure about dangerous, but I’m a bit sadder; I spent a lot of years being impressed with him and the work at Sun. Too much left-coast koolaid.

• VE, here QkslvrZ@gmail.com
Why don’t you send me your email, like I said I have some stuff we can do that you might be interested in, but I avoid discussing work here(I’m not in the Semi Industry anymore, but..) .

• VikingExplorer says:

Well, you can’t shift positions like that and not admit you were wrong. It’s a straw man, since I never disputed that performance would continue to rise. It was a very specific objection to Moore’s delusion, which, even though Moore himself never said it would continue, has a large group of zealots who act like it will continue indefinitely, even though logic and common sense falsify it.
I think you are wrong about better performance as well. Now that the claim is disconnected from chip density, other means of improving performance are available. I think it’s reasonable to assume that performance will continue to improve, quickly or slowly, indefinitely.

64. David L. says:

At the heart of the program is the climate model, and that’s the problem. Models that interpolate within known bounds are much more reliable than models that extrapolate outside known bounds. The ideal gas law is an excellent example. Within bounds of temperature, pressure, volume, and moles, the model works fine. Outside certain ranges it deviates from reality and requires “fudge factors”. These cannot be determined from “first principles” but rather empirically.
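David L.’s ideal-gas example can be made concrete. The sketch below compares the ideal gas law with the van der Waals equation, whose a and b terms are exactly the kind of empirical “fudge factors” he describes. The constants are textbook values for CO2, chosen only for illustration.

```python
# Ideal gas law vs. van der Waals: agreement inside the "comfort zone",
# divergence outside it.
R = 0.08314            # gas constant, L·bar/(mol·K)
a, b = 3.640, 0.04267  # van der Waals constants for CO2 (L²·bar/mol², L/mol)

def p_ideal(n, V, T):
    """Ideal gas law: P = nRT / V."""
    return n * R * T / V

def p_vdw(n, V, T):
    """Van der Waals: excluded volume (b) and attraction (a) corrections."""
    return n * R * T / (V - n * b) - a * (n / V) ** 2

# Dilute gas (1 mol in 25 L at 300 K): the two models agree closely.
print(p_ideal(1, 25.0, 300), p_vdw(1, 25.0, 300))
# Compressed gas (1 mol in 0.5 L at 300 K): they diverge noticeably.
print(p_ideal(1, 0.5, 300), p_vdw(1, 0.5, 300))
```

At low density the empirical corrections barely matter; compress the gas and the ideal law is off by roughly 20%. A model that interpolates within its validated range can still fail badly when extrapolated outside it.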

65. Gamecock says:

I just wish climate modelers would publish a verbal description of what their model is/how it is supposed to work. “Model” has become too nebulous.
I hear that GCMs use gridded atmosphere. But I never hear any discussion of what interactions are programmed, what influences what, what original values are used, quantitative relationships, etc.
I want a model of the model. Code makes a wonderful barrier to knowledge of what is being done. We look at model results, and declare, “That can’t be!” We need to be able to focus our analysis on the design phase. We will likely be shocked by what has been left out.

66. John says:

And that’s why I often say I can write a program to give you the results you want regardless of the input.

67. Climate models are different because they are Gospel In Gospel Out. The Garbage is in their Bible.

• terrence says:

well said Dr Ball – I had not heard that one before – Gospel in Gospel out, THANK YOU

68. Leonard Weinstein says:

“all computer models are wrong, but some are useful”. Unfortunately, climate models so far have not been shown to be useful, and do not appear to show any promise. Please send more money.

69. Mr and Mrs David Hume says:

We are lost. Surely a computer tells us something that we do not expect only in the sense, say, that I have no more than a general idea of what to expect (being no mathematician of any kind) if I divide one very large number by another.
Manipulating the numbers according to the rules that I was taught at primary school (provided that I apply them accurately) will give me an answer that, unless I were an Indian arithmetical genius (who, however, applies the same primary-school rules), would otherwise be unknown to me. A pocket calculator would help in applying the rules, as would a computer, but neither of them would give me a different or superior answer; it would just produce it almost as quickly as the Indian arithmetical genius.
The relationship of the original numbers to any real world problem will depend on empirical verification. The primary school rules, the pocket calculator and even the Met Office’s computer will be of no help there.
How is modelling different ? Is there something that, all these years, as humble scribes in the world of business and administration, we have failed to understand about computers ?

• @ David Hume et al.
Maybe you should rephrase your question to state, to wit:
Is there something that, [after] all these years, as humble scribes in the world of business and administration, we have failed to understand about the IRS Income Tax Laws?
It’s not the computer, nor is it the IRS, but it’s how you “apply the rules” that counts.

70. JP says:

One of the hottest growing fields on Wall St circa 1990-2007 was econometrics. Due to the rapid fall in the price of hardware (compute, memory, and storage), firms could afford to build sophisticated risk models that didn’t rely on expensive mainframes. All of a sudden there was a very large demand for economists with mathematical and software development skills. Nerdy men and women with advanced degrees did everything from building risk models for mortgage derivatives to Credit Default Swaps. And what was the result? One of the largest stock market crashes in human history.

Yes, there were a few savvy investors like John Paulson, who, using his own risk models, made one of the biggest short sells in recent history (he shorted corporate bonds that were backed by Credit Default Swaps). But overall I believe Paulson used his own experience and intuition in deciding the timing of the short sell. If the CDSs and CDOs had collapsed at a later time, Paulson would have lost his entire portfolio of several hundred million. But Paulson nailed it and made $3 billion on the bet. The point is that most risk models still projected healthy returns on CDSs and Synthetic CDOs right into 2008. Of course, people could argue that Goldman Sachs modelers knew better (see Goldman’s Abacus scandal).

Climate models, like risk models, rely on sophisticated analytics. But, like forecasting risk in the securities market, climate models do not live up to their promise. And we shouldn’t confuse forecast models and climate models. The improvement in forecast models during the last 40 years has been phenomenal. For some reason people who should know better confuse the two. As a matter of fact, as climate models fail to project global temperature patterns, Alarmists have used weather events (heavy rains in Texas, blizzards on the East Coast) to verify their projections.
Currently, they are forced to make fools of themselves as they use the absurdity of “extreme weather” to say they were right all along, all while global temperatures haven’t budged in 20 years.

• Gamecock says:

“Climate models like risk models rely on sophisticated analytics.”

Their analytics are unpublished. How would you know if they were sophisticated?

• JP says:

Unpublished? Good grief, you’d have a difficult time not finding published work in quantitative finance, financial modeling, single-equation econometrics, labor, price, casualty, and risk modeling. Where do you think the econometricians come from? Universities. Both Ross McKitrick and Steven McIntyre come out of this field. Both have at least Masters degrees in advanced numerical analysis. The Federal Reserve employs a large number of economists who have Masters degrees and PhDs in modeling. But the vast majority of these specialists go to large Wall St firms. And they bring with them ideas and theories well published in the world of advanced economics.

• Gamecock says:

Fine, JP. Direct me to a published climate GCM model.

• Gamecock says:

JP, your declaration of orthodoxy is not evidence of sophistication. It is evidence of sophistry.

71. Jquip says:

lsvalgaard: “I disagree. The computer [more precisely its software] can make visible that which we cannot see. A couple of examples will suffice. The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence. The computer can, given the mass, the composition of a star, and nuclear reaction rates, tell us how many neutrinos of what energy will be produced. The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.”

You’re talking about different cases here. In the Pluto case we are already assuming the path, that is, the physical end goal in reality, and the computations are just a satisfiability problem about the parameters of the rocket necessary to satisfy that goal.
It is, in all respects, a curve fitting exercise with a strong preconception of reaching Pluto. That we accept the validity of this computational exercise isn’t really germane. We accept it because we engineer with it. That is, we test the model against reality. A lot. But if we did not test the model, or had no way to do so, then what is the worth of the model? If you cannot engineer with it, you cannot test it, and so you cannot state what is even going to occur in reality.

Your other example is open-ended. That is, you set up your simulation and look for the emergent answer. There is no end goal that you’re wed to, and it is not a satisfiability or a preconception problem for the outcome. You just put in your priors and see what falls out the other side. If you have accurately put in all the relevant bits about neutrino production from theory, then you receive an unbounded number. This you can test as you like, assuming we can engineer a neutrino detector.

Both considerations avoid dealing with a singular problem with computers, however. A very popular misconception, but a deep one: computers do not perform mathematics. They simulate it. To get mathematics done properly on a computer, you simply cannot make direct and uncritical use of the built-in CPU simulators for integer and real numbers. You need to produce an entire tapestry of software that deals with things in symbolic fashion where possible, produce arbitrary/infinite-precision numeric software libraries on demand, or do a whole host of extra computation to deal with the inaccuracies that creep in as a consequence of making uncritical and blind use of the hardware’s internal math simulators.

And this is a huge problem. For if we are dealing with math, we simply publish our paper with the relevant mathematical hieroglyphics and know that everyone else can sort it out. But if we run any calculation on a computer, we must first put those hieroglyphics into a form that the computer can savvy and simulate.
But that also means that the software code is not math — properly understood — and that any differences in the hardware math simulators may cause divergent results that are inscrutable from the point of view of the hand-written hieroglyphics. And due to the nature of these math simulators, it is entirely possible to accurately encode the exact same math in two different manners and receive wildly different results from the machine. And yet, as a point of hand-written math, the results should be identical. This errant behaviour can be produced by something as simple as multiplying before subtracting and vice versa.

By consequence, it is not enough to simply state ‘computers are just big calculators.’ If they are used to generate results, then we need to know that their outcome is legit. Regardless of what math the scientist thought he might be doing, the computer will do exactly what it is told — even if the scientist isn’t aware that he told it to do so. If a paper does not include a full spec of the software code, software library versions, compiler versions and hardware used, then the results cannot be guaranteed to be replicable. And we are disbarred from then stating ‘this is the math’ unless we are minimally shown these things. Or unless all the math is simulated by a known software framework that is well vetted for correctness in these domains.

• The program was written for the case of reaching Pluto, but is general purpose and will work with any goal [Mars, Jupiter, etc.], so the answer was not ‘built in’.

• tty says:

And when you get right down to it, that program is really based on a single equation: F = G × m1 × m2 / r². Now, I don’t dispute that modern orbit calculation programs are quite marvellous, since they can handle (by numerical approximations) the gravitational interaction of a large number of bodies, but the underlying physics is actually rather simple, certainly in comparison to the Earth’s climate system.
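tty’s single-equation point is easy to illustrate: a toy orbit program really is little more than F = G·m1·m2/r² stepped forward in time. A minimal sketch in normalized units (G·M = 1), not any real mission code:

```python
import math

def integrate_orbit(x, y, vx, vy, dt, steps):
    """Leapfrog integration of a test mass about a central body (G*M = 1)."""
    def accel(px, py):
        r3 = (px * px + py * py) ** 1.5
        return -px / r3, -py / r3        # a = -GM * r_vec / |r|^3

    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy          # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

# A circular orbit (r = 1, v = 1) should return to its start after 2*pi.
steps = 10_000
x, y, _, _ = integrate_orbit(1.0, 0.0, 0.0, 1.0, 2 * math.pi / steps, steps)
print(x, y)   # close to (1, 0)
```

Everything beyond Newton’s one force law here is bookkeeping (the time-stepping scheme), which is precisely why such codes can be tested against reality: fly the trajectory and see if you arrive.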
• There are also relativistic and tidal effects, so it is a bit more complicated than just Newtonian. But the issue is still the same: is the model deliberately misconstructed so as to give the answer one wants? I will maintain it is not. Some conspiracy nuts might disagree, at their peril.

• VikingExplorer says:

tty, except we don’t determine any of those values very well when the number of bodies exceeds 3. The sun is losing mass, orbits are elliptical, orbits decay, etc. The problem is not easy. However, people aren’t giving up and declaring the problem unsolvable.

• graphicconception says:

“… tell us how many neutrinos of what energy will be produced.”

How do you know it gave you the right answer? Can you work it out without the computer, or were you guessing?

• I know the physics and have measured the nuclear reaction rates in the laboratory, so I can easily compute the number of neutrinos to expect. And when we measure that number, we find a very good match. To give you a taste of a similar calculation: can I calculate how much helium was generated in the Big Bang? Yes I can, and you can even understand the calculation. Here it is: http://www.leif.org/research/Helium.pdf

• catweazle666 says:

“To give you a taste of a similar calculation: can I calculate how much helium was generated in the Big Bang? Yes I can, and you can even understand the calculation. Here it is: http://www.leif.org/research/Helium.pdf”

Very impressive, Professor. Tell me, have you calculated how many angels can dance on the head of a pin yet?

• It depends on the size of the pin. Please give me your best unbiased estimate of pin size. Actual data, no models, please.

• The programs show their replicability (is there such a word? – Now there is) by working consistently with other input and goals [e.g. going to Mars instead]. All the things you mention would be nice, but never happen in real life.
• “All the things you mention would be nice, but never happen in real life.” Now who is being nihilistic? 🙂

• Jquip says: “The program was written for the case of reaching Pluto, but is general purpose and will work with any goal [Mars, Jupiter, etc], so the answer was not ‘built in’.” The positions of the start and end are expressly encoded. As otherwise we would enter the physics of a rocket engine, run a simulation, and the rocket would end up in some random place. Such simulations are, in every case, curve fitting or satisfiability problems. The end state is always encoded necessarily. “(is there such a word? – Now there is)” http://lmgtfy.com/?q=definition+of+replicability “All the things you mention would be nice, but never happen in real life.” We are absolutely agreed here. The current practice of science is to not only fail to produce replicable experiments, it is to fail to produce replicable math. Any disagreement between the two of us would seem to be over whether to call this real-life happening ‘science’ or not. Or, whatever we choose to call it, whether or not it should be given more credibility than personal anecdotes.

• “The positions of the start and end are expressly encoded” No, the code takes these as input.

• Jquip says: “No, the code takes these as input.” Of course, that’s the point everyone else is on about. It seems we were in a contest of vigorous agreement.

• William C Rostron says: jquip, you write: “But that also means that the software code is not math — properly understood — and that any differences in the hardware math simulators may cause divergent results that are inscrutable from the point of view of the hand-written hieroglyphics.” This statement is not correct. What computers do at the most fundamental level is most certainly mathematics; it’s just that that mathematics is a subset of the greater field of mathematics.
Computers are based on Boolean algebra; the internal operation of any computer is as mathematical as anything could possibly be. This fact has tremendous implications in the patent world, because software, per se, is mathematics, and mathematics is excluded from patentability because it is a fundamental tool like language or other natural phenomena. What is patentable is not the mathematics itself, but the results of that computation, which may well be a unique invention. (I am not a patent lawyer.)

It is true that computers cannot render certain mathematical concepts in real form. For example, irrational numbers cannot be exactly rendered because of the discrete coding of numerical values in the computer. Also, Boolean algebra (it’s an algebra subset) is inherently incapable of rendering certain higher concepts of mathematics because of the discrete nature of logic. However, any mathematical concept which relies on logic for determination can be rendered perfectly in Boolean algebra, including the rules of algebra itself. The Maple symbolic engine comes to mind. So the algebraic rules for the manipulation of symbols can be perfectly rendered on a computer, just as those symbols are manipulated on a sheet of paper by a skilled mathematician.

There is an inescapable difference between computation by logic and the real world. The real world appears to be not logical at its most fundamental level – the level of quantum mechanics. The motion of atoms in a substance cannot be exactly determined, though that motion might be constrained to a high degree. Computation in Boolean algebra, in contrast, is always exactly determined. From first principles it is not possible to render quantum phenomena in logic; they are different worlds. Mathematics itself is not a quantum phenomenon, but discovery of the rules of mathematics may well be.
Gödel’s incompleteness theorem implies that the rules of mathematics cannot be derived from first principles; rather, they must be discovered. -BillR

72. Aphan says: LeifV said: “But we have to keep trying, and eventually we may get it right. It is nihilistic to think that just because it doesn’t work today, it will never work. At some point in time it will work well enough to be useful.” I haven’t read all the comments yet, so someone else might have addressed this, but when I read that I asked: “But the climate modelers/AGW people think they ARE getting it right ALREADY. Many of them think they DO WORK TODAY. And most, if not all, of them think they work well enough to be useful NOW. So why would they continue to try to CHANGE things, adjust, modify, or ‘learn’ anything new or outside of their current understanding?”

And another question, Leif: what if they currently had a model that produces EXACTLY, or close to it, the actual climate conditions we are experiencing today? Would any rational, intelligent person believe that they’ve actually, correctly modeled Earth’s climate? No, a rational, intelligent person would have to WAIT and continue to run that model for years, decades, all the while comparing it to the current climate, before they’d even dare to believe that the model MIGHT actually be correct, rather than just matching up for a short period of time for some random reason, and that there simply cannot be another outside factor left that might cause it to diverge from reality in the future.

But I would bet my life, yes, my actual life, that they would announce to the world IMMEDIATELY that they can now model the climate perfectly and accurately and parade the mantra that “The science really is settled now”. And millions of people would believe them! It’s not nihilism to think that. It’s REALISM based upon evidence of human behavior and hubris spread over thousands of years!
And Eric, this is the priceless statement for me: “They might as well say they wrote their opinion into a MS Word document, and printed it – here is the proof see, its printed on a piece of paper…” Exactly! But sadly, if that paper has NASA, NOAA, White House, or hell, even a college university letterhead at the top, then it IS actual “Science” to sooooooooooooo many people! How stupid can people be?

• Well, there are well-established statistical measures of how long you have to wait before declaring that the model works to a specified accuracy or ‘believability’. Rational persons would use such a measure. An analog would be to think about quality control. You select a small number of items of a product and check if they are OK. It is too expensive to check ALL the items, so the question becomes “how many should I check to be assured of a given degree of ‘goodness’?” This is not rocket science and takes place every day in countless enterprises.

• Jquip says: Well sure, statistics is rather well formulated even if its uses fall short quite often. And the difference in use can be remarkably subtle. An easy-to-understand example of this difficulty in statistical modelling is demonstrated by Bertrand’s Paradox. http://en.wikipedia.org/wiki/Bertrand_paradox_%28probability%29

• The quality control procedure works very well regardless of any such subtleties.

• Jquip says: “The quality control procedure works very well regardless of any such subtleties.” Define ‘very well’ — what are the statistical bounds of such a term? Each of the three examples in Bertrand’s Paradox works ‘very well’ depending on what value of ‘very well’ we choose. And yes, there are subtleties in quality control, as there are common and special causes, and quite often a lack of Gaussian distribution. It’s not simply a matter of chucking a bell curve at things and then repeating the Central Limit Theorem as a doxology.

• There is a lot of literature about that. Google is your friend.
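Bertrand’s Paradox is easy to reproduce numerically. A quick Monte Carlo sketch (my own illustration, not from the thread) of the three classic “random chord” recipes gives three different answers to the same question: is a random chord of a unit circle longer than the side of the inscribed equilateral triangle (√3)?

```python
import math
import random

random.seed(42)
N = 200_000
SIDE = math.sqrt(3)  # side of the equilateral triangle inscribed in a unit circle

def endpoints():
    # Recipe 1: chord between two uniformly random points on the circle.
    t = abs(random.uniform(0, 2 * math.pi) - random.uniform(0, 2 * math.pi))
    return 2 * math.sin(t / 2)

def radial():
    # Recipe 2: chord's midpoint chosen uniformly along a random radius.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def midpoint():
    # Recipe 3: chord's midpoint chosen uniformly over the disk's area.
    d = math.sqrt(random.uniform(0, 1))
    return 2 * math.sqrt(1 - d * d)

results = {}
for name, chord in [("endpoints", endpoints), ("radial", radial), ("midpoint", midpoint)]:
    results[name] = sum(chord() > SIDE for _ in range(N)) / N
    print(name, round(results[name], 3))  # ≈ 1/3, 1/2 and 1/4 respectively
```

All three recipes are perfectly reasonable readings of “pick a chord at random”, which is exactly the subtlety being argued: the answer depends on a modelling choice made before any computation happens.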
• Jquip says: “There is a lot of literature about that. Google is your friend.” Indeed there is. But I’m not sure how your opinion of my ignorance is germane to the idea that the statistical literature I’ve referenced establishes that there are subtleties and special causes that need to be dealt with. Not just in the limited case of product quality control, but as to the very point Aphan was making about validating the model against observation and the special causes — ‘outside factors’ — that could cause divergence from theoretical models. Name-dropping Google here just doesn’t suffice to carry as a rebuttal.

• Jquip says: “not rebuttal, just guidance.” My apologies, I thought you disagreed with my point. Just another case of vigorous agreement between us, it seems.

• Perhaps I missed what your point was. Many people have persisted in this thread in pointing out that I missed their points, so why not yours. My point, in case you missed it, is that without specifically identifying what special circumstance you have in mind, it remains hand waving. Granted that hand waves are the most common wave phenomenon known to man.

• Jquip says: “My point, in case you missed it, is that without specifically identifying what special circumstance you have in mind, it remains hand waving.” This statement is hard to square with your previous statement, which I will quote here verbatim: “Well, there are well-established statistical measures of how long you have to wait before declaring that the model works to a specified accuracy or ‘believability’. Rational persons would use such a measure. An analog would be to think about quality control. You select a small number of items of a product and check if they are OK. It is too expensive to check ALL the items, so the question becomes ‘how many should I check to be assured of a given degree of goodness’. This is not rocket science and takes place every day in countless enterprises.”
One of the subtleties of such a trivially simple statistical model that I mentioned was ‘special causes.’ Given that you apparently missed your own point due to these subtleties, permit me to give you some guidance on basic statistics. An easy-to-read primer on this specific topic can be found here: http://en.wikipedia.org/wiki/Common_cause_and_special_cause_(statistics) The important portion I will quote inline:

“Special-cause variation is characterised by: New, unanticipated, emergent or previously neglected phenomena within the system; Variation inherently unpredictable, even probabilistically; Variation outside the historical experience base; and Evidence of some inherent change in the system or our knowledge of it.”

I’ve taken the liberty of bolding the point most relevant to coupled and chaotic systems. But all of them are directly relevant to Aphan’s questions. And you’ll forgive any appearance of a knock on your competence as a scientist, but these basic statistical issues seem to have been too subtle for you personally in this short and light discussion about statistics. And since you are unquestionably top tier in your field, and I would submit within the scientific disciplines generally, it can only be fairly concluded that your lesser peers have an astonishing lack of competency in basic statistics, their construction, use, and validation through experimentation and observation.

• I think you are overreaching and extrapolating beyond reason. Astronomers deal routinely with enormous masses of data and have great practice and experience in applying statistics. For example, our solar satellite SDO collects more than a terabyte of data every day. The Kepler satellite was staring at and observing 150,000 stars simultaneously. Galaxy and star surveys deal in the billions. Prediction of solar activity, flares, and interplanetary medium characteristics is routine. If there is something we are good at it is statistics.
• Jquip says: “If there is something we are good at it is statistics.” Well, if we appeal to your authority, as we have both done now, then no — it is not obvious. We can state from that, however, that if there is something you and your discipline are, it is being better at statistics than the other disciplines in science. But being ‘not as bad as’ is not synonymous with ‘good.’ And if your statement is that massive volumes of data collection free you from this, then that too is in error. It does not, of course. We can go through the long list of statistical failures in astronomy and astrophysics that were based on shoddy use of statistics and undue care with respect to ‘special causes.’ A foundational consideration in product quality control; a topic you’ve presented familiarity with and declared to be “… not rocket science and takes place every day in countless enterprises.” But these are all the very things that Aphan referred to. Those things that are not rocket science, but are apparently beyond men that look at the stars. I submit that this has nothing to do with you personally, beyond that you are a member of the scientific disciplines. But there is an ever-growing ignorance about statistics in science, even as science relies in greater part on statistics. This is an absurd outcome.

73. Phil Cartier says: To all: I hope all of you will watch this video on YouTube by Dr. Christopher Essex, https://www.youtube.com/watch?v=19q1i-wAUpY, if you haven’t already done so. The discussion here seems to show that all the usual talking points are showing up. The article is woefully deficient in that it misses how modeling climate on computers is limited in many unexpected ways, as Dr. Essex points out. It is not simply a matter of writing a program biased by the author. Computers and programs are limited in other basic ways that make the problem literally impossible without an improvement of at least 10^20 in computing power.
Some of the other points, in no specific order:

The basic physics has not been solved. The Navier–Stokes equations (viscous fluid flow), like many multi-variable partial differential equations, have not been solved analytically but have to be estimated by numerical computer methods.

Computers generally use floating-point representation: an 11-bit exponent and a 53-bit significand (one bit of which is implicit), packed together with a sign bit into 64 bits (8 bytes), or into even longer, more finely accurate formats. For most things this can be made to work satisfactorily, but numbers such as 1/3 or pi cannot be represented exactly since they have an infinite number of places. Climate models all include partial differential equations that have to be estimated, not calculated, and every calculation results in a slight error. The floor of that error is set by the machine epsilon: the smallest number that can be added to one and give a different number. When trying to solve equations this way, eventually the answer “blows up” because the accumulation of small errors grows with every calculation.

Matters of scale: both water and air are fluids. The flow can be smooth (laminar), chaotic (vortices), or both. The moving fluid transports energy through its flow. As the flow slows down (say, a stream rushing into a lake), the vortices gradually break up and the speed slows until, at around 1 mm, any organized motion disappears into the Brownian motion of vibrating atoms – a tiny bit faster or hotter, but indistinguishable. So climate calculations would need to be done on a millimeter or smaller scale. How many square millimeters on the Earth’s surface, or how many cubic ones in the air and ocean, are there?

Since climate is weather in the long term, climate models are essentially weather models with scales way, way larger than ones that are already way, way too large. Weather models now are generally fairly good for a few days to a week or so.
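The machine epsilon mentioned above can be found directly, and the accumulation of representation error is easy to see. A small sketch (my own illustration):

```python
import sys

# Machine epsilon by bisection: halve eps until 1.0 + eps rounds back to 1.0.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps)                            # 2.220446049250313e-16 for IEEE 754 doubles
print(eps == sys.float_info.epsilon)  # True -- matches the runtime's own value

# Accumulated error: 0.1 has no exact binary representation, so adding it
# ten million times does not give exactly one million.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total == 1_000_000.0)           # False -- the rounding errors have piled up
```

A climate model performs vastly more operations than this loop, which is why error growth, not just the size of any single rounding error, is the concern.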
Climate models are unlikely to be accurate for even a month, except for the fact that the predictions they make have such wide error ranges it’s hard to say exactly when they’ve gone wrong. Thank you, Dr. Essex. You are a much better mathematician and physicist than I am.

• MRW says: This was a brilliant hour to watch. Dr. Essex shows how even if you do put the proper data and formulae in (the physics), you “can still get garbage out” of a finite computer. He shows it mathematically. (Also see his paper “Numerical Monsters,” Christopher Essex, Matt Davison, and Christian Schulzky. http://www.math.unm.edu/~vageli/courses/Ma505/I/p16-essex.pdf)

74. higley7 says: A world climate computer model based on real scientific principles and relationships would be great. Unfortunately, most of today’s models are based on mathematical approximations, not real physical laws. The ultimate program should be able to take the current data from the field and predict future conditions. But just one set of data wouldn’t do. Each day, as more sets of data are entered, the program would LEARN which way parameters were changing and how fast. Predictions should then improve. But the program would still be dumb, as it has not yet seen conditions that are not constantly changing in one direction, or that have cyclical changes, random changes, or mixtures of all three trends. As the system learns the range and possibilities of all of this, with experience, it would become smarter and smarter, based completely on how long it has to learn and the number of major factors it has to consider in its calculations. As there are probably more than 75 major influences on climate and the Earth is constantly changing, even continental drift has to be taken into account.

• Jquip says: Well sure, this is done all the time and is the entire point of ‘Big Data.’ But it’s worth repeating: Correlation is not causation. And these are correlative devices.
But even then, if there are multiple unknowns, then a lack of correlation doesn’t demonstrate a lack of causation for anything but the entire set of unknowns. There are limited ways to deal with these issues, but there are no silver bullets.

75. ren says: There are models very useful for pilots, for example NAIRAS. Hardly anyone knows about them. You cannot see them in the media. Strong GCR can easily damage your computer in the airplane.

76. ren says: Very good operational models of temperature and pressure in the stratosphere over the polar circle. Useful for weather forecasts. Who knows about them?

77. n.n says: Computers are magic, modelers are magicians, and scientists, in the post-normal era, are sorcerers. The uniform theory of science, which conflated the logical domains, was established through liberal assumptions of uniformity and continuity, reasoning through inference, and exaggerated significance of corollary evidence, outside of a limited frame of reference.

78. dmh says: The debate in this thread is largely a dispute between people with various levels of expertise in various disciplines of IT and its application to modeling. That’s not the big-picture issue. The big picture is that the vast majority of people have no experience (except as an end user) in IT or in modeling. They can certainly judge the output of their computer-generated bank statement, and because their experience is that only in the rarest of occasions is such output wrong (or pick from one of many other examples; this is hardly the only one in our daily lives), it becomes natural to assume that ALL computer-generated outputs are similarly accurate. Thus, computer modelling becomes, for the public at large, the ultimate Appeal to Authority, such Authority demonstrably undeserved in the case of climate models.
• KevinK says: “largely a dispute between people with various levels of expertise in various disciplines of IT and its application to modeling” Quite correct; in my field the IT folks are responsible for setting up networks, servers, file mapping and storage, and security. No one in IT uses the computers to perform modelling (well, they might use a spreadsheet to project how many new computers will be needed if we hire 100 people). In engineering, the use of computers for modelling is only one more tool in the toolbox. It does not replace a comprehensive understanding of the underlying physical phenomena. And a computer model is never used without full verification in incremental steps along the way.

An embarrassing example of the frailty of computer models: about 10 years ago a large airplane manufacturer attempted to compute the length of all the cables needed inside a new “fly by wire” airplane design. They even ordered all the cables manufactured to exact lengths, with connectors on each end, before the first plane was built. Once they attempted to assemble the airplane, it turned out that most of the cables were just a “wee bit” too short. The folks designing the cables used a 2-D model of the plane while everybody else was using a 3-D model of the plane… It took almost a year to figure out the correct length of all the cables and re-fabricate them. Nobody wants a “fly by wire” airplane with “splices” in the wiring, go figure.

If modern computer models of relatively simple stuff like wire lengths, supported by lots of IT professionals, can’t “project” the correct length of a bunch of wires, who the heck really believes they can tell us the weather in the year 2100?

Oh, that airplane would be the Airbus A380, the largest commercial passenger plane yet; the CEO (almost?) lost his job over the screw-up. Cheers, KevinK.

• Patrick says: If I recall correctly, the tailplane section of the A380 was “modelled” in a computer and built from composite materials.
It needed reworking after actual flights because of some stress fractures that modelling did not expose.

79. Ralph Kramden says: I’m reminded of the “Right Climate Stuff” team, who described the CAGW campaign as nothing more than an unproven computer model combined with a lot of speculation. If you take away the computer model, then what is left?

80. I totally agree with you that computers have limitations and climate modelling may not be accurate. I think that it’s much more important to understand climate: what the mechanisms of climate change are, what happened in the past and led to the present. Everybody is preoccupied with climate change, but it is useless to discuss only the future without understanding the main cause of the climate’s transformation. My opinion is that the ocean and human activity on the ocean (mostly naval wars) have a big contribution in the matter. Aren’t we ignoring that? Shouldn’t we pay more attention to the ocean?

81. Phil says: Let’s be clear here. Models can be very valuable. Programmers can, and do, do their jobs right. But there are at least four major questions here:

– How closely does the model approximate reality?
– How good is the *implementation* of the model (a programming issue)?
– What is the quality and resolution of the real-world input data into the model?
– When the implemented model is run, how closely does it actually approximate reality, especially when historical data is entered and compared to later known historical data?

Each of these areas is big and important in its own right. But all of them can be trumped by another question:

– Are those dealing with the model honest?

Dishonesty injected into a complex problem means that it’s guaranteed to no longer correspond to reality. Indeed, that is what dishonesty means: that one states one thing, while knowing that the reality is something else.
It is possible for a computer program to emit a pre-determined result, but that is not the way a well-designed and well-implemented model would function. That’s possible if *fraud* and *dishonesty* are involved, and there are many ways to be fraudulent and dishonest in the midst of such complexity.

I think somebody brought up the case of the classic “Game of Life”, a cellular automata program with very simple rules that can create breathtakingly complex dynamic structures from very simple starting input. Nobody could predict the final structure of an arbitrary random input in, say, 10 million generations by just looking at the input.

A “climate modeler” can at least be taken seriously if all four of those major points are *honestly* considered. Not accepted at face value, but at least taken seriously, if they are open to critiques as all real scientists are. But if dishonesty is involved, nothing about what they’re doing should be taken seriously. At minimum, to be taken seriously, a modeler should publicly present detailed documentation on all of the above four points, including the source code to any custom programs used.

82. Thanks, Eric Worrall, I hope you are right. If only we knew how the Earth’s oceans/weather system works! Proof that we don’t is in the models themselves, in how they fail. Programmers and computers actually do a good job, but in this case, Garbage In -> Garbage Out. And the job is very expensive.

83. Sun Spot says: I have spent 30+ years writing computer code; software does not do science and does not generate scientific data. In science, a computer program only expresses a hypothesis. Many people who do not understand software confuse doing engineering using computers with doing science with computers; the latter is a fallacy.

• Sun Spot says: p.s. claiming computer code does science via statistical methodology is particularly absurd.

84.
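The “Game of Life” mentioned above is a good concrete illustration: its entire rule set fits in a few lines, yet its long-run behaviour is effectively unpredictable without simply running it. A minimal sketch (my own, using the standard rules), tracking the famous “glider” pattern:

```python
# Conway's Game of Life: a live cell survives with 2 or 3 live neighbours;
# a dead cell with exactly 3 live neighbours becomes live.
def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

# The "glider": five cells that reassemble themselves one cell over,
# diagonally, every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in those few lines “looks like” a self-propelling spaceship, which is the point: trivially simple, fully deterministic rules can still produce behaviour you cannot read off the program text.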
steverichards1984 says: Looking for some recent climate model code, I came across http://www.cesm.ucar.edu/models/ccsm3.0/ccsm/doc/UsersGuide/UsersGuide/node9.html which has a ‘working’ model you can download and run. I can see one problem straight away in the help section: ‘Software Trapping: Short CCSM runs are carried out regularly with hardware and software trapping turned on. Often, such trapping will detect code that produces floating-point exceptions, out-of-bounds array indexing and other run-time errors. Runs with trapping turned on are typically much slower than production runs, but they provide extra confidence in the robustness of the code. Trapping is turned on automatically in these test cases via setting $DEBUG to TRUE in env_run. Test case names begin with “DB”.’
So when a long simulation is performed, all error checking is disabled!!! I wonder how many errors are missed?
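The difference between a trapped “debug” run and an unchecked “production” run is easy to illustrate. A toy sketch (entirely hypothetical, not CCSM code): an unstable update rule silently overflows to infinity, the untrapped run happily reports `inf` as its answer, while the trapped run pinpoints the step where things went wrong.

```python
import math

def update(x):
    # Hypothetical unstable update rule (illustration only): each step
    # amplifies x, so the value eventually overflows to inf.
    return x * 1.1 + 1.0

def run(steps, trapping=False):
    x = 1.0
    for i in range(steps):
        x = update(x)
        if trapping and not math.isfinite(x):
            # The "debug" run stops at the first non-finite value.
            raise FloatingPointError(f"non-finite value at step {i}")
    return x

print(run(10_000))  # inf -- the untrapped "production" run's silent answer

try:
    run(10_000, trapping=True)
except FloatingPointError as e:
    print(e)        # the trapped run reports where the blow-up happened
```

Once the value goes non-finite, every later step just propagates the garbage, which is exactly why disabling trapping on the long runs is worth worrying about.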
As an indication of the openness and transparency within the climate research community, I note that a ‘current and state of the art’ simulator is available: http://www.cesm.ucar.edu/models/cmip6.html
The CMIP6 ‘product’ funded by Government/public money is only available to those ‘within the circle of trust’:
‘Access to developmental versions of CCSM and its component models can be requested by filling out an online form. (Access is restricted to only CCSM developers and close collaborators.)’
I wonder what they want to keep from prying eyes?

• This is entirely typical of the entire CACCA ho@x modus operandi.
It is not science. It is totalitarian group think propaganda and indoctrination. Any so-called scientist participating in this charade is a charlatan.

• So you think there is a global conspiracy to promote this hoax. Do you also think the moon landing was a hoax? You know, you can’t trust those government-funded types, regardless which country or continent they come from. We are all in it together.

85. As an IT expert, it has always been interesting to witness the level of blind confidence the non-“IT hoi polloi” place in those sometimes completely unreliable programs, links and mathematical calculations (even small spreadsheets must contain some checks and balances). When one banks on-line, one assumes that the transaction will be carried out as requested. However, when it sometimes is not, we are never really that surprised, are we?
Someone has failed to do their work properly.
Such is the method used in climate models. One can predict the outcome before the outcome is demonstrated. How can any thinking person spend their entire life stating that it works with “95% accuracy”, when they are well aware that the outcome has already been roundly, soundly and clearly manipulated? That is not science or common sense. That borders on ignorance, politics, dishonesty and obvious stupidity. Well done Bill and Co.

86. VikingExplorer says:

I agree with the view that many people, but maybe especially AGW folks, have a magical view of the computer. It seems to be a view based on Star Trek, rather than reality.
I agree with people who are saying that models have minimal software engineering in them. I also agree with people (Leif?) who point out that the n-body problem is similar in complexity. I pointed this out in the discussion on Chaos, and the general response was “WTF?”. I’m surprised that people underestimate the n-body problem and over-estimate the climate problem. People are quite willing to suspend disbelief for one, but not the other. We can’t simply say “computer, verify voice print, Picard alpha beta prime; what is the mass of the sun?”. They are both extremely difficult problems, but seem not to be beyond human understanding.
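The sensitivity that makes the n-body problem (and climate) so hard is easy to demonstrate. Here is a toy planar three-body integrator (my own sketch, with made-up unit masses and initial conditions, plus a small softening term to avoid singularities at close encounters): two runs whose initial conditions differ by one part in a billion soon produce visibly different trajectories.

```python
import math

G = 1.0        # gravitational constant in toy units (assumption)
SOFTEN = 1e-6  # softening term so close encounters don't divide by ~zero

def accelerations(pos, masses):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + SOFTEN) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def run(pos, vel, masses, dt, steps):
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):  # semi-implicit Euler step
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
start = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
vels = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]

a = run(start, vels, masses, dt=1e-3, steps=20_000)
start[2][0] += 1e-9  # perturb one coordinate by one part in a billion
b = run(start, vels, masses, dt=1e-3, steps=20_000)

drift = math.hypot(a[2][0] - b[2][0], a[2][1] - b[2][1])
print(drift)  # typically many orders of magnitude larger than 1e-9
```

The two runs use identical code and physics; only the ninth decimal of one starting coordinate differs. That both n-body and climate forecasting share this property is exactly the comparison being made above.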
However, I’m seriously amazed that except for Notanist, no one has disagreed with the technological singularity ideas put forth. It’s pure irony that the author wrote it, and that no one else responded, because it exactly confirms the premise that people have a magical view of computers.
I’m reminded of the Life of Brian, where a messiah figure says “Blessed are the cheese makers”. A listener says “what’s so special about the cheese makers”. A haughty know-it-all turns around and says “It’s not meant to be taken literally, he’s referring to all manufacturers of dairy products”.
In short, we (humanity) have absolutely no idea what causes consciousness. Whatever it is, it’s the source of thinking. Therefore, we have no idea how to create such a thing. As far as we know, we’re limited to traveling below the speed of light AND we’re limited to computers that execute software. As impressive as they are at that task, they will never think.

• MCourtney says:

VikingExplorer,
From a Post-Modern perspective, what is “think”?
If it works for the Terminators, how would you argue?

• VikingExplorer says:

MCourtney, your question confirms my thesis. As someone who has been writing software for 37 years, I know one thing: in order to implement functionality in software, we need to understand the algorithm first. Since, as you say, “what is think?”, it confirms that we have no idea how to implement the thinking algorithm.
Actually, the Terminators were portrayed fairly realistically (apart from the time travel and liquid-metal parts). They mindlessly executed their software program. The most far-fetched portrayal is the doctor on Star Trek Voyager.

• Patrick says:

“VikingExplorer
June 7, 2015 at 2:48 pm
I’m reminded of the Life of Brian, where a messiah figure says “Blessed are the cheese makers”. A listener says “what’s so special about the cheese makers”.”
No! A listener mishears what the messiah is saying thus ensues the comedy.

• VikingExplorer says:

Right, we don’t actually hear what the messiah figure said. It doesn’t change the point.

• Patrick says:

Correct. It is misheard, that is what I said. And thus the comedy ensues.

87. Steve Thayer says:

I think the computer model issue with climate models is all related to economics. If the purpose of the climate models was to sell them to people who needed them to make accurate predictions, modelers would calibrate the models so they matched past, known, measured data, so that predictions of the future would be more believable. Since there is little competition for selling the climate models to a market that needs correct predictions, there is no motivating driver to make the models match reality. What the climate models do is generate data that determines the need for budget levels related to studying and dealing with global warming. If the climate models predicted insignificant global warming, then they would essentially be making a case for decreased funding for climate studies and for eliminating the IPCC, because if there is no climate change expected, why would we spend money to run the IPCC? Workers want to drum up work and managers want to grow their empire; they aren’t going to adjust the unknowns in their computer model inputs so they produce data that says “You don’t need me; what you hired me to deal with is not a problem”.

88. Computers can only add, subtract, and rotate bitwise binary numbers. This is done at the lowest machine level, and anything done by high level programming languages compiles to combinations of these.

• And push bits around – a very important function! And make simple comparisons and change the program flow based on the result.
A lot of processors, and most high-level languages, can't rotate bits within a register; they just support various forms of shifting. Good enough.
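To make the point concrete: even multiplication, which every high-level language takes for granted, compiles down to exactly these primitives. A minimal illustrative sketch (Python standing in for what the hardware does with shifts, adds, and comparisons; the function name is mine, not from any real library):

```python
def mul_shift_add(a: int, b: int) -> int:
    """Multiply two non-negative integers using only the primitives
    discussed above: shifts, adds, and simple comparisons."""
    result = 0
    while b:
        if b & 1:        # lowest bit set? (a simple comparison)
            result += a  # add
        a <<= 1          # shift left: double a
        b >>= 1          # shift right: halve b
    return result

print(mul_shift_add(123, 456))  # 56088, the same as 123 * 456
```

Everything a high-level language does ultimately unwinds into loops like this at the machine level.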

89. Louis Hunt says:

lsvalgaard said, “Computer models are not built to produce a desired answer, but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.”
Wow! I’m not sure if he is saying that computer models CANNOT be built to produce a desired answer, or that computer programmers are made of greater stuff than ordinary humans and would never stoop to do such a thing. I can guarantee that if there is money in it, a programmer can be found to get you your desired answer. In the real world, there is often a big difference between what “should” be the case and what is the case.
But even if a programmer is trying to be as accurate as possible, and gets all the known physics right, any complex model will involve “assumed evidence” that is easily influenced by biases. Not only that but after initial runs, assumptions programmed into a model can be easily adjusted or tuned precisely because they are just assumptions. You can’t tell me that climate modelers do not tune their models when initial outputs are off from what was “expected.” We know climate models contain assumptions that go beyond known physics because otherwise, all the models would produce the same output. If these models were calculating a flight path to Pluto, which one would you trust? Any of them?
We also have good reason to believe that climate-model programmers are biased because they have been very slow to correct their models to match the real world after they have been shown to exaggerate warming. Instead, they claim their models may not be exact in the short term but will be proven accurate in the distant future – after they are long retired.

90. In the nuclear T/H (thermal-hydraulics) modeling world, no one believed the model results produced by anyone else, but everyone believed the experimental results from the test facilities, except for the guy who did the experiments.

• Steve from Rockwood says:

Funny and scary at the same time.

91. lsvalgaard
June 7, 2015 at 1:52 pm
Ah, but Dr. Svalgaard, how can we defend the claimed accuracy of 1/10 of one degree in world-wide temperature from the average of thousands of runs of different computer models, 100 years (and hundreds of quintillions of approximate calculations on today's approximate inputs) from now, when their basic input has never been used to calculate the top-of-atmosphere radiation input, much less the radiation balance?
Today's model bureaucrats, and the politicians who fund their religion and their biases and faith, have been running global circulation models since before Hansen's Congressional fraud back in 1988, staged by opening the windows to the DC heat and humidity the night before.
But, they have never re-run their models with the correct top-of-atmosphere solar TSI radiation levels.
As you have several times pointed out, the solar groups have had persistent problems calibrating the TSI (total solar irradiance) levels at minimum each solar cycle. Today’s textbooks, on-line class notes, today’s government websites and their bureaucratic papers, and all of the recent climate papers I’ve read since 1996 abound with different TSI values. But somehow, the average of every model run since 1988 has come back with the same IPCC approved 3.0 watts/m^2 per CO2 doubling.
Seems strange when the cyclic low of the sunspot cycles since 1987 has dropped from 1371.5 down to 2011’s 1361.0. And the old climate models have never been re-run with the changed TSI values. If dropping 10 watts/m^2 at TOA does “nothing” over 100 years, why should we believe any of the modeled predictions between 1988 and 2015?

1987   1371.5   ERB
1987   1367     ACRIM
1993   1367     SOVA    (at midpoint of next cycle)
1996   1365     NOAA9
1996   1364     NOAA10
1996   1364     ACRIM2
1996   1365     ERBS    (and back towards 1988 and into 2006)
1996   1361     VIRGO
2008   1361     SORCE/TCTE/TIM

• Jeez, the drop is because the older instruments were plagued by scattered light that let more light into the cavity and hence resulted in too high readings. This is all well-understood and is corrected for.

• lsvalgaard.

Jeez, the drop is because the older instruments were plagued by scattered light that let more light into the cavity and hence resulted in too high readings. This is all well-understood and is corrected for.

No doubt. And, yes, the solar side of the “science” knows its business. The change – which we have discussed before – must be judged appropriate and correct.
Top-of-atmosphere (TOA) solar radiation MUST now be based on a yearly average of TSI = 1361.x at the low point of every solar cycle. Whether the climastrologists even deem it worth correcting for the ups and downs of the possible solar cycles between today's low of cycle 24 and cycles 25 or 26 (much less guessing for cycles 27, 28, 29, or 30) is beyond knowing. And they are cleverly NOT saying.
Now.
Tell the climate-astrologist modelers: they have NOT re-run their models using the lower TSI that is actually correct. Thus NO model prediction for 2100 is accurate any more, at ANY level of CO2 assumptions: not now, not in 1972, 1983, 1996, 1998, 2008, or 2015, nor in 2050, 2075, or 2100.

• ICU says:

So, from the above plot you gather that the actual absolute TSI has changed by ~10 W/m^2?
So, ERB@1371.5, ACRIM@1367, NOAA9@1365 and NOAA10@1364, which all happened at the exact same time, suggests what? Calibration issues.
Same goes for any overlapping segments, the Sun didn’t just magically change to two different TSI values at the exact same time.
Also remember to multiply TSI by 0.7 and divide by a factor of four.
“Initially disregarded by the community as an error in the TIM instrument, this difference has recently been shown to be due to uncorrected scatter causing erroneously high measurements by other instruments, all of which have an optical design that differs from the TIM by allowing two to three times the amount of light intended for measurement into the instrument.”
“Offsets are due to instrument calibration differences.” (partial caption from Figure 1):
http://www.swsc-journal.org/articles/swsc/abs/2014/01/swsc130036/swsc130036.html

• ICU

So, ERB@1371.5, ACRIM@1367, NOAA9@1365 and NOAA10@1364, which all happened at the exact same time, suggests what? Calibration issues.
Same goes for any overlapping segments, the Sun didn’t just magically change to two different TSI values at the exact same time.
Also remember to multiply TSI by 0.7 and divide by a factor of four.

No. Dr. Svalgaard's statement is that every "older" measurement is inaccurate (calibration problems, as you appear to phrase it?) and that the actual TSI over the entire period is, and has been, constant at 1361 watts/m^2.
Dividing by 4 (to create a mythical "average whole-earth radiation level") and then assuming an average 0.70 atmospheric absorption factor only adds more inaccuracies to your assumptions.

• ICU says:

But you have to do that to get from solar TSI to a metric covering the entire surface of the Earth; that factor reduces the ~1360 number by a factor of almost six (regardless, the AOGCMs do account for this properly).
But since you know climate science much better than the purportedly 'politicized' climate scientists, suggest something better, other than sticking one's head in the sand.
BTW, both of the above quotes in my previous post, were from the same website as the image you posted above.

92. Marshall says:

I’ve waded through the comments and feel obliged to weigh in. I have more than forty years in the business of building both hardware and software, an undergraduate degree in physics, and an MS in Operations Research (specializing in computer science).
The reason we “model” is to simplify complex problems so that we can “tweak” inputs and gain an understanding of how whatever we’re modeling would behave in the real world. If a model isn’t predictive, it’s wrong.
One very effective form of modeling is system simulators (think of flight simulators or refinery simulators). These work very well — until they don't. And they're enormously valuable when they do work.
The problem with climate models is that they simply don’t work. They have no predictive value. The ones that get funded presume catastrophic results based on small increases in ambient CO2 and to date, this hasn’t panned out.

93. Steve from Rockwood says:

In 1984 I was asked to write a program in Fortran IV coding a polygonal approximation to a discrete magnetized body of N sides. My boss gave me the paper and asked, "Can you do it?" I answered, "Yes, but this is a tough problem." His reply was, "Coding is not the tough part. Try convincing the Earth that it is a polygon." My takeaway was that, through modeling, we are always forcing the Earth to be something it is not.

94. Many commenters keep repeating that 'computers only do what they are told to do and don't [and can't] invent things on their own.' THAT is precisely why computer models are trustworthy. They can be relied upon to follow instructions and not screw up on their own. Of course, the instructions have to be correct, but that is a people problem, not a computer problem.

"Many commenters keep repeating that 'computers only do what they are told to do and don't [and can't] invent things on their own.' THAT is precisely why computer models are trustworthy. They can be relied upon to follow instructions and not screw up on their own. Of course, the instructions have to be correct, but that is a people problem, not a computer problem."
No one here is saying that computers do not follow instructions or that they screw up on their own. NO ONE. What everyone is saying is that PEOPLE ARE THE PROBLEM. People who program computers based on the “people-ish” theories, assumptions, information given to them by other people! It’s the POINT that Eric tried to make in the article that prompted this thread!
“First and foremost, computer models are deeply influenced by the assumptions of the software developer. ”
If PEOPLE do not know how "Earth climate" works exactly, then they cannot produce "instructions that are correct". And every aspect of the programming that isn't specific, or that has free variables, then interacts with other aspects of the programming, and the errors expand exponentially. The climate system is a chaotic system, not a predictable machine with a small range of possible outcomes. How do you limit the range of expectations when you don't even know the ranges of all of the components, or even IF you know ALL of the components in the first place?
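The "errors expand exponentially" point can be seen in miniature with the logistic map, a textbook chaotic system. A sketch (Python, purely illustrative; this toy map has nothing to do with any actual climate code): two runs that differ only in the twelfth decimal place of the starting value end up completely different.

```python
def logistic(x: float, r: float = 3.9) -> float:
    """One step of the logistic map, which is chaotic at r = 3.9."""
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-12  # starting values identical to 12 decimal places
max_gap = 0.0
for step in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The initial 1e-12 discrepancy has grown by many orders of magnitude.
print(max_gap)
```

No bug, no bad intent: just a chaotic rule amplifying an immeasurably small difference in its inputs, which is exactly the difficulty the comment above describes.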

• No, you have not understood anything. Constructing a model is a scientific enterprise and as such is subject to the scientific method, that is: the physics used and the assumptions made must be justified in communication with other scientists and with experiments. They are not the ‘personal opinion’ of the constructor, and they do not just ‘influence’ the construction, they determine the model. There is nothing left of the person in the model, it is pure and hard science. It may not be correct in the end, but should be the best we can do as a collective at this time. Models can differ in details [and that is a good thing] but not in large-scale structure, as the physical laws are universal. The peer-review system will in the end accept, reject, or modify the model(s). Science is self-correcting, bad stuff is eventually snuffed out.

• Mr and Mrs David Hume says:

Surely the only thing wrong with lsvalgaard's statement below is the use of the positive, as in "constructing a model *is* a scientific enterprise", instead of the normative, as in "constructing a model *ought to be* a scientific enterprise". It seems to us that the force of lsvalgaard's defence, and this is one of the longest series of straight bats ever, depends on this one thing. We are sure that we all agree that, *normatively*, what lsvalgaard says is absolutely right; indeed, it is unlikely to be anything else coming from such an experienced and successful scientist.
But debate becomes less exciting if the distinction between positive and normative is carefully observed – and the entertaining exchanges about the original post here would have been denied to us.

• Tom in Florida says:

lsvalgaard
June 7, 2015 at 7:50 pm
“…Constructing a model is a scientific enterprise and as such is subject to the scientific method, that is: the physics used and the assumptions made must be justified in communication with other scientists and with experiments.”
Most science using models do not use the results to make major political and economic policy. Unfortunately in climate modeling this is all too true. With so much money and so many reputations on the line, the scientific method has been compromised.

• VikingExplorer says:

I think Leif’s statement is correct as written, without the addition of “ought”. According to the best definition of science:

Science is the pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.

My bolding. Even though I am totally anti-AGW, I cringe when people on my side overstate the case. Examples are claiming that climatology is inherently impossible, that climate modeling is impossible, or demonizing existing climate modelers.
I would speculate that most existing climate models are using the wrong approach, but it seems obvious to me that they are in the “pursuit” of knowledge, using a systematic methodology.

• KevinK says:

“THAT is precisely why computer models are trustworthy.”
With respect; “THAT is precisely why computer models are TOTALLY UNtrustworthy.”
If you do not verify the predictions/projections/guesses that your computer model produces you are only fooling yourself, and everybody else will (eventually) figure it out.
Cheers, KevinK.

• KevinK says:

“The peer-review system will in the end accept, reject, or modify the model(s). Science is self-correcting, bad stuff is eventually snuffed out.”
With all due respect, Mother Earth has reviewed all of your “peer reviewed” science and found it extremely lacking….
None of the future temperature predictions of Arrhenius, Callendar, Hansen, Trenberth or Mann have even remotely come to fruition. After more than a century it is far past time for real science to “snuff out the bad stuff”.
Cheers, KevinK.

• You can do that if you can replace it with something better. Show me the better replacement and we can talk some more.

• KevinK says:

“You can do that if you can replace it with something better. Show me the better replacement and we can talk some more.”
Ok, here is something better: the climate (aka the average temperature) of the Earth is incredibly complex and is predominantly determined by the total radiation input from the Sun (UV, visible, IR, electromagnetic, cosmic rays, etc.) AND the thermal capacity of the oceans. That's IT; it is really that simple, and all the folks who think they have a "handle" on this incredibly complex system of interactions (yes, INTERACTIONS, NOT FEEDBACKS) are fooling themselves.
Here is my "replace it with something better": "WE JUST DON'T KNOW". Anybody who claims otherwise is only fooling themselves (and some other folks) for a while.
Cheers, KevinK.

• And as the Sun has not had any long-term variation in the last 300 years, the climate shouldn't have either. As simple as that, apparently. And the 'we don't know' thing won't work either, as you then allow for CAGW [we just don't know it].

• VikingExplorer says:

Seems like despite all the talk, the climate has been similarly devoid of large variation in the last 300 years.

95. Another Ian says:

Things on climatic modelling collected by a non-IT-specialist and non-GCM-specialist (but with some simulation experience)
In 1988 I was at a plant physiology seminar at a university in the USA. In discussion, the presenter mentioned that he had seen the code for the "latest and greatest" world climate model. And he was horrified. Plant evapotranspiration was included, but was being handled by a model of a single plant stoma, with the results extrapolated to the world!
In the early 2000s I was at another seminar, with people from the current version of the same scene in attendance. I mentioned the above, in the hope that bigger and better computing and more knowledge had improved the scene. And I was told that, if anything, it was worse.
So my take is that the purveyors of such simulations ought not only to say what is being included but to demonstrate how, and how well, it is being included in their efforts. And "nullius in verba".

96. Patrick says:

Once, a person was called a "computer". I think that was in the 17th or 18th century, for accounting requirements. My memory fades.
If there are any Aussies here: computers executed actions, and a major incident at a major bank starting July 26th 2012 resulted. That incident was initiated by a human (a workmate of mine, believe it or not). And another human, me, had to fix it (thanks, Daniel; your actions destroyed my life).
[Mid-1940’s – The women “calculators” running the artillery calculations for the war department were “computers” …mod]

• KevinK says:

“Once, a person was called a “computer”.”
During the early stages of WWII, women were employed by the US Army as "computers" to calculate "trajectory tables". These were tables that "projected" where an artillery shell would land after being "shot" at a particular angle from a gun of a certain size (i.e., muzzle velocity). Relatively simple mathematics, but lots of labor to fill out a complete table with entries for every 1 or 2 degrees of gun/mortar elevation.
One of the first uses of digital computers was to replace those slow "human computers" with electronics; ENIAC was built for precisely this job.
Cheers, KevinK
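The table work those human "computers" ground through can be sketched in a few lines today. A toy example (Python; drag-free range formula R = v² sin 2θ / g, with a made-up muzzle velocity, so the numbers are illustrative and nothing like a real firing table, which must account for drag, wind, and shell weight):

```python
import math

G = 9.81   # gravity, m/s^2
V = 450.0  # assumed muzzle velocity, m/s (illustrative, not a real gun)

# One column of a drag-free "trajectory table": range for every
# 2 degrees of elevation -- the repetitive arithmetic the human
# computers once did by hand for each entry.
table = [(elev, V ** 2 * math.sin(math.radians(2 * elev)) / G)
         for elev in range(2, 46, 2)]

for elev, rng in table:
    print(f"{elev:3d} deg  {rng:9.0f} m")
```

Each row took a human calculator minutes; the whole table now takes microseconds, which is the replacement KevinK describes.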

• Patrick says:

That’s kewl info. Thanks!

97. Neo says:

Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model
This statement inadvertently shows the problem. First, it shows the biases of the writer. I mean, what makes anybody think that an artificially intelligent machine will have the same priorities as humans? A machine may consider finding a continuous supply of energy, its lifeblood, more important than breathable air or clean water.
The designer of a model, who thinks CO2 levels are important, may not put the same weighting on other gases.

98. Chris in Hervey Bay. says:

No, I haven’t read all the comments here, but I wrote this below about 6 or 7 years ago on WUWT.
Remains true today. We only know if things are true or not when we compare results to the real world.
"I was reading above where you guys are proving 2+2=5 etc. Now consider this, and Google "Why the Pentium can't do maths".
The presence of the bug can be checked manually by performing the following calculation in any application that uses native floating point numbers, including the Windows Calculator or Microsoft Excel.
The correct value is (4195835 / 3145727) = 1.333820449136241002
However, the value returned by the flawed Pentium would be incorrect beyond four significant digits
(4195835 / 3145727) = 1.333739068902037589
Another test can show the error more intuitively. A number multiplied and then divided by the same number should result in the original number, as follows:
(3145727 x 4195835) / 3145727 = 4195835
But a flawed Pentium will return:
(3145727 x 4195835) / 3145727 = 4195579
Also consider this: the bug above is known and was discovered by accident. How many bugs in the floating point processor are still undiscovered?
And again, consider this: supercomputers are made up of "off the shelf" Intel processors. Older machines used P4s. The Tianhe-1A system at the National University of Defense Technology has more than 21,000 processors (Google "supercomputer").
So after millions and millions of calculations done in a climate model, with the error getting greater, done by faulty math processors with an unknown "bug", who is going to guarantee the end result?
We have become too reliant on the ability of these machines to be accurate. Blind faith?"

• ICU says:

Your car has an embedded Pentium processor.

• Chris in Hervey Bay. says:

probably why it gives me problems.
Next car will be made in Japan !

• VikingExplorer says:

Just tried it. The bug has been fixed. I’m truly offended by the logic:
premise: bugs exist ==> mistakes happen
conclusion: models are unreliable ==> science is impossible
This kind of fallacious logic is far more dangerous than an undiscovered bug.

99. Patrick says:

Too funny. Watching “Thunderbirds” (Classic) an episode using computers to control a spacecraft’s re-entry to Earth. I think this was well before Apollo.

100. dmh says:

lsvalgaard June 7, 2015 at 6:20 pm
Of course, the instructions have to be correct, but that is a people problem, not a computer problem.

Followed by:
lsvalgaard June 7, 2015 at 7:50 pm
They are not the ‘personal opinion’ of the constructor, and they do not just ‘influence’ the construction, they determine the model. There is nothing left of the person in the model, it is pure and hard science.

You can’t have it both ways 😉

• Yes I can, because it is people who decide what goes into the model. They may try to do it correctly, but the consensus may also be faulty.

• dmh says:

but the consensus may also be faulty.
In other words, the models are susceptible to both human error and human bias. They may well get better over time as the actual scientific process grinds the errors out of them. But this doesn’t change the perception of the general public which is that the results of computer models (any computer models) are accurate. Some are, some aren’t. In the case of climate models, all the errors, or biases, or combinations thereof seem to run in the same direction; hot. The IPCC said as much in AR5, who am I to argue? But this doesn’t change the fact that the general populace puts more faith in climate models than they have earned, and those who seek the support to advance their political agenda are not hesitant to take advantage of the public’s misplaced trust.

• Not human error and certainly not human bias. The errors are errors of the science and not of any humans and nobody puts bias into the model. Doing so would be destructive of the reputation and career of the scientists involved.

• I was once a gunner in the Royal Danish Artillery. Let me tell you how we zero in on a target. The first salvo may fall a bit short [climate models too cold], so you increase the elevation of the tube and fire a second salvo. That salvo may fall beyond the target [climate models too hot]. So you adjust the elevation a third and final time, and the next salvo will be on target.
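Numerically, the zeroing-in procedure described above is just bracketing: one salvo short, one long, then split the difference. A toy sketch (Python; drag-free physics and made-up muzzle velocity and target range, so this is an illustration of the idea, not real gunnery, which relies on tables, observers, and drag corrections):

```python
import math

G, V = 9.81, 300.0  # gravity (m/s^2) and assumed muzzle velocity (m/s)

def shot_range(elev_deg: float) -> float:
    """Drag-free range: R = v^2 * sin(2*theta) / g."""
    return V ** 2 * math.sin(math.radians(2 * elev_deg)) / G

def zero_in(target: float, low: float = 1.0, high: float = 45.0,
            salvos: int = 20) -> float:
    """Bracket the target between a short and a long salvo, then
    bisect: the gunner's successive-correction procedure."""
    elev = (low + high) / 2
    for _ in range(salvos):
        elev = (low + high) / 2
        if shot_range(elev) < target:
            low = elev   # salvo fell short: raise the tube
        else:
            high = elev  # salvo fell long: lower it
    return elev

elev = zero_in(target=6000.0)
print(round(shot_range(elev)))  # lands within a metre or so of 6000
```

The procedure converges because each correction uses feedback from an observed fall of shot; the commenters' complaint below is that climate models have had no equivalent of the observed third salvo.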

• Leif,
IPCC's models use ECS figures from 2.1 degrees C to 4.5 C. All are demonstrably too high. Hence, all the model runs are prime instances of GIGO, quite apart from all the other problems with these tendentious models. There is no evidentiary basis whatsoever for any ECS above 1.2 C. Thus, those who say that the models, or at least their inputs, are designed to produce the results they show are entirely correct.

• The ECS is output of the models, not input to them.
From IPCC: “equilibrium climate sensitivity (ECS) were calculated by the modelling groups”.

• dmh says:

Doing so would be destructive of the reputation and career of the scientists involved.
I didn’t say it was deliberate, did I? The root cause could easily be completely innocent.
In any event, your statement is untrue if the general modeling community on a given subject are all making similar errors or sharing similar biases, as there would be no one left to call BULLSH*T. Of course, that would be unlikely to happen, as then all computer models of the climate would exhibit errors in the same direction when compared to observations.
Oh wait…

• dmh says:

The first salvo may fall a bit short [climate models too cold],
As long as I have been following this debate, the models have been running hot. When was it that they were too cold?

• Leif,
"Calculated by the modeling groups" means they were input. How does a GCM compute ECS? In real science, that would be done by observation of reality.
Why does IPCC have no model runs with ECS of 1.5 to 2.1, despite its lowering of the bottom of the range to 1.5, because observations of reality have shown anything above 2.0 to be impossible?

• Read the IPCC assessments on this. The ECS is calculated by the models. Not an input. Why is it that people can’t get the basics even halfway correct? Where did you get misled into believing that ECS was an input parameter? Who told you that?

• Leif,
My point is that, however derived, the ECS numbers in the worse than worthless GCMs are obviously anti-scientific and unphysical. If they’re derived from the models, that makes the models even worse, if that’s possible. But they are in effect inputs because the absurd GCMs are designed to produce preposterous CO2 feedback effects.
Actual ECS, as derived from observations of reality, is about the same 1.2 degrees C that would occur without any feedbacks, either positive or negative:
http://climateaudit.org/2015/03/19/the-implications-for-climate-sensitivity-of-bjorn-stevens-new-aerosol-forcing-paper/

• You evade the issue: where did you learn that ECS was input and not derived from the models?
Who told you that?

• Leif,
I’m evading nothing. The fact is that the ECS numbers in all IPCC models are ludicrously high. Just as the fact that they run way too hot. You think that’s just an accident. I don’t.
You are evading the fact that the climate models are easily demonstrated pieces of CACCA.

• Nonsense, I have always said that the models are no good, but you are still evading my question: where did you learn that ECS was input rather than output?

• Leif,
That would be by looking at their code.

• Have you done that? Which module? What line?

• Perhaps I should elaborate.
The GCMs' code is designed to produce a high ECS; hence saying that ECS is not an input is a distinction without a difference, as the logicians put it.

• Have you seen the design documentation for ECS where it says that the code was designed for producing a high value? And how did the design document specify that the calculation should be done to produce a high ECS? Otherwise, it is just supposition.

• Leif,
Which GCM do you have in mind.
The models I've looked at are only those with the lowest ECS, i.e. the two (I think) with inputs/outputs of 2.1.

• And which models would that be? Which module? Line number?

101. lsvalgaard
June 7, 2015 at 10:02 pm
That is not how the climate models have been concocted. All their shots are over the target and none under, yet they stick by the obviously too hot models, even though Mother Nature has shown them to be off target.

• looncraz June 7, 2015 at 12:46 pm:
“The modelers set out to correct the cold-running nature of the earlier models ”
You see, it is not what you know that gets you in trouble, it is what you know that ain’t so.

• Possibly. WordPress is not too good at connecting replies to comments. But to clarify: the early models were running too cold.

102. Tony says:

Attempting to model a physical system where most of the physics is unknown is a farce. We will never have a complete understanding of every interaction that controls climate.
Finite element modelling where the elements are so huge as to entirely miss key features such as thunderstorms is an even greater farce. It will be hundreds of years, if ever, before computers are powerful enough to cope with elements small enough to resolve all relevant details of the climate system and its boundaries.
When models can predict accurately whether it will rain in 3 days' time, we may have some hope; however, I am confident that day will never come.

103. greymouser70 says:

OK, Dr. Svalgaard: how did you determine the initial value for the elevation? Did you use a set of ballistics tables that specified the distance and/or the gun size, or did you just guess at the initial distance or elevation? My suspicion is that you used a set of tables to get the initial shot near the target and then fine-tuned from there.

• The initial elevation depends mostly on the observer who sees the target. His coordinates may be only approximate, so the first salvo will likely not hit the target. From the approximate coordinates, the physics of projectile firing and of the influence of the air [wind, pressure, temperature] determine the initial elevation.

• greymouser70 says:

• Dr. Svalgaard: Do you mean to tell me that you explicitly solved the following equation every time you made your initial shot: (Rg/v^2) = sin 2θ? I don't think so. Even if the initial distance estimate was off, you probably used a set of tables to determine the initial shot, with an allowance for windage and air resistance. At some point someone had to sit down and calculate those tables. So you used a model that was validated and confirmed by empirical and experimental data. Climate models are neither confirmed nor validated by empirical data. Ergo, they are not to be trusted.

• Of course, there were precalculated tables [back then], just like there were tables of sines and cosines. Nowadays there are no more tables; everything is calculated on the fly [even sines and cosines].
And you are wrong about the climate models: they HAVE been experimentally tested and found to be wrong. We have not yet figured out how to do it right, so the second salvo for climate models overshoots. By the third salvo we may get it right. It takes a long time to test, perhaps 15 years or so.

104. LarryFine says:

There is a fatal flaw in the utopian belief that super-smart computers will save us. It assumes they'll be honest, but intelligence doesn't guarantee honesty.

105. rogerthesurf says:

“This is why I am deeply skeptical, about claims that computer models created by people who already think they know the answer, who have strong preconceptions about the outcome they want to see, can accurately model the climate.”
In other words, these are hypotheses illustrated by a computer model.
How well this fits with the video below.
If the results of the hypothesis don't agree with observations, then it's WRONG. IT DOESN'T MAKE ANY DIFFERENCE HOW BEAUTIFUL YOUR GUESS IS, IT DOESN'T MATTER HOW SMART YOU ARE OR WHAT THE PERSON'S NAME IS: IT'S WRONG!
Cheers
Roger
http://www.rogerfromnewzealand.wordpress.com

• Richard Feynman was proven wrong in the case of the solar neutrino problem. The hypothesis [the solar model] predicted a neutrino flux that did not agree with the observations, but the model was correct; the observations turned out to be wrong. Lesson: blanket statements should be avoided.

• rogerthesurf says:

Well, in this case I think the observations are probably as close as we are going to get to reality.
Cheers
Roger

• LarryFine says:

Are you suggesting that all thermometers and satellite sensors are wrong? I have heard that some people are now advocating dropping the “empirical” from empirical science, but then we’d just be left with a dogma, and we know what happens to heretics.

106. Mr and Mrs David Hume says:

We seem to remember that Newton had the same problem with the Astronomer Royal who produced some observations that demonstrated that Newton’s theories were wrong; Newton persuaded him that his observations were wrong.

107. CodeTech says:

That was an awful lot of scrolling down to reach the admission that climate models have it wrong.
The next logical step would be to see a description of how climate models could possibly have got it wrong. After all, thousands of people and millions of data points obviously went into the development of these models. And since they’re all created with the best of intentions and the best scientific theories, it’s unlikely that anyone could have made a mistake of any sort.
And clearly the concept of a preconceived expected outcome can’t have anything to do with it.
Here’s an idea: maybe one or more of the assumptions going into the models are wrong, and nobody wants to admit that.

• Gunga Din says:

Here's another idea: There are more things going on that affect reality than any of the modelers is aware of or has accounted for. It's hard for someone who prides himself on his knowledge and conclusions to admit that he doesn't know after all.
There’s more going on in reality than meets the “01001001”. 😎

108. A. Scott says:

At the risk of repeating some of the large number of excellent responses here, the issue is simple.
Garbage in – Garbage Out … GIGO

109. A. Scott says:

Stocking a store's shelves, or designing a car – these are ALL based on a strong understanding of the data points, processes and physics of what one is attempting to model.
We can design aircraft in a computer model because (a) aerodynamics is a relatively simple science and (b) we understand most everything about fluid dynamics and thus about aerodynamics.
The model for an aircraft is also well developed and can be tested by building a copy of the model output and testing it in a wind tunnel. If the modeled output – the aircraft – performs as expected in a wind tunnel, we can then attempt to fly it … and thus we can fully test the model's ability.
We have no clue about the vast complexities and interactions, let alone all the inputs and physics, of how our climate works. Add to that that climate is chaos-based – unlike an aircraft wing, where the same wing profile will give you the same lift and drag numbers every time – and there are no certainties in climate.
And even models employing areas where we have good understanding are not foolproof.
I have worked with modeling in Indycars. We can input all the parameters – track layouts (turn radius, banking etc), the aerodynamic numbers from wind tunnel testing of the car, suspension geometry, tire data, etc – and despite the physics of both the car mechanically driving the track and the aerodynamics of the car in free air being well understood, the models can often be completely wrong. Because even with two well-understood processes, when you put them together – and operate an aero car ON the track – entirely new processes and interactions can occur.
The best of “experts” can often be block headed and clueless as well.
Again involving open wheel Indy type cars … one year we had a top shock absorber company come onboard as a sponsor. They committed an entire staff of engineers determined to revolutionize how shocks affected operation (and speed) of the cars.
One issue with race cars is the sidewalls of the tires – they are an undamped spring. Think of what happens when you push on a balloon and it springs back.
Without damping of a spring you have little control – you need to keep the tire on the ground and the "contact patch", where rubber meets the road, in place.
The rocket scientist types decided they were going to jack up the tire pressure to make tires as stiff as possible and then use sophisticated shocks to manage the tire bouncing over bumps.
They spent all winter with their computer models and running their design on the chassis dyno. The model and the chassis dyno – which simulates the cars performance on track – all showed this new idea was fantastic.
We went to the track and the car was horrible – dead slow – all over the race track – all but uncontrollable.
I and the driver knew the problem as we talked and looked at the telemetry. Simple, basic, "seat of the pants" knowledge and experience. When you make the tire hard as a brick: (1) you can never build any heat in the tire, and (2) without that heat, and with no sidewall deflection in the tire, you have no contact patch – the tire tread does not stay on the ground and grip the track.
At lunch the engineers made a tiny tweak trying to figure out something. The driver and I made a tweak too – we reduced tire pressure from 50lbs to a normal 28lbs without telling anyone.
The car was immediately fast again – because the lower pressure allowed the tire to build heat and the flex in sidewalls meant the tire now could grab and conform to the track .
The engineers thought they'd found the holy grail. Until we told them what we had done. That despite the nearly $1 million budget and all these high-powered, highly skilled engineers, they had forgotten the single most basic part of race car engineering – the tire contact patch. Without a good contact patch a car – any car – is all but undriveable. This is race car engineering 101. When you pump up your tires til they're rock hard, even a street car becomes terrible to drive. In the end we literally threw away everything they did. Bolted stock parts and standard shocks on the car and went and won races anyway – based on the driver's skill and the team's good, basic race car engineering.
Garbage in = garbage out. When it comes to climate modeling almost everything is NOT well understood – garbage in. The experts say – but look – when we run past data thru the model the output matches the real world. Of course it does – that is EXACTLY the output the model was designed to provide. It was designed to output a SPECIFIC result – not to output a modeled result. Which is why it is great at hind-casting and worthless for forecasting the future. Garbage in = garbage out.
The failure of the scientific method – the massive hubris of the climate alarmists as their cause activism supersedes their scientific knowledge – where their fervent desire to support the cause, to prove the "consensus", overwhelms their scientific training. In the old days you had to choose – scientist or activist, researcher of the truth or advocate for a predetermined position. Today they claim they can be both – that the seriousness demands they be both scientist and cause advocate. Which is a gross corruption of the scientific method – where everyone seeks the truth, not to prove a particular position.

• In the real world says:

I know where you are coming from, A. Scott. Another recent case of computer modelling of competition car performance was at Pikes Peak:
http://www.motorsport.com/hillclimb/news/s-bastien-loeb-and-the-peugeot-208-t16-pikes-peak-set-new-record/
But this was the actual competition department's own program, and they took analysis from practice runs to work out the fastest possible time that the car could do up the hill. And then Seb Loeb went faster than the computer said was possible.
One of the things that varies all of the time is the amount of grip that a car can get. The highest coefficient of friction will vary, but will usually be beyond the point where a tyre starts to spin. So a fixed traction program (or even a variable program, which has to be measured) will never match a world-class competition driver who sees the surface ahead and is constantly adjusting for it even before it has happened.
So yes, a top-line computer program was outdone by the efforts of one man.

• A. Scott says:

Exactly – you cannot "model" the human response. A top Indycar (or other) driver can feel the tire – feel the contact patch. They can manage that contact patch with how they drive the car. They also can drive the car past the point of adhesion – when the slip angle has exceeded the grip available. A very good driver can anticipate that lack of grip, and what the car will do and where it will go when that occurs.
Climate is far more complex than modeling a race car or an airplane. And it involves chaos – data that does not fit known conclusions.

110. cd says:

Eric:
"More than anything, software is a mirror of the creator's opinions"
I think this is completely unfair – by this logic a compiler would be useless, for example, since the compiler's output would be its programmer's opinion rather than the binary instructions derived from the user's code. It might be true if you're creating, say, your own game, where you're in the creative domain and building a virtual world.
But this isn't the case in scientific/engineering software, where you are commissioned to produce or design something that implements well-established algorithms. So I don't agree with the premise of the post. There are many reasons why climate models are crap and why they shouldn't be trusted, but that has more to do with bad implementation and poor assumptions. It does not mean that "software is the mirror of the creator's opinions"; more commonly it is the implementation of well-established science – whether that is done well or not is another matter.

• larrygeiger says:

CD, Eric is exactly correct. Whose opinion was used to create the syntax for the language? The author's. Clearly you have never witnessed, close at hand, the construction of a computer language's syntax. It is either a single author's "opinion" or a committee-like opinion. Also, you are clearly confused when you state that creating a compiler is entirely different from creating "your own game". A compiler is nothing more than a sophisticated game in your analogy. The language elements chosen for a particular compiler are simply those chosen by the author.
"Well established algorithms". Hee hee. Have you read the emails from the programmer at the CRU? And yes, it does mean that "software is the mirror of the creator's opinions". Well-established science? Who says? Your opinion?

• cd says:

Larry
A compiler is a type of software; a game is a type of software. That's about where the analogy ends. If the relationship were any closer, the compiler wouldn't be much use.
The output of a compiler is not the opinion of the programmer. The rules on optimization and intermediate states are in the design, which is in the hands of the programmer (that's the processing step), but the output has to run on a particular CPU with its own number of registers, switch architecture, etc. If the compiler is to work at all, that part is out of his/her hands – the compiler's programmer has to honour the hardware requirements. Furthermore, the output from a statement akin to 1+1 is not the opinion of the compiler's designer – not if the compiler is actually to function as a compiler. He/she could make the executed code do something crazy, but even the most naive programmer would consider that a BUG in the compiler!
"Have you read the emails from the programmer at the CRU?"
Firstly, I was talking in a general sense. And as for climate models, they do use standard algorithms; the main problem is that they can't solve energy transfer and turbulence at the resolutions they need, due to limitations in computing power – but that doesn't mean they are expressing an opinion. The limitations of their implementation are stated openly, even on the NASA website. And that's all you can do.
"Well established science? Who says? Your opinion?"
Nearly all of the achievements in space exploration are due to software expressing well-established science such as Newtonian physics. If the author were correct in his assertion, these were just the result of good opinions. That is not my opinion. And remember, the author was talking about software in a general sense.

111. ulriclyons says:

The bottom line is that if natural weather and climate variability are responses to solar variability rather than being internal chaos, all the current models are ersatz.

• VikingExplorer says:

Great point.

112.
I like my brother's lesson from when I was just learning the trade: "A computer is inherently no more intelligent than a hammer." I remember my advice to my Dad when he got his first business computer: "Don't hit it, don't throw it out the window. It's only doing what you told it to."

113. GeneL says:

Back in the early 70's, for a senior EE project, I worked with my adviser and some grad students on a model of Lake Michigan currents. At one point I asked him when we would go out on the lake and see how well our model matched reality. After he stopped laughing, he explained that the sole purpose of a computer model was to almost match the expectations of whoever was paying for the grant, and then justify next year's grant.

• VikingExplorer says:

I have also come across government science as a big scam. It's just a subset of government contracting. But here you falsify what everyone is saying here. The problem is not science or computer models in general, it's government involvement in science.
I worked for a while at NPR, and my preconceived notions changed a bit. I realized that the people were not naturally biased. They were just responding to the bias inherent in the situation. We could staff NPR with rabid libertarians, yet eventually they would start shifting towards a pro-government view. It's the same with scientists. We need a complete separation of government and science.

114. more soylent green! says:

As a professional software developer, I've been saying this for years. People have vast misconceptions about climate modeling software. The climate models are nothing more than hypotheses written in computer code. The models aren't proofs. The models don't output facts. The models don't output data.

• "The models don't output facts"
If so, the discrepancy between the model output and the observed temperatures cannot be taken as proof that CAGW is wrong. For that to happen, the models must be correct, just badly calibrated.
If the models just output garbage, you can conclude nothing from the disagreement.

• more soylent green! says:

Is a forecast a fact, or a set of facts? Is a prediction a set of facts? Sounds like the scientific method itself can't be used for anything, Leif, since a hypothesis can't be compared with observations, according to you.

• Nonsense. It is well established what the Scientific Method is.

• more soylent green! says:

You don't have a clue about science if you think models output facts. I can make a computer program that outputs 5 when 2 is added to 2. Is that output a fact?

• "the discrepancy between the model output and the observed temperatures cannot be taken as proof"
===========
It can be taken as proof that the models are wrong. Your mention of CAGW is bait and switch.

• rgbatduke says:

Well, that's not quite fairly stated. The observed discrepancy doesn't prove that CAGW is wrong, it proves that the models are wrong. Obviously, given that the future hasn't happened yet, the full range of possibilities must be given at least some weight, right down to the possibility that a gamma ray burst wipes us out in the next ten minutes. Sure, it is enormously unlikely, but when one is using Pascal's Wager to formulate public policy, all one really needs is the words "wipes us out" in conjunction with "possibility" to fuel the political dance of smoke and mirrors.
Indeed, it is quite possible that the models are wrong (in the sense that they cannot, in fact, solve the problem they are trying to solve, and that the method they use in the attempt produces "solutions" that have a distinct uniform bias) but CAGW is correct anyway. Or, it is possible that we're ready to start the next glacial episode in the current ice age. Or, it is possible that nothing really interesting is going to happen to the climate.
Or, it is possible that we will experience gradual, non-catastrophic warming that fails to melt the Arctic or Antarctic or Greenland ice pack, causes sea level to continue to rise at pretty much the rate it has risen for the last 100 years, and stimulates an absolute bonanza for the biosphere as plants grow 30 to 40% faster at 600 ppm CO_2 than they do at 300 ppm and weather patterns alter to produce longer, wetter, more benign growing seasons worldwide. AGW could turn out to be anti-catastrophic, enormously beneficial.
Thus far it has been enormously beneficial. Roughly a billion people will derive their daily bread from the increase in agricultural productivity that greenhouse experiments have proven attends the increase of CO_2 from 300 to 400 ppm (one gets roughly 10 to 15% faster growth, smaller stomata, and improved drought tolerance per additional 100 ppm in greenhouse experiments). To the extent that winters have grown more benign, it has saved energy worldwide, increased growing seasons, and may be gradually reducing the probability of extreme/undesirable weather (as one expects, since bad weather is usually driven by temperature differences, which are minimized in a warming world).
So sure, the failure of the models doesn't disprove CAGW. Only the future can do that — or not, because it could be true. The success of the models wouldn't prove it, either.
What we can say is that the probability that the models are correct is systematically reduced as they diverge from observation. More specifically: the particular prescription of solving a nonlinear chaotic N-S problem from arbitrary initial conditions, pumping in CO_2, and generating a shotgun blast of slightly perturbed chaotic trajectories that one then claims, without the slightest theoretical justification, form a probability distribution from which the actual climate must be being drawn (and hence can be used to make statistical statements about the future climate) is as wrong as any statistical hypothesis subjected to, and failing, a hypothesis test can be said to be wrong. The models can be, and are actively being, hoist on their own petard. If the actual climate comes out of the computational envelope containing 95% or more of a model's weight (one model at a time), that model fails a simple p-test. If one subjects the collection of models in CMIP5 to the more rigorous Bonferroni test, they fail collectively even worse, as they are not independent, and every blind squirrel finds an acorn if one gives it enough chances.
rgb

• "just badly calibrated."
===============
Calibration is a poor description of the process. The problem is that there is an infinite set of parameters that will deliver near-identical back-casting results, yet at most only one set is correct for forecasting, and there is no way to know in advance which set is the correct one. Thus there is effectively 100% certainty that the exercise will result in meaningless curve fitting, not skillful forecasting, because you cannot separate the correct answer from the incorrect answers.

• VikingExplorer says:

I've read every Leif comment on this thread, and agree with all of them. I've rarely seen one person pwn a crowd to this extent. However, this one seems especially insightful.
Skeptics are simultaneously arguing that:
- climatology is impossible, AND they know AGW is false;
- climate models are impossible, AND that CAGW is false;
- climate is chaotic and unpredictable, AND yet they can predict that CO2 will not cause an instability.
That's why my suggested approach is much better: get a clear description of the hypothesis, then scrutinize the underlying science and attempt to falsify it with experimental and empirical evidence. However, I think this has already happened. Steve M. pushed for a clear derivation of 3.7 W/m^2, and I don't think any satisfactory response was ever provided. For example, the Judith Curry thread about the greenhouse effect basically admits that the original hypothesis as commonly stated was false. The latest formulation is weak enough that there is no longer any theoretical foundation for significant AGW.

• While land surface data is lacking, there is enough of it to show no loss of night-time cooling based on the prior day's increase; in fact it does just what would be expected. Over the last 30 or so warm years it has cooled more; the prior 30-some years showed slightly more warming than cooling; but the overall average is slightly more cooling than warming. 30 of the last 34, and 50 of the last 74, show more cooling than warming. This does require more precision than a single decimal point; if we restrict it to a single decimal point it shows 0.0F average change. You just need to do more than average a temp anomaly.

115. "…So what is real science in your esteemed opinion? Barycenter cycles? Piers Corbyn? Monckton's 'simple' model? Evans' Notch Theory? ……"
___________________________________________
Here is what I would like to see.
1. Data
The USAF in the 1950's pushed spectrometer resolution to single line width. I would like to see that data, then take exactly the same data today and compare it, from ground level to 70,000 feet, to see exactly how much the spectral absorption lines have broadened since the IGY.
That gives me a quantitative and qualitative number upon which to base the calculations regarding increased absorptivity. I would also like to see exactly at what altitude the lines that are saturated desaturate, to see where the difference is. The predictions regarding stratospheric temperatures have been completely wrong, probably because this parameter is wrong.
2. More data and explanation
I would like to see a rational explanation for the wide variation in Arctic ice between 1964 and 1966, and even into the 1973-76 period, as has been shown from the satellite data. We make much of the change in the Arctic since 1979, but we now know, from examining Nimbus data, that there was a LOT of variation before 1979, particularly in the 1964-66-69 period.
That is a good start.

116. James at 48 says:

RE: The Singularity – my skepticism is not limited to "Killer AGW." It's getting harder and harder to improve actual computing performance. We're at the point where we are confronting the underlying physics. You shrink the feature sizes any more and subatomic issues will rear their ugly heads. Chips are getting really expensive – hundreds or even thousands of dollars for a CPU. Some SSDs are now over 2 grand. Memory DIMMs as well. Expensive, increasingly unreliable, leaky electronics, running at too low a voltage, and, the icing on the cake, most of the stuff now comes from China. I think a different sort of "singularity" may be looming, and it is more in the category of the mother of all debacles.

• "It's getting harder and harder to improve actual computing performance. We're at the point where we are confronting the underlying physics. You shrink the feature sizes any more and subatomic issues will rear their ugly heads."
They said the same thing in 1982 when we were making 2µ CMOS; they expected to keep meeting Moore's law, but didn't have a clue how. An iPhone 4S has better performance than the first-generation Cray supercomputer.
"Chips are getting really expensive – hundreds or even thousands of dollars for a CPU. Some SSDs are now over 2 grand. Memory DIMMs as well. Expensive, increasingly unreliable, leaky electronics, running at too low a voltage, and most of the stuff now comes from China. I think a different sort of 'singularity' may be looming, and it is more in the category of the mother of all debacles."
I just built a 4.0GHz system for ~$500 that matches (or exceeds) the performance of the 3.3GHz server I built 3 years ago for ~$5,000. Every component is faster, has more capacity, and is cheaper. Sure, the latest generation of parts is pricey, but the price curve is steep. And while the sub-35nm stuff might be nearing the limit, quantum stuff is working its way through the labs.

• VikingExplorer says:

Actually, Moore's law is dead. The idea of perpetual exponential growth is impossible in reality. I worked for a company that was involved with a project working on 450 mm wafers. It started out trying to target tin droplets with lasers, and ended up just shooting a laser into a tin-atom mist created by high-pressure water on a block of tin. 450 mm / 30 nm technology has been delayed until 2018-2020, if not indefinitely.

• "Actually, Moore's law is dead. The idea of perpetual exponential growth is impossible in reality. … 450 mm / 30 nm technology has been delayed until 2018-2020, if not indefinitely."
I have a 22nm processor from Intel, and it looks like they are in the process of rolling out 14nm: http://www.intel.com/content/www/us/en/silicon-innovations/intel-14nm-technology.html
So Moore's law isn't dead yet; most likely we will actually find a limit with nanotubes, spin, or quantum devices – maybe 10 more years?
But if you'd asked the device/process guys I worked with in 1982, they would likely have said the same 10 years. Clever monkeys we are.

• VikingExplorer says:

You should realize that Moore works at Intel, and so of course Intel claims that Moore's law is still active. It's more like marketing hype. This more independent source talks about how it cannot continue in the physical world. Moore's law has cost in it as well. The graph at the bottom shows that it has flattened out.

• So my 10-year estimate was the same as Moore's estimate. But even at that, it was 30 years ago that they didn't know how it would continue, yet it has. And as rgb said, there are other technologies being developed. And one of the big issues is heat dissipation: air cooling starts getting hard over about 100 watts per square inch, if I remember the unit correctly. Cray was using liquid cooling (iirc ~$200/gal Fluorinert sprayed right on the chip itself) to get to 200W/sq". That's why chip clock speed stagnated at 3-some GHz for so long – too much heat to shed.

• VikingExplorer says:

My development box is water cooled.

• Depends on the cpu generation, most don’t really need it.
My new i7-4790K (88W) runs over 4Ghz , my server has dual E5-2643’s (130W) 128GB DDR3 it’s still air cooled.

• rgbatduke says:

I have no idea why you are asserting this. Moore’s Law has proven to have legs far beyond what anybody expected. We are very likely to be getting our next boost from new physics comparatively soon now (new in the sense that nanophysics is bearing unexploited fruit, not new as in new interactions). I just got a new laptop with a 500 GB SSD. That is so absurdly unlikely — my first personal computer was a 64K motherboard IBM PC, which I used to access an IBM 370 as a replacement for punching cards — that it isn’t even funny.
But even with Moore's Law, to solve the climate problem at the Kolmogorov scale (where we might expect to actually get a fairly reliable answer) requires us to increase computational capability in all dimensions by thirty orders of magnitude. Even with a doubling time of (say) 2 years, at ten doublings per factor of 10^3, thirty orders of magnitude takes 100 doublings, or 200 years. True, perhaps we will eventually work out a method that bears meaningful fruit at some intermediate scale, but the current ~100x100x1 km^3 cell size, non-adaptive, on a lat/long grid (the dumbest possible grid), with arbitrary initial conditions (since we cannot measure the initial conditions) and with only guesses for major parameters and mean field physics implemented on the grid isn't even likely to work. It would be a miracle if it were working.
But if there was any real incentive to model the actual climate instead of a hypothetical future catastrophic climate, even now human ingenuity could probably have done better.
rgb
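[Editor's note: rgb's doubling arithmetic above is easy to check. A minimal sketch, taking the comment's own figures of thirty orders of magnitude of required growth and a two-year doubling time:]

```python
import math

# Thirty orders of magnitude expressed as doublings: 10^30 = 2^n, so n = 30 * log2(10).
# (Ten doublings per factor of 10^3, since 2^10 = 1024 ≈ 10^3.)
doublings = 30 * math.log2(10)  # about 99.7, i.e. roughly 100 doublings

# At one doubling every 2 years, as the comment assumes:
years = doublings * 2

print(round(doublings), round(years))  # → 100 199
```

That is where the "100 doublings or 200 years" figure comes from.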

• VikingExplorer says:

>> increase computational capability in all dimensions by thirty orders of magnitude
I think you're assuming that climatology is an extension of meteorology. It's not. The problem will likely be solved completely differently than you and others are imagining. I could list all the examples where it was not necessary to model reality at the molecular level, but it would be all of science, and you really should be smart enough to see that. However, only in the case of climatology do people imagine that it's going to require that.

• "I think you're assuming that climatology is an extension of meteorology. It's not. The problem will likely be solved completely differently than you and others are imagining."

State-dependent simulations vs non-state-dependent simulations – but that's really the difference between detecting the pause in the simulator and not even bothering. Which is ultimately what they're doing with MODTRAN: a change in CO2 causes x warming. That's really state-dependent, it's just very incomplete.

117. notthatdumb says:

lsvalgaard, get Kerbal Space Program, play with it, and notice the effect of changing the time step for the simulation: the bigger the time step, the larger the deviations.
Or get "Universe Sandbox" and play about throwing planets around in orbits; try to reproduce each test with differing time rates (the time delta between calcs).
Climate simulations are presently way out, partly due to the time delta being way too large and the grids way too large.
Any system that has feedback loops is very, very sensitive to this and decays into chaos.
Orbital calcs are trivial compared to climate feedback problems.

• "Orbital calcs are trivial compared to climate feedback problems."
=============
agreed, yet even orbital mechanics are far from trivial. we still lack an efficient solution for the orbit of 3 or more bodies not in the same plane.
the current solution involves the sum of infinite series, which is not really practical unless you have infinite time.
And that is just 3 bodies interacting under a simple power law. And for all intents and purposes, no computer on earth can solve this.
So what happens? We approximate. We made something that looks like it behaves like the real thing, but it doesn’t. It is a knock off of reality. Good enough to fool the eye, but doesn’t perform when it counts.
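[Editor's note: the time-step point above is easy to demonstrate with a toy integrator. A minimal sketch, not a real orbital mechanics code: explicit Euler integration of a circular two-body orbit in normalized units (GM = 1, radius 1 – my choices, not anything from the thread). A perfect integrator would keep the radius at exactly 1.0; the coarser the time step, the further the computed orbit drifts from that.]

```python
import math

def euler_orbit(dt, steps):
    """Integrate a unit circular orbit with explicit Euler; return the final radius."""
    x, y = 1.0, 0.0      # start on the unit circle
    vx, vy = 0.0, 1.0    # circular-orbit speed for GM = 1, r = 1
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3          # inverse-square gravity
        x, y = x + vx * dt, y + vy * dt    # position update
        vx, vy = vx + ax * dt, vy + ay * dt  # velocity update
    return math.hypot(x, y)

# Same total simulated time (10 time units), two different step sizes:
coarse = euler_orbit(dt=0.1, steps=100)
fine = euler_orbit(dt=0.001, steps=10000)

# The coarse run drifts much further from the true radius of 1.0:
print(abs(coarse - 1.0) > abs(fine - 1.0))  # → True
```

Both runs are approximations, and both spiral slowly outward (plain Euler always gains energy on this problem), which is exactly the "knock-off of reality" point: the error never vanishes, it only shrinks with the step size.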

118. catweazle666 says:

Anyone who claims that an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.
Ironically, the first person to point this out was Edward Lorenz – a climate scientist.
You can add as much computing power as you like, the result is purely to produce the wrong answer faster.
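[Editor's note: the "extreme sensitivity to initial conditions" that catweazle666 and Lorenz describe can be seen in a few lines. A minimal sketch using the logistic map, a textbook chaotic system chosen for illustration only, not a climate model: two runs whose starting values differ by one part in a billion end up on completely different trajectories.]

```python
# The logistic map x -> r*x*(1-x) with r = 4.0 is fully chaotic:
# nearby trajectories separate roughly by a factor of two per step.
def max_divergence(x0, eps, n, r=4.0):
    """Run two trajectories, one perturbed by eps, and track the largest gap."""
    a, b = x0, x0 + eps
    gap = 0.0
    for _ in range(n):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        gap = max(gap, abs(a - b))
    return gap

# A perturbation of one part in a billion grows by a factor of a million or more:
print(max_divergence(0.4, 1e-9, 60) > 1e-3)  # → True
```

No amount of extra computing power fixes this; it only lets you watch the trajectories diverge in more detail.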

119. Andrew says:

GIGO – Garbage In, Gospel Out.

120. Steve in SC says:

Computers will allow you to get the wrong answer 100 times faster.

121. Allen63 says:

As one who did science-related computer modeling (when I worked for a living), I completely agree with the author’s points.

122. sciguy54 says:

As a structural engineer who turned to the dark side of software “engineering” I will add one little observation to the many excellent ones which I have seen above (and I admit that I have not read them all)…
The worst nightmare for the programmer, tester and user is the subtle error leading to a result which meets all expectations… but which is woefully incorrect. For if the output is unexpected, then the design team will tear its hair out and burn the midnight oil to find the “problem” and beat it into submission. But if the numbers come out exactly as expected, then congratulations will flow up and down through the team and due diligence may soon fall on the sword of expediency.

123. aophocles says:

Floating-point computation is by nature inexact, and it is not difficult to misuse it so that the computed answers consist almost entirely of 'noise'.
One of the principal problems of numerical analysis is to determine how accurate the results of certain numerical methods will be; a 'credibility gap' problem is involved here: we don't know how much of the computer's answers to believe.
Novice computer users solve this problem by implicitly trusting in the computer as an infallible authority; they tend to believe all digits of a printed answer are significant.
Disillusioned computer users have just the opposite approach; they are constantly afraid their answers are almost meaningless.

Excerpt from The Art of Computer Programming by Donald E. Knuth, on errors in floating-point computations.
All programming-language mathematical libraries suffer from built-in errors from truncation and rounding. These combine with errors in the input data to produce inexact output.
If the result of a computation is fed back to the input to produce “next year’s value”, the collective error grows rapidly, if not exponentially. After a few iterations, the output is noise.
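The rounding part of this is simple to see in any language with binary floating point. A minimal Python demonstration (plain doubles, nothing exotic): 0.1 has no exact binary representation, so feeding the running total back through a million additions accumulates a visible discrepancy:

```python
# 0.1 has no exact binary representation, so every addition injects a tiny
# rounding error; a million feedback iterations make it plainly visible.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
error = abs(total - 100000.0)
print(error)  # small, but decidedly nonzero
```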

• collective error grows rapidly if not exponentially.
=========
indeed that is the problem with iterative computations like climate models. The error compounds on each iteration, and thus grows exponentially.
for example, suppose each iteration is 99.99% correct. If your climate model has a resolution of 1 hour, which is extremely coarse compared to reality (so even 99.99% accuracy is optimistic), then after 10 years there is only about a 0.016% chance your result is correct (0.9999^(365×24×10) ≈ 1.6×10⁻⁴).
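Checking that arithmetic directly (a sketch that simply takes the hypothetical 99.99%-per-step figure at face value and treats the steps as independent):

```python
# If each of 10 years of hourly steps is independently 99.99% "correct",
# the chance that the whole run stays correct is vanishingly small.
steps = 365 * 24 * 10      # 87,600 hourly steps in 10 years
p = 0.9999 ** steps
print(p)  # ~1.6e-4, i.e. roughly a 0.016% chance
```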

• VikingExplorer says:

1) You’re assuming all the errors are going just one way.
2) You’re assuming an unreasonable time resolution. Since the minimum time scale for climate is 10 years, why on earth would we need a time step of only one hour? Are climate factors such as solar input, ocean circulation and orbits changing on an hourly basis?
The atmosphere is only .01% of the thermodynamic system, so why would you suggest we spend so much effort on something that really can’t make much difference?
It seems like you want the problem to be too difficult to solve. It seems like you’re afraid of what the answer would show, like deep down, you believe AGW is true.
I believe AGW is false, and I welcome further advances in climatology, since I’m confident that it will eventually show that AGW from CO2 is impossible.

• VikingExplorer says:

I’m wondering what the time step should be. We could use weather modeling as a guide. I think their time step is a minute. After 5 days, their accuracy falls off. 5 days / 1 minute = 7200 steps of accuracy.
If 1 day is like 10 years, then to get the next 5 climate states (5 decades), and if we assumed 7200 steps, the time step would be 60 hours.

• rgbatduke says:

Y’all need to get a bit of a grip on your disrespectin’ of computer solutions of an iterative nature. For one thing, there is an abundance of coupled ODE solvers out there that are really remarkably precise and that can advance a stable solution a very long ways before accumulating significant error. For another, there are problems where the answer one gets after accumulating significant error may produce e.g. a phase error but not necessarily make the error at distant times structurally or locally incorrect. For a third, in many problems we have access to conserved quantities we can use to renormalize away parts of the error and hence avoid ever reaching true garbage.
As somebody that routinely teaches undergrads to use built in ODE solvers and solve Newton’s Laws describing, for example, planetary motion to some tolerance, one can get remarkably good solutions that are stable and accurate for a very long time for simple orbital problems, and it is well within the computational capabilities of modern computers and physics to extend a very long time out even further than we ever get in an undergrad course by adding more stuff — at some point you make as large an error from neglected physics as you do from numerical error, as e.g. tidal forces or relativistic effects or orbital resonances kick in, and fixing this just makes the computation more complex but still doable.
That’s how we can do things like predict total eclipses of the sun a rather long time into the future and expect to see the prediction realized to very high accuracy. No, we cannot run the program backwards and determine exactly what the phase of the moon was on August 23, 11102034 BCE at five o’clock EST (or rather, we can get an answer but the answer might not be right). But that doesn’t mean the programs are useless or that physics in general is not reasonably computable. It just means you have to use some common sense when trying to solve hard problems and not make egregious claims for the accuracy of the solution.
There are also, of course, unstable, “stiff” sets of ODEs, which by definition have whole families of nearby solutions that diverge from one another, akin to what happens in at least one dimension in chaos. I’ve also spent many a charming hour working on solutions to stiff problems with suitable e.g. backwards solvers (or in the case of trying to identify an eigensolution, iterating successive unstable solutions that diverge first one way, then the other, to find as good an approximation as possible to a solution that decays exponentially a long range in between solutions that diverge up or down exponentially at long range).
One doesn’t have to go so far afield to find good examples of numerical divergences associated with finite precision numbers. My personal favorite is the numerical generation of spherical bessel functions from an exact recursion relation. If you look up the recursion relation (ooo, look, my own online book is the number two hit on the search string “recursion relation spherical bessel”:-)
$z_{\ell + 1}(x) = \frac{(2\ell + 1)}{x} z_\ell(x) - z_{\ell - 1}(x)$
you will note that it involves the subtraction of two successive bessel functions that are for most $x$ and $\ell$ of about the same order. That is, one subtracts two numbers with about the same number of significant figures to make a smaller number, over and over again because in the large $\ell$ limit the bessel functions get arbitrarily small for any fixed $x$.
Computers hate this. If one uses “real” (4 byte) floating point numbers, the significand has 23 or so bits of precision plus a sign bit. Practically speaking, this means that one has 7 or so significant digits, times an 8 bit exponent. For values of $x$ of order unity, starting with $j_0(x) = \sin(x)/x$ and $j_1(x) = \sin(x)/x^2 - \cos(x)/x$, one loses very close to one significant digit per iteration, so that by $\ell = 10$ one has invented a fancy and expensive random number generator, at least if one deals with the impending underflow or renormalizes. Sadly, forward recursion is useless for generating spherical bessel functions.
Does that mean computers are useless? No! It turns out that not only is backwards recursion stable, it creates the correct bessel function proportionalities out of nearly arbitrary starting data. That is, if you assume that $j_{21}(x) = 1$, $j_{22}(x) = 0$ — which is laughably wrong — and apply the recursion relation above backwards to generate the $\ell - 1$ function all the way down to $\ell = 0$ and save the entire vector of results, one generates an ordered vector of numbers that have an increasingly precise representation of the ratio between successive bessel functions. If one then computes just $j_0(x)$ using the exact formula above and normalizes the entire vector with the result, all of the iterated values in the vector are spherical bessel functions to full numerical precision (out to maybe $\ell = 10$ to $\ell = 12$, at which point the error creeps back in until it matches the absurd starting condition).
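The backward trick (Miller's algorithm) is short enough to sketch. The function names and the padding choice below are mine, and this is double precision rather than 4-byte floats, but the structure is exactly as described: recur downward from laughably wrong starting values, then normalize against the exact $j_0$:

```python
import math

def sph_j_forward(n, x):
    # Upward recursion z_{l+1} = (2l+1)/x * z_l - z_{l-1}: unstable for j_l,
    # because rounding error excites the exponentially growing Neumann solution.
    j = [math.sin(x) / x, math.sin(x) / x**2 - math.cos(x) / x]
    for l in range(1, n):
        j.append((2 * l + 1) / x * j[l] - j[l - 1])
    return j

def sph_j_backward(n, x, pad=15):
    # Miller's algorithm: start from arbitrary values well above n, recur
    # downward (which damps the bad solution), then normalize so the l = 0
    # entry matches the exact j_0(x) = sin(x)/x.
    top = n + pad
    z = [0.0] * (top + 2)
    z[top] = 1e-30          # arbitrary tiny seed; its error decays away
    for l in range(top, 0, -1):
        z[l - 1] = (2 * l + 1) / x * z[l] - z[l + 1]
    scale = (math.sin(x) / x) / z[0]
    return [v * scale for v in z[:n + 1]]

jb = sph_j_backward(10, 1.0)   # accurate to essentially full double precision
jf = sph_j_forward(10, 1.0)    # progressively contaminated as l grows
```

Comparing `jf` and `jb` element by element shows the forward values agreeing at low $\ell$ and drifting into garbage as $\ell$ approaches 10, even in double precision.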
Forward recursion is stable for spherical Neumann functions (same recursion relation, different initial values).
Books on numerical analysis (available in abundance) walk you through all of this and help you pick algorithms that are stable, or that have controllable error wherever possible. One reason to have your serious numerical models coded by people who aren’t idiots is so that they know better than to try to just program everyday functions all by themselves. It isn’t obvious, for example, but even the algebraic forms for spherical bessel functions are unstable in the forward direction, so just evaluating them with trig functions and arithmetic can get you in trouble by costing you a digit or two of precision that you otherwise might assume that you had. Lots of functions, e.g. gamma functions, have good and bad, stable and unstable algorithms, sometimes unstable in particular regions of their arguments so one has to switch algorithms based on the arguments.
The “safe” thing to do is use a numerical library for most of these things, written by people that know what they are doing and hammered on and debugged by equally knowledgeable people. The Gnu Scientific Library, for example, is just such a collection, but there are others — IMSL, NAG, CERNLIB — so many that wikipedia has its own page devoted to lists for different popular languages.
It is interesting to note that one of McIntyre and McKitrick’s original criticisms of Mann’s tree ring work was that he appeared to have written his own “custom” principal component analysis software to do the analysis. Back when I was a grad student, that might have been a reasonable thing for a physics person to do — I wrote my own routines for all of the first dozen or so numerical computations I did, largely because there were no free libraries and the commercial ones cost more than my salary, which is one way I know all of this (the hard way!). But when Mann published his work, R existed and had well-debugged PCA code built right in! There was literally no reason in the world to write a PCA program (at the risk of doing a really bad job) when one had an entire dedicated function framework written by real experts in statistics and programming both, debugged by being put to use in a vast range of problems and fixed as need be. Hence the result — code that was a “hockey stick selector” because his algorithm basically amplified noise into hockey sticks, where ordinary PCA produced results that were a whole lot less hockey stick shaped from the same data.
To conclude, numerical computation is an extremely valuable tool, one I’ve been using even longer than Eric (I wrote my first programs in 1973, and have written quite a few full-scale numerical applications to do physics and statistics computations in the decades in between). His and other assertions in the text above that experienced computational folk worry quite a bit about whether the results of a long computation are garbage are not only correct, they go way beyond just numerical instability.
The hardest bug I ever had to squash was a place where I summed over an angular momentum index m that could take on values of 0, 1, 2, 3, 4… all of which independently gave reasonable answers. I kept getting a reasonable answer that was wrong in the computation, fortunately in a place where I could check it, fortunately in a place where I was checking because at that point I wasn’t anybody’s fool and knew better than to think my code worked until I could prove it. Mostly. It turned out — after a couple of months of pulling my hair — that the error was that I had typed an n instead of an m into a single line of code. n was equal to zero — a perfectly reasonable, allowed value. It just gave me the wrong result.
I’ve looked at at least some of the general circulation model code. The code I looked at sucks. As far as I can see, it is almost impossible to debug. It is terribly organized. It looks like what it likely is — a bunch of code thrown together by generations of graduate students with little internal documentation, with no real quality control. Routines that do critical stuff come from multiple sources, with different licensing requirements, so that even this “open” source code was only sorta-open. It was not build-ready — just getting it to build on my system looked like it would take a week or two of hard work. Initialization was almost impossible to understand (understandably! nobody knows how to initialize a GCM!). Manipulating inputs, statically or dynamically, was impossible. Running it on a small scale (but adaptively) was impossible. Tracking down the internal physics was possible, barely, but enormously difficult (that was what I spent the most time on). The top-level documentation wasn’t terrible, but it didn’t really help one with the code! No cross reference to functions!
This is one of many, many things that make me very cynical about GCMs. Perhaps there is one that is pristine, adaptive, open source, uses a sane (e.g. scalable icosahedral) grid, and so on. Perhaps there is one that comes with its own testing code so that the individual subroutines can be validated. Perhaps there is one that doesn’t require one to login to a site and provide your name and affiliation and more in order to be allowed to download the complete build ready sources.
If so, I haven’t found it yet. Doesn’t mean that it doesn’t exist, of course. But CAM wasn’t it.
rgb

• VikingExplorer says:

Thank you rgb, that was excellent. My experience has some overlap with yours. I didn’t start coding until 1977. I agree 100% that numerical analysis is not impossible, contrary to what many on this site are implying. I started with Runge-Kutta around 1982 or so. These techniques have been around for about 115 years.
As a generator engineer, we used numerical methods extensively for analysis and design, including detailed electro magnetics, thermodynamics and stress analysis to enable weight reduction. Anyone who crosses a bridge or flies an airplane is trusting their life to the quality of this kind of work.
Some people are willing to express falsehoods in order to advance a political agenda. I am not one of those people.

• rgbatduke says:

I’m wondering what the time step should be. We could use weather modeling as a guide. I think their time step is a minute. After 5 days, their accuracy falls off. 5 days / 1 minute = 7200 steps of accuracy.
If 1 day is like 10 years, then to get the next 5 climate states (5 decades), and if we assumed 7200 steps, the time step would be 60 hours.

Sadly, you are going exactly the wrong way in your reasoning, seriously. Suppose weather models used ten minutes as a time step instead of 1 minute (or more likely, less). Are you asserting that they would be accurate for longer than 5 days? If they used 100 minutes, longer still?
You are exactly backwards. To be able to run longer, accurately, they have to use shorter timesteps and a smaller-scale spatial grid (smaller cells), not larger and larger. The stepsize, BTW, is determined by the size of the spatial cells. Atmospheric causality propagates at (roughly) the speed of sound in air, which is (roughly) 1 km in 3 seconds. If one uses cells 100 km times 100 km (times 1 km high, for no particularly good reason), that is 300 seconds for sound to propagate across the cell, hence timesteps of 5 minutes. A timestep of 1 minute just means that they are using 20 km times 20 km cells, which means that their cells are actually on the high side of the average thunderstorm cell. To get better long term weather predictions, they would probably need to drop the cell size by another order of magnitude and use e.g. 2 km times 2 km or better yet 1 x 1 x 1 km^3. But that means one step in the computation advances it 3 seconds of real time. A day is 86,400 seconds. A week is around 600,000 seconds (604,800 if you care). That means that to predict a week in advance at high resolution, they have to advance the computation by around 200,000 timesteps. But now the number of cells to be solved is 100×100 = 10,000 times greater! So you might well take longer than a week to predict the weather a week ahead! Oops.
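The arithmetic in the paragraph above can be redone explicitly (rough figures: sound at roughly 1 km per 3 seconds; a 604,800-second week):

```python
# Timestep is set by the time for sound to cross one cell; count the steps
# needed to advance one week of real time at two grid resolutions.
WEEK_S = 7 * 24 * 3600            # 604,800 seconds in a week

def step_seconds(cell_km):
    return 3.0 * cell_km          # sound covers ~1 km per 3 s

coarse_steps = WEEK_S / step_seconds(100.0)   # 300 s steps at 100 km cells
fine_steps = WEEK_S / step_seconds(1.0)       # 3 s steps at 1 km cells
cell_factor = (100.0 / 1.0) ** 2              # 10,000x more grid columns too
print(coarse_steps, fine_steps, cell_factor)  # 2016.0 201600.0 10000.0
```

So shrinking cells from 100 km to 1 km multiplies the timestep count by 100 and the cell count by 10,000, a million-fold increase in work per simulated week.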
The spatiotemporal scale they use is a compromise (or optimization) between the number of cells, the time advanced per cell, the computational resources available to run the computation, and the need for adequate empirical accuracy in a computation that takes LESS than the real time being modelled to run. If they decrease the cell size (more cells), they can run fewer, shorter time steps in a constant computational budget and get better predictions for a shorter time. If they increase the cell size (fewer cells), they can run the computation for more, longer time steps and get worse predictions for a longer time. In both cases, there are strict empirical limits on how long in real world time the weather will “track” the starting point — longer for smaller cells, less time for larger cells.
The length scale required to actually track the nucleation and growth of defects in the system is known. It is called the Kolmogorov scale, and is around a couple or three millimeters – the size of the micro-eddies in the air generated by secular energies, temperatures and wind velocities. Those eddies are the proverbial “butterfly wings” of the butterfly effect, and in principle this is the point where the solutions to the coupled ODEs (PDEs) become smooth-ish, or smooth enough to be integrated far into the future. The problem is that to reach the future now requires stupendous numbers of timesteps: the time scale is now the time required for sound to cross 3 mm, call it 10 microseconds, so advancing real time by 1 second requires 100,000 steps, and the number of 3x3x3 mm cells of atmosphere alone (let alone the ocean) is a really scary one, lotsa lotsa digits. As I said, there isn’t enough compute power in the Earthly universe to do the computation (or even just hold the data per cell), and won’t be in the indefinite future.
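A back-of-envelope count makes "lotsa lotsa digits" concrete. The surface area below is Earth's, and the 10 km atmospheric depth is my own round assumption:

```python
# Rough count of cells and steps for a millimetre-scale global grid.
SURFACE_M2 = 5.1e14        # Earth's surface area in m^2
DEPTH_M = 1.0e4            # assumed depth of atmosphere to resolve, m
CELL_M = 3e-3              # ~3 mm Kolmogorov-scale cells
DT_S = 1e-5                # ~10 microseconds for sound to cross 3 mm

n_cells = SURFACE_M2 * DEPTH_M / CELL_M**3    # ~2e26 cells
steps_per_second = 1.0 / DT_S                 # ~100,000 steps per second
print(n_cells, steps_per_second)
```

Roughly 10²⁶ cells, each advanced 100,000 times per simulated second: far beyond any plausible hardware.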
You also then have to deal very seriously with the digitization error, because you have so many timesteps to integrate over. It would require phenomenal data precision to get out a week (10^5 steps per second times ~6×10^5 seconds is ~6×10^10 timesteps, lots of chance to accumulate error). To get out 50 years (around 2,600 weeks) would be on the order of 10^14 timesteps, integrating over all of the cells.
Climate models are basically nothing but the weather models, run out past the week where they are actually accurate. It is assumed that, while they accumulate enough error to deviate completely from the observed weather – to where their “predictions” are essentially completely decorrelated from the actual weather – their average is in some sense representative of the average weather, or some kind of normative constraint around which the real weather oscillates. But there are problems.
One is that the climate models do not conserve energy. In any given timestep, the model “Earth” absorbs energy and loses energy. The difference is the energy change of the system, and is directly related to its average temperature. This energy change is the difference between two big numbers to make a smaller number, which is precisely where computers do badly, losing a significant digit in almost every operation. Even using methods that do better than this, there is steady “drift” away from energy balance, a cumulation of errors that left unchecked would grow without bound. And there’s the rub.
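The "difference between two big numbers" failure is visible in one line of double-precision arithmetic:

```python
# Adjacent doubles near 1e16 are already 2.0 apart, so adding 1 is lost
# outright before the subtraction even happens.
residual = (1e16 + 1.0) - 1e16
print(residual)  # 0.0 -- the small, physically meaningful part vanished
```

When the quantity of interest (a fraction of a watt per square metre of imbalance) is that small relative to the gross fluxes being differenced, this is exactly the regime a model's energy bookkeeping lives in.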
The energy has to change, in order for the planet to warm. No change is basically the same as no warming. But energy accumulates a drift that may or may not be random and symmetric (bias could depend on something as simple as the order of operations!). If they don’t renormalize the energy/temperature, the energy diverges and the computation fails. But how can they renormalize the energy/temperature correctly so that (for example) the system is not forced to be stationary and not basically force it in a question-begging direction?
Another is that there is no good theoretical reason to think that the climate is bound to the (arbitrarily) normalized average of many independent runs of a weather model, or even that the real climate trajectory is somehow constrained to be inside the envelope of simulated trajectories, with or without a bias. Indeed, we have great reason to think otherwise.
Suppose you are just solving the differential equations for a simple planetary orbit. Let’s imagine that you are using terrible methodology — a 1 step Euler method with a really big timestep. Then you will proceed from any initial condition to trace out an orbit, but the orbit will not conserve energy (real orbits do) or angular momentum (real orbits do). The orbit will spiral in, or out, and not form a closed ellipse (real orbits would, for a wide range of initial conditions). It is entirely plausible that the 1 step Euler method will always spiral out, not in, or vice versa, for any set of initial conditions (I could write this in a few minutes in e.g. Octave or Matlab, and tell you a few minutes later, although the answer could depend on the specific stepsize).
The real trajectory is a simple ellipse. A 4-5 order Runge-Kutta ODE solution with adaptive stepsize and an aggressive tolerance would integrate the ellipse, accurately for thousands if not tens of thousands of orbits, and would very nearly conserve energy and angular momentum, but even it would drift (this I have tested). A one step Euler with a large stepsize will absolutely never lead to a trajectory that in any possible sense could be said to resemble the actual trajectory. If you renormalize it in each timestep to have the same energy and the same angular momentum, you will still end up with a trajectory that bears little resemblance to the real trajectory (it won’t be an ellipse, for example). There is no good reason to think that it will even sensibly bound the real trajectory, or resemble it “on average”.
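That comparison is easy to sketch in code. A minimal circular-orbit test (dimensionless units with GM = 1; a rough illustration of the energy-drift contrast, not a claim about any production solver):

```python
import math

def accel(x, y):
    # Inverse-square gravity with GM = 1 (dimensionless units).
    r = math.hypot(x, y)
    return -x / r**3, -y / r**3

def energy(s):
    x, y, vx, vy = s
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

def euler_step(s, dt):
    # One-step Euler: advance position and velocity with current values.
    x, y, vx, vy = s
    ax, ay = accel(x, y)
    return (x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)

def rk4_step(s, dt):
    # Classic 4th-order Runge-Kutta on the same equations of motion.
    def f(st):
        x, y, vx, vy = st
        ax, ay = accel(x, y)
        return (vx, vy, ax, ay)
    def nudge(st, k, h):
        return tuple(si + h * ki for si, ki in zip(st, k))
    k1 = f(s)
    k2 = f(nudge(s, k1, dt / 2))
    k3 = f(nudge(s, k2, dt / 2))
    k4 = f(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Circular orbit: r = 1, v = 1, so E = -0.5 and the period is 2*pi.
s0 = (1.0, 0.0, 0.0, 1.0)
dt = 0.01
steps = int(10 * 2 * math.pi / dt)   # ten full orbits
se, sr = s0, s0
for _ in range(steps):
    se = euler_step(se, dt)
    sr = rk4_step(sr, dt)
drift_euler = abs(energy(se) - energy(s0))
drift_rk4 = abs(energy(sr) - energy(s0))
print(drift_euler, drift_rk4)   # Euler drifts badly; RK4 barely at all
```

After just ten orbits the Euler "planet" has visibly spiralled away from its energy surface, while the RK4 trajectory conserves energy to many digits; both, however, drift eventually.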
This is for a simple problem that isn’t even chaotic. The ODEs are well behaved and have an analytic solution that is a simple conic section. Using a bad method to advance the solution with an absurdly large stepsize relative to the size of secular spatial changes simply leads to almost instant divergence due to accumulating error. Using a really good method just (substantially) increases the time before the trajectory starts to drift unrecognizably from the actual trajectory, and may or may not be actually divergent or systematically biased. Using a good method plus renormalization can stretch out the time or avoid any long-run divergence, but also requires prior knowledge of the constant energy and angular momentum one has to renormalize to. Lacking this knowledge, how can one force the integrated solution to conserve energy? For a problem where the dynamics does not conserve energy (because that is the point, is it not, of global warming as a hypothesis), how do you know what energy to renormalize to?
For a gravitational orbit with a constant energy, that is no problem. Or maybe a medium sized problem, since one has to also conserve angular momentum and that is a second simultaneous constraint and requires some choices as to how you manage it (it isn’t clear that the renormalization is unique!).
For climate science? The stepsizes are worse than the equivalent of 1-step Euler with a largish stepsize in gravitational orbits. The spatial resolution is absurd with cells far larger than well-known secular energy transport processes such as everyday thunderstorms. The energy change per cell isn’t particularly close to being balanced and has to be actively renormalized after each step, which requires a decision as to how to renormalize it, since the cumulative imbalance is supposed to (ultimately) be the fraction of a watt/m^2 the Earth is slowly cumulating in e.g. the ocean and atmosphere as an increased mean temperature, but some unknown fraction of that is numerical error. If one enforced true detailed balance per timestep, the Earth would not warm. If one does not, one cannot resolve cumulative numerical error from actual warming, especially when the former could be systematic.
All of this causes me to think that climate models are very, very unlikely to work. Because increasing spatiotemporal resolution provides many more steps over which to cumulate error in order to reach some future time, this could even be a problem where approaching a true solution is fundamentally asymptotic and not convergent, so that running it for some fixed time into the future reaches a minimum distance from the true solution and then gets worse for decreasing cell size (for a fixed floating point resolution). They aren’t “expected” to work well until one reaches a truly inaccessible spatiotemporal scale. This doesn’t mean that they can’t work, or that a sufficiently careful or clever implementation at some accessible spatiotemporal scale will not work in the sense of having positive predictive value, but the ordinary assumption would be that this remains very much to be empirically demonstrated and should never, ever be assumed to be the case because it is unlikely, not likely, to be true!
It is, in fact, much more unlikely than a 1 step Euler solution to the gravitational trajectory is a good approximation of the actual trajectory, especially with a far, far too large stepsize, with or without some sort of renormalization. And that isn’t likely at all. In fact, as far as I know it is simply false.
rgb

• VikingExplorer says:

>> Are you asserting that they would be accurate for longer then 5 days? If they used 100 minutes longer still?
Wow, rgb, you wrote way too much. Especially since your assumption at the very beginning is wrong.
I’m absolutely NOT asserting that. The premise that led you to put those words into my mouth seems to be that you can’t imagine a climate model being anything but a simple-minded extension of a weather model.
I’m asserting that a proper climate model would be completely different from a weather model. If (or when) I were to create a climate model, I would focus on factors such as solar variation, solar system orbital variations, ocean circulation, tilt variations, geophysics, etc.
The climate is not significantly affected by the atmosphere, which is only .01% of the thermal mass of land/sea/air system. I would model the main mechanisms of heat transfer, but otherwise, there would be no reason to go any further. This is the way engineering is normally done.
My point was that the weather model might give us a clue how many iteration steps we can go before significant error creeps in.

124. Shinku says:

As others have said, computers have a limited, finite floating point precision. An analog computer, by contrast, has effectively infinite resolution. That’s probably why US warships still use them.

• VikingExplorer says:

To everyone talking about floating point precision: if a software engineer finds numerical instability, he doesn’t throw up his hands and say the problem is unsolvable.
He first switches to double precision, which gives about 15–16 significant decimal digits. He should also use a commercial numerical library. If that’s not enough, consider using GMP, which has no practical limit to the precision.
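GMP itself is a C library (gmpy2 is one common Python binding); Python’s standard-library decimal module is enough to sketch the escalation path described above without any external dependency:

```python
from decimal import Decimal, getcontext

# In float64 the added 1 is swallowed: adjacent doubles near 1e20 are
# roughly 16,384 apart, so the difference comes back as exactly zero.
naive = (1e20 + 1.0) - 1e20            # 0.0

# With 50 significant decimal digits the same arithmetic is exact.
getcontext().prec = 50
big = Decimal(10) ** 20
exact = (big + Decimal(1)) - big       # Decimal('1')
print(naive, exact)
```

Whether raising precision rescues a particular unstable algorithm is a separate question; it buys digits, not stability.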

• Shinku says:

That is interesting. I always assumed we measly programmers are constrained to what CPU architectures are optimized for.

• Shinku says:

Wait a sec, after realizing something: computers can’t have infinite numerical precision. Computers will always have a finite amount of storage/memory/registers. Even if we converted every molecule of the machine into transistors, we would still have a finite number of digits! Oh well.

• VikingExplorer says:

Shinku,
No, GMP doesn’t use the built-in floating point math functionality. No one ever said “infinite”. I said “no practical limit”. If you had read the link, you would know that precision is limited by available memory. One can always get more memory.
The theoretical memory limits of 16-, 32- and 64-bit machines are as follows: 16-bit = 64 kilobytes; 32-bit = 4 gigabytes; 64-bit = 16 exabytes, or 18,446,744,073,709,551,616 bytes.
So, the whole argument that climate modeling is impossible because of floating point issues is complete crap.

First, I don’t know one way or the other whether GCMs are ultimately limited by numerical precision, so I’m going to ignore that topic.
But some nonrepeating, nonterminating numbers can be expressed in other ways, symbolically, like pi.
And don’t forget the very-long-word architectures: 128-bit and larger computers.

125. Rob Scovell says:

I have enjoyed reading this fascinating discussion. I am a software developer with a background in numerical analysis. However, I am very new to this whole area of climate models and I am wondering where I might get hold of the source code for one of the mainstream models so I can inspect it for myself.