An IT expert's view on climate modelling

Guest essay by Eric Worrall

One point struck me, reading Anthony’s fascinating account of his meeting with Bill McKibben. Bill, whose primary expertise is writing, appears to have an almost magical view of what computers can do.

Computers are amazing, remarkable, incredibly useful, but they are not magic. As an IT expert with over 25 years’ commercial experience, someone who has spent a significant part of almost every day of my life since my mid-teens working on computer software, I’m going to share some of my insights into this most remarkable device – and I’m going to explain why my experience of computers makes me skeptical of claims about the accuracy and efficacy of climate modelling.

First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience; it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs – it’s an eerie sensation, feeling your intellectual reach unfold and expand with the help of a machine.

But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result. More than anything, software is a mirror of the creator’s opinions. It might help you to fill in a few details, but unless you deliberately and very skilfully set out to create a machine which can genuinely innovate, computers rarely produce surprises. They do what you tell them to do.

So when I see scientists or politicians claiming that their argument is valid because of the output of a computer model they created, it makes me cringe. To my expert ears, all they are saying is that they embedded their opinion in a machine and it produced the answer they wanted it to produce. They might as well say they wrote their opinion into an MS Word document and printed it – here is the proof, see, it’s printed on a piece of paper…
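To make the point concrete, here is a deliberately trivial, entirely hypothetical sketch (not taken from any real climate code, and the numbers are invented) of how a programmer’s opinion becomes the model’s “answer”:

```python
# Hypothetical toy "model": the programmer's assumed sensitivity is the
# only thing that determines the headline result.
ASSUMED_SENSITIVITY = 3.0  # degrees C per CO2 doubling -- the author's opinion

def projected_warming(co2_doublings: float) -> float:
    # The "computation" just scales the built-in assumption.
    return ASSUMED_SENSITIVITY * co2_doublings

# The machine dutifully prints the opinion back, with extra authority.
print(projected_warming(1.0))
```

However elaborate the code around it, a hard-wired assumption like this can only ever come back out.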

My second thought is that it is very easy to be captured by the illusion that a reflection of yourself means something more than it does.

If people don’t understand the limitations of computers, if they don’t understand that what they are really seeing is a reflection of themselves, they can develop an inflated sense of the value the computer is adding to their efforts. I have seen this happen more than once in a corporate setting. The computer almost never disagrees with the researchers who create the software, or who commission someone else to write the software to the researchers’ specifications. If you always receive positive reinforcement for your views, it’s like being flattered; it’s very, very tempting to mistake flattery for genuine support. This is, in part, what I think has happened to climate researchers who rely on computers. The computers almost always tell them they are right – because they told the computers what to say. But it’s easy to forget that all that positive reinforcement is just a reflection of their own opinions.

Bill McKibben is receiving assurances from people who are utterly confident that their theories are correct – but if my theory as to what has gone wrong is correct, the people delivering the assurances have been deceived by the ultimate echo chamber. Their computer simulations hardly ever deviate from their preconceived conclusions – because the output of their simulations is simply a reflection of their preconceived opinions.

One day, maybe one day soon, computers will transcend the boundaries we impose. Researchers like Kenneth Stanley and Alex Wissner-Gross are investing their significant intellectual efforts into finding ways to defeat the limitations software developers impose on their creations.

They will succeed. Even after 50 years, computer hardware capabilities are growing exponentially, doubling every 18 months, unlocking a geometric rise in computational power – power to conduct ever more ambitious attempts to create genuine artificial intelligence. The technological singularity – a prediction that computers will soon exceed human intelligence and transform society in ways which are utterly beyond our current ability to comprehend – may only be a few decades away. In the coming years, we shall be dazzled by a series of ever more impressive technological marvels. Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model – will in the next few decades start to fall like skittles before the increasingly awesome computational power and software development skills at our disposal.

But that day, that age of marvels, the age in which computers stop being mere machines and become our friends and partners, maybe even part of us through neural implants – perfect memory, instant command of any foreign language, immediate recall of the name of anyone you talk to – that day has not yet dawned. For now, computers are just machines; they do what we tell them to do – nothing more. This is why I am deeply skeptical about claims that computer models created by people who already think they know the answer, who have strong preconceptions about the outcome they want to see, can accurately model the climate.

CodeTech
June 7, 2015 4:08 am

Well put, and I wholeheartedly agree.

billw1984
Reply to  CodeTech
June 7, 2015 6:42 am

Good points. Then add to that the following: 1. confirmation bias (related to the main point), 2. noble cause corruption, 3. peer pressure (both confirmation and condemnation), 4. the human tendency not to want to admit error, 5. the human tendency to extrapolate trends, 6. the need to get grants and please referees of grants and papers – and you have our present situation. Now, being a robot myself, I am not subject to any of these.
We have a very nice test coming over the next 10-15 years. The weak solar cycle may not play any role at all, but this will be a nice test to see if it does. The PDO seems to have switched, and now various climate agencies are saying the AMO may have as well. So, a very nice test of the relative strengths of the natural and CO2 forcings. I hope there are not any large volcanic eruptions in the next 10 years, as that would add another variable to the mix and make it harder to sort out.

Alcheson
Reply to  billw1984
June 7, 2015 5:38 pm

Bill, we had a nice test the past 15 years, and the CAGW models failed miserably. However, because the climate scientists have the ability to rewrite temperature history and make whatever temperature trend they desire, how confident are you that a cooling trend will be allowed to stand? Even if a cyclic cooling trend is underway (which I think it is, since weather around the world sure does seem an awful lot like it did in the 1970s), adjustments will be made to ensure the warming continues unabated, or at least long enough to have successfully put the political agenda into place. Once the agenda is complete, we will magically see cooling, and we are all to bow down to the world saviors.

climanrecon
Reply to  CodeTech
June 7, 2015 8:19 am

Sorry, but this article is very poor; the software engineering aspect of any computer-based model is insignificant, apart from the obvious need to avoid bugs. Climate models are mathematical representations of physics, chemistry and biology; those fields are where any weaknesses lie. Software engineers just translate things into code.
In principle climate models could run on hydraulic computers, but that would not allow plumbers to comment on their fidelity.

Retired Engineer Jim
Reply to  climanrecon
June 7, 2015 9:08 am

Exactly – the model is a set of equations. Ideally, well documented, with all known assumptions written down and with any limitations of application also noted. If it is an initial-value problem, the modeler needs to document how to define the initial value. If it is a boundary-value problem, then the means for defining the boundary values must be documented. And the majority of the problems are encountered / committed during the building of the model.
Ideally, the model is then handed to a group of software engineers independent of the modeller, and as was said, they “just code the model”. (It isn’t ever that easy.) An area in which more problems may arise is in the SE’s choice of solution techniques, especially if the SEs aren’t aware of any stability issues in the solution technique.
Where real serious problems arise is if the modeller(s) are also the “coders”. Then the modeller/coder can introduce all sorts of biases, intentionally or otherwise.
Been there, (unfortunately) done that, have the T-shirts.

Pseudo Idiot
Reply to  climanrecon
June 7, 2015 9:18 am

When a computer modeler chooses to model the geography of the planet at a resolution that’s too large to represent (for example) small clouds (all modelers are guilty of this, because of technical limitations), it is not a weakness of physics, chemistry or biology.
It means that the output of the models is useless because the models ignore (or attempt to approximate) important physical properties of nature, and the results have an unknowable level of error.
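The scale mismatch described above can be put in rough numbers. This is a back-of-the-envelope sketch with illustrative figures only (a 1-degree grid and a 1 km cloud are generic assumptions, not taken from any specific model):

```python
# Rough, illustrative numbers: a coarse GCM grid cell vs a small cumulus cloud.
EARTH_CIRCUMFERENCE_KM = 40_075.0
cells_around_equator = 360          # a 1-degree grid, for illustration

cell_width_km = EARTH_CIRCUMFERENCE_KM / cells_around_equator  # roughly 111 km
small_cloud_km = 1.0                # a small cumulus is on the order of 1 km

clouds_across_one_cell = cell_width_km / small_cloud_km
print(f"cell width ~{cell_width_km:.0f} km, "
      f"~{clouds_across_one_cell:.0f} cloud-widths per cell side")
```

Anything much smaller than a cell cannot be resolved directly; it has to be parameterized, i.e. approximated by the modeler’s assumptions.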

Reply to  climanrecon
June 7, 2015 9:24 am

Climanrecon – the software side of the models is more influential than you credit. The limitations on computation power, the grid sizes, interaction between grids and the workarounds needed to approximate the physics, thermodynamics, chemistry, etc are the reasons why a simulator is only a mirror of the people who programmed and ran it. I have been running computer simulations for 40 years and I can assure you that Eric’s article is very spot on.

RACookPE1978
Editor
Reply to  climanrecon
June 7, 2015 9:48 am

climanrecon

Climate models are mathematical representations of physics, chemistry and biology, those fields are where any weaknesses lie, software engineers just translate things into code.

Rather, climate models are mathematical approximations of assumed ideal textbook physics, chemistry and biology conditions, in an assumed ideal environment with assumed perfect uniformity, exactly modeled by the approximations assumed in populating that assumed perfect world.
For example, dust particles are finely modeled by approximations from a few samples of dust particles over California’s LA basin – because the original “climate models” were regional models for tracking dust plumes and car pollution clouds. And there are many “scientific papers” touting the improvement in dust particle modeling and the tiny parameters of the “selected average” dust particle for the global circulation models.
But are these assumed particle properties correct for the southern hemisphere, which has little land area and virtually no cars even on what little land is present?
Rather, the “model” is calibrated back to the stated condition that “the earth’s average albedo is 30%”. Unless it is 29.9995%. Or 30.0001%. Or has “greened down” as the world’s plants grow faster, higher, stronger, with more leaves ever earlier in the season – and is now 29.5% … all without the model ever changing.
The Big Government-paid, self-styled “scientists” are paid to create models that must create catastrophe. Or their political paymasters fire them.

Reply to  climanrecon
June 7, 2015 9:52 am

There was another group that had this supreme confidence in computer models. It was the MIT study that was commissioned by the Club of Rome. They made all these predictions based upon their computer models about what was going to happen to the world.
They were wrong in every single prediction.
Computers are the ultimate appeal to authority, but their ability to predict the future, especially something as complex as climate, is no better than reading entrails.

looncraz
Reply to  climanrecon
June 7, 2015 11:38 am

I’m a programmer who has examined climate model code (and worked on them) and I can tell you, without a doubt, that numerous assumptions are, indeed, built into the models – which are, indeed, just representations of the hypotheses which harbor said assumptions.
The basic layout of the models is quite neutral – often painstakingly so. The earlier (pre-AGW) models included very few assumptions not based on observational science, but they had a strict issue: their output was too cold, had little regional validity over time (in terms of predictive success), and feedback sensitivities were mostly dialed-in through brute-force means.
The modelers set out to correct the cold-running nature of the earlier models – to explain why warming was being seen relative to the model output. To do this, they simply looked for some variable that was increasing that they weren’t fully considering. CO2 and other GHGs came to the rescue! GHGs were on an ever-upward trajectory, and the curve more or less fit the model temperature trend discrepancy… if it were amplified with unknown atmospheric feedbacks. So the solution was to adjust the climate’s temperature sensitivity to, mostly, CO2, create an artificial baseline from which historic feedbacks were considered ‘balanced,’ and then use the CO2/GHG variable(s) to lead to a warming offset.
This is the epitome of confirmation bias in design, but it sits on top of a system that already had little regional predictive validity and used numerous assumed values as starting points, feedbacks, and changing inputs.
Think of it this way:
If a bicycle slows down going downhill for some unknown reason and I model the bicycle on a computer and the model says the bicycle should be speeding up I have a LOT of variables with which to play. But, I also have some data. The things I know are that the brakes are not activated, the hill is 5 degrees steep, the tires are inflated, there is very little wind, and it is in sixth gear.
From this, I can only assume that there is some kind of drag: maybe the rider, or a bent rim, a parachute, something. So I create a model which calculates the drag being experienced on the bike, and I calculate that the rider must be about 6′ tall and weigh 230 lbs or so, and was traveling at a high enough rate of speed that the hill was not enough to overcome the rider’s drag.
Then, the observations come in, and the bike turns out to be on the back of a truck, and the truck is slowing down. There is no rider.

Reply to  climanrecon
June 7, 2015 12:08 pm

“Rather, climate models are mathematical approximations of assumed ideal textbook physics, chemistry and biology conditions, in an assumed ideal environment with assumed perfect uniformity, exactly modelled by the approximations assumed in populating that assumed perfect world.”
Approximations of assumed ideal physical models of some processes, done in isolation and ignoring many of the interactions of those processes, or processes we do not fully understand yet.

Reply to  climanrecon
June 7, 2015 4:21 pm

Climate models are mathematical representations of physics, chemistry and biology, those fields are where any weaknesses lie, software engineers just translate things into code.

That’s not entirely accurate; the models are gross simplifications of our limited understanding of the physics, chemistry and biology, and of the interaction of those fields. Computers, when they compute the grossly simplified models, do so with representations of numbers that have only about 15 digits of accuracy, and with math libraries that introduce inaccuracies of their own. Given the number of cells in the models, the number of times the functions are iterated, and data that is less than perfect, it is inevitable that the models spiral out of control. All of this was covered far better than I can explain by Edward Norton Lorenz in his paper “Deterministic Nonperiodic Flow”.
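Lorenz’s point can be demonstrated in a few lines. This is a crude Euler integration of his 1963 system (not production numerics): two runs differing by one part in a billion in the initial state end up far apart.

```python
# Lorenz 1963 system, classic parameters, crude fixed-step Euler integration.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturbed by one part in a billion
for _ in range(30_000):      # integrate to t = 30
    a = lorenz_step(*a)
    b = lorenz_step(*b)

# The tiny initial difference has grown by many orders of magnitude.
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)
```

Deterministic equations, identical code, and yet the two trajectories diverge: sensitive dependence on initial conditions, not a bug.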

ferdberple
Reply to  climanrecon
June 7, 2015 5:56 pm

software engineers just translate things into code
========================================
If that was true, then google translate could write code and there would be no need for software engineers.
The simple fact is that for all their speed, computers are still hopelessly slow for most practical, real-world problems. For example, wind tunnels: why do we have them? Why not simply model the performance of new cars and planes on computers? After all, it is just the solution of mathematical equations. Load the 3D shape into the computer and solve. What is the big deal?
The big deal is that the real world is nowhere near as simple as model builders would have you believe. Models can detect obvious stinkers of designs, but so can the trained human eye, often with much more speed and accuracy. In the end, models are not a replacement for the real thing.

Paul Sarmiento
Reply to  climanrecon
June 7, 2015 9:33 pm

touche!!! I was planning to comment with this very same observation but was too lazy to formulate how I would say it.
In addition to what you said, models are like blenders: they can be used to puree a lot of things, but the taste of what comes out depends highly on what goes in. You can make it sweet or bitter, salty or sour, and anything else in between. But no, the blender does little to the taste, only to the texture.

mellyrn
Reply to  climanrecon
June 8, 2015 3:48 am

climanrecon, first assume a perfectly spherical cow.
“Spherical cow” has a wikipedia entry if you don’t get it.

Reply to  climanrecon
June 8, 2015 5:36 am

Looncraz,

The basic layout of the models is quite neutral – often painstakingly so. The earlier (pre-AGW) models included very few assumptions not based on observational science, but they had a strict issue: their output was too cold, had little regional validity over time (in terms of predictive success), and feedback sensitivities were mostly dialed-in through brute-force means.

While reading a GCM spec/ToE a long time ago, they stated that the difference between the earlier models that ran cold compared to surface measurements, and the current models was that they allowed a supersaturation of water vapor at the atmosphere – surface boundary. This is the source of the high Climate Sensitivity in models, which they used aerosols to fit to the past.
Is this your experience as well? I can’t find whatever it was that I was reading.

cd
Reply to  climanrecon
June 8, 2015 6:35 am

Totally agree. The article is based on a fallacious assumption, akin to claiming that writing code and writing an article are the same because they both involve writing. This may be Eric’s assumption, but that’s how it reads.

cd
Reply to  climanrecon
June 8, 2015 6:37 am

Sorry that should’ve been “This may not have been Eric’s assumption…”

DP111
Reply to  climanrecon
June 8, 2015 3:54 pm

Quite right. This article is about software engineering and not the models: the physical basis of the models, the expressions that underlie the mechanics, electromagnetics and physics of the parameters that were considered important, and why.
The other aspect is the approximations that occur when the mathematics of a complex process that is ill understood itself, is projected onto a finite grid. Lots of problems need to be understood first by an analytical approach to a simplified version of the real problem, before embarking on the real stuff.
Fortunately for the AGWCC lot, they are not working for a private company designing state-of-the-art aircraft or some such. In fact, no private engineering company has the luxury of getting things so wrong, so often, and so predictably.
As for the software aspects of the problem, the author makes valid points.

Surfer Dave
Reply to  climanrecon
June 8, 2015 10:45 pm

Agreed, the article is quite poor, but your point is misinformed too. Digital computers are actually very poor at handling numbers other than integers. Every ‘floating point’ calculation uses imprecise representations of decimal numbers, and each time a calculation occurs there is an error between the ‘true’ result and the value held in the computer. This has been known for a long time. There are ways to track the accumulated errors; however, years ago I looked at some of the climate model source codes, and they invariably relied on the underlying computer’s number representation and made absolutely no effort to account for the accumulation of errors. Since the models perform literally millions of iterations, the initially small errors propagate into larger and larger errors. For this reason alone, the models can be discounted entirely. So, even if the programmers implemented precisely what the ‘climate scientists’ wanted, there will be errors. A very good reference for digital number systems and their inherent problems is Donald Knuth’s ‘The Art of Computer Programming’. The worst cases occur where ‘floating point’ numbers are added or subtracted.
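The accumulation Surfer Dave describes is easy to demonstrate. This sketch just sums 0.1 a million times; 0.1 has no exact binary representation, so each addition contributes a small rounding error:

```python
import math

n = 1_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1  # 0.1 is not exactly representable in binary floating point

# Error-compensated summation for comparison (tracks the lost low-order bits).
compensated = math.fsum([0.1] * n)

print(naive - 100000.0)        # nonzero drift after a million additions
print(compensated - 100000.0)  # far smaller residual
```

A model iterating millions of time steps faces exactly this effect in every accumulating variable, unless it uses compensated summation or similar techniques.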

Reply to  climanrecon
June 12, 2015 5:57 am

If anyone is still reading this, I think I’ve found the part of the code I’ve mentioned,

3.3.6 Adjustment of specific humidity to conserve water
The physics parameterizations operate on a model state provided by the dynamics, and are
allowed to update specific humidity. However, the surface pressure remains fixed throughout
the physics updates, and since there is an explicit relationship between the surface pressure and
the air mass within each layer, the total air mass must remain fixed as well. This implies a
change of dry air mass at the end of the physics updates. We impose a restriction that dry air
mass and water mass be conserved as follows:

I think conserving water mass means that if rel humidity exceeds 100% it does not reduce the amount of water vapor.
Here’s the link to the CAM doc http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

looncraz
Reply to  micro6500
June 12, 2015 3:14 pm

micro6500:
“I think conserving water mass means that if rel humidity exceeds 100% it does not reduce the amount of water vapor.”
That’s how I read it as well.
It would take some serious investigation to verify that this represents the known metastability of supersaturation that actually does exist – or if it exceeds observations. If it permits excessive supersaturation, then it is not one of the “desirable attributes” for the atmosphere at the boundary layer (Pg 64).
However, if it does not exceed observations, then it may well be even more accurate, thanks to taking the transient atmospheric water vapor supersaturation states into account. I would bet, though, that this would become fog, dew, or clouds extremely quickly in the real world – and in all situations below, say, 25,000km elevation. Cloud/fog/dew-forming nucleation catalysts/sites are abundant, especially near the surface, after all.
As far as supersaturation resulting in warming – it really can’t, long-term. During the day the warming air will almost certainly not be supersaturated. At night this is more likely, but it should also be rather temporary… so long as the extra energy retained is released by sun-up, there will be no short-term net warming. If some of the residual additional energy is retained from one day to the next, you will see temperatures climb – but you will also see the supersaturation state collapse as it does so. Any night where the supersaturated water vapor state does not remain and the residual energy is released will break the medium- or longer-term warming.
It would be really interesting to get a thorough analysis as to the algorithm’s applicability to the real world, for sure.

Reply to  looncraz
June 12, 2015 3:22 pm

As far as supersaturation resulting in warming – it really can’t, long-term. During the day the warming air will almost certainly not be supersaturated. At night this is more likely, but it should also be rather temporary… so long as the extra energy retained is released by sun-up, there will be no short-term net warming. If some of the residual additional energy is retained from one day to the next, you will see temperatures climb – but you will also see the supersaturation state collapse as it does so. Any night where the supersaturated water vapor state does not remain and the residual energy is released will break the medium- or longer-term warming.
It would be really interesting to get a thorough analysis as to the algorithm’s applicability to the real world, for sure.

It’s the source of the high CS, if I’m right. So first it adds water vapor in the tropics, which carries heat poleward, more than dry air alone can carry. Plus, at night, high relative humidity produces dew, some of which evaporates the next day; some of it ends up in the water table, no longer able to carry that portion of the heat further north.
Then, I don’t think there’s been much if any warming; it’s all the impact of the hot water vapor, and the location of the oceans’ warm spots that drive downwind surface temps – this is controlled by the AMO/PDO and the Niños.
How much heat was carried in all of that rain that dropped something like 12″ over the entire state of Texas? All that water was basically boiled out of the ocean and then carried maybe a thousand miles – think of the work that requires.
Seems to me it fits most of the pieces.

Reply to  micro6500
June 12, 2015 3:26 pm

That may be the reason the models are running hot, but you have not demonstrated that it was put in deliberately in order to get the models to run hot.

Reply to  lsvalgaard
June 12, 2015 3:30 pm

As I’ve mentioned, this is what I recall as how they fixed the cold models, and it was touted as the cure. But as for proof, the code would have to be documented (NASA model ???), as it goes back to Hansen, IIRC.

catweazle666
Reply to  lsvalgaard
June 12, 2015 5:57 pm

lsvalgaard: “That may be the reason the models are running hot, but you have not demonstrated that it was put in deliberately in order to get the models to run hot.”
So, given that the models are well known to have been running hot for some considerable time, what is your explanation for the apparent lack of any intent whatsoever to address the very clear inability to represent the true surface temperature?

Reply to  catweazle666
June 12, 2015 8:54 pm

First the models were running too cold, now they are running too hot, third time, good time, they may find the happy medium, but you change nilly-willy every other day. But at some time in the future they might wake up.

Reply to  catweazle666
June 12, 2015 9:25 pm

You don’t change nilly-willy …

Reply to  climanrecon
June 12, 2015 3:32 pm

BTW, they documented it in the CAM docs, so it was definitely intentional. And it sounds unnatural.

Reply to  micro6500
June 12, 2015 5:02 pm

I think what scientists do is mostly intentional.

E.M.Smith
Editor
Reply to  CodeTech
June 7, 2015 4:59 pm

Well, I have 35 years of such experience, beating you by 10, and I must state categorically and for the record that you are exactly right and accurate.
Every program has a specification of what it is to do – either formal, or as the ideas floating around in someone’s mind. The program only implements that preconceived notion. But it does it blindingly fast.

ferdberple
Reply to  CodeTech
June 7, 2015 6:03 pm

Here is the first known example of climate models predicting the future:
http://en.wikipedia.org/wiki/Clever_Hans
Hans was a horse owned by Wilhelm von Osten, who was a gymnasium mathematics teacher, an amateur horse trainer, phrenologist, and something of a mystic.[1] Hans was said to have been taught to add, subtract, multiply, divide, work with fractions, tell time, keep track of the calendar, differentiate musical tones, and read, spell, and understand German. Von Osten would ask Hans, “If the eighth day of the month comes on a Tuesday, what is the date of the following Friday?” Hans would answer by tapping his hoof. Questions could be asked both orally, and in written form. Von Osten exhibited Hans throughout Germany, and never charged admission. Hans’s abilities were reported in The New York Times in 1904.[2] After von Osten died in 1909, Hans was acquired by several owners. After 1916, there is no record of him and his fate remains unknown.

Bazza McKenzie
Reply to  ferdberple
June 8, 2015 2:29 am

I’m pretty sure he died.

Reply to  ferdberple
June 9, 2015 11:54 am

Hans offers a most interesting example of confirmation bias. He seems to have picked up cues from onlookers as to when he should stop tapping his foot, confirming expectations of his cleverness.

ColA
Reply to  CodeTech
June 7, 2015 10:38 pm

My German Professor who first lectured me in Computers when the very first Apple PCs came out was a very wise old man and I always remember his favourite saying:-
“Shitzer in = Shitzer out” …… still just as relevant today as it was then!!!

Reply to  ColA
June 7, 2015 10:40 pm

It works the other way too:
good stuff in = good [or even better] stuff out.
Computer models can be useful too.

Paul
Reply to  ColA
June 8, 2015 6:37 am

“Computer models can be useful too.”
Agreed, assuming you know all of the pieces, and how each works, right?

Gerry Parker
June 7, 2015 4:14 am

“Enter desired output” is a phrase we use sometimes in hardware engineering when referring to certain software programs. This can be a reference to the function of the software, or the performance of the SW team in response to pressure from management.
If you’re not catching it, that would be the first query the program would make of a user who was running the program.

John W. Garrett
June 7, 2015 4:14 am

Anybody who has any experience with computer modeling of anything knows Rule #1:
GIGO
“Garbage In, Garbage Out”
The entire financial implosion that occurred in 2007-2009 was entirely due to the gullible and credulous belief in the accuracy of computer-based simulations. The models worked perfectly; they did exactly what they were told to do. It was the assumptions that were wrong.
I’ve seen this movie a hundred times before.
“Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.”
-John von Neumann

emsnews
Reply to  John W. Garrett
June 7, 2015 4:54 am

Correct. This is true of all systems. What you put in is what you get in return, which is why real science relies on everyone tearing apart each other’s work. There is no autocorrect system; this is done via disputes, counterclaims, and demands for proof.
Which the climatologists claiming we are going to roast to death do not want at all.

Editor
Reply to  emsnews
June 7, 2015 6:05 am

Garbage In, Gospel Out.

billw1984
Reply to  John W. Garrett
June 7, 2015 6:45 am

No, the financial crisis was caused by the “fat cats”. Didn’t you get the memo?

Niff
June 7, 2015 4:17 am

As someone with 40+ years in IT…you are right….only low information, credulous types would think that a computer generated confirmation bias was….different.

June 7, 2015 4:18 am

That is exactly what I have been trying to tell the climate believers from the start. (I took my own systems-programming exam in ’71.) No computer on Earth can make better predictions than its systems programmers have made possible, and there are also limitations due to input, as well as whether the programmers have or haven’t included all needed parameters in their algorithm. Bad input -> bad output; missing parameter -> lower validity for every missed parameter. And above all – consensus is a political term which has nothing whatsoever to do with theories of science…

Alx
June 7, 2015 4:21 am

Computers are stupid in the sense that they do not think. They can do an amazing amount of work at amazing speeds and, being software driven, can perform tasks limited only by imagination.
But again, they are stupid: accidentally tell them there are 14 months in a year and they’ll process millions of transactions with lightening speed using 14 months in a year. So during my years in the software business I always remembered one thing: “To err is human, to really f**k up requires a computer.”
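A minimal sketch of the “14 months” point (a hypothetical payroll example with invented numbers; the constant name and figures are mine, not from any real system):

```python
# The typo the computer will never question: should be 12.
MONTHS_PER_YEAR = 14

def monthly_payment(annual_salary: float) -> float:
    # Happily divides by the wrong constant, at full speed, for every record.
    return annual_salary / MONTHS_PER_YEAR

payroll = [monthly_payment(s) for s in [42_000.0, 56_000.0]]
print(payroll)  # consistently, efficiently wrong
```

The program is fast, deterministic, and reproducible; none of those properties make it right.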

PiperPaul
Reply to  Alx
June 7, 2015 7:08 am

Lightening speed is when you are moving so fast that you are actually shedding weight. (sorry)

auto
Reply to  PiperPaul
June 7, 2015 1:47 pm

Piper
Shedding weight – but also shortening . . .
There was a young fencer called Fiske,
Whose action was exceedingly brisk.
So fast was his action
the FitzGerald contraction
Reduced his rapier to a disc.
Likewise – sorry!
Auto

June 7, 2015 4:24 am

“The first principle is that you must not fool yourself and you are the easiest person to fool.” – Richard P Feynman
Feynman had it right, but with computers we can now do it much better and faster.

SandyInLimousin
June 7, 2015 4:25 am

I spent the first 25 years of my working life as a test engineer programming ATE to test various PCBs, and components depending on who was my employer at the time. (I spent 25 years in the same building working for 4 companies and in 3 pension funds but that’s another sorry tale). The last 20 years writing business reports and management systems.
The end result is that, in my experience, the designer/developer only ever tests something to do what they have designed it to do. Give it to someone with the specification and say “see if this works”, and you can expect the first problem within a minute. The expression “I didn’t think of that” from a designer’s mouth becomes very familiar very quickly.

steverichards1984
Reply to  SandyInLimousin
June 7, 2015 5:14 am

Ferndown?

SandyInLimousin
Reply to  steverichards1984
June 7, 2015 8:43 am

Nottingham, covered my transition from Test to IT.

Keith Willshaw
Reply to  SandyInLimousin
June 7, 2015 7:13 am

Absolutely correct. As a software developer with over 30 years’ experience I ALWAYS ask for the allocated tester to be someone who knows NOTHING about the product. You give the tester the spec and some standard use cases. Invariably they find bugs, to which the developers’ response is either ‘why did they do that?’ or ‘well, the correct way to do that is obvious.’

Mr Green Genes
Reply to  Keith Willshaw
June 8, 2015 12:04 am

Exactly! The only way to test whether anything is foolproof is to let a fool loose on it. I have been that fool on many occasions and it’s amazing how easy it is to find flaws in some highly intelligent developer’s work. Mind you, I also know this from having my own work tested too …

DCE
Reply to  Keith Willshaw
June 8, 2015 8:15 am

Indeed. The problem with making things foolproof is that fools are so ingenious.
After 20 years working in defense and another 18 in telecommunications, I have seen many of the same results as Keith and the others. When it comes to software I have seen testers, people as described by Keith, find interesting and new ways of using a prototype product that caused all kinds of problems. One has to assume the user won’t be reading the user manual, so they’ll try things the designers and coders wouldn’t, and end up with a machine in a state the designers never considered. (I guess this would be called bias on the part of the designers/coders – “Well, we wouldn’t do it this way!”)
And so it goes with the computer climate models. Too many inputs with poorly understood parameters, not enough inputs with the proper granularity, and too many SWAGs assumed to be “the truth, the whole truth, and nothing but the truth”. Under those circumstances it’s far too easy to allow biases to creep in that invalidate the model’s results.

brians356
Reply to  SandyInLimousin
June 8, 2015 3:29 pm

Rules I tried to live by as a programmer:
1. Never think you can (or agree to) test your own code.
2. It takes ten times more effort to properly test code than it did to create it. (Corollary: The test division should have ten times the budget/staff of the programming division. Usually it has one tenth the staff.)
3. There’s no such thing as bug-free (non-trivial) code. Believe it. (Corollary: It’s almost impossible to prove code is bug-free.)

June 7, 2015 4:28 am

I disagree. The computer [more precisely its software] can make visible that which we cannot see. A couple of examples will suffice. The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence. The computer can, given the mass, the composition of a star, and nuclear reaction rates tell us how many neutrinos of what energy will be produced. The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.
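The kind of calculation lsvalgaard describes can be sketched in a few lines (a toy two-body integration of my own, not any real mission code – the circular orbit, step size, and step count are arbitrary choices):

```python
import math

# Propagate a body under Newtonian gravity.  The near-constant orbital
# radius that emerges is a consequence of the physics equations, not an
# answer typed in by the programmer.
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2

def propagate(pos, vel, dt, steps):
    """Semi-implicit (symplectic) Euler integration of a 2-D orbit."""
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -GM * x / r3, -GM * y / r3
        vx += ax * dt           # update velocity from acceleration...
        vy += ay * dt
        x += vx * dt            # ...then position from the new velocity
        y += vy * dt
    return (x, y), (vx, vy)

# Circular orbit at r = 7000 km: v = sqrt(GM/r)
r0 = 7_000_000.0
v0 = math.sqrt(GM / r0)
pos, vel = propagate((r0, 0.0), (0.0, v0), dt=1.0, steps=5400)
radius = math.hypot(*pos)
# After 90 simulated minutes the radius is still close to 7000 km:
# the conservation falls out of the equations, not the coder's wishes.
```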

Akatsukami
Reply to  lsvalgaard
June 7, 2015 4:34 am

But, Dr. Svalgaard, suppose the programmer inadvertently mis-programs the expected neutrino production rate for a given reaction?

Reply to  Akatsukami
June 7, 2015 4:43 am

Suppose he does not. Software has to be checked and double-checked, and validated by comparison with the real world. In the case of neutrinos, the programs turned out to be correct.

PiperPaul
Reply to  Akatsukami
June 7, 2015 7:13 am

Checking and double-checking output using the wrong assumptions will still result in errors. If only there was a good example of such behavior…

BFL
Reply to  Akatsukami
June 7, 2015 8:10 am

“Have you take a good look at the all of the technology required to embrace a driverless car? And why is it necessary when it really isn’t necessary at all, ”
Hard to believe that these will be successful in this age of massive litigation. Most likely after the first few fatalities/lawsuits they’ll disappear, unless of course they are supported by massive taxpayer subsidies like Amtrak.

Scott
Reply to  Akatsukami
June 7, 2015 8:15 am

How about when the Europeans (or was it Russians) and the U.S. sent a rocket to Mars. One set of calculations was done in the imperial system, and one in the metric system – results predictable….it crashed. So much for checked and double checked.
How many bugs are in each and every Windows operating system despite being checked and re-checked. Yes, it happens to Apple’s too, just not as many.

Retired Engineer Jim
Reply to  Akatsukami
June 7, 2015 9:17 am

Scott wrote:
“How about when the Europeans (or was it Russians) and the U.S. sent a rocket to Mars. One set of calculations was done in the imperial system, and one in the metric system – results predictable….it crashed. So much for checked and double checked.”
Regrettably, a NASA Mars probe. But that wasn’t a pure software failure – junior trajectory engineers noticed a divergence in the outbound trajectory and asked to do a third mid-course correction. Management decided to save the fuel to use for trajectory management while on orbit around Mars. There was, clearly, a software problem, but it might have been overcome.
Another Mars probe, a lander, failed when the landing engine shut off immediately after igniting. The best guess as to the cause was that the landing gear opened prior to engine ignition, and opened sufficiently strongly as to set the “landed” flag. However, that flag wasn’t cleared before engine ignition was initiated. So, was that a software problem, or a mechanical problem? Or a Use Case problem?

tty
Reply to  Akatsukami
June 7, 2015 10:12 am

“The computer [more precisely its software] can make visible that which we cannot see.”
I beg to differ. Each of the things you mention can be done with pen and paper, though it would take an impractically long time, and if we didn’t know how to do it with pen and paper we couldn’t write code to do it either.
It is true that the ability to run through a large number of cases quickly means that a computer can find peculiarities or singularities that were previously unknown, but I wouldn’t dream of trusting those results unless independently verified (and I don’t consider another software program as verification).
This is based on 40 years experience of writing and (mostly) testing complex, safety-critical software.

Reply to  tty
June 7, 2015 10:22 am

There is no difference between a supercomputer and pen and paper, or slide rules, or counting on fingers. Alan Turing showed that long ago.

Steve Garcia
Reply to  Akatsukami
June 7, 2015 10:14 pm

Akatsukami
June 7, 2015 at 4:34 am
But, Dr. Svalgaard, suppose the programmer inadvertently mis-programs the expected neutrino production rate for a given reaction?
Reply
lsvalgaard
June 7, 2015 at 4:43 am
suppose he does not. software has to be checked and double-checked. and validated by comparison with the real world.”
And part of the overall point here seems to be that it is tough enough when we CAN check and double-check against the real world – but that in climate science this checking isn’t possible. To the degree that it IS thought to be possible, much of the output disagrees with the real world, so then what happens? The climate people either adjust the results, shift past data downward, or play games with all of it in order to be able to claim that the real world is in agreement.
I agree with Richard Feynman, who said, “If the computed results disagree with experiment, then it [the basis for the computation – the hypothesis] is WRONG.”
But being wrong doesn’t work with people who don’t admit that their computations have been wrong. When even the real world isn’t seen as the arbiter, how is any of this even science?

Reply to  Steve Garcia
June 7, 2015 10:30 pm

The theory for stellar evolution and structure is well tested on thousands of stars. All scientists agree that the theory is solid.

D.J. Hawkins
Reply to  Akatsukami
June 8, 2015 2:30 pm

@lsvalgaard
Well, there you have it. By the time it’s been “well tested on thousands of stars” any of the programmers’ biases have been whittled away, leaving the theoretical and empirical core. And you’ve really not been paying attention. The topic is specifically about GCMs. Do you seriously propose that they have been subjected to the testing and refinement your stellar neutrino prediction software has been?

Reply to  D.J. Hawkins
June 8, 2015 2:53 pm

Since the program is just solving equations there is no programmer ‘bias’ anywhere.

Reply to  lsvalgaard
June 8, 2015 2:58 pm

Sure there is, if this wasn’t the case why did the older models run cold?

Reply to  micro6500
June 8, 2015 3:11 pm

The scientists had not yet gotten the model to work [and not by putting bias in it].
Later on it runs hot, so it still doesn’t work. Perhaps next time it will be OK.
The program code is available [at least for some models]. You just can’t put bias in undetected – and why would you? If discovered, your career would be over.

Reply to  lsvalgaard
June 8, 2015 3:48 pm

It doesn’t look like bias, it looks like “dark matter ” in galaxy simulations. And if they were using that to radically alter society, I would complain about that too.
Even if I’m inclined to accept DM.

eromgiw
Reply to  Eric Worrall
June 7, 2015 5:10 am

I think there are two issues here that maybe weren’t made clear in your article. One is the type of program that is simply solving an equation, or following an algorithm to calculate a result. The same calculation can be performed manually but it’s a lot cheaper to get the computer to do it. Ballistics, nuclear physics, orbital mechanics, etc.
The second is the modelling that is the subject of the article. This is where a multi-dimensional system is being represented as a matrix that has iterative calculations applied according to adjacent values probably using partial derivatives and such-like. Analogous to the cellular automata in the Game of Life. Each initial value in the matrix has a significant bearing on the outcome of the simulation after a large number of iterations. It is one thing to ‘calibrate’ these initial values and the constants in the iterations to match historical observed trends, but it is another thing for these to accurately predict the future. This is where the climate models have failed miserably. The modelling has not really incorporated the actual mechanisms that are driving climate, merely mimicked the past.
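The iterative-matrix behaviour eromgiw describes can be sketched with a toy coupled-map lattice (my own construction in the Game-of-Life spirit, not any actual climate code; the logistic map, coupling strength, and grid size are arbitrary):

```python
# A grid updated iteratively from adjacent cells.  A one-in-a-million
# tweak to a single initial value spreads and is amplified until the
# two runs disagree outright.
R, EPS = 3.9, 0.2          # chaotic logistic parameter, coupling strength

def step(grid):
    """One iteration: apply the local map, then diffuse with neighbours."""
    n = len(grid)
    f = [[R * g * (1.0 - g) for g in row] for row in grid]
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            nb = (f[i][(j - 1) % n] + f[i][(j + 1) % n] +
                  f[(i - 1) % n][j] + f[(i + 1) % n][j]) / 4.0
            new[i][j] = (1.0 - EPS) * f[i][j] + EPS * nb
    return new

def run(tweak, iters=60):
    n = 8
    grid = [[0.1 * ((i + j) % 5) + 0.05 for j in range(n)] for i in range(n)]
    grid[0][0] += tweak               # perturb one initial cell
    for _ in range(iters):
        grid = step(grid)
    return grid[4][4]                 # probe a cell far from the tweak

a, b = run(0.0), run(1e-6)
# The runs share the rules and 63 of 64 initial values, yet disagree --
# which is why matching the past says little about predicting the future.
```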

David Wells
Reply to  Eric Worrall
June 7, 2015 5:30 am

Did you see this: http://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/ or this: http://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/.
A bird flies just like that; humans create computer technology just like that; humans drive cars (maybe not well) just like that. If humans want to fly we need to burn kerosene at 26 gallons a second, and an Airbus A380 weighs 270,000 kg empty.
Have you taken a good look at all of the technology required to embrace a driverless car? And why is it necessary when it really isn’t necessary at all? Most people appear to want to imagine that a computer is some kind of miracle machine, when all it is, in effect, is a very large pocket calculator using technology which has already reached critical mass and needs to go biological to progress further.
This hype is all about what you want to do not what we need to do or have any use for.
Supercomputers may be brite but they are not bright and with only humans to impregnate them I doubt the point of your conviction or its time line.

Sal Minella
Reply to  Eric Worrall
June 7, 2015 6:08 am

When the day arrives that the computer does not act exactly as it is programmed, it will cease to be a useful tool. Unfortunately, many tasks executed by computers are flawed due to being misprogrammed as is abundantly demonstrated by climate models. An unexpected result from a computer is not a sign of intelligence but an indication of the [lack of] skill of the programmer.

hunter
Reply to  lsvalgaard
June 7, 2015 4:59 am

Dr. S,
The humans told the computer what to do, and the pre-arranged goal was to get to Pluto.

Reply to  hunter
June 7, 2015 5:04 am

No, the program was not written with that goal in mind. It will work with getting to any place, Mars, Jupiter, the Moon, etc.
A generic answer to several people: if you don’t know the physics you put in well enough, the program may not work well, or alternatively can be used to improve the knowledge about the physics. Again, neutrinos are a good example.

Reply to  Eric Worrall
June 7, 2015 5:24 am

So the problem is not with computer programs and models at all, but with delusional PEOPLE. And those there are lots of, just look around in this forum.

thisisnotgoodtogo
Reply to  hunter
June 7, 2015 6:05 am

“So the problem is not with computer programs and models at all, but with delusional PEOPLE. And those there are lots of, just look around in this forum.’
I’m looking at your posts so far, and what you’ve come up with now, seems to indicate that you were deluded about what the original post says.

mobihci
Reply to  hunter
June 7, 2015 7:25 am

NASA climate model guided rocket-

Ian Macdonald
Reply to  hunter
June 7, 2015 8:58 am

mobihci, If you’re going to have fun with rocket aerobatics, go upscale a bit:

So long as it’s at taxpayers’ expense, of course.

Sleepalot
Reply to  hunter
June 7, 2015 11:48 pm

@ Ian Macdonald

Leonard Weinstein
Reply to  lsvalgaard
June 7, 2015 5:15 am

lsvalgaard, you missed the main point. Orbital calculations can be accurate over short time periods (a few years) because we understand the required inputs, the equations adequately cover the issue, and approximations are sufficient for the forcings. However, even for these, over a long enough time they will diverge due to small interactions with distant planets and round-off accuracy. Even the best computer cannot solve a 3 (or more) body orbit problem over even a modestly long time if all three (or more) bodies are of significant size and interacting at the same time. In the case of climate, there are many causes of interactions, and several of these are not even fully understood. In the case of fluid mechanics, the equations are not capable of fully solving complex flows (such as high Reynolds number, time-varying, three-dimensional flows) at all, and simplified approximations are used, which are sometimes adequate, and often not. Climate is a far more complex problem and your example is a false one.
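Weinstein’s round-off point is easy to demonstrate (a standard chaos illustration, not his example – the parameter and iteration count are arbitrary): two algebraically identical forms of the same recurrence drift apart purely through floating-point rounding.

```python
# Iterate the chaotic logistic map x -> r*x*(1-x) two ways.
# The two forms are identical algebra, but round differently,
# and chaos amplifies the difference step by step.
r = 3.9
x1 = x2 = 0.5
for _ in range(100):
    x1 = r * x1 * (1.0 - x1)      # form A
    x2 = r * x2 - r * x2 * x2     # form B: same algebra, different rounding
# After ~100 steps the two 'identical' computations disagree --
# no simulation of a chaotic system outruns round-off forever.
```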

Reply to  Leonard Weinstein
June 7, 2015 5:21 am

Orbital calculations are good for a few million years, but eventually, of course, the uncertainty in the initial values catch up with you. But that is not the point. The point is that the models will work for limited time in any case [and the modelers are acutely aware of that]. And the programs are not written to ‘get the expected result’. To claim that they are betrays ignorance about the issue.

Leonard Weinstein
Reply to  Leonard Weinstein
June 7, 2015 5:44 am

Orbit calculations in some cases are good for millions of years, but most are not. Asteroids near Jupiter are a clear case where the Sun–Jupiter interaction makes any solution fail in a much shorter time. A planet orbiting in a close double-star system generally could not have accurate orbit calculations for more than a few orbits, no matter how good the initial data is. Once the number of significant interactions is large enough, these non-linear calculations fall apart fairly quickly.

Reply to  Leonard Weinstein
June 7, 2015 5:45 am

None of this has any bearing on whether computers only output what you put in, which is the main thesis of the article we are discussing.

billw1984
Reply to  Leonard Weinstein
June 7, 2015 6:52 am

Unless the expected result was to accurately calculate the results of Newtonian laws of motion
or the more modern (relativistic) forms of these equations. Your example is one where the math is known and there could be no possible reason to have a preferred outcome other than to accurately steer a space craft (or to just get the physics right). Sorry, Leif. You are a bit off on this one.

Reply to  billw1984
June 7, 2015 6:59 am

If the ‘expected result’ is to “accurately calculate the results of Newtonian laws of motion” then you might also claim that the ‘expected result’ of climate models is to accurately calculate the results of Atmospheric Physics as we know it. But I don’t think that was what Eric had in mind.

Reply to  Leonard Weinstein
June 7, 2015 5:50 pm

Lsvalgaard says “And the programs are not written to ‘get the expected result’. To claim that they are betrays ignorance about the issue.” Really? So the models were not programmed with water vapor and clouds being a net positive feedback??

Reply to  alcheson
June 7, 2015 6:10 pm

what is your evidence for that?
Feedback is supposed to emerge from running the model.

Reply to  Leonard Weinstein
June 7, 2015 5:54 pm

Also, Leif, if the physics in the models is so well understood, and all of them are producing nothing but accurate and predictable physics, why so many models? You only need one if you have all the physics correct. Obviously, each modeler has in his model everything HE personally thinks is important, thus it produces as output exactly what HE expects, which may or may not be close to reality.

Reply to  alcheson
June 7, 2015 6:12 pm

I see no evidence of that. Where is your evidence? The models are different because climate is a hard problem and there is value in trying different approaches, different resolutions, different parameters, etc.

Dems B. Dcvrs
Reply to  Leonard Weinstein
June 7, 2015 8:43 pm

lsvalgaard – “So the problem is not with computer programs and models at all, but with delusional PEOPLE.”
Mother Nature says you, climate models, and G.W. Climatologists are Wrong.
Thus, delusional people would be you and G.W. Climatologists.

Reply to  Dems B. Dcvrs
June 7, 2015 8:46 pm

Mother Nature says that the models are not working. That is all you can conclude.

Reply to  Leonard Weinstein
June 8, 2015 5:33 am

lsvalgaard
June 7, 2015 at 6:10 pm

what is your evidence for that?
Feedback is supposed to emerge from running the model.

While reading a GCM spec/ToE a long time ago, I saw it stated that the difference between the earlier models (which ran cold compared to surface measurements) and the current models was that the latter allowed a supersaturation of water vapor at the atmosphere–surface boundary. This is the source of the high climate sensitivity in models, which they used aerosols to fit to the past.
Now, just so you know, I’m a fan of simulation technology. I spent 15 years professionally supporting nearly a dozen types of electronic design simulators, including developing commercial models of Application Specific Integrated Circuit (ASIC) designs for the IC vendor, training and supporting over 100 different design organizations, and in 1986 designing a 300 MHz ECL ASIC for Goddard Spaceflight Center.
It was a common issue when building complex models of commercial large-scale integrated circuits that the modeler had to be careful to model what the vendor said their design actually did, not what the modeler – and in some cases the design guide – said the chip did.

BarryW
Reply to  lsvalgaard
June 7, 2015 5:31 am

Oh you mean like the prediction of solar cycle 24?

Reply to  BarryW
June 7, 2015 5:34 am

Yes, the very successful prediction of solar cycle 24, eleven years ago: http://www.leif.org/research/Cycle%2024%20Smallest%20100%20years.pdf

ren
Reply to  BarryW
June 7, 2015 6:01 am

I’ll play the prophet and say that the behavior of the Sun in this cycle (and the next) will keep on surprising us.

Latitude
Reply to  BarryW
June 7, 2015 6:09 am

lsvalgaard
June 7, 2015 at 5:45 am
None of this has any bearing on whether computers only output what you put in which is the main thesis of the article we are discussing.
=====
please try to remember to convert inches to centimeters

ren
Reply to  BarryW
June 7, 2015 6:11 am

“The polar field reversal is caused by unipolar magnetic flux from lower latitudes moving to the poles, canceling out opposite polarity flux already there, and eventually establishing new polar fields of reversed polarity [Harvey, 1996]. Because of the large aperture of the WSO instrument, the net flux over the aperture will be observed to be zero (the “apparent” reversal) about a year and a half before the last of the old flux has disappeared, as opposite polarity flux moving up from lower latitudes begins to fill the equatorward portions of the aperture. The new flux is still not at the highest latitudes where projection effects are the strongest. The result is that the yearly modulation of the polar fields is very weak or absent for about three years following the (apparent) polar field reversal. Only after a significant amount of new flux has reached the near-pole regions does the yearly modulation become visible again. This characteristic behavior is clearly seen in Figure 1. The four panels show the observed polar fields for each decade since 1970 (the start of each decade coinciding with apparent polar field reversals). Also marked are periods where the magnetic zero-levels were not well determined and noise levels were higher – at MWO (light blue at bottom of panel) before the instrument upgrade in 1982 and at WSO (light pink) during the interval November 2000 to July 2002. The difference between the amplitudes of the yearly modulation observed at MWO and at WSO is due to the difference in aperture sizes. At times, exceptional solar activity supplies extra (but shorter lived) magnetic flux “surges” to the polar caps, e.g., during 1991–1992 in the North. These events (both instrumental and solar) distract but little from the regular changes repeated through the four “polar-field cycles” shown (from reversal to reversal).”

thallstd
Reply to  BarryW
June 7, 2015 6:14 am

Dr Svalgard,
Like all programs, the models are written to apply a set of rules (logic/code) in conjunction with known (or assumed) constants (e.g. climate/temperature sensitivity to CO2) to a starting set of conditions/data (temp, rainfall, humidity etc). While the rules may reflect the best objective knowledge we have about how the various drivers and factors interact, the starting conditions and constants that the models are run under are at the discretion of whoever is running or commissioning the running of the model.
Has anyone correlated model outcome with the assumptions about, say climate sensitivity to CO2? Are the 95% of them that run high doing so because they assume a high sensitivity? What would they output if a lower sensitivity were used?
When the assumptions they are run with drive the outcome as much as the code they are written with does, claiming that “the programs are not written to ‘get the expected result’”, while perhaps true, is somewhat irrelevant, is it not?

Tom in Florida
Reply to  lsvalgaard
June 7, 2015 6:15 am

Doc,
Cannot a human do the same thing with a slide rule? It would just take longer.

KaiserDerden
Reply to  lsvalgaard
June 7, 2015 7:02 am

but the answer the computer/software gives in your example is based on the fact that all the variables are facts and known values … almost nothing input into the climate models is factual; it’s all guesses … guesses which then become the foundation of the next guess, and the next one, until the “output” is reached …
and the software in your example didn’t show us anything that was invisible … everything that your example software calculates could have been calculated by hand on paper with a slide ruler … for example, celestial navigation and artillery targeting happened long before computers ever existed …
In the words of Darth Vader: “Don’t be too proud of this technological terror you’ve constructed.”

Reply to  KaiserDerden
June 7, 2015 7:05 am

calculates could have been calculated by hand on paper with a slide ruler
There is no difference, any computer is equivalent to any other computer as Turing demonstrated.

Don K
Reply to  lsvalgaard
June 7, 2015 7:30 am

> The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.
Yes, it can. BUT, you will need to be extraordinarily lucky to get to your desired destination if you approach the job that way. A real Plutonic probe would include provisions for mid course correction as it was discovered that assumed values for variables were a bit wrong and that some of the “constants” vary. And that’s with a very simple, well validated, set of state equations and a comparatively well understood physical situation.
Climate prediction is more akin to orbit prediction in a planetary system where the mass, position, and velocity of the planets is poorly known and the entire system is being swept by massive, fast moving fronts of particulate matter from some weird phenomenon in nearby space.
Computers are a tool. Like every tool, they may work poorly or not at all for those who do not understand how to use them.
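Don K’s open-loop versus mid-course-correction point can be sketched with toy numbers (nothing here is real mission data; the target, rates, and burn count are invented):

```python
# An open-loop plan built on a slightly-wrong constant misses; the same
# wrong model, used inside a measure-and-correct loop, converges anyway.
TARGET = 1000.0
true_rate = 9.7        # what the craft actually does per burn unit
assumed_rate = 10.0    # what the planners' model believes

# Open loop: commit once to the burn the model says is needed.
open_loop = true_rate * (TARGET / assumed_rate)    # falls short of 1000

# Closed loop: re-plan from the measured position after each burn.
pos = 0.0
for _ in range(5):
    burn = (TARGET - pos) / assumed_rate   # plan with the wrong model...
    pos += true_rate * burn                # ...but measure what happened
# pos is now within a whisker of the target despite the bad constant.
```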

Stargazer
Reply to  Don K
June 7, 2015 11:43 am

“A real Plutonic probe would include provisions for mid course correction…” In fact, a real Plutonic probe (New Horizons) completed its last of several trajectory corrections on March 10th. It is now in spin mode on its way to a July flyby of the Plutonian system.

bobl
Reply to  Don K
June 8, 2015 6:32 am

Let me use a much simpler example of the problem for Leif. Your fancy computer program for navigating your spacecraft to Pluto may fail because the spacecraft’s path inadvertently intersects with an unknown asteroid. You may get 99,999 craft to Pluto with this algorithm, but that is not going to help you with the one that intersected the asteroid. That is: the outcome is affected by parameters that are not anticipated by the programmers – the output is dependent on the programmers correctly identifying all parameters and influences on the objective – miss something, miss Pluto.
Leif, as a solar scientist I would imagine you might be able to foresee a number of ways that energy might be able to leave the earth other than by solar radiation, and certain astronomical influences that might add heat to the atmosphere (through, say, friction). This invalidates the idea that the incoming radiation must equal outgoing radiation – I do not see it as necessarily so. As an engineer I have grave doubts about any lossless system. If all these influences are not modeled there can be little confidence in the outcome, and I have grave doubts about the central hypothesis – that radiation in should equal radiation out. No: total energy in = total energy out, and that’s all.
Finally, mathematics using quantized numbers can produce a number of problems. For example, in 64-bit arithmetic, what is 1 x 2^63 x 2? Hmm, it’s 0. Or 100 * 5 / 4 = 125 but 5 / 4 * 100 = 100. Why? Because 5/4 can’t be represented exactly. So in case 1 we get 5 x 100 = 500, then 500/4 = 125. In the second case the calculation is 5/4 = 1 (truncated from 1.25 due to lack of precision), then 1 x 100 = 100. Now some compilers may optimise out this problem for you, so on an IBM PC with the Microsoft compiler you might get 125, on a Mac with the Apple compiler you might get 100.
Take another simple example
f=ma
f/a=m
I have a 500 kg spacecraft, but I decide I want to calculate inertial mass for my navigation project – it’s important because apparent mass depends on velocity – but there is no force and no acceleration. What is your mass calculation? 0/0 = ???
Worse still, I have two sensors, due to noise the thrust (Force) sensor reads 1 kgms-2, the acceleration is measured as -0.000001 ms-2, The calculated mass is now 1/-0.000001 = -1 Million kg. Antimatter maybe?
If I as an engineer fail to take account of these things then I will miss Pluto by a wide margin, check or no check – a completely correct algorithm will fail in specific circumstances simply because I haven’t considered thrust values around zero or have missed influences – unmapped asteroids.
Computers are much worse than people: computers do things numerically and suffer from numeric precision problems – they can do 1/2 but they don’t do 1/3 very well. They also don’t give a rat’s whether the result is reasonable – unless I check for it, the program will happily use my calculated mass of a million kg despite the fact it’s ridiculous.
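bobl’s arithmetic examples translate to a few lines of Python (his were C-style; Python’s own integers don’t overflow, so the 64-bit wraparound is emulated here with a mask – everything else is as he describes):

```python
# 1. Order of operations with integer (truncating) division:
a = 100 * 5 // 4          # 500 // 4 -> 125
b = 5 // 4 * 100          # 1 * 100  -> 100  (5//4 truncates 1.25 to 1)

# 2. 64-bit wraparound, as fixed-width hardware arithmetic would do it:
MASK = (1 << 64) - 1
c = ((1 << 63) * 2) & MASK            # -> 0: the bit falls off the top

# 3. m = f / a near zero thrust: sensor noise turns mass into nonsense.
force, accel = 1.0, -0.000001         # noisy sensor readings
mass = force / accel                  # about -1,000,000 kg "antimatter"
# ...and with both readings exactly zero, f/a raises ZeroDivisionError
# instead of the 500 kg the engineer knows the craft weighs.
```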

Reply to  bobl
June 8, 2015 7:53 am

If I as an engineer fail to take account of these things then I will miss Pluto by a wide margin, check or no check
That is why the model should include all the known [or even just suspected] influences. ‘Simple’ models [as some people push] will generally not have much predictive power. Now, even if the physics is known, we may not be able to apply it as there are ‘contingent’ phenomena that are not [and mostly cannot] be predicted.
I have a thought about the discrepancy between model and reality. Such discrepancy is not a disaster; on the contrary, it is an opportunity to improve the model. Before, it was running too cold; now it is running too hot; next time it may be just right. There is a curious contradiction in many people’s attitude about the models: on one hand they say that the models are garbage [GIGO], on the other hand they say that the failure of the models proves that CAGW is wrong. But from garbage you cannot conclude anything. To claim that the discrepancy between model and observation proves that CO2 is not a player is premised on the models being correct [but insufficiently calibrated, because of too crude parameterizations].

Reply to  lsvalgaard
June 8, 2015 8:37 am

In response to bobl, lsvalgaard wrote: “Such discrepancy is not a disaster, on the contrary, it is an opportunity to improve the model. […] But from garbage you cannot conclude anything.”

I agree, right up to the point where they want to drastically change the world’s economy, costing multiple tens of trillions of dollars, based on their models’ results.
If the Gov wants to provide tax incentives for energy research, fine. I’ve long guessed that we had until 2050 or so to replace our burning of oil due to supply; we’ve found more oil since, but at some point we should get the majority of our energy from nuclear. There’s a place for wind and solar, but I don’t believe they will prove acceptable as the major source of energy for a global first-world society, which also seems to be on the list of things to be done away with by some advocates who rail against oil.

Reply to  micro6500
June 8, 2015 8:42 am

I agree, right up to the point where they want to drastically change the world’s economy, costing multiple tens of trillions of dollars, based on their models’ results.
I think it is the politicians elected by popular vote [so presumably reflecting the views of the people] who want to change the world. People have the government they deserve, having elected it themselves.

Reply to  lsvalgaard
June 8, 2015 10:35 am

Maybe, but Hansen was cheerleading for a while, and came under fire for doing so despite the US Gov not previously making a statement on the topic.

Reply to  micro6500
June 8, 2015 10:39 am

But now, the people have made up their mind and elected a government bent on ‘saving the planet’.

Reply to  lsvalgaard
June 8, 2015 10:47 am

But now, the people have made up their mind and elected a government bent on ‘saving the planet’.

Polling says it’s near the bottom of the list of reasons.

Reply to  micro6500
June 8, 2015 10:50 am

Yet, the elected Government (of the people, by the people, for the people) says that it is the most important problem facing the world. Go figure.

Reply to  lsvalgaard
June 8, 2015 10:52 am

Just to be clear, I didn’t vote for them either time.

bobl
Reply to  Don K
June 8, 2015 4:21 pm

But Leif, you are still wrong. The computer program is only as good as the algorithm implemented, the data, and the parameters (assumptions) used. For example, if I forget to take account of gravity, or even use an imprecise value, then I miss Pluto. The argument is simply that in the case of climate models too many parameters (e.g. cloud effects) are wrong or left out. I argue that there are unaccounted heat sources and sinks – there must be. I agree that this then is an opportunity to improve, except that climate is a chaotic process which cannot be truly modeled without making assumptions about averages and forcings – something that is always open to debate. It’s the assumptions about forcings and their relative strengths that contain the prejudices of the programmers. Neutrinos are probably a bit more stationary than climate, but can you actually predict when and where a neutrino will emerge? Can you predict the actual magnitude of a single particular magnetic tube on the sun next year on April 1?
I might add that climate has MANY feedbacks, positive and negative, from the micro to the macro level, and importantly they all have time lags – different time lags. You can’t model this behaviour with a simple scalar model; you need to use the square root of -1 in there somewhere, so any climate model that is a simple scalar model IS wrong. The output (warming) of such a system of feedbacks is NOT a scalar, it’s a function. Nor is climate sensitivity a scalar; it too, by virtue of the nature of the feedbacks, is a (probably chaotic) function. The integral of weather with time is probably no more stationary than weather.
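bobl’s claim that lagged feedbacks drag the square root of -1 into the mathematics can be illustrated with the simplest possible case, a first-order lag. This is a toy sketch, not a climate model; the gain, lag time, and forcing period below are made-up numbers for illustration only:

```python
import cmath
import math

def lagged_feedback_gain(k: float, tau: float, omega: float) -> complex:
    # Frequency response H(jw) = k / (1 + j*w*tau) of a first-order lag:
    # the imaginary unit j (the square root of -1) is what carries the
    # time lag, showing up as a phase shift of the response.
    return k / (1 + 1j * omega * tau)

# A forcing with a 10-year period driving a feedback with a 2-year lag
# (illustrative numbers only):
omega = 2 * math.pi / 10.0
h = lagged_feedback_gain(1.0, 2.0, omega)
print(f"gain = {abs(h):.3f}, phase lag = {math.degrees(-cmath.phase(h)):.1f} degrees")
# prints: gain = 0.623, phase lag = 51.5 degrees
```

The response is not a single number: both its size and its delay depend on the frequency of the forcing, which is exactly the point that such a system’s output is a function, not a scalar.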

Reply to  bobl
June 8, 2015 7:32 pm

Can you predict the actual magnitude of a single particular magnetic tube on the sun next year on April 1?
No, but I can predict how many such we will have over a year many years in advance.

John Peter
Reply to  lsvalgaard
June 7, 2015 7:39 am

“in the case of neutrinos, the programs turned out to be correct.” So who wrote the programs? A computer? Ultimately I don’t believe that computers can start from scratch and do their own research required for the input of information to programs. Needless to say, they can perform calculations based on human input which we cannot perform with a slide rule. As stated by Mr Worrall, chips double their performance every 18 months; I think it is called Moore’s law.

Reply to  John Peter
June 7, 2015 7:42 am

Scientists wrote the computer programs. That is the way it usually works, although there is some effort in a field called Machine Learning.

Keith Willshaw
Reply to  lsvalgaard
June 7, 2015 7:41 am

Indeed, but that is because the programmer built in the correct algorithms. A more interesting example with complex software is the failure of the Ariane flight control system in 1996. Essentially, a failure in the alignment sub-system for the inertial navigation system caused the unit software to crash, resulting in the launcher going out of control and having to be destroyed. The source of the error was an untrapped exception in a subroutine which converted a 64-bit floating point number into a 16-bit integer, raised when the converted number became large enough to exceed the maximum size of a 16-bit integer. The software specification required that such conversions be error-trapped in flight-critical code.
Trouble is, the alignment sub-system was supposed to run pre-launch and was not intended to be a flight-critical system, so no such error trapping was built in. This error was compounded when it was found that a late hold in the countdown meant a 45-minute delay while the INS was realigned. The overall control software was therefore changed so that the alignment system continued to run for 50 seconds after launch. The assumption was made that the horizontal velocity data, returned in 64-bit format, would not exceed the maximum size of the 16-bit integer. That worked fine for Ariane 4, but the Ariane 5 had a different trajectory and the system failed as described.
This is a classic case of the failure of a complex system due to the assumptions made by the developer, and this system was orders of magnitude LESS complex than that used in computer climate models.
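Keith’s description of the untrapped conversion can be sketched in a few lines. This is a toy illustration in Python (the real flight code was Ada), and the function names are invented for the example:

```python
import struct

def to_int16_unchecked(x: float) -> bytes:
    # Mirrors the risky assumption: the value will always fit in 16 bits.
    # Out-of-range values raise struct.error, the "untrapped" case.
    return struct.pack(">h", int(x))

def to_int16_checked(x: float) -> bytes:
    # The defensive version the flight-critical spec called for:
    # test the bound before converting, and signal a handled error if not.
    n = int(x)
    if not -32768 <= n <= 32767:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return struct.pack(">h", n)

# An Ariane 4-like horizontal velocity fits; a larger Ariane 5-like one does not.
to_int16_unchecked(20000.0)      # fine
try:
    to_int16_unchecked(40000.0)  # blows up, just as the alignment code did
except struct.error as e:
    print("untrapped conversion failed:", e)
```

The velocity figures are illustrative, not the actual telemetry values; the point is only that the same conversion is safe on one trajectory and fatal on another.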

Reply to  Keith Willshaw
June 7, 2015 7:48 am

Are you seriously suggesting that the failure of climate models is due to programming errors? If so, you must agree that if the bugs can be fixed, the models would work.

David Chappell
Reply to  Keith Willshaw
June 7, 2015 8:19 am

Dr Svalgaard, did you actually read the last paragraph? It said “…due to the assumptions made by the developer.” Assumptions are NOT the same as programming errors, and if you think so, you are being somewhat naive.

Reply to  David Chappell
June 7, 2015 8:24 am

Depends on what you mean by ‘error’. Assuming that a variable will always fit in the bits allotted becomes an error when you omit to test beforehand whether it is out of bounds, or to react to the error should an exception be signaled.

Reply to  lsvalgaard
June 7, 2015 7:52 am

@ lsvalgaard

The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.

Me thinks you should re-think your above statement.
The fact is, that is all a “computing device” is capable of doing, ….. which is, … to per se, regurgitate the “digested” data/information that it is/was “uploaded” with for process control and/or to be processed. Electronic computers obey the Law of SISO.
The human brain/mind is a biological self-programming super-computer which functions in exactly the same manner …. with the exception that the brain’s “architecture” is totally different than that of an electronic computer.
And ps: Technically, it would be correct to state that …… “Humans are a prime example of Artificial Intelligence”, simply because, … “You are what your environment nurtured you to be”.

Reply to  Samuel C Cogar
June 7, 2015 7:58 am

No need to rethink the statement. Computer models are not built to produce a desired answer, but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.

DonM
Reply to  Samuel C Cogar
June 7, 2015 9:35 am

No, some models are indeed “built” to produce a desired result.
I have done it. I have submitted it to the respective regulator for their review and they, knowing that any computer output is correct, accepted it without any further questions. The project moved forward.
I have also submitted very simple (by my defn.) and accurate hand written calcs to the respective regulator/reviewer. Without an “output” table the documentation is, almost always, held to a higher standard of questioning.
Computer models SHOULD not be built to produce a desired answer, but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.
Do you think that the accepted models could not be easily tweaked by increasing or decreasing an assumed coefficient or two? Do you think the modelers put the model together, hit run, and published the very first results without further tweaking of the model? Do you think that the models were not refined throughout their construction? And do you think that there was no bias involved in model refinement?

DonM
Reply to  Samuel C Cogar
June 7, 2015 10:00 am

My point:
Some models = good (accurate)
Some models = bad (inaccurate)
Some models = benign (not used in a manner that impacts others)
Biased models = models produced by people with a bias
Manipulated models = models that are intentionally manipulated for a desired use
People that accept all models as honest & accurate (good) = primarily idiots that are easily manipulated

Reply to  Samuel C Cogar
June 8, 2015 5:06 am

Computer models are not built to produce a desired answer, ….. but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.

Give it up, ……. the above oxymoron example …. plus obfuscations only betrays your desperation.
Climate modeling computer programs utilize Average Monthly/Yearly Surface Temperatures as “input” data ….. which were calculated via the use of Daily Temperature Records that have been recorded during the past 130 years.
The Daily Temperature Records covering the past 130 years are all of a highly questionable nature and are thus impractical for any scientific use or function other than by local weather reporter’s appeasement of their viewing public.
If the Daily Temperature Records covering the past 130 years are little more than “junk science” data ….. then the calculated Average Monthly/Yearly Surface Temperatures are also “junk science” data.
And if the aforesaid Average Surface Temperature “junk science” data is employed as “input” data to any climate modeling computer program(s) …… then the “output” data from said programs will be totally FUBAR and of no scientific value whatsoever.
The only possible way to calculate a reasonably accurate Average Surface Temperature(s) would be to utilize a liquid immersed thermometer/thermocouple in all Surface Temperature Recording Stations, structures or units.
But it really matters not, …. because a reasonably accurate Average Yearly Surface Temperature will not help one iota in determining or projecting future climate conditions. It would be akin to projecting next year’s Super Bowl Winner.

Michael 2
Reply to  lsvalgaard
June 7, 2015 8:19 am

“The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.”
So can your slide rule or pencil-and-paper computations, a thing the computer merely expedites.
The arguments here are not about the computational efficiency of a computer.
I am reminded of the “floating point bug” in the processor itself:
http://en.wikipedia.org/wiki/Pentium_FDIV_bug
A big difference between my education and that of my children is I can usually spot immediately when my calculator (or my input to the calculator) is wrong, because in parallel to entering data and operations, my mind has also been estimating the likely result. But my children have complete faith in the calculator and do not detect when they (more likely) or the calculator itself (extremely unlikely) has made an error.
A *model* is not just a simple calculation. A very large number of computations are related to each other and simple floating point rounding error can eventually accumulate to the point of uselessness after millions or billions of iterations.
Other assumptions exist as “parameters” and you play with the inputs and see what you get.
Models are “trained” and, once trained, will sometimes see patterns in noise. Human hearing can do this too. When I flew on turboprop aircraft sometimes I would hear symphonies in the constant RPM droning of the propellers. The human mind seeks patterns and will sometimes find it.
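Michael 2’s point about rounding error accumulating over many iterations is easy to demonstrate. A minimal sketch, with a made-up iteration count far smaller than anything a climate model runs:

```python
import math

# Add 0.1 a million times: naive accumulation lets each addition's
# rounding error pile up, while math.fsum tracks the lost low-order bits
# and returns the correctly rounded sum.
N = 1_000_000
naive = 0.0
for _ in range(N):
    naive += 0.1

compensated = math.fsum(0.1 for _ in range(N))

print(f"naive:       {naive:.12f}")
print(f"compensated: {compensated:.12f}")
print(f"drift:       {abs(naive - compensated):.2e}")
```

The drift after a mere million additions already shows up in the low digits; a model iterating billions of coupled operations has to be designed so that such errors stay bounded rather than accumulate.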

Ian Macdonald
Reply to  lsvalgaard
June 7, 2015 8:32 am

No, the original article is correct. The first rockets and the first nuclear devices were built without the benefit of computers. It’s just a lot slower and harder to work those things out by hand, but not impossible. Gagarin’s first flight used a mechanical analog computer for course tracking, but it was not essential to the mission’s success. At the present stage of IT development, if a human cannot conceive of a solution to a given problem, then a computer cannot solve that problem.

TobiasN
Reply to  lsvalgaard
June 7, 2015 8:48 am

The “solar neutrino problem” was well known for 40 years. The model was not working: only about 1/3 of the predicted neutrinos were observed.
Apparently they solved it about ten years ago: the missing neutrinos exist, but as a different type.
So yes, now science can predict the number of [regular] neutrinos coming from stars. But, IMO, this topic is not a great example of how straightforward science is. Not just because the model was wrong for 40 years, but because they have yet to build a detector to actually observe those other neutrinos theoretically coming from the sun.
Of course I could have it all wrong.

Reply to  TobiasN
June 7, 2015 8:53 am

You do have this wrong. The model was working fine. The problem was with the detector, which was only sensitive to a third of the neutrinos. The neutrinos change their ‘flavor’ on their way to the Earth. Once all types of neutrinos were detected, it was found that the flux is just what the model predicted.

Pat Frank
Reply to  TobiasN
June 7, 2015 10:46 am

The problem was that neutrinos were thought to have zero rest mass. That forced only one type of neutrino. The solar flux of this type of neutrino was only 30% of the predicted flux.
When the physical theory was changed to allow non-zero rest mass, the new physics allowed neutrinos to oscillate among types, which had slightly different masses.
The revised physical theory correctly predicted the reduced flux of the originally observed solar neutrinos and further predicted fluxes of two alternative types. These were also detected, and the total flux agreed with the earlier calculation of total solar neutrino flux.

Reply to  Pat Frank
June 7, 2015 11:11 am

You are confusing the model for calculating the production of neutrinos [which does not depend on their mass] with the chances of observing neutrinos of a given type at Earth. The latter has nothing to do with whether the solar model was correct or not.

Glenn
Reply to  TobiasN
June 7, 2015 11:13 am

lsvalgaard
June 7, 2015 at 8:53 am
“You do have this wrong. The model was working fine. The problem was with the detector which aws only sensitive to a third of the neutrinos. The neutrinos change their ‘flavor’ on their way to the Earth. Once all types of neutrinos were detected, it was found that the flux is just what the model predicted.”
That computer model must have been programmed with magic. References?

Reply to  Glenn
June 7, 2015 11:40 am

Not with magic, although as A. C. Clarke in effect said ‘any sufficiently advanced technology will look like magic to the unwashed masses’.
References would be wasted on you, but here is a starting point http://www.sns.ias.edu/~jnb/

Reply to  TobiasN
June 7, 2015 1:55 pm

The neutrino example is fun because Feynman broke his own rules.
When the data doesn’t match the model, all you know is that the data doesn’t match the model. You don’t know which one is wrong, or if both were wrong.
As Leif notes, the model was correct; what was “wrong” was the data.
Why was the data wrong? All sensor systems have assumptions or theory that are foundational to their construction. Data are always infused with and reliant on theory.

Glenn
Reply to  TobiasN
June 7, 2015 7:48 pm

“References would be wasted on you”
More ad hom. Leif, don’t you realize you shoot yourself in the foot with this attitude?

Reply to  Glenn
June 7, 2015 8:10 pm

absolutely not. Ad-homs should be doled out to the deserving, and you qualify.

SandyInLimousin
Reply to  lsvalgaard
June 7, 2015 9:01 am

Apart from the usual problems with users (aka the real world) breaking software, I was also thinking of SRAM, DRAM and microprocessor problems from the late 1970s and early 1980s. I can think of half a dozen instances when, working within the specifications of the part, a specific manufacturer’s version would fail, by corrupting data in the case of memories and failing “randomly” for the rest. Because the vendor was convinced of his design, modelling and testing, many months and much money were lost proving that despite it all the parts were faulty; one famous supplier described such problems as features. This is without including radiation-induced soft errors.

Eugene WR Gallun
Reply to  lsvalgaard
June 7, 2015 9:52 am

lsvalgaard
Given enough time a guy with a pencil and some paper can do the same calculations. Ten guys would cut down the time considerably. Doing it on a computer is faster still. But the answers will still be the same.
You say — “given the mass, the composition of a star and nuclear reaction rates”. A computer’s underlying programming is the train track on which it must run. Whatever numbers our computer train is filled with, it hauls those numbers along the track that underlies it.
You say — “the computer can make visible that which we cannot see” but that is exactly what eyeglasses do — nothing more, nothing less.
Eugene WR Gallun

Reply to  Eugene WR Gallun
June 7, 2015 9:53 am

Eyeglasses are useful and sometimes necessary.

Eugene WR Gallun
Reply to  Eugene WR Gallun
June 7, 2015 9:55 am

Perhaps I should have said microscope instead of eyeglasses but I thought eyeglasses was funnier.
Eugene WR Gallun

Stephen Richards
Reply to  lsvalgaard
June 7, 2015 10:46 am

Sorry Leif, you are out of your field and will soon be out of your depth.

Reply to  Stephen Richards
June 7, 2015 10:53 am

And that I must hear from a person who like farmer Jones is outstanding in his field and is knee deep in bovine stuff.

Glenn
Reply to  Stephen Richards
June 7, 2015 11:19 am

He responds with ad hom, as usual. This is typical of science defenders; they see enemies all around, and any criticism of modelling is an attack on science in their eyes. One might be led to think that science is ad hom.

Reply to  Glenn
June 7, 2015 11:23 am

The anti-science you spout looks more like an attack on rationality. BTW, I like your words “science defender”. That is what we must all do against the forces of irrationality and ignorance you spread. And you can take that as a well-directed and well-founded ad-hom. It was meant as such.

Glenn
Reply to  Stephen Richards
June 7, 2015 7:51 pm

Leif: Yet you can’t provide any references that I spread “anti-science” [trimmed]

Reply to  Glenn
June 7, 2015 7:56 pm

Don’t need to. Your actions speak for themselves.

VikingExplorer
Reply to  Stephen Richards
June 8, 2015 12:23 pm

Count me as a science defender.

Reply to  VikingExplorer
June 8, 2015 12:25 pm

Science Defenders unite! Rise up and be counted!

Reply to  lsvalgaard
June 8, 2015 12:48 pm

I’m about as big a science geek as there is; it’s odd to be called a denier.
I’m also a believer in models and simulators, having helped build the electronic design industry from a few customers to 1,000’s of customers over 15+ years. I was the expert my customers went to when they didn’t understand what the simulator did.

VikingExplorer
Reply to  Stephen Richards
June 8, 2015 4:00 pm

It’s a shame that the anti-AGW movement started in the name of science, against a political movement using science as a thin veneer, and has ended up accepting all the premises of the AGW movement and attracting people who rabidly attack science and rationality itself.
From a tactical point of view, it’s extremely idiotic to have gotten into a situation where any warming trend > 0 means that AGW is happening. Ever heard of natural variability? Has anyone played chess before?
I truly believe that AGW is false. Therefore, I’m not afraid of science. I’m confident that eventually, the truth will come out. Like Leif says, it’s “self-correcting”.
By resorting to a kind of foaming-at-the-mouth attack on science and models, claiming that the climate is inherently impossible to understand or predict, and releasing a flurry of papers in response to the tiniest of warming trends, it seems like the so-called skeptics around here actually believe that AGW is true.
Sun Tzu: One defends when his strength is inadequate, he attacks when it is abundant.
One plausible explanation for skeptics’ defensive posture is that deep down, they believe AGW is true. Apparently, they believe that they can’t win on science alone. Therefore, they start attacking science itself.
I believe Leif once told me that he would follow the evidence, regardless of whether it matched his initial thoughts or not. I could be wrong about AGW. If it turns out that I have been, then I’m with the truth, whatever that is.

Reply to  lsvalgaard
June 7, 2015 11:31 am

lsvalgaard, nowhere in Eric’s article does HE say that a “computer just regurgitates what we put in”. Further down, you state- “None of this has any bearing on whether computers only output what you put in which is the main thesis of the article we are discussing.” Also…NOT something Eric actually said.
YOU seem to be interpreting what Eric and others here are saying rather poorly. (Unless you’re just being petty and arguing semantics for kicks)
What Eric said/is saying is that the OUTPUT of any model is produced by the parameters/information/assumptions/calculations PUT IN to the model. The results/OUTPUT can be completely unknown prior due to the extent of the calculations involved, and thus not “built in”, but the results that are produced are totally dependent upon the parameters/information/assumption/calculations that ARE built in to the model. They HAVE to be. If they weren’t, how could you trust the results at all?
Yes, the computer can tell how many neutrinos of what energy will be produced AFTER it is given the mass and composition of a star, and the nuclear reaction rates as we know them to be after years of lab testing that produced consistent, dependable, accurate results based on those three variables: mass, composition, and nuclear reaction rates. Yes, the computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence. But ONLY because a software program was designed in which all of the KNOWN and TESTED physical properties of rockets, as well as how those rockets react in time and distance and thrust and so on, are contained first. The software programmers didn’t INPUT recipes for pancakes and surprisingly get the proper calculations for Pluto travel instead!
If the data that is input is corrupt in any way, either by mistake, or by flawed human-based assumptions/understanding then the end result will be corrupted by default.

Reply to  Aphan
June 7, 2015 11:34 am

What he said was “First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience, it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs”.
THAT is what I objected to.

Reply to  Aphan
June 7, 2015 5:24 pm

LS- “What he said was “First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience, it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs”.
THAT is what I objected to.
Then why didn’t you SAY you objected to THAT specifically before now? Sheesh!
Are you a computer software developer? If you are, and YOU have never felt this way when you developed your software, that makes ONE instance upon which you base your objection. If you are NOT a computer software developer, and you have not interviewed the world’s software developers personally and specifically about this topic, and they have not responded to you with clarity that none of them agree with what Eric stated, then your objection is scientifically defined as your personal OPINION. It’s not a fact, nor evidence.
Here’s a nice article about Global Climate Models and their LIMITATIONS- http://weather.missouri.edu/gcc/_09-09-13_%20Chapter%201%20Models.pdf
The area that might explain the hesitancy of most of the people who are disagreeing with you is : 1.1 Model Simulation and Forecasting, 1.1.1 Methods and Principles
Here’s a quote from that section-
“The research on forecasting has been summarized as scientific principles, currently numbering 140, that must be observed in order to make valid and useful forecasts (Principles of Forecasting: A Handbook for Researchers and Practitioners, edited by J. Scott Armstrong, Kluwer Academic Publishers, 2001). When physicists, biologists, and other scientists who are unaware of the rules of forecasting attempt to make climate predictions, their forecasts are at risk of being no more reliable than those made by non-experts, even when they are communicated through complex computer models (Green and Armstrong, 2007). In other words, when faced with forecasts by scientists, even large numbers of very distinguished scientists, one cannot assume the forecasts are scientific.
Green and Armstrong cite research by Philip E. Tetlock (2005), a psychologist and now professor at the University of Pennsylvania, who “recruited 288 people whose professions included ‘commenting or offering advice on political and economic trends.’ He asked them to forecast the probability that various situations would or would not occur, picking areas (geographic and substantive) within and outside their areas of expertise. By 2003, he had accumulated more than 82,000 forecasts. The experts barely, if at all, outperformed non-experts, and neither group did well against simple rules” (Green and Armstrong, 2007). The failure of expert opinion to provide reliable forecasts has been confirmed in scores of empirical studies (Armstrong, 2006; Craig et al., 2002; Cerf and Navasky, 1998; Ascher, 1978) and illustrated in historical examples of wrong forecasts made by leading experts, including such luminaries as Ernest Rutherford and Albert Einstein (Cerf and Navasky, 1998).
In 2007, Armstrong and Kesten C. Green of the Ehrenberg-Bass Institute at the University of South Australia conducted a “forecasting audit” of the IPCC Fourth Assessment Report (Green and Armstrong, 2007). The authors’ search of the contribution of Working Group I to the IPCC “found no references … to the primary sources of information on forecasting methods” and “the forecasting procedures that were described [in sufficient detail to be evaluated] violated 72 principles. Many of the violations were, by themselves, critical.”
***
Now…if the EXPERTS who are supposedly building these climate models and running the computer programs are UNAWARE OF, or are IGNORING the 140 scientific principles “that must be observed in order to make valid and useful forecasts”-just exactly how VALID or USEFUL are the results pouring out of those climate models and computers going to be???????

Reply to  Aphan
June 7, 2015 6:07 pm

Well, I thought I made it clear enough, but obviously there are limits to what some people can grasp and I didn’t take that into account enough, my bad.
And I am a software developer [perhaps a world-class one, some people tell me] with 50+ years experience building operating systems, compilers, real-time control systems, large-scale database retrieval systems, scientific modeling, simulations, graphics software, automatic program generation from specifications, portable software that will run on ANY system, machine coding, virtual machines, etc, etc.

Reply to  Aphan
June 7, 2015 5:28 pm

Don’t know what happened to the formatting there. Surely my computer software knows how to format paragraphs correctly…and yet what it did to the data I entered was nothing like what I expected. (grin/sarc)

usurbrain
Reply to  lsvalgaard
June 7, 2015 12:02 pm

“The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.” ??? And how many rockets have we sent outside of the earth to anywhere whose trajectory we never verified and corrected mid-course, that still arrived at a faraway planet or object?
“The computer can, given the mass, the composition of a star, and nuclear reaction rates, tell us how many neutrinos of what energy will be produced.” That is a simple math problem. Now write the algorithm for the effect of CO2 (just one of the known ‘factors’) on the temperature of the atmosphere – A, with no water vapor, and then – B, a correction factor for the various values of water vapor in the air. Now provide just a partial listing of ALL factors that could possibly affect the temperature of the atmosphere, oceans, land, ocean currents, atmospheric currents, water vapor transportation of energy through the atmosphere, etc. etc. etc. (I think you get the point.) And how long will it take to write these algorithms, and how many people will it take?
It took me and four others over four years to model a 1000-megawatt-electrical nuclear power reactor, and every variable was known. The reactor system was operating, and the model could be verified by making numerous perturbations to the reactor system, both large and small, of every variable and parameter in the system, to verify the accuracy of the “model.” It then took another two years to get the model to accurately reflect the restricted set of parameters that were in the design specification. After all of that there were things that were not modeled and did not accurately match actual plant operation, but were at least controlled by other parameters within the system.
Climatologists don’t even know how many parameters they don’t know, let alone how many parameters they need. And of the ones they know, they don’t even know enough about them to write an algorithm to plug into software to calculate the effect it has on the system they know nothing about.
As you claim to be an expert on how easy it is to make a computer do things – please provide an algorithm for calculating “pi” (3.141…) accurate to at least 20 places, that I can put into Fortran (the language I have been told they use). Don’t bother looking one up. I want YOU to provide me with the algorithm from your brain and no reference to any text book. PERIOD. That should be a trivial case compared to CO2. After you do that, do the algorithm for atmospheric down-welling radiation from CO2. Again from your brain, not from a text.

Reply to  usurbrain
June 7, 2015 12:08 pm

The key point is “It then took another two years to get the model to accurately reflect the restricted set of parameters that were in the design specification”, that is: you got it to work well enough to be useful. So it is possible, you just have to work at it. As for pi: draw a lot of parallel lines and throw straws at them a gazillion times. The number of straws that will cross a line involves the number pi.
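Leif’s straw-throwing recipe is Buffon’s needle: a needle of length l thrown on lines spaced d apart (with l ≤ d) crosses a line with probability 2l/(πd), so π ≈ 2·l·n / (d·crossings). A quick sketch (in Python rather than Fortran; note it cheats slightly by using math.pi to draw the random angle, which a purist would avoid by sampling the direction via rejection):

```python
import math
import random

def buffon_pi(n_throws: int, needle: float = 1.0, gap: float = 1.0, seed: int = 1) -> float:
    """Estimate pi by Buffon's needle (needle length <= line gap)."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    crossings = 0
    for _ in range(n_throws):
        x = rng.uniform(0.0, gap / 2)          # centre-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi / 2)  # needle angle to the lines
        if x <= (needle / 2) * math.sin(theta):
            crossings += 1
    return (2 * needle * n_throws) / (gap * crossings)

print(buffon_pi(1_000_000))
```

The estimate converges painfully slowly (the error shrinks only as 1/sqrt(n)), which is rather the point of usurbrain’s challenge: the straw method is simple, but 20 decimal places this way is hopeless; you would want a fast series such as Machin’s formula instead.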

usurbrain
Reply to  usurbrain
June 7, 2015 12:49 pm

lsvalgaard – WRONG – The key point is that —
“Climatologists don’t even know how many parameters they don’t know, let alone how many parameters they need. And of the ones they do know, they don’t know enough about them to write an algorithm to plug into software to calculate the effect it has on a system they know nothing about.”
Thus a rudimentary SIMPLE model of a model takes GIGO and produces GIGO^2.

Reply to  usurbrain
June 7, 2015 1:19 pm

Tell that to Mr. Monckton who advocates a SIMPLE model explaining everything.

Billy Liar
Reply to  lsvalgaard
June 7, 2015 12:38 pm

lsvalgaard
June 7, 2015 at 4:28 am
Dr Svalgaard, I am afraid I have to tell you that computers do indeed only regurgitate what we put in. No one has yet invented a computer that ‘makes stuff up’ or ‘thinks for itself’. All computers, as tty says, merely carry out a sequence of operations on the input data. If you have never performed the calculation manually, the result may surprise you, but the computer is only producing the ‘built-in’ result, even if it is a supercomputer. A lot of climatologists would be able to think more clearly if they accepted this as true; the computer is not going to provide any revelations, you have to provide those as input. Building in ‘randomness’ and ‘statistics’ will not persuade the computer to come up with something new either; it can’t.
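[The point about built-in ‘randomness’ is easy to demonstrate: a computer’s “random” numbers are themselves a deterministic, reproducible computation. A two-line Python sketch:]

```python
import random

# Two generators given the same seed produce the same "random" stream:
# the randomness is supplied by the programmer's seed, not invented
# by the machine.
a = random.Random(123)
b = random.Random(123)
run1 = [a.random() for _ in range(5)]
run2 = [b.random() for _ in range(5)]
assert run1 == run2  # bit-for-bit identical
```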

Reply to  Billy Liar
June 7, 2015 1:18 pm

That is not the issue here. The computer lets me see things, by running a model, that I could not see otherwise. E.g. how many neutrinos are produced in the Sun’s core? Where is there an oil field worth exploiting? Etc. In none of those cases has the programmer put into the model what the answer should be.

Reply to  lsvalgaard
June 7, 2015 4:05 pm

Leif, as someone who has designed computers for spacecraft and launch vehicles, I can tell you the difference. The difference is that these are completely deterministic systems that are 100% dependent on external sensors to provide real-time data, which is then processed, and the spacecraft or launch vehicle computer provides a correction in the form of an output to a thruster or sensor suite.
Also, to use your second analogy, we still have the problem of missing neutrinos, as well as the problem of the consensus that misled Hathaway et al. into their wrong predictions about solar cycle 24. Another example is the failure of the scientific community to predict the existence of gamma ray bursts in the cosmos, or the production of gamma rays and even anti-matter in lightning flashes before the GRO BATSE instrument discovered them. It was still over a decade before the data was believed.
I will give you another example.
In the Apollo samples that came back from the Moon, anomalously high levels of water were discovered in some samples. Since the deuterium ratios were the same as terrestrial water, these anomalously high levels were dismissed as contamination, not an unlikely thing considering the work was done in Houston. HOWEVER, only after Chandrayaan and Lunar Reconnaissance Orbiter, coupled with confirmation of earlier data from Clementine and Lunar Prospector, did scientists go back and figure out that their ASSUMPTIONS were wrong and that the water in those samples was indeed truly of lunar origin. Until LRO and Chandrayaan, the “consensus” of the planetary science community would not consider any other origin than terrestrial contamination, even after James Arnold’s theoretical paper in 1979, and even after Prospector and Clementine gave the same data. It was ONLY after it was completely indisputable that the consensus began to change.
So, I respectfully disagree with your disagreement.

Nick
Reply to  lsvalgaard
June 7, 2015 4:09 pm

Talking of rockets…
http://www.around.com/ariane.html
Computers are no different from pen and paper, they are just much faster. You only know if your paper calculation or your program is ‘correct’ by testing it against the real world.

ferdberple
Reply to  lsvalgaard
June 7, 2015 6:12 pm

The computer [more precisely its software] can make visible that which we cannot see.
===============
exactly. the computer has no regard for whether those things are true or correct. it can spit out an imaginary future in the blink of an eye, because unlike a human, the computer has not the slightest compunction against lying.
Every computer on earth is a Sociopathic Liar. It can lie without the slightest compunction.
The Sociopathic Liar – Beware of this Dangerous Sociopath
Most people have lied in their life. Whether it was to protect feelings, avoid trouble, impress, or to simply get what they want, not many people can say they have never told a lie.
However, there is one extreme type of liar that you should beware of; the sociopathic liar.
On first impressions, you may find you actually like or are drawn to the sociopath. It’s not surprising as more often than not they are indeed charming and likable. Watch out, these type of liars can cause untold damage and mayhem once they lead you into their web of lies and deceit.
Sociopaths lie the most because they are incapable of feelings and do not want to understand the impact of their lies. They may even get a thrill out of lying at your expense. Once they tell an initial lie they go on to tell many more lies in an attempt to cover up the lies they started, or just for the “fun” of it.
A sociopath rarely reveals his or her feelings or emotions. You won’t often hear them laugh, cry, or get angry. These kinds of liars tend to live in their own little world and always find ways to justify their dishonest deeds. They do not respect others and place their own needs first and foremost.
If someone questions the sociopath’s lies they can be incredibly devious in the way they cover things up. This can include placing the blame at someone else’s door or by inventing complex stories to cover up their untruths.
Sociopaths can be so good at lying that they are able to pass lie detector tests. This means they often escape jail or don’t even get prosecuted for the crimes they commit. (That’s not to say all sociopathic liars are criminals, of course).
It is believed by some experts that sociopathic lying is connected to the mental illnesses Narcissistic Personality Disorder (NPD) and Antisocial Personality Disorder (APD).
If you come across someone who you think is a sociopathic liar, beware!
http://www.compulsivelyingdisorder.com/sociopathic-liar/

Dems B. Dcvrs
Reply to  lsvalgaard
June 7, 2015 8:57 pm

lsvalgaard – “The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.”
Which backs up the point of the article.

Reply to  Dems B. Dcvrs
June 7, 2015 9:08 pm

just the opposite.

Dems B. Dcvrs
Reply to  Dems B. Dcvrs
June 7, 2015 9:36 pm

No, not just the opposite. Your remark does back up the point of the article.
Humans (“what we put in”) are involved. Thus human bias.
In the case of climate models, the vast majority of those models have been proven wrong by Mother Nature. Those models are wrong because of G.W. climatologists’ biases, whether intentional or unintentional.

Reply to  Dems B. Dcvrs
June 7, 2015 9:42 pm

We put in the laws of physics and the experimental parameterizations needed and this is done rigorously with full review by competing scientists. Nobody puts ‘bias’ in. Show the evidence [if any] that a scientist has put his personal bias into the computer code.

Reply to  Dems B. Dcvrs
June 7, 2015 10:22 pm

lsvalgaard
June 7, 2015 at 9:42 pm
The algorithms for UHI adjustments and other important parameters had to be dragged kicking and screaming out of the computer gaming “modelers”.
Climate modeling is the antithesis of real science.

Reply to  sturgishooper
June 7, 2015 10:25 pm

So what is real science in your esteemed opinion? Barycenter cycles? Piers Corbyn? Monckton’s ‘simple’ model? Evans’ Notch Theory? …

Reply to  Dems B. Dcvrs
June 7, 2015 10:30 pm

Leif,
That you have to ask shows just how far government-financed, post-modern “science” has strayed from the scientific method.
It should be obvious that real science is based upon observation. All actual data show that ECS cannot possibly be in the range assumed, never demonstrated, by IPCC.
“Climate science” is corrupted science.

Reply to  sturgishooper
June 7, 2015 10:38 pm

opinions, opinions, agenda-driven opinions. I am a government-funded scientist, am I corrupt?

Steve Garcia
Reply to  lsvalgaard
June 7, 2015 10:01 pm

Yep, computers can do all of that. And WHY? Because the science of gravitational attractions is known and VERY well tested, with LOTS of replication. All the formulae work the same this time and every time. And WHY? Because the equations to code in are KNOWN and RELIABLE and replicate well.
But now, take those same computers and work with a chaotic system in which even the most powerful feedback element (water vapor) is not understood very damned well at all.
My own rule of thumb, FWIW, is that replicable sciences that tend toward engineering are done amazingly well on computers. But out on the frontiers, where even the principles aren’t known – when assumptions are coded in instead of proven equations – how in the HELL can anyone trust the output?
If we had as many assumptions in astrophysics as in climate science, no one would be able to hit a planet or comet or asteroid.
The computer can also make visible that which we will never see: If EVERY formula used isn’t well-tested and well-proven, the output is a cartoon.
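[The contrast the comment draws – well-tested gravitational mechanics versus poorly constrained feedbacks – can be made concrete. A few lines of textbook physics reproduce an orbit accurately, run after run; a sketch in Python using a leapfrog integrator with GM = 1 (function names and units are illustrative):]

```python
import math

def accel(x, y, gm=1.0):
    """Newtonian gravitational acceleration toward a body at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -gm * x / r3, -gm * y / r3

def integrate_orbit(steps, dt, x=1.0, y=0.0, vx=0.0, vy=1.0):
    """Leapfrog (velocity Verlet) integration of a two-body orbit."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y

# Circular orbit of period 2*pi: after one full simulated period the body
# should return (very nearly) to its starting point.
steps = 10_000
xf, yf = integrate_orbit(steps, 2.0 * math.pi / steps)
print(xf, yf)
```

[Because the equations here are known and exhaustively verified, the run replicates to high accuracy every time; no comparably closed and verified equation set exists for, say, cloud and water-vapor feedback, which is the asymmetry the comment is pointing at.]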

Reply to  Steve Garcia
June 7, 2015 10:14 pm

The physics of the atmosphere is also well-known and reliable. We cannot, as yet, deal with small-scale phenomena, but progress is being made: the weather predictions are getting better and that will benefit climate models in the end.

Steve Garcia
Reply to  Steve Garcia
June 8, 2015 2:03 am

No, the physics of the atmosphere is NOT well-known and reliable. If it were, there would be only ONE GCM and all climatologists would use it. The very fact that scores of them exist means not only that they do not all have/use the same physics, but that they put in different parameters, too. WHY would they do that, if all of it is well-known? Wouldn’t they all use the same parameters?
Well-known in theory? Doesn’t mean squat, because the models are all giving wrong results.
Unless they all are voluntarily ignorant of your well-known physics? And how much of that “well-known” FITS with reality? I see you hammering on other guys here about Feynman and his “If it doesn’t agree that means it’s wrong” – so now apply that to climate curves.
Are you talking out of both sides of your mouth? You can’t have it both ways. The models are wrong – ergo, either the physics that is well-known is well-known but incorrect, or you blame it on the code writers for not getting it right. But you tell us that programmers’ work is checked and double-checked. Every way we look at it, the lack of match-up says somewhere in there the process is messed up.
You claim that it can’t be the physics or the climate guys (I assume they are the ones for whom the physics is well-known) – but if so then where are the model results errors coming from? Oh, and PULLEAZE! please, please, please, claim that the models are correct! That all their curves match up with the actual temperatures. I need a good laugh. Tell us that there has been no hiatus. PLEASE tell Trenberth where the missing heat is. HE at least admits it’s missing. So I won’t have to laugh at him, because he sometimes sees facts and admits them as facts.
But your reply is nonsense, dude. “The physics is well-known and reliable.” But you can’t deal with small-scale stuff? Dude, you don’t have the BIG stuff down, either. Or are you different from Trenberth and refuse to admit facts as facts?
Weather predictions? WHO in the world is talking about weather? Aren’t you forgetting? It’s the skeptics who are supposed to not know the difference between weather and climate. And here you are, claiming WITHOUT BASIS that the weather predicting models are going to help with the climate models. HAHAHAHA – When? In the 15th millennium?
And if the physics is all so well-known, the weather and climate models should have no discrepancies between them. And here you are, admitting that the climate ones are INCOMPLETE AND NOT WORKING. Otherwise, the weather predictions wouldn’t have any “benefiting” to do for the climate models.
It’s either well-known and working – or it is incomplete and not working. You can’t claim both, dude. Not among intelligent people who are paying attention to the contradictory assertions you claim.

VikingExplorer
Reply to  Steve Garcia
June 8, 2015 4:10 pm

>> No, the physics of the atmosphere is NOT well-known and reliable. If it was there would only be ONE GCM and all climatologists would use it.
Non-sequitur
By that embarrassing logic, shouldn’t there be only one textbook in each subject? I remember during Calculus class going and getting 10 different calculus textbooks and laying them out on the table, all turned to the same topic.
>> If we had as many assumptions in astrophysics as in climate science, no one would be able to hit a planet or comet or asteroid.
What makes you think we can? You’ve been watching too much Star Trek.
The physics of the atmosphere IS well-known, and reliable. Otherwise, weather models would be impossible.
However, climatology is not an extension of meteorology. The atmosphere is only 0.01% of the thermal mass of the system.

catweazle666
Reply to  VikingExplorer
June 8, 2015 6:03 pm

VikingExplorer: “The physics of the atmosphere IS well-known, and reliable. Otherwise, weather models would be impossible.”
From http://wattsupwiththat.com/2015/06/08/another-model-vs-reality-problem-national-weather-offices-canada-a-case-study-with-national-and-global-implications/
A simple definition of science is the ability to predict. If your prediction is wrong your science is wrong. How good is the “science” these Canadian bureaucrats produce? The answer is, by their measure, a complete failure. Figure 1 shows the accuracy of their weather prediction for 12 months over the 30-year span from 1981 to 2010.
Notice that for 90 percent of Canada the forecast average accuracy is given as 41.5 percent. A coin toss is far better odds.

Which appears to indicate that reliable weather models are impossible, so clearly the physics of the atmosphere IS NOT well-known, and reliable.
Of course, as the physics of the atmosphere is essentially non-linear and chaotic – hence subject to, inter alia, extreme sensitivity to initial conditions – this is not surprising.

VikingExplorer
Reply to  Steve Garcia
June 8, 2015 6:31 pm

catweazle666, your reasoning seems to be:
premise: detailed data is NOT required for weather prediction
premise: Canada has horrible weather prediction
conclusion: weather models are impossible
Non-sequitur. One of these premises is invalid.
Actually, accuracy is pretty good up to 5 days and rain is pretty good.

catweazle666
Reply to  Steve Garcia
June 8, 2015 6:56 pm

VikingExplorer: catweazle666, your reasoning seems to be:…
Wrong.
In fact, my reasoning is:
“Of course, as the physics of the atmosphere is essentially non-linear and chaotic – hence subject to inter alia extreme sensitivity to initial conditions, this is not surprising.”

VikingExplorer
Reply to  Steve Garcia
June 8, 2015 9:01 pm

catweazle666, you can’t weazle out of this one. Here is how this went:
Leif: The physics of the atmosphere IS well-known, and reliable.
VE: Otherwise, weather models would be impossible.
cw666: References Dr. Ball
Dr. Ball: Canada has taken away the ability to collect atmospheric data
Dr. Ball: Canada hasn’t been very good at predicting the weather between 1981 and 2010 <== old data
Dr. Ball: A simple definition of science is the ability to predict.
cw666: Which appears to indicate that reliable weather models are impossible
cw666 conclusion: so clearly the physics of the atmosphere IS NOT well-known, and reliable
Therefore, I represented your argument quite well. Non-sequitur.
By that brilliant logic, if we take away all medical instruments from our doctors, we can conclude that the Medical Sciences are not well known and reliable.
Faced with evidence that I provided that shows that actually, 5 day forecasts are quite good, you are now changing your position. A 5-day forecast would be impossible if the physics of the atmosphere were not well-known, and reliable.

catweazle666
Reply to  VikingExplorer
June 9, 2015 8:11 am

“Faced with evidence that I provided that shows that actually, 5 day forecasts are quite good, you are now changing your position. A 5-day forecast would be impossible if the physics of the atmosphere were not well-known, and reliable.”
No, I have not changed my position, which was clearly stated at the end of both my relevant posts using identical wording. Here it is again:
Of course, as the physics of the atmosphere is essentially non-linear and chaotic – hence subject to, inter alia, extreme sensitivity to initial conditions – this is not surprising.
In fact, the improvement in 5 day forecasting over the last decade or so is primarily as a result of universal satellite coverage, not computer games models, and the improvement in precipitation forecasting is due to radar installations.
So you are wrong – again.

VikingExplorer
Reply to  Steve Garcia
June 9, 2015 8:20 am

>> improvement in 5 day forecasting over the last decade or so is primarily as a result of universal satellite coverage
Exactly. When the system is sensitive to initial conditions, adding more data makes it easier. However, data alone predicts nothing by itself. It’s the physics being simulated in the models that results in prediction. Therefore, atmospheric physics is relatively well known and reliable.
As for the non-linear comment, any good engineer or scientist would know that almost all physical systems are non-linear and chaotic. It’s not stopping us from advancing forward and in many cases, succeeding.
There is absolutely no reason to conclude that climatology is impossible.

catweazle666
Reply to  VikingExplorer
June 9, 2015 2:10 pm

“There is absolutely no reason to conclude that climatology is impossible.”
If by that you are asserting that climate models can ever give accurate projections/predictions, the IPCC itself would appear to differ.
…in climate research and modeling we should recognise that we are dealing with a complex non linear chaotic signature and therefore that long-term prediction of future climatic states is not possible…
IPCC 2001 section 4.2.2.2 page 774

Not to mention no less an authority than Edward Lorenz, of course.

Reply to  catweazle666
June 9, 2015 3:04 pm

Digital simulators take 1’s and 0’s from an input file, apply them to the set of inputs, and then step a clock (another defined input). You can load memory with programs, and with system-level models (like CPUs) you can run your set of vectors; I’m sure by now you can get code debuggers running against a behavioral model, even get inside to see registers. We did most of this in the 80’s. While you can check timing in a simulator, the results are vector dependent. We also had timing verifiers, where you set up a clock and your control inputs – reset, clear, stuff like that. You initialize the circuit (because they don’t just turn on like they do on a bench), and you can define a stable/changing input state for your 128-bit data path without having to come up with all combinations of those 128 bits – just stable, change, stable – and the tool checks all possible permutations of those 128 bits. But it was hard for a lot of designers to understand what the tool did and why it gave the results it did.
Analog simulators took a circuit design and turned it into a set of partial differential equations based on model primitives, but not only did you have to initialize the circuit, you had to initialize the matrix equation. This is basically how a GCM works.
Simulators are state dependent, and GCMs are state dependent. The initial conditions in a GCM are the setup and test vectors in my simulation example: you have to set each node on the matrix, and then run until you get numeric stability; at this point you start the GCM’s clock running forward. The pause is a state-dependent effect – the state of the ocean, the state of the air, the state of the ocean from the state of the air, and so on.
Climate is a 30-year average, to average out the state data. Imagine a superposition of all ocean states and a superposition of the air: no pause, no El Nino, no La Nina – just like the stateless timing verifier.
The ensemble GCM runs are trying to turn state-dependent runs into a superposition of all possible runs, because we can’t match the state of the real world to the state of the GCM. Someone at some point must have thought that, with a well enough defined set of initial conditions, a weather forecasting system on a big computer could tell them the future – if they smudged a lot of the state data to be more like the superposition of each year’s actual weather into a 30- or 60-year average, and got their model to do the same thing. Lorenz knew better, but can we define a stateless climate?
In some respects, climate sensitivity is the result of a stateless simulation, but based only on a change in CO2, and we know it is lacking.
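[The state-dependence point has a textbook toy: in a chaotic system, two runs from almost identical initial states (the “weather”) diverge within dozens of steps, while their long-run averages (the “climate”) agree. A sketch using the logistic map, a standard stand-in for chaotic dynamics – not a GCM, just an illustration of the distinction:]

```python
def logistic_traj(x0, n, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x), returning the path."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic_traj(0.400000, 100_000)   # one "initialization"
b = logistic_traj(0.400001, 100_000)   # nudged by one part in a million

# Step-by-step values diverge quickly ("weather")...
spread = max(abs(p - q) for p, q in zip(a[:100], b[:100]))
# ...while the long-run means stay close ("climate")
mean_gap = abs(sum(a) / len(a) - sum(b) / len(b))
print(spread, mean_gap)
```

[Whether the real climate system has such well-behaved long-run statistics – a “stateless climate,” in the comment’s terms – is exactly the open question.]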

VikingExplorer
Reply to  Steve Garcia
June 9, 2015 2:54 pm

You have definitely accepted their premises if you’re using the IPCC as a source. Stockholm syndrome much?
And the view of Lorenz is often overstated by people for political reasons. For one, he never said that it was impossible, only that long-term predictions were difficult.
That’s ok, because we don’t really need long term predictions.

Dems B. Dcvrs
Reply to  lsvalgaard
June 7, 2015 10:13 pm

lsvalgaard – “Mother Nature says that the models are not working. That is all you can conclude.”
We can conclude something else.
Despite numerous people here pointing out your incorrect statements, you cannot admit that you are wrong.

Reply to  Dems B. Dcvrs
June 7, 2015 10:15 pm

most people here have no idea what they are talking about…

Jonathan Bishop
Reply to  lsvalgaard
June 11, 2015 12:30 am

Dr Svalgaard,
I agree that “software can make visible that which we cannot see”, but reading through many of your other replies to comments from other commenters, I suggest that you cast your mind back to your numerical analysis and compiler development days. Computational numerical analysis is not generally well understood even among “professional” computer scientists, let alone amateur programmers (regardless of how many years of code hacking they have done). Usually this is a minor problem, since the vast majority of applications are not numerically intensive – but not so, I suggest, with climate simulations. With your background in numerical analysis, I expect you know this, but perhaps Eric’s wording did not cause you to recall the design issues that must be addressed for success in this instance.
When I was teaching computer science at university many decades ago, we took 300 in first year and graduated about 30 at the end of third year. Back then, university computer science degrees concentrated on creating programmers and computer scientists rather than the database admins and project managers which they seem to produce now. We were not yet in churn mode at universities, trying to graduate enough database/code hackers to satisfy a burgeoning IT industry. Computing, like I guess all disciplines, is replete with chasms for the unwary and the self-taught.
While some of the ways Eric presented his case gave me pause, the essential thrust of his piece clearly strikes a chord with the programmers in the commentariat of WUWT, and in my view is broadly correct.
My interpretation of what Eric was saying is:
1. Computers are not an authority, they do nothing of themselves.
2. It is the software that does something, but is not an impartial authority either as it does what the programmer tells it to do.
3. Programmers make implementation decisions at the coding level that can have a significant impact on the outcome – starting with the choice of computer and coding language, but continuing through to the choice of algorithm to implement an equation, etc.
4. Those decisions may be expressions of personal bias, the need to approximate or assume some behaviours/values/constants for a variety of reasons(e.g., unknown, considered unimportant, not practically calculable on the technology in use), or more commonly not in the knowledge space/skill set of the programmer. I will explain this later.
5. Thus whether a model was run on a computer or on the infinite Turing tape machine is irrelevant – the “computer” is not an authority – it is a tool, and it does only what it is told to do by a programmer – who may or may not be trained or sufficiently experienced for the task.
I do not think, as you seem to have interpreted, that Eric was saying computers pump out the answer you give them, but rather that the answer a computer gives can be heavily influenced by the views, skill, knowledge, etc. of the programmer, because the software cannot yet invent or modify itself (except potentially in interpreted languages like lisp, M, PHP, JavaScript, etc.). With such biases inherent in the creation processes, it is naive to consider the computer an “authority”.
As other commenters have pointed out (and as you would clearly understand given your coding experience), if floating point numbers are used on a limited word length binary Von Neumann computer (i.e. not a Turing Machine), then there are more floating point numbers missing from the domain of possible numbers than are in it. Of course, this is true of integers too – but only at the extreme ends of the number line, not between two numbers, as occurs in floating point arithmetic. These missing numbers are approximated (rounded up or, usually, truncated to a precision t), and these approximations rapidly accumulate into significant errors. Yet I would be willing to bet that there are few (if any) climate model implementations where the “coder” has even known that he/she should calculate machine epsilon (assuming they knew what it was) before deciding the level of precision at which to work, or calculated the computational error, let alone understood that the obvious approach is not necessarily the best.
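[For readers unfamiliar with the term: machine epsilon – the gap between 1.0 and the next representable floating point number – can be found in a few lines. A sketch in Python (whose floats are IEEE 754 double precision):]

```python
import sys

# Machine epsilon: halve a candidate until adding it to 1.0 no longer
# changes anything; the last value that did change 1.0 is epsilon.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

print(eps)  # 2.220446049250313e-16 for IEEE 754 double precision
assert eps == sys.float_info.epsilon
```

[Any quantity smaller than this, relative to the values being summed, simply vanishes from the arithmetic – which is where the accumulating approximations described above come from.]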
In numerical applications the choice of approach is the most important factor. By way of an example, which I am sure you will recall from past experience let us consider something as seemingly simple as calculating e^x.
e^x = 1 + x + x^2/2! + x^3/3! + ….
Forsythe (et al) demonstrated that if we want to calculate e^-5.5 (his example), the obvious way is to calculate each term in the above formula and add them together, right? Umm – no.
While the correct answer (approximation) is 0.00408677, if I simply calculate each term in succession on a binary computer with a precision of 5 and stop at 25 terms, I will get e^-5.5 = 0.0026363. In other words, the obvious solution gives a result with NO significant digits.
Now if I change the algorithm to implement e^-5.5 = 1 / e^5.5 on a machine with the same characteristics, and otherwise use the same basic approach – adding the terms of the series together but then dividing the result into 1 – I will get e^-5.5 = 0.0040865. Still not ideal, but better.
In fact the better way to calculate e^x on a binary computer is to break x into its integer and fraction parts:
x= i + f (in the example i = -5 and f=.5) and then calculate:
e^x = e^i * e^f where 0 <= f < 1.
This is perhaps not the most obvious approach and this is just the start of a very long list of the issues to consider in numerically intensive computing on binary machines.
Now this is a well understood problem – most numeric libraries include a function for e, so programmers rarely actually program it these days, but for the library to contain the function, someone did program it, and the same issues apply to a host of numeric and statistical calculations that they do program directly.
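[Forsythe’s 5-digit machine is hard to reproduce directly in modern hardware, but the same cancellation bites ordinary double precision if the argument is a little larger. A sketch using e^-30 instead of e^-5.5, with enough terms for full convergence of the series; `exp_series` is an illustrative name, not from any library:]

```python
import math

def exp_series(x, terms=120):
    """Sum the Taylor series 1 + x + x^2/2! + x^3/3! + ... term by term."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

true = math.exp(-30.0)          # about 9.36e-14
direct = exp_series(-30.0)      # alternating terms reach ~8e11; the rounding
                                # noise in the cancellation swamps the answer
recip = 1.0 / exp_series(30.0)  # all-positive series, then take 1/x:
                                # accurate to roughly machine precision
print(direct, recip, true)
```

[The direct sum is off by many orders of magnitude, while the reciprocal form agrees with math.exp almost exactly – precisely the algorithm-choice effect described above: both programs are “obviously” correct implementations of the same mathematics.]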
It is for a host of reasons such as these that qualified computer scientists tend to consider programming as much an art as a science and are very wary of saying "it is right because the computer said so". This is what I understood was the thrust of Eric's paper.
For me, one of the initial problems I had with the AGW modelling many years ago was that the software code was not necessarily public, and was not (apparently) generally written by actual computer scientists skilled in numerical analysis, but by self-taught programmers with expertise in other fields. As a real "professional" computer scientist I am extremely hesitant to accept a numerical model's output if I cannot inspect the code, and never if the computational error value has not been calculated. Have you ever seen a climate model where the computational algorithmic error (and I do not mean the statistical errors, nor instrument errors) is calculated and reported? I assume they exist, but not that I have noticed to date.
So I suggest, with the greatest respect, that just as I would not presume to lecture you about the sun, perhaps you might be well advised to at least listen to, and try to understand what the programmers here have been telling you about what we have to think about when building numerical applications. While every kid can hack out a toy programme (and many non-computer scientists think they are programmers because they can hack out something that seems to give an answer), serious computational numerical modelling requires real expertise in computational numerical modelling – as well as in the field being modelled and in statistics!
Ref: Forsythe, et al – Computer Methods for Mathematical Computations (1977)
Ref: Dahlquist, et al – Numerical Methods (1974)
Kind regards,
Jonathan Bishop

Reply to  Jonathan Bishop
June 11, 2015 12:40 am

All of what you say is well known by the modeling community, which worries a great deal about numerical stability and goes to great lengths to solve the equations in the best possible way. We are not talking about lone, amateur propeller-head programmers, but about seasoned scientists the world over scrutinizing each other’s work. The climate models are very much thorough teamwork and at the same time a very competitive enterprise. The notion of programmers putting in ‘bias’ is ludicrous.

Reply to  Jonathan Bishop
June 11, 2015 1:48 am

And I am also a professional programmer [in addition to being a solar physicist]. From upthread:
And I am a software developer [perhaps a world-class one, some people tell me] with 50+ years experience building operating systems, compilers, real-time control systems, large-scale database retrieval systems, scientific modeling, simulations, graphics software, automatic program generation from specifications, portable software that will run on ANY system, machine coding, virtual machines, etc, etc.

adarkstone
Reply to  Jonathan Bishop
June 11, 2015 6:42 am

Dr Svalgaard,
lsvalgaard: “I am also a professional programmer.”
Yes, I understood that, and in fact you have a numerical analysis background – having checked your CV before commenting in the first place – hence my reference to your previous life in numerical analysis, etc., and my choice of example with which I assumed you would be familiar (but not necessarily other commenters) and in that light I am thinking that my last paragraph was poorly worded, as it can be read to imply otherwise.
The purpose of the example was to remind (not teach) that the algorithm choice is a programmer’s decision, and hence a reflection of the programmer’s views and/or knowledge. This is what I understood to be the central issue of bias that Eric was proposing.
My feeling is that you may have been too hard on him (and a few others) in concluding that he argued merely that the models were simply regurgitating a pre-defined result as that would seem to be naive.
I read his essay as arguing more along these lines: that the programmer makes choices at the algorithmic design and construction level which, while seemingly minor, can dramatically affect the outcome, but not necessarily be obviously incorrect. Hence the example of e^x: each algorithm was superficially valid, but only one gives an acceptable approximation, as a result of the constraints in binary floating point implementation.
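To make that example concrete, here is a minimal sketch (purely illustrative; this is not any production library's algorithm) of two superficially valid ways to evaluate e^x. For negative arguments the naive Taylor sum fails badly, because enormous alternating terms cancel and leave only rounding noise, while summing the series for |x| and taking the reciprocal is accurate:

```python
import math

def exp_taylor(x, terms=200):
    """Sum the Taylor series for e^x directly, term by term."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        total += term
    return total

def exp_taylor_safe(x, terms=200):
    """For negative x, sum the series for e^|x| and take the reciprocal,
    avoiding the catastrophic cancellation of huge alternating terms."""
    if x < 0:
        return 1.0 / exp_taylor(-x, terms)
    return exp_taylor(x, terms)

x = -30.0
naive = exp_taylor(x)        # alternating terms near 1e12 cancel badly
safe = exp_taylor_safe(x)    # accurate to near machine precision
exact = math.exp(x)          # ~9.36e-14
```

Both routines implement "the same" mathematics; only the algorithm choice differs, and only one answer is usable. That choice is the programmer's.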
I accept your argument (knowing no better) that the majority of the models are carefully considered and implemented by professional programmers. Of course, that does not seem to have been the case with the UEA software that was the subject of the Climategate emails, at least until late in the piece, when an apparently competent programmer was engaged to repair (unsuccessfully) the software and data, if the code and his comments are anything to go by. Yet embedded in those comments were many subjective "best guesses" reproduced in the code, which rather supports Eric's argument.
I suggest, with the greatest gentleness, that while the idea of a single programmer embedding personal bias into the implementation may seem unlikely, "ludicrous" is possibly a little strong. I read bias to mean "subjectivity", a consequence of the micro design decisions. I am pleased to hear that you are confident in the diligence, professionalism and expertise of all (?) the climate model development teams and in the prevalence of expert numerical analysis programmers working on the models. Numerical analysis being a largely arcane discipline with few practitioners compared with other computer science skill sets (such as DBAs and project managers), competent numerical analysts are likely among the rarer computer scientists.
That would make climate science truly unique, at least in the university sector, for in my experience many non-computing scientists appeared to think that because they were extremely talented in one science or in mathematics, they were automatically competent programmers once they had learned how to build a loop. Nearly every first-year computer science student (including myself) has had the same idea. I remember wondering, at the end of first year, what else there could be to learn: I could code in Pascal, Fortran and assembler/MIX; what else could they teach us over the next three years? How foolish I was!
Over many years, and many projects in which I have been asked to investigate and fix coding disasters from seemingly competent but not properly skilled teams, I have developed a healthy disrespect for the majority of unqualified software developers and, indeed, many qualified ones. There are exceptions, but I suggest they are rarer than one would hope.
On another matter, and I guess in support of some of your comments, the mere fact that a program is fixed in code does not mean that it can not create or discover something new and independent of the programmer. Genetic algorithms and neural nets being examples (particularly the former) where solutions are evolved rather than programmed in a conventional sense and the outcome is not necessarily known. In both these cases the programmer creates a kind of virtual machine – a platform for general problem solving if you will. Likewise, computer languages and their compilers, and operating systems are examples of software without a specific solution in target, but rather a set of tools with which to create a solution.
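For readers unfamiliar with the idea, a minimal genetic algorithm sketch (the target value, population size and rates here are invented for illustration) shows the distinction: the programmer supplies only a scoring rule, and the solution itself is evolved rather than written in:

```python
import random

def evolve(fitness, n_bits=16, pop_size=40, generations=120,
           mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm: the programmer supplies only a fitness
    function; candidate solutions are bred, mutated and selected."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Fitness: decode the bits as an integer x and score -(x - 23456)^2.
# Nothing in evolve() knows the optimum; only the scoring rule does.
def fitness(bits):
    x = int("".join(map(str, bits)), 2)
    return -(x - 23456) ** 2

best = evolve(fitness)
```

Each run with a different seed takes a different path to a near-optimal bit string, which is exactly the "evolved, not programmed" character described above.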
The issue of whether there is bias in modelling software is possibly a semantic argument revolving around the meaning of "bias", and it would seem that with minor changes to the way things were argued, and some care in distinguishing the writers' underlying meanings, there is not necessarily as much disagreement on this topic as might at first appear.
I hope that you would at least agree that the mere fact that something was produced on a computer does not, of itself, have any real standing in the credibility stakes. The computer is a very good paper weight until such time as a human creates a set of instructions telling it what to do. What matters is the way those instructions are produced, what those instructions tell it to do and how those instructions tell it to do that something. In that space lies human decision making – so it is not the computer that is the authority, but the scientific team that stand behind the software it runs. It is not so much the output that is authoritative, but how that output is interpreted. This is what I understood Eric to be saying.
Is there disagreement on that central premise?
Kind Regards
Jonathan Bishop

Reply to  adarkstone
June 11, 2015 7:07 am

I hope that you would at least agree that the mere fact that something was produced on a computer does not, of itself, have any real standing in the credibility stakes.
I would partly agree, but add that a computer can be trusted to follow instructions and that is a plus in the credibility game.
Furthermore, I would add that the choice of algorithm is not important as long as the algorithm solves the equations to a stated accuracy. So I would maintain that Eric’s central premise is wrong.

Reply to  lsvalgaard
June 11, 2015 7:18 am

As I’ve mentioned a couple of times, I believe the difference between the models running cold and the models running warm was the decision to allow supersaturation of water at the boundary layer; this amplifies the water-vapour feedback, and has been the reason for a climate sensitivity larger than that of CO2 alone.
Now, for those digging around in the code, and/or the TOE, this should be easy to confirm, and I would say it is a bias: "well, it must be true, because otherwise the models are unnaturally cold".
Does reality allow supersaturation of water at the ocean-air interface? I doubt it; in any case air movement would sweep away the saturated air. And since we know cell size forces parametrization of sub-cell effects, one might justify this as a parametrization and buy the BS they're selling, but how many times does that turn out to be the truth?

catweazle666
Reply to  Jonathan Bishop
June 11, 2015 6:47 pm

lsvalgaard: “All of what you say is well-known by the modeling community who worries a great deal about numerical stability and goes to great length to solve the equations in the best possible way.”
I see.
So let’s take a look at the output of these truly wonderful human beings, and see how their striving to go to such great length to solve the equations in the best possible way actually works out in practice, shall we?
http://www.drroyspencer.com/wp-content/uploads/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png
Hmmmm…I’m not impressed.
Oh, and for what it’s worth: although I cannot claim to be a world-famous expert in compiler design or whatever, as an engineer I too have some experience in computer programming. I wrote my first bit of Fortran in December 1964, and first worked on modelling (of low-frequency heterodyne effects in automotive applications) in 1971.
I can assure you that as a humble engineer with dirt under my fingernails, working on safety-critical projects, some of which could have created smoking holes in the ground had they been as far out as the models above, I would certainly have ended up in court charged with gross negligence or worse had my work been that flawed.
Tell me Professor, would you let your children fly on an aeroplane that depended on the sort of work that your beloved climate modellers produce?

Reply to  catweazle666
June 11, 2015 7:38 pm

Well, they are trying. They don’t have it right yet. What I objected to was the notion that they have programmed their ‘biases’ and personal wishful thinking into the models. There is no evidence for that, but that does not seem to deter people [like yourself?] from believing so.

catweazle666
Reply to  lsvalgaard
June 12, 2015 3:10 pm

“What I objected to was the notion that they have programmed their ‘biases’ and personal wishful thinking into the models.”
What, you mean like an absolute, unequivocal and invincible conviction that practically the only possible influence on the Earth’s climate is anthropogenic carbon dioxide, unaffected even by almost two decades of evidence that this is highly unlikely, based perhaps on some variety of post-modern belief in original sin (although I admit I’m struggling to explain the motivation)?

June 7, 2015 4:36 am

One thing that struck me when I started to look into the AGW issue was the blind faith people in power seem to have of the results coming out of these computer climate models.
Because I have worked with other types of models, I haven't much faith in such models.
A model can only be credible if it can be validated.
Where is the empirical evidence that CO2 is a major climate driver?
There are many other things that can influence changes in climate. Maybe these can be ignored if they are Gaussian-distributed, but if they are not stochastically Gaussian over time and instead contain some sort of systematic component, then even small factors can dominate over time.
This, I have found, is the case with variations in ENSO and with climate variations, where tidal and solar factors play an important, if not dominant, role.
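A toy illustration of that point (the numbers are invented, and this has nothing to do with any real climate quantity): summing zero-mean Gaussian noise stays near zero, while the same noise with even a tiny systematic drift accumulates linearly and eventually dominates:

```python
import random

rng = random.Random(42)
n = 10_000

# Purely random, zero-mean influences: their sum grows only like sqrt(n).
noise_sum = sum(rng.gauss(0.0, 1.0) for _ in range(n))

# The same noise plus a small systematic drift: the sum grows like n.
drift = 0.05
biased_sum = sum(rng.gauss(0.0, 1.0) + drift for _ in range(n))
```

After 10,000 steps the zero-mean sum is typically of order 100 (one standard deviation of the random walk), while the drifting sum is centred on 500: the systematic term, though twenty times smaller than the noise at each step, wins over time.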

Ian Macdonald
Reply to  Per Strandberg (@LittleIceAge)
June 7, 2015 8:47 am

That is true, and one of the key issues is that we have no way of experimentally simulating an air column several miles long whose pressure and temperature vary along its length, containing a fraction of a percent of greenhouse gas, into which IR is injected. That real-world situation is very different from the numerous lab demos in which 100% CO2 is pumped into a metre-cube box, and warming supposedly noticed. The only model which simulates the real conditions is called Earth.

June 7, 2015 4:39 am

Whenever discussions about the value of computer models crop up, I’m always reminded of this Richard Feynman quote:
“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

Ed Zuiderwijk
Reply to  azleader
June 7, 2015 5:17 am

Add to that: it doesn't matter how many grid points or lines of code, how brilliant your programmers are or think they are, or how fast or slow it runs. If the answers do not reflect the real world, it is wrong. Full stop.

Steve Garcia
Reply to  azleader
June 8, 2015 2:11 am

Hahaha – You are the third person on this comment stream who’s said that! I am one of the three. GOOD! Someone out there pays attention to the Scientific Method.
Huzzah!
The world may not be going to hell in a hand basket!

Mark from the Midwest
June 7, 2015 4:40 am

I’m not surprised by Eric’s observation. For the non-math mind, I’ve found that certain buzzwords have a profound impact. “Simulation” is one of them: I know of two data products on the market that were marginally profitable, were renamed as “simulations”, and became big successes. The term “model”, taken alone, is weak, but when you say “computer model” it allows one’s imagination to take off. I’ve seen a multi-million-dollar business built on a bad OLS regression, simply because the graphs and tables were so cool. (FYI, the business crashed and burned a few years later, but not before the damage was done.) I recall Al Gore, in a speech to Hollywood types, declaring that “these are powerful computer models.” If I were a Hollywood type I’d be impressed, since their first large-scale impression comes from Pixar. I also know of two former NASA employees who worked in the public information group and don’t have enough brains to understand how jet propulsion works; maybe that relieved them of any responsibility to report anything in an accurate fashion. I also know a fair number of educated people who actually believe that the stuff Ray Kurzweil talks about is real.
These days it’s all about story-telling, and to be honest the “doom-and-gloom-if-we-don’t-act” stories are a bit more compelling than “nothing to see here” stories. As scientists we need to work hard to tell better stories. The stories can still be true to the science, but until the story-telling improves the war will not be totally won.

PiperPaul
Reply to  Mark from the Midwest
June 7, 2015 7:22 am

Computer models are seductive because they can produce fancy output graphics and absolve the user of any responsibility for math errors. All that has to be done is to enter correct data correctly, and of course in this case they can't even get that right.

Reply to  PiperPaul
June 7, 2015 6:22 pm

Just remember that the Greek theory of planetary motion was mathematically correct, and could be used to predict eclipses, and planetary motion. It however, had little correlation to actual planetary motion.

Steve Garcia
Reply to  PiperPaul
June 8, 2015 4:19 am

denniswingo – Absolutely. They had those epicycles all down pat – until they didn’t.

ICU
June 7, 2015 4:40 am

Psychological projection much?
IT ‘expert’ opines on numerical modeling.
Probably never wrote a single line of code using the physics based laws of conservation.
Perhaps you will tender your CV showing vast SME status in the field of scientific numerical modeling in the fields of hydrodynamics and thermodynamics?
Otherwise … not even wrong.

ICU
Reply to  Eric Worrall
June 7, 2015 11:55 am

Well, that is much shorter, still circular, still projection, but shorter. You could have posted that as the title with no text.
No CV showing physics based numerical modeling experience though.
So IT ‘expert’ has no relevant experience, but he just knows, because … well because he just knows.
Cart before horse much?
Anyways, I’m doing some harmonic analysis right now, the preliminary FFT’s kind of show me where to look, but I don’t believe those FFT’s, so the harmonics that do agree with the preliminary FFT’s must be wrong, so that means that the original input data must be wrong, I’ll just change the input data to agree with what I think the harmonics should be, if that doesn’t work then I’ll put in a bunch of IF statements that give me the right wrong answers.
Nice to know how IT ‘experts’ think and do their jobs though.

looncraz
Reply to  Eric Worrall
June 7, 2015 12:46 pm

ICU:
“Anyways, I’m doing some harmonic analysis right now, the preliminary FFT’s kind of show me where to look, but I don’t believe those FFT’s, so the harmonics that do agree with the preliminary FFT’s must be wrong, so that means that the original input data must be wrong”
This is not how programmers think, actually. We have a massive burden to produce the proper result and the inputs are considered “const,” so we can’t change them, we can only work upon them with certain (often preordained) algorithms.
In the event that our output is incorrect, we don't assume the input is wrong; we assume the way we are handling the input is wrong (though we do data integrity checks, of course, to ensure the inputs are plausible: a wind measurement shouldn't be -99 kph, and pressure shouldn't be 93 bar…). In the case of climate models, this is all done well in advance and the models do no input verification; they just create computational kernels, cells, or threads (design dependent) and run computations based on the inputs. These computations are broken down into different functions/equations which have been independently verified, or created and tested to illustrate what the creator was trying to illustrate. Usually, that equation is trying to illustrate reality.
From there, the fun happens. And this is where AGW models fail. They take the output from the most established methods, which are guaranteed to run cold (on a global basis) for an unknown reason (assumed to be GHG positive feedbacks not previously included) and, in effect, offset them by an assumed feedback sensitivity value, which, of course, is indexed to the GHG levels.
This produces a seemingly more accurate result; however, the sensitivity feedback values are simply guessed at, by trying different values until the results are as expected. Then they take these modified results and feed them BACK into the algorithms as input data, so the same feedbacks continue to accrue. This is why the models invariably go off in an odd direction over a century or so: they can't track a stable climate no matter what, they always assume there will be a compounding feedback, and there is, as yet, no known way to constrain that feedback to represent reality.
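A toy illustration of that tuning trap (every number here is invented, and this bears no resemblance to a real GCM): choose a compounding feedback coefficient so that a short "hindcast" matches a target value, then run the same model with the same coefficient for ten times as long:

```python
def run_model(forcing, feedback, steps, t0=0.0):
    """Toy iteration: each step adds a fixed forcing plus a feedback
    term proportional to the accumulated anomaly (so it compounds)."""
    t = t0
    for _ in range(steps):
        t = t + forcing + feedback * t
    return t

# "Calibrate": grid-search the feedback so the 30-step run ends near an
# "observed" anomaly of 1.0 (all values invented for illustration).
obs_end, forcing = 1.0, 0.02
best = min((abs(run_model(forcing, f / 1000, 30) - obs_end), f / 1000)
           for f in range(100))[1]

tuned_end = run_model(forcing, best, 30)      # matches the tuned period
extended_end = run_model(forcing, best, 300)  # same feedback, 10x longer
```

The tuned run lands within a few percent of the target, yet the extended run with the identical coefficient grows by orders of magnitude: a compounding feedback fitted over a short window says nothing about long-run behaviour.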

Now, we get to the most important factor in all of this: predictive success.
The old models were running too cold (diverging from observations). So they indexed the models to GHGs with assumed stronger feedbacks, which fixed the hind casts. Now, the observations have diverged from the models again – the models are running warm. This means their feedbacks are too strong or there is another unknown factor which is needed to create a cooling bias.
Notice: the models have NEVER predicted the future climate. They have either been too cold (predicting an imminent ice age) or too warm (predicting imminent runaway warming).

ICU
Reply to  Eric Worrall
June 7, 2015 1:50 pm

looncraz,
GIGO? The climate models do NOT make predictions.
Seriously though: you are saying something but don't have a clue what you are saying, as it does not fit at all with my experience as a physics-based numerical modeler. I don't think I've ever seen an AOGCM run off to plus/minus infinity; something about restoring forces comes to mind.
The models are climate projections, or you can even call them climate forecasts, based on certain future forcing assumptions (RCPs). They have certain ICs and BCs, and over large time scales this becomes a BC problem, so that if the radiative forcings change from those assumed, which they do (that's why there are four RCPs, to capture the possible range of future forcings), then those assumed forcings must be updated to our best current understanding (which is what climate modelers do). The same is true for the climate models themselves; those are also updated to include the climate scientists' ever better understanding of the climate system itself.
Anything beyond that is simply conspiracy thinking.

ferdberple
Reply to  Eric Worrall
June 7, 2015 6:35 pm

I don’t think I’ve ever seen an AOGCM run off to plus/minus infinity
==================
Well, duh. Infinity is not a legal binary value. It cannot be stored in a finite computer, except symbolically. Your comment is "BS baffles brains".
Do climate models run off to infinity? Of course they do, for all practical purposes. It is called instability. They run outside their bounds and halt, either from error or intentionally. The problem with GCMs is increasing instability with increasing resolution, which is a clear sign of design problems.

ICU
Reply to  Eric Worrall
June 7, 2015 7:37 pm

ferdberple,
Direct link to that figure at http://www.climateprediction.net/ not wattsupwiththat.*.*.*/2013/05/climateprediction_bad_data.jpg
So you think that ~13-14 years of whatever the heck that figure is means something?
Because without the source documents, it means absolutely nothing to me. It’s simply a graph with absolutely no background information/discussion.
TIA

looncraz
Reply to  Eric Worrall
June 8, 2015 12:40 am

ICU:
“GIGO? The climate models do NOT make predictions.”
Yes, yes they do. That is the entire point of a climate model: to make a prediction about future climate conditions. You can call them “forecasts” or “projections” all you want, but they are still subject to validation or refutation as with any prediction.
” I don’t think I’ve ever seen an AOGCM run off to plus/minus infinity though, something about restoring forces comes to mind though.”
You don’t see them running away too greatly because other models’ outputs are used as inputs to GCMs, which serves to constrain possible outputs, and the models are rarely run for more than a century. Given enough forecast time, many of those models would very likely run off into effective infinity, though any failure to do so would still say nothing about their [in]validity.
“[…]they have certain IC’s and BC’s, over large time scales this becomes a BC problem, so that if the radiative forcings change from those assumed[…] forcings must be updated to our best current understanding of those forcings (which is what climate modelers do). The same is true for the climate models themselves[…]”
Indeed so; the models are as good a representation of current understanding as can be made. However, they are also a double-edged sword. When scientists were trying to explain why the models were running too cold, they used the models with make-believe positive feedbacks to try to fine-tune them and see what was missing (the amount of energy they needed, and the pattern in which that energy must have become available to the atmosphere in the form of heat). This, of course, makes sense (and is an oversimplification of what they really did, which was an earth-climate-system energy balance analysis, using the models as guides). We only start to see a problem when they find a way to blame a small set of variables for all of the missing heat… and they ran with it.
Today, in fact, we have so many modeled ways for which the warming could have occurred, each based on more or less equally sound mathematics and models, that we would have all fried to death if they were all true (maybe a slight exaggeration).
The Clean Air Act could very well be responsible for a good chunk of the warming (if not all of it) seen in the 80s and 90s. Oceanic thermal oscillations could easily have burped out enough energy we didn’t know it contained to cause a significant chunk of the observed warming. Unknown climate responses to varying solar output frequency ranges, reduced cloudiness (beyond cover), observation biases, UHI effects, internal earth energy leaks (from the mantle), and much more have all been modeled to demonstrate that they could in part, or in whole, explain all observed warming. Some, such as certain fluorocarbon levels, have even been claimed to explain variations over just a few years – as well as explaining the pause.
The misunderstanding of the significance of the model outputs – even by those who created them – has resulted in the likely overstatement of the severity of the situation. And, when you take your assumed values and include them as constants in your model, you have created an invalid model. And no one knows that more, in my opinion, than a programmer. Mostly because we deal with the consequences of such wrong assumptions all the time… and partly because I’m biased 😉

Reply to  ICU
June 7, 2015 7:16 am

ICU, there is a computational constraint on GCMs. It limits their finest resolution to about one degree, roughly 110 km at the equator. This means they cannot resolve convection cells (which requires grid cells smaller than 10 km, ideally under 5 km), so convection has to be parameterized. That is done and tested in a number of ways, but an important one is hindcasts. For CMIP5 a mandatory output was a 30-year hindcast from 2006, covering a period that contained some amount of warming from natural variation. So, forecasting from 2006, the models proceeded to miss the pause.
Eric is right on the money. All the preceding paragraph did was delve more deeply into how the bias crept in.

MRW
Reply to  ristvan
June 8, 2015 9:24 am

@ristvan,

For CMIP5 a manditory output was a 30 year hindcast from 2006. Covering a period that contained some amount of warming from natural variation.

Can you expand on this historical note? Thank you.

Greg Woods
June 7, 2015 4:40 am

‘But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result.’
– I don’t know about anyone else, but ‘unexpected results’ are the last thing I want from software.

eromgiw
Reply to  Greg Woods
June 7, 2015 4:58 am

I once wrote a Genetic Algorithm program to ‘evolve’ reinforced concrete beams. Came up with some very unexpected designs, and each run came up with something different, albeit similar.

Ed Zuiderwijk
Reply to  Greg Woods
June 7, 2015 5:21 am

I worked in the field of simulated neural networks and we did get unexpected (but correct) results, so unexpected that we first had to convince ourselves and then our colleagues. Great fun, though.

Chris in Hervey Bay.
June 7, 2015 4:43 am

The only time you know something is wrong with your code, is when you get a result you don’t expect !

Chris in Hervey Bay.
Reply to  Chris in Hervey Bay.
June 7, 2015 4:44 am

Good God Greg, you read my mind, a result I didn’t expect !

TimTheToolMan
June 7, 2015 4:44 am

lsvalgaard writes ” The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.”
But “The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence.”
That “built-in”-ness is exactly what it’s telling us. We put in the laws to calculate the burn, and we know precisely what they are… and the computer spits out the answer according to the programming of those laws. How well do you think we’d do at getting the rocket to Pluto if we didn’t actually know the mass of the rocket? Or just rounded G to 7×10⁻¹¹ N·m²/kg²?
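To put a rough number on that: a crude toy integration (invented setup, simple semi-implicit Euler stepping, nothing like real mission software) of a satellite around an Earth-mass body shows how far a trajectory drifts if G is "rounded" from 6.674×10⁻¹¹ to 7×10⁻¹¹:

```python
import math

def propagate(G, steps=10_000, dt=10.0):
    """Crude orbital integration around an Earth-mass body. The initial
    circular velocity is computed with the accepted G, so any different
    G used for propagation shows up as trajectory drift."""
    M = 5.972e24                            # Earth mass, kg
    r0 = 7.0e6                              # ~600 km altitude orbit, m
    v0 = math.sqrt(6.674e-11 * M / r0)      # circular speed for accepted G
    x, y, vx, vy = r0, 0.0, 0.0, v0
    for _ in range(steps):
        r = math.hypot(x, y)
        a = -G * M / r**3                   # inverse-square gravity
        vx += a * x * dt
        vy += a * y * dt
        x += vx * dt
        y += vy * dt
    return x, y

x1, y1 = propagate(6.674e-11)               # accepted value of G
x2, y2 = propagate(7e-11)                   # G "rounded" to one digit
drift = math.hypot(x1 - x2, y1 - y2)
```

After roughly a day of simulated time the two trajectories differ by far more than 100 km; over a ten-year cruise to Pluto, a ~5% error in G would not be a miss, it would be a different solar system tour.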

Reply to  TimTheToolMan
June 7, 2015 4:48 am

You miss the point. We put in the physics, not the expected result. This is why computers are useful to let us see what the result of the physics is, something we would not otherwise know.

hunter
Reply to  lsvalgaard
June 7, 2015 5:00 am

Climate models are not physics.

Ed Zuiderwijk
Reply to  lsvalgaard
June 7, 2015 5:27 am

The question is: have you put in the physics correctly? Or better: do you understand the physics in sufficient detail to have put it in correctly? With so many different aspects to coupled ocean-atmosphere models, some of which are poorly understood themselves (e.g. heat exchange between surface water and the lower atmosphere), I doubt it very much.

richardscourtney
Reply to  lsvalgaard
June 7, 2015 5:50 am

lsvalgaard
As usual, you refuse to see the point. You say

We put in the physics, not the expected result. This is why computers are useful to let us see what the result of the physics is, something we would not otherwise know.

Often “the physics” cannot be “put in” because the model cannot accept them or they are not known. Both are true in climate models.
For example, the climate models lack resolution to model actual cloud effects so those effects are “parametrised” (i.e. guessed) in the models. And the dozens of alternative explanations for the “pause” demonstrate that the physics to determine which – if any – of the explanations are correct is not known.
The climate models are the product of their input “parametrisations”. Thus, they are merely digitised expressions of the prejudices of their constructors.
It is madness to claim the economies of the entire world should be disrupted because those prejudices demand it.
Richard

Reply to  richardscourtney
June 7, 2015 5:53 am

It is nonsense to say that the ‘model cannot accept the physics’. And the point was not about current climate models, but about the much more general statement that computers only give you what you expect. It is that misconception that I disagree with.

richardscourtney
Reply to  lsvalgaard
June 7, 2015 6:14 am

lsvalgaard
As usual, you try to justify your nonsense by accusing others of nonsense, and you obfuscate.
I wrote

Often “the physics” cannot be “put in” because the model cannot accept them

and

For example, the climate models lack resolution to model actual cloud effects so those effects are “parametrised” (i.e. guessed) in the models.

but you write

It is nonsense to say that the ‘model cannot accept the physics’.

Nope. Not nonsense, but – as I explained – fact.
Clearly, you would benefit from taking a remedial course on reading comprehension.
Not content with that, you obfuscate by saying

And the point was not about current climate models, but about the much more general statement that computers only give you what you expect. It is that misconception that I disagree with.

Sorry, but the discussion is “about current climate models”.
You may want to talk about something else (because you know you are wrong?) but that is your problem.
Richard

Reply to  richardscourtney
June 7, 2015 6:17 am

No, the issue as Eric presented it is not about climate models per se, but about the more general notion that computers can only give you the expected answers. But past experience with you tells me that it is useless to educate you about anything, so I’ll let you rest there.

richardscourtney
Reply to  lsvalgaard
June 7, 2015 6:29 am

lsvalgaard
You say to me

No, the issue as Eric presented it is not about climate models per see, but about the more general notion that computers can only give you the expected answers. But past experience with you tells me that it is useless to educate you about anything, so I’ll let you rest there.

I don’t believe you when you say that you will “let {me} rest there”.
You have entered another of your frequent ego trips so you will insist on the last – and typically silly – word.
You made an untrue assertion. I pointed out that you were wrong (again).
You have pretended you were right and tried to change the subject.
As always, you have attempted to have the last word with an unfounded insult.
I am writing this to laugh at you, and I predict you won’t “rest there” because you will want to make another post to provide you with the last word by telling everybody how clever you think you are.
Richard

KaiserDerden
Reply to  lsvalgaard
June 7, 2015 7:08 am

but if you jigger the inputs you can be sure to get the result you want … with any model … the program isn’t written to get warming … but if you jigger the inputs you can know in advance you’ll get warming or cooling … and humans control the inputs not the program …

Reply to  KaiserDerden
June 7, 2015 7:12 am

Again: not the fault of the models, but of people.

Bruce Cobb
Reply to  lsvalgaard
June 7, 2015 7:24 am

From the essay; “Computers are amazing, remarkable, incredibly useful, but they are not magic.”
Reading is fundamental.

Glenn
Reply to  lsvalgaard
June 7, 2015 11:33 am

You may benefit from contemplating that computers ONLY perform operations with the instructions provided. “The physics” IS the expected result. Computers do not output anything that we do not “know”.
Well, at least when everything goes as expected. If not, look to what the computer was told to do.

Reply to  Glenn
June 7, 2015 11:43 am

Computer analysis of the recordings of seismic waves tells us what the internal structure of the Earth [and of the Sun] is, and computer modeling of the waves from man-made explosions tell us where to drill for oil. In none of these cases is the answer put into the program beforehand. So the computer let us ‘see’ things we cannot see otherwise.

ferdberple
Reply to  lsvalgaard
June 7, 2015 6:40 pm

People born into the calculator age have no appreciation of the simple slide rule. We HAD to invent computers because once calculators were invented, we forgot how to use slide rules.

ferdberple
Reply to  lsvalgaard
June 7, 2015 6:56 pm

computers only give you what you expect. It is that misconception that I disagree with.
====================
This strikes me as hair-splitting. Of course computers can give you unexpected results. From long experience fighting these infernal machines, 99.99999% of the time these “unexpected results” turn out to be errors.
The 0.00001% of the time the “unexpected results” are not errors, 99.99999% of the time the people doing the development cannot recognize that they have something novel, and throw the result away by changing the code to deliver a more “acceptable” result.
If your paycheck depends on finding the wrong answer, most of the time that is what you are going to find. The times you don’t will be called unemployment.

Dems B. Dcvrs
Reply to  lsvalgaard
June 7, 2015 9:25 pm

lsvalgaard: “Computer analysis of the recordings of seismic waves tells us what the internal structure of the Earth …” “computer modeling of the waves from man-made explosions tell us where to drill for oil”
You have almost no idea about the processes involved in seismic oil exploration. Humans are involved throughout the processing. Humans make experienced, and thus biased, decisions about the initial starting parameters for the computer programs used to analyze the waves. Humans make changes or corrections to those starting parameters, adding further bias, as the iterative processing continues. And in the end, humans still decide where to drill for oil, not computer models.

Reply to  Dems B. Dcvrs
June 7, 2015 9:31 pm

A computer-aided decision, thus. Are you trying to say that we can dispense with the computer altogether?
http://www.seismicsurvey.com.au/ I think not.

Dems B. Dcvrs
Reply to  lsvalgaard
June 7, 2015 9:46 pm

And you miss the point. Global Warming Climatologists have “put in the physics” as they see it, as they can program it.
That explains why the vast majority of Climate Models have been wrong.

Reply to  Dems B. Dcvrs
June 7, 2015 9:50 pm

So you postulate a vast global conspiracy of scientists all working towards the same nefarious goal. If so, I have a bridge to sell you. You cannot put physics in ‘as you see it’. The physics is given, not chosen.

Dems B. Dcvrs
Reply to  lsvalgaard
June 7, 2015 10:00 pm

lsvalgaard: A computer-aided decision, thus. Are you trying to say that we can dispense with the computer altogether?
Nope. What I did say is you know very little of what you remarked on.
lsvalgaard: “computer modeling of the waves from man-made explosions tell us where to drill for oil”
Let me put it in terms you will understand. You are wrong.

Reply to  Dems B. Dcvrs
June 7, 2015 10:06 pm
TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 4:56 am

No Leif, you’ve missed the point. When you don’t precisely know the physics, how do you design the software if you genuinely don’t know what result to expect? Tiny changes in the choices of parameters can cause the software to fail by going off the rails, so which choices does one use?
Well, the answer is that you use choices that fit within the expected probable ranges and produce an overall result that is also within the expected range.
To another seasoned programmer, it’s not rocket science to understand how the GCM programmers have built their models when plagued with uncertainty and stability problems…

Reply to  TimTheToolMan
June 7, 2015 5:14 am

The answer is simple: you do the best you can. You do NOT write the program so as to get the result you expect. If that is what you are after, there is no need to write the program at all. If the problem is hard, the interpretation of the output of the program is hard as well. Having programmed for 50+ years, one learns what both the power and the limitations are. My experience is in stark contrast to Eric’s. And most other commenters here have no idea what they are talking about. I’m reasonably sure that if the computer programs showed that CAGW is not occurring, the echo chamber that people here are all shouting in would praise the computer models to high heaven.

Leonard Weinstein
Reply to  TimTheToolMan
June 7, 2015 5:25 am

lsvalgaard, you again miss the point. No, you do not put in the physics in climate models; you put in the physics you do know, then add approximations and guesses to cover the physics you do not know. The choices for the physics you do not know, and the approximations and guesses, are adjusted to get a reasonable fit to past data (i.e., the result you want). Then you project into the future and see if the future agrees with the model as time marches forward. The climate models show no skill at all, so they are failures. That is all there is to this argument.

Reply to  Leonard Weinstein
June 7, 2015 5:30 am

No, that is not all there is to that argument. Disagreements between models and reality are opportunities to LEARN something, and if you do, the learning will improve the models incrementally.

TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 5:31 am

Leif writes “The answer is simple: you do the best you can.”
And again, I ask you how well we’d do getting a rocket to Pluto if we coarsely approximated G or didn’t know the rocket’s mass?

Reply to  TimTheToolMan
June 7, 2015 5:33 am

If at first you fail, you try again. The failure may tell you what the correct values should be… so learn from the failures.

TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 5:36 am

Leif writes “the learning will improve the models incrementally.”
Nope. Curve fitting never improves a GCM, so there is nothing to learn from them directly. Only properly understood and implemented physics could do that, and since the problem is one of not understanding crucial physics (e.g. clouds) and not being able to implement it anyway (i.e. insufficient computing power), the pursuit of climate prediction is fruitless today.

Reply to  TimTheToolMan
June 7, 2015 5:42 am

But we have to keep trying, and eventually we may get it right. It is nihilistic to think that just because it doesn’t work today, it will never work. At some point in time it will work well enough to be useful. Weather prediction is already getting to this point. It certainly does better now than when we started out around 1950.

TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 5:52 am

Leif writes “Weather prediction is already getting to this point. It certainly does better now than when we started out around 1950.”
Hell yeah! But weather prediction is a completely different question from climate prediction, and our ability to model weather says essentially nothing about our ability to predict climate.

Reply to  TimTheToolMan
June 7, 2015 5:56 am

And neither has anything to do with the general notion that computers can only give you what you expect, which is what I disagreed with. Computer models allow you to ‘see’ what you otherwise cannot. Sometimes the seeing is poor, but progress is possible and happening.

Leonard Weinstein
Reply to  TimTheToolMan
June 7, 2015 5:59 am

lsvalgaard, let me give you an example. As computers became more and more powerful, many people claimed they could learn to play a perfect chess game. It never happened, and never will; the game is too complex. However, computers did become able to beat the best human player in the world. They did that by putting in the same opening combinations and strategy human players use, and by looking at all combinations about 5 or so moves ahead, something they could do. Humans could only compute the combinations a slightly smaller number of moves ahead (say 4 or so). This was not a solution to the problem, just a sufficient approximation to beat humans. Weather can now frequently be estimated about 3 or so days ahead (but not well where I live), but will never be computable 100 years ahead, and since climate is nothing but average weather, climate will also not be computable 100 years ahead. The issue of whether a forcing like CO2 will cause a significant trend up (ALL OTHER FACTORS BEING THE SAME) may be doable, but all other factors are not going to be the same, so I think the climate models will never be able to settle the issue. That does not mean they cannot do better than they have for shorter periods, but they have failed badly so far in all efforts, and do not seem to be doing better with time.
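[Editor's note: the “look N moves ahead” approximation described above is ordinary depth-limited minimax. A toy sketch on a hand-made game tree — every node name, leaf value, and heuristic guess here is illustrative, not taken from any chess engine:]

```python
# Toy game tree: internal nodes list their children; leaves carry exact values.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_VALUES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
# Cheap heuristic guesses, used only when the search is cut off early.
HEURISTIC = {"a": 1, "b": 4}

def minimax(node, depth, maximizing):
    """Depth-limited minimax: exact at the leaves, heuristic at the cutoff."""
    if node in LEAF_VALUES:
        return LEAF_VALUES[node]
    if depth == 0:
        return HEURISTIC[node]        # search cut off: fall back to a guess
    children = [minimax(c, depth - 1, not maximizing) for c in TREE[node]]
    return max(children) if maximizing else min(children)

full = minimax("root", 2, True)     # full search: max(min(3,5), min(2,9)) = 3
shallow = minimax("root", 1, True)  # cut-off search: max(1, 4) = 4
```

Note that the shallow search even prefers a different branch than the full search — exactly the sense in which a bounded lookahead is “a sufficient approximation, not a solution.”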

Reply to  Leonard Weinstein
June 7, 2015 6:03 am

Computers don’t have to play a perfect game; it is enough that they can beat Kasparov. And you are off the rails, because the issue with which I disagree is the claim that computers can only give you the answer you want and already know. Someone here even claimed that the test of correctness was that the answer was what you expected [in which case you didn’t need to run the program at all].

TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 6:04 am

Leif writes “And neither have anything to do with the general notion that computers can only give you what you expect”
And so we come full circle. If you don’t know the physics, then the computers are only giving you what you program them to tell you. There’s nothing magical about tuning parameters to get a result within an expected range if the expected range is, say, positive.
How does the computer give you a useful answer if in reality the range includes negative values, but you hadn’t experienced any, so you excluded them from the calculations because the consensus said negative values were unreasonable?

Reply to  TimTheToolMan
June 7, 2015 6:10 am

The purpose of the models is to apply the physics you know the best you can, parameterizing what you don’t know and learning from any deviations you discover in a never-ending cycle of improvements. The notion that computer models are constructed such as to give you only what you want is hopelessly wrong and naive, although widespread among people who don’t know what they are talking about.
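[Editor's note: the calibrate-and-learn cycle described above is, in its simplest possible form, parameter estimation against observations. A minimal sketch — the one-parameter linear “model” and all numbers are purely illustrative:]

```python
def calibrate(forcings, observations, steps=100, lr=0.05):
    """Tune one model parameter k (model: y = k * forcing) by gradient
    descent on the mean squared deviation from observations."""
    k = 0.0
    n = len(forcings)
    for _ in range(steps):
        # Gradient of mean((k*f - y)^2) with respect to k.
        grad = 2.0 / n * sum(f * (k * f - y)
                             for f, y in zip(forcings, observations))
        k -= lr * grad
    return k

# Synthetic observations generated with a "true" sensitivity of 0.8.
forcings = [1.0, 2.0, 3.0, 4.0]
obs = [0.8 * f for f in forcings]
k_fit = calibrate(forcings, obs)   # converges to ~0.8
```

Whether such tuning amounts to “learning physics” or merely fitting the curve is exactly the disagreement running through this thread.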

TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 6:15 am

Leif writes “parameterizing what you don’t know and learning from any deviations you discover in a never-ending cycle of improvements.”
This is curve fitting.
Leif writes “The notion that computer models are constructed such as to give you only what you want is hopelessly wrong and naive, although widespread among people who don’t know what they are talking about.”
This is ironic.

Reply to  TimTheToolMan
June 7, 2015 6:24 am

Curve fitting is not bad as such. The orbit of a spacecraft passing through a system of moons [e.g. Jupiter’s] depends on the unknown masses of the planet and of the moons. Curve fitting to the actual observed orbit can tell you what those unknown masses are. Curve fitting is part of the science.
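[Editor's note: a one-parameter version of the mass inference described above: Kepler's third law, T² = (4π²/GM)·a³, lets an unknown central mass be recovered by fitting observed periods against orbital sizes. The orbits below are synthetic, in solar units, where GM_sun = 4π² AU³/yr²:]

```python
import math

def fit_gm(semi_major_axes, periods):
    """Least-squares fit of T^2 = c * a^3 through the origin; GM = 4*pi^2 / c."""
    xs = [a ** 3 for a in semi_major_axes]
    ys = [t ** 2 for t in periods]
    c = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return 4.0 * math.pi ** 2 / c

# Synthetic orbits around a body with GM = 4*pi^2 (the Sun, in AU and years).
a_vals = [1.0, 2.0, 3.0, 4.0]
t_vals = [a ** 1.5 for a in a_vals]
gm_est = fit_gm(a_vals, t_vals)    # ~4*pi^2, the mass never typed in directly
```

The mass is nowhere in the fitting code; it emerges from the observed orbits, which is the point of the example.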

Latitude
Reply to  TimTheToolMan
June 7, 2015 6:19 am

lsvalgaard
June 7, 2015 at 5:30 am
No, that is not all there is to that argument. Disagreements between models and reality are opportunities to LEARN something
=====
Even though no one knows what the physics of CO2 really is… even though the temperature record has been so corrected that no one knows what it really was or is…
So you put in physics that’s a total WAG… a temperature record that’s a total WAG…
…and out comes knowledge.
Question: how would you recognize that it’s knowledge if it’s merely confirming what you think knowledge is?

thallstd
Reply to  TimTheToolMan
June 7, 2015 6:38 am

“The notion that computer models are constructed such as to give you only what you want is hopelessly wrong and naive”
They may be “constructed” with the best objective physics we have. But if they are constantly being run with climate-to-CO2 sensitivity values higher than justified, because that is what is needed to illustrate dangerous warming, then the models are being “run” to generate the desired outcome.
I don’t know if that is the case or not. But it renders irrelevant any discussion about how the models are constructed when so much of their output depends on the parameters they are run with.

TimTheToolMan
Reply to  TimTheToolMan
June 7, 2015 6:39 am

Leif writes “Curve fitting is not bad as such.”
Curve fitting is fatal in GCMs.
You give the example “Curve fitting to the actual observed orbit can tell you what those unknown masses are.”
That’s not curve fitting; that’s an analysis based on gravitational perturbation using precisely known physics. A curve fit would be sending a spacecraft around a planet once, using an analysis such as the above to calculate probable moon masses, and then sending the next spacecraft through as a slingshot, knowing the moon masses but with no regard to where the moons were in their orbits.
It’s a curve fit because you have one view of some data but don’t know how it changes or what impact those changes will have.

Reply to  TimTheToolMan
June 7, 2015 6:42 am

When Kepler deduced his three laws of planetary motion, that was curve fitting of the highest degree, without any physics. Curve fitting is a good way of expressing what you have found out about a system even if you don’t know the exact physics; in fact, it is the only way.
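[Editor's note: Kepler's third law really can be recovered by pure curve fitting. A log-log regression of orbital period against semi-major axis for the six planets Kepler knew yields a slope of almost exactly 3/2, with no dynamics put in. The data are standard published values in AU and years:]

```python
import math

# Semi-major axis (AU) and orbital period (years) for the six classical planets.
PLANETS = {
    "Mercury": (0.387, 0.241), "Venus": (0.723, 0.615), "Earth": (1.000, 1.000),
    "Mars": (1.524, 1.881), "Jupiter": (5.203, 11.862), "Saturn": (9.537, 29.457),
}

def fit_power_law_exponent(pairs):
    """Least-squares slope of log(T) vs log(a), i.e. the exponent in T ~ a**k."""
    xs = [math.log(a) for a, _ in pairs]
    ys = [math.log(t) for _, t in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

exponent = fit_power_law_exponent(list(PLANETS.values()))  # ~1.5, Kepler's law
```

The fit “knows” nothing about gravity, yet the 3/2 exponent falls out of the data — which is what makes it a good test case for the question of whether curve fitting can ever teach you something you did not already expect.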

Reply to  TimTheToolMan
June 7, 2015 6:42 am

Computer Modeling for Climate Analysis by the IPCC:
1. Unless the Models are programmed with all the applicable physics required, the results are useless for Government policy.
2. If the Models contain known physics AND approximations/best guesses of unknown physics, then the output is only suitable for study and learning.
Dr. lsvalgaard is mostly correct, but he didn’t consider how these Models (item 2) are being utilized for a political agenda. This is not how science should be applied.