One point struck me, reading Anthony’s fascinating account of his meeting with Bill McKibben. Bill, whose primary expertise is writing, appears to have an almost magical view of what computers can do.
Computers are amazing, remarkable, incredibly useful, but they are not magic. As an IT expert with over 25 years of commercial experience, someone who has spent a significant part of almost every day of my life since my mid-teens working on computer software, I’m going to share some of my insights into this most remarkable device – and I’m going to explain why my experience of computers makes me skeptical of claims about the accuracy and efficacy of climate modelling.
First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience; it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine built to serve your needs – it’s an eerie sensation, feeling your intellectual reach unfold and expand with the help of a machine.
But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result. More than anything, software is a mirror of the creator’s opinions. It might help you to fill in a few details, but unless you deliberately and very skilfully set out to create a machine which can genuinely innovate, computers rarely produce surprises. They do what you tell them to do.
So when I see scientists or politicians claiming that their argument is valid because of the output of a computer model they created, it makes me cringe. To my expert ears, all they are saying is that they embedded their opinion in a machine and it produced the answer they wanted it to produce. They might as well say they wrote their opinion into an MS Word document and printed it – here is the proof, see, it’s printed on a piece of paper…
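To illustrate the point, here is a deliberately trivial sketch of my own (hypothetical code, not anyone’s actual model): the “result” below is nothing but the sensitivity number its author typed in.

```python
import math

# A deliberately trivial "climate model" (hypothetical, for illustration):
# the output is entirely determined by the author's assumed sensitivity.
def toy_model(co2_ppm, assumed_sensitivity=3.0, baseline_ppm=280.0):
    """Projected warming in C: just the author's opinion, with arithmetic on top."""
    return assumed_sensitivity * math.log2(co2_ppm / baseline_ppm)

print(toy_model(560))                           # 3.0 -- exactly the assumed sensitivity
print(toy_model(560, assumed_sensitivity=1.5))  # 1.5 -- change the opinion, change the "answer"
```

The arithmetic is impeccable; the answer was decided before the first line of code was written.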
My second thought is that it is very easy to be captured by the illusion that a reflection of yourself means something more than it does.
If people don’t understand the limitations of computers, if they don’t understand that what they are really seeing is a reflection of themselves, they can develop an inflated sense of the value the computer is adding to their efforts. I have seen this happen more than once in a corporate setting. The computer almost never disagrees with the researchers who create the software, or who commission someone else to write the software to the researchers’ specifications. If you always receive positive reinforcement for your views, it’s like being flattered – it’s very, very tempting to mistake flattery for genuine support. This is, in part, what I think has happened to climate researchers who rely on computers. The computers almost always tell them they are right – because they told the computers what to say. But it’s easy to forget that all that positive reinforcement is just a reflection of their own opinions.
Bill McKibben is receiving assurances from people who are utterly confident that their theories are correct – but if my theory as to what has gone wrong is correct, the people delivering the assurances have been deceived by the ultimate echo chamber. Their computer simulations hardly ever deviate from their preconceived conclusions – because the output of their simulations is simply a reflection of their preconceived opinions.
One day, maybe one day soon, computers will supersede the boundaries we impose. Researchers like Kenneth Stanley, like Alex Wissner-Gross, are investing their significant intellectual efforts into finding ways to defeat the limitations software developers impose on their creations.
They will succeed. Even after 50 years, computer hardware capabilities are still growing exponentially, doubling roughly every 18 months, unlocking ever greater computational power – power to conduct ever more ambitious attempts to create genuine artificial intelligence. The technological singularity – a prediction that computers will soon exceed human intelligence, and transform society in ways which are utterly beyond our current ability to comprehend – may only be a few decades away. In the coming years, we shall be dazzled by a series of ever more impressive technological marvels. Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model – will in the next few decades start to fall like skittles before the increasingly awesome computational power, and software development skills, at our disposal.
But that day, that age of marvels, the age in which computers stop being mere machines and become our friends and partners, maybe even part of us through neural implants – perfect memory, instant command of any foreign language, immediate recall of the name of anyone you talk to – that day has not yet dawned. For now, computers are just machines; they do what we tell them to do – nothing more. This is why I am deeply skeptical about claims that computer models created by people who already think they know the answer, who have strong preconceptions about the outcome they want to see, can accurately model the climate.

I disagree with that statement in the sense that, firstly, I don’t think any software can be defined as a “model” if there are any assumptions in it; it should be called what it is – a “guess”.
Secondly, when sophistry creeps into the equation it transforms from a “guess” into outright fraud.
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
http://wattsupwiththat.com/2009/12/04/climategate-the-smoking-code/
It’s actually quite easy. 🙁
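For readers who don’t read IDL, here is my rough Python rendering of the snippet quoted above (my own translation, so treat the details as approximate): it pairs a list of year positions with a hand-written adjustment curve, scaled by 0.75, and aborts if the two arrays don’t line up.

```python
# Rough Python equivalent of the quoted IDL "fudge factor" snippet.
# yrloc: 1400, then the years 1904, 1909, ..., 1994 in 5-year steps.
yrloc = [1400] + [1904 + 5 * i for i in range(19)]
# valadj: a hand-drawn adjustment curve, scaled by 0.75 ("fudge factor").
valadj = [v * 0.75 for v in
          [0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
           0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]]
if len(yrloc) != len(valadj):
    raise ValueError("Oooops!")
# The final decades get the largest upward adjustment (2.6 * 0.75 = 1.95).
```

Whatever the snippet was ultimately used for, the point stands: those adjustment values were typed in by a person, not produced by physics.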
Mr Worrall has the wrong end of the stick. Computer software reflects the wants of the people who paid for it to be built.
Briffa’s pressure to present a nice tidy story? 🙂
Not really. But if the results aren’t clear and unambiguous, there are often some forces in play that will cause certain sorts of corrections to be favored over others. So, yes computer models can be biased.
I agree! Everybody in the IT business can quote any number of cases where people paid obscene amounts of money and got something that most certainly didn’t satisfy their wants. One could even suggest that this is more common than the opposite outcome.
@TTY
Do I need to mention “Obamacare system software”?
For those who would like to see a video on these subjects by a professor of applied mathematics at the University of Guelph in Canada, I recommend Dr Chris Essex’s lecture on YouTube: https://www.youtube.com/watch?v=19q1i-wAUpY
Chris Essex explains in detail the problems in modeling climate. The explanation is somewhat different from what you read here. He and Ross McKitrick, a professor of economics, have a book on the subject:
Taken By Storm: The Troubled Science, Policy and Politics of Global Warming.
Ross McKitrick you know from his work with Steven McIntyre on the hockey stick graph.
More than 30 years working with computers and software leads me to agree with Eric Worrall, and to find defects in the arguments of Dr. Svalgaard. That no computer has yet consistently and accurately predicted yesterday’s weather should be a hint.
And you are ascribing that failure to the programmers putting in the expected result?
I would rather say that some problems are hard and progress is therefore slow, but there HAS been progress in weather prediction since I did my first runs some 50 years ago.
You did your first runs on what mainframe?
a GIER and a SAAB-D21
http://datamuseum.dk/site_dk/rc/gierdoc/ieeeartikel.pdf
http://en.wikipedia.org/wiki/Datasaab
Don’t get Leif and me started on old computers….
Surely you program your own computer. You code what you believe to be true as to the calculation to be performed. You code how the machine is to manipulate the input data to show what the output should be. No matter how complex the calculation, you control every single step. If you believe a known input causes positive feedback, you write the formula that way. The computer always outputs what you told it to output. In the end, you, not the computer, determine the truth by comparing the output to reality. For what it is worth, I think you have done a fine job in seeking the truth.
If you believe a known input data causes positive feedback you write the formula that way
That is where you go wrong. The models don’t work that way.
Rick, I’ll compare my old computer to your old computer… Mine had tubes with main memory on a drum.
docwat, You must be more ancient than Leif or me. My father designed a process control computer that executed off a drum, but the electronics were germanium transistors. He boasted it was so easy to program that a 12 year old could do it, and got me programming the I/O processor to prove it. Released in 1962, it was also the world’s first commercially successful parallel processor.
One of the ease-of-use aspects was its decimal design – 25 bit words, a sign bit and six BCD digits.
An Australian power plant was the last site I know of that ran one. They shut it down nearly twenty years ago. They did instruction layout optimization on an Excel spreadsheet.
Dad always thought Bailey Meter should have come out with an IC based version of the machine. Had they done that, there would be versions running today.
@Rick W
That sounds like it was a Univac III, ……. was it? See: http://en.wikipedia.org/wiki/UNIVAC_III
If so, …… then I personally knew 2 or 3 of its design engineers.
Which was my “IN” into the wonderful world of computer design engineering and manufacturing.
Nope, Bailey Meter 756. Being outside of the computer company/computer science circles, there’s very little available on the web. I see there’s a reference to a booklet from the Australia Computer Museum Society (I gave the author some information and a photo of my father from that timeframe).
There’s a little more about the Bailey Meter 855, the follow-on system. It used ICs (Fairchild?) and core memory. One realtime feature it had was a hardware based process scheduler. Every line clock tick would trigger the system to switch to a different set of registers, including the program counter. I think it had eight sets, so each process got its 1/8th of the CPU time.
So true, Eric. I’ve lost count of the number of equations I’ve corrected by simple use of parenthesis in an argument. Computers do EXACTLY what you tell them to do; indeed, they can do nothing else.
I remember many years ago, during my first efforts to program in ‘C’ on an embedded micro-controller platform, my first successful compile was something of a celebration. What every programmer knows is that a successful compile only confirms that the syntax has been correctly encoded. The computer will then faithfully execute all of your mistakes.
I was involved in writing the control software for a large nuclear power plant in the mid-1990s. We had developed the system to a point where I was confident that we had met most of the functional requirements. We had tested and tested, and the system seemed robust and stable, so we asked for a green test by an actual plant operator. Within five minutes he did something totally unexpected that resulted in a major transient (in simulation, of course, not the actual plant). We eventually installed that system, and pretty much eliminated plant trips due to control system failure; however, we still discovered interactions and dependent paths that we could not find by simulation. We eventually made over 400 additional software and hardware corrections to refine the system to its current level of reliability. And we really aren’t done; it’s just that we’ve reached the point of diminishing returns where it doesn’t make economic sense to keep refining the process.
Computers are the most complex machines ever invented by mankind. Those science fiction movies where the genius programmer stares at a list of source code and then determines how he can save the world with just a few little modifications, all without testing, show just how far from reality Hollywood writers are. Makes a nice feel-good story, but don’t bet your world on the ability of anyone to get software right the first time.
There is, of course, a lot of information on how software and computers *really* work, but years ago I was following GROKLAW almost daily, before it shut down, just as I follow WUWT now. For those interested, there is an excellent set of articles archived on the subject of software and mathematics, and how they are intertwined, and how that affects the patent system there:
http://www.groklaw.net/staticpages/index.php?page=Patents2
-BillR
Like spending five years writing flight management software before you let the first pilot look at it. The pilot always does something unexpected and breaks the thing within five minutes. Thirty five years later they are still breaking it.
Flight management software is probably trivial compared to climate modeling software (FMS ~ 2.5 x 10^5 lines of code).
Climate modeling software does have the likely advantage that the developers are most likely the users.
the “users” are the politicians … they are the ones asking for answers … the modelers are just rent seeking hucksters bent on getting another paycheck out of the users …
“spending five years writing flight management software before you let the first pilot look at it.”
And tragically, there can still be issues, like the recent Airbus A400M crash.
Computers will tell the researcher what he wants to hear even with the wrong physics written in the code.
In my industry of big Pharma, a common test of drug product is known as “dissolution”. Basically you place the tablet in a pot of 37C water, agitate with a spinning paddle, and measure how the drug dissolves.
One group has developed theoretical software to predict dissolution behavior based on inputs such as particle size, surface area, etc. After nearly 20 years of spitting out so-called accurate predictions, someone finally looked into the code and discovered the original writer had included an important physical constant, but with its value at 25C rather than at 37C, the actual test temperature. There is a big difference between the two values, yet every user of the program for 20 years has sworn to its accuracy in predicting reality.
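I don’t know which constant that program used, but a plausible stand-in (my assumption, purely for illustration) is a diffusion coefficient. Via the Stokes-Einstein relation, D = kT / (6·pi·eta·r), the value depends on temperature both directly and through water’s viscosity, which drops sharply between 25 C and 37 C:

```python
import math

# Hypothetical illustration (not the actual pharma code): how much a
# diffusion coefficient changes between 25 C and 37 C via Stokes-Einstein.
K_B = 1.380649e-23    # Boltzmann constant, J/K
RADIUS_M = 1e-9       # assumed particle radius, m (made up for this sketch)

def diffusion_coeff(temp_k, water_viscosity_pa_s):
    return K_B * temp_k / (6 * math.pi * water_viscosity_pa_s * RADIUS_M)

d_25c = diffusion_coeff(298.15, 8.90e-4)  # viscosity of water at 25 C, Pa*s
d_37c = diffusion_coeff(310.15, 6.92e-4)  # viscosity of water at 37 C, Pa*s
print(round(d_37c / d_25c, 2))  # ~1.34: a ~34% error if the 25 C value is hard-coded
```

A constant like that, buried in one line, can skew every prediction the program makes while the output still looks perfectly plausible.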
Eric Worrall
You say
Yes. The way I have often stated it is:
A claim that man-made global warming exists is merely an assertion: it is not evidence and it is not fact. And the assertion does not become evidence or fact by being voiced, written in words, or written in computer code. This is true regardless of who or how many have the opinion that AGW exists.
Richard
As an electronics engineer I use thermal computer modelling all the time, and it is very useful, but only if calibrated against the real world. The assumptions are the key point; if they are incorrect, the models are useless.
As a person who worked on IPCC Chapter 10 on modelling, and an IT expert with years of experience in modelling (in industry, not climate), I was stunned by the ignorance of so-called experts who had no clue about the limits of such things. It was as if I was whispering in a hurricane.
Artificial intelligence will never happen – it will just be some sort of algorithm balancing probabilities that it invokes from somewhere or attempts to develop. For instance, will a computer develop a superior theory of Relativity? It may use physical data and equations it selects to try and fit something but it will just use some sort of exhaustive search moderated by some sort of constraints it may be enabled to apply. People should realize that AI is the IT analog of Darwinism and it is patently false. There is such a thing as intelligent design. We are made in the image of God.
Copying a note I added to the McKibben thread, http://wattsupwiththat.com/2015/06/06/my-one-on-one-meeting-with-bill-mckibben/#comment-1955915
One thing that would keep me from looking for a climate modeling gig is that I have no idea how one can model water in the atmosphere. One moment it’s saturating (or supersaturating) the atmosphere and doing its best to be the dominant greenhouse gas. The next moment it’s turning into a cloud and reflecting nearly all the sunlight back into space.
Maybe I could start with the Atacama Desert.
Slightly O/T but has everyone else noticed the huge ramp-up in media coverage of doom laden climate change stories in advance of the Paris summit? My preferred site for several years now has been Yahoo news, and the uptick in articles has been a true hockey stick. My overall impression is that the sceptics have won the scientific battle long, long ago, but the propaganda war, which is vastly more important, is now irretrievably lost. Perceptions are reality.
Spot On
Eric, excellent article. I too have spent my entire career in the computer industry and I often take for granted that people know how computers work. Sadly, the vast majority of people out there have no conception. Even telling people that computers precisely execute instructions written by people (nothing more, nothing less) is hard for the average person to understand. They’ve never seen computer source code, they don’t know what a compiler is, and they have never encountered the term Von Neumann machine. It’s all just magic.
I’m guessing that 10% of the developed world has used Microsoft Excel. When I tell people that an Excel spreadsheet is a type of computer model, then that 10% begins to understand. I can create a spreadsheet (model) whose purpose is to predict the future direction of the stock market, based on things like consumer sentiment, GDP growth, P/E ratios, money supply, etc. But that doesn’t make my model correct.
Unfortunately, 90% of the people out there can’t even understand this.
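A concrete version of that spreadsheet (a toy of my own invention, every number made up): the “prediction” is just the author’s chosen weights applied to the inputs.

```python
# Toy "stock market model" (hypothetical): the forecast is nothing more
# than the author's opinions, expressed as weights, times the inputs.
ASSUMED_WEIGHTS = {"consumer_sentiment": 0.5, "gdp_growth": 2.0, "pe_ratio": -0.1}

def predict_market_move(indicators):
    """Predicted % market move, determined entirely by ASSUMED_WEIGHTS."""
    return sum(ASSUMED_WEIGHTS[name] * value for name, value in indicators.items())

move = predict_market_move({"consumer_sentiment": 4.0, "gdp_growth": 2.5, "pe_ratio": 20.0})
print(move)  # 5.0 with these made-up numbers; flip a weight's sign and the market "crashes"
```

Nothing in the machine knows anything about markets; it only knows the weights someone chose to believe.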
I find the idea that computers must produce the right results if you tell them the laws of physics to be worryingly naïve and quite wrong.
Has anyone ever used Microsoft Word? Have you ever pasted a table from somewhere else into Word and got an unexpected result? Word is a computer program written by computer programmers. Adding F=ma to its knowledge base will not improve your table pasting experiences.
Have you ever looked at the list of bugs associated with a program? Programs of any size have thousands of known bugs. Who lists these for the climate models? Programs don’t always crash when they come across a bug. Often they produce entirely plausible results.
http://en.wikipedia.org/wiki/List_of_software_bugs
Have you ever seen the pictures of the patterns caused in the data due to rounding errors? These patterns are being built into your model results.
Computer programs have several relevant causes for concern, including:
Rounding error and precision;
The fact that they will contain bugs (Mariner 1, anybody?);
Misunderstandings about what a variable actually means and what its units are;
The fact that you can’t test every eventuality;
Assumptions about what needs calculating and when.
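The rounding and units items on that list are abstract until you see them; here is a generic Python illustration (nothing to do with any particular climate code):

```python
# Rounding error: 0.1 has no exact binary representation, so ten
# additions of it drift away from the "obvious" answer of 1.0.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)   # False
print(total)          # 0.9999999999999999

# Unit misunderstanding: nothing in the language stops you from
# adding metres to feet; the result looks perfectly plausible.
altitude_m = 100.0
climb_ft = 50.0
print(altitude_m + climb_ft)  # 150.0 -- plausible-looking nonsense
```

Neither of these crashes the program; both quietly contaminate the output.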
This is before you start fudging the science by adding in parameters because you decide that will be accurate enough.
I speak as someone who has worked continuously at or studied in computing or digital electronics since my school started building the Wireless World Digital Computer 1967.
There was a hilarious post on this issue on JoNova – http://joannenova.com.au/2013/07/oops-same-climate-models-produce-different-results-on-different-computers/
I worked for about 28 years in IT – from junior programmer to senior systems analyst. The programmers had a saying/adage: “The only programs without bugs are trivial programs.”
When I was getting my Bsc (in Computer Science) I took a course on computer simulation. One’s grade for the course was based entirely on a simulation project. If your simulation was completely wrong, you got an F and failed the course. You would NOT be allowed to take the course 18 times in a row, especially if you got it wrong each and every time.
These so-called AGW “models” are farces – they can make a lot of money; but they are WRONG.
Gavin Schmidt has often said that the CO2 sensitivity is not programmed into the model, but it is an emergent property.
Most of the faithful seem to believe this when the question comes up.
When I used the individual outputs of GISS Model E, the first model that Gavin Schmidt worked on at GISS, the temperature impact of rising GHGs followed a simple ln(CO2) formula so closely it was clearly programmed in.
Yes, one of the features used for comparing models is the 4XCO2 sensitivity.
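The ln(CO2) check described above is straightforward to run against any published model output: regress the temperature series on log2 of concentration and see whether a single coefficient explains it. A sketch with synthetic stand-in data (assumed numbers; I am not reproducing actual Model E output here):

```python
import math

# Synthetic stand-in for a model's temperature response, generated from
# an assumed 3.0 C per doubling of CO2 (hypothetical data, not Model E).
co2 = [280.0 * 1.005 ** year for year in range(0, 140, 10)]
temps = [3.0 * math.log2(c / 280.0) for c in co2]

# Least-squares slope of temps against log2(CO2/280): if the response
# really follows ln(CO2), one coefficient fits almost perfectly.
xs = [math.log2(c / 280.0) for c in co2]
slope = sum(x * t for x, t in zip(xs, temps)) / sum(x * x for x in xs)
print(round(slope, 3))  # recovers 3.0: the output collapses onto one log curve
```

When real model output collapses onto a log curve this cleanly, it is fair to ask whether the sensitivity was emergent or effectively built in.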
Climate computer models have so many variables, and all the variables feed back on each other – but at what gains we do not really know, and sometimes we do not even know the signs of the feedbacks. So right away I knew that the models were just being tweaked to get the temperatures that they wanted to see.
Testing, testing, testing! The b*****d tester from hell is the guy you want. He’ll read the specification and set up a matrix of test conditions. Functional tests, reasonableness tests, limit tests, boundary tests, the list goes on. Developers and designers don’t get to test their own code in organisations that care about quality (sadly all too rare). Some places even have ‘black team testing’ where developers get to test each other’s projects, and get extra pay for finding faults. It’s amazing what drops out when you give stuff to a bright tester and say here, try and break this. In an earlier phase of my IT career, I was the guy you didn’t want to test your programs if you were the precious type. I worked on a number of telecoms products, and one call data analysis tool was chock full of timing and data aggregation errors so bad that it was eventually abandoned.
Perhaps we ought to just ask the GCM enthusiasts what their test criteria are?
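For readers who haven’t seen it done, here is what a minimal limit/boundary test matrix looks like, against a hypothetical function (names invented for illustration):

```python
# Hypothetical function under test: clamp a sensor reading to its valid range.
def clamp_reading(value, lo=0.0, hi=100.0):
    return max(lo, min(hi, value))

# Boundary and limit tests: exercise the edges, not just the happy path.
cases = [
    (-0.01, 0.0),     # just below the lower limit
    (0.0, 0.0),       # exactly on the lower boundary
    (50.0, 50.0),     # nominal value
    (100.0, 100.0),   # exactly on the upper boundary
    (100.01, 100.0),  # just above the upper limit
]
for value, expected in cases:
    assert clamp_reading(value) == expected, (value, expected)
print("all boundary cases pass")
```

Trivial here, but the same discipline applied to a million-line model is exactly what the question above is asking about.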
Has anybody seen an analysis of General Circulation models? How much error will be caused by a widely-spaced grid, how much by approximations used to model turbulence, to model clouds? Why do models still use a latitude-longitude grid (grid points close to each other in polar regions)?
I dare say that the modelers did not do any due diligence. Why should they? Are they responsible to anybody for anything?
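The polar-clustering point is simple geometry (a back-of-envelope sketch, not drawn from any model’s code): the east-west spacing of a latitude-longitude grid shrinks with the cosine of latitude.

```python
import math

# East-west distance between adjacent points of a 1-degree grid on a
# spherical Earth (radius ~6371 km), as a function of latitude.
EARTH_RADIUS_KM = 6371.0

def zonal_spacing_km(lat_deg, dlon_deg=1.0):
    return math.radians(dlon_deg) * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

print(round(zonal_spacing_km(0), 1))   # ~111.2 km at the equator
print(round(zonal_spacing_km(60), 1))  # ~55.6 km
print(round(zonal_spacing_km(85), 1))  # ~9.7 km: cells crowd together near the poles
```

Cells ten times narrower near the poles than at the equator force either tiny timesteps or special-case smoothing, and either choice is another assumption in the code.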
Testing sounds like an electronics term – one many people would never be able to fit into a job description. Off the top of your head, what if you were given the job to “test” one of the more complicated models being used on the web today, for instance SWPC or ISWA?
It takes days to get results, as seen in the changes to the barometric pressure readings here on Earth. And the results aren’t quite as much of a change, or the time periods do not line up. So you report this to the programmers and they ask you “what result do you want?”. Does that mean that you have found a flaw in the code? Or does it mean that something else is going on up there that they do not understand? So a meeting in the board room determines what the problem is, the programmers make modifications to the model, and you begin to test, test, test again.
What are you testing for? Is there a specification written to tell you what the result should be? What if somewhere in the code it uses 1400 Joules/s/m2 to describe the energy of the sun? Let’s say it changes – not much, but enough to change the way the model would produce a result.
Would you be able to find this flaw during the testing process? Of course this value is set in stone, or is it?
How would you tell if it changes? Would you be the one to make a formula to compensate for the flaw, or would it be decided at the board room?
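The worry about a buried solar constant can be made concrete with the standard zero-dimensional energy balance (a textbook toy, not any operational model): swap the 1400 W/m2 figure mentioned above for the measured ~1361 W/m2 and see how far the answer moves.

```python
# Zero-dimensional energy balance: effective planetary temperature
# T = (S * (1 - albedo) / (4 * sigma)) ** 0.25.  Textbook toy only.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.3             # assumed planetary albedo

def effective_temp_k(solar_constant_w_m2):
    return (solar_constant_w_m2 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

shift = effective_temp_k(1400.0) - effective_temp_k(1361.0)
print(round(shift, 1))  # ~1.8 K from one hard-coded number
```

A shift of nearly two degrees from a single constant is as large as the trends being argued over, and no amount of black-box testing would flag it as a “failure”.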
With due respect to Gene Roddenberry: Q suggested making a change to the gravitational constant of a planet in order to save it. The Continuum then allowed him back into the Q because he did something good. Maybe we need a Q to write the specifications for the models.
Why does anybody assume that the answers lie in the direction of intelligence, either human or artificial? No-one talks about artificial intuition.
For software that is critically important as regards functionality the specifications must be 100% testable. Meaning… what you asked for you got and nothing else. The performance bounds are extremely strict.
So is it safe to use? Safety is about not causing harm. Harm relates to injury/death including loss and damage to property. Harm can be caused by a system performing unexpectedly…and in various ways.
None of the above has anything to do with experimental software systems (Models and/or Windows). However, the use of such systems has had vast influence. That influence appears to me to be causing safety issues as regards major financial loss, property damage and likely sickness and death. Windows has a health warning waiver.
Many of us have spent about 30 years refining software system standards, so that procurement is eased and the end result is usable and safe. We knew that computer games and similar were being knife and forked for years. What we didn’t realise was that such poor software would ultimately be used in monitoring and predicting planet catastrophe. And never forget the Windows!!
Guys/Gals: Leif Svalgaard did correctly predict solar cycle 24 (SSN ~70, I think). However, this was after Hathaway and his pals at NASA modified theirs downwards from ~170 SSN or so many, many, many times. So that’s about the only prediction he’s got right so far LOL
No, my prediction [SSN in the range 67-83] was made in 2004, before any others.
There’s no shame in issuing updated forecasts. Do you plan your activities for tomorrow based on a forecast issued six days ago? Forecasts can have two goals:
1) set expectations for future conditions.
2) test our knowledge of the physical properties being analyzed.
Both goals are useful.
In software development, someone creates a “use case” which you then give to the software developers who write the software that will satisfy that use case. For climate science models in particular, the use case was and still is (coming from the UN) “Human produced CO2 causes global warming”. The climate scientists gave that to the software developers. So it’s no wonder that the models always show it when the models were written to show it.
lsvalgaard
June 7, 2015 at 5:33 am
If at first you fail, you try again. The failure may tell you what the correct values should be… so learn from the failures.
Well, first one has to admit that they’ve failed.
I believe the Climate Modelers have already failed at that.
Some experts who assess the performance of climate models are not expecting the answer to come from bigger, faster computers.
“It is becoming clearer and clearer that the current strategy of incremental improvements of climate models is failing to produce a qualitative change in our ability to describe the climate system, also because the gap between the simulation and the understanding of the climate system is widening (Held 2005, Lucarini 2008a). Therefore, the pursuit of a “quantum leap” in climate modeling – which definitely requires new scientific ideas rather than just faster supercomputers – is becoming more and more of a key issue in the climate community.”
From Modeling Complexity: the case of Climate Science by Valerio Lucarini
http://arxiv.org/ftp/arxiv/papers/1106/1106.1265.pdf
See also: https://rclutz.wordpress.com/2015/03/25/climate-thinking-out-of-the-box/
And of course with computers there are important issues with floating-point arithmetic. Specifically rounding errors, representation errors, and the propagation of such throughout the models.
A good discussion over at The Resilient Earth:
“Climate Models’ “Basic Physics” Falls Short of Real Science”
http://theresilientearth.com/?q=content/climate-models-%E2%80%9Cbasic-physics%E2%80%9D-falls-short-real-science
“Even if the data used to feed a model was totally accurate, error would still arise. This is because of the nature of computers themselves. Computers represent real numbers by approximations called floating-point numbers. In nature, there are no artificial restrictions on the values of quantities but in computers, a value is represented by a limited number of digits. This causes two types of error; representational error and roundoff error.”
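Both error types in that quote are easy to reproduce, and roundoff is order-dependent, which is one reason the same model source can give different answers on different machines or compilers (generic Python illustration):

```python
# Representational error: 0.1 and 0.2 are stored as the nearest
# binary fractions, so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Roundoff error is order-dependent: floating-point addition is not
# associative, so reordering a sum changes the result.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is swallowed by the large terms
```

In a simulation that iterates millions of timesteps, these tiny discrepancies are fed back into the next step and can grow rather than cancel.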
Far more technical discussions are available:
“What Every Computer Scientist Should Know About Floating-Point Arithmetic”
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
“The idea that IEEE 754 prescribes precisely the result a given program must deliver is nonetheless appealing. Many programmers like to believe that they can understand the behavior of a program and prove that it will work correctly without reference to the compiler that compiles it or the computer that runs it. In many ways, supporting this belief is a worthwhile goal for the designers of computer systems and programming languages. Unfortunately, when it comes to floating-point arithmetic, the goal is virtually impossible to achieve. The authors of the IEEE standards knew that, and they didn’t attempt to achieve it. As a result, despite nearly universal conformance to (most of) the IEEE 754 standard throughout the computer industry, programmers of portable software must continue to cope with unpredictable floating-point arithmetic.”