There Are Models And There Are Models

Guest Post by Willis Eschenbach

I’m 74, and I’ve been programming computers nearly as long as anyone alive. 

When I was 15, I’d been reading about computers in pulp science fiction magazines like Amazing Stories, Analog, and Galaxy for a while. I wanted one so badly. Why? I figured it could do my homework for me. Hey, I was 15, wad’ja expect?

I was always into math; it came easily to me. In 1963, the summer after my junior year in high school, nearly sixty years ago now, I was one of the kids selected from all over the US to participate in the National Science Foundation summer school in mathematics. It was held up in Corvallis, Oregon, at Oregon State University.

It was a wonderful time. I got to study math with a bunch of kids my age who were as excited as I was about math. Bizarrely, one of the other students turned out to be a second cousin of mine I’d never even heard of. Seems math runs in the family. My older brother is a genius mathematician, inventor of the first civilian version of the GPS. What a curious world.

The best news about the summer school was, in addition to the math classes, marvel of marvels, they taught us about computers … and they had a real live one that we could write programs for!

They started out by having us design and build logic circuits using wires, relays, the real-world stuff. They were for things like AND gates, OR gates, and flip-flops. Great fun!

Then they introduced us to Algol. Algol is a long-dead computer language, designed in 1958, but it was a standard for a long time. It was very similar to Fortran, but an improvement in that it used less memory.

Once we had learned something about Algol, they took us to see the computer. It was a huge old CDC 3300, standing about as high as a person’s chest and taking up a good chunk of a small room. The back of it looked like this.

It had a memory composed of small ring-shaped magnets with wires running through them, like the photo below. The computer energized a combination of the wires to “flip” the magnetic state of each of the small rings. This allowed each small ring to represent a binary 1 or a 0. 

How much memory did it have? A whacking great 768 kilobytes. Not gigabytes. Not megabytes. Kilobytes. That’s one ten-thousandth of the memory of the ten-year-old Mac I’m writing this on.

It was programmed using Hollerith punch cards. They didn’t let us anywhere near the actual computer, of course. We sat at the card punch machines and typed in our program. Here’s a punch card, 7 3/8 inches wide by 3 1/4 inches high by 0.007 inches thick (187 x 83 x 0.18 mm).

The program would end up as a stack of cards with holes punched in them, usually 25-50 cards or so. I’d give my stack to the instructors, and a couple of days later I’d get a note saying “Problem on card 11”. So I’d rewrite card 11, resubmit them, and get a note saying “Problem on card 19” … debugging a program written on punch cards was a slooow process, I can assure you.

And I loved it. It was amazing. My first program was the “Sieve of Eratosthenes”, and I was over the moon when it finally compiled and ran. I was well and truly hooked, and I never looked back.
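For the curious, here’s more or less what that first program looks like when I write it in R today. This is a quick sketch from memory, not the original Algol, which is long gone:

    # Sieve of Eratosthenes: find all primes up to n (assumes n >= 4)
    sieve <- function(n) {
      is_prime <- rep(TRUE, n)
      is_prime[1] <- FALSE                      # 1 is not prime
      for (p in 2:floor(sqrt(n))) {
        if (is_prime[p]) {
          # cross off every multiple of p, starting at p^2
          is_prime[seq(p * p, n, by = p)] <- FALSE
        }
      }
      which(is_prime)
    }

    sieve(50)   # 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47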

The rest of that summer I worked as a bicycle messenger in San Francisco, riding a one-speed bike up and down the hills delivering blueprints. I gave all the money I made to our mom to help support the family. But I couldn’t get the computer out of my mind.

Ten years later, after graduating from high school and then dropping out of college after one year, I went back to college specifically so I could study computers. I enrolled in Laney College in Oakland. It was a great school, about 80% black, 10% Hispanic, and the rest a mixed bag of melanin-deficient folks. (I’m told that nowadays the politically-correct term is “melanin-challenged”, to avoid offending anyone.) The Laney College Computer Department had a Datapoint 2200 computer, the first desktop computer.

It had only 8 kilobytes of memory … but the advantage was that you could program it directly. The disadvantage was that only one student could work on it at any time. However, the computer teacher saw my love of the machine, so he gave me a key to the computer room so I could come in before or after hours and program to my heart’s content. I spent every spare hour there. It used a language called Databus, my second computer language.

The first program I wrote for this computer? You’ll laugh. It was a test to see if there was “precognition”. You know, seeing the future. In my first version, I pressed a key from 0 to 9. Then the computer picked a random number and recorded whether I was right or not.

Finding I didn’t have precognition, I re-wrote the program. In version 2, the computer picked the number before, rather than after, I made the selection. No precognition needed. Guess what?

No better than random chance. And sadly, that one-semester course was all that Laney College offered. That’s the extent of my formal computer education. The rest I taught myself, year after year, language after language, concept after concept, program after program.
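(For the record, the whole precognition experiment boils down to a few lines. Here’s version 2 sketched in R, with random key presses standing in for mine; the hit rate comes out at chance, just as it did on the Datapoint:)

    set.seed(42)
    n      <- 1000
    target <- sample(0:9, n, replace = TRUE)   # the computer picks first ...
    guess  <- sample(0:9, n, replace = TRUE)   # ... then the "prediction" is made
    hits   <- sum(guess == target)

    hits / n                        # observed hit rate; pure chance is 0.10
    binom.test(hits, n, p = 0.10)   # no evidence of seeing the present, either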

Ten years after that, I bought the first computer I ever owned — the Radio Shack TRS-80, AKA the “Trash Eighty”. It was the first notebook-style computer. I took that sucker all over the world. I wrote endless programs on it, including marine celestial navigation programs that I used to navigate by the stars between islands in the South Pacific. It was also my first introduction to Basic, my third computer language.

And by then IBM had released the IBM PC, the first personal computer. When I returned to the US I bought one. I learned my fourth computer language, CPM. I wrote all kinds of programs for it. But then a couple years later Apple came out with the Macintosh. I bought one of those as well, because of the mouse and the art and music programs. I figured I’d use the Mac for creating my art and my music and such, and the PC for serious work.

But after a year or so, I found I was using nothing but the Mac, and there was a quarter-inch of dust on my IBM PC. So I traded the PC for a piano, the very piano here in our house that I played last night for my 19-month-old granddaughter, and I never looked back at the IBM side of computing.

I taught myself C and C++ when I needed speed to run blackjack simulations … see, I’d learned to play professional blackjack along the way, counting cards. And when my player friends told me how much it cost for them to test their new betting and counting systems, I wrote a blackjack simulation program to test the new ideas. You need to run about a hundred thousand hands for a solid result. That took several days in Basic, but in C, I’d start the run at night, and when I got up the next morning, the run would be done. I charged $100 per test, and I thought “This is what I wanted a computer for … to make me a hundred bucks a night while I’m asleep.”
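Why a hundred thousand hands? Because the edge per hand is tiny compared to the swings, so the signal only beats the noise in a big sample. Here’s a toy sketch in R making that point, with a coin-flip stand-in for blackjack and an invented 1% edge:

    set.seed(1)
    edge <- 0.01                    # assumed player edge per hand, made up for illustration
    play <- function(n) {
      # each hand pays +1 or -1 unit; P(win) = 0.5 + edge/2
      results <- sample(c(1, -1), n, replace = TRUE,
                        prob = c(0.5 + edge / 2, 0.5 - edge / 2))
      mean(results)                 # the edge as estimated from n hands
    }
    sapply(c(1e3, 1e4, 1e5), play)  # estimates converge toward 0.01 ...
    sqrt(1 / c(1e3, 1e4, 1e5))      # ... as the standard error shrinks like 1/sqrt(n)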

Since then, I’ve never been without a computer. I’ve written literally thousands and thousands of programs. On my current computer, a ten-year-old Macbook Pro, a quick check shows that there are well over 4,000 programs I’ve written. I’ve written programs in Algol, Databus, 68000 machine language, Basic, C/C++, Hypertalk, Forth, Logo, Lisp, Mathematica (3 languages), Vectorscript, Pascal, VBA, the Stella computer modeling language, and these days, R.

I had the immense good fortune to be directed to R by Steve McIntyre of ClimateAudit. It’s the best language I’ve ever used—free, cross-platform, fast, with a killer user interface and free “packages” to do just about anything you can name. If you do any serious programming, I can’t recommend it enough.

Oh, yeah, somewhere in there I spent a year as the Service Manager for an Apple Dealership. As you might guess given my checkered history, it wasn’t in some logical location … it was in downtown Suva, in Fiji. There I fixed a lot of computers and I learned immense patience dealing with good folks who truly thought that the CD tray that came out of the front of their computer when they did something by accident was a coffee cup holder … oh, and I also installed the Macintosh hardware for the Fiji Government Printers and trained the employees how to use Photoshop. I also taught two semesters of Computers 101 at the Fiji Institute of Technology.

I bring all of this up to let you know that I’m far, far from being a novice, a beginner, or even a journeyman programmer. I was working with “computer-based evolution” to try to analyze the stock market before most folks had even heard of it. I’m a master of the art, able to do things like write “hooks” into Excel that let Excel transparently call a separate program in C for its wicked-fast speed, and then return the answer to a cell in Excel …

Now, folks who’ve read my work know that I am far from enamored of computer climate models. I’ve been asked “What do you have against computer models?” and “How can you not trust models, we use them for everything?”

Well, based on a lifetime’s experience in the field, I can assure you of a few things about computer climate models and computer models in general. Here’s the short course.

A computer model is nothing more than a physical realization of the beliefs, understandings, wrong ideas, and misunderstandings of whoever wrote the model. Therefore, the results it produces are going to support, bear out, and instantiate the programmer’s beliefs, understandings, wrong ideas, and misunderstandings. All that the computer does is make those under- and misunder-standings look official and reasonable. Oh, and make mistakes really, really fast. Been there, done that.

Computer climate models are members of a particular class of models called “iterative” computer models. In this class of models, the output of one timestep is fed back into the computer as the input of the next timestep. Members of this class of models are notoriously cranky, unstable, and prone to internal oscillations and generally falling off the perch. They usually need to be artificially “fenced in” in some sense to keep them from spiraling out of control.
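You can see the problem in miniature with a three-line iterative “model”. Here’s a sketch in R, using a made-up feedback gain just over unity:

    steps    <- 100
    x        <- numeric(steps)
    x[1]     <- 1
    feedback <- 1.1                  # hypothetical gain, just over unity
    for (t in 2:steps) {
      x[t] <- feedback * x[t - 1]    # the output of one step is the input of the next
      # x[t] <- min(x[t], 100)       # the kind of "fence" modelers add to keep it on the perch
    }
    tail(x, 1)                       # ~12,527 ... off the perch and into the stratosphere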

As anyone who has ever tried to model say the stock market can tell you, a model which can reproduce the past absolutely flawlessly may, and in fact very likely will, give totally incorrect predictions of the future. Been there, done that too. As the brokerage advertisements in the US are required to say, “Past performance is no guarantee of future success”.

This means that the fact that a climate model can hindcast the past climate perfectly does NOT mean that it is an accurate representation of reality. And in particular, it does NOT mean it can accurately predict the future.
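The point is easy to demonstrate. Here’s a sketch in R, with invented numbers: give a model enough tunable parameters and it will hindcast “the past” nearly perfectly, and then forecast pure nonsense:

    set.seed(7)
    t    <- 1:20
    past <- 0.02 * t + rnorm(20, sd = 0.1)        # a gentle trend plus noise
    fit  <- lm(past ~ poly(t, 15))                # 15 tunable parameters
    summary(fit)$r.squared                        # near 1 -- a "skillful" hindcast
    predict(fit, newdata = data.frame(t = 21:25)) # the "forecast": wildly off the rails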

Chaotic systems like weather and climate are notoriously difficult to model, even in the short term. That’s why projections of a cyclone’s future path over, say, the next 48 hours are in the shape of a cone and not a straight line.

There is an entire branch of computer science called “V&V”, which stands for verification and validation. It’s how you can be assured that your software is up to the task it was designed for. Here’s a description from the web:

What is software verification and validation (V&V)?

Verification

820.3(a) Verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.

“Documented procedures, performed in the user environment, for obtaining, recording, and interpreting the results required to establish that predetermined specifications have been met” (AAMI).

Validation

820.3(z) Validation means confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use can be consistently fulfilled.

Process Validation means establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications.

Design Validation means establishing by objective evidence that device specifications conform with user needs and intended use(s).

“Documented procedure for obtaining, recording, and interpreting the results required to establish that a process will consistently yield product complying with predetermined specifications” (AAMI).

Further V&V information here.

Your average elevator control software has been subjected to more V&V than the computer climate models. And unless a computer model’s software has been subjected to extensive and rigorous V&V, the fact that the model says that something happens in modelworld is NOT evidence that it actually happens in the real world … and even then, as they say, “Excrement occurs”. We lost a Mars probe because someone didn’t convert a single number from Imperial measurements to metric … and you can bet that JPL subjects their programs to extensive and rigorous V&V.

Computer modelers, myself included at times, are all subject to a nearly irresistible desire to mistake Modelworld for the real world. They say things like “We’ve determined that climate phenomenon X is caused by forcing Y”. But a true statement would be “We’ve determined that in our model, the modeled climate phenomenon X is caused by our modeled forcing Y”. Unfortunately, the modelers are not the only ones fooled in this process.

The more tunable parameters a model has, the less likely it is to accurately represent reality. Climate models have dozens of tunable parameters. Here are 25 of them; there are plenty more.

What’s wrong with parameters in a model? Here’s an oft-repeated story about the famous physicist Freeman Dyson getting schooled on the subject by the even more famous Enrico Fermi …

By the spring of 1953, after heroic efforts, we had plotted theoretical graphs of meson–proton scattering. We joyfully observed that our calculated numbers agreed pretty well with Fermi’s measured numbers. So I made an appointment to meet with Fermi and show him our results. Proudly, I rode the Greyhound bus from Ithaca to Chicago with a package of our theoretical graphs to show to Fermi.

When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our new-born baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”

I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism. He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.
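Von Neumann’s quip is easy to check for yourself. A sketch in R: with as many parameters as data points, you can fit pure random noise exactly, which “proves” precisely nothing:

    set.seed(4)
    x     <- 1:5
    noise <- rnorm(5)                # five "measurements" of nothing at all
    fit   <- lm(noise ~ poly(x, 4))  # five parameters: intercept plus four coefficients
    max(abs(residuals(fit)))         # ~0: a "perfect" fit ... to randomness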

The climate is arguably the most complex system that humans have tried to model. It has no less than six major subsystems—the ocean, atmosphere, lithosphere, cryosphere, biosphere, and electrosphere. None of these subsystems is well understood on its own, and we have only spotty, gap-filled rough measurements of each of them. Each of them has its own internal cycles, mechanisms, phenomena, resonances, and feedbacks. Each one of the subsystems interacts with every one of the others. There are important phenomena occurring at all time scales from nanoseconds to millions of years, and at all spatial scales from nanometers to planet-wide. Finally, there are both internal and external forcings of unknown extent and effect. For example, how does the solar wind affect the biosphere? Not only that, but we’ve only been at the project for a few decades. Our models are … well … to be generous I’d call them Tinkertoy representations of real-world complexity.

Many runs of climate models end up on the cutting room floor because they don’t agree with the aforesaid programmer’s beliefs, understandings, wrong ideas, and misunderstandings. They will only show us the results of the model runs that they agree with, not the results from the runs where the model either went off the rails or simply gave an inconvenient result. Here are two thousand runs from 414 versions of a model running first a control and then a doubled-CO2 simulation. You can see that many of the results go way out of bounds.

As a result of all of these considerations, anyone who thinks that the climate models can “prove” or “establish” or “verify” something that happened five hundred years ago or a hundred years from now is living in a fool’s paradise. These models are in no way up to that task. They may offer us insights, or make us consider new ideas, but they can only “prove” things about what happens in modelworld, not the real world.

To be clear, having written dozens of models myself, I’m not against models. I’ve written and used them my whole life. However, there are models, and then there are models. Some models have been tested and subjected to extensive V&V and their output has been compared to the real world and found to be very accurate. So we use them to navigate interplanetary probes and design new aircraft wings and the like.

Climate models, sadly, are not in that class of models. Heck, if they were, we’d only need one of them, instead of the dozens that exist today and that all give us different answers … leading to the ultimate in modeler hubris, the idea that averaging those dozens of models will get rid of the “noise” and leave only solid results behind.
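That averaging hope is easy to test. A sketch in R, with invented numbers: averaging many model results removes their random scatter just fine, but it does nothing at all about an error they share:

    set.seed(99)
    truth  <- 1.0                                 # the real-world answer (made up)
    bias   <- 0.5                                 # an error common to all the models (made up)
    models <- truth + bias + rnorm(30, sd = 0.3)  # thirty model results
    mean(models)                                  # ~1.5: the scatter averages away ...
    mean(models) - truth                          # ... but the shared bias (~0.5) remains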

Finally, as a lifelong computer programmer, I couldn’t disagree more with the claim that “All models are wrong but some are useful.” Consider the CFD models that the Boeing engineers use to design wings on jumbo jets or the models that run our elevators. Are you going to tell me with a straight face that those models are wrong? If you truly believed that, you’d never fly or get on an elevator again. Sure, they’re not exact reproductions of reality, that’s what “model” means … but they are right enough to be depended on in life-and-death situations.

Now, let me be clear on this question. While models that are right are absolutely useful, it certainly is also possible for a model that is wrong to be useful.

But for a model that is wrong to be useful, we absolutely need to understand WHY it is wrong. Once we know where it went wrong we can fix the mistake. But with the complex iterative climate models with dozens of parameters required, where the output of one cycle is used as the input to the next cycle, and where a hundred-year run with a half-hour timestep involves 1.75 million steps, determining where a climate model went off the track is nearly impossible. Was it an error in the parameter that specifies the ice temperature at 10,000 feet elevation? Was it an error in the parameter that limits the formation of melt ponds on sea ice to only certain months? There’s no way to tell, so there’s no way to learn from our mistakes.
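For scale, here’s the arithmetic on the step count, plus an illustration of what compounding does over a run that long. The per-step error is a number I invented purely to make the point:

    steps <- 100 * 365.25 * 48   # a century at a half-hour timestep: 1,753,200 steps
    err   <- 1e-7                # hypothetical relative error per step
    (1 + err)^steps              # ~1.19: a 19% drift from almost nothing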

Next, all of these models are “tuned” to represent the past slow warming trend. And generally, they do it well … because the various parameters have been adjusted and the model changed over time until they do so. So it’s not a surprise that they can do well at that job … at least on the parts of the past that they’ve been tuned to reproduce.

But then, the modelers will pull out the modeled “anthropogenic forcings” like CO2, and proudly proclaim that since the model no longer can reproduce the past gradual warming, that demonstrates that the anthropogenic forcings are the cause of the warming … I assume you can see the problem with that claim.

In addition, the grid size of the computer models is far larger than important climate phenomena like thunderstorms, dust devils, and tornadoes. If the climate model is wrong, is it because it doesn’t contain those phenomena? I say yes … computer climate modelers say nothing.

Heck, we don’t even know if the Navier-Stokes fluid dynamics equations as they are used in the climate models converge to the right answer, and near as I can tell, there’s no way to determine that.

To close the circle, let me return to where I started—a computer model is nothing more than my ideas made solid. That’s it. That’s all.

So if I think CO2 is the secret control knob for the global temperature, the output of any model I create will reflect and verify that assumption.

But if I think (as I do) that the temperature is kept within narrow bounds by emergent phenomena, then the output of my new model will reflect and verify that assumption.

Now, would the outputs of either of those very different models be “evidence” about the real world?

Not on this planet.

And that is the short list of things that are wrong with computer models … there’s more, but as Pierre said, “the margins of this page are too small to contain them” …

My very best to everyone, stay safe in these curious times,

w.

PS—When you comment, quote what you’re talking about. If you don’t, misunderstandings multiply.

H/T to Wim Röst for suggesting I write up what started as a comment on my last post.

492 Comments
Brian Jackson
March 12, 2021 1:03 pm

With all that wonderful computer experience, you failed to mention Unix, Linux, or any of the other variants.

Rud Istvan
Reply to  Brian Jackson
March 12, 2021 1:43 pm

As you may know, Brian, Linux and Unix are operating systems, not programming languages. Sort of like WE’s MacOS on which he now runs the R programming language. An attempted derogation FAIL?

Brian Jackson
Reply to  Rud Istvan
March 12, 2021 2:22 pm

Eschenbach thinks MS-DOS and CP/M are programming languages.

Brian Jackson
Reply to  Brian Jackson
March 12, 2021 2:25 pm

If you stretch the definition of “programming languages”, then shell scripting, awk, etc., which all come with Linux, allow one to “program” 😀 😀 😀

Brian Jackson
Reply to  Brian Jackson
March 12, 2021 2:26 pm

How come Wills didn’t mention java?

Brian Jackson
Reply to  Willis Eschenbach
March 12, 2021 5:20 pm

” I call that a “language”.
Don’t like it?
Sorry, don’t care. Brian, all you want to do is come along and pour meaningless vitriol on what I’ve written.”
.
.
If you wish to do “science,” you need to use precise terminology. Obviously you “don’t care” if you misuse the terms. CPM is not a language.

Based on this posting, and your comments, you qualify as a computer “hack” (and not the derogatory meaning of that term.) You “think” you know about computers, but you don’t. This whole piece is a poor attempt to bolster a CV lacking in real-world experience writing software.
.
Your skill set is equivalent to that of a “script kiddie.”

MarkW
Reply to  Brian Jackson
March 12, 2021 8:51 pm

There are words for people who spend all their time whining about minutia while ignoring the bigger issues.
That word isn’t expert.

As usual, Brian tries to force everyone to go down the rabbit hole of his choice.

Tom Abbott
Reply to  MarkW
March 14, 2021 10:29 am

“There are words for people who spend all their time whining about minutia while ignoring the bigger issues.
That word isn’t expert.”

One of those descriptive words is “troublemaker”.

I don’t think this particular troublemaker is whining about anything; he is deliberately attacking the character of Willis with his nitpicking.

It’s a pathetic performance. It probably won’t end well for troublemakers.

fred250
Reply to  Brian Jackson
March 14, 2021 1:31 am

“Your skill set is equivalent to.. blah blah…”

And your knowledge is equivalent to that of a limp lettuce leaf. !

Willis is a few dozen MAGNITUDES above anything you would ever be capable of.

Carlo, Monte
Reply to  Brian Jackson
March 12, 2021 4:04 pm

Why do you have such a huge ego?

Brian Jackson
Reply to  Carlo, Monte
March 12, 2021 5:23 pm

No one has an ego bigger than Willis’s

Rud Istvan
Reply to  Brian Jackson
March 12, 2021 5:49 pm

Nope. Based on this thread, you do.

Reply to  Rud Istvan
March 12, 2021 7:47 pm

There seem to be lots of big egos here today. So, if I may make bold a bit myself, I’d like to ask why a person has to buy a $700 Mitutoyo digital caliper, and pay another $300 or so for the data cable and software for it, instead of being able to buy a $10 cable and maybe another $30 for software to use with his $30 digital caliper with the unsupported data port?

With so many coders here, surely someone has an explanation?

Rud Istvan
Reply to  otropogo
March 12, 2021 8:25 pm

You have obviously never reloaded competition precision ammo. I have, for over 40 years.

Reply to  Rud Istvan
March 13, 2021 9:26 pm

Why “obviously”? And what does your comment have to do with my post or the questions I posed? Don’t be shy. If you have some information to share, shoot!

I don’t even understand what you mean by ‘reloaded competition precision ammo’. I’ve reloaded Lake City 30-06 brass, and lots of it. That’s competition brass. Does reloading it make it “precision” ammo? What do you mean by ‘precision’ anyway? I use ‘competition’ dies, and weigh each charge to a tenth of a grain, using a Bonanza Model “N” scale. I chronograph all my loads before taking them to the field, and have worked up some pretty hot loads without blowing up a single firearm in 35 years of reloading.

But my loads are intended to stop a charging grizzly primarily, and secondarily to cleanly take an elk, mulie, or whitetail out to 400 yards. Maybe that’s not “competition precision” in your book, but the precision I strive for can mean the difference between being dead or crippled/disfigured for life or walking away with nothing more than a good scare.

“Competition ammo” is certainly more precise, under ideal conditions, than what I load. But I wouldn’t want to take on a bear, even a black bear, with it.

I’m sure you know all that, if only from shooting buddies who hunt dangerous game or in their vicinity. So what is your point?

Is it that non-competitive shooters shouldn’t/needn’t reload? Or if they reload they should just use a dipper for the powder and only employ once-fired brass?

Given the high praise Willis has bestowed on your pronouncements above:

With respect to 1), what Rud Istvan said. When he talks, I listen.

I am surprised and disappointed…

Rud Istvan
Reply to  otropogo
March 14, 2021 5:12 pm

Well, I have never shot a charging grizzly at 40 yards. Hope I never have to.
But shot plenty of just 10x rings at 200 meter rifle multiple positions, plenty of 10x rings at 100 meters two hand pistol gun, and lots and lots of +96 trap, skeet, and sporting clays 4x rounds (each box of shells is 25, so a standard comp round is 4 boxes)—all with handloads.

Reply to  Rud Istvan
March 14, 2021 8:06 pm

Neither have I. But I’ve read and thought enough about this contingency to know better than to shoot a grizzly at either 400 yards (I had a magnificent blond one broadside in my sights once, and a tag in my pocket) or at 40 – better wait until the distance is 10 yards, for a better chance to break its shoulder. But you didn’t answer my question above – unless this post is meant to be an apology…

BTW – I would use the same strategy with a charging Black Bear, except at 40 yards it might be worth bluffing it by moving towards it aggressively. This worked for me once when I was armed with only a crossbow and a knife, and had no bear tag.

Reply to  otropogo
March 13, 2021 5:07 am

That’s not a question to ask of “coders.” However, you are more likely to get the correct answer from some of us than you will from any modern “economist.”

Simply put, not enough people want what you want to make it worthwhile for someone to go to the effort of supplying it.

Reply to  writing observer
March 13, 2021 8:23 pm

So it’s worthwhile for factories in China to put a serial port on millions of cheap digital calipers over a period of 20 or 30 years although nobody in the world has any interest in using them?

My guess is that in China and other ‘poor’ countries you can buy the cable and the software to utilize these ports, and the manufacturers have marketing agreements that keep those elements out of the hands of North Americans so they’ll pay through the nose for the pricey Japanese or American products.

Some of my cheap Chinese calipers even came with a drawing of the cable in the instruction brochure. I believe Hornady sold one, and simply said “no” when I asked to buy a cable. I also tried Faxing the Chinese factory after buying my first one, but got no answer.

But that still wouldn’t explain why nobody has offered a simple program and maybe a cable schematic so the tens of thousands of these calipers sold here can be used efficiently.

20 years or so ago I read of a group of American archeology students who circumvented the high cost of the Mitutoyo cable and software by coming up with their own home brew of the latter. So this is not a recent development, but it seems to be getting worse.

Komeradecube
Reply to  otropogo
March 14, 2021 8:13 pm
Reply to  Komeradecube
March 15, 2021 9:00 pm

Well, the only cable they offer at $145 for their Lite software is for a Mitutoyo caliper as is their $241 RS232 to USB adaptor.

So – overkill and overpriced, and won’t work on my calipers. I think I’ll have better chance of success by contacting the guy at robotroom.

Thanks for your help.

MarkW
Reply to  otropogo
March 13, 2021 11:32 am

Some people need the extra precision that can only be provided by the more expensive instrument.
It’s obvious that you have no need for that kind of precision.
For myself, I have no need for the kind of precision that the $30 caliper provides, so I haven’t bought one. However I don’t go around wondering why people feel the need to buy calipers in the first place.

Reply to  MarkW
March 13, 2021 8:36 pm

I find it a bit arrogant for someone who can’t imagine why one would need a caliper to be passing judgments on their precision.

But here’s a freebie – I’ve used mine for measuring various dimensions of brass rifle and pistol cartridges, clutch flywheels, brake discs and pads and shims. I do have micrometers for more precise measurements, but they’re slower to use, cost more, and are overkill for the above purposes.

As for precision, I’ve always owned at least two calipers at a time, and usually three or more. It’s part of the joy of owning inexpensive measuring instruments that you can own several and check them against each other for accuracy.

MarkW
Reply to  Brian Jackson
March 12, 2021 8:52 pm

So refusing to believe as you do proves that Willis has a big ego?

Really?

I would say that declaring that everyone must accept your opinion as the last word on everything indicates an ego big enough to swallow light.

fred250
Reply to  MarkW
March 12, 2021 9:29 pm

Brian’s opinions are based on anti-knowledge,

His egotistic blah is a facade to hide his deep-seated mental insecurities and ignorance.

Carlo, Monte
Reply to  MarkW
March 12, 2021 9:54 pm

Classic projection on the part of Brainiac.

fred250
Reply to  Brian Jackson
March 12, 2021 9:27 pm

At least Willis has something to back it up.

You are an ABJECT FAILURE at everything you have posted. !

Your egotistic blathering is a facade based on trying to hide your IGNORANCE..

…. but it’s not working

Lrp
Reply to  Brian Jackson
March 13, 2021 10:29 am

There’s a big difference between you and Willis, with the conclusion that you come across as a complete jackass. But to explain: Willis appears to be a polymath, self-taught and capable of original thoughts and structured thinking. His analyses are fact-based and challenge the inbuilt flaws of climate models. On the other hand, you just parrot what your gods say without showing a shred of understanding or thinking.

Tom Abbott
Reply to  Carlo, Monte
March 14, 2021 10:31 am

I think he has an anger problem.

Carlo, Monte
Reply to  Brian Jackson
March 12, 2021 9:52 pm

Tell us [TINU] what your hat size is, Brainiac.

Reply to  Brian Jackson
March 12, 2021 2:04 pm

To ask for Java would have been correct 😀

MarkW
Reply to  Brian Jackson
March 12, 2021 2:29 pm

He’s mentioning the computer languages he’s worked on.
Why should he spend time talking about operating systems he may not have used?
Or are you just trying to show the rest of us how bright you are?

Brian Jackson
Reply to  MarkW
March 12, 2021 3:01 pm

He posted: ” I learned my fourth computer language, CPM”
.
CPM is not a computer language.

Brian Jackson
Reply to  Brian Jackson
March 12, 2021 3:04 pm

It is plainly obvious to computer professionals that Willis is trying to “show” us how bright he thinks he is.

Derg
Reply to  Brian Jackson
March 12, 2021 3:23 pm

Hello pot?

MarkW
Reply to  Derg
March 12, 2021 8:55 pm

I doubt Brian has sufficient self awareness to realize he has just been insulted.

MarkW
Reply to  Brian Jackson
March 12, 2021 8:55 pm

Willis is demonstrating why he has the background to make the judgements that he is making regarding models.
He never claimed to be a world class expert on all things computer.
That’s been your role.

fred250
Reply to  Brian Jackson
March 12, 2021 9:35 pm

“is trying to ‘show’ us how bright he thinks he is.”

That is YOUR modus operandi..

FAKE bravado, backed by an empty mess.

And it is FAILING BADLY

You are coming across as an ignorant and clueless moron.

MarkW
Reply to  fred250
March 13, 2021 11:37 am

The difference is that Willis doesn’t feel the need to show us anything. He just lays out the facts.
Brian is the one who feels the need to demonstrate his mental superiority. Too bad for Brian that he has never succeeded in demonstrating his mental adequacy, much less superiority.

Gerald Machnee
Reply to  Brian Jackson
March 13, 2021 5:18 pm

Willis does not have to try to show how bright he is. It is obvious from his work. You, on the other hand, are trying to dig deeper into a pit. Oh, you are successful at that. For a minute I thought you were not capable of anything. Just do not wear out the shovel.

MarkW
Reply to  Brian Jackson
March 12, 2021 8:48 pm

How typical of Brian, when caught in a lie, he changes the subject.

fred250
Reply to  Brian Jackson
March 12, 2021 9:33 pm

CPM is not a computer language.

WRONG AS ALWAYS.. your base-level ignorance is showing through more with your every post.

It contains a set series of instructions that you have to learn the meaning of to communicate with the computer.

If that isn’t a “language” then you have a really weird and twisted idea about what a “language” is.

Your puerile grunts and groans just won’t cut it. !!

Carlo, Monte
Reply to  MarkW
March 12, 2021 4:05 pm

Brainiac is an absolute expert on absolutely everything, and needs the entire world to see this.

aussiecol
Reply to  Carlo, Monte
March 12, 2021 8:48 pm

Might have to rename him ‘Lawrence of everywhere’

MarkW
Reply to  Carlo, Monte
March 12, 2021 8:56 pm

Meanwhile he declares that Willis has a big ego because even after having reality explained to him twice, Willis still fails to agree with Brian.

Carlo, Monte
Reply to  Brian Jackson
March 12, 2021 10:06 pm

Yet any Unix variant has stuff like c, lex, yacc, apache, etc built-in. Is it an OS or a language?

PaulH
March 12, 2021 1:05 pm

I studied computer science in high school in the early-mid 1970s. There were 6 students in my first computer programming class. We had a couple of IBM card punch machines in an oversized closet for preparing our assignments. Once a week, one of us would take a box or two of punch cards to the local bus station for shipment to a nearby university that had an IBM mainframe. They would run a batch job of our cards, and about a week later we would have a printout. Fun times! From there to a university degree in math and computer science, and work as a professional programmer starting in 1980.

 What I’ve learned, forgotten, then re-learned over the years is that no computer program can do more than what it is programmed to do. This applies to database programs, graphic design applications, games and, of course, models. But people don’t seem to grasp this basic fact. As Willis points out, all too often we hear that a model proves such-and-such. Sorry, no model can prove anything. Models say what they are programmed to say. We see this with climate models and CV-19 models when a scary scenario is required. And when the scary thing fails to materialize, they just move the forecast of the scary thing further into the future.

Curious George
Reply to  PaulH
March 12, 2021 2:45 pm

The trouble with computers is that they do what we tell them to do, not what we want them to do.

MarkW
Reply to  Curious George
March 12, 2021 8:58 pm

I’ve been requesting a DWIM op code from chip manufacturers for decades now.
(Do What I Mean)

Reply to  PaulH
March 13, 2021 5:20 am

Oh, a computer can do many things other than what it is programmed to do. That’s when you call in the hardware techs to figure out what is wrong. I recall a long day trying to figure out why the paycheck printer was suddenly printing gibberish. Nothing wrong with the computer or its software. Nothing wrong with the printer. Nothing wrong with the signature box between the two. Hardware guy walked in, walked out, walked back in with a USB cable he pulled out of someplace else. Voila! (Nothing visibly wrong with the old cable, either – I did check that.)

Even in modern systems, a random cosmic ray, or an intermittently bad transistor among the many billions, can waste days.

Curious George
Reply to  writing observer
March 13, 2021 8:47 am

“Everything that can go wrong will go wrong.”
Commentary: Murphy was an optimist.

PaulH
Reply to  writing observer
March 13, 2021 9:01 am

Back in the early 1980s, the tech company I worked for had a hardware/software installation in the CAD department of a large manufacturer. The issue was the computer would crash at roughly the same time each morning, just as the designers were starting their day. It turned out the boss had his own coffee maker in his office on the same circuit as our hardware. When the boss started the brewer, it caused enough of a draw and/or spike on the circuit that our hardware would crash. I guess the boss had to resort to vending machine coffee after that. 😉

Larry in Texas
March 12, 2021 1:15 pm

A computer model is nothing more than a physical realization of the beliefs, understandings, wrong ideas, and misunderstandings of whoever wrote the model. Therefore, the results it produces are going to support, bear out, and instantiate the programmer’s beliefs, understandings, wrong ideas, and misunderstandings. All that the computer does is make those under- and misunder-standings look official and reasonable. Oh, and make mistakes really, really fast.

Lol! Even though I am only a retired lawyer and NOT a math whiz of any stripe, I have understood the principle very well from my years of experience with clients who tried to show me their economic models, or engineer clients who thought they had the best idea since sliced bread. All I had to do was ask them about the assumptions they made in concocting the “model” or the idea. Usually, it blew the model out of the water, making it what I feared – a pipe dream. The saying, “garbage in, garbage out” was also one I was continually reminded about when I had to confront the subject of “climate change” with my environmental clients.

When you write about a subject that interests me (and this one truly does interest me, for I like you remain curious in my old age), I always learn something new in a very understandable way. So thanks very much for this particular piece.

Mr.
Reply to  Willis Eschenbach
March 12, 2021 6:01 pm

Another interesting post from you Willis. Keep ’em coming.

I just have one disagreement with a phrase you used –

“stay safe in these curious times”

I say we are living in INCURIOUS times.
People just accept at face value everything the media spews forth.

Roger Taguchi
Reply to  Willis Eschenbach
March 12, 2021 6:57 pm

Hi Willis (and Larry in Texas)! I respect highly intelligent people who admit to having limited formal backgrounds in physics and chemistry. I can send you, on request at my email address rtaguchi@rogers.com, PDF attachments summarizing basic Atomic and Molecular Spectroscopy, with links, to get you up to speed in understanding the quantum mechanical models which accurately explain the observed infrared (IR) spectra from which we calculate climate sensitivity (before feedbacks).

From 1967-1971, I was a grad student at the University of Toronto under Prof. John Polanyi who deservedly won the 1986 Nobel Prize for Chemistry (see https://en.wikipedia.org/wiki/John_Polanyi ). I thought up, executed, and wrote up a short experiment on the IR emission from vibrationally excited HF molecules (HF’) formed during the inelastic collision between electronically excited mercury atoms (Hg*) and ground vibrational state (v=0) HF molecules. See https://www.osapublishing.org/ao/abstract.cfm?uri=ao-10-8-1755 (H. Heydtmann was a visiting professor from Germany).

The relevance to climate change is that 667 cm^-1 IR photons emitted from a 288 K Planck black body (the Earth’s surface) are absorbed by v=0 CO2 molecules which are boosted to the v=1 first excited bond-bending vibrational state.

If these excited CO2′ molecules simply re-emitted IR photons, there would be no net warming of the atmosphere.

However, during inelastic collisions with N2, O2, and Ar molecules that constitute 99+% of the dry atmosphere, the excited CO2′ molecules can transfer their vibrational energy to translational and rotational motions of the departing air molecules.

Because N2 and O2 are non-polar diatomic molecules, and Ar is monatomic, they do not possess changing electric dipole moments and therefore do not emit any significant amount of IR energy. So the extra translational and rotational energy ends up after many more collisions shared among many molecules, and the troposphere will have warmed up. This is the true mechanism for the greenhouse effect, which came to me in a flash when I was a junior in undergrad Chemistry. But I thought at the time that this was so obvious I did not pursue it further.

Prof. William Happer of Princeton is correct when he says that the CO2 IR lines are all saturated from the v=0 ground state (so doubling CO2 will have no effect). However, the two small pockets between his red and black simulated spectra involve absorption from the v=1 first excited state to higher energy vibrational states. Since at 15 Celsius only around 3% of all CO2 molecules are in the v=1 first excited state, the lines from the v=1 state are not all saturated, so doubling CO2 does result in a small amount of extra net absorption which explains climate sensitivity.
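A quick sanity check on that ~3% figure, sketched in R: the Boltzmann factor for a 667 cm^-1 level at 288 K (ignoring the bending mode’s twofold degeneracy) gives

    wavenumber <- 667              # cm^-1, the CO2 bending mode
    c2         <- 1.4388           # second radiation constant hc/k, in cm*K
    temp_K     <- 288              # surface temperature, kelvins
    exp(-c2 * wavenumber / temp_K) # ~0.036, i.e. roughly 3% of molecules in v=1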


Tom Abbott
Reply to  Roger Taguchi
March 14, 2021 10:49 am

Thanks for that, Roger. That’s the first technical critique I’ve seen of Dr. Happer’s research.

Granted it was a small point, but I’m glad to see Happer’s work being mentioned.

Dr. Happer’s research is game-changing if confirmed. If confirmed, Dr. Happer’s findings mean we don’t have to worry about CO2 and we don’t have to shut down fossil fuels and our economy to save the world. The world will be just fine without our doing all that.

Ethan Vos
March 12, 2021 1:17 pm

Many years ago I was a young mechanic apprentice. Early 80’s. The shop I was in bought a $40K computerized diagnostic machine.

I thought they were stupid. The machine didn’t do anything that our tune up guy couldn’t do.

But it had a cool printout. When the customers were presented with said printout saying “Based on the test data and Bruce’s expert opinion we recommend……” they almost always said yes.

Turns out those flashing lights and stupid printouts paid for the machine in 3 months. No different information, but the computer said so.

Meab
March 12, 2021 1:29 pm

Couldn’t help but notice how none of the recognizable climate alarmists that regularly post here have written in telling how they have a similar history with computing. Maybe learning about computing when you needed to know what was going on inside the computer in gory detail gives you a healthy appreciation of the fallibility of computers and computer models.

Despite being over a decade younger than Willis, I learned programming (Fortran) on an even older computer, a CDC 160A with 8K words of magnetic core memory. I had the opportunity to build a very early MITS Altair for the local community college in 1975 – one of the very first PCs. I programmed it, which consisted of setting switches to set operations and then hitting the load switch to load the instruction into memory. It beat the CDC running the same program. However, seeing how difficult it was to program, I concluded that PCs were going nowhere. Mere miles away, at almost exactly the same time, Bill Gates came to a different conclusion and began work on Altair Basic, Microsoft’s first product.

griff
Reply to  Meab
March 13, 2021 2:21 am

Well see my post: I’ve got years of computing experience… including systems programming, maintaining operating systems and designing editors and working on early word processing systems…

fred250
Reply to  griff
March 13, 2021 11:07 am

And all of it totally WORTHLESS in your hands..

michel
Reply to  fred250
March 13, 2021 12:30 pm

An unfortunate lapse of tone. Degrades the forum to the kind of mindless abuse you find on Ars Technica when someone departs from the Party Line. Griff just thinks differently. You will not change his mind like this.

As Sai Baba said:

“Before you speak, ask yourself: Is it kind, is it necessary, is it true, does it improve upon the silence?”

fred250
Reply to  michel
March 13, 2021 12:54 pm

yawn

Yes, griff does debase the forum.,. always.

Komeradecube
Reply to  Willis Eschenbach
March 14, 2021 8:11 pm

Sorry Willis, I can’t go with you on this one. Griff is not presenting alternative ideas, he’s making drive-by shootings. While we can never truly know someone’s motivations, it is very difficult to believe after years of observation that his purpose is anything but disruption.

michel
Reply to  Willis Eschenbach
March 16, 2021 2:36 am

Yes, this is absolutely right.

The point is not how anyone feels about a comment.

I would say to Fred250: How you feel about Griff or anyone else is irrelevant to how you should respond. The discussion is not about you and your feelings. It’s like conducting a loud conversation at a play; it’s just getting in the way. You may feel Griff gets in the way, and maybe he sometimes does, but that is not a reason for starting up yet another irrelevant off-topic conversation.

We are not participating, or should not be participating, in these discussions in order to express or relieve our feelings.

We should be participating in them in order to make some contribution to a pleasant and sensible if sometimes hard-argued discussion and increase our understanding.

The right thing to do with a ‘drive-by’ is simply ignore it. Because responding in kind is going to impair the whole discussion.

You can see where this ends up if you visit the Ars Technica comment pages on some controversial topic. In no time over there you have a chorus of people frothing at the mouth over Trump and heaping abuse on people they accuse of being politically incorrect, without regard to the nominal topic under discussion. Or posting ridiculous huge graphics which I have not figured out what they mean, but they are evidently part of the same pattern.

It prevents any sensible discussion of any topic. The forum becomes unreadable. This is why people should not do it. We are not interested in how people feel about Griff or anyone else. That is not what we are here for. The best advice is, people should keep their feelings to themselves. If they cannot control themselves, stop reading.

Watts is unusual in having very light moderation and allowing a wide range of views. But for this to be possible, and the forum still readable, people have to control their posts to being points of view on the subject. Otherwise readability will eventually require heavier and heavier censorship.

Willis’ advice is right. Quote the words, and direct our comments to them. And keep it topical, not personal.

Reply to  Willis Eschenbach
March 14, 2021 9:20 pm

Hear, hear! And I hope your friend Rud is “listening” when YOU speak.

ralfellis
March 12, 2021 1:32 pm

Brings back memories.

Our punch-card programs had a thousand cards, in huge great trays – and woe betide the fool who dropped a tray. (Program destroyed.)

And the Datapoint 2200 was a wonderful machine with removable storage disks (about 35 cm across).

Were you responsible for the program which output an image of a naked lady on the printer? I also seem to remember a rudimentary ‘space invaders’ game.

RE

MarkW
Reply to  ralfellis
March 12, 2021 2:34 pm

I was told by a friend who worked in the computer lab, that the admins decided to implement a page limit per job after one programmer submitted a program that had an infinite loop that had only one command in it. A form feed that was sent directly to the printer.
I’m told the paper arced half way across the computer room before touching down. An entire box of paper was exhausted before someone could hit the kill switch.
The offending programmer was immediately hauled into the computer room, given a trash can and instructed to clean up his mess.

Reply to  ralfellis
March 12, 2021 3:20 pm

Perhaps you mean this?

https://en.wikipedia.org/wiki/Star_Trek_%281971_video_game%29

I recall it finding its way onto mainframes at work not long after. Some people got through roll after roll of thermal paper on their office computer terminals. CRT screens were at least a little more discreet, but the cost in time wasted must have been monumental.

griff
Reply to  ralfellis
March 13, 2021 2:22 am

yes… seeing someone drop a stack of cards had a horrible fascination to it…

Editor
Reply to  ralfellis
March 13, 2021 2:14 pm

It was common to draw a diagonal line across a big card deck so that one could put it back together quickly.

One day I saw the Univac 1108 operator deal with a card jam by extracting the pieces and just throwing them out instead of making a new card. It turns out that was in the data portion of the deck and was just sample points. Missing one or two was not a big problem.

March 12, 2021 1:35 pm

A great article, Willis, with which I completely agree. I am another old physics guy who has done the hard yards in Fortran, Algol, Pascal, DOS, Basic, Visual Basic, C, C++, Python. I have even written in machine code for a vacuum tube computer (SILLIAC, Sydney Uni, 1963). I have coded fluid dynamic models of tidal and tropical cyclone forcing of ocean currents and became very skeptical of their value. The problem is that computer models are deterministic and cannot handle the stochastic phenomenon of turbulence because the Navier-Stokes equations break down at high Reynolds numbers. (I wrote a book on this, ISBN: 1-5275-3206-2). The idea that present day carbon emissions will remain in the atmosphere for millennia is false and based solely on climate models. Much of the hysterical rhetoric about emissions is based on this unwarranted assumption (Reid and Dengler, JGR Atmospheres, under review).

Geoff Sherrington
Reply to  John Reid
March 12, 2021 5:44 pm

John,
Would SILLIAC be the computer that we saw at Sydney when our class of 20 or so from RAAF College at Pt Cook were on tour in 1969? Geoff S

jono1066
March 12, 2021 1:36 pm

Jean ?
you must remember jean !
and lunar landing
and
and

March 12, 2021 1:38 pm

Great trip down memory lane Willis; thank you!

Re: (Continuous simulation) models. The value of modelling lies in building the model. The building process reveals what information is needed (both qualitatively and quantitatively) to close the loops and so points the modeller directly at the research that needs to be done. Parameters are admissions of failure to obtain the necessary information / relations.

David Jay
Reply to  Morgenroth
March 12, 2021 4:19 pm

Lunar Lander. On a KSR-33 teletype terminal. Good times.

March 12, 2021 1:38 pm

Yes but what about:

Structured Expert Judgement and Calibrated Mental Models?

That I just read about in the previous Watts Up With That Sea Level Post?

Sorry, I couldn’t resist (-:

Tom Bauch
March 12, 2021 1:38 pm

What, no PL/1? 🙂

Brian Jackson
Reply to  Tom Bauch
March 12, 2021 2:55 pm

PL/1 was the best of both (COBOL & FORTRAN) worlds. !!!

Reply to  Tom Bauch
March 12, 2021 3:24 pm

Or ADA or CORAL? (Both used in defence systems programming)

BigJohn
Reply to  Itdoesn't add up...
March 12, 2021 6:33 pm

And fixpac.

RicDre
Reply to  Tom Bauch
March 12, 2021 5:08 pm

Ah, PL/I. I learned that language in college but never worked for (or knew of) anybody who actually used it. If I recall correctly, it was sort of a mash-up of COBOL and FORTRAN.

Reply to  Tom Bauch
March 13, 2021 9:53 am

I learned PL/1 at Penn State in the late 1980s.

We had terminals to work on, but the compiler still operated in terms of punch cards. Leading to the dreaded “Deleted card encountered” error.

Joe Crawford
Reply to  Tom Bauch
March 13, 2021 1:16 pm

In its heyday I’d bet more business applications were written in PL/I than practically any other programming language, with the possible exception of COBOL. Quoting from the wiki (yea, I know): “The PL/I optimizing compiler… was IBM’s workhorse compiler from the 1970s to the 1990s”

Reply to  Joe Crawford
March 13, 2021 1:54 pm

>>
. . . with the possible exception of Cobal.
<<

There’s a tremendous amount of legacy code written in COBOL. You can still find job offers in the help wanted ads for COBOL programmers.

Jim

March 12, 2021 1:54 pm

Maybe one thing the computer is useful for is in a similar way to how you come to *properly* understand something yourself.
viz; Try to explain it to someone else

It will come undone even then though because the computer cannot or will not ask ‘why’
The computer thus becomes a perfect slave and works to reinforce erroneous thinking
Exactly what’s going on here inside Climate Science – computers are amplifying junk.

The Datapoint reminded me of my first desktop. The HP85
Ain’t they similar?

Mostly used as a datalogger, driving and recording the outputs of HP test equipments in the Electronics Research Lab I worked in – testing little analogue circuits over extremes of time, voltage, noise and temperature
Was it via the ‘HPIB Databus’ – that chunk sticking out the rear of the museum picture.

It did play a mean version of a card game – can’t recall now but the GF at the time, an avid Cribbage player, loved it

How times have moved on eh – things have never been better. not

RicDre
Reply to  Peta of Newark
March 12, 2021 5:19 pm

“Maybe one thing the computer is useful for is in a similar way to how you come to *properly* understand something yourself.”

I used to write programs for an Accountant who could never tell me exactly what he wanted, but was very good at telling me what he didn’t want. We used to circle around the problem (kind of like finding the range for Artillery) until we got something close enough that he could use it.

“It did play a mean version of a card game”

My first program (in PDQ FORTRAN!) was a program to play “Chemin de fer” (it’s sort of the French version of Blackjack). Not really much of a program, but it seemed like a brilliant piece of work at the time.

Editor
March 12, 2021 1:54 pm

In my first job, on some nights I drove the van loaded with trays of punched cards and operator instructions to the computer for the overnight computer run. My career nearly ended prematurely the night I didn’t close the van’s back door properly, and two or three trays of cards fell out and spilled across the street. Fortunately, it wasn’t raining, and the cards were OK, and I shoved them back in what I hoped were the correct trays – completely out of order of course – and drove on to the computer. All the cards had a sequence number in columns 73-80, so all the operator had to do was start by sorting those trays while I kept my fingers crossed all night. In the morning – there had been a successful overnight run on the computer! Computing was an interesting exercise in those days.

OK S.
March 12, 2021 2:03 pm

Nice story, Willis. I enjoy them all.

In 1963 I had heard of those big computers, but the only one I ever held was my Uncle Bill’s Magic Brain.

Keep up the good work.

Editor
Reply to  OK S.
March 13, 2021 2:23 pm

I had one of those. Cute toy. Of course, I would have done almost anything for a Curta calculator. http://www.vcalc.net/cu.htm


Rud Istvan
March 12, 2021 2:19 pm

Willis, thought I would provide you with a fact tidbit in thanks for this most excellent post.
You noted that at climate model grid sizes they cannot ‘see’ thunderstorms despite using the Navier-Stokes equations.

After finishing his quantum electrodynamics work, the ever-curious Richard Feynman spent four years experimentally and mathematically exploring Navier-Stokes at Caltech. His conclusions were written up in the Feynman Lectures on Physics, volume 2, chapters 40, “Flow of Dry Water” (classic Feynman joke: fluids with negligible viscosity), and 41, “Flow of Wet Water” (viscosity, N-S).

Last three paragraphs of the last section of Chapter 41 are quite famous. I hauled out my well thumbed Lectures copy and provide you an excerpt:

We have written the equations of wet water flow…
But had you not visited Earth, you would not know about thunderstorms…
The next great era of awakening of human intelligence may well produce a method of understanding the QUALITATIVE content of equations. Today we cannot.

Rud Istvan
Reply to  Willis Eschenbach
March 12, 2021 5:39 pm

With which emergent phenomena I completely agree, albeit from a different emergent perspective. But Feynman preceded us both. Genius is just that.

cerescokid
Reply to  Willis Eschenbach
March 13, 2021 7:05 am

“The next great era of awakening of human intelligence may well produce a method of understanding the QUALITATIVE content of equations. Today we cannot.”

Truly great insight. When you and Willis write something, I pay attention. I know you have lived in the real world, unlike some of the usual detractors.

As I think about the dichotomy between skeptics and true believers, I wonder if that is what separates us.

Rob Robertson
March 12, 2021 2:21 pm

Wow, what a good read Willis. As a humble accountant who tried vainly to use Excel to model the various what-ifs in creating a budgeting tool for our management and sales team, I cannot begin to imagine the complexities you are talking about, both within the software climate modellers are using and of course within the climate system itself.

Juan Slayton
March 12, 2021 2:29 pm

OK Willis,
You have deliberately induced time travel, and now I need to get back home. I can remember some way points in getting here: encounters with several languages, particularly C, using various Basics, doing 6500 machine programming, borrowing my neighbor’s Trash 80, punching Hollerith cards and getting disappointing results when trying to run my programs….

You have taken me back to 1958, and like you, I am riding a bicycle around downtown San Francisco. No blueprints, though–I work out of the Western Union office around Third and Market. One of my weekly jobs is to pick up the printing mats for the weekend comics from the Chronicle office and take them across the street to the Examiner. Why they paid Western Union to do this, I have no idea–unless it had something to do with a union contract.

It’s a heady experience for a 17 year old to be walking around the street holding in his hands all the Sunday comics for most of San Francisco. But I really need to get back home in time for dinner. The models can wait.

March 12, 2021 2:38 pm

Willis: thanks for the wonderful trip down Memory Lane! I’m almost exactly your age. While I’ve never written programs for a living, I did learn enough Fortran to do a bit of programming. And my first wage-paying job, at age 12 or 13, was sweeping out the Okla A&M Computer Center, which consisted of a vacuum-tube Univac, obsolete even then. But kind of a plum job for a kid: air-conditioning! — a rarity then. Pay was 50c. an hour!

And the anecdote of Freeman Dyson consulting Fermi on his toy model was priceless! Dyson didn’t have much use for the climate models, either. And got Nastygrams from the Usual Suspects for his pains, when he said so.

Well, we need to keep whacking away at the Good Fight! Science will self-correct, in due time? We hope before the Climate Activists wreck our economy with useless and actively harmful ‘solutions’. Giant, floating windmills in the North Sea! Or offshore Northern California. Yeah, those will work….

Roger Knights
Reply to  Peter D. Tillman
March 13, 2021 2:47 am

“And the anecdote of Freeman Dyson consulting Fermi ….”

It was Richard Feynman.

March 12, 2021 2:42 pm

Willis, thank you for this post. Ah yes, punch cards. But I digress.

I recall this earlier post in which you recounted your interaction with Gavin Schmidt on how the models made sure that energy was conserved. “So I asked him how large that energy imbalance typically was … and to my astonishment, he said he didn’t know.”

https://wattsupwiththat.com/2020/01/18/gavins-falsifiable-science/

Last year I came across this paper, Zhao et al. 2018b, which characterizes the GFDL’s AM4.0 model.

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017MS001209

Supplement 1, in the lead paragraph (link to pdf below):
“S1 Treatment of energy conservation in dynamical core. The dissipation of kinetic energy in this model, besides the part due to explicit vertical diffusion, occurs implicitly as a consequence of the advection algorithm. As a result, the dissipative heating balancing this loss of kinetic energy cannot easily be computed locally, and is, instead returned to the flow by a spatially uniform tropospheric heating. This dissipative heating associated with the advection in the dynamical core in AM4.0 is ∼ 2 W m−2.”

https://agupubs.onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2F2017MS001209&file=jame20558-sup-0001-2017MS001209-s01.pdf

So at least in the GFDL’s latest model, about 2 W/m^2 is added back for energy conservation. My sense is that the model has to blur the handling of so much energy that discriminating between incremental outward emission to space and incremental heating of the surface from greenhouse gas forcing is simply not possible.
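
For what it’s worth, here is a minimal Python sketch of how I read that supplement: the kinetic energy lost implicitly to the advection scheme is summed globally and handed back as a single spatially uniform heating term. Every name and number below is a hypothetical illustration, not the GFDL code.

def uniform_heating_increment(ke_lost_joules, air_mass_kg, cp=1004.0):
    # Temperature increment (K) that returns the lost kinetic energy
    # to the flow as one uniform heating of the whole air mass.
    return ke_lost_joules / (cp * air_mass_kg)

# Toy numbers: ~2 W/m^2 over the Earth's surface for one 30-minute step
earth_area = 5.1e14                  # m^2
ke_lost = 2.0 * earth_area * 1800.0  # J implicitly dissipated in the step
tropo_mass = 4.0e18                  # kg, rough tropospheric air mass
print(uniform_heating_increment(ke_lost, tropo_mass))  # ~4.6e-4 K per step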

Am I looking at this correctly?

Curious George
Reply to  David Dibbell
March 12, 2021 3:16 pm

https://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257
The Community Atmosphere Model 5.0 from the National Center for Atmospheric Research decreed the latent heat of water vaporization to be a constant independent of temperature. They chose a value that is 2.5% too high for tropical seas (where most of surface water evaporation on our planet happens). NCAR CAM 5 is considered “science”.

https://judithcurry.com/2012/08/30/activate-your-science/#comment-234131
“If the specific heats of condensate and vapour is assumed to be zero (which is a pretty good assumption given the small ratio of water to air, and one often made in atmospheric models) then the appropriate L is constant (=L0). (Note that all models correctly track the latent heat of condensate).”
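
For what it’s worth, the size of that error is easy to reproduce. A standard linear fit for the latent heat of vaporization (the Rogers & Yau form; my choice of fit here, not necessarily what CAM5 uses) is L(t) ≈ 2.501e6 - 2361*t J/kg, with t in °C:

def latent_heat_vaporization(t_celsius):
    # Rogers & Yau linear approximation, reasonable for roughly 0-40 C
    return 2.501e6 - 2361.0 * t_celsius

L0 = 2.501e6  # a constant L pinned at 0 C, as a model might assume
for t in (0.0, 26.0, 30.0):
    pct_high = (L0 / latent_heat_vaporization(t) - 1.0) * 100.0
    print(f"{t:4.0f} C: constant L0 is {pct_high:.1f}% too high")

At a warm tropical sea surface of about 26 C, the constant value comes out ~2.5% high, which matches the complaint above.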

leitmotif
March 12, 2021 2:45 pm

RSX-11M-PLUS operating system.

David Jay
Reply to  leitmotif
March 12, 2021 4:24 pm

With you there. We built a multi-terminal Level-2 cell controller on M+ with industrial terminals that had a VT-100 emulation mode.

leitmotif
Reply to  David Jay
March 12, 2021 6:10 pm

VT-100. Yeah. 🙂

leitmotif
Reply to  David Jay
March 12, 2021 6:14 pm

Lots of DEC courses at the Butts Centre in Reading.

Roger Taguchi
March 12, 2021 2:53 pm

Excellent essay on computer models and calculations, Willis! As a Chemistry student, I took a course in Numerical Analysis in 1966-67, and our first assignments were to solve simple arithmetic calculations using (1) pencil & paper, (2) mechanical calculators filled with springs, levers and gears (the kind that went nuts if you tried to divide by zero), and (3) simple FORTRAN IV programs (involving going to the punch card room, followed by submitting the deck of punch cards to the computer operator of the IBM 7040 or 7090). The idea was to show that the computer program did no more than arithmetic, although a lot faster when many iterations and 10 significant digits were involved. Then we could get on to solving equations, calculating definite integrals, derivatives, etc. using approximations and arithmetic.

Estimates of climate sensitivity (the amount of warming when CO2 is doubled) start with computer models of infrared (IR) spectra obtained by satellites looking down on a warm Earth (for example, see Fig. 3 at https://climateaudit.org/?p=2572 ). Models in Molecular Physics of the bond-bending vibrations in CO2, H2O (water vapour), O3 (ozone), N2O are quite good and fit the observed spectra (see the MODTRAN calculated spectra to 20 km altitude at https://en.wikipedia.org/wiki/Radiative_forcing ).

William Happer has run high-resolution (HITRAN) models to altitudes beyond 70 km, with similar results (see 24:13 to 29:17 at https://www.youtube.com/watch?v=CA1zUW4uOSw). The entire 45-minute presentation is perhaps the single best introduction to the real science of climate change, and can be understood in 26 minutes when played back at 1.75x speed.

Happer’s value of 3 W/m^2 for radiative forcing corresponds to the tiny difference in area between the red and black simulated spectra. This agrees with the 3.39 W/m^2 which corresponds to the tiny difference in area between the green and blue simulated spectra in the Wikipedia article on Radiative Forcing. The oft-quoted value of 3.7 W/m^2 appears to come from the formula delta-F = 5.35 × ln 2, but in fact 3.7 W/m^2 came from another computer simulation, and the factor 5.35 is derived from 3.7/ln 2. The difference between 3.39 and 3.7 W/m^2 is about 8-9%, and may indicate the possible error expected in these calculations.

A Top Of the Atmosphere (TOA) outgoing flux of 240 W/m^2 corresponds to a Planck black body temperature of 255.07 K (see https://en.wikipedia.org/wiki/Stefan-Boltzmann_law ). If we add or subtract 3.7 W/m^2 from 240 W/m^2, we get Planck black body temperatures that differ from 255.07 K by about 1.0 degree (the difference is about 0.9 K if we use 3.39 W/m^2).
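
That arithmetic is easy to verify in a few lines of Python (a sketch of the same Stefan-Boltzmann inversion, using the forcing values quoted above):

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def planck_temperature(flux_w_m2):
    # Invert F = sigma * T^4 for an ideal black body
    return (flux_w_m2 / SIGMA) ** 0.25

base = planck_temperature(240.0)  # ~255.07 K
for dF in (3.7, 3.39):
    dT = planck_temperature(240.0 + dF) - base
    print(f"forcing {dF} W/m^2 -> ~{dT:.2f} K warmer")  # ~0.98 K and ~0.90 K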

Because tropospheric temperature profiles are roughly parallel regardless of latitude (if we ignore temperature inversions during the long polar nights), the change in surface temperature on doubling CO2 will be 1.0 or 0.9 degrees (depending on whether we use 3.7 or 3.39 W/m^2), not including feedbacks. The reason is that if energy is added to all the molecules of the atmosphere, the most probable distribution results when each molecule, on average, gets an equal share. I.e. each parcel of air warms up by the same number of degrees, on average, since temperature is a measure of the average kinetic energy of the molecules.

However, the error increases when we consider water vapour feedback. The reason is that CO2 is relatively constant at around 410 ppmv regardless of altitude, whereas water vapour is generally assumed to be at 50% relative humidity (obviously not true in the muggy tropics) and saturated water vapour pressure (for 100% relative humidity) varies exponentially with temperature (see https://en.wikipedia.org/wiki/Clausius-Clapeyron_relation ).
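
To get a feel for how steep that exponential dependence is, Bolton’s (1980) fit to the Clausius-Clapeyron relation is a standard shortcut (my illustration, not part of the argument above):

import math

def saturation_vapour_pressure_hpa(t_celsius):
    # Bolton (1980) empirical fit to the Clausius-Clapeyron relation
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Saturation pressure roughly doubles every ~10 C (about 6-7% per degree):
for t in (0, 10, 20, 30):
    print(t, round(saturation_vapour_pressure_hpa(t), 1))  # 6.1, 12.3, 23.4, 42.5 hPa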

I estimate from the IR spectra from 0 to 2400 cm^-1 that water vapour feedback after one iteration is around 32%. Therefore, using r = 0.32 in an infinite geometric series, the climate sensitivity including water vapour feedback could be as high as a/(1-r) = 1.47a = 1.47 K (assuming a = 1.0 K). An infinite number of iterations would boost 32% feedback to 47%.

Note: if r = 2/3, then a/(1-r) = 3a = 3 degrees of warming if a = 1.0 K. This is how positive water vapour feedback boosted climate sensitivity from 1 degree to the long-quoted 3 degrees.
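
The geometric-series arithmetic in the two paragraphs above is easy to sanity-check (a minimal Python sketch of the same sums, nothing more):

def sensitivity_with_feedback(a, r):
    # Closed form of a + a*r + a*r**2 + ... = a / (1 - r), valid for |r| < 1
    return a / (1.0 - r)

print(sensitivity_with_feedback(1.0, 0.32))     # ~1.47 K, the 32% feedback case
print(sensitivity_with_feedback(1.0, 2.0/3.0))  # 3.0 K, the oft-quoted value

# Infinitely many iterations turn one-pass 32% feedback into r/(1-r), ~47%:
print(0.32 / (1.0 - 0.32))  # ~0.47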

Yet this 3 degrees had to be at least a factor of 2 too large, for the following reason:
0.8 degrees of warming occurred between 1850 and 2019, as CO2 increased from 285 to 410 ppmv. Since warming is proportional to the logarithm of the CO2 concentration, the MAXIMUM expected warming on doubling CO2 is 0.8 × [log 2 / log(410/285)] ≈ 1.5 degrees. Similar calculations could have been made anytime during the last 3 decades.
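
Again the arithmetic checks out; in Python, with the 285 and 410 ppmv figures used above:

import math

observed_warming = 0.8  # K, 1850 to 2019
max_per_doubling = observed_warming * math.log(2.0) / math.log(410.0 / 285.0)
print(round(max_per_doubling, 2))  # ~1.52 K; an upper bound, since it credits ALL warming to CO2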

And this assumes that ALL of the 0.8 degree warming was due only to CO2 and related feedbacks. It’s no good to say that EQUILIBRIUM climate sensitivity is higher due to a time constant (lag), because any lag in warming would mean that temperatures would continue to increase even if CO2 remained constant. This is not consistent with the observed hiatus in warming over the last 2 decades, even as CO2 has continued to increase (not remain constant).

In addition, all computer calculated spectra, including Happer’s, assume a cloudless troposphere. But clouds reduce the overall importance of changing CO2 to only the molecules in the path length above the cloud tops – since condensed phases like cloud particles act as miniature Planck black bodies which absorb and emit ALL long-wavelength infrared (IR), meaning that any extra absorption by CO2 below the tops of clouds is cancelled by exactly that much less absorption by the cloud particles. And the Earth has about 62% cloud cover, which is increased slightly by increased water vapour, resulting in a slightly higher albedo (just consider that as the Sun rises in the tropics, a clear morning sky turns to afternoon thunderstorm clouds).

So the bottom line is that doubling CO2 would result in about 0.7 degrees net warming, including feedbacks, not 1.5 degrees, and definitely not 3 degrees.

Computer models can then be run as CO2 changes as a function of time, but all must be pinned by the assumed value of climate sensitivity. Predicted temperature changes will all be too high in proportion to how much their assumed climate sensitivity exceeds 0.7 degrees. And some want to ruin the economies of the free world over these flawed predictions???

Clyde Spencer
Reply to  Roger Taguchi
March 12, 2021 10:04 pm

“(2) mechanical calculators filled with springs, levers and gears (the kind that went nuts if you tried to divide by zero)”

Friden calculator?

Editor
Reply to  Clyde Spencer
March 13, 2021 2:42 pm

My father’s employer had a Marchant(?) calculator that just sat there spinning the drive shaft when told to divide by zero. It also had a stop key.

Real division was much, much neater.

Rich Lambert
March 12, 2021 2:55 pm

My first computer programming course was in Fortran, about 1966. You would walk across campus to submit your cards to be run. A day or two later you would check the output, usually to find out the program didn’t run and you had no idea why. A great way to kill any zeal for programming. About 10 years later I used Basic and found it much more satisfying, since it could be submitted remotely with quick turnaround. First PC experience was about 1982, when the company bought an IBM PC. Basically they said use it if you wish but don’t expect any help. Told this story to some junior workers a few years back and they didn’t even know what a punch card was.

Felix
March 12, 2021 3:14 pm

A Datapoint 2200!!! My first paid programming job, in 1976, was on Datapoint 2200s, soon upgraded to 5500s, with a whopping 56K of RAM, 4K of ROM, and the other 4K was interrupt vectors in RAM, but not available otherwise … I think.

Thanks for the memories. On the darker side, an extra “build” in here, near the beginning: “They started out by having us build design and build logic circuit”

WXcycles
March 12, 2021 3:23 pm

… I was always into math, it came easy to me. In 1963, the summer after my junior year in high school, nearly sixty years ago now, I was one of the kids selected from all over the US to participate in the National Science Foundation summer school in mathematics. … It was a great school, about 80% black, 10% Hispanic, and the rest a mixed bag of melanin-deficient folks. … Verification and Validation? … The fact that the model says that something happens in modelworld is NOT evidence that it actually happens in the real world …

Willis, you should be careful sharing this stuff, you come off sounding like a privileged hard-core rayssist, plus you’re male. It’s not a good look.

I do love your dedication to rigorous uncertainty within “the séance”.

Fortran77 and digital watches since 1977. Although I do have a 17 year old Texas Instruments programmable calculator with a Mac processor and 1 megabyte of ROM and 512k of RAM.

I’m delighted with the predictive performance range and useful time interval of current WX models RE “life or death situations”, for amelioration. But fully share your view of climate models. They’re strong evidence of the pivotal role of mass delusion in the development of human civilization, especially one with computers and Mars impact probes.

One thing though, being a failed-geo, due to wanting to understand what’s under me, much more than work as a pro, I have a problem with this sort of remark:

“ … This means that the fact that a climate model can hindcast the past climate perfectly does NOT mean that it is an accurate representation of reality. And in particular, it does NOT mean it can accurately predict the future. …”

You’re being much too generous, as the past is not known. We have ‘records’, but always stored in media that really are not good at storing things. Look at any rock outcrop, and the leached regolith above it, and the soil that forms on top. That’s the Earth’s version of a ‘floppy disk’.

#1. Records aren’t.

Most people can’t even accept that much.

Only a small fraction of a record is ‘complete’ (usually < 0.001% ‘complete’). And even those incomplete ones are in a constant state of edit within geology, where you get to see the skin and a few outer dermal layers at certain points. Same with anthro-studies. Same even with modern history, which absolutely must be edited to suit the now, so that the future will not get the then all ‘wrong’.

The tares : wheat ratio in the < 0.001% editing is another area of concern.

But I would say that no “climate model can hindcast the past climate perfectly” – not even the part for which certain eager-beavers have suitably edited and biased the < 0.001% ‘record’ contents to make that seem to occur, with a suitably compelling delusion-forming and reinforcing capability that improves with each suite of reality versions, which are tuned to better emit money streams.

Which is how you secure the new abacus.

And then there’s infinity. Which complicates things a fair bit, as computers can only deal with discrete defined abstractions – ‘not-infinity’. Else logic tends to explode.

It reveals a computer is fundamentally incapable of addressing the understanding of an infinity, even if said abacus was fabricated from one, as was our brain. Which is disconcerting, and best set aside until on a death-bed, where one may finally be prepared to ponder how that can be?

And still come up with no answers.

I’ve come to the view that this is all I’ll ever really ‘know’. And there’s no logic model that can change it.

It’s abstracted figments … all the way down.

” … Therefore, the results it produces are going to support, bear out, and instantiate the programmer’s beliefs, understandings, wrong ideas, and misunderstandings. All that the computer does is make those under- and misunderstandings look official and reasonable. … “

Actually, scientific papers and text books share a lot of these same characteristics.

Vive la edit!

Geoff Sherrington
March 12, 2021 3:27 pm

Willis,
Thanks for the memories! I’ll be 80 in June. Performance is dropping off.
First computer was a Data General Nova with 4K of memory, 1968, with no mass storage and an ASR-33 punched-paper-tape teletype. Next was a PDP 8e, an 8K OEM on a gamma spectrometer, 1970. Did a course in machine language for the Nova, failed it and forgot it. In 1976 I flew from Sydney to San Francisco with my genius mate Albert, for the sole purpose of having a lady in a huge factory withdraw some of the wires on a memory card, to replace a single faulty ferrite core on the 4K card. Such was the value of memory then.
Was fortunate from then on to have a computer department of professional mathematicians to compute to specifications I would dream up in mineral exploration work. Never did learn to program, though I could read enough in several languages to get the gist and fix easy errors. Our early corporate work was on a Hewlett-Packard 2000 series.
Our geophysics group was at the world’s leading edge with models to simulate the size, broad shape and disposition of underground discrete bodies with magnetic properties, from surface measurements of the displacement of the earth’s magnetic field. The model was created and used in pre-computer days using a mechanical calculator – enter a value, then pull the handle. It worked brilliantly. This taught me that models can work in valuable ways, but that they need iteration after iteration until (practically) all measured variables are reconciled by the model. A tiny adjustment taking days to finesse could have large effects on the final result. Same applies to climate GCMs, I presume. I gave up on those when The Establishment decided it was leading-edge mathematics to average the results of dozens of diverse computer runs to arrive at a best estimate, with no mathematical analysis offered as to the validity of this kindergarten-grade error. Geoff S

R.T.Dee
Reply to  Geoff Sherrington
March 13, 2021 12:10 pm

I believe I may be older than most here – a decade older than Willis. My memories go back to Turing – well, almost! He helped with the design of ACE for NPL – and ACE was the grandfather of DEUCE, made by English Electric, where I worked for the Guided Weapons Division in Luton in 1958. I was working with the LACE team (LACE was an analogue computer) in the Maths/Physics dept.

As a young lad of 20, though, I found the computer section of great interest. It was filled with young females aged around 18 and run by a very attractive female mathematician of around 25. They were called computers…all armed with Fridens… 

Across the way from our lab was one of the DEUCEs. It was the size of a small room with an aisle down the center. I remember being shown over it, with its mercury delay-line RAM and rotary drum memory. I never used it, but it was used in conjunction with LACE for analysis. The MTBF was about 4 hours before one of the 1500 double triodes (12AT7’s, IIRC) bit the dust, or so one of the maintenance techs said.

At the time we were having a small problem with the Thunderbird Missile, which kept blowing up on test flights for then unknown reasons. Yet simulations said all was OK with the servos, rocket parameters, etc. I think it turned out that the aerodynamics of the control fins was incorrectly modelled, as found out by wind-tunnel measurements, causing instability and fin breakup. 

Around that time my room-mate, a mathematician engaged in programming DEUCE, told me about Algol, which fascinated me, and I fiddled with it, but it was 9 years before I used Fortran in the US – they actually let engineers use the IBM 360 in the lunch hour.

Complex Fortran served me for a decade or so; I also used Basic. I avoided Pascal and C and found MATLAB and never looked back – I left all programming to experts where it belongs and concentrated on what matters – ideas. “The purpose of computing is insight, not numbers.” R.W.Hamming, Numerical Methods for Scientists and Engineers.