There Are Models And There Are Models

Guest Post by Willis Eschenbach

I’m 74, and I’ve been programming computers nearly as long as anyone alive. 

When I was 15, I’d been reading about computers in pulp science fiction magazines like Amazing Stories, Analog, and Galaxy for a while. I wanted one so badly. Why? I figured it could do my homework for me. Hey, I was 15, wad’ja expect?

I was always into math, it came easy to me. In 1963, the summer after my junior year in high school, nearly sixty years ago now, I was one of the kids selected from all over the US to participate in the National Science Foundation summer school in mathematics.  It was held up in Corvallis, Oregon, at Oregon State University.

It was a wonderful time. I got to study math with a bunch of kids my age who were as excited as I was about math. Bizarrely, one of the other students turned out to be a second cousin of mine I’d never even heard of. Seems math runs in the family. My older brother is a genius mathematician, inventor of the first civilian version of the GPS. What a curious world.

The best news about the summer school was, in addition to the math classes, marvel of marvels, they taught us about computers … and they had a real live one that we could write programs for!

They started out by having us design and build logic circuits using wires, relays, the real-world stuff. They were for things like AND gates, OR gates, and flip-flops. Great fun!

Then they introduced us to Algol. Algol is a long-dead computer language, designed in 1958, but it was a standard for a long time. It was very similar to Fortran, but an improvement on it in that it used less memory.

Once we had learned something about Algol, they took us to see the computer. It was a huge old CDC 3300, standing about as high as a person’s chest and taking up a good chunk of a small room. The back of it looked like this.

It had a memory composed of small ring-shaped magnets with wires running through them, like the photo below. The computer energized a combination of the wires to “flip” the magnetic state of each of the small rings. This allowed each small ring to represent a binary 1 or a 0. 

How much memory did it have? A whacking great 768 kilobytes. Not gigabytes. Not megabytes. Kilobytes. That’s one ten-thousandth of the memory of the ten-year-old Mac I’m writing this on.

It was programmed using Hollerith punch cards. They didn’t let us anywhere near the actual computer, of course. We sat at the card punch machines and typed in our program. Here’s a punch card, 7 3/8 inches wide by 3 1/4 inches high by 0.007 inches thick (187 × 83 × 0.178 mm).

The program would end up as a stack of cards with holes punched in them, usually 25-50 cards or so. I’d give my stack to the instructors, and a couple of days later I’d get a note saying “Problem on card 11”. So I’d rewrite card 11, resubmit them, and get a note saying “Problem on card 19” … debugging a program written on punch cards was a slooow process, I can assure you.

And I loved it. It was amazing. My first program was the “Sieve of Eratosthenes”, and I was over the moon when it finally compiled and ran. I was well and truly hooked, and I never looked back.
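That first program is easy to reconstruct today. Here’s a minimal sketch of the Sieve of Eratosthenes in Python (the original was, of course, Algol on punch cards):

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # Cross out every multiple of p, starting at p*p
            # (smaller multiples were crossed out by smaller primes).
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A dozen lines now; back then, a stack of cards and a week of round trips to the instructors.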

The rest of that summer I worked as a bicycle messenger in San Francisco, riding a one-speed bike up and down the hills delivering blueprints. I gave all the money I made to our mom to help support the family. But I couldn’t get the computer out of my mind.

Ten years later, after graduating from high school and then dropping out of college after one year, I went back to college specifically so I could study computers. I enrolled in Laney College in Oakland. It was a great school, about 80% black, 10% Hispanic, and the rest a mixed bag of melanin-deficient folks. (I’m told that nowadays the politically-correct term is “melanin-challenged”, to avoid offending anyone.) The Laney College Computer Department had a Datapoint 2200 computer, the first desktop computer.

It had only 8 kilobytes of memory … but the advantage was that you could program it directly. The disadvantage was that only one student could work on it at any time. However, the computer teacher saw my love of the machine, so he gave me a key to the computer room so I could come in before or after hours and program to my heart’s content. I spent every spare hour there. It used a language called Databus, my second computer language.

The first program I wrote for this computer? You’ll laugh. It was a test to see if there was “precognition”. You know, seeing the future. In my first version, I punched a key from 0 to 9. Then the computer picked a random number and recorded whether I was right or not.

Finding I didn’t have precognition, I re-wrote the program. In version 2, the computer picked the number before, rather than after, I made the selection. No precognition needed. Guess what?

No better than random chance. And sadly, that one-semester course was all that Laney College offered. That’s the extent of my formal computer education. The rest I taught myself, year after year, language after language, concept after concept, program after program.

Ten years after that, I bought the first computer I ever owned — the Radio Shack TRS-80, AKA the “Trash Eighty”. It was the first notebook-style computer. I took that sucker all over the world. I wrote endless programs on it, including marine celestial navigation programs that I used to navigate by the stars between islands in the South Pacific. It was also my first introduction to Basic, my third computer language.

And by then IBM had released the IBM PC, the first personal computer. When I returned to the US I bought one. I learned my fourth computer language, CP/M. I wrote all kinds of programs for it. But then a couple years later Apple came out with the Macintosh. I bought one of those as well, because of the mouse and the art and music programs. I figured I’d use the Mac for creating my art and my music and such, and the PC for serious work.

But after a year or so, I found I was using nothing but the Mac, and there was a quarter-inch of dust on my IBM PC. So I traded the PC for a piano, the very piano here in our house that I played last night for my 19-month-old granddaughter, and I never looked back at the IBM side of computing.

I taught myself C and C++ when I needed speed to run blackjack simulations … see, I’d learned to play professional blackjack along the way, counting cards. And when my player friends told me how much it cost for them to test their new betting and counting systems, I wrote a blackjack simulation program to test the new ideas. You need to run about a hundred thousand hands for a solid result. That took several days in Basic, but in C, I’d start the run at night, and when I got up the next morning, the run would be done. I charged $100 per test, and I thought “This is what I wanted a computer for … to make me a hundred bucks a night while I’m asleep.”
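His simulator is long gone, but the “hundred thousand hands” figure is plain Monte Carlo statistics: the noise in an estimated edge shrinks like one over the square root of the number of hands. A toy stand-in (a biased coin flip per “hand”, not an actual blackjack engine):

```python
import random

def estimate_edge(n_hands, true_edge=0.01, seed=42):
    """Monte Carlo estimate of a player's edge from n_hands simulated
    win/lose outcomes. Each 'hand' is a stand-in for a blackjack hand:
    win +1 unit with probability (1 + true_edge) / 2, else lose 1 unit."""
    rng = random.Random(seed)
    p_win = (1 + true_edge) / 2
    total = sum(1 if rng.random() < p_win else -1 for _ in range(n_hands))
    return total / n_hands

# The estimate's noise scales like 1/sqrt(N). With a true edge of 1%,
# a thousand hands can't resolve it (the standard error is ~3%), while
# a hundred thousand hands pins it down to a few tenths of a percent.
print(estimate_edge(1_000))
print(estimate_edge(100_000))
```

That square-root law is why the overnight C runs were worth a hundred bucks: there’s no shortcut to a solid answer except more hands.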

Since then, I’ve never been without a computer. I’ve written literally thousands and thousands of programs. On my current computer, a ten-year-old Macbook Pro, a quick check shows that there are well over 4,000 programs I’ve written. I’ve written programs in Algol, Databus, 68000 Machine Language, Basic, C/C++, Hypertalk, Forth, Logo, Lisp, Mathematica (3 languages), Vectorscript, Pascal, VBA, Stella computer modeling language, and these days, R.

I had the immense good fortune to be directed to R by Steve McIntyre of ClimateAudit. It’s the best language I’ve ever used—free, cross-platform, fast, with a killer user interface and free “packages” to do just about anything you can name. If you do any serious programming, I can’t recommend it enough.

Oh, yeah, somewhere in there I spent a year as the Service Manager for an Apple Dealership. As you might guess given my checkered history, it wasn’t in some logical location … it was in downtown Suva, in Fiji. There I fixed a lot of computers and I learned immense patience dealing with good folks who truly thought that the CD tray that came out of the front of their computer when they did something by accident was a coffee cup holder … oh, and I also installed the Macintosh hardware for the Fiji Government Printers and trained the employees how to use Photoshop. I also taught two semesters of Computers 101 at the Fiji Institute of Technology.

I bring all of this up to let you know that I’m far, far from being a novice, a beginner, or even a journeyman programmer. I was working with “computer based evolution” to try to analyze the stock market before most folks even heard of it. I’m a master of the art, able to do things like write “hooks” into Excel that let Excel transparently call a separate program in C for its wicked-fast speed, and then return the answer to a cell in Excel …

Now, folks who’ve read my work know that I am far from enamored of computer climate models. I’ve been asked “What do you have against computer models?” and “How can you not trust models, we use them for everything?”

Well, based on a lifetime’s experience in the field, I can assure you of a few things about computer climate models and computer models in general. Here’s the short course.

A computer model is nothing more than a physical realization of the beliefs, understandings, wrong ideas, and misunderstandings of whoever wrote the model. Therefore, the results it produces are going to support, bear out, and instantiate the programmer’s beliefs, understandings, wrong ideas, and misunderstandings. All that the computer does is make those under- and misunder-standings look official and reasonable. Oh, and make mistakes really, really fast. Been there, done that.

Computer climate models are members of a particular class of models called “iterative” computer models. In this class of models, the output of one timestep is fed back into the computer as the input of the next timestep. Members of this class of models are notoriously cranky, unstable, and prone to internal oscillations and generally falling off the perch. They usually need to be artificially “fenced in” in some sense to keep them from spiraling out of control.
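A toy sketch of what that looks like (an invented one-line feedback model, nothing from an actual GCM): a feedback gain even slightly above 1.0 makes the run grow without bound unless the state is artificially clamped.

```python
def run_model(feedback, steps=1000, clamp=None):
    """Toy iterative model: each timestep's output is fed back in as
    the next timestep's input (state -> feedback * state + forcing).
    With a feedback gain above 1.0 the state grows without bound,
    unless it is artificially 'fenced in' by clamping."""
    state = 1.0
    forcing = 0.1
    for _ in range(steps):
        state = feedback * state + forcing
        if clamp is not None:
            state = max(-clamp, min(clamp, state))
    return state

print(run_model(0.99))            # stable: settles toward forcing/(1 - feedback) = 10
print(run_model(1.01))            # unstable: blows up
print(run_model(1.01, clamp=20))  # "fenced in": pinned at the artificial bound
```

The clamped run looks tame, but the number it produces is set by the fence, not by anything physical.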

As anyone who has ever tried to model, say, the stock market can tell you, a model which can reproduce the past absolutely flawlessly may, and in fact very likely will, give totally incorrect predictions of the future. Been there, done that too. As the brokerage advertisements in the US are required to say, “Past performance is no guarantee of future success”.

This means that the fact that a climate model can hindcast the past climate perfectly does NOT mean that it is an accurate representation of reality. And in particular, it does NOT mean it can accurately predict the future.

Chaotic systems like weather and climate are notoriously difficult to model, even in the short term. That’s why projections of a cyclone’s future path over, say, the next 48 hours are in the shape of a cone and not a straight line.
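That sensitivity is easy to demonstrate with the simplest chaotic system there is, the logistic map (my stand-in here, not anything from a weather model): two starting states differing by one part in a billion soon bear no resemblance to each other.

```python
def logistic(x0, steps, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for `steps` steps.
    At r = 3.9 the map is chaotic: nearby trajectories diverge
    exponentially."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two initial states differing by one part in a billion:
a, b = 0.2, 0.200000001
print(abs(logistic(a, 10) - logistic(b, 10)))  # still minuscule after 10 steps
# ...but a few dozen steps later the two runs have completely parted ways:
print(max(abs(logistic(a, n) - logistic(b, n)) for n in range(40, 80)))
```

Measure the starting state a billion times more precisely and you buy yourself only a few dozen more steps: that’s the cone, and why it widens.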

There is an entire branch of computer science called “V&V”, which stands for validation and verification. It’s how you can be assured that your software is up to the task it was designed for. Here’s a description from the web:

What is software verification and validation (V&V)?

Verification

820.3(a) Verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.

“Documented procedures, performed in the user environment, for obtaining, recording, and interpreting the results required to establish that predetermined specifications have been met” (AAMI).

Validation

820.3(z) Validation means confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use can be consistently fulfilled.

Process Validation means establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications.

Design Validation means establishing by objective evidence that device specifications conform with user needs and intended use(s).

“Documented procedure for obtaining, recording, and interpreting the results required to establish that a process will consistently yield product complying with predetermined specifications” (AAMI).

Further V&V information here.

Your average elevator control software has been subjected to more V&V than the computer climate models. And unless a computer model’s software has been subjected to extensive and rigorous V&V, the fact that the model says that something happens in modelworld is NOT evidence that it actually happens in the real world … and even then, as they say, “Excrement occurs”. We lost a Mars probe because someone didn’t convert a single number to metric from Imperial measurements … and you can bet that JPL subjects their programs to extensive and rigorous V&V.
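That unit mix-up is exactly the class of bug the cheapest kind of V&V catches: make the unit part of the data, so an unconverted number can’t slip through silently. A toy sketch (illustrative only, not how JPL actually does it):

```python
def impulse_in_newton_seconds(value, unit):
    """Accept an impulse with an explicit unit tag and normalize it to
    newton-seconds. An untagged or unknown unit fails loudly instead of
    silently poisoning the downstream calculation."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * 4.448222  # pound-force-seconds to newton-seconds
    raise ValueError(f"unknown unit: {unit}")

print(impulse_in_newton_seconds(10.0, "lbf*s"))  # ~44.48, not a silent 10.0
# impulse_in_newton_seconds(10.0, "furlongs")    # would raise, not pass through
```

The Mars probe was lost precisely because the raw number passed between two programs with no unit attached to argue about.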

Computer modelers, myself included at times, are all subject to a nearly irresistible desire to mistake Modelworld for the real world. They say things like “We’ve determined that climate phenomenon X is caused by forcing Y”. But a true statement would be “We’ve determined that in our model, the modeled climate phenomenon X is caused by our modeled forcing Y”. Unfortunately, the modelers are not the only ones fooled in this process.

The more tunable parameters a model has, the less likely it is to accurately represent reality. Climate models have dozens of tunable parameters. Here are 25 of them; there are plenty more.

What’s wrong with parameters in a model? Here’s an oft-repeated story about the famous physicist Freeman Dyson getting schooled on the subject by the even more famous Enrico Fermi …

By the spring of 1953, after heroic efforts, we had plotted theoretical graphs of meson–proton scattering. We joyfully observed that our calculated numbers agreed pretty well with Fermi’s measured numbers. So I made an appointment to meet with Fermi and show him our results. Proudly, I rode the Greyhound bus from Ithaca to Chicago with a package of our theoretical graphs to show to Fermi.

When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our new-born baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”

I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism. He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.
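Von Neumann’s quip is almost literal: with n adjustable parameters you can run a curve exactly through any n observations, and the perfect fit tells you nothing about points outside the data. A sketch with made-up numbers, using Lagrange interpolation (one free parameter per data point):

```python
def lagrange_fit(xs, ys):
    """Return a function that interpolates the points (xs[i], ys[i])
    exactly, via a Lagrange polynomial: one free parameter per point."""
    def poly(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return poly

# Five made-up "past observations", all hovering near 1.0,
# fitted with five parameters:
xs = [0, 1, 2, 3, 4]
ys = [1.0, 1.1, 0.9, 1.2, 1.0]
model = lagrange_fit(xs, ys)

# A flawless "hindcast" of the data it was tuned to ...
print([round(model(x), 6) for x in xs])  # → [1.0, 1.1, 0.9, 1.2, 1.0]
# ... and a wild "forecast" just two steps past the fitted range:
print(round(model(6), 6))  # → -13.9
```

Perfect agreement with the past, nonsense about the future: the elephant wiggles its trunk right off the chart.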

The climate is arguably the most complex system that humans have tried to model. It has no less than six major subsystems—the ocean, atmosphere, lithosphere, cryosphere, biosphere, and electrosphere. None of these subsystems is well understood on its own, and we have only spotty, gap-filled rough measurements of each of them. Each of them has its own internal cycles, mechanisms, phenomena, resonances, and feedbacks. Each one of the subsystems interacts with every one of the others. There are important phenomena occurring at all time scales from nanoseconds to millions of years, and at all spatial scales from nanometers to planet-wide. Finally, there are both internal and external forcings of unknown extent and effect. For example, how does the solar wind affect the biosphere? Not only that, but we’ve only been at the project for a few decades. Our models are … well … to be generous I’d call them Tinkertoy representations of real-world complexity.

Many runs of climate models end up on the cutting room floor because they don’t agree with the aforesaid programmer’s beliefs, understandings, wrong ideas, and misunderstandings. They will only show us the results of the model runs that they agree with, not the results from the runs where the model either went off the rails or simply gave an inconvenient result. Here are two thousand runs from 414 versions of a model running first a control and then a doubled-CO2 simulation. You can see that many of the results go way out of bounds.

As a result of all of these considerations, anyone who thinks that the climate models can “prove” or “establish” or “verify” something that happened five hundred years ago or a hundred years from now is living in a fool’s paradise. These models are in no way up to that task. They may offer us insights, or make us consider new ideas, but they can only “prove” things about what happens in modelworld, not the real world.

Be clear that having written dozens of models myself, I’m not against models. I’ve written and used them my whole life. However, there are models, and then there are models. Some models have been tested and subjected to extensive V&V and their output has been compared to the real world and found to be very accurate. So we use them to navigate interplanetary probes and design new aircraft wings and the like.

Climate models, sadly, are not in that class of models. Heck, if they were, we’d only need one of them, instead of the dozens that exist today and that all give us different answers … leading to the ultimate in modeler hubris, the idea that averaging those dozens of models will get rid of the “noise” and leave only solid results behind.

Finally, as a lifelong computer programmer, I couldn’t disagree more with the claim that “All models are wrong but some are useful.” Consider the CFD models that the Boeing engineers use to design wings on jumbo jets or the models that run our elevators. Are you going to tell me with a straight face that those models are wrong? If you truly believed that, you’d never fly or get on an elevator again. Sure, they’re not exact reproductions of reality, that’s what “model” means … but they are right enough to be depended on in life-and-death situations.

Now, let me be clear on this question. While models that are right are absolutely useful, it certainly is also possible for a model that is wrong to be useful.

But for a model that is wrong to be useful, we absolutely need to understand WHY it is wrong. Once we know where it went wrong we can fix the mistake. But with the complex iterative climate models, with their dozens of required parameters, where the output of one cycle is used as the input to the next cycle, and where a hundred-year run with a half-hour timestep involves 1.75 million steps, determining where a climate model went off the track is nearly impossible. Was it an error in the parameter that specifies the ice temperature at 10,000 feet elevation? Was it an error in the parameter that limits the formation of melt ponds on sea ice to only certain months? There’s no way to tell, so there’s no way to learn from our mistakes.
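To get a feel for the scale, here’s a toy illustration (emphatically not a climate model): one tuned quantity biased by a single part in a million, applied at every one of those 1.75 million timesteps.

```python
# A 100-year run at a half-hour timestep:
steps = int(100 * 365.25 * 48)   # = 1,753,200 timesteps
print(steps)

# Carry a state through the run with a one-part-per-million
# multiplicative bias at every step:
state = 1.0
for _ in range(steps):
    state *= 1.000001

# (1 + 1e-6) ** 1_753_200 ≈ e ** 1.75 ≈ 5.8: the microscopic per-step
# error has multiplied the final result nearly sixfold.
print(state)
```

An error far too small to see in any single step, compounded over the run, dominates the answer. Now imagine dozens of parameters, each with its own unknown bias.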

Next, all of these models are “tuned” to represent the past slow warming trend. And generally, they do it well … because the various parameters have been adjusted and the model changed over time until they do so. So it’s not a surprise that they can do well at that job … at least on the parts of the past that they’ve been tuned to reproduce.

But then the modelers will pull the modeled “anthropogenic forcings” like CO2 out of the model, and proudly proclaim that since the model can no longer reproduce the past gradual warming, the anthropogenic forcings must be the cause of that warming … I assume you can see the problem with that claim.

In addition, the gridsize of the computer models is far larger than important climate phenomena like thunderstorms, dust devils, and tornados. If the climate model is wrong, is it because it doesn’t contain those phenomena? I say yes … computer climate modelers say nothing.

Heck, we don’t even know if the Navier-Stokes fluid dynamics equations as they are used in the climate models converge to the right answer, and near as I can tell, there’s no way to determine that.

To close the circle, let me return to where I started—a computer model is nothing more than my ideas made solid. That’s it. That’s all.

So if I think CO2 is the secret control knob for the global temperature, the output of any model I create will reflect and verify that assumption.

But if I think (as I do) that the temperature is kept within narrow bounds by emergent phenomena, then the output of my new model will reflect and verify that assumption.

Now, would the outputs of either of those very different models be “evidence” about the real world?

Not on this planet.

And that is the short list of things that are wrong with computer models … there’s more, but as Pierre said, “the margins of this page are too small to contain them” …

My very best to everyone, stay safe in these curious times,

w.

PS—When you comment, quote what you’re talking about. If you don’t, misunderstandings multiply.

H/T to Wim Röst for suggesting I write up what started as a comment on my last post.

Joe Wagner
March 12, 2021 10:10 am

Wow! Talk about memory lane!!
(and thanks for that first picture, makes my Wire Spaghetti seem less horrendous now)

 The more tunable parameters a model has, the less likely it is to accurately represent reality.

My thesis advisor in college reached essentially the same conclusion: the more parameters your algorithm needed, the less real it was. That was for computer vision algorithms, but I’ve found it fits everywhere …

Jeff Labute
Reply to  Joe Wagner
March 12, 2021 12:07 pm

Spaghetti wire wrap was fairly common, and repairable too. I built an 8085 based computer using wire wrap. I also fixed a radio observatory ADC board that used wire wrap all in the early 80s. In regards to an old computer with 768KB of core memory, I can only imagine that would take up an enormous amount of space. I saw 4KB of core memory in an old BASIC-4 machine and the core memory was 1/2″ thick and the depth and width of the steel cabinet it was in.

MarkW
Reply to  Jeff Labute
March 12, 2021 7:45 pm

The problem with wire wrap is that as clock frequencies go up, the wires stop being wires, and start being antennas.

Joe Crawford
Reply to  MarkW
March 13, 2021 7:17 am

Mark, you actually have the same problem with long traces on PC boards. Especially on single layer boards. Multi-layer boards make that less of a problem now.

MarkW
Reply to  Joe Crawford
March 13, 2021 7:58 pm

True, however PC traces are both more controllable as well as being completely repeatable. In PC traces you can lay down guard traces surrounding problematic traces. This is not possible with wire wrap.

Joe Crawford
Reply to  MarkW
March 14, 2021 7:47 am

Sure, but I have had to wire wrap a few twisted pairs to fix problems that had no easier solution, where rerouting didn’t take care of ’em.

David Brunfeldt
Reply to  MarkW
March 14, 2021 11:36 am

In 1973, I wire wrapped several 11 inch by 8 inch boards populated with sockets to hold 7400 series gate ICs. It was a 4 bit microcontroller, complete with magnetic core memory. We got the thing going, but only briefly and only if it never moved.
As soon as we put it into a truck to gather data, it stopped working forever.
The problem was sockets and wire wrap, and connectors between boards.

The reason that computers work today is the massive integration of the transistors, gates and functions…all done on one chip. Interconnects are the killer. Connections are unreliable. Put in thousands of socket pins and wire wrap connections and you have junk.

Hivemind
Reply to  Jeff Labute
March 12, 2021 9:03 pm

Indeed, wire wrap was often used in small volume electronics. Even big things like computers, but when only a few were made, it was much cheaper than designing PCBs.

Joe Crawford
Reply to  Jeff Labute
March 13, 2021 7:12 am

Wire wrap got really interesting back in the early 60’s when they were using Teflon for insulation. Took ’em a while to figure out that it suffered from cold flow and when wrapped under even slight pressure it eventually developed highly intermittent shorts. We had to field replace a whole lot of backplanes when it was finally discovered.

woodsy42
Reply to  Joe Crawford
March 13, 2021 3:35 pm

Yes, but core memory was great fun. I had the job of running a PDP11 and you could halt the processor, switch the power off overnight, go back and switch it on the next morning and it simply continued from where it had stopped.

richardw
March 12, 2021 10:15 am

Great post, Willis, as usual.

Wow, that Datapoint 2200 brought back memories for me. I worked with Datapoint systems in the late 70s and 80s, then for Datapoint themselves. All the Ford dealers in Europe used business systems written in Databus. Sadly, Datapoint lost its way along with most other minicomputer manufacturers, but for a while led the industry in networking (ARCnet) and desktop videoconferencing, used by the US military and I think NATO.

Anyway, back on topic, you are so right about models. The only 100% accurate model of reality is reality itself. Everything else is a more or less useful approximation and subject to significant amplification of human error.

ralfellis
Reply to  richardw
March 12, 2021 1:34 pm

You sure that was not Datashare?
Our Datapoint 2200s used Datashare.
R

michel
Reply to  ralfellis
March 12, 2021 2:25 pm

Datashare was the multi-user environment; Databus was the programming language. Datashare was the interpreter that allowed multiple intelligent terminals to communicate with each other without a host.

They had a DOS also, and to go with it a sort of primitive Bash type scripting language called Chain.

Took a lot of hard work to destroy Datapoint.

Felix
Reply to  michel
March 12, 2021 3:31 pm

They were the original contractors with Intel for the 8008, I think I remember. They didn’t like the result and came up with their own instruction set (and I think they did a better job, especially with the follow-on 5500). They told us once we were the second biggest customer after Safeway, and every once in a while they’d ship us an extra printer or processor, then take it back with apologies a week later. A year or two later, they imploded when a new accountant got curious about a bunch of hotel rooms being rented long-term, and found that someone, wanting to continue their streak of profitable quarters, had ordered and shipped a few extra units, then taken them back. Eventually it got out of hand, and when the financial skulduggery went public, they fell apart. Or so I heard. It did explain the extra equipment they’d send and take back. Might just be confirmation bias …

Leonard
Reply to  richardw
March 12, 2021 1:46 pm

Great post Willis. It is nice to be reminded of the old days with computers and models and numerical solutions to the models.
I still have a “Trash 80” somewhere in my office. Fun little machine.
About models and their stability, I have a short story from a partial differential equation (PDE) unit in an Engineering Mathematics class I took.

One of our homework assignments was to use a particular numerical method to solve a given PDE. Well, it turned out later that this tricky professor had asked us to use a numerical scheme that was unconditionally unstable for any (delta x, delta t) we selected. In previous assignments we had learned that too-large (dx, dt) choices could be improved by using smaller (dx, dt) values.

When we turned our homework in, he took a few minutes to go through our homework papers. Most of the class gave up when the procedure blew up, and blew up sooner and more violently with smaller delta t values. However, there was one clever student who turned in some results that appeared to be a solution.

The next class day the professor handed back our homework papers, and everyone who gave up and said that a numerical solution could not be found got an A on the assignment. He proceeded to ask the one guy who got a solution to stay after class. He then put the problem and the numerical method on the blackboard and demonstrated to us that no solution could be obtained with that numerical scheme, and that the smaller the delta t we used, the quicker and more violently the “solution” blew up. He wrote the numerical scheme on the blackboard and labeled it the “unstable scheme”. He then demonstrated what weighting factors we had to use to make the scheme conditionally stable.
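The story doesn’t name the scheme, but the textbook example of this trap is Richardson’s leapfrog scheme for the heat equation, which is unconditionally unstable: it blows up for every choice of dx and dt. A sketch in Python (illustrative, not the actual homework problem):

```python
import math

def richardson_heat(nt, nx=21, dt=1e-4):
    """Richardson's leapfrog scheme for the heat equation u_t = u_xx.
    Unconditionally unstable: it blows up for EVERY dx and dt. Here
    dt/dx**2 = 0.04, small enough that plain forward Euler would be
    perfectly stable, yet this scheme still explodes."""
    dx = 1.0 / (nx - 1)
    r = 2 * dt / dx ** 2
    # Smooth initial bump with zero boundaries, plus a tiny alternating
    # perturbation standing in for roundoff error:
    u_old = [math.sin(math.pi * i * dx) for i in range(nx)]
    u = [v + 1e-12 * (-1) ** i for i, v in enumerate(u_old)]
    for _ in range(nt):
        u_new = u[:]
        for i in range(1, nx - 1):
            u_new[i] = u_old[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_old, u = u, u_new
    return max(abs(v) for v in u)

print(richardson_heat(50))    # still looks perfectly reasonable ...
print(richardson_heat(400))   # ... then the roundoff-sized mode explodes
```

The nasty part is exactly the professor’s lesson: for a while the run looks fine, and the failure arrives no matter how small you make the timestep.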

That was over 50 years ago and I still remember the lessons (not the details) of checking (V&V) our numerical solutions under a variety of circumstances. Here is a small list of the lessons learned that day but appreciated more and more with the passing years.

1) When presented with a numerical solution to differential equation(s) the solution is always an approximation.
2) Good engineers, scientists always include some analyses of the likely range of errors in the results.
3) Producing reasonable numerical results to PDES is not for the novice, tricky, or dishonest person.
4) Numerical solution of even a single PDE is often difficult, and quantifying the errors is itself an approximation.
5) God save us from those who produce numerical solutions to a large number of coupled or linked PDEs and claim they know the solutions to the models are correct and accurate to a specified range of uncertainty, or worse, present the solutions as tested and perfect.
6) In hindsight, it was a blessing to learn mathematics, and ways to apply and develop computer solutions to mathematical problems, in the dawning age of digital computers.

Thanks again Willis.
I hope to see more posts from you on these modeling/computer topics.

MarkW
March 12, 2021 10:19 am

If you truly believed that, you’d never fly or get on an elevator again. Sure, they’re not exact reproductions of reality, that’s what “model” means … but they are right enough to be depended on in life-and-death situations.

That’s not technically true.
The models are true enough to commit to building engineering models. Those models are tested thoroughly. Then full sized units are built, and those are again tested thoroughly. Only at that point do you commit to production.

OweninGA
Reply to  MarkW
March 12, 2021 11:55 am

We actually have certain configurations where we know the model output will not be up to snuff. Most of them work great if you keep the flow in the laminar configuration, but they can get a little hinky when turbulent flow emerges, particularly in the transonic arena. That is why P-51s and P-38s that broke the sound barrier in a dive tended to lose their wings (the models don’t show that). The forces were definitely not linear. There are models for transonic flight, but the physical models still get put through their paces in a wind tunnel to double-check the output, and surprises still occur.

beng135
Reply to  OweninGA
March 13, 2021 9:18 am

The F-86 Sabre could also go supersonic in a dive — they fared better w/the swept wings & fully-movable elevators.

halb
Reply to  MarkW
March 12, 2021 12:07 pm

I can relate to MarkW’s comment:

Back around the late 1980s to early 1990s, I worked on a hybrid AI and numerically based code to “tune” a high-current, pulsed particle accelerator. The code needed to adjust various “steering” coils to keep the beam going down the (near) center of the beam tube. The accelerator pulsed at approximately 1 Hz, with pulses lasting tens of nanoseconds. The beam tube could not withstand continued strikes of the multi-megavolt, few-kiloamp electron beam. The beam position was measured at intervals. Thus, the task was to adjust the coils to center the next beam pulse based on the previous pulses’ positions down the beam tube.

One day, I was talking with the chief numerical modeling physicist about discrepancies we had encountered between the actual beamline performance and the modeled predictions. He made the remark (which I have to paraphrase, as it’s been too long to provide an actual quote): “Our models are good enough to design a beamline, but not good enough to predict exactly how it will operate.” At the time, I was taken aback. After all, this was just particles (albeit relativistic ones) interacting with electric and magnetic fields. But then, as I considered all the manufacturing imperfections of all the magnets and electric coils, cascaded with all their alignment errors, his statement made complete sense; this was the whole point of the “tuning” system we were constructing.

John Garrett
March 12, 2021 10:24 am

Mr. Eschenbach,

Brilliant. Thanks.

It’s good to be reminded of things like this. We forget— and we also forget that the young have little or no knowledge or experience of history.

CDC = Control Data Corporation
DataPoint Corporation
DEC = Digital Equipment Corporation
Data General Corporation

Like you, I started young. I’ve been through punched tape and Hollerith cards and Visicalc and Lotus 1-2-3 and Excel and Reverse Polish Notation (RPN) and Fortran and Cobol and Basic.

Throughout it all, I’ve watched with astonishment and amazement as people treated the output from computer models as if it were the revealed truth.

God bless von Neumann. He knew— and he communicated that knowledge better than anybody ever did or ever will:

“Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk”.
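von Neumann’s quip is literally true of interpolation: n free parameters will pass a curve exactly through n points, however meaningless the wiggles between them. A minimal sketch with hypothetical data points, in plain Python:

```python
def lagrange_fit(points):
    """Return the degree-(n-1) polynomial passing exactly through n points,
    built by Lagrange interpolation -- one free parameter per point."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Five "observations" -- any five values will do, however wiggly.
pts = [(0, 3.0), (1, -2.0), (2, 7.0), (3, 0.5), (4, 5.0)]
p = lagrange_fit(pts)
# The five-parameter fit reproduces every observation exactly.
```

The fit is perfect at every observation, which says nothing about its value anywhere else — the heart of the overfitting complaint about heavily parameterized models.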

Last edited 3 months ago by John Garrett
HAS
Reply to  John Garrett
March 12, 2021 12:31 pm

Started with Algol on an Elliott 503. The most fun language was Forth (still used to control telescopes, I understand). You could buy the core on a plug-in cartridge for the Commodore 64 I got for the kids. It’s a threaded, compact language that runs faster than interpreted languages like the Basic in the C64.

The C64 was the last machine I ever bothered getting down to machine code on. The Canon SE-100 100-memory desktop programmable calculator (early 70s) was probably the penultimate. That’s all you got.

The IPCC process for climate modelling is part of the problem, but if you must have it then how about a requirement to hold out all information on 1930-1990 in their construction (including parameter estimation), and then verify out of sample against that (plus report actual propagation of uncertainty)?
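The holdout suggestion is ordinary out-of-sample validation. A toy sketch with entirely made-up data — the years, series, and linear model here are invented for illustration, not a real climate record:

```python
# Hypothetical toy series standing in for an observed record:
# a 0.01/year trend plus small deterministic pseudo-noise.
years = list(range(1900, 2021))
series = [0.01 * (y - 1900) + ((y * 37) % 10 - 4.5) * 0.02 for y in years]

# Hold out 1930-1990 entirely; fit only on the remaining years.
holdout = [(y, v) for y, v in zip(years, series) if 1930 <= y <= 1990]
train = [(y, v) for y, v in zip(years, series) if not (1930 <= y <= 1990)]

# Ordinary least-squares line fitted on the training years only.
n = len(train)
mx = sum(y for y, _ in train) / n
my = sum(v for _, v in train) / n
slope = (sum((y - mx) * (v - my) for y, v in train)
         / sum((y - mx) ** 2 for y, _ in train))
intercept = my - slope * mx

# Worst-case out-of-sample error on the held-out 1930-1990 window.
oos_error = max(abs(intercept + slope * y - v) for y, v in holdout)
```

The point is that the 1930–1990 window never touches the fit; the model earns credibility only by predicting data it has not seen.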

Phillip Bratby
Reply to  John Garrett
March 12, 2021 1:42 pm

I too recall the horrors of dropping a stack of Hollerith cards and trying to splice together punched paper tape.

Mr.
Reply to  Phillip Bratby
March 12, 2021 5:00 pm

Phillip, I wondered if someone would mention the dreaded calamity of a dropped deck of punch cards.

Back in ’69 we had to suffer the boss’ high-school son coming in to work during his school vacation breaks.
Nice kid, but he had feet 4 sizes larger than his adolescent frame required, and we used to have bets on just when he would stumble while carrying a tray of card decks.
Of course it happened, and as Murphy’s Law dictates, it was month-end data entry.

His Dad the boss accepted the overtime cost philosophically.

Neil Jordan
Reply to  Phillip Bratby
March 12, 2021 8:05 pm

Willis, thanks for the memories. I recall from card days that there was a command for the card punch to put sequential card numbering in columns out in right(?) field. The sequential numbers would print out along the tops of the cards. If dropped, the deck could be reconstructed. I failed to find the command for you. As a consolation, if your computer does not have “Pi”, try ARCTAN(1.0)*4, working in radians. You will get Pi to the limit of the computer innards instead of what you might define Pi to be, like 3.14.

Mike Salish
Reply to  Neil Jordan
March 13, 2021 10:35 am

For a reasonable approximation try 355/113
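Both shortcuts above are easy to check. A quick sketch in Python, standing in for the Fortran-era intrinsics:

```python
import math

# atan(1) is pi/4 radians, so multiplying by 4 recovers pi
# to the full precision of the machine's floating point.
pi_machine = 4.0 * math.atan(1.0)

# The classic rational shortcut 355/113 is good to about 7 digits.
pi_rational = 355 / 113
```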

Clyde Spencer
Reply to  Phillip Bratby
March 12, 2021 8:32 pm

That’s why we drew a line across the top of the deck in its box. If you DID drop it the line helped to show if any of the cards were out of sequence when you put them back in the box.

Roger Knights
Reply to  Clyde Spencer
March 13, 2021 1:46 am

You meant a diagonal line, right? Otherwise it wouldn’t be a help.

Roger Knights
Reply to  Phillip Bratby
March 13, 2021 1:45 am

I recall always wearing a rubber band around my wrist, like many other co-workers, so as to have one at hand to keep card decks together.

Jan de Jong
Reply to  Phillip Bratby
March 13, 2021 1:26 pm

I remember a closet with paper tape subroutine rolls… Fortran I believe. Card stacks, Algol60 and Algol68 came later.

John in Oz
Reply to  John Garrett
March 12, 2021 4:34 pm

Might I add:
Bunker Ramo (used on the Ikara system I describe further down) and
Perkin-Elmer (which became, in part, Concurrent Computer Corp that I worked for over 17 years)

More memories

Proeng
Reply to  John Garrett
March 13, 2021 10:44 pm

My first computer was an Apple II which had only 16K bytes but lots of mathematical operators, and you could save a program and data on a tape recorder. One of my first programs was a spreadsheet, before VisiCalc was available. Spreadsheets normally have only two dimensions, x and y, but sheets can be piled up and cross-referenced, e.g. monthly accounts which add into a year-end sheet. My spreadsheet program, using two FOR loops, could have as many dimensions as one wished. That is a feature of mathematics, which is not bound by physical limits. I have a book on string theory (The Shape of Inner Space by Shing-Tung Yau) which mentions 10 dimensions for the universe. Most (if not all) climate models ignore physical reality, e.g. they ignore the 2nd law of thermodynamics, and they ignore that above a wavelength of 10 microns, about the peak emitted from the Earth’s surface, CO2 only absorbs and emits at a wavelength around 14.8 microns (i.e. the absorptivity is close to zero, not 1 as claimed). I do not program any more. I can do all I want with Excel in my old age.

Rob_Dawg
March 12, 2021 10:26 am

> Your average elevator control software has been subjected to more V&V than the computer climate models.

That’s because elevator software has direct impacts on people’s expected comfort, livelihoods and behavior unlike clima… oh… wait. Never mind.

Michael Lemaire
March 12, 2021 10:34 am

Hi Willis, nice article! I am 4 years younger (less old?) than you but had a somewhat similar start with computers, and therefore really enjoyed your article. However, I am surprised you called MS-DOS a language. I first used it as QDOS (Quick and Dirty Operating System) from Seattle Computer Products before Bill turned it into MS-DOS, but it was never a language, just an OS. Why do you call it a language?

richardw
Reply to  Michael Lemaire
March 12, 2021 10:46 am

I think DOS stood for Disk Operating System. Datapoint had its own DOS operating system. Its first microprocessor – used in the 2200 – was claimed by some to be the direct forerunner of Intel’s first 8-bit chip, the 8008.

Reply to  richardw
March 12, 2021 2:12 pm

It was possible to do batch programming; the later versions got more complex, from Windows NT on, if I remember correctly.

Last edited 3 months ago by Krishna Gans
Clyde Spencer
Reply to  Krishna Gans
March 12, 2021 8:37 pm

Yes, I was taken aback a little by calling a Disk Operating System a computer language. However, it was the ability to write batch files to tell the OS what to do and when, particularly when booting, that I think qualifies it as a language.

jorgekafkazar
Reply to  Krishna Gans
March 13, 2021 12:16 pm

Batch processing made a big difference, especially when you started the job just before leaving for lunch.

Christopher R Pastel
Reply to  richardw
March 12, 2021 6:24 pm

I thought the very first Intel microprocessor chip was the 4004. That’s what we studied in college right from the Intel manual.

Carlo, Monte
Reply to  Christopher R Pastel
March 12, 2021 9:20 pm

It was; the four indicated it was a 4-bit processor. Intel made it big with the later 8-bit 8080. Motorola had the 6800 8-bit processor.

MarkW
Reply to  Carlo, Monte
March 13, 2021 9:12 am

There was an 8008 that was, I believe, an 8-bit version of the 4004. The op-code set was fairly limited.
I thought the 6500 line was Motorola’s first microprocessor.

beng135
Reply to  Carlo, Monte
March 13, 2021 9:28 am

The Commodore VIC-20 had the MOS Technology 6502 8-bit processor — my first computer. Ran on “VIC-BASIC” or straight machine language if you could do it.

Last edited 3 months ago by beng135
Graeme#4
Reply to  beng135
March 13, 2021 5:03 pm

I developed some of the earliest electronic ticketing systems for public transport by writing assembly code for the 6502. Amazing what we could achieve with a 2k EEPROM.

Last edited 3 months ago by Graeme#4
Shawn Marshall
Reply to  Christopher R Pastel
March 14, 2021 3:51 am

That’s what we programmed in machine code in college – had to ‘jam’ the instructions.

Erik Magnuson
Reply to  Michael Lemaire
March 12, 2021 11:08 am

Oh wow, my first computer was also an SCP machine, though the OS was called 86-DOS by the time I bought it. The Seattle small assembler was really fast.

Willis’s comments about computer models remind me of Bob Pease’s comment on SPICE: “SPICE will lie to you”. Modeling circuitry is a much simpler problem than modeling climate, and also orders of magnitude easier to verify. This also goes along with “all models are wrong, but some are useful”: while SPICE does not give a perfect answer to circuit simulation, the answer it gives is generally close enough to be useful – provided that the circuit model was sufficiently detailed.

Like Willis, my first exposures to computer programming were on CDC machines, the first being a CDC 1700 and the second a CDC 6400.

rbabcock
Reply to  Michael Lemaire
March 12, 2021 11:12 am

I’m right with you. I remember taking a stack of computer punch cards to be processed (FORTRAN) and dropping them on the floor in 1969. About 4 hours of work scrambled beyond use. The first company I started provided computer services to IBM midrange customers. The second was based on OS/2 servers and dialup. Other than hardware, the biggest step forward has been object-based programming languages. You can hire an experienced programmer and turn out the work of literally 60 programmers from the 1980s.

Curious George
Reply to  rbabcock
March 12, 2021 11:36 am

A classic story of a guy with the same scrambled mishap. He used a version of Fortran that allowed more than one statement on a card. He then developed a programming style where each card had a label, and read like
860 X=X+1; GOTO 870
Now he could shuffle cards and the program still ran ..
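The trick can be simulated: when every card carries its own label and an explicit GOTO, execution order is independent of deck order. A sketch where the labels and dictionary dispatch are my own stand-ins for the Fortran:

```python
import random

# Each "card" = (label, action, next_label); next_label None ends the run.
cards = [
    (860, lambda s: s.__setitem__("x", s["x"] + 1), 870),
    (880, lambda s: s.__setitem__("x", s["x"] * 2), None),
    (870, lambda s: s.__setitem__("x", s["x"] + 10), 880),
]

def run(deck, start=860):
    """Execute the deck by following labels, ignoring physical card order."""
    state = {"x": 0}
    table = {label: (action, nxt) for label, action, nxt in deck}
    label = start
    while label is not None:
        action, label = table[label]
        # note: 'label' now holds the next label; run the card's action
        action(state)
    return state["x"]

shuffled = cards[:]
random.shuffle(shuffled)
# Same result no matter how the deck is shuffled.
```

Since `run()` follows the GOTO chain from label 860, dropping and reshuffling the “deck” changes nothing.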

HAS
Reply to  rbabcock
March 12, 2021 11:48 am

Very quickly started putting a commented out sequence number at the end of each card.

MarkW
Reply to  HAS
March 12, 2021 12:27 pm

Just remember to not number sequentially. Just in case you discover you have to add a couple of lines in the middle of your program.

Reply to  HAS
March 12, 2021 12:46 pm

I soon learnt that a diagonal line drawn across the edge of the stack made re-sorting a lot easier 🙂

And I too was taken aback by the statement that “Microsoft Disk Operating System” (aka MS-DOS) was a programming language.

Another Scott
Reply to  StuM
March 12, 2021 8:05 pm

Maybe it’s a reference to the scripted .bat files you can create for MS-DOS?

Last edited 3 months ago by Another Scott
Tim Gorman
Reply to  HAS
March 12, 2021 1:20 pm

We actually had a sorter in the computer room that would sort the cards if you put a sequence number on the card.

Carlo, Monte
Reply to  rbabcock
March 12, 2021 12:33 pm

This is why a big magic marker was an essential accessory for punch card programming—with a diagonal stripe across the top it was way easier to recover from disaster.

ralfellis
Reply to  Carlo, Monte
March 12, 2021 1:38 pm

Ah yes – the fat felt-tip marker-pen.
I remember now…
RE

Otway dreamer
Reply to  Michael Lemaire
March 12, 2021 12:19 pm

You can program in DOS using script or .bat files

mikebartnz
Reply to  Michael Lemaire
March 12, 2021 12:24 pm

I was going to comment about MSDOS being an OS rather than a language. It often came with Basic bundled with it.

Dave L
Reply to  mikebartnz
March 12, 2021 1:51 pm

A few years before SB/MS/PC DOS, there was CP/M 80.

The CP/M distribution typically came with a macro assembler – that was useful, as common word processors often needed some customization to deal with particular printers. WordStar, for example, came with source listings for Diablo 630 daisywheel printers and Epson ‘Graftrax’-augmented dot-matrix printers.

John Adams
Reply to  Dave L
March 12, 2021 6:53 pm

Yep. Used them to make instruction manuals in the 80’s.

Loren C. Wilson
Reply to  Dave L
March 12, 2021 7:22 pm

WordStar and SuperCalc. SuperCalc was an early spreadsheet. It had a feature I still haven’t seen in Excel – being able to copy a formula or group of formulas with either relative or absolute references, without having written the formula specifically with either type of reference.

MarkW
Reply to  Dave L
March 13, 2021 9:17 am

My ex-wife worked for a lawyer who had purchased some kind of word processing system. Might have been a Wang. What I remember about it was that it used a non-standard format for its floppy disks. It also didn’t include a formatting program. If you wanted new floppies, you had to buy them from the company. Something like $10 per disk.

MarkW
Reply to  Michael Lemaire
March 12, 2021 12:39 pm

My first experience was also in college. A CDC, I do not remember the number.
In my first class we used punch cards. A few months later they were using teletype machines for data entry. Used a single line editor to do text entry. Might have been ed.

The first four languages that I learned were Fortran, Pascal, PL/M and ASM86.
For the ASM86 we used an Intel development station. It had four 8-inch floppy drives, and the OS disk always went in the first drive. The editor/compiler went in the second slot, and the third and fourth slots were for your program and data.
I never tried to move one of those development stations, but it looked like it would take at least two people.
In my first job out of college we used 6510 assembly. I also redesigned the circuitry and re-laid out the circuit boards. There was something very satisfying about laying out tape: both puzzle and art. In my second job I learned C, and that’s pretty much all I’ve used since then. I have done some C++ and Python though.

Interestingly enough, a few years back I interviewed with an elevator company that was still using PL/M for most of their code. All the new work was being done in C++, but they had decades’ worth of code that hadn’t been retired yet and still had to be maintained. Their PL/M guy had indicated a desire to retire.

I thought I was a shoo-in for that job. After all, how many people have ever worked with PL/M? I was wrong.

Alan Robertson
Reply to  MarkW
March 12, 2021 8:53 pm

Back when all of us were learning programming along with being keypunch operators, a friend (a joker, no less) inserted his stack of cards into the reader and headed out the door, just a moment before the wall of printers erupted, tractoring their entire box of paper onto the floor.
Of course, he said it was an unplanned event.

Mike McMillan
Reply to  Michael Lemaire
March 12, 2021 1:06 pm

I’m only 2 years younger and started with Algol on a Burroughs B5500, which I’m guessing ran around 2 MHz (that’s 0.002 GHz). I recall it had a couple megabytes of hand-woven-in-Haiti core memory cards. Punch card programs and one run overnight to get results from a chain printer. Graduated to BASIC, then C, then 286/287 ASM. My masterpiece was loading the Mandelbrot algorithm entirely onto the 287 math coprocessor. Aaaannd, I have proof:

[image]
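The heart of what he squeezed onto the 287 is tiny — the escape-time iteration z ← z² + c. A plain-Python sketch of that core loop (no coprocessor heroics implied):

```python
def mandelbrot_iters(c: complex, max_iter: int = 100) -> int:
    """Return the iteration count at which z escapes |z| > 2,
    or max_iter if c appears to lie in the Mandelbrot set."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

# c = 0 never escapes; c = 1 escapes almost immediately.
inside = mandelbrot_iters(0j)
outside = mandelbrot_iters(1 + 0j)
```

An image is just this function evaluated over a grid of c values, with the iteration count mapped to a color.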

Brian Jackson
Reply to  Willis Eschenbach
March 12, 2021 2:00 pm

CP/M is not a programming language.

https://en.wikipedia.org/wiki/CP/M

Computer professionals reading this see it as a heck of a lot of BS.

Last edited 3 months ago by Brian Jackson
Brian Jackson
Reply to  Willis Eschenbach
March 12, 2021 4:52 pm

Consider yourself sued. All you are doing is bullshitting the non-programmers on this site.

“I call that a language”

That shows how ignorant you are of computer programming.

Last edited 3 months ago by Brian Jackson
Clyde Spencer
Reply to  Willis Eschenbach
March 12, 2021 8:45 pm

Willis
What Jackson did is called “nit picking,” a form of red herring.

fred250
Reply to  Brian Jackson
March 12, 2021 9:11 pm

Brian does things using grunts and groans and huffing and puffing !

That is his language. !

As to his knowledge of computer programming….. roflmao !!

beng135
Reply to  Brian Jackson
March 13, 2021 9:47 am

Brainless, you are the lowest sort of troll, and full of hatred.

Last edited 3 months ago by beng135
paul courtney
Reply to  Brian Jackson
March 13, 2021 2:35 pm

Mr. Jackson: Thank you so much, I am a non-programmer and I was totally bull-shitted into thinking CPM was a language. Since it was so critical to the article, you sure showed him, huh? Anyway I sure would have been embarrassed at my next soiree, where I planned to impress the ladies with my “what’s your sign? Did you know CPM is a language?” patter. Shame on Mr. E for bull-shitting like that.
Mr. Jackson, THAT is bull-shitting. Hope it helps you recognize.

Mr.
Reply to  Willis Eschenbach
March 12, 2021 5:15 pm

Willis, were you sufficiently masochistic to use MP/M?

Alan Robertson
Reply to  Willis Eschenbach
March 12, 2021 8:39 pm

From that, one might expect your tendency to weigh usefulness against time spent to have kicked in when confronted with an Altair 8800.

Last edited 3 months ago by Alan Robertson
Michael Lemaire
Reply to  Willis Eschenbach
March 12, 2021 5:51 pm

CP/M (Control Program for Microcomputers, the /M styled to resemble a Greek mu, for micro) was not a language either, but still an operating system made by Digital Research for Intel’s 8-bit 8080 microprocessor.

Don’t confuse simple commands in an OS (DIR, etc.) with instructions in a programming language which can be used to write any sort of program.

I am sorry to sound pedantic but with your programming skills (way superior to mine) I thought this was understood.

Joe Crawford
Reply to  Michael Lemaire
March 13, 2021 8:04 am

Michael,
Depending on the hardware design it doesn’t take but a single instruction to build any program (without I/O of course): https://en.wikipedia.org/wiki/One-instruction_set_computer.

Add some form of input and output instruction and you can then build a 3-instruction computer that will perform most programming tasks.
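The linked one-instruction idea is easy to demonstrate. A minimal interpreter for SUBLEQ (“subtract and branch if less than or equal to zero”), with a hypothetical three-instruction program that adds two memory cells — my own toy example, not from the linked page:

```python
def subleq(mem, pc=0, max_steps=10_000):
    """Run a SUBLEQ one-instruction machine: mem[b] -= mem[a];
    if the result is <= 0, jump to c, else fall through.
    A negative jump target halts the machine."""
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        if mem[b] <= 0:
            if c < 0:
                return mem
            pc = c
        else:
            pc += 3
    raise RuntimeError("step limit exceeded")

# Program: add mem[9] into mem[10] via a negate-and-subtract trick.
# Cells 0..8 are instructions; 9 and 10 are data; 11 is a scratch zero.
prog = [9, 11, 3,     # scratch -= mem[9]   (scratch becomes -7)
        11, 10, 6,    # mem[10] -= scratch  (mem[10] gains +7)
        11, 11, -1,   # clear scratch and halt
        7, 35, 0]     # data: 7, 35, scratch
```

Every arithmetic and control-flow construct has to be built from that single subtract-and-branch, which is exactly the point of the OISC argument.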

MarkW
Reply to  Willis Eschenbach
March 12, 2021 8:13 pm

I too noticed the DOS/language comment and my first thought was that I didn’t agree. However since it was neither relevant, nor important, I said nothing.
However as certain individuals kept trying to turn this molehill into a mountain, I thought more about it.
The more I thought about the difference between an operating system and a computer language, the vaguer it became. Eventually I decided that it didn’t really matter and that the whole issue was rapidly becoming a how-many-angels-can-dance-on-the-head-of-a-pin argument.
Brian, your apparent belief that you, and only you, have the right and intelligence to decide such issues for everyone is rapidly becoming your trademark.

Michael E McHenry
March 12, 2021 10:36 am

This reinforces my perception that computer modeling of climate is the ultimate intellectual hubris.

MarkW
Reply to  Michael E McHenry
March 12, 2021 12:40 pm

Modeling can be useful in helping you figure out what it is you don’t know yet.
It’s not useful for anything beyond that.

Robbin
March 12, 2021 10:37 am

Thank you Willis, I enjoyed your post and learned something too….

Ossqss
March 12, 2021 10:50 am

Thanks for the enjoyable read/ride Willis. I am reminded of a Kraftwerk song listened to while punching holes in cards back in the day. “Das Modell” from the album, yep, album “Die Mensch-Maschine”

I was not a Fortran Fan.

Ellen
Reply to  Ossqss
March 12, 2021 2:31 pm

Hey, Fortran was limited but useful. (It was a lot younger in 1959 when I started on an IBM 704, as were all the other languages.) But the best programming language I have used was a strange hybrid that ran on the Control Data 3100 in our lab in the middle Sixties. You could throw both Fortran and assembly language into the same program if you knew what you were doing. I reworked Spacewar to run on the 3100 using that hybrid.

And the spaceships in the first version got bigger and bigger as the game went on. I used too large a dt, and not enough terms in the expression. The second version was better, and the third good.

Ben Franklin, asked in 1747 what electricity was good for, replied, “If there is no other Use discover’d of Electricity, this, however, is something considerable, that it may help to make a vain Man humble.” Computer programming and modeling can serve the same purpose.

rbabcock
Reply to  Ellen
March 12, 2021 2:43 pm

.. and how many hours were spent looking for the bug just knowing it can’t be a result of your code.. only to have someone come in behind you and find it in one minute.

Ellen
Reply to  rbabcock
March 12, 2021 3:27 pm

I didn’t spend much time at all – it was a compact section of code. In those days, mathematical operations ate up cycles, so the program used scaling, which was a shift operation. The location of the problem was intuitively obvious to even the casual observer.

MarkW
Reply to  Ellen
March 12, 2021 8:20 pm

Most languages that I know of will let you mix assembly language into the code.
There are a few versions of C that use pragmas to allow you to insert the assembly language directly in-line. (You have to be careful doing this as it will make porting your code a lot more difficult.) In more general circumstances, you need to create subroutines in a different module, compile the module as a library and then make a library call to pull in the ASM.

gringojay
March 12, 2021 10:53 am

Logically one conclusion is obvious.

[image]
Itdoesn't add up...
Reply to  gringojay
March 12, 2021 11:07 am

Isn’t that what AOC wants to do? At least the first part…

Brian Jackson
Reply to  Itdoesn't add up...
March 12, 2021 2:05 pm

Isn’t that what was tried on January 6th, 2021?

Carlo, Monte
Reply to  Brian Jackson
March 12, 2021 3:18 pm

Liar.

Derg
Reply to  Brian Jackson
March 12, 2021 3:18 pm

Yes Pelosi did

MarkW
Reply to  Brian Jackson
March 12, 2021 8:21 pm

Not even close, but I’m sure that thinking this way will help you keep your sense of superiority intact.

Gunga Din
Reply to  Itdoesn't add up...
March 12, 2021 7:06 pm

But AOC (et al) don’t want to plug the US back in.

Curious George
Reply to  gringojay
March 12, 2021 2:01 pm

That sign is ahead of its time.

griff
Reply to  gringojay
March 13, 2021 2:16 am

I have had IT support jobs in the past where I actually had to ask “have you tried switching it off and on again?” – and that worked…!

Joe Crawford
Reply to  griff
March 13, 2021 8:09 am

Griff, I had to drive 27 miles one time to plug in a keypunch for a customer.

fred250
Reply to  griff
March 13, 2021 10:54 am

Griff, have you tried just switching off?

It would make no difference to the worth of your comments.

H. D. Hoese
March 12, 2021 11:01 am

Great story. I remember some of that from early on, almost all of it watching from the sidelines. I saw many, including family, immediately adapt to it. I played (real) softball with the brilliant man who started our university computer program; he cautioned me about such things. Huge machine — fortunately they got smaller, which didn’t cure the caution.

Why is it not understood that that is why we call them models? Cars, planes — I once had one of the WWII B-17 models that were used for teaching aircraft identification. Never could get it to glide, but it had been very useful for its purpose. OK, that’s too simple.

MarkW
Reply to  H. D. Hoese
March 12, 2021 12:44 pm

There’s an old movie about a plane crash in the desert. The crash survivors decide to use the parts of the crashed plane to build a new plane to fly them out of the desert.
Near the end, the guy who did the design admitted that while he was trained as an aeronautical engineer, he had spent his career designing model planes. When the other passengers started getting upset, he told them that models have to be better designed than full-sized planes. In his words, a model has to fly without the benefit of a pilot.

Carlo, Monte
Reply to  MarkW
March 12, 2021 1:21 pm

Flight of the Phoenix, based on a true story.

RicDre
Reply to  Carlo, Monte
March 12, 2021 4:02 pm

“Flight of the Phoenix”

I recommend the 1965 version with Jimmy Stewart.

Carlo, Monte
Reply to  RicDre
March 12, 2021 9:27 pm

Yep, that’s it, a classic; never bothered with the modern remake.

MarkW
Reply to  Carlo, Monte
March 13, 2021 9:34 am

In my experience, remakes are rarely as good as the original.

griff
Reply to  RicDre
March 13, 2021 2:16 am

yes, the remake was rubbish!

Jim G
March 12, 2021 11:01 am

My recollections:
Adjusting the tone and volume controls to get the program saved on cassette tape to load.

The evil words: Syntax Error

dBase II

Our first 5-1/4 floppy drive for the Apple.
Wow! Fast and loaded every time

Adding a Z80 CP/M card and 8″ floppies to the Apple II.

A 14″ platter 10-meg hard drive (Altos 4-user CP/M system).

Punch tapes for a wire EDM.

Love the HP-15C
Still have mine from 1983.

Vuk
Reply to  Jim G
March 12, 2021 11:15 am

Lou Ottens, inventor of the cassette tape, died yesterday aged 94: https://www.theguardian.com/world/2021/mar/11/lou-ottens-inventor-of-the-cassette-tape-dies-aged-94

Komeradecube
Reply to  Jim G
March 12, 2021 11:44 am

The sound of an 8” hard sector floppy disk (on a Cromemco S-100 Z-80) click click click click …. click click click click. Microfocus Cobol. Z-80 assembler.

Reply to  Jim G
March 12, 2021 2:41 pm

http://bubek.net/pics/disklocher1.jpg

I remember getting a hardware tool to upgrade a 5-1/4″ disk from 180 KB to 360 KB by making it usable on both sides in a C64 floppy drive.
Amazing times 😀

Last edited 3 months ago by Krishna Gans
RicDre
Reply to  Krishna Gans
March 12, 2021 4:15 pm

“I remember I got a hardware tool to upgrade a 5-1/4 180 KB memory to 360 KB make it usable both sides ”

One of the early computers I worked on was an IBM System/3 Model 12 minicomputer. It used 8-inch single-sided floppy diskettes as input in place of a card reader. The diskettes had a notch on one side that, if covered, prevented you from writing on the diskette. We learned that if you cut a similar notch on the other side of the disk and flipped it over, you could also write on the back side of the diskette. IBM frowned on this but we never had a problem with it. We used “dykes” (diagonal wire-cutting pliers) to cut the notch.

Abolition Man
March 12, 2021 11:02 am

Thanks, Willis!
I had always thought that one of the biggest problems with climate models was trying to work with two massive, chaotic systems simultaneously! Per usual your posts make understanding easier; a skill that far too few ‘professional’ educators possess!
One of the few regrets I have in my life, besides turning down a scholarship to Stanford, was not accepting my parents’ offer of piano lessons when I was seven! I’ve added playing the first movement of “Moonlight Sonata” to my bucket list, Beethoven being one of the original rock stars in my opinion. Guitars are great fun, but if you’re serious about composition and songwriting, piano is unbeatable! Stay safe and healthy!

markl
March 12, 2021 11:04 am

I spent a good part of my career in computer service management and can attest to your observations about programming and programmers. What I noticed about Climate modeling is a tendency to either compromise, discount, ignore, or be selective about the data to fit a desired outcome. “Forcings” are the elephant in the room. When doing compute intensive programs with finite element modeling that affect design parameters of say airplanes, bridges, cars etc. one always assumes the underlying material data is correct because it has been time tested. Not so much with climate modeling. It’s the result of producing a desired outcome vs. an accurate outcome.

John in Oz
Reply to  markl
March 12, 2021 4:40 pm

I was also in the hardware side of computers and found programmers to have a narrow view of how their programs would be used.

They never allowed for a cat walking over a keyboard and the subsequent effect of random inputs.

Clyde Spencer
Reply to  John in Oz
March 12, 2021 8:58 pm

Back in the days when I was doing a lot of programming, I spent more time trapping unreasonable input than writing the core program, so that the person using it wouldn’t crash it with out of range values. I also developed Computer Assisted Instruction software to supplement my labs. I considered the input trapping to be part of the learning experience because the students (if they were paying attention) would soon learn what wouldn’t work if they didn’t know what they were doing.

Reply to  John in Oz
March 13, 2021 4:33 am

Now, now – SOME of us did. Although in my professional career, I didn’t concern myself with cats; users, particularly from the marketing department or (shudder) HR, were much, much worse hazards.

Joe Crawford
Reply to  John in Oz
March 13, 2021 9:04 am

In my day, we considered software development experience as limited by the amount of time spent supporting code you had developed in multiple installations. Until then you didn’t have the foggiest idea of how reliable, usable or functional it was.

Reply to  John in Oz
March 13, 2021 9:20 am

So true! Long, long ago I worked for Big Blue as a (rare) professional hire. I was running an in-house expected-resource capacity planning (modeling) program. The program kept crashing and pissed me off, so I tried to see why. Lo and behold, the program had no input bounds checking. At the company, EBCDIC was the encoding of choice and bounds checking non-trivial, but still necessary. To make matters worse, the program was written in APL version 1. I found two locations where missing bounds checking was making the program crash and filed bug reports.

The point of emphasizing that I was a professional hire, was that I had a lot of experience in programming and hardware design so picking up on the problem was relatively easy for me. The normal hires for this division were non-engineering/non-hard science types, for example my immediate supervisor was a history major. You get the picture. Everyone working with the capacity planner who hit a wrong key would worry that they had “broken” the program and restart without telling anyone.

Shortly thereafter I was offered a position with another company, and since BB couldn’t/wouldn’t match it, I resigned. The day I left, the Branch Manager conducted the exit interview and told me the program had been used for two years before the first bug was reported. That first report, filed just a week before mine, covered one of the two bugs I found. The other had never been reported.

Funny that many of the CVEs reported against programs to this day are essentially bounds-checking problems. A BUFFER OVERRUN is a bounds-checking problem!
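A bounds check is only a few lines of code; skipping it is what turns bad input into a crash (or a CVE). A minimal sketch in Python (a hypothetical field-reading helper, not the APL program described above):

```python
def read_field(record: bytes, offset: int, length: int) -> bytes:
    """Return a fixed-width field from a record, refusing out-of-bounds reads.

    Without these checks, a malformed offset/length silently reads the
    wrong bytes -- or, in languages like C, adjacent memory: the classic
    buffer overrun.
    """
    if offset < 0 or length < 0:
        raise ValueError("offset and length must be non-negative")
    if offset + length > len(record):
        raise ValueError(
            f"field [{offset}:{offset + length}] exceeds record size {len(record)}"
        )
    return record[offset:offset + length]

print(read_field(b"ABCDEF", 2, 3))   # b'CDE'
```

The point is that the validation costs two comparisons, while its absence costs a two-year-old latent crash.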

My language knowledge parallels that of many others in this thread. Maybe the TI 9900 assembly, and being in the first programming class in the world required to learn Pascal, taught by Niklaus Wirth himself, might also be interesting.

Clyde Spencer
Reply to  wsbriggs
March 13, 2021 6:55 pm

William
You said,

… my immediate supervisor was a history major

When I was a young man working at Lockheed MSC, they were so desperate to hire college graduates for managers that they had a music major managing engineers.

Joe Crawford
Reply to  wsbriggs
March 15, 2021 6:56 am

Yeah, Big Blue went through a phase where too many engineers promoted to management weren’t working out and had to be reassigned back to engineering jobs. So they started hiring non-engineers, mostly with degrees in Business Management, for those positions. I left in the early phase of that trial. Don’t know whether it eventually worked better or not. Fortunately, the other companies and start-ups I worked for after that still had mostly engineers and/or scientists in management. At least you could have a reasonable discussion with them.

MarkW
Reply to  John in Oz
March 13, 2021 9:47 am

In my company we have a core product that we use to support multiple customers.
For each new customer, we develop translators to convert the customer’s files into a standardized format that our core program processes.

We start when the customer sends us their documentation that describes their data formats. We write the translators and test them with internally generated test data.
Then we have the customer send us some of their data.

There’s nothing like counting bytes in a record to try and figure out which field was increased or decreased in size, or where an extra field was squeezed in, and what kind of data is in that new field.

One of these days I may get a file from a customer that actually matches their documentation, but I’m not optimistic.

In most cases, we have to adapt our code to match the actual data, because the guy who wrote the customer’s code has retired, and nobody in that company knows the code well enough to make changes to it.

Last edited 3 months ago by MarkW
chickenhawk
Reply to  John in Oz
March 13, 2021 2:49 pm

My cat knows more shortcuts than I do. I’ve never been able to manipulate windoze better than she can.

otropogo
March 12, 2021 11:14 am

We’ve got a few things in common:
1. age
2. same first laptop, except I also got the serial FDD and barcode wand, upgraded memory to 32KB (voiding the warranty), but never learned BASIC or any other programming language

3. dropped out of high school, then dropped out of grad school twice, but only diminished my earning power through these academic feats

4. have (putative) grandchildren, but have never met or talked to them, and don’t know their birthdate (twins) or location

I guess I’m like a Bizarro to your Superman (except I only have a cat)

I wish there were some way for (non-billionaire) programming-challenged people like me to access the programming skills of people like you.

For example, I’ve been looking for years for a simple database or even just a spreadsheet template to let me enter the details of my various nutritional supplements and medications so I can see if I’m getting too much or too little of each, how much each product is costing me per day and when I need to order more, etc.

A simpler problem concerns digital micrometers. I’ve bought half a dozen over the past two decades, and every one of them has a serial port. But none of the various vendors offer a data cable or software to permit them to be used with a PC to quickly record a series of measurements (as in checking neck expansion of cartridge brass). Things are actually going backwards for consumers in this respect.

Thirty years ago I could buy a multimeter from Radio Shack Canada that had a serial port and software for DOS and Windows. Ten years ago, or so, you couldn’t buy such a thing in Canada anymore AND you could no longer import it from Radio Shack USA. I had to get someone in the USA to buy one for me in a RS store and mail it to me here in Canada (thanks so much NAFTA!).

michel
Reply to  otropogo
March 12, 2021 12:46 pm

Get LiveCode, the successor to HyperCard. Very, very easy to learn, and very powerful. Mac, Windows, Linux, Android. Here is the open-source version:

https://livecode.org/download-member-offer/

otropogo
Reply to  michel
March 12, 2021 7:27 pm

Thanks for the tip. I AM impressed that LiveCode will run under Windows 7 with only 256MB of RAM and 150MB of disk space, and runs in compatibility mode in Windows 10, but otherwise the FAQ left me mostly scratching my head.

Easy for you, maybe…

michel
Reply to  otropogo
March 13, 2021 12:14 am

Very, very off topic, but OK, here is how to get started. You create what LC refers to as a ‘stack’, which contains ‘cards’. A card is basically just a graphical interface background; it’s something to put your components on.

You then start by dragging components across to it. These are of two sorts: purely graphical elements, which are not part of the programming, just look and feel; and, more importantly, active things like buttons, menus, and fields, which are part of the programming.

Start with something really simple, like the usual ‘Hello World’. You will need to drag over two components, a field and a button. Give each of them a name, like eg ‘field1’ and ‘calcbutton’.

Next you write little scripts for each of your active components. In this case you have only one, your button. So your program will consist of a script of that button, and it will be something like

on mouseUp
put “Hello World” into field “field1”
end mouseUp

Generally speaking, programming in LC consists of writing scripts for objects and events. Events can be things like mouseUp and mouseDown. Objects can be stacks, cards, buttons, fields…

Buy a copy of Mark Schonewille’s book ‘Programming LiveCode for the Real Beginner’. I guarantee if you work through it, you’ll be able to program. It’s about 200 pages, very clear, and it gradually leads you into how to use all the different features.

Use tab-separated text files to hold any data. Don’t worry if you find his section on arrays difficult. It’s just about the only section of the book that is less than clear to a beginner, but if you get to arrays in your programming, you can see the finish line and can find someone to explain them to you more fully than he does.

There are tutorials on the LiveCode site also.

The great thing about it, apart from the ease of configuring the GUI, is that it’s almost automatically structured at some level. For any given script, you can insist on writing spaghetti. But the underlying structure of scripted components means that your overall program will be in manageable, self-contained blocks.
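The “scripts for objects and events” pattern michel describes isn’t unique to LiveCode; a toy dispatcher in Python (all names hypothetical, not real LiveCode) shows the shape of it:

```python
# A toy dispatcher mimicking LiveCode's "one script per object, one
# handler per event" model. Hypothetical sketch for illustration only.
fields = {"field1": ""}          # stands in for on-screen text fields
handlers = {}                    # (object, event) -> handler function

def on(obj, event):
    """Register a handler, like writing a script on a button."""
    def register(fn):
        handlers[(obj, event)] = fn
        return fn
    return register

def send(obj, event):
    """Deliver an event to an object's script, if it has one."""
    fn = handlers.get((obj, event))
    if fn:
        fn()

@on("calcbutton", "mouseUp")
def _():
    # Equivalent of: on mouseUp / put "Hello World" into field "field1"
    fields["field1"] = "Hello World"

send("calcbutton", "mouseUp")
print(fields["field1"])          # Hello World
```

Each handler is a self-contained block keyed to an object and an event, which is why spaghetti stays local to one script rather than spreading through the program.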

It’s not R, as recommended by Willis. It’s not C++. But it will get you well started.

Good luck!

Komeradecube
Reply to  otropogo
March 14, 2021 6:08 pm

I can write you a program to do this. What caliper are you using? A quick internet search doesn’t show one with a serial or USB port.

otropogo
Reply to  Komeradecube
March 14, 2021 8:49 pm

That would be greatly appreciated. Of all that I’ve owned over the past 30 years, only the most recent purchase (a couple of years ago) had a name and it’s shown here:
https://www.amazon.com/Mastercraft-Digital-Caliper/dp/B07K8JVS5W
The serial port is neither shown nor described, but you can see its sliding cover at the top right corner of the display case. I’ve attached an image of it. BTW all of the digital calipers I’ve owned, and all that I’ve examined (ie. cheap ones) have the identical flat board with four flat contact strips. The second picture is of the port of a no-name caliper that’s at least 20 years older than the Mastercraft one. Their measurements differ by less than a tenth of a millimeter. Oops! The site will only let me attach one image.

P3141578b.jpg
otropogo
Reply to  Komeradecube
March 14, 2021 10:09 pm

Komerade,

After writing my first response to your kind offer, I found a link offering something close.

http://robotroom.com/Caliper-Digital-Data-Port.html

I haven’t studied the contents closely, but most of it is clearly beyond my technical competence. The cable he proposes would have to be modified by replacing the DIN connector with a USB one – something I think I could manage if I knew which of the USB pins to connect. And his software is meant to send the caliper data to a milling machine(?), not a PC. But I assume his notes would help in writing a suitable program for sending data to a PC.

cheers,

Otro

Pinout-of-imported-digital-caliper.png
John Loop
March 12, 2021 11:19 am

Nice, Willis. How anybody less than 70, even with a modicum of science/engineering training, can appreciate the modern age or its vast complexity, I don’t know. I regularly pay homage to the billions of transistors between here and there, created by human beings, not to mention the billions of lines of code, written by fallible human beings. My first experience was programming an HP 2116? to play ping pong with the lights and switches at Stanford in 1970. I think we programmed it with the switches….. IBM cards came later. Who will appreciate this when we are gone? Certainly not the modern crowd, I think.
John

Mr.
Reply to  John Loop
March 12, 2021 5:23 pm

Yep.
I remember sitting at a pc with one of my nieces who recently got her degree in IT.
I brought up the DOS command screen.
She said – “what’s THAT?”

Ben Vorlich
March 12, 2021 11:27 am

I had an on-and-off relationship with computers. I had a couple of visits to CDC in Bloomington, Minnesota. It was a fascinating place and made a big impression on me. As a result I still take an interest in the Vikings, Twins and Timberwolves.

My experience with computers and languages is in the field of test engineering, finding other people’s dropoffs. Most had bespoke systems and languages. One was a 10 bit system by Elliott Automation, I think, that used two and a half 54 series logic circuits.

Mike Lyons
March 12, 2021 11:29 am

As a fellow programmer, though not nearly as seasoned as you, I wholeheartedly agree. I’ve built models for artillery shells and other well understood, narrowly defined physical processes. Even these can go horribly wrong. Garbage in, garbage out. Computers are finite machines. Regardless of how slick the interface or how many processors you cram into it, a computer is just a really fast abacus. There’s no way you can predict the future of the weather, the stock market, or any other chaotic, nondeterministic process with a finite machine. That does not compute!

Clyde Spencer
Reply to  Mike Lyons
March 12, 2021 9:02 pm

An abacus can’t do logic tests and alter what it does based on the outcome of the logic test.

Carlo, Monte
Reply to  Clyde Spencer
March 12, 2021 9:30 pm

Also has non-existent undo support…

Aaron Schnelle
March 12, 2021 11:33 am

Do NOT fold, spindle, or mutilate.

Roger Knights
Reply to  Aaron Schnelle
March 13, 2021 2:05 am

Think or thwim.

Editor
March 12, 2021 11:34 am

Sigh, all this reminds me of all the web pages I’m getting close to writing but haven’t had time to for 25 years yet.

Brief notes:

Dad taught me the binary number system when I was about 7, then how to count in binary on my fingers – they go from 0 to 1023.
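Ten fingers, one bit each, gives 2^10 = 1024 values, hence the 0-to-1023 range. A quick Python sketch (the bit ordering here is an arbitrary choice):

```python
def fingers_to_number(fingers):
    """Convert ten finger positions (1 = up, 0 = down) to a number.

    Treats the first finger in the list as the most significant bit --
    an arbitrary convention for this sketch.
    """
    assert len(fingers) == 10, "need exactly ten fingers"
    value = 0
    for bit in fingers:
        value = value * 2 + bit   # shift left and add the next bit
    return value

print(fingers_to_number([1] * 10))   # all fingers up -> 1023
print(fingers_to_number([0] * 10))   # all fingers down -> 0
```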

In 1963 he had designed the Bailey Meter 756, the first commercially successful parallel processor. No one knows that because it was a process control computer. It was successful because it was a decimal machine, and that made it a lot easier for power plant people to deal with. He said it was so easy to program that a 12 year old could do it, and taught me how to program the I/O processor. That was before core memory was common (your photo); the system executed off a drum – think spinning cylinder with a magnetic coating on the outside.

It wasn’t until I got to CMU in 1968 that I realized I was born to be a programmer. The first course was using Algol on the Univac 1108. The lecturer was Alan Perlis, one of the language’s inventors. On meeting the keypunch, which did not have a backspace key to put the chads back in, you learned that to fix an error you had to duplicate the card up to the point of the error and deal with it there.

The duplicate function brought an epiphany – copying audio tape or paper lost information. Here I could type a card, duplicate it 100 times, and the data on the first and last cards would be identical. Absolutely amazing.

Near the end of the semester, my first on-my-own, for-the-heck-of-it program had a goal to simulate the trajectory of an Apollo module on its flight to the moon. First came Earth – print filled-in circles and open ellipses on the line printer. Then Mercury – an orbit around the Earth. Those orbits kept spiraling inward. Thinking that might have been related to discontinuities in the atan() function, I changed to atan2() [hmm, very fuzzy memory, I should check the code], but the spiraling still happened. The problem might have been single precision floating point, but there’s little reason to explore that.

I set it aside as the end of the semester approached and never got back to it, but I learned a tremendous amount about simulations, cometary (parabolic and hyperbolic) trajectories did bizarre things until I adjusted the time steps as a function of distance between Earth and the comet, and all the “1 plus epsilon” issues with floating point.
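The atan()/atan2() suspicion is a classic one: atan(y/x) cannot distinguish a point from its mirror through the origin, so a computed orbit angle jumps by pi at quadrant crossings, while atan2(y, x) returns the full-circle angle. A quick check in Python:

```python
import math

# A point in the third quadrant:
x, y = -1.0, -1.0

theta_atan = math.atan(y / x)    # ~0.785: wrongly reports the first quadrant
theta_atan2 = math.atan2(y, x)   # ~-2.356: correctly reports the third quadrant

print(theta_atan, theta_atan2)
```

In a stepped orbit simulation, that spurious pi jump at each crossing is exactly the kind of discontinuity that can masquerade as a physical effect like inward spiraling.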

In my sophomore year I got a parttime job operating the Computer Science Dept’s new DEC PDP-10. It remains my most favorite computer ever, a position that no computer today or in the future can ever equal. OTOH, I can’t go back to it either. Progress marches on, but at least we can look back!

Editor
Reply to  Ric Werme
March 12, 2021 11:44 am

One thing I did write recently was in response to the Y2K transition – can you believe there are actually people who don’t remember it?

Before that, there was the DATE-75 problem on the PDP-10’s “TOPS-10” operating system. I figured I better write that up as a reply. Old PDP-10 fans might like http://wermenh.com/folklore/dirlop.html – covers that, but also has examples about why assembler programming on the -10 was more productive than pretty much any other system I’ve encountered. A wonderful machine on many levels.

Joe Crawford
Reply to  Ric Werme
March 13, 2021 9:54 am

I never verified it, but I was told by several friends familiar with it that DEC’s Fortran compiler wrote tighter code than most assembler programmers could.

Editor
Reply to  Joe Crawford
March 13, 2021 1:24 pm

Ahem. I was at DEC and I was told that by one of the compiler’s authors. That was an utterly ridiculous claim, as TOPS-10 and most user-level programs dedicated several of the 16 general-purpose registers to purposes like pointing to important data structures. In assembly code, they were “just there” across a module’s subroutines, whereas Fortran had to pass them on the stack or in other memory. I think my reply started out with “Well, maybe….”

Around 2005, I was back at DEC and decided to see if I could make the IP checksum routine faster on DEC’s Alpha processor. It wasn’t much code, but I had already come up with better byte-swapping routines, and, well, manually writing RISC assembler code is difficult, what with dealing with caches, pipelines, and other cruft that the original (discrete transistors!) PDP-10 didn’t implement.

I came up with code that looked pretty good to me, nicely unrolled, did 32-bit math instead of the 16-bit inherent in the checksum. (I couldn’t do 64-bit because I had to handle overflows.) There were a couple of wait states, but better than the old code.

As a lark (and recalling a CMU case where someone found his LISP code was faster than his compiled code), I rewrote things in C to try out. The C code was faster. The C compiler used a different computation that allowed some intermediate results to make it through the pipeline and the generated code had no wait states.

With my 30 year sense of superiority over compilers nearly completely smashed, I rewrote my code to use the compiler’s algorithm and came up with something that was a little faster checksumming data that was not in the processor’s cache.
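For the curious, the IP checksum Ric is describing is RFC 1071’s one’s-complement sum of 16-bit words, and the wider-accumulator-then-fold trick looks like this (a Python sketch of the algorithm, not DEC’s Alpha code):

```python
def ip_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words.

    Accumulating in a register wider than 16 bits and folding the carries
    back in at the end is the same trick described above: do 32-bit (or
    wider) math and defer the overflow handling.
    """
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # big-endian 16-bit word
    while total >> 16:                # fold carries into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF            # one's complement of the folded sum

# RFC 1071's worked example: bytes 00 01 f2 03 f4 f5 f6 f7
print(hex(ip_checksum(bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7]))))
```

The fold loop is where the deferred overflows come home; in one’s-complement arithmetic the carry wraps around instead of being discarded.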

MarkW
Reply to  Ric Werme
March 13, 2021 4:51 pm

Between multiple levels of caching, data and instruction pipelines, and look-ahead processing, you need to be an absolute ASM guru to have a chance of outperforming most modern compilers.
Beyond that, every time your code gets ported to a new computer, you have to go through it all over again.

Editor
Reply to  MarkW
March 13, 2021 6:43 pm

Yes, and I was an absolute ASM guru back in the day. And the IP checksum code is heavily used by NFS, and I saw I could do a better job than the original engineer did. The exercise was worthwhile.

MarkW
Reply to  Ric Werme
March 12, 2021 12:47 pm

then how to count in binary on my fingers – they go from 0 to 1023.

Ever played with Gray code?

Mike_la_jolla
Reply to  MarkW
March 12, 2021 5:40 pm

All the time. But that is a thing more applicable to those of us that design the hardware.

Editor
Reply to  MarkW
March 13, 2021 1:34 pm

I didn’t learn about that until I got to college. I was pleased to realize the PDP-10’s line printer had an optical drum position sensor built around a spinning disk that used Gray code. I didn’t try hard to do that on my fingers. Even if I figured it out it would be a challenge to convert between the two.

https://www.allaboutcircuits.com/technical-articles/gray-code-basics/
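The binary-to-Gray conversion is a one-liner, which is part of why it suited hardware like that drum position sensor; a Python sketch:

```python
def binary_to_gray(n: int) -> int:
    """Adjacent values differ in exactly one bit -- ideal for position
    sensors, where a misread during a transition is then off by at most 1."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert by cascading XORs of successively shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([binary_to_gray(i) for i in range(8)])   # [0, 1, 3, 2, 6, 7, 5, 4]
```

As the comment notes, a plain binary encoder can misread wildly when several bits change at once between positions; Gray code guarantees only one bit ever changes between neighbors.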

Clyde Spencer
Reply to  Ric Werme
March 12, 2021 9:06 pm

Then you know what binary 4 looks like on your hand. Not many people will know what you really mean if you yell, “Four!”

Editor
Reply to  Clyde Spencer
March 13, 2021 1:03 pm

Yeah, I kinda try to keep my fingers moving as I go past four and also aim them elsewhere.

I think I wrote a simple Algol program to list all possible four-letter words. Umm, 26 would fit on a 132-character line printer line, two blocks of 26 would fit on a page, that still requires 26 x 26 x 13 pages. I recall only printing 10 pages or so, maybe stopping at ‘d’ or ‘f’.

That was probably inspired by Arthur C Clarke’s The Nine Billion Names of God and fulfilled a need to see for myself that a trivial subset of that grander goal was just a simple matter of programming.

https://urbigenous.net/library/nine_billion_names_of_god.html

Editor
Reply to  Clyde Spencer
March 13, 2021 1:42 pm

Also, “four” is appropriate. Apparently some Audi Quattro owners have gotten license plate “QQQQ” approved since it sort of looks like the Audi logo.

Komeradecube
March 12, 2021 11:36 am

I have a computer model of the second world war written by a team in Eastern Europe. I’ve run it several times (it takes months to run a simulation on a PC.) In my virtual ensemble of model results I have Germany winning the war most frequently, which is why I am writing this in German.

MarkW
Reply to  Komeradecube
March 12, 2021 12:50 pm

Fortunately for the rest of the world, Hitler was nowhere near as good a general as he thought he was.
Kind of reminds me of climate scientists.
Kind of reminds me of climate scientists.

paul courtney
Reply to  MarkW
March 13, 2021 4:38 pm

MarkW: Or you could say the model inputs for Hitler’s military aptitude were….not validated. Hitler’s model ran into reality at Stalingrad. AGW ran into its own Stalingrad some time back, but our press hasn’t noticed.

Brian Jackson
Reply to  Komeradecube
March 12, 2021 1:27 pm

Ever watch “The Man in the High Castle?” (Amazon Prime Video)

Derg
Reply to  Brian Jackson
March 12, 2021 3:19 pm

Isn’t that about the fraud Mann?

Clyde Spencer
Reply to  Komeradecube
March 12, 2021 9:08 pm

Did you ever play Eastern Front on the Atari?

griff
Reply to  Clyde Spencer
March 13, 2021 2:18 am

Or the paper version from SPI with 3000 individual cardboard counters and 5 foot square paper map?

Gino
March 12, 2021 11:43 am

Hi Willis,

This is a very good post, and leads to a couple of things that you might be able to clarify for me.

1) are the parameters used truly tuned within the main model or are they determined by independent experimentation?

2) Do the equations used within models satisfy the basic physical conservation laws of mass, momentum, and energy, and do these models balance all nodes within the grid to satisfy these laws at each grid point (viscous and non-viscous flow, natural convection/advection, enthalpy states, partial-pressure analysis to derive latent heat transfer, etc.)?

3) What are the general gridding methodologies, and are the models built on finite element, finite volume, or finite difference methods?

The purpose behind my questions is that I suspect most models are exercises in curve fitting and not iterative solutions of the applicable physics equations.

But as in all things computer related… GIGO.

Rud Istvan
Reply to  Gino
March 12, 2021 12:50 pm

With respect to (1), both. There are two basic parameter tuning processes. I wrote about and illustrated both in my climate models post here from years ago, ‘The trouble with climate models’.

Joseph Zorzin
Reply to  Rud Istvan
March 12, 2021 1:01 pm
Gino
Reply to  Rud Istvan
March 13, 2021 4:31 pm

Rud, you hit the nail on the head, and the post that Joseph found is EXACTLY what I was referring to. I have to admit I asked leading questions like a prosecuting attorney because I knew these answers existed and wanted to bring them into the record again.

From way back in my CFD days, I learned that true models were never “tuned” in themselves. Parameterizations were OK, but you absolutely had to determine those coefficients in independent experiments (my specific experience utilized the Hazen-Williams pipe flow approximations). Once you built your model, the only things you should do are change boundary conditions or add additional equation terms and rerun the model, and by terms I mean a complete new equation set.

As soon as you start tweaking coefficients within the model itself because it didn’t predict the experimental values you measured, you have left the realm of reality, entered “curve-fit land,” and lost all ability to trust any “prediction” your model makes. Curve fitting is great at interpolating between known data points but absolutely sucks at extrapolating outside the measured data set. Essentially you are trying to empirically fit a curve to take care of every possible variable, but some of those variables are codependent with other variables, or nonlinear/higher-order, and so require their own set of equations to balance to come to the correct “parameter”.

MarkW
Reply to  Gino
March 12, 2021 12:52 pm

I didn’t see them in the list of parameters that Willis provided, but I’ve been told that the parameters include things like the amount of aerosols in the atmosphere. This parameter is allowed to vary over time, but I don’t believe it can vary spatially: it is just one number for the entire world.

Clyde Spencer
Reply to  MarkW
March 12, 2021 9:10 pm

Works for really big volcanic eruptions, not so much for urban smog.

MarkW
Reply to  Clyde Spencer
March 13, 2021 10:02 am

The problem is that even today, the total amount of aerosols in the atmosphere is a bit of a guess. As you go back in time, the total amount becomes an even bigger guess.

This allows the modelers to put in whatever amount of aerosols they need to get the results they are looking for. The ultimate Finagle’s constant.

Editor
Reply to  MarkW
March 13, 2021 1:48 pm

Even better, we don’t have a good understanding of how to map the industrial aerosol concentration into temperature change. The uncertainty was so high, at least as of a couple of decades ago, that we weren’t even certain about the sign of the effect.

Clyde Spencer
Reply to  Willis Eschenbach
March 12, 2021 9:15 pm

And the real problem is that the programmers and their apologists say that the models are all based on physics — except when it comes to parameterizations. Those are akin to what engineers call fudge factors: subjective estimates of how things work. Sort of like E = b(mc^2 + e).

Gino
Reply to  Willis Eschenbach
March 13, 2021 4:51 pm

Willis, a well-developed model satisfies the physical constraints at all nodes. When I wrote CFD code back in the day, equations for grid-point state and flow would balance to match basic physical laws like conservation of momentum and energy. The energy flux into a specific node would match the energy flux out of it, and the same for momentum and such. If a physical manifestation, say aerosols, could not be described by an energy or momentum calculation, then it could be parameterized ONLY in terms of an equation that could be, and those parameters had to be determined through independent experimentation. Your a), b), c) responses above indicate exactly what I am referring to. Your link to your article noting Gavin Schmidt had no idea whether his models could satisfy these requirements is exactly what I expected to find.

Your “emergent phenomena” explanations are absolutely spot on, and until we can solve the Navier-Stokes equations without simplification we are unable to predict these phenomena. Engineers come close with our general understanding and expressions like the Nusselt, Prandtl, Rayleigh, and Grashof numbers, but they are approximate relations that require a critical mind to evaluate each and every application and the assumptions made in each case.

As far as point 3), I suspect that most models are finite difference, which simply seeks to measure the change at each grid point over time and drive it to zero with each successive iteration. This can be good for many generalized areas, but anyone who deals in physics knows that when working with fluxes (energy, mass, etc.) a finite volume approach needs to be used, which is a much more complicated calculation set.
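The node-balance idea Gino describes can be shown in a few lines: a 1-D explicit finite-difference diffusion step written in flux form, where the flux out of one cell is exactly the flux into its neighbor, so the grid total is conserved to rounding error (a toy sketch, nothing like a real GCM):

```python
def diffuse_step(u, alpha=0.1):
    """One explicit finite-difference step of 1-D diffusion on a
    periodic grid, written in flux form: flux[i] crosses the face
    between cell i and cell i+1, so whatever leaves one cell enters
    its neighbor and the grid total is conserved by construction.
    Stability for this explicit scheme requires alpha <= 0.5.
    """
    n = len(u)
    flux = [alpha * (u[(i + 1) % n] - u[i]) for i in range(n)]
    return [u[i] + flux[i] - flux[i - 1] for i in range(n)]

# A spike of "energy" spreading out over the grid:
u = [0.0, 0.0, 10.0, 0.0, 0.0]
for _ in range(100):
    u = diffuse_step(u)

print(round(sum(u), 10))   # 10.0 -- the total is conserved
```

A scheme that instead tuned each cell independently could leak or create energy every step; the flux form makes conservation a structural property rather than something to hope for.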

In the end, and to be simple, I suspect I agree with you greatly. To paraphrase Mr. Twain, reports of the robustness of our scientists’ modeling are greatly exaggerated.

Gino
Reply to  Willis Eschenbach
March 13, 2021 7:49 pm

Willis, I replied to Clyde by accident because of internet oopsies.

randomengineer
March 12, 2021 11:48 am

Hi Willis

The Prof George EP Box quote “all models are wrong…” is accurate enough. Even the simplest models are missing some factor — maybe it’s the tiny change in the plastic regime of a torsion bar depending on temp and humidity in an elevator — but the expected and extreme cases accounted for may not be affected by that parameter. (Not modeled, and it’s ok.) By “wrong” one can impute “incomplete” in that a change of materials classes could very well require knowing the plastic regime parameters, at which point the model’s lack of that particular control becomes apparent.

MarkW
Reply to  randomengineer
March 12, 2021 12:57 pm

They are also wrong in that they can only have finite representations of infinite numbers.

https://en.wikipedia.org/wiki/Floating-point_arithmetic

As a result, a small error is introduced every time you do a calculation with floating-point numbers. When you do an iterative calculation, such as Willis’s example above, this error compounds each time through the loop. After millions, or billions, of iterations the error can be almost as big as the number you are examining.
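MarkW’s compounding-error point is visible in any language with IEEE-754 doubles; in Python:

```python
import math

# 0.1 has no exact binary representation, so every addition rounds.
# A million additions make the accumulated drift visible.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total == 100000.0)        # False
print(abs(total - 100000.0))    # small but nonzero accumulated drift

# Compensated summation tracks the lost low-order bits, leaving only
# the unavoidable error of representing 0.1 itself:
drift_fsum = abs(math.fsum([0.1] * 1_000_000) - 100000.0)
print(drift_fsum)
```

Techniques like `math.fsum` shrink the accumulation error dramatically, but as MarkW says, they cannot make 0.1 exactly representable in the first place.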

Curious George
Reply to  MarkW
March 12, 2021 1:15 pm

There are ways to minimize this effect. For example, an iterative algorithm for a square root actually yields successively better approximations.
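Curious George is right that some iterations are self-correcting. Heron’s (Newton’s) square-root method is the classic case: a rounding error in one step is damped by the next step rather than compounding. A Python sketch:

```python
import math

def newton_sqrt(x: float, iterations: int = 20) -> float:
    """Heron's method for sqrt(x): average the guess with x/guess.

    Each step roughly doubles the number of correct digits, and the
    iteration converges back toward sqrt(x) even if an earlier step
    was slightly off -- error is squeezed out, not accumulated.
    """
    guess = x if x > 1 else 1.0   # any positive starting guess converges
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)
    return guess

print(newton_sqrt(2.0))   # 1.41421356...
print(math.sqrt(2.0))     # library result for comparison
```

Contrast this with a long additive loop, where each step’s rounding error is simply carried forward: the difference is whether the recurrence attracts toward the true answer or just drifts.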

Itdoesn't add up...
Reply to  Curious George
March 12, 2021 2:26 pm

You can extract square roots exactly (provided you can represent the number you are rooting). No need for approximations, except in choosing when to stop adding significant figures.

Clyde Spencer
Reply to  Curious George
March 12, 2021 9:18 pm

Just because some functions under some conditions will converge doesn’t mean that all functions converge under all conditions. The trick is to find a function that doesn’t diverge.

Gino
Reply to  Curious George
March 13, 2021 7:46 pm

You can minimize but never eliminate. And if the scale of the error still overshadows the expected result, you have garbage.

Paul Penrose
March 12, 2021 11:50 am

Willis,
It sounds like our paths were very similar, except I’m 15 years behind you. I was lucky in that personal computers like the TRS-80 and Apple II came out when I was in high school. I spent a lot of time up at my local Radio Shack in those days banging away on a Model I TRS-80 (4K of RAM and cassette tape storage). The manager didn’t mind as long as I unofficially helped out with the cleaning and restocking a bit. By the time I got a job there, I knew the products and where they were in the store better than the manager.

Anyways, I just wanted to add my voice to this conversation: a computer model can only be as good as our understanding of what we are modeling. On top of that, even if you do a good job with V&V, you will still have bugs and undetected loss of precision due to a variety of issues. This is a particularly nasty problem when the iteration count gets high. And if you don’t do a good job with V&V, you are doomed. In other words, you are dead on, Willis.

And just in case you are wondering, I’m not just some entry-level grunt slinging JavaScript and HTML; I have worked in the embedded control world for most of my 30+ year career. If you wish, you can check out U.S. Patent 6584356, which is for the first use of a real-time operating system in a cardiac pacemaker.

Last edited 3 months ago by Paul Penrose
Joe Crawford
Reply to  Paul Penrose
March 13, 2021 10:08 am

“…a computer model can only be as good as our understanding of what we are modeling.”

I agree. Another way of stating that is: If you don’t understand the problem well enough to work it on an adding machine (given enough time, of course), you can’t model it.

Chris Hall
March 12, 2021 11:52 am

Great trip down memory lane. My first language was FORTRAN (actually WATFOR on punch cards), but my first love was Algol, which in a way lives on as Pascal. Pascal was the great breakthrough language in that it could rapidly compile on an Apple ][ with only 64 kbytes of memory!!! Pascal is also considered “extinct” but it lives on as Delphi and Lazarus, and of course it helped to make C++ a more usable language than the unnecessarily cryptic C. A well-written C++ program resembles Pascal more than C.

Here are a couple of languages that I also played around with over the decades. Forth was a language written by someone who must have fallen in love with HP calculators. Either that, or someone from Poland walking backwards. Then there was the bizarre Prolog, where you specified the answer and the computer figured out how to get there. It almost never worked for me.

These days, I like to use opensource when possible, so if I have to tinker with Matlab, I prefer Octave. I’ve used R, but not to the level you have mastered.

Finally, I completely agree with your comments about models. I remember creating an NFL football betting system where I had “regressed the past”, very much as climate models attempt to do, but with far fewer parameters. It predicted the past beautifully. The future, not so much.

Willis, do you have a web site where it is possible to download sample and/or simple R scripts to perform some of the magical analysis you do on things like Ceres and Argo data?

Brian Jackson
Reply to  Willis Eschenbach
March 12, 2021 3:44 pm

 “if you program you know the drill.” very true, and if you program for a living, and do it your way, they show you the door very quickly.

MarkW
Reply to  Brian Jackson
March 12, 2021 8:32 pm

I’ve been a programmer for almost 40 years, and what Willis describes happens at every company I’ve ever worked at.
Sure, the style manual may say otherwise, but when it’s crunch time and deadlines are approaching, style, shmyle, get the damn thing running.

paul courtney
Reply to  Brian Jackson
March 13, 2021 5:26 pm

Mr. Jackson: Your comment would be cruel, maybe funny, if he had ever been shown the door by “them”. If “they” never showed him the door, then your comment shows your ignorance. Have you got an example of a “they” who showed the door to the target of your obsession?

Chris Hall
Reply to  Willis Eschenbach
March 12, 2021 7:03 pm

How could one access your Public Dropbox folder?

Chris Hall
Reply to  Willis Eschenbach
March 13, 2021 6:26 am

Thanks Willis. This will be very educational and a perfect way to get reacquainted with R.

Clyde Spencer
Reply to  Willis Eschenbach
March 12, 2021 9:26 pm

Forth! I forgot Forth on my list of languages, and also Logo.

Atari produced cartridges for Forth and one of their more interesting educational forays was a cartridge with Logo and Pilot. Pilot used recursion and produced some of the most interesting graphics I have seen outside of fractal graphics.

beng135
Reply to  Clyde Spencer
March 13, 2021 10:24 am

My Commodore Vic-20 also had a Pilot cartridge and “Compute” magazine showed a few very interesting graphics programs for it to type in & run.

RicDre
Reply to  Chris Hall
March 12, 2021 4:36 pm

“Great trip down memory lane. My first language was FORTRAN”

My first language was also FORTRAN, an old version called “PDQ FORTRAN” which ran on the first computer I ever programmed on, an IBM 1620, which had a console typewriter for I/O and also a card reader/punch for I/O, and something like 60KB of storage (“core” storage like the picture at the top of the article). I also owned an Apple ][ and learned to program Pascal on that machine.

Most of my programming was “Business” programming but the same rules applied: if you did not understand the process you were trying to automate, your program would produce lots of useless output (but do it quickly!). I was always amazed at the number of people who thought that if the output came from a computer, it must be correct.

Clyde Spencer
Reply to  RicDre
March 12, 2021 9:32 pm

My first FORTRAN program (1966) was written to run on a DDP-24, but it wouldn’t compile because it exceeded the 24K of memory. It made me a better programmer because I had to learn to use DO loops and subroutines. The punched paper-tape input also forced me to learn BASIC so that I could test all the subroutines before committing to the time to punch up the tape.

Paul Penrose
Reply to  Chris Hall
March 12, 2021 9:24 pm

I had forgotten Forth – I kind of liked programming in that language. I got a taste of assembler on an HP3000 mini before that, so Forth wasn’t quite as novel as it would have otherwise been. So many languages over a lifetime, but I have to say, you can make a right mess in any of them. Just like you can write really elegant code in any of them. Some of the best designed and structured code I’ve ever written was in Z-80 assembler, I kid you not. The hardest part of writing a program is not the coding; a dunce can be taught to code. The hardest, and most important parts are the requirements and design. If you don’t know what you are supposed to be doing, and/or don’t have a good design on how to do it, then it doesn’t matter how masterful you are at churning out code, it will still be useless.

So many “developers” today start up any new program they’ve written in the debugger because they expect it to fail. I don’t. I expect it to work and I’m always a bit peeved with myself when it doesn’t. Now there’s usually something that doesn’t work quite right, but the number of initial bugs is quite low now. I once told a fellow developer that I do most of my debugging in the design phase. He rolled his eyes, but my code had the lowest number of defects reported by the verification team and I still turned out as much code per month as anybody else.

Chris Hall
Reply to  Paul Penrose
March 13, 2021 6:37 am

I agree with your thoughts on code design and I despair of many “modern” programs that are just bloatware and have poorer performance than some stuff I used to run on a CP/M card in an Apple ][! With the exception of video and graphics, which need a LOT of memory, most programs don’t give all that much added value over their ancestral roots from the 80s and early 90s. I particularly despair of Labview programs. Although it is barely possible to write good Labview code, you need a background in classic coding techniques to even think about it. And don’t get me started on documentation! It’s the definition of spaghetti code, yet it’s all the rage in the instrumentation world. Try V&V with Labview, then take a stiff drink. Any language where all of the global variables and subroutines reside in separate files is an absolute joy to maintain.

MarkW
Reply to  Chris Hall
March 13, 2021 10:14 am

My biggest complaint about C++ is how much stuff it does for you. Unless you are very disciplined and spend time thinking about everything that is going on under the hood, you will end up with slow, bloated code every time.

MarkW
Reply to  MarkW
March 13, 2021 11:04 am

Because of how much it does for you, it is possible to write code faster, which is why many managers love it. The philosophy these days seems to be that if a program is slow, they can just throw more iron at it, and if it uses a lot of memory, well, memory is cheap.
It’s very frustrating for someone like me, who learned on systems with just a few hundred thousand bytes of memory, when memory cost something like $1000/megabyte.
Also, I had to learn all kinds of techniques to make sure my code was efficient. Almost all of those techniques have now been built into modern compilers.

Paul Penrose
Reply to  Chris Hall
March 13, 2021 11:48 am

Labview is definitely one of my least liked computer languages/environments. It is cryptic, poorly conceived, and difficult to write functional programs in. I also detest Perl, however you can write good code using it which approaches elegance. There are also a lot of good packages available to extend it. The problem with Perl is that it makes it all too easy to write nearly undecipherable crap; and in fact it could be argued that it encourages such coding practices. But I digress. The key is to use the proper tool for the job which is available to you. Unfortunately that is sometimes tools like Labview and Perl.

MarkW
Reply to  Paul Penrose
March 13, 2021 10:56 am

I agree that spending time thinking about what you want to do is critical, however, depending on how complex the problem you are working on is, it can be nearly impossible to think through all the possible variations and combinations prior to coding and testing.

A couple of years ago I was asked to write a program that could first format and display individual records from an input file, then, as a second phase, add a filter that could select individual records. They wanted to be able to do things like ((Field1+Field2)>Field3) & (Field4="D"). They also wanted the ability to make modifications to fields, such as right and left trimming, converting to upper or lower case, etc.
First I had to develop a way to describe incoming records so the program could isolate the individual fields. Then I had to design a way to parse and then resolve the equation. The parser and resolver had to be able to handle ASCII and binary data, arithmetic, and logical functions, as well as process the output of the various sub-functions.
I ended up designing a doubly linked list that fully described each element in the record. I went with a doubly linked list because I needed to be able to walk the list in both directions, and a linked list made it easier to remove elements as they were resolved. The parser walked the list to find the deepest elements, then it applied standard precedence rules to figure out the order within a single level of parentheses.

Applying all the rules, figuring out which functions don’t work with each other, and figuring out how to collapse the linked list as the functions are resolved ended up taking about 15,000 lines. It took two weeks to write the code. Then I started feeding it simple formulas and gradually increased the complexity. It took another two weeks until I was satisfied that all of the bugs had been beaten out of it. The program has been in use for two years and, other than an occasional request for enhancement, there have been no new bugs found.
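MarkW’s actual 15,000-line program is of course not shown here. Purely as an illustration of the general idea (recursive descent rather than his linked-list walk, and only a handful of toy operators), a filter like his example can be parsed and resolved along these lines:

```python
import re

# Toy grammar, loosest to tightest binding:
#   expr := cmp ('&' cmp)*
#   cmp  := sum (('>' | '<' | '=') sum)?
#   sum  := term (('+' | '-') term)*
#   term := NUMBER | "STRING" | FieldName | '(' expr ')'
TOKEN = re.compile(r'\s*(\d+|"[^"]*"|[A-Za-z]\w*|[()+\-<>=&])')

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        m = TOKEN.match(text, i)
        if not m:
            raise ValueError(f"bad token at {text[i:]!r}")
        tokens.append(m.group(1))
        i = m.end()
    return tokens

class Filter:
    def __init__(self, text, record):
        self.toks, self.pos, self.record = tokenize(text), 0, record

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def take(self):
        tok = self.toks[self.pos]
        self.pos += 1
        return tok

    def expr(self):
        left = self.cmp()
        while self.peek() == "&":
            self.take()
            right = self.cmp()              # always parse; no short-circuit
            left = bool(left) and bool(right)
        return left

    def cmp(self):
        left = self.sum()
        if self.peek() in (">", "<", "="):
            op = self.take()
            right = self.sum()
            return {"<": left < right, ">": left > right, "=": left == right}[op]
        return left

    def sum(self):
        val = self.term()
        while self.peek() in ("+", "-"):
            if self.take() == "+":
                val = val + self.term()
            else:
                val = val - self.term()
        return val

    def term(self):
        tok = self.take()
        if tok == "(":
            val = self.expr()
            if self.take() != ")":
                raise ValueError("missing ')'")
            return val
        if tok.isdigit():
            return int(tok)
        if tok.startswith('"'):
            return tok[1:-1]                # string literal
        return self.record[tok]            # field reference

def match(record, text):
    return bool(Filter(text, record).expr())

record = {"Field1": 3, "Field2": 5, "Field3": 7, "Field4": "D"}
print(match(record, '((Field1+Field2)>Field3) & (Field4="D")'))   # True
```

The precedence rules live in the chain of grammar functions rather than in a collapsing linked list, but the job is the same: find the deepest parentheses, resolve them, and work outward.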

Paul Penrose
Reply to  MarkW
March 13, 2021 11:53 am

Agreed. The problem domain is seldom small enough to allow you to consider every possible input, output, and processing path. However the more you can account for in the requirement and design phases, the fewer problems you will encounter later, and the less likely a costly logic redesign will be needed.

March 12, 2021 11:52 am

I was taught R in undergrad, but I switched to Bash/Awk after I found a good job out of school. I taught myself through patience and RTFM. It was quite a challenge and no one expected that from me.

I became so much more efficient than others stuck in Excel, R, or Python that I got an award for it 🙂

I haven’t been able to find an easier, terser and cleaner way to manipulate plain financial tables.

I like Linux because it’s a gui wrapped on top of a complete programming environment, rather than a programming environment you have to install on top of a black box gui.

After more than a decade of financial fortune telling (I still do it), I decided to put my skills to something else. I hope some people other than my children are enjoying mommy’s amateur science hobby.

Thanks for the post, Willis. -Z

Brian Jackson
Reply to  Zoe Phin
March 12, 2021 1:40 pm

“it’s a gui wrapped on top of a complete programming environment”
..
No, Linux is not a gui. Linux is not a programming environment. Linux is an operating system. The programming environment runs on top of Linux. The gui also runs on top of Linux. Linux can be run without either a gui or a programming environment.

Reply to  Brian Jackson
March 12, 2021 1:49 pm

Then Windows and Mac are just kernels too. Weird way to see things, no?

I like to include basic user utils as well. Linux uses them when booting into a USEABLE environment.

Brian Jackson
Reply to  Zoe Phin
March 12, 2021 3:12 pm

Current Macs are unix based. Windows is a bowl of spaghetti masquerading as an operating system (which you correctly called a black-box gui.)

Itdoesn't add up...
March 12, 2021 11:53 am

Been through much the same history on computing. At school I learned Algol and FORTRAN, and we had access to time on the local university’s computer (limited to 1 minute of runtime per programme). Punched out cards with a hand punch. Those doing economics were able to run a macroeconomic model at LSE via a teletype machine: today’s Sim-Econ games are probably much more sophisticated.

My introduction to modelling applications was somewhat different. I was fortunate enough to be employed in looking at some of the implications of the infamous Limits to Growth study, so the next language I learned was DYNAMO, in which their model was written (BASIC and 8080 assembler came later, when I got my first computer – a ZX81, which I soon expanded to 16k of RAM and in which I contrived to run refinery LP simulations). I spent some time picking over its entrails and writing other simultaneous differential equation models, both as programming practice and as numerical testing of the tendency of solutions to blow up because of the limitations of rounding errors and Runge-Kutta fourth order integrations. I then helped design (my co-designer was a physicist by training, and insisted on things like dimensional analysis) and did the writing and punching in FORTRAN and the many runs of a resource model. We had the benefit of some expert input from mining and geological and metallurgical specialists. We tried to capture some of their ideas in the modelling. Despite 4 card trays (about 2 ft long each), in reality it was fairly rudimentary. But it was more than enough to teach me that Limits to Growth was a prisoner of its assumptions for data and modelling: there were other answers that were more plausible.

Looking back on it decades later, I am pleased to report that our modelling turned out to be much closer to reality than that implied by Limits to Growth. Perhaps we were just lucky. Perhaps because we didn’t start by assuming the answer and tweaking the modelling to fit. The other great lesson was from seeing modelling used for political purposes. It has made me naturally suspicious of models, and insistent on proper relation to real world physics and measurement ever since. Not long after, I got introduced to some of the math of chaos and catastrophe theory which provided more insight on the need for caution in following models.

TallDave
March 12, 2021 11:57 am

who needs GCMs when anyone can model what the IPCC will decide the models should say

Tim Gorman
Reply to  TallDave
March 12, 2021 1:32 pm

Since most of the models turn into nothing more than y = mx+b equations after about 50 years all the complexity of the models is just wasted effort.

Clyde Spencer
Reply to  Tim Gorman
March 12, 2021 9:37 pm

Where y = temperature and x = CO2 concentration 🙂

ThinkingScientist
Reply to  Clyde Spencer
March 13, 2021 6:24 am

Actually it’s y = temperature and x = input forcings in W/m^2

fred250
March 12, 2021 12:05 pm

Another big problem is that if they are hindcasting to, say, GISS,

.. they are hindcasting to something that is a fabrication in the first place!

If they reproduce that temperature fabrication,

… they are almost certainly WRONG before they even start

fred250
Reply to  fred250
March 12, 2021 12:16 pm

E.g., if your elevator shaft is 100m tall, and you give it data that says it’s 110m tall

Things aren’t going to work very well !!

Joe Crawford
Reply to  fred250
March 13, 2021 10:23 am

Might work fine until someone presses the wrong button :<)

Tom Abbott
Reply to  fred250
March 14, 2021 9:46 am

That’s right, the alarmists are hindcasting the bogus, bastardized instrument-era Hockey Stick temperature profile.

They should try hindcasting for the *real* global temperature profile as represented here by the U.S. regional surface temperature chart, which is also representative of regional surface temperature charts from around the world.

U.S. regional surface temperature chart (Hansen 1999):


Hindcast this.

March 12, 2021 12:06 pm

About a decade behind you, e.g. PLATO IV, OPM. Modelers best assume Constructal law.

To bed B
March 12, 2021 12:06 pm

It’s not just computer models.

My favorite paper was one in which a colleague had a strange result. The measured rate coefficient should have been constant but was inversely proportional to the reactant concentration. I thought of a reason why this should occur, and a quantitative model that was just basic arithmetic. A computer was only used to calculate the values of three variable parameters for a best fit to the data.

The final equation had k inversely proportional to the concentration and variable parameters that were realistic when fitted to the data. Good enough to claim that we figured out what was happening. The maths of the model was easy to check, just arithmetic that barely filled half a page. That the fit was good was easy to check. It was a paper that would have passed peer review and been accepted as the truth.

But! We could measure one variable parameter using a completely different experiment, so we did, and got a result a factor of ten different. We still published it as a kind of close-but-no-cigar paper.

The point of the anecdote is that the model did not have the complexity of climate models. A chemical engineer could have used it as reality, plugging in variables and trusting the calculation. In reality, the mistake would have been picked up earlier but imagine if 97% of chemists said that they were 95% confident in it so ignore those who measured the variable parameter, and those sceptical of it should just shut up? Go to jail for racketeering, even.

chemman
March 12, 2021 12:08 pm

Well done Willis. I could follow what you were saying even though I’m not a programmer. I fell in love with chemistry after getting my first chemistry set in junior high. Went on to become a high school chemistry teacher. I always included a section on what models are and aren’t so the students wouldn’t get a false impression that models are reality.

March 12, 2021 12:14 pm

Brilliant synopsis.

Combine it with this interview of Judith Curry: https://www.youtube.com/watch?v=BOO9mafcA3s&t=2s

After understanding what you both have said, it makes me wonder how many people will have to die when the electric energy system is totally revamped because the models said we had to.

Lee L
March 12, 2021 12:16 pm

Wow Willis. So cool to read your history. I followed a similar path until I didn’t. Two 10th-grade students from every school in the city were chosen after candidates wrote a science/math exam, and I got to be one of those in 1964 (Joe Berg Seminars). We got to listen to and meet professors from the university once a week and I found it a lot of fun. When we were invited to the university to learn a bit about computers I was already learning about Eccles-Jordan circuits (the inventors of the flip-flop), and those punch cards we stole from the keypunch room were also good for making toy rockets!

I still have my 8k Commodore PET (Personal Electronic Transactor) with chiclet keys in the basement along with an aluminized paper printer and 20 virgin rolls of paper.

First commercial programming was done on a Z80 S-100 computer using CP/M and Bill Gates’ first Fortran compiler, to control a wood processing machine. I actually worked on a control system that had that magnetic donut memory, so you didn’t have to reload the program if the power died. It came as part of the huge waferboard press from Germany in ’84.

The model of our manufacturing process that I wrote in VBA and Excel showed me how easy it is to fool yourself with these programs. I have never understood how they manage the cumulative errors in iterative climate models. Maybe they don’t?

Lawrence E Todd
March 12, 2021 12:17 pm

I do not trust computer models because I worked with computer models going back as far as 1968.

Editor
Reply to  Willis Eschenbach
March 13, 2021 1:56 pm

Indeed!

Jan Fluitsma
March 12, 2021 12:19 pm

That brings back memories. My first employer in 1980 was Datapoint and the first computer I learned the internal workings of and how to repair was the 2200.
Two years later I joined the Amsterdam Stock Exchange as junior Cobol programmer. Punchcards were still used and very important for the department. They were our mail system, calendar, notepad, predecessor of the stickies, we could not function without them.

Brian Jackson
March 12, 2021 12:20 pm

“I learned my fourth computer language, MS-DOS.”
..
MS-DOS is not a computer language.

Mr.
Reply to  Brian Jackson
March 12, 2021 5:46 pm

You write commands in MS-DOS language. That’s all it recognizes.
If I entered “Brian is an asshat” as an MS-DOS command, it would not know what I meant.
(on second thoughts . . . )

fred250
Reply to  Brian Jackson
March 13, 2021 12:50 am

You really are showing yourself to be monumentally STUPID, brianless. !

MS-DOS is a set of words or commands that is used for communication between the user and the computer.

It is a computer language..

Its main function just happens to be user interface and disc operation instructions.

Chris Hall
Reply to  fred250
March 13, 2021 7:01 am

Ha ha. Of course you’re both right. DOS is a computer program that is both an operating system (“cleverly” masquerading as a set of interrupt calls) and a user interface which includes the ability to run a rudimentary scripting language (bat files). Some people are overly picky, but if someone says they programmed in DOS, I’m going to assume they mean bat file scripting unless they specifically mention an interpreted language like early BASIC running under DOS, or an early compiler (or even assembler). The best was Turbo Pascal. Most C compilers were agonizingly slow, and they had a bad habit of producing wayward pointers that could destroy the operating system, requiring a reset button or the infamous three-fingered salute to recover.

One of the reasons I liked Pascal was that it forced a discipline that increased the chances that the program would actually do what you wanted. The fact that you had to define things before using them meant that it was a 1-pass compiler, vital when processors ran at a few MHz. It also meant that the compiler and you had to be on the same page as to what variables meant. That shared understanding, and the insistence on strict type checking, meant that you tended to avoid the spectacular explosions caused by sloppy C code. Object Pascal’s fingerprints are all over C++, and although you can be sloppy with it, if you write programs as if they were in Pascal, you will be rewarded in the end.

One thing I just remembered. If you watch the original Terminator movie, the world from Ahnold’s point of view always has computer code scrolling by. I recognized where that code comes from because I once had to write subroutines in it to control lab equipment that was called from UCSD Pascal. It’s 6502 assembler code from an Apple ][, a computer not quite up to the job of controlling a Terminator.

fred250
Reply to  Chris Hall
March 13, 2021 11:00 am

Ah, the old Apple ][e. Did some early programming on them.

The “PEEK and POKE” commands were fun because I could build little interface boxes to control external machines.

Made a cute little car that could follow a curvy line on a sheet of card.

AARGH63
March 12, 2021 12:25 pm

Hmmm . . . core memory invented by Dr. An Wang, founder of Wang Laboratories.

Brian Jackson
March 12, 2021 12:27 pm

You left out mention of the cert from Aames, and your stint at Sonoma.

Brian Jackson
Reply to  Willis Eschenbach
March 12, 2021 5:04 pm

I thought it was a blog about climate, not the life and times of a wanna-be computer programmer.

MarkW
Reply to  Brian Jackson
March 12, 2021 8:40 pm

As usual, what you think and reality have little in common.

The point of the article, which you once again go out of your way to miss, was about why climate models range from bad to useless.

The life history was to show why Willis is qualified to make such judgements.

Tom Abbott
Reply to  MarkW
March 14, 2021 10:00 am

Well put, MarkW.

You are being very reasonable. I fear you may be wasting your time though, since Brian seems intent on disrupting the conversation with his angry comments, regardless of topic.

eyesonu
Reply to  Brian Jackson
March 12, 2021 8:53 pm

BJ you are such a blowhard.

fred250
Reply to  Brian Jackson
March 12, 2021 9:20 pm

Yet brian is making it about brian being a wannabe something.. anything!!

… and it’s failing!

You are still an abyss of empty blah !!

Your facade of egotistical bravado is hilarious. 🙂

You really do have deep-seated emotional and mental issues.

MarkW
Reply to  fred250
March 13, 2021 11:09 am

Pseudo intellectual is becoming a synonym for progressive.

beng135
Reply to  Brian Jackson
March 13, 2021 10:33 am

I thought it was a blog about climate

Brain-dead, read the “About” link at the top of the page. This blog is about much more than just climate.

John Dilks
Reply to  Brian Jackson
March 13, 2021 9:02 pm

Brian, you are getting tiresome. Are you a thirteen year old that is trying to start a fight from a safe distance? Your constant rudeness is just ridiculous.

Brian Jackson
March 12, 2021 12:28 pm

“Members of his class of models are notoriously cranky, unstable, and prone to internal oscillations and generally falling off the perch.” … especially when their predictions tell you to bring an umbrella to work.

Brian Jackson
March 12, 2021 12:29 pm

Do you think climate models are skillful?

fred250
Reply to  Brian Jackson
March 13, 2021 11:03 am

stoat! ROFLMAO

The very bottom of the fetid abyss when it comes to anything related to science or morality.

You really know how to dig deep into the putrid slime!

Tom Abbott
Reply to  Brian Jackson
March 14, 2021 10:09 am

Climate models have proven extremely skillful at reproducing the bogus, bastardized, instrument-era Hockey Stick chart profile.

That’s because the climate models were tuned to reproduce the bogus Hockey Stick chart “hotter and hotter and hotter” temperature profile.

You have a bogus Hockey Stick and you have bogus Climate models that reproduce the bogus Hockey Stick profile and it’s all dishonest computer manipulation of the data.

The real temperature profile of the Earth doesn’t look anything like the bogus Hockey Stick profile.

The real temperature profile of the Earth is based on actual temperature readings taken by human beings over many years.

The Hockey Stick chart and the Global Climate Models are nothing more than the programming of computers to reach a certain outcome.

A huge fraud has been perpetrated on the world by alarmist computer data manipulators. That’s what we have with the bogus Hockey Stick chart and the Global Climate Models that aim to duplicate a false temperature history.

WXcycles
Reply to  Brian Jackson
March 12, 2021 4:46 pm

NO

They are not testable in practice, because climate change is not a 30-year-period phenomenon just because a bunch of weather-record editors, with nothing but current data and altered records, declared it to be so.

Earth itself does not work in such corrupted ways to make a crust.

In geology, the only unambiguous climate change we know of (and there is no other kind we know of, btw) is resolved on a ~500-year time scale.

The LIA cycle was detected on a similar time scale.

All else is the product of unscrupulous corrupting of recent more detailed records.

While everything shorter in time-scale on the observational level is the net of weather cycle noise.

So the question of model predictive ‘skill’ is pure nonsense as applied to climate models, as they have no skill which they can demonstrate without recourse to a ‘time machine’ to sample a trend in 500 year future sample increments, which would negate the need for a GCM prediction in the first place.

Blah blah blah ‘skill’ blah blah blah.

Stop kidding yourself Brian, you fool no one here but yourself.

MarkW
Reply to  Brian Jackson
March 12, 2021 8:42 pm

40 years worth of wrong predictions show that climate models have no skill.

michel
Reply to  MarkW
March 13, 2021 12:24 am

One of them demonstrably does have skill, the Russian one. The problem is they take their one good proven successful model and then for some unaccountable reason mix it up with ones proven to have failed (by averaging their results). The result is they have degraded the one model that is probably fit for purpose.

If this were medicine, drug A, the equivalent would be saying we have 50 different models of the effects of large scale administration of drug A on the target population.

Some show mortality of 60%, some show cures of 70%. None come very close to predicting the results of previous real world trials. There is one model and only one which has successfully predicted the results of previous uses of A in this context, and it shows mortality of 20% and 5% cures, which tallies pretty well with field experience.

So what we do is average those results with all the ones that have failed, and tell our policy officials that this shows what is needed is large scale immediate dosing of the entire country…

Well, maybe I am missing something. Very much like to know what.
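michel’s complaint reduces to simple arithmetic. With invented numbers (nothing here is from any real model ensemble): suppose one forecast out of five is close to the truth and the rest are wild, and see what the ensemble mean does.

```python
truth = 20.0                                  # the value field trials later confirm
forecasts = [20.5, 45.0, 60.0, 38.0, 55.0]    # only forecasts[0] has demonstrated skill

skillful_error = abs(forecasts[0] - truth)
ensemble_mean = sum(forecasts) / len(forecasts)
ensemble_error = abs(ensemble_mean - truth)

print(f"skillful model error: {skillful_error:.1f}")   # 0.5
print(f"ensemble mean error:  {ensemble_error:.1f}")   # 23.7
```

Averaging the one skillful forecast with four failed ones moves the answer far away from the truth: the ensemble mean is only as good as the models it mixes in.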

fred250
Reply to  Brian Jackson
March 12, 2021 9:22 pm

“Do you think climate models are skillful?”

MOST CERTAINLY NOT !!

They are totally lacking in SO, SO MANY areas, and can’t even hit the side of a barn with their scatter gun!!

They barely reach the level of a low-end computer game

March 12, 2021 12:32 pm

Willis, that was a trip down memory lane for me. My main thing in college turned out to be math modeling of all sorts (for example, the classical predator-prey equations done three ways: in calculus, in numerically solved Fortran, and via probabilistic Markov chain matrices). My senior/PhD thesis (separate story), an input/output (I/O) dynamic model of nuclear power, was written in Fortran on Hollerith cards—two big boxes holding about 1000 cards. And yes, debugging was a real PITA. Bonus was that Wassily Leontief won his Nobel prize in Economics for I/O that year, and as a prized pupil I got to celebrate with him.
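For readers who haven’t met them, the classical predator-prey (Lotka-Volterra) equations are dx/dt = ax − bxy for the prey and dy/dt = dxy − cy for the predators. A minimal numerically-solved version (parameter values are illustrative only, and this is plain fixed-step RK4, not anything from the thesis) also checks a conserved quantity of the exact equations, whose drift measures the integration error:

```python
import math

# Lotka-Volterra: dx/dt = a*x - b*x*y (prey), dy/dt = d*x*y - c*y (predators).
a, b, c, d = 1.0, 0.1, 1.5, 0.075   # illustrative parameters only

def deriv(x, y):
    return a * x - b * x * y, d * x * y - c * y

def rk4_step(x, y, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1x, k1y = deriv(x, y)
    k2x, k2y = deriv(x + h / 2 * k1x, y + h / 2 * k1y)
    k3x, k3y = deriv(x + h / 2 * k2x, y + h / 2 * k2y)
    k4x, k4y = deriv(x + h * k3x, y + h * k3y)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

def invariant(x, y):
    # Conserved along exact solutions; its drift measures numerical error.
    return d * x - c * math.log(x) + b * y - a * math.log(y)

x, y, h = 10.0, 5.0, 0.01
v0 = invariant(x, y)
for _ in range(int(20 / h)):        # integrate for 20 time units
    x, y = rk4_step(x, y, h)

print(f"prey={x:.2f} predators={y:.2f} "
      f"invariant drift={abs(invariant(x, y) - v0):.2e}")
```

With a small step the invariant barely drifts; crank h up and the drift, and eventually the blow-up, appears, which is the rounding/step-size testing the comment describes.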

You DO get around. When I was the first global sr partner head of BCG’s then new ‘Time Based Competition’ operations practice in the late 1980s, I bought Stella for each office’s TBC team. Stocks, flows, converters, connectors. Visually simple so our clients could understand our work and recommendations. Got to know creator Barry Richmond very well during his time at Dartmouth. He used our stuff (disguised) to market Stella at his High Performance Systems, and he brought us new clients. Win-win.

Jolyon Hallows
March 12, 2021 12:45 pm

I think I’ve been programming computers even longer than you, Willis. The first ones were IBM plug boards. But throughout my history with computers, one phenomenon stands out: people trust whatever computers say. An example. I wanted a bank loan for a startup (involving computers, of course), so I did a simple model showing projected revenues and expenses, with the revenues exceeding the expenses (of course). I expected the bank manager to grill me about my assumptions. Never happened. He said, “This was done by a computer?” I didn’t correct him that it was done on a computer. Then he said, “It must be accurate. How much do you need?” I wish time had eroded this blind trust in electronics, but given the fealty paid to models–whether climate or COVID–I remain disappointed.

RicDre
Reply to  Jolyon Hallows
March 12, 2021 4:52 pm

“The first ones were IBM plug boards.”

I learned how to program IBM plugboard machines in college, though I never actually did it for a living; IBM accounting machines, collators, and reproducers, and of course keypunches and sorters, which were not programmable but necessary to feed the beasts with cards. Those plugboard machines could be pretty cantankerous things to program, but it was fun when, with enough plug wires and patience, you got them to do a complex job correctly.

Carlo, Monte
March 12, 2021 12:46 pm

Regarding core memory: about 1965 or 66 as a kid I went on a tour of IBM-Boulder (Colo.) which included a walk through the huge room dedicated to fabricating magnetic core memory planes. I remember row-after-row of people sitting at benches and peering through magnifiers to thread the x, y, and sense wires by hand. Those hundreds of kilobytes of main memory were exorbitantly expensive.

John in Oz
Reply to  Carlo, Monte
March 12, 2021 4:07 pm

The first thing I noticed in the core store memory photo was the lack of sense wires. How did the one shown sense if the polarity changed on a ‘read’?

FYI, in the Oz navy I maintained an Oz-designed anti-submarine system (Ikara) that used 32K core store memory to track submarines and guide a missile-mounted torpedo over same. It could process inputs from radar, sonar and external data links (other ships and/or helos).

Amazing what could be programmed into such a small amount of memory.

Booting the system required switch inputs using octal numbering for the code, then a magnetic tape. Lots of wire wrap backplanes as well.

Memories!!!!!

Carlo, Monte
Reply to  John in Oz
March 12, 2021 9:45 pm

I was puzzled by the lack of sense wires in the photo also.

Sounds a bit like Apollo programming.

Spent about 10 years running an HP-1000 RTE-VI data acquisition system, it also had a 16-bit front panel switch on the CPU. Had the switch setting memorized that booted it up from the disk drive. Wrote many lines of Pascal for it.

noaaprogrammer
Reply to  John in Oz
March 12, 2021 9:53 pm

Yes, the first thing I also noticed was the missing sensing wire that ran diagonally through the ferrite donuts. Maybe the photo was taken during manufacturing before the sense wires were threaded.

Dnalor50
Reply to  noaaprogrammer
March 13, 2021 1:13 pm

The sense wire is probably on the other side of the circuit board

Reply to  John in Oz
March 13, 2021 1:40 pm

>>
The first thing I noticed in the core store memory photo was the lack of sense wires.
<<

Core memory I studied usually had four wires–two for half-select, one for inhibit, and the sense wire.

Jim
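For readers unfamiliar with the four-wire scheme Jim describes, here is a minimal toy sketch of coincident-current addressing (an illustration only: the threshold value is invented, and a real plane also needs the inhibit and sense wires discussed above). A core flips only when both its X and Y lines carry a half-select current, so their sum exceeds the switching threshold; every other core on those lines sees only half and stays put.

```python
# Toy model of coincident-current core selection (illustrative only).
HALF = 0.5
THRESHOLD = 0.75  # invented value, between half- and full-select current

def drive(cores, x_sel, y_sel):
    """Flip exactly the core at (x_sel, y_sel) on a plane."""
    for (x, y), state in cores.items():
        current = (HALF if x == x_sel else 0) + (HALF if y == y_sel else 0)
        if current > THRESHOLD:          # full select: magnetization flips
            cores[(x, y)] = 1 - state    # half-selected cores are untouched

plane = {(x, y): 0 for x in range(4) for y in range(4)}
drive(plane, 2, 3)
print(sum(plane.values()))  # only the one addressed core has flipped
```

In the real hardware the inhibit wire cancels one half-current when writing a 0, and the sense wire picks up the pulse from a flip during a read, which is why reads were destructive and had to be followed by a rewrite.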

Editor
Reply to  John in Oz
March 13, 2021 2:03 pm

There may have been a scheme that fished the polarity changes by, umm, “back EMF” on the X and Y wires. That could have been a big win, as people may never have gotten good at automating threading the sense wire.

DEC hired skilled seamstresses to “sew” their core planes.

Likely boring work that paid well.

Joseph Zorzin
March 12, 2021 12:51 pm

Wow, great essay! I’ve wanted to get some understanding of climate models and what’s wrong with computer models in general. So now when some bozo tells me that climate science is settled because of definitive climate models- I’ll tell them to read your essay.

michel
March 12, 2021 12:54 pm

Enjoyed this a lot (maybe in part because I agree with it…!)

You allude in passing to something that has always troubled me about the models and particularly the spaghetti graphs.

It cannot be right, surely, to take a bunch of models, many or most of which are demonstrably failing, average their forecasts, and claim that the result has any kind of validity.

If this were medicine, for instance: we have a drug whose function we do not understand, we build a bunch of different models of how it works, then generate forecasts for treating the target population. The results vary from 70% cured to 60% dying. So we average them, declare that on balance the drug is forecast to be significantly beneficial, and say let’s go?

Or suppose we were designing a bridge. For a given set of girder parameters, the models’ results range all the way from failing under load once every five years to failing once in a hundred. We then average them to find out what girder parameters we should be using?

I would love to get an answer on this from someone who understands the subject better. It looks to me like a totally insane and unscientific procedure, completely unfit for the purpose of generating forecasts to be used in policy selection.

But maybe someone knows different. I am very willing to be persuaded, but so far I have not only found no satisfactory explanation of why averaging the bad with the good makes sense, I haven’t even come across an explanation of any sort.

Surely the only rational way is to reject the failing ones and stick with the better ones, until finally we get down to one decent one. Then use it.

Probably, from what I read, if we did that we’d end up with the Russian one, and all the alarm would evaporate.
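The objection above can be made concrete with a toy calculation. The forecast numbers below are invented purely for illustration, echoing the drug example: an ensemble average can report a modest "benefit" that not a single model actually predicts, while the enormous spread between the models gets hidden in the mean.

```python
# Toy illustration (invented numbers, not real model output):
# averaging wildly divergent forecasts yields a mean that no
# individual model predicts, and hides the disagreement.
from statistics import mean, stdev

# hypothetical net-outcome forecasts from five imaginary drug models
# (+0.70 = 70% cured ... -0.60 = 60% dying)
forecasts = [0.70, 0.55, 0.30, 0.10, -0.60]

avg = mean(forecasts)      # +0.21: "on balance beneficial"
spread = stdev(forecasts)  # roughly 0.51: larger than the mean itself
print(f"ensemble mean: {avg:+.2f}, spread: {spread:.2f}")
```

The mean comes out mildly positive even though one model forecasts a catastrophe, which is exactly the sense in which averaging the bad with the good manufactures a number with no model behind it.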

Reply to  michel
March 12, 2021 1:48 pm

Or if we were designing a bridge.

Idiotic things happen with “best models” 😀

On Christmas Day 2003, everyone was just about ready to put the shovels and the concrete pump in the corner and ring in the holiday when the construction management ran one final routine check.
They found a difference of 54 centimeters between the bridge structure on the German side and the structure on the Swiss side. Joining the bridge in the middle was going to be difficult. “Laufenburg, we have a problem,” the construction management radioed to headquarters.

Reference horizon

“The height difference can be corrected with minimal effort during road construction on the German side,” came the reassurance. But it was clearly pointed out: “The fault is on the Swiss side.”
The overall project manager, Beat von Arx, admitted as much in no uncertain terms, though at first even he did not know why it had happened.
Later, the public was enlightened. The cause of the error, he said, was that in road and bridge construction the elevations on the German and Swiss sides were based on different reference horizons.
Stripped of the technical jargon, that means the following: for all these calculations Germany refers to the sea level of the North Sea, while Switzerland, which evidently prefers to look south, takes its reference from the Mediterranean. The layman learns: sea level is not equal to sea level!

Minus became plus

But that is not all. The difference between the two reference seas amounts to 27 centimeters. This was known when the bridge was planned twelve years earlier, and it was duly included in the calculations.
Unfortunately, somewhere along the way someone turned a plus sign into a minus, so the values were corrected to the wrong side. In the end, the two bridge sections differed by 27 centimeters times 2, which equals 54 centimeters.

Translated with http://www.DeepL.com/Translator (free version)

German source
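The arithmetic of the blunder is easy to check. A two-line sketch, using the figures from the account above, shows how a flipped sign doubles a known offset instead of cancelling it:

```python
# The 27 cm datum offset (North Sea vs. Mediterranean reference level)
# was known and should have been compensated on one side. The sign was
# flipped, so the "correction" doubled the offset instead of removing it.
offset_cm = 27

intended = +offset_cm   # the compensation the planners meant to apply
applied  = -offset_cm   # what actually went into the calculations

gap_cm = intended - applied
print(gap_cm)  # 54 — the mismatch found between the two bridge halves
```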

michel
Reply to  Krishna Gans
March 12, 2021 2:30 pm

Wonderful. If it had been climate science… what would they have done? Split the difference, maybe, and had a 13.5 cm bump?