Our resident polymath Willis Eschenbach joins Heartland Senior Fellow Anthony Watts to discuss the parallels of hysteria and failure surrounding climate models and the coronavirus model that effectively put the world on hold.
Problem is, it’s worse than the old computer programmers’ adage of “garbage in/garbage out” this time.
I’m sure you’ve heard about Neil Ferguson, the philandering epidemiologist from Imperial College who created the COVID-19 model that governments used to make lockdown decisions. It turns out the model was hugely flawed, and the code was a train wreck.
As a result, Neil Ferguson’s COVID-19 model could be the most devastating software mistake of all time. Meanwhile, Willis has been graphing the true nature of this epidemic on a regular basis here at WUWT.
Even in climate science, the gloom-and-doom worst-case scenarios have been shown to need dialing back, except that it’s no longer climate “science” these days: instead of dialing back, the climate dogmatists demanded a 50% increase in future temperature predictions.
These unreal models are affecting real lives.
Read up on the early history of IBM, the “counting engines”, and how they were used to promote all manner of political agendas – in Nazi Germany, but also in the US.
As those of us in the computing industries know, models are all about the beliefs of their creator(s). Testing is how you reconcile reality with belief.
I followed the link to the Telegraph article by David Richards and Konstantin Boudnik and was very disappointed. After touting their credentials as software designers and developers, they state:
Which is not what I had read previously and which was then retracted by this note at the end:
So somebody apparently checked the facts with Imperial College.
But the disappointing thing is that I have to conclude these two highly credentialed software experts did not actually look at the code, which means they are simply repeating what they have heard or read elsewhere. Indeed, the first Telegraph article links to a second, which states:
There is no link to an actual report from Edinburgh University.
What I had read previously from a Google engineer was that the only thing released so far has been a rework of the original C code into C++ by Microsoft engineers hastily brought in by Imperial College. So far as I know, nobody outside Imperial College has seen the original code. It might be as bad as these articles claim, but we don’t know that for certain (it could of course be much worse).
This brings me to my real point. I’m not bashing Neil Ferguson or Imperial College here. Academics simply do not work in the same kind of accountability and liability environment that industry does, and their working practices reflect this. The real fault lies with the government officials who put the force of law behind an unverified computer model from a researcher with a less than spotless prior record.
Another complaint: where were our intelligence agencies? I’m sure they’re monitoring all kinds of potential threats to national security – military, economic and others. I’m sure they have plenty of people qualified to do detailed code audits, but they wouldn’t even need to go that far. It would be a two-hour job for an agency researcher to assemble the complete track record of Ferguson’s previous predictions. We’ve put 40 million people out of work based on model output that nobody ever audited. That’s not Ferguson’s fault, even if his model is crap.
Alan: the codes mentioned were in use, I believe, back in the ’80s. In nuclear plant design we were routinely required to validate reproducible outputs for given inputs as part of a project. I would guess that COVID-19 transmission is a relatively simple problem to analyze compared to what engineers have to deal with routinely, not only at the nukes but in many other industries where the outcome of an analysis can be the difference between life and death, or profit and loss.
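For what it’s worth, the kind of “same inputs, same outputs” validation described above is easy to automate. Here is a minimal sketch in Python; the run_model function and its parameters are hypothetical stand-ins, not anyone’s actual epidemic or plant-design code, and the point is only the reproducibility check itself:

```python
import random

def run_model(params, seed):
    """Hypothetical toy simulation standing in for a real model.
    Given the same inputs and the same seed, it must return the same outputs."""
    rng = random.Random(seed)
    infected = params["initial_infected"]
    history = []
    for _ in range(params["days"]):
        # toy stochastic growth step driven only by the seeded RNG
        infected *= 1.0 + params["r"] * rng.uniform(0.9, 1.1)
        history.append(round(infected, 6))
    return history

def check_reproducibility(params, seed):
    """Run the model twice with identical inputs and confirm identical outputs."""
    first = run_model(params, seed)
    second = run_model(params, seed)
    assert first == second, "Model is not reproducible for fixed inputs and seed"
    return first

if __name__ == "__main__":
    baseline = check_reproducibility(
        {"initial_infected": 100, "r": 0.2, "days": 30}, seed=42
    )
    print("Reproducible; final value:", baseline[-1])
```

In a regulated setting the baseline run would also be archived and every later code revision compared against it, so any change in output has to be explained and re-approved rather than slipping through silently.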
What’s going on is a crock.
One of the most important characteristics and virtues of consensus climate models is uncertainty. Paradoxical, because critics often cite model uncertainty as a big defect of climate models! Not so.
1. By admitting to uncertainty, modellers dispense with any need for quality control or traditional scientific testing and validation. It lets them do what they want.
2. Uncertainty also gives modellers vast scope to add kludges to their models, especially kludges leading to positive feedbacks. Again: it lets them do what they want.
The big difference between climate models and any other models is how similar all the climate models are while claiming to be ‘different’ (there are over 100 of them). All these ‘different’ models crib their basic radiative forcing ideas from Manabe and Wetherald (1967) and Held and Soden (2000), so the ‘different’ models are actually minor variations on the same model. This is clearly a carefully policed operation, because other (actually different) models of the greenhouse gas effect have been written, such as those by Miskolczi, David Evans, Dai Davies, Nasif Nahle, …
Mark: Good points… in business, when formulating and testing thermal performance guarantees, we have to take uncertainty into account, and Performance Test Codes (PTCs) provide standard means of calculating and applying uncertainty. Failure to understand and properly apply the PTC can lead to significant loss of money in the form of Liquidated Damages.
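For readers who haven’t seen how a test uncertainty is actually rolled up, here is a minimal sketch of the usual root-sum-square combination of independent components. It is only illustrative and in the spirit of the Performance Test Codes, not taken verbatim from any particular PTC; the function name and the numbers are mine:

```python
from math import sqrt

def combined_uncertainty(systematic, random_components):
    """Root-sum-square combination of independent uncertainty components.
    'systematic' holds bias-type contributions, 'random_components' holds
    precision-type contributions; all in the same units (here, percent)."""
    b = sqrt(sum(x ** 2 for x in systematic))          # systematic (bias) part
    s = sqrt(sum(x ** 2 for x in random_components))   # random (precision) part
    return sqrt(b ** 2 + s ** 2)

# Illustrative numbers only: component uncertainties in a heat-rate measurement, in percent
u = combined_uncertainty(systematic=[0.3, 0.2], random_components=[0.15, 0.1])
print(f"Combined uncertainty: +/- {u:.2f}%")
```

The commercial point is that the guaranteed value plus this combined band is what the Liquidated Damages clause bites on, which is exactly why the uncertainty has to be calculated by an agreed, auditable procedure rather than asserted after the fact.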
Do the other climate models you mention give more realistic answers? Do they quantify their uncertainty? Are they useful for hindcasting?
Or are they just another bunch of GIGO?