Forecast models encounter reality

Reposted from CFACT

By Kenneth Green | May 28th, 2020

Though forecast models have been a problem in the way they are utilized in public and environmental health for decades now, they have never before crested public awareness in quite the way they have in the time of COVID-19. People accustomed to seeing forecasts of things that are somewhat remote, somewhat abstract in time, place, and consequences are suddenly being exposed to how the sausage is made in predictive forecasting, and many are not liking what they’ve seen.

Many policy analysts (including myself) have been critical for decades of the way that forecast models have been incorporated into governmental decision making, arguing over the validity of forecasts projecting the impacts of tiny changes in air pollution, radiation, and chemical exposure; predicting species endangerment; predicting transit system ridership; predicting “peak oil;” projecting manmade climate changes; and much more. COVID-19 has brought the problem out of the tall weeds of policy analysis, and into everyone’s living room, where people a) have more time on their hands than usual, and b) have suddenly realized that putting faith in model projections is more than an abstract concern for policy wonks.

To be clear, computer modeling of complex systems has its place, which is mostly in the computer lab, where one can tinker with one or more variables and pit models vs. models to see which one best explains something in the real world. That’s very valuable. The problem with modeling occurs when it escapes the lab and is abused and misunderstood by policymakers and the public. Unfortunately, space is limited, so here are a few things to understand about models.

The first point should be obvious: computer models are a gross simplification of reality (the technical term is abstraction). Consider a picture of a mouse. The picture of the mouse tells you a lot of things, but really very little about the biology of mice. To understand those things, mice must be reduced into ever tinier aspects of mousehood – shape, chemical composition, biochemistry, behavior, capabilities, and so on, ad infinitum. Mickey Mouse, for example, is an abstraction of a mouse. When you see Mickey, you see a mouse, but in fact, Mickey tells you remarkably little (and a lot that’s not realistic) about mice. As the great astrophysicist George O. Abell explained in my early science education, to truly model something as simple as a mouse, you would have to have the knowledge to create the mouse, and humanity is far from doing that even for as small a thing as a virus (we still are far from it, and that was 40 years ago now).

The second point you should know is that the farther you move from modeling the tiniest of things, the less reliably models reflect the reality of what you are studying, because in modeling, errors accumulate, and all measurement includes error. So the more complex the model, the greater the uncertainty becomes.
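
To make that point concrete, here is a toy sketch with illustrative numbers (not drawn from any real model): if a model's output depends on many independently measured inputs, each carrying its own relative error, the standard rule for a product of independent factors is that those relative uncertainties add in quadrature, so the output's uncertainty grows with the model's complexity.

```python
import math

# Toy illustration: relative errors compound as a model chains together
# more measured inputs. For a product of independent factors, the
# relative uncertainties add in quadrature.
def combined_relative_error(per_input_error: float, n_inputs: int) -> float:
    """Relative uncertainty of a product of n independently measured inputs."""
    return math.sqrt(n_inputs) * per_input_error

for n in (1, 4, 16, 64):
    err = combined_relative_error(0.05, n)  # each input measured to +/-5%
    print(f"{n:3d} inputs -> +/-{err:.0%} on the output")
```

With each input known to ±5%, one input gives ±5% on the output, but 64 inputs give ±40%; and this is the best case, since correlated or systematic errors compound even faster.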

The third thing to understand is that trying to model complex things goes well beyond looking at variables we can actually measure, especially if we’re trying to forecast. Instead, we have to approximate those variables, which entails a variety of assumptions. (Indeed, even measuring the variables you can measure involves a host of assumptions about your ability to accurately measure what you’re looking at.) Assumptions are inherently subjective, which renders model outputs relatively useless as forecasting tools. To be fair, that’s why computer modelers talk about “projections” vs. “predictions,” a nuance that quickly gets lost in public policy discussion.

COVID-19 has brought these points home to people in a way they never have been before.

There is only space here for one example, though there have been many: models of COVID-19 mortality were produced almost daily even as policymakers instituted wide-reaching restrictions on people’s daily lives.

The Washington Post has a good, accessible article on the evolution of modeled death-estimates from COVID-19. The article is long, but well worth reading. This figure, in particular, summarizes the evolution of one of the most relied upon models, from the Institute for Health Metrics and Evaluation at the University of Washington (IHME). As you can see, the estimated mortality from COVID-19 shifted massively over time as some of those variables discussed above were clarified by the incorporation of new data:

[Figure: successive revisions of the IHME model’s estimated U.S. COVID-19 deaths]

As the figure shows, the plausible modeled range of fatalities from COVID-19 exceeded 150,000 deaths in the United States in early versions of the model, but the estimates were rapidly downgraded over a matter of days in April, as the model was revised with newer and better information. Even now, as the Post notes, there are battling models that generate quite different estimates of COVID-19 mortality in the US.

All of this would be relatively uninteresting to most people if, instead of COVID-19, scientists were modeling the lethality of, say, a virus affecting a particular species of, well, mice. But with people craving any kind of certainty they can get their hands on, and policymakers crafting policy in the fog of war, forecast models have to be taken with more than a grain of salt – an entire saltlick would be more appropriate. Hopefully, this new public insight into the limitations of computer modeling of complex systems will stay with people as they evaluate future forecasts of everything from health, to the environment, to economics, to pretty much everything. As superstar-scientist Dr. Anthony Fauci, head of the United States National Institute of Allergy and Infectious Diseases (NIAID), put it recently, “Models are really only as good as the assumptions that you put into the model.”

Rick McCargar
May 29, 2020 2:23 pm

Garbage in, garbage out. It’s never been different.

Reply to  Rick McCargar
May 29, 2020 2:41 pm

… and then draw pretty graphs of the output.

It’s art, bro.

Bryan A
Reply to  Robert Kernodle
May 29, 2020 8:05 pm

And…here we are now at the end of May (prediction plus 6 weeks) and the U.S. is at 102,000 deaths
Almost a full 10,000 more deaths than the April 2 prediction

Reply to  Bryan A
May 29, 2020 8:39 pm

In an uncharacteristic (for me) defense of the modelers – not even the nuttiest of the conspiracy theorists predicted that Blue State Governors would deliberately force facilities jam packed with the most vulnerable to take in the contagious.

Not taking that into account throws the projections off by at least 30K or so.

Reply to  Bryan A
May 30, 2020 4:57 am

Golly, I wonder how many of those were from kovid 19. My father in law passed away April 15th. His death certificate stated corona virus. No he didn’t. He died of congestive heart failure. It is estimated 2/3 of these people died from other causes!

Reply to  Fred
May 30, 2020 7:52 am

Colorado, under public pressure, revised down their Covid deaths by ~500 a couple of weeks back, after a review and a redefinition of dying with Covid versus dying from Covid. The 500 died with Covid but did not die from Covid, and they were a significant portion of the deaths at that point. Pulling a bit from memory, but I think it was about 1/3 of overall deaths.

Reply to  Rick McCargar
May 29, 2020 2:44 pm

And garbage out is politically predetermined.

Reply to  Zoe Phin
May 29, 2020 3:22 pm

Yes. Models are based on assumptions. If the assumptions are based on preconceptions, then the result cannot fail to represent that prejudice. There should be an “ergo” somewhere in that, and phrased in contrived Latin or geek Greek, but the GIGO law pretty much covers it.
I’ve got to add that models are also software. It is best to always assume that software is broken until proven otherwise. (hint: Artificial Intelligence is software, too)

Reply to  Scissor
May 29, 2020 6:12 pm

Better data is better

Reply to  TRM
May 29, 2020 8:00 pm

You can get more data, or you can get better data. However unless you are willing to spend a ton of money, you can’t get more data AND better data.

Michael S. Kelly
Reply to  Rick McCargar
May 29, 2020 8:27 pm

Yesterday, the United States had a total of 104,542 deaths attributed to coronavirus. That same day, the rate of new deaths stood at 1,212.

It strikes me that in one or two months, we’ll have hit that 150,000 death mark, and I don’t see it ending there.

So how bad was this model?

richard verney
Reply to  Michael S. Kelly
May 30, 2020 1:28 am

If studies from other countries are anything to go by, the number of patients dying of the virus, as opposed to dying with the virus, could be off by an order of magnitude. All studies have found that comorbidity is the cause of death in over 90% of all patients. In fact, a study in Italy, published in the first half of March, put it at about 95%.

It has been a deliberate policy to guess the cause of death, and not find out the true facts surrounding the cause of death, such that the ‘data’ is essentially junk, and does not permit proper scientific scrutiny.

Reply to  richard verney
May 30, 2020 7:50 am

Comorbidity does not mean the victim died of the non-COVID condition.
Excess mortality numbers are very clear: novel coronavirus is killing people.
It is killing them faster than they would normally die of comorbidities.
It is killing people who don’t have comorbidities.
It is killing young people.

don rady
Reply to  c1ue
May 31, 2020 7:53 am

maybe the excess deaths are caused from the lock-downs not just the corona-virus, ie; more suicides, drug overdose, alcohol abuse, people scared to go the hospital when they have a heart attack or stroke, etc. etc.

Michael S. Kelly
Reply to  richard verney
May 30, 2020 8:34 am

I suspect that’s true. The other missing piece is knowledge of how many people have antibodies, i.e. how many have actually been infected at one time. That, too, may hold surprises.

Tom Abbott
Reply to  Michael S. Kelly
May 30, 2020 4:19 pm

“So how bad was this model?”

This particular model is not bad at all, it is tracking reality pretty closely.

It makes one wonder which models all the model detractors are looking at. They can’t be looking at this model, otherwise they would have to agree it is tracking the numbers pretty well.

Maybe they are looking at the Imperial College model or something like that.

President Trump’s model, the University of Washington model, the one that initially predicted between 100,000 and 140,000 mitigated deaths from Wuhan virus, is right on the money.

Their latest prediction from a couple of weeks ago was for 137,000 Wuhan virus mitigated deaths.

Reply to  Tom Abbott
June 1, 2020 3:29 pm

This particular model is not bad at all, it is tracking reality pretty closely.

Well if the model is tracking reality as you say, then you’ll be able to confirm that one of the assumptions of the model included approximately 1/4 of deaths were to come from nursing homes:

Can you show that assumption was a coded assumption? If not, how can you logically argue that the model is “tracking reality pretty closely”?

Michael Lemaire
Reply to  Rick McCargar
May 29, 2020 10:26 pm

I’ll reuse that one: “computer models are to reality what Mickey Mouse is to a mouse”

Reply to  Rick McCargar
May 30, 2020 11:17 am

“The virus was ‘not a major threat’ to the U.S.” (Dr. Anthony Fauci, January interview)
“Models are really only as good as the assumptions that you put into the model.” (Dr. Fauci, late May)

“DR. MARTY MAKARY, PROFESSOR OF PUBLIC HEALTH, JOHNS HOPKINS UNIVERSITY: Well, I agree, Tucker. He (Dr. Fauci) is a very nice gentleman. He’s a good laboratory virologist. But you know, in terms of preparing this country, he missed it.
He missed this one. For two months, from January 15th, when we had our first confirmed case walking around on U.S. soil, until March 15th, with the country latching on to every word he says, he never once prepared this country with anything beyond simple hygiene and basic virology lessons.”

“If one assumes that the number of asymptomatic or minimally symptomatic cases is several times as high as the number of reported cases, the case fatality rate may be considerably less than 1%. This suggests that the overall clinical consequences of Covid-19 may ultimately be more akin to those of a severe seasonal influenza (which has a case fatality rate of approximately 0.1%) or a pandemic influenza (similar to those in 1957 and 1968) rather than a disease similar to SARS or MERS, which have had case fatality rates of 9 to 10% and 36%, respectively.” Dr. Anthony Fauci, NEJM, March 26.

“WHITE House coronavirus expert Dr Tony Fauci said Sunday lives could have been saved if US had been shut down earlier.” Dr. Anthony Fauci, April 12

TODAY show, February 12: “Dr. Fauci on coronavirus fears: No need to change lifestyle yet.” Dr. Anthony Fauci.

Dr. Fauci has pretty consistently made statements on both sides of the COVID19 problem, mostly with cautions about what we know, and then makes a recommendation. Many of his statements were based on models. Every time the models were updated (a futile exercise), the advice changed, often contradictorily.

NO WONDER people are fed up, confused, and on edge, when there is no way to make testable, accurate predictions that are useful for policy. Bad data is worse than no data at all!

Tom Abbott
Reply to  Philo
May 30, 2020 4:29 pm

““WHITE House coronavirus expert Dr Tony Fauci said Sunday lives could have been saved if US had been shut down earlier.” Dr. Anthony Fauci, April 12”

Yeah, if we had stopped travel from China back in December, we might not have any Wuhan virus in the United States.

So who was calling for doing that back in December? Nobody. How about January? There’s only one person I know who was for that, President Trump, who stopped the flights on Jan. 31, 2020, against the advice of every other party who chimed in.

Correction: U.S. Senator Tom Cotton (R-Arkansas) was promoting stopping China travel, too. He and Trump are the only ones I know who were in favor of it at that time.

Tom Abbott
Reply to  Philo
May 30, 2020 4:38 pm

I would also like to say again that I think Dr. Fauci and Dr. Birx have been treated very unfairly in this matter by people with an agenda of one kind or another. When I look at these two scientists, I see very careful, conservative people who are doing the best they can and giving the truth as they see it in a highly political environment.

There is a lot of emotion built up over the economic shutdown and a lot of it is being directed at these people because they are easy targets of opportunity for the frustrations of people. When people are in pain, they look for who is causing that pain. It’s not Dr. Fauci, or Dr. Birx.

Reply to  Tom Abbott
May 30, 2020 7:37 pm

Drs. Fauci and Birx signed on to an untested buggy TinkerToy model giving wildly exaggerated results, without making the slightest attempt to find out if it was any good.

And it’s worse. It was by Neil Ferguson, whose previous model-based predictions had been horrible. Not just bad. Pathetically, monstrously wrong.

Then they used that model to scare the politicians around them into doing something untested, something without scientific support—locking down an entire country.

As Sweden has demonstrated, it’s more than possible to fight the virus without taking the wheels off of the economy and throwing millions out of work. The damage from the lockdown is far larger and will last far longer than that of the few deaths avoided by locking down.

So no. “Doing the best they can” would mean INVESTIGATING THE DAMN MODEL BEFORE USING IT TO DESTROY LIVES. There are thousands of extra deaths due to the lockdown: suicides, murders, delayed and denied medical treatments, plus endless pain and suffering.

And that is on Drs. Fauci and Birx. They have NOT been “treated unfairly”. They have treated the American people hugely unfairly by using us as guinea pigs for their untested medico-economic experiment.


Reply to  Willis Eschenbach
June 1, 2020 3:38 am

Take a look at the Swedish Covid case rate. It shows no signs of abating the way most other European countries’ are. Their death rate is over 400 per million and is about to challenge France for one of Europe’s top spots. I wouldn’t use this model as the best way to go.

Reply to  Rick McCargar
May 30, 2020 5:24 pm

In climate “science” predicting the future is easy (you must say a crisis is coming or you get called a climate denier) … but predicting the past is hard, because the adjustments keep changing the past climate !

Robert of Texas
May 29, 2020 2:52 pm

“Models are really only as good as the assumptions that you put into the model.”

No, no they aren’t. You can input perfect knowledge and still get garbage out, so this quote is incorrect. The correct form would be:

“Models CAN be no better than the assumptions used to build them and the data used within them.”

The issues with, say, climate modeling are many: important processes are missing entirely (so missing assumptions), many assumptions are wrong or too simplistic, the historic data they use was never meant for this purpose, and rampant data manipulation has built in new biases missing from the original data. The final problem, chaos, makes climate projections past a certain amount of time completely unreliable. The best we will ever achieve is a list of outcomes associated with probabilities.

But none of this really matters as long as a large percent of so-called “scientists” believe that CO2 from fossil fuels is “driving” climate change. Until they can not only prove this, but correctly quantify it, the models are pure junk.

Reply to  Robert of Texas
May 29, 2020 7:28 pm

A wise person once taught this engineer “All models are wrong. But some are useful.”

Curious George
May 29, 2020 2:58 pm

Models are very precise, but do they model reality, or an alternate universe?

Paul of Alexandria
Reply to  Curious George
May 29, 2020 3:11 pm

They model the real universe, to a given degree of precision. If we didn’t have models, we wouldn’t have engineering; a great deal of an engineer’s job is trying to predict how a design will behave before it is actually built. And we’ve gotten rather good at it. On the other hand, any good engineer knows precisely when to stop trusting the models and start testing a prototype!

Tim Gorman
Reply to  Paul of Alexandria
May 29, 2020 3:52 pm

Part of the problem is that engineers typically create models of individual parts that can be combined to represent the entire system. For instance, it makes no sense to create a model for an engine and try to use that to predict the performance of one car with that engine vs another car with the exact same engine. Performance of the entire system for each car requires knowing car weight, car aerodynamics, transmission performance, differential performance, tire performance, etc. In other words it requires a multiplicity of models to produce a usable result.

What do we get from the climate modelers? Crap. It’s like they give us a model for engine performance and then want us to believe that it represents a picture of the entire system. The models give us nothing about rainfall, temperature, subsoil temperatures, subsoil moisture, and a whole host of other factors that are needed to actually make policy decisions.

What good does it do to know a global average temperature if you don’t know what the entire system does? Freeman Dyson made this point many, many years ago. And not a single climate scientist today seems to have heard what he said let alone take it to heart!

Just since the first of the year I have read an article about higher maximum temperatures causing world-wide food shortages, yet an *average* temperature doesn’t tell you what is happening to maximum temperatures. I have read an article about higher maximum temperatures causing increased desertification around the globe and another one about higher maximum temperatures causing increased rainfall around the globe. They both can’t be right, and they both assume higher temperatures in the future when a global *average* can’t tell you that there will be higher maximum temperatures.

And it just goes on and on and on…

Reply to  Tim Gorman
May 29, 2020 4:54 pm

Wonderful comment, Tim. Someone had to say it. Thank you.

Rick C PE
Reply to  Paul of Alexandria
May 29, 2020 4:48 pm

I would submit that the difference between a useful engineering model and a useless climate (or epidemic) model is “rigorous validation”. In engineering it is common practice to use models heavily in the design phase, but to still build and test prototypes before final approval and production. I.e. we trust, but verify.

Tim Gorman
Reply to  Rick C PE
May 29, 2020 6:25 pm

You pretty much hit the nail on the head.

Engineering models that I used a good number of years ago were mostly determinative. You put x, y, z, etc. in and you got A out. How much uncertainty there was in A was based on how you built your model.

In RF models (e.g. pspice) if you didn’t properly design in parasitic capacitances and inductances then your output had a pretty large uncertainty. When you built your prototype there was no guarantee that it would work like you thought. You could fudge things like adding a large capacitor between the negative rail and ground to simulate the distributed capacitance between top traces and the ground trace on the bottom but that was no guarantee that you wouldn’t run into unforeseen impacts in your prototype.

As you say this is part of the problem with current climate models. You can fudge all kinds of things to make the answer come out “right” but there is no way to actually validate the model against the real world by using a prototype. You either believe you got the “right” answer or you believe you don’t know if you got the right answer.

This is where Pat Frank’s analyses come into play. If your model needs fudging then you have no idea what unforeseen impacts that fudging will have on other parts of the model. It creates an uncertainty interval in any output that only grows every time you use the output of one run of the model as the input to the next model run.
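
To sketch that compounding with purely hypothetical numbers (a caricature of the argument, not Pat Frank’s actual calculation): if each model step feeds its output into the next and each step contributes an independent uncertainty, the envelope widens like a random walk even while the central projection looks smooth.

```python
import math

# Hypothetical numbers only: each iterated model step adds a smooth
# increment to the central projection, but also contributes an
# independent +/-0.5-unit uncertainty. Independent errors accumulate
# like a random walk, so the envelope grows as sqrt(n) steps.
step_uncertainty = 0.5
central = 0.0
for step in range(1, 101):
    central += 0.02  # smooth-looking trend per step
    envelope = step_uncertainty * math.sqrt(step)
    if step in (1, 10, 100):
        print(f"step {step:3d}: projection {central:4.2f} +/- {envelope:.1f}")
```

After 100 iterations the central projection has moved only 2 units, but the uncertainty envelope is ±5 units: the headline number looks precise while the band around it has swallowed the signal.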

Pat Frank
Reply to  Rick C PE
May 29, 2020 6:47 pm

And your engineering models are parameterized by experiment.

And the experiments step through the specification range.

And the model is used to interpolate behavior within the specification range, period.

Climate models have none of that and are used to predict well outside of the specification bounds.

Pat from kerbob
Reply to  Paul of Alexandria
May 29, 2020 9:05 pm

When I graduated from engineer school my brother the electrician sent me a wonderful picture, one of those old time pictures with a bunch of people in coats and top hats and the inscription below:
“There comes a time in every project when you have to shoot the engineers and start production”!!

Priceless even after 25 years

Citizen Smith
Reply to  Paul of Alexandria
May 30, 2020 10:05 am

Engineers model then multiply by 10. The first 5x covers variables in materials and workmanship. The second 5x is cya.

Reply to  Citizen Smith
June 6, 2020 5:33 pm

Citizen, I don’t know where you get your information from or what field of engineering you are talking about; I have spent a career in the field of pressure vessels, piping, valves and storage tanks, primarily steel and occasionally aluminum.
We design to 2/3 yield strength for pressure and hydrotest pressure equipment to 90% yield max. All the material heats are tested according to ASTM standards for yield and tensile strength and the chemistry of the steel verified by analysis. We employ radiographic inspection, Ultrasonic inspection and dye penetrant as required to determine the efficiency of welded joints. Welding procedures are qualified for the specific materials and the welders must also be qualified according to ASME pressure vessel code. There is a comprehensive inspection by qualified inspectors. The vessels must have overpressure protection mostly via a tested safety valve.
The materials have established properties at various temperatures by the ASME Pressure Vessel codes. Finite element models are used for complex shapes.
Industry that I work and consult in cannot afford explosions nor can they use some 10 factor due to cost.
What industry are you talking about?

Reply to  Curious George
May 29, 2020 3:39 pm

A frame of reference.

Another Ian
May 29, 2020 3:06 pm

And “assume” is the word that “makes an ass out of u and me”

Gregory Woods
May 29, 2020 3:15 pm


Climate Change And Deforestation Mean Earth’s Trees Are Younger And Shorter

Oh, really?

Another Ian
Reply to  Gregory Woods
May 30, 2020 12:49 am

Vertically challenged then?

Reply to  Another Ian
May 30, 2020 11:35 am

No, differently heighted. Get with it!

Ron Long
May 29, 2020 3:19 pm

This issue of modelling emphasized two important scientific points for me: it is close to impossible to know all of the interactive factors to put into a model (you might leave something out, either out of ignorance or to push the results in one direction), and some of the data you put in, which you believe to be both true and relevant, is wrong, and this pushes the output to one side. When I am in Chief Geologist mode I try to steer young geologists toward both curiosity and introspection. Their parents will already have provided them with professional conduct, or not. The earth’s climate is far more complex than a SARS virus, and this SARS virus is still somewhat ahead of useful understanding. Stay sane and safe.

Reply to  Ron Long
May 30, 2020 10:50 am

Could you consider leaving the “safe” out of your sign-off. The word has begun to affect me in the same way as fingernails on a blackboard.

Stay sane.

Carl Friis-Hansen
May 29, 2020 3:19 pm

The equation for model uncertainty:

Uncertainty = 0.3 * Sigma * days^2

Just kidding. But really, forecasting weather, for example, 3 days ahead is fairly precise these days, whereas 6 days ahead is only plausible. Forecasting 29,220 days (80 years) ahead sends the uncertainty reaching for infinity.
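
To put some toy numbers on that (illustrative only, not a real forecast-skill model): in chaotic systems, small errors in the initial state grow roughly exponentially, so if they double every couple of days, a tiny starting error swamps the forecast within a few weeks.

```python
# Illustrative numbers only: suppose an initial-state error doubles
# every 2.5 days, a crude stand-in for chaotic error growth in
# weather models.
initial_error = 0.01     # 1% error in the initial state
doubling_days = 2.5

def error_after(days: float) -> float:
    """Error magnitude after the given lead time, growing exponentially."""
    return initial_error * 2 ** (days / doubling_days)

for days in (3, 6, 14, 28):
    print(f"day {days:2d}: error grown by a factor of "
          f"{error_after(days) / initial_error:,.0f}")
```

By day 3 the error has merely doubled; by two weeks it has grown roughly fiftyfold, and well before a month the “forecast” is all error and no signal, which is why lead time matters so much.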

Reply to  Carl Friis-Hansen
May 30, 2020 8:07 am

I live about an hour’s drive from the Pacific Ocean; trust me, they can’t accurately predict out 3 days. If we enter a fairly predictable pattern like a blocking high then they’ll be able to go out a couple of days. Give us unsettled air and you can’t trust the morning’s forecast for what it will be doing that evening.

May 29, 2020 3:20 pm

To cut through the fog, there’s an actual analysis of the predictions of 11 different models here.

Worth a read, and the model that did the best is worth a look.


Reply to  Willis Eschenbach
May 29, 2020 3:59 pm

These are all so bad, it makes me want to burn down a Target store.

Michael Jankowski
Reply to  Willis Eschenbach
May 29, 2020 4:16 pm

MOBS – Northeastern U was 11th out of 11 on state-by-state comparisons…but 4th of 11 for the overall US.

Reminds me of climate models which suck on regional and continental scales when compared to observations but have a global anomaly or trend that looks halfway “good” compared to observations. Somehow that is acceptable or even worthy of praise. You can’t add garbage results to garbage results and have it add up to something valuable. Which then reminds me of this classic yet horrific moment in cinema…

Reply to  Willis Eschenbach
May 30, 2020 5:00 am

Thank you very much for passing this information on. The performance of the Machine Learning model is most impressive, as shown in the “View Projections” section. I am also impressed by the fact that the model keeps track of uncertainties…
Kudos to the folks that developed this important tool and are making it available to the public in a very user-friendly website.

Reply to  Willis Eschenbach
May 30, 2020 6:43 am

Quite a read, indeed. You ought to write an overview of their methodology and compare it to current MainStreamModels. I kinda think that their approach could be used in a climate model. This same article, but focused on climate models, would be interesting, eh?

May 29, 2020 3:24 pm

Well, model this:
What happens if ocean warms from about 3.5 C to 5 C?

Of course it matters where ocean waters are warmed: at what depth and where.
But what could help in that regard is that it’s going to take a long time to actually increase the global average temperature by about 1.5 C. Centuries.
So since it will take a long time, one assumes the added heat will be somewhat uniform; or, since there are differences now, one assumes a similar range of differences in a future ocean with an average temperature of about 5 C. If the ocean could warm to 5 C within a century, you could have greater differences.
An ocean with greater differences in regional temperatures is almost a measure of how fast Earth’s temperature is decreasing or increasing.
It’s commonly said that 90% of the ocean is 3 C or colder.
And it’s commonly said that 90% of all global warming warms the Ocean.
One could assume that if the ocean warms to 5 C, it would then commonly be said that 90% of the ocean is about 4 C or colder.
It should be noted that warming the entire ocean has little effect upon the tropics and a large effect upon the area outside the tropics.
The ocean’s average surface temperature is about 17 C: the average tropical ocean is about 26 C, and the remaining 60% of the world ocean averages about 11 C at the surface, which results in an average global ocean surface temperature of about 17 C.
And the average land surface air temperature is about 10 C, giving global average surface air temperature of about 15 C.
So what’s going to warm the most is ocean surface water outside the tropics and land outside the tropics, and the coldest land surfaces outside the tropics will warm the most.
The land of Canada and Russia, which is around -4 C, will warm the most, and that largely has to do with the winter temperature of cold land areas.
One could imagine it has a large effect upon the ocean-warming effects of the Gulf Stream, which currently is said to warm Europe by about 10 C. Europe’s average temperature is about 9 C and would be below 0 C if not for the warming from the Gulf Stream.
A 5 C ocean would warm Europe by more than it currently does.
In terms of dramatic effects, a 5 C ocean would mean less polar sea ice, which in turn would allow ice-free polar seas in the summer.
It doesn’t affect the tropical ocean much, because the tropical ocean has a very large slab of warm upper ocean waters, whereas outside the tropics one gets patches of warmer slabs, or “big warm blobs,” which eventually disappear.
The tropics have always been a “permanent vast warm blob,” and with the entire ocean warmer, the ocean surface waters outside the tropics would stay warmer longer.

One could also model an ocean where 90% was 2 C or colder.
I would say, that should give you a glacial period.

Reply to  gbaikie
May 29, 2020 4:13 pm

In reality, it was warmer thousands of years ago.

Reply to  Scissor
May 29, 2020 5:59 pm

Yes, the Holocene climatic optimum was warmer

And there were other times during our Ice Age, in which it was much warmer than our climatic optimum.

We have been cooling for thousands of years, and presently we have been climbing out of the deepest hole, called the Little Ice Age (the coldest period in the last 9,000+ years).

May 29, 2020 3:34 pm

The real problem is overselling. If one were strictly to view and use the IHME model as it was described and intended (RTFM), as an undeveloped logistics planning tool (RTFM) using close-to-real-time and unreliable data (RTFM), rather than as an oracle, then there might possibly be a little less disappointment.
Of course, despite universal experience, we always assume the competence and benevolence of those who are charged with making policy decisions. It is really easy, as well as recursive, to blame the software for the human error.

Pat from kerbob
Reply to  d
May 29, 2020 9:09 pm

I thought I was the only one who used RTFM


I did make up one myself to explain some computer problems: "a problem with the organic interface."

May 29, 2020 3:40 pm

“But with people craving any kind of certainty they can get their hands on, and policymakers crafting policy in the fog of war”
So is it any less foggy without a model? What should they do if there is a new disease killing rapidly increasing numbers of people? Something has to be done. What basis is there for a decision without a model?

Reply to  Nick Stokes
May 29, 2020 6:27 pm

Models should be used and refined by making specific predictions about the influenza season. Those that are in the top half get funding for next year. The rest can try again at their own expense. If they make the top half, they get funded the next year. Like pro golf, you only get paid if you finish "in the money".

Now you have a handful of models that make forecasts with reasonably known error ranges. When a pandemic comes along, you have some semi-competent models to work with.

Tim Gorman
Reply to  Nick Stokes
May 29, 2020 6:35 pm

You develop a PROCESS to be followed. You don’t make wild ass guesses about what is going to happen.

A process is like wearing a mask while using social distancing. The process doesn’t contain any guesses.

If the disease is not very infectious then the process can be terminated quickly. If it is more infectious and is also deadly then the process remains in place for a longer period. That is *exactly* the process that has been followed with covid, and it worked! All of the WAGs made by the so-called experts didn't change what the process was, and the WAGs didn't change the progress of the disease in any manner.

The fog from the WAGs only serves to scare people. The WAGs don't help actually solve anything.

Reply to  Tim Gorman
May 29, 2020 7:32 pm

“You develop a PROCESS to be followed. “
On what basis? Is playing golf a process? How do you decide what to include in the process without a model?

It’s true that you develop a process with a model. But without one, guessing is all you have.

Reply to  Nick Stokes
May 30, 2020 5:10 am


‘But without one, guessing is all you have.‘
If you don’t have enough knowledge to understand the system, and don’t have enough quality data to describe that system, your model is still only a guess!

The big problem with unfit models is that they give decision makers false confidence when making a decision AND provide a scapegoat for those who make the decisions.

So if your decisions cause untold damage, misery and suffering, all you need to do is 'blame the model/scientist' and you're off the hook and looking forward to the next election.
As most academics don't suffer any consequences when they are wrong (see the endless list of failed predictions with NO negative consequences for the predictors), this system works very well for both the politicians and the scientists. There is so much pressure on governments to be seen to 'do something' about 'it' that as soon as a potential scapegoat shows up, government takes the easy way out. Unfortunately, the common people end up suffering or paying the bill.

It would help if people realized that we live in an imperfect world and that government cannot protect you from everything, so sometimes it is far better to ‘do nothing’ and wait until you have a sound basis for your decisions as opposed to doing the wrong thing early on.

Whatever happened to the precautionary principle, once so popular amongst the industry of fear?

Stay sane,

Tim Gorman
Reply to  Nick Stokes
May 30, 2020 5:20 am


The process is what is effective in stopping transmission. It doesn’t depend on a model of how many people are going to die. The process is the *SAME* no matter how many will die. Or even get infected.

In fact, you are using an argumentative fallacy known as Equivocation. You are changing the definition of “model” to suit your purposes at the time. One time you use “model” to speak of determining how the disease transmits itself and the next time you use “model” to speak of guessing at how many will die.

E.g., HIV vs. Covid: the transmission vector is different for each, so the process to interrupt transmission has to be different. And yes, you need a transmission model to determine the process for each. But developing the interruptive process for each is *NOT* dependent on guessing how many people will die from each.

Using WAGs to develop public policy *is* nothing more than guessing at public policy itself.

Please stop trying to use Equivocation to defend your assertions.

Michael Jankowski
Reply to  Nick Stokes
May 30, 2020 4:01 pm

“…On what basis? Is playing golf a process? How do you decide what to include in the process without a model?…”

How did they fight wildfires in Australia last year? Show me the models that dictated the process.

Richard M
Reply to  Nick Stokes
May 29, 2020 7:48 pm

Nick, a bad model is worse than no model. Use the knowledge you have and collect more data.

Pat Frank
Reply to  Nick Stokes
May 29, 2020 9:35 pm

Use the knowledge gained from the behavior of past viral epidemics.

That’s how you make decisions without a statistical model.

Inglesby, et al. (2006). Disease Mitigation Measures in the Control of Pandemic Influenza. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 4(4), 366-375.

Abstract: The threat of an influenza pandemic has alarmed countries around the globe and given rise to an intense interest in disease mitigation measures. This article reviews what is known about the effectiveness and practical feasibility of a range of actions that might be taken in attempts to lessen the number of cases and deaths resulting from an influenza pandemic. The article also discusses potential adverse second- and third-order effects of mitigation actions that decision makers must take into account. Finally, the article summarizes the authors’ judgments of the likely effectiveness and likely adverse consequences of the range of disease mitigation measures and suggests priorities and practical actions to be taken.

Thomas V. Inglesby, MD, is COO and Deputy Director; Jennifer B. Nuzzo, SM, is Senior Analyst; Tara O’Toole, MD, MPH, is CEO and Director; and D. A. Henderson, MD, MPH, is Distinguished Scholar; all are at the Center for Biosecurity of the University of Pittsburgh Medical Center, Baltimore, Maryland.

Tom Abbott
Reply to  Pat Frank
May 30, 2020 4:50 pm

“Use the knowledge gained from the behavior of past viral epidemics.

That’s how you make decisions without a statistical model.”

Of course, and that’s how the initial models for the Wuhan virus were created, since they had very little knowledge of its behavior back in January.

The initial University of Washington educated guess was for between 100,000 and 140,000 mitigated deaths, and the current death toll is over 104,000. So their initial educated guess was a very good one. They are currently predicting about 137,000 mitigated deaths, after having added a lot of data to their model.

Michael Jankowski
Reply to  Nick Stokes
May 30, 2020 4:07 pm

“…So is it any less foggy without a model…”

A good model is useful and has results that benefit policymakers. A bad model that policymakers put faith in can be disastrous. How is that not completely obvious?

May 29, 2020 3:57 pm

Ohm’s Law (E=IR) is a model. It provides a very good approximation to reality as long as you know what you’re doing. Otherwise, even that very simple model will give you nonsensical results.

Ohm’s Law doesn’t calculate the behavior of every electron in a circuit. That’s not necessary. That brings us to CM et al’s Irreducibly Simple Climate Model.

Very often, modeling simple bulk behaviors is much more reliable than trying to derive the behavior of a system as the sum of many tiny parts.
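The point about E=IR being a model with a domain of validity can be sketched in a few lines. This toy function and its sample values are mine, purely illustrative, not anything from the comment:

```python
# Ohm's law as a model: I = V / R. Like any model, it abstracts away the
# device physics (temperature dependence, nonlinearity) and simply breaks
# down outside its domain of validity.
def current(voltage_v, resistance_ohm):
    if resistance_ohm <= 0:
        raise ValueError("outside the model's domain: R must be positive")
    return voltage_v / resistance_ohm

print(current(12.0, 4.0))  # 3.0 (amps through a 4-ohm load at 12 V)
```

The guard clause is the whole point: the model gives nonsensical results the moment you feed it conditions it was never meant to describe.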

Tim Gorman
Reply to  commieBob
May 29, 2020 6:39 pm

Anyone looking at the output of the climate models can quickly see that their output is basically y=mx+b. Trying to develop a complicated model whose final output is a simple linear equation is nothing more than pettifogging for monetary or political gain.
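The claim that model output reduces to a straight line can be made concrete with an ordinary least-squares fit of y = mx + b. The anomaly series below is invented for illustration, not taken from any model:

```python
# Ordinary least-squares fit of y = m*x + b: the kind of trend line
# the comment says climate-model output effectively reduces to.
def linfit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

xs = [0, 1, 2, 3, 4]              # e.g. decades
ys = [0.1, 0.32, 0.49, 0.72, 0.91]  # made-up anomaly series
m, b = linfit(xs, ys)
print(round(m, 3), round(b, 3))  # 0.202 0.104
```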

Reply to  Tim Gorman
May 30, 2020 1:50 pm

Here is some output of a climate model

Where do you see y=mx+b?

Tim Gorman
Reply to  Nick Stokes
May 31, 2020 4:51 am


1. The simulation you pointed to is not the output of a climate model.

2. go here:

Tell me again that the graphs of *future* temperature shown at this site are not y=mx+b linear equations.

3. Don’t like No. 2? try here:

Again,, the trend line for each of the models is a y=mx+b plot. You can deny that all you want but even a simple visual observation by the casual observer can see the truth.

Nick Schroeder
May 29, 2020 4:11 pm

Yes, models can be fraught with all kinds of problems from conception to the collection of algorithms to applications.
The part time programmer who inserted English instead of metric units.
The summer electrical engineer who did not understand how reed switches work and installed them backwards.
And lack of quality checks and oversight, e.g. a project manager/engineer who is too busy climbing the corporate ladder to check his subordinate’s work.
The epidemiologists who assumed the Covid-19 spread would be a catastrophic exponential when the data were quite clearly a modest second order fit.
And the following example of a math error as simple as an errant checking account entry.

According to the NASA heat balance computer model graphic (attached and/or linked) 163.3 W/m^2 make it to the surface.
18.4 W/m^2 upwell from the surface through non-radiative processes, i.e. conduction and convection.
86.4 W/m^2 upwell from the surface through latent processes, i.e. evaporation and condensation.
The balance upwells 163.3-18.4-86.4-0.6 = 57.9 W/m^2 as LWIR.

That’s it!
The energy balance is closed!

But what about this!?
LWIR: 398.2 total upwelling – 57.9 from balance – 0.6 absorbed = 340.3??
An “extra” 340.3 W/m^2 have just appeared out of thin air!!!???
So where does this 398.2 W/m^2 upwelling “extra” energy come from?
Well, actually the 398.2 W/m^2 is a theoretical “what if” S-B heat radiation calculation for an ideal, 1.0 emissivity, Black Body with a surface temperature of 289 K or 16 C.

The SINGLE amount of LWIR energy leaving the surface has just been calculated by TWO different methods!! and then combined to effectively double the amount!!!! much like entering your paycheck twice in your checking account register.
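For what it's worth, the arithmetic quoted from the NASA diagram can be checked mechanically. This verifies the sums only and takes no side on the physical interpretation; note that an ideal blackbody at exactly 289 K emits about 396 W/m^2, so the diagram's 398.2 figure corresponds to a temperature nearer 289.5 K:

```python
# Check the surface residual from the quoted NASA diagram figures,
# and the Stefan-Boltzmann flux for an ideal (emissivity 1.0) body at 289 K.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

residual_lwir = 163.3 - 18.4 - 86.4 - 0.6  # solar in, minus non-radiative fluxes
print(round(residual_lwir, 1))  # 57.9, as the comment states

print(round(SIGMA * 289**4))  # 396 -- the diagram's 398.2 implies ~289.5 K
```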

398.2 is THEORETICAL!!!!!
340.3 is NOT REAL!!!
340.3 VIOLATES conservation of energy!!!!!

And, no, it is NOT measured, except by amateurs who don't understand how IR instruments work or what emissivity is, and who assume 1.0 when emissivity is in fact 57.9/398.2 = 0.145. (Backwards reed switches.)

There is no 398.2 upwelling “extra” energy, there is no 340.3 “trapping” and “back” radiating “extra” energy, no RGHE, no GHG warming and no CAGW.

Nick Schroeder, BSME CU ‘78
Colorado Springs, CO

As demonstrated by experiment in the classical science tradition:

P.S. The collapse of CAGW will leave a bigger economic hole than the Covid-19 cluster **** cause it won’t be all the little people.

Reply to  Nick Schroeder
May 29, 2020 5:04 pm

How do you install a reed switch backwards? (ie. in a manner that will prevent it from working correctly)

Nick schroeder
Reply to  commieBob
May 29, 2020 5:29 pm

That’s what happened to the solar particle collector that went into the ground at 200 mph because the reed switches did not deploy the chutes.

Reply to  Nick schroeder
May 29, 2020 8:02 pm

I suspect there’s more to the story than just “backwards”.

On the other hand … even though a reed switch is just about the simplest device on God’s Green Earth … I can easily imagine an engineering student missing the point and finding a way to make a reed switch not work. Hey folks, you’re supposed to properly supervise engineering students so they don’t make dumb mistakes and cost the organization a million bucks. /rant

Tim Gorman
Reply to  commieBob
May 29, 2020 6:51 pm

Do you know how many kinds of reed switches there are?

Think of an alarm sensor on your window: the reed switch is not operated by an electric current being run through it. Or how about the reed switch in your flip-style cell phone that turns the phone on when you open it.

For these types of reed switch operation the reed switch may not have been installed backwards but the wrong ones may have been installed.

May 29, 2020 4:14 pm

As of 26 May, fully 26% of all Covid-19 deaths in the US occurred in the four counties which contain New York City. (New York State accounts for 29%.)

Quotable quotes:

“The secret of our success* has been communications and transparency.” NY Governor Andrew Cuomo (* Now that’s “chutzpah”!)

“You pick the 26,000 people who are going to die!” Andrew Cuomo (directed at President Trump for not immediately sending New York State 30,000 ventilators which subsequently were never used.)

“If you isolate, your family won’t get infected!” Gov. Cuomo, 13 April

“Shocking. … We thought maybe they were taking public transportation … but actually no, because these people were literally at home.” 7 May (on 66% of new cases having been in home isolation.)

“I’m out of business [of making judgments] because we all failed. The models were all wrong. [I’m] out of the guessing business.” Gov. Cuomo, 25 May

“Governor [Cuomo], it’s a professional honor to work with you. Your state has already shown what can be achieved when policies are driven by science.” Samir Bhatt, Senior Lecturer, Imperial College

Tom Abbott
May 29, 2020 4:23 pm

From the article: “The problem with modeling occurs when it escapes the lab and is abused and misunderstood by policymakers and the public.”

Isn’t that the truth! I think the author is doing a little abusing of the truth with this article.

From the article: “As the figure shows, the plausible modeled range of fatalities from COVID-19 exceeded 150,000 deaths in the United States in early versions of the model, but they were rapidly downgraded over a matter of days in April, as the model was revised with newer and better information. Even now, as the Post notes, there are battling models that generate quite different estimates of COVID-19 mortality in the US.”

That doesn’t make any sense. The original estimate of the University of Washington was for from 100,000 to 140,000 mitigated deaths. My latest Wuhan virus update says there have been 103,931 mitigated deaths to date.

That 100,000 to 140,000 range has been presented numerous times in news conferences by President Trump and his advisors, and obviously this first guess was pretty darn good.

I don’t know where you come up with those lower numbers. The death rate never moved like that, it was always going up, not down, during those time frames and it was obvious to everyone, so I find it hard to believe that the University of Washington was predicting fewer deaths than actually occurred. Although this only covered a five-day period so maybe that’s the trick you are using.

President Trump never wavered from the 100,000 to 140,000 figure. And it looks like he was correct to depend on this projection.

So here we have another person spreading disinformation about the virus computer models. Is it any wonder people are confused. A lot of people are working awfully hard to keep them confused.

Projection: 100,000 to 140,000 mitigated deaths from Wuhan virus

Reality: 103,931 mitigated deaths from Wuhan virus.

This particular virus computer model projection was right on the money. Is right on the money.

The only revisions I’m aware of with this virus model is they upped the death count from 134,000 a few weeks ago to 137,000 the next week.

Nick Schroeder
Reply to  Tom Abbott
May 29, 2020 6:03 pm

CDC tables show about 2/3rds the deaths as the MSM reports.

Reply to  Nick Schroeder
May 29, 2020 7:56 pm

“CDC tables show about 2/3rds the deaths as the MSM reports.”

The first thing you see at that link is this note:

“Note: Provisional death counts are based on death certificate data received and coded by the National Center for Health Statistics as of May 29, 2020. Death counts are delayed and may differ from other published sources (see Technical Notes). Counts will be updated periodically.”

Here is CDC’s Thursday press release:

“United States Coronavirus (COVID-19) Death Toll Surpasses 100,000”

Reply to  Tom Abbott
May 29, 2020 7:50 pm

“President Trump never wavered from the 100,000 to 140,000 figure.”

Really? Here is a timeline:
April 10th
“We did the right thing,” he said a bit later, “because maybe it would have been 2 million people died instead of whatever that final number will be, which could be 60, could be 70, could be 75, could be 55. Thousands of people have died.”

April 17th
“I think we’ll be substantially, hopefully, below the [100,000] number,” he said. “And I think, right now, we’re heading at probably around 60-, maybe 65,000.”

April 20th
“We did the right thing, because if we didn’t do it, you would have had a million people, a million and a half people, maybe 2 million people dead,” he said. “Now, we’re going toward 50, I’m hearing, or 60,000 people. One is too many. I always say it: One is too many. But we’re going toward 50- or 60,000 people.”

April 29th
“President Donald Trump is projecting that coronavirus deaths in the United States could reach 70,000.”
(They were already at 65,000.)

Michael Jankowski
Reply to  Nick Stokes
May 30, 2020 4:22 pm

In April, IHME dropped the lower end of projections from 100k to 60k by August (BTW mocking him for putting faith in model results in a thread where you are defending those models is priceless). But their range of estimates was still 60k to 240k. Why does a model have a permissible HUGE range of error while a person does not?

Of course since flu season estimates ranged from 24k to 60k – something we had just gone through and observed as we do every year – WTF is the difference between Trump estimates of 60k, 100k, and 140k with COVID in the first place?

Reply to  Michael Jankowski
May 31, 2020 3:12 am

“Why does a model have a permissible HUGE range of error while a person does not?”

I was responding to the claim that:
“President Trump never wavered from the 100,000 to 140,000 figure.”

In fact, it wasn’t a range of ERROR by the model. All model predictions are based on a scenario of how people respond. Science can’t predict that, at least not far ahead. So they predict something like 2 million if nothing is done, but when a control policy emerges, they predict a smaller number based on that being followed. If it turns out not to be followed, the prediction changes again.

Pat from kerbob
Reply to  Tom Abbott
May 29, 2020 9:15 pm

Except, how many of those actually died of covid?
How many were administrative paperwork to get more $$$

Follow the $$$$

Patrick MJD
Reply to  Pat from kerbob
May 29, 2020 10:25 pm

“Pat from kerbob May 29, 2020 at 9:15 pm”

We will never know because of…follow the money…

But watch for the gullible to follow draconian laws being enacted as we speak. Our freedoms are being sacrificed for safety.

Nick Schroeder
May 29, 2020 4:43 pm

Or just too early to check.

May 29, 2020 5:47 pm

Models, aka computer simulations, can only do what their programmers want them to do. Or rather, what the programmers think they want them to do. These aren’t computerized, mythical working crystal balls. They have no predictive value – the output is what the programmers want to see. They are little more than fancy, complicated spreadsheets. People should stop pretending otherwise.

May 29, 2020 5:47 pm

– A theory is an abstract mechanistic simplification that expresses one’s understanding of how a particular aspect of nature works. Its validity depends on how accurately it conforms to objective data.

– A mathematical model is a mathematical expression of one or more theories (usually many), usually with an automated calculation, intended to help understand complex problems of nature, involving multiple physical aspects, that are too complex for the human brain to easily integrate, especially dynamically (over time).

A mathematical model is:
– very useful for helping to understand the limits of our knowledge.
– useful for understanding the relative order of magnitude of the importance of different aspects of a problem.
– somewhat useful for interpolating between replicated experimental data points

– very dangerous when extrapolating beyond the bounds of the data that was used to formulate the theories.
– very dangerous in the hands of those who are unfamiliar with the science (theories) upon which the models are based.

Human knowledge, i.e. science, is always a vast simplification of nature. Mathematical models only help to express the limits of this knowledge. The error of both the climate modelers and the epidemiological modelers appears to me to be that the ‘scientists’ using these models are insensitive to both the limits of their own knowledge and the complexity of nature.

May 29, 2020 6:10 pm

In online discussions some were suggesting using “excess deaths” to see what effect the covid-19 disease is having and I thought that would be a reasonable approach as it gets past the deaths “with/from” issue.

That idea was backed up recently by Yoram Lass (formerly director-general of Israel’s Ministry of Health). In an interview he said “total deaths” was the only way to look at it.

As of 2020-05-28 the CDC data has North Carolina missing week 16 for 2020. The rest of the states are complete.

The entire USA has an “excess death” rate about 5.5% (50,331) higher than the 4 year average for weeks 1 to 16. As a comparison I checked the first 16 weeks of 2018 compared to the previous 4 year average and it was 7.2% (63,260).

All the data is from this CDC page:

The script and all related files are here if you want to kick the tires:

The script generates data for all 50 states plus DC and New York City (CDC treats it separately from New York State).
The “reports.xlsx” file has the cumulative totals for all reports.
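The "excess deaths" comparison described above is just a percent difference against a baseline average. A minimal sketch, reusing the comment's own USA figures; the implied baseline is back-computed from those figures, not taken from the CDC files:

```python
# Percent excess of a period's deaths over a multi-year baseline average,
# the comparison the comment runs against CDC weekly data.
def excess_pct(observed_deaths, baseline_avg_deaths):
    return 100.0 * (observed_deaths - baseline_avg_deaths) / baseline_avg_deaths

# The comment reports weeks 1-16 of 2020 ran 50,331 deaths (about 5.5%)
# above the 4-year average, which implies a baseline near 915,000.
baseline = 50331 / 0.055
print(round(excess_pct(baseline + 50331, baseline), 1))  # 5.5
```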

Tim Gorman
May 29, 2020 7:15 pm

“A mathematical model is:
– very useful for helping to understand the limits of our knowledge.
– useful for understanding the relative order of magnitude of the importance of different aspects of a problem.
– somewhat useful for interpolating between replicated experimental data points”

These only come into play if you can validate the mathematical model against empirical data. Climate models cannot be validated against the future. And past climate models don’t validate well against the empirical data collected since those models were run. I.e., the models tremendously over-forecast temperature gains. Current models likely over-forecast future temperature gains as well!

Beta Blocker
Reply to  Tim Gorman
May 29, 2020 8:49 pm

Every month, Dr. Roy Spencer publishes an analysis of global temperature trends based on satellite data. This is his analysis for April, 2020:

UAH Global Temperature Update for April 2020: +0.38 deg. C

This is the kind of graphical plot which is included in every monthly update:

Dr. Spencer’s April 2020 UAH Anomaly Plot

The 2557 comments that have been made on Dr. Spencer’s April 2020 monthly article cover every topic we can imagine in today’s climate science. (And some we can’t imagine.)

Tim Gorman, recognizing that your criticisms of Global Mean Temperature as a scientific and mathematical construct are mostly on target from a purely scientific, narrowly defined perspective, I ask you this question:

In your personal opinion, does the work Dr. Spencer and Dr. Christy do in aggregating masses of satellite data into highly condensed UAH data plots such as the one shown above have any useful value for any purpose whatsoever?

Reply to  Beta Blocker
May 29, 2020 10:46 pm

I’d rather have that than endless variable maps. As long as it is done consistently, Roy and Christy give some sense of comparison at a glance. Reducing a chaotic system to one number per month seems crazy, but a lot of people look for simplicity. Sadly, once it is in a graph, everything else goes out the window and it is taken at face value, particularly if it is in colour.

Tim Gorman
Reply to  Beta Blocker
May 30, 2020 6:33 am


How long does it take for the satellites to cover the entire earth? 10 days? 14 days?

When the temperatures taken by the satellite (the satellites don’t actually measure temperature directly, don’t you know) are spread over this amount of time and equate to ONE temperature reading (not an average at any location, just ONE reading) at each location read, EXACTLY WHAT DO YOU THINK AN AVERAGE OF ALL THESE READINGS TELLS YOU?

It can’t tell you what is happening to maximum temperatures or minimum temperatures at any specific location let alone on a global basis.

So no, I simply don’t put any stock in the satellite data to tell us what is actually happening with the temperature envelope of the entire earth, just like I don’t put any stock in a land-based thermometer “average” to tell us what is happening with the temperature envelope on a global basis, regional basis, or even local basis.

It’s why HVAC engineers don’t typically use an “average” daily temperature to design the heating and cooling units of a commercial building or residential building. They use cooling and heating degree-day values (essentially an integration of the temperature envelope above or below target temperatures). It is this kind of data that is of *real* use in determining what is happening to environment.

Ask yourself why the AGW climate alarmists refuse to move to using degree-day values instead of using “average” temperature? Why don’t they develop models that will tell us what the cooling and heating degree-day values will be in 10 years so I can size the heating and cooling units in my new house or my new office building in order to actually be able to handle the future environment?

Is it because it would be too difficult to do? I don’t buy that for a minute. Is it because it might dry up all the money being fueled by climate alarmism? *That* I would totally believe!
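For readers unfamiliar with the degree-day values mentioned above, a minimal sketch. The conventional 65 F base and the sample week of temperatures are my assumptions, not from the comment:

```python
# Heating/cooling degree-days: sum each day's shortfall below (HDD) or
# excess above (CDD) a base temperature -- a crude integration of the
# temperature envelope, which is why HVAC sizing uses it.
BASE_F = 65.0

def degree_days(daily_mean_temps_f, base=BASE_F):
    hdd = sum(max(0.0, base - t) for t in daily_mean_temps_f)
    cdd = sum(max(0.0, t - base) for t in daily_mean_temps_f)
    return hdd, cdd

week = [30, 42, 55, 65, 70, 80, 50]  # made-up daily means, deg F
hdd, cdd = degree_days(week)
print(hdd, cdd)  # 83.0 20.0
```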

Global climate is made up of regional and local climates. And it is the amalgam of the regional and local climate that actually determines the effect of climate change. Yet we never hear of regional or local climate models forecasting regional and local climate 80 years into the future. Why is that? If the physics of the global climate models are well known and can be used to accurately forecast 80 years into the future then those same physics should be able to be used in regional and local models to forecast regional and local climates 80 years into the future.

If the physics of the global climate models is so unknown and imprecise that it can’t be used for long-term local and regional climate forecasts, then exactly how well do they forecast the future global climate? How can you say that x+y+z+a+b+c = 60 if you don’t have any idea of what x, y, z, a, b, and c *are*? In essence that is what the global climate models are saying – we don’t know anything about regional and local climate but we know the global climate!

Ian Coleman
May 30, 2020 3:26 am

Much of the general public confidence in models arises from the fact that many of them do work. Newton’s laws of gravity are models. Even things like our current understanding of the structure of atoms are models. Unfortunately the models predicting the future of the Earth’s climate look suspiciously like science fiction, with a heavy dose of political activism.

And there are all kinds of weak excuses made when the models don’t work. Recall the pause in global warming. You had a period when the concentration of atmospheric carbon dioxide increased, but the surface temperature of the Earth did not. Didn’t that automatically falsify the theory that carbon dioxide concentrations are the driver of the Earth’s temperature? Why, no. Didn’t the failure of the models undermine the authority of the people who wrote the models? No. You denier.

May 30, 2020 5:36 am

In my 40 years of aerospace engineering, I have seen lots of aircraft and system models. The best models are those that take actual flight-test data and correct for model errors. It takes work, and the model guys have to be responsive to the flight-test guys. Just my input.

Tim Gorman
Reply to  Shark24
May 30, 2020 9:28 am

In other words, your models are validated against empirical data. That’s not true of the climate models. If your aircraft were built on a model similar to a climate model, it would wind up flying into mountains (not a single climate model forecast the 20-year-long warming hiatus). And the excuse that “well, it got the most recent 20-year period wrong but it will get the next 80 years correct” is just pathetic. If the physics of the climate models is so wrong that they can’t forecast the next 20 years correctly, then how can you have any faith they can forecast the next 80 years correctly? Such a belief has all the indicators of being a religious faith!

May 30, 2020 8:22 am

My really big break in the computer industry came 40+ years ago. At that time most everything in the computer industry was black and white.

By luck I was able to get hold of one of the first color plotters, and used this to produce a color overhead display presentation for the board of the bank I worked for.

They had never seen anything like it, and they gave us the $10 million we were asking for. Later we found out it was all due to the color. They didn’t look at the numbers.

May 30, 2020 10:44 am

IHME is also using fake data to go with the dubious assumptions. I’ve been watching their Georgia model. One stunt they pull is not reporting deaths on the day the state reports them occurring. They save some of them up, and then report them as occurring on a later day, to drive up that day’s total.

Georgia had peak deaths of 55 on 4/16. IHME claims — actual, not projected — 36. But moving over to 5/28, IHME claims actual deaths of 29.

The state says: 4. IHME shifted deaths around to keep the curve up.

For new cases, IHME claims Georgia had — again, actual, not projected — 615 confirmed infections on 5/29.

The state says: 60. IHME simply inflated the number by ten times.

Roger Knights
May 31, 2020 7:47 am

As Lockdowns Are Lifted, Is the COVID-19 Reproductive Number Rising or Falling?
Two models generate strikingly different estimates.
Jacob Sullum, 5.28.2020, 3:10 PM

The Gu model’s projections “are considerably less optimistic” than the projections from other widely cited models. Historically, Gu notes, his model’s COVID-19 death projections have matched the actual fatalities counted by the Johns Hopkins Coronavirus Resource Center much better than the model used by the University of Washington’s Institute for Health Metrics and Evaluation (IHME). On May 2, for instance, the Gu model predicted 101,950 deaths in the United States by today, compared to the IHME projection (since revised) of 71,918. The current Johns Hopkins tally is 100,415.

The Gu model predicted that the death toll would reach 100,000 by May 25, and that happened just a couple of days later. It is now projecting more than 200,000 deaths by August 28. A projection by the U.S. Centers for Disease Control and Prevention, leaked to the press early this month, predicted that mark would be reached by June 1, which thankfully has proven to be excessively pessimistic. But if history is any guide, the IHME projections err in the opposite direction. They currently go only as far as August 4, when the predicted death toll is about 132,000, compared to more than 173,000 in the Gu model.

Since the Gu model’s death projections incorporate its estimate of the reproductive number, it seems to have a pretty good handle on the latter, which suggests it is closer to the mark than the University of Utah model. Nationally, the Gu model shows the reproductive number falling from 2.26 on February 5 to a low of 0.91 on April 11, then beginning to rise on April 28 and reaching 1.02 today.
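Why the reproductive number quoted above matters can be sketched with one multiplication per generation. The case count, R value, and generation count below are illustrative, not the Gu model's:

```python
# Each generation of cases is roughly the previous generation times R:
# R < 1 means the epidemic decays, R > 1 means it grows.
def generations(initial_cases, r, n_generations):
    series = [float(initial_cases)]
    for _ in range(n_generations):
        series.append(series[-1] * r)
    return [round(x, 1) for x in series]

print(generations(1000, 0.91, 3))  # decaying toward zero at R = 0.91
print(generations(1000, 1.02, 3))  # slowly growing at R = 1.02
```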

June 1, 2020 4:06 am

Hold on! Are you saying that Mickey Mouse isn’t real?
