Mathematical modeling illusions

The global climate scare – and policies resulting from it – are based on models that do not work

Dr. Jay Lehr and Tom Harris

For the past three decades, human-caused global warming alarmists have tried to frighten the public with stories of doom and gloom. They tell us the end of the world as we know it is nigh because of carbon dioxide emitted into the air by burning fossil fuels.

They are exercising precisely what journalist H. L. Mencken described early in the last century: “The whole point of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.”

The dangerous human-caused climate change scare may well be the best hobgoblin ever conceived. It has half the world clamoring to be led to safety from a threat for which there is not a shred of meaningful physical evidence – no evidence that the climate fluctuations and weather events we are experiencing today are different from, or worse than, what our near and distant ancestors had to deal with, and no evidence that they are human-caused.

Many of the statements issued to support these fear-mongering claims are presented in the U.S. Fourth National Climate Assessment, a 1,656-page report released in late November. But none of their claims have any basis in real world observations. All that supports them are mathematical equations presented as accurate, reliable models of Earth’s climate.

It is important to properly understand these models, since they are the only basis for the climate scare.

Before we construct buildings or airplanes, we make physical, small-scale models and test them against stresses and performance that will be required of them when they are actually built. When dealing with systems that are largely (or entirely) beyond our control – such as climate – we try to describe them with mathematical equations. By altering the values of the variables in these equations, we can see how the outcomes are affected. This is called sensitivity testing, the very best use of mathematical models.
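To make the idea concrete, here is a minimal sketch of sensitivity testing in Python. The model, parameter names, and numbers are invented purely for illustration; they are not drawn from any actual climate model.

```python
# Toy sensitivity test: vary one input of a simple, made-up model and watch
# how the output responds. Nothing here comes from a real climate model.

def toy_model(solar_input, albedo, feedback):
    """Hypothetical steady-state response in arbitrary units (illustration only)."""
    absorbed = solar_input * (1.0 - albedo)
    return absorbed * (1.0 + feedback)

baseline = dict(solar_input=340.0, albedo=0.30, feedback=0.10)

# Sweep one variable across a plausible range while holding the others fixed.
for albedo in (0.28, 0.29, 0.30, 0.31, 0.32):
    params = dict(baseline, albedo=albedo)
    print(f"albedo={albedo:.2f} -> output={toy_model(**params):7.2f}")
```

The value of the exercise is in seeing how strongly the output moves when one input is nudged, not in the particular numbers produced.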

However, today’s climate models account for only a handful of the hundreds of variables that are known to affect Earth’s climate, and many of the values inserted for the variables they do use are little more than guesses. Dr. Willie Soon of the Harvard-Smithsonian Center for Astrophysics lists the six most important variables in any climate model:

1) Sun-Earth orbital dynamics and their relative positions and motions with respect to other planets in the solar system;

2) Charged-particle output from the Sun (the solar wind) and its modulation of the incoming cosmic rays from the galaxy at large;

3) How clouds influence climate, both blocking some incoming rays/heat and trapping some of the warmth;

4) Distribution of sunlight intercepted in the atmosphere and near the Earth’s surface;

5) The way in which the oceans and land masses store, affect and distribute incoming solar energy;

6) How the biosphere reacts to all these various climate drivers.

Soon concludes that, even if the equations to describe these interactive systems were known and properly included in computer models (they are not), it would still not be possible to compute future climate states in any meaningful way. This is because it would take longer for even the world’s most advanced super-computers to calculate future climate than it would take for the climate to unfold in the real world.

So we could compute the climate (or Earth’s multiple sub-climates) for 40 years from now, but it would take more than 40 years for the models to make that computation.

Although governments have funded more than one hundred efforts to model the climate over the better part of three decades, not one accurately “predicted” (hindcast) the known past – with the exception of one Russian model, which was fully “tuned” to observational data and accidentally matched it. The models’ average prediction is now a full 1 degree F above what satellites and weather balloons actually measured.

In his February 2, 2016 testimony before the U.S. House of Representatives Committee on Science, Space & Technology, University of Alabama-Huntsville climatologist Dr. John Christy compared the results of atmospheric temperatures as depicted by the average of 102 climate models with observations from satellites and balloon measurements. He concluded: “These models failed at the simple test of telling us ‘what’ has already happened, and thus would not be in a position to give us a confident answer to ‘what’ may happen in the future and ‘why.’ As such, they would be of highly questionable value in determining policy that should depend on a very confident understanding of how the climate system works.”

Similarly, when Christopher Monckton tested the IPCC approach in a paper published by the Bulletin of the Chinese Academy of Sciences in 2015, he convincingly demonstrated that official predictions of global warming had been overstated threefold. (Monckton holds several awards for his climate work.)

The paper has been downloaded 12 times more often than any other paper in the entire 60-year archive of that distinguished journal. Monckton’s team of eminent climate scientists is now putting the final touches on a paper proving definitively that – instead of the officially-predicted 3.3 degrees Celsius (5.5 F) warming for every doubling of CO2 levels – there will be only 1.1 degrees C of warming. At a vital point in their calculations, climatologists had neglected to take account of the fact that the Sun is shining!

All problems can be viewed as having five stages: observation, modeling, prediction, verification and validation. Apollo team meteorologist Tom Wysmuller explains: “Verification involves seeing if predictions actually happen, and validation checks to see if the prediction is something other than random correlation. Recent CO2 rise correlating with industrial age warming is an example on point that came to mind.”
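A toy sketch of the distinction, using invented numbers (this is not Wysmuller’s procedure or anyone else’s, just an illustration): verification compares a hindcast against what was actually observed, while validation asks whether any apparent skill beats a trivial baseline rather than being chance correlation.

```python
# Illustration only: all values below are invented.
# Verification: how close is the model's hindcast to the observations?
# Validation: does the model beat a naive baseline (here, "no change")?

observed = [0.10, 0.15, 0.12, 0.20, 0.25]    # hypothetical observed anomalies
hindcast = [0.12, 0.18, 0.20, 0.28, 0.35]    # hypothetical model hindcast
baseline = [observed[0]] * len(observed)     # naive "persistence" baseline

def rmse(predicted, actual):
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)) ** 0.5

print("model RMSE   :", round(rmse(hindcast, observed), 3))   # verification
print("baseline RMSE:", round(rmse(baseline, observed), 3))   # validation check
# Unless the model clearly beats the naive baseline, its "skill" may be chance.
```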

As Science and Environmental Policy Project president Ken Haapala notes, “the global climate models relied upon by the IPCC [the United Nations Intergovernmental Panel on Climate Change] and the USGCRP [United States Global Change Research Program] have not been verified and validated.”

An important reason to discount climate models is their lack of testing against historical data. If one enters the correct data for a 1920 Model A, automotive modeling software used to develop a 2020 Ferrari should predict the performance of a 1920 Model A with reasonable accuracy. And it will.

But no climate model relied on by the IPCC (nor any other model, for that matter) has applied the initial conditions of 1900 and forecast the Dust Bowl of the 1930s – never mind accurately predicted the climate in 2000 or 2015. Given the complete lack of testable results, we must conclude that these models have more in common with the “Magic 8 Ball” game than with any scientifically based process.

While one of the most active areas for mathematical modeling is the stock market, no one has ever predicted it accurately. For many years, the Wall Street Journal chose five eminent economic analysts to select a stock they were sure would rise in the following month. The Journal then had a chimpanzee throw five darts at a wall covered with that day’s stock market results. A month later, they determined who had performed better at choosing winners: the analysts or the chimpanzee. The chimp usually won.

For these and other reasons, until recently most people were not foolish enough to make decisions based on predictions derived from equations that supposedly describe how nature or the economy works.

Yet today’s computer modelers claim they can model the climate – which involves far more variables than the economy or stock market – and do so decades or even a century into the future. They then tell governments to make trillion-dollar policy decisions that will impact every aspect of our lives, based on the outputs of their models. Incredibly, the United Nations and governments around the world are complying with this demand. We are crazy to continue letting them get away with it.

Dr. Jay Lehr is the Science Director of The Heartland Institute which is based in Arlington Heights, Illinois. Tom Harris is Executive Director of the Ottawa, Canada-based International Climate Science Coalition.

183 Comments
zemlik
January 29, 2019 11:22 pm

Someone on here a while ago was saying that these models are linear.
They calculate the effect of a variable on something then calculate the effect of that on something else and so on, whereas in reality everything happens all at once and feeds back to everything else.
Is this correct?

davidmhoffer
January 29, 2019 11:52 pm

The best take down of the climate models that I recall was by richardscourtney, a onetime regular here. His argument was simple.

The models don’t agree with one another. There is only one Earth, so at most one model can be correct, and more likely than not none are. Since any model that is incorrect will increasingly diverge from reality the longer into the future it is run, it is absurd to average over a hundred models together, knowing that you are averaging results that are, by definition, wrong – with the possible (but unlikely) exception of one. Basing policy, or for that matter writing media puff pieces for the public, on models known to be wrong is not just absurd, it is irresponsible.
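A toy numerical sketch of that argument (every number below is invented; none of this reflects a real GCM): several models that each carry a slightly wrong trend drift further from the “true” trajectory the longer they run, and their average is simply another wrong trajectory.

```python
# Invented numbers, for illustration only: models with slightly wrong trends
# diverge from a "true" trajectory over time, and their average is still wrong.

true_trend = 0.010                                   # per-step change of the "real" system
model_trends = [0.005, 0.012, 0.020, 0.025, 0.030]   # each model's (wrong) per-step trend

steps = 40
truth = [true_trend * t for t in range(steps + 1)]
runs = [[trend * t for t in range(steps + 1)] for trend in model_trends]
mean = [sum(run[t] for run in runs) / len(runs) for t in range(steps + 1)]

for t in (10, 20, 40):
    spread = max(run[t] for run in runs) - min(run[t] for run in runs)
    print(f"step {t:2d}: truth={truth[t]:.2f}  ensemble mean={mean[t]:.2f}  spread={spread:.2f}")
```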

Louis Hooffstetter
Reply to  davidmhoffer
January 30, 2019 1:13 am

I like the take down of another regular here, RAH. I can’t recall the exact quote so I’ll paraphrase from memory:

“Climate model projections are fantasies that climatologists wish were happening in the real world. They’re not reality, they’re climate porn.”

Micky H Corbett
January 30, 2019 1:17 am

One suggestion to break the link between pure hypothetical musings and application to the real world is to provide an incentive to understand the limitations of the source data.

So any climate scientist who advocates action based on their models should also be happy applying the same verification level to other aspects in their life that have similar criticality.

In other words, drink from a batch of water deemed safe under similar standards. Eat from food deemed safe by similar standards.

I for one verify software for safety-critical applications as one strand of my work. The methods involved are equally applicable to water and food safety. I would be happy to eat food and drink water prepared under those standards.

knr
January 30, 2019 3:31 am

Models are required because the data or the ability to collect data is missing.
They are the second-hand car that is only bought because you cannot afford a new one.

The great ‘advantage’ they give is that, unlike reality, you can get any result you ‘need’ by playing around with the inputs.
Hence the first rule of climate science, ‘when reality and models differ in value, it is always reality which is in error’.

January 30, 2019 3:44 am

Surely the reason why weather forecasting is better now is due solely to satellites? Experience of what usually happens for a few days after a particular situation means anyone with that knowledge can make a fairly decent forecast. For the UK a modern version of “Red sky at night” would be “depression over western Atlantic, rain in three days”. Who needs a computer?

January 30, 2019 4:34 am

Alarmists produce models that look like Picasso’s painting “Guernica,” portraying the tragic 1937 bombing in Spain, rather than Constable’s painting “The Hay Wain,” depicting an 1821 English rural scene.

Consider how much we can learn about both situations. While Picasso’s painting should elicit a deep emotional response, it actually tells us little about the actual bombing of the town by German planes. Constable’s painting, however, gives us a clear picture of real rural English life at a particular point in time. This may be a useful analogy for contrasting a climate model with a real observation.

Donald Boughton
January 30, 2019 4:36 am

All climate models should be subject to third-party validation using hindcasting, and the results should be compared with the historical record. The half of the research groups whose model results most closely resemble the historical record should keep their funding; the remaining research groups should lose theirs.
All the research groups should be given 18 months’ to two years’ warning of the impending third-party validation. This validation process would provide evolutionary pressure on the research groups to improve their models by removing unwarranted assumptions and improving the wild guesses at the values of climate variables.

Steve O
January 30, 2019 4:49 am

I don’t see anything inherently wrong with relying on models, just because they’re models. After all, any model is designed to be a reflection of reality. Granted, an unreliable model has limited utility.

Actions need to be justified, and justification is limited by the reliability of the models. The question is: what actions are justified based on what we know?
– More funds for climate research? Sure.
– Funding to reduce the costs of nuclear power, in case we need to convert? Why not.
– Limiting global emissions and reordering the world’s economy? You’re kidding, right?

Even if the models were trustworthy, and pointed to CO2 emissions causing catastrophic warming, there are proposals being implemented that make no sense. Converting to wind and solar power? Wealth transfers? Sheesh. I’m supposed to think that the cabal is smart enough to model the earth’s climate, but not smart enough to see that wind power is stupid?

ANY actions need to be justified. Expected benefits must exceed expected costs. After you convince me that mankind’s CO2 emissions are causing catastrophic warming, you still have to convince me that we should take action. I would consider spending six trillion dollars in order to delay the warming by six years to be a futile gesture, with the money being better spent on adjusting ourselves to live in a warmer world.

The “do nothing” scenario is the foundational scenario. In the field of decision-making under uncertainty this is kindergarten level stuff. Yet that scenario is more than simply ignored. The very acknowledgment of its existence is met with boiling oil, fire, and arrows. I’m supposed to believe that these geniuses can model the earth’s climate but are unable to formulate a simple decision tree?
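To illustrate the kind of comparison being asked for, here is a bare-bones expected-cost sketch. Every probability and dollar figure below is an invented placeholder to show the structure of the decision, not an estimate of anything real.

```python
# Toy decision comparison under uncertainty. All numbers are invented
# placeholders that only illustrate the structure of the calculation.

p_bad_outcome = 0.10        # assumed probability of the costly outcome
damage_if_bad = 20.0        # hypothetical damage, trillions
options = {
    # name: (up-front cost in trillions, assumed fraction of damage avoided)
    "do nothing": (0.0, 0.0),
    "mitigate":   (6.0, 0.5),
    "adapt":      (2.0, 0.3),
}

def expected_cost(action_cost, damage_avoided):
    residual = damage_if_bad * (1.0 - damage_avoided)
    return action_cost + p_bad_outcome * residual

for name, (cost, avoided) in options.items():
    print(f"{name:10s} expected cost = {expected_cost(cost, avoided):5.2f} (hypothetical units)")
# A "do nothing" branch belongs in the tree so the alternatives have a baseline.
```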

Steve O
Reply to  Steve O
January 30, 2019 5:06 am

Just to be clear, I don’t mean to imply that the people creating climate models are not geniuses. I trust that most of them are. But where is my decision tree?!

Juan Slayton
January 30, 2019 5:26 am

If one enters the correct data for a 1920 Model A, automotive modeling software used to develop a 2020 Ferrari should predict the performance of a 1920 Model A with reasonable accuracy. And it will.

Automotive modeling software must be remarkable:

The Ford Model A was the second successful vehicle model for the Ford Motor Company, after its predecessor, the Model T. First produced on October 20, 1927….

Yeah, I know it’s Wikipedia, but the dates of Model A production are not, so far as I know, in dispute.
:>)

Coach Springer
January 30, 2019 7:16 am

The problem with debunking models is there will always be a new one that appears to debunk your debunking.

Bob Hoye
January 30, 2019 7:18 am

I was going to nitpick the 1920 Model A and Juan beat me to it.
The car produced in 1920 was a Model T.
My first car was a 1930 Model A Ford Coupe.
Bought in 1955 for $40.
However, the analogy of using cars is worthwhile.

Ken Irwin
January 30, 2019 8:16 am

I’d like to see a model that predicts the ride and handling characteristics.

Too complex.

Horsepower is easy – well defined via thermodynamics etc.

Inappropriate analogy.

Some models work well, others not so much – climate modelling is simply guesswork dressed up as science.

SLC Dave
January 30, 2019 8:49 am

The national weather service has probably saved millions of lives with their “silly models,” but the next time a Cat 5 hurricane is barreling toward the Gulf Coast you should probably just put on sunscreen, because you know it’s just a bunch of baloney.

Superchunk
Reply to  SLC Dave
January 30, 2019 9:35 am

I would bet it’s the plane- and space-based tracking that is saving lives. In fact, the false-alarm mentality created by models could arguably cost lives, since it seems like wherever the long-range models say a hurricane will go is where it’s likely not to go, except on the very rare occasions when it does.

Reply to  Superchunk
January 30, 2019 10:18 am

Here’s a discussion of the model predictions for the landfall of hurricane Sandy:
“The results show that ECMWF operational forecasts 8 days before landfall gave a strong and accurate indication of what was to happen. From 7 days before the landfall the high-resolution forecasts were consistent in its prediction of the landfall. The results from the ensemble forecasts allowed a significant degree of confidence to be attached to these forecasts but also showed signs of a too slow movement of the cyclone, which led to a timing error of the landfall.”

Sandy is a good test since it made an unusual turn to the west prior to landfall which was predicted by the model.

https://www.ecmwf.int/sites/default/files/elibrary/2013/10913-evaluation-forecasts-hurricane-sandy.pdf

Kurt
Reply to  SLC Dave
January 30, 2019 8:28 pm

Google the phrase “straw man fallacy.” You might find it informative.

Superchunk
January 30, 2019 9:21 am

I’m curious when, and based on what conditions, the models say the next glaciation should begin, or when and why the ice age we’re currently in (if my understanding is correct) will end. And what triggered the MWP and the LIA. Without this, I could create a model that simply rises at 0.5 C per century and matches the official models.

Robert of Texas
January 30, 2019 11:32 am

You can divide computer models into a few categories to determine which ones are theoretically capable of making a prediction over a given amount of time:

1) Bound or Unbound: A bound model will not exceed certain limits and will act against any movement toward an edge with increasing adjustments, so it has some medium range over which it is practical. The Earth would fit into this category, as it has never become like Venus. An unbound model can scurry off in any direction (degree of freedom) from some non-linear interaction the programmer didn’t predict. These are common early bugs in many models, requiring bounds to be added, and these bounds are often just guesses rather than limits discovered by studying the natural process.

2) Well Behaved or Chaotic: Well-behaved models transition from state to state in well-defined ways and are easy (or at least easier) to understand. Chaotic models can transition suddenly, with no warning, from very small changes in the inputs. The Earth’s climate is likely chaotic. Chaotic models are extremely difficult to get right when there are multiple variables to tune – tuning one can detune the others. Using an evolutionary learning process is about the only way to attempt to tune a chaotic model with many variables, and even then the answer may be only one of many possible correct answers.

3) Independent or Iterative: An independent model is given a set of initial inputs, calculates some final state, and then stops. An iterative model starts with a set of initial inputs and then uses the resulting state as the initial input for another run, producing another resultant state, and so on. Iterative models often suffer from divergence – that is, any error in a previous state is amplified in successive states, becoming more and more noise in the processing until all accuracy is lost. Iterative chaotic models that lack initial accuracy – that is, where any of the inputs are guesses – will suffer a complete failure of accuracy after enough iterations. Even if the model is bounded, it will likely just hit and stay at an edge.

If a system is Chaotic and Iterative, then it will be nearly impossible to predict over a sufficient time. The amount of time depends on the complexity of the system, and the magnitude of a state change when it goes through a chaotic shift. Climate fits into this class of problems – it is very complex and appears chaotic (i.e. goes through swift sudden changes).

A reasonable climate model is likely to be Bound, Chaotic, and Iterative. I say Bound because the Earth seems to stay within a narrow range of temperatures over time, Chaotic because most natural systems are to some degree chaotic, and Iterative because it is obvious that the weather today affects the weather of tomorrow, and climate is after all just a 30-year (or 100, or some amount of time) averaging of weather.
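A textbook toy example of what “iterative and chaotic” means in practice is the logistic map (nothing here is taken from an actual climate model): two runs that differ only in the sixth decimal place of the starting value end up completely different after enough iterations.

```python
# Logistic map: a standard textbook example of an iterative, chaotic system.
# A tiny error in the initial state grows until the two runs bear no relation.

def iterate(x0, r=3.9, steps=50):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)       # each state feeds the next iteration
        trajectory.append(x)
    return trajectory

run_a = iterate(0.500000)
run_b = iterate(0.500001)           # a tiny "measurement error" in the start value

for step in (0, 10, 25, 50):
    diff = abs(run_a[step] - run_b[step])
    print(f"step {step:2d}: {run_a[step]:.6f} vs {run_b[step]:.6f}  (difference {diff:.6f})")
```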

A favorite method for programmers of such models to keep failures from occurring is to add artificial boundaries and tests that look for behaviors that would result in an edge failure. These tests can herd a model’s behavior back toward a more expected result, so the model appears to complete its run giving realistic results. The problem, of course, is that the results are complete B.S.

But if you are predicting far enough out – say 50 years, then no one can call you out on it for at least 20 to 30 years. By then you either update your model to make the past 20 to 30 years look more reasonable, or you change the recorded historical data to better match your results, or you replace real measurements with model produced measurements so that reality no longer matters. Our Climate Scientists are trying all three. The correct scientific response would be to rebuild the model to better simulate what has been learned, but if you already know the answer, you cannot ever learn… This seems to be the root of the main problem.

Now, to say the people programming these models lied is not accurate – they truly believe the results before them – it’s as if their minds conveniently forget all the contrivances put inside the model. The people adding the contrivances are often working for the scientist who is trying to model the behavior, so the scientist may not even understand how bad the model is, and the programmers are just trying to make the code work. I have myself been fooled by a model I wrote, only to discover later it wasn’t predicting real-world results as much as just reflecting my own beliefs – it’s a very easy trap to fall into.

This is NOT a problem with models, or the idea of modeling. Models work very well in many applications. It is a problem with the application of models to such a complex natural system where testing is impossible unless you are patient and test over 100 years or so.

The real problem is a human one…people have jumped to a conclusion that “CO2 controls temperature” when it has become clearly evident that it at best “influences temperature somewhat”. As long as the scientists producing these models already know the answer, their models will produce nothing but biased results.

Kurt
Reply to  Robert of Texas
January 30, 2019 9:13 pm

“Iterative because it is obvious that the weather today affects the weather of tomorrow, and climate is after all just a 30-year (or 100, or some amount of time) averaging of weather.”

I’m not sure that’s right. Today’s weather is certainly correlated with tomorrow’s weather, but I don’t think you can assume a causal relationship where one “affects” the other. Instead, I’d very broadly visualize the climate system as some set of physical phenomena (ocean currents, sun shining on a spinning earth, clouds, mountains, etc.) that produces a sequence of chaotic short-term weather events but that collectively exhibits generally predictable long-term statistical behavior, such as seasonal cycles of temperature values, annual rainfall amounts within typical bounds, etc.

But a weather event in the real world, like say a hurricane, is something that can last for several weeks and move over many locations. A rainstorm over Seattle may last 48 hours, but depending on its timing it could cover anywhere between two to three calendar days. It then moves over the mountains and produces blizzard conditions there for a day or so, and so forth.

This doesn’t seem to be iterative behavior – it only seems so because we impose an arbitrary 24-hour boundary around what we call “weather.” But we do that only because we organize our human lives around a 24-hour calendar. Weather events don’t really care about when the calendar turns over to a new day.

RayG
January 30, 2019 12:47 pm

There exists a very extensive body of work on verification and validation (V and V) of simulation models in the peer reviewed archival literature. There is an accompanying body of work where others have replicated this work. This body of work dates back to the 1960s and continues to the present. Mr. Stokes, please identify any of the GCMs that have been subjected to a proper V and V testing and, if that has occurred, the results made public.

As an aside, the notion that “peer review” is some kind of gold standard is misguided at best. The gold standard is replication!

Alan Ranger
January 30, 2019 3:15 pm

Without even debating any of the assumptions underlying these models (CMIP5 etc.), it’s well worth watching Pat Frank’s damning exposé, showing the abject worthlessness of the whole computer climate modelling exercise:
https://www.youtube.com/watch?v=THg6vGGRpvA

Alan

Toto
January 30, 2019 6:58 pm

“GEOS is a global atmospheric model that uses mathematical equations run through a supercomputer to represent physical processes.” That’s from a new WUWT posting. That’s a good description of all climate models. First there is reality, then there is the known science of those physical processes, then there is the mathematical description of those, then there is the computational description of that, then there is the interpretation of the results, usually of the ensemble of results. Many steps, and at each step along the way there is a chance that something is left out or something is not as accurate as it should be. If this were proper science, everybody would be jumping to find problems and fix them. That happens with weather models, and weather models are getting better, and everybody acknowledges that they have their limits.

Climate models are similar in some ways but used in very different ways. Weather and climate are different things: weather being what you get, and climate being the range you could get. That’s not the problem. With weather, we only need to predict the things that change in a few days; CO2 and other greenhouse gases (other than water vapor) don’t matter. The problem with climate is that there are so many things that could matter that we know of, and probably more things that matter that we don’t, and our knowledge of the things that do matter is very incomplete.

So when the climate models predict more incoming radiation than outgoing, it’s going to get warmer. The science is settled, basic physics. Case closed. Except it isn’t, because the climate models aren’t very good at clouds and other things that affect those incoming and outgoing values. The science is not settled, because the science is hard, wicked hard.

It’s even harder when, in the rush to find the Holy Grail, everybody rushes down the wrong path – except Willis and a few others, who see emergent phenomena and self-regulating systems. It’s ironic that Willis’s Earth is more Gaia-like, while the mainstream view of Earth is of a crude, dumb machine.

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
– Hamlet (1.5.167-8), Hamlet to Horatio

January 31, 2019 2:27 am

A common error is the belief that CO2 itself raises the temperature. It does not. All it does is accept bundles of energy from the Sun – photons – and re-radiate them at a different frequency, which is absorbed by the other gases of the atmosphere.

It is the retention of this energy by the other gases which raises the temperature.

But at night this situation reverses. The CO2 now accepts heat energy from the warmth of the atmosphere and again changes the frequency back to the infrared. Part of that ends up going to outer space. That is why very dry places, such as deserts, get very cold at night. So poor CO2, blamed by the Greens for overheating the Planet, actually cools things down.

MJE

January 31, 2019 4:49 am

At a vital point in their calculations, climatologists had neglected to take account of the fact that the Sun is shining!

This silly notion is the message in Christopher Monckton’s mathematically incoherent video at https://www.youtube.com/watch?v=kcxcZ8LEm2A

If you dig into that video’s math, you find that the “term for sunshine” he goes on about actually has no effect in his calculations, and that his math all boils down to the six numbers in his “end of the global warming scam in a single slide” at https://wattsupwiththat.com/2018/08/15/climatologys-startling-error-of-physics-answers-to-comments/ – which implements the simple and erroneous notion that extrapolation of with-feedback temperature as a function of without-feedback temperature should be based on the average slope rather than the local slope.
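For readers who want the distinction in concrete terms, here is a generic numerical illustration (the function and numbers are arbitrary; this is not a reproduction of either side’s actual feedback calculation): for a nonlinear response E(R), extrapolating with the average slope E(R)/R gives a different answer than extrapolating with the local slope dE/dR, and the two agree only when E is proportional to R.

```python
# Arbitrary nonlinear response, chosen only to illustrate the difference
# between "average slope" and "local slope" extrapolation.

def E(R):
    return 0.002 * R ** 2 + 0.5 * R

R0, dR = 250.0, 10.0
average_slope = E(R0) / R0                       # ratio through the origin
local_slope = (E(R0 + 1e-6) - E(R0)) / 1e-6      # numerical derivative at R0

print("average-slope extrapolation :", round(average_slope * dR, 2))
print("local-slope extrapolation   :", round(local_slope * dR, 2))
print("actual change E(R0+dR)-E(R0):", round(E(R0 + dR) - E(R0), 2))
# The two extrapolations coincide only when E(R) is proportional to R.
```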

If the Heartland Institute’s science director really believes that extrapolation should be based on average slope, he’s likely to accept a lot of other mathematically incoherent things, too.

Usurbrain
January 31, 2019 9:11 am

It amazes me, as a person involved in developing the computer models for accident analysis and the training simulators for nuclear power plants, that it took over two years just to get a rough model of a nuclear power plant to the point where we could start verifying it worked correctly – AND we had every known parameter and the effects of each parameter on the process. From that point it took more than another two years to get the training simulator to duplicate the actual plant within 0.1% accuracy. Furthermore, we knew every parameter: no guesses, no fudging, no estimates, and no “tricks.” The same is true for flight simulators and other process plant simulators (e.g., gasoline refining plants).
NONE of this has been done with the climate models. And as the article states, the world does not have a computer capable of doing this. All the climate models are like the simple program I wrote in BASIC for CP/M about 50 years ago to model a nuclear power plant as a sort of game – hardly an accurate simulator.
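As an illustration of the kind of acceptance check the commenter describes (the data and the handling of the 0.1% threshold below are invented for the sketch, not the actual procedure used on that plant): the simulator’s output is compared point by point against recorded plant data and flagged wherever it falls outside the tolerance.

```python
# Toy acceptance check: compare simulator output against recorded plant data
# and flag any point outside a 0.1% relative tolerance. All values are invented.

plant_data = [100.0, 101.2, 102.5, 103.1, 104.0]   # hypothetical recorded values
simulator  = [100.0, 101.1, 102.5, 103.4, 104.0]   # hypothetical simulator output
tolerance = 0.001                                   # 0.1% relative tolerance

failures = []
for i, (plant, sim) in enumerate(zip(plant_data, simulator)):
    relative_error = abs(sim - plant) / abs(plant)
    if relative_error > tolerance:
        failures.append((i, round(relative_error, 5)))

print("points outside tolerance:", failures if failures else "none")
```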

RayG
Reply to  Usurbrain
January 31, 2019 12:21 pm

Usur, please comment on the V&V that was involved in developing your models. Thanks

Usurbrain
Reply to  RayG
February 1, 2019 8:39 am

I was placed on this project because I was the Instrumentation Engineer on several NPPs. I was responsible for developing the algorithms and function curves for controlling the reactor and feed-water system, and also for aligning and tuning the reactor and feed-water control system during power escalation. These calculations were used to make the initial model and to get a beta model. Once we got the model to roughly approximate the actual plant (and the beta was fairly close, just not within specs), we then used the computer data from the actual plant for actual power excursions, typical events like loss of a pump or loss of power, and normal operating events like increasing power, decreasing load, and every other normal operating procedure. Since this was in essence a closed system, every time we tweaked one function curve to match the plant, other function curves needed to be changed. It reminded me of doing the convergence on a color TV CRT, not an LCD.
Therein lies the problem with the climate models – none of this has been done. They do it for models of airplanes, cars, “cracking” plants like oil refineries, etc., but not for climate change. Thus they are just like that simple model I wrote for my son years ago so he could pretend he was operating an NPP.