A Simple Truth: Computer Climate Models Cannot Work

Guest opinion by Dr. Tim Ball –

Ockham’s Razor says, “Entities are not to be multiplied beyond necessity.” Usually applied when deciding between two competing possibilities, it suggests the simplest is most likely correct. It can be applied to the debate about climate and the viability of computer climate models. An old joke claims that economists try to predict the tide by measuring one wave. Is that carrying simplification too far? It parallels the Intergovernmental Panel on Climate Change (IPCC) objective of trying to predict the climate by measuring one variable, CO2. Conversely, people trying to determine what is wrong with the IPCC climate models consider a multitude of factors, when the failure is completely explained by one thing: insufficient data to construct a model.

IPCC computer climate models are the vehicles of deception for the anthropogenic global warming (AGW) claim that human CO2 is causing global warming. They create the results they are designed to produce.

The acronym GIGO (Garbage In, Garbage Out) reflects that most people working with computer models knew about the problem. Some suggest that in climate science it actually stands for Gospel In, Gospel Out. This is an interesting observation, but it underscores a serious conundrum. The Gospel Out results are the IPCC predictions (projections), and they are consistently wrong. This is no surprise to me, because I have spoken out from the start about the inadequacy of the models. I watched modelers take over and dominate climate conferences as keynote presenters. It was modelers who dominated the Climatic Research Unit (CRU), and through them, the IPCC. Society is still enamored of computers, so they attain an aura of accuracy and truth that is unjustified. Pierre Gallois explains,

If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.

Michael Hammer summarizes it as follows,

It is important to remember that the model output is completely and exclusively determined by the information encapsulated in the input equations.  The computer contributes no checking, no additional information and no greater certainty in the output.  It only contributes computational speed.

It is a good article, but it misses the most important point of all: a model is only as good as the foundation on which it is built, the weather records.

The IPCC Gap Between Data and Models Begins

This omission is not surprising. Hubert Lamb, founder of the CRU, defined the basic problem and his successor, Tom Wigley, orchestrated the transition to the bigger problem of politically directed climate models.

Figure 1: Wigley and H. H. Lamb, founder of the CRU.

Lamb’s reason for establishing the CRU appears on page 203 of his autobiography, “Through all the Changing Scenes of Life: A Meteorologist’s Tale”:

“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”

Lamb knew what was going on, as he cryptically wrote,

“My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climates being changed as a result of human activities, primarily through the burning up of wood, coal, oil and gas reserves…” “After only a few years almost all the work on historical reconstruction of past climate and weather situations, which first made the Unit well known, was abandoned.”

Lamb further explained how a grant from the Rockefeller Foundation came to grief because of,

“…an understandable difference of scientific judgment between me and the scientist, Dr. Tom Wigley, whom we have appointed to take charge of the research.”

Wigley promoted the application of computer models, but Lamb knew they were only as good as the data used for their construction. Lamb is still correct. The models are built on data that either does not exist or is by any measure inadequate.


Figure 2

Climate Model Construction

Models range from simple scaled-down replicas with recognizable individual components to abstractions, such as mathematical formulas, that are far removed from reality, with symbols representing individual components. Figure 2 is a simple schematic of the divisions necessary for a computer model. Grid spacing (3° by 3° shown) varies, and reducing it is claimed as a goal for improved accuracy. It doesn’t matter, because there are so few stations of adequate length or reliability. The mathematical formulas for each grid cell cannot be accurate.
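To put the grid in perspective, here is a minimal sketch, assuming a 3° by 3° grid and a spherical Earth of radius 6371 km (both simplifications), of how many cells such a grid contains and how large each cell is at different latitudes:

```python
import math

R = 6371.0      # mean Earth radius in km (spherical-Earth assumption)
spacing = 3.0   # grid spacing in degrees, as in Figure 2

# Number of cells in a 3-degree by 3-degree global grid
n_lon = int(360 / spacing)             # 120 columns
n_lat = int(180 / spacing)             # 60 rows
print(f"Grid cells: {n_lon * n_lat}")  # 7200

km_per_deg = 2 * math.pi * R / 360     # ~111 km per degree of latitude

for lat in (0, 45, 60, 80):
    ew = spacing * km_per_deg * math.cos(math.radians(lat))  # east-west width
    ns = spacing * km_per_deg                                # north-south height
    print(f"lat {lat:2d}°: cell ~{ew:.0f} km x {ns:.0f} km (~{ew * ns:,.0f} km²)")
```

Even at its coarsest the grid has 7,200 cells, each spanning on the order of 100,000 km² in the tropics and shrinking toward the poles; the question is how many of those cells contain any measurements at all.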

Figure 3 shows the number of stations according to NASA GISS.


Figure 3.

It is deceiving, because each dot represents a single weather station but, at the scale of the map, covers a few hundred square kilometers. Regardless, the reality is that vast areas of the world have no weather stations at all. Probably 85+ percent of the grid cells have no data. The actual problem is even greater, as NASA GISS, apparently unknowingly, illustrated in Figure 4.
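A rough back-of-the-envelope check, assuming roughly 71 percent of Earth’s surface is ocean and using the 7,200-cell grid sketched above (and treating cells as equal in area, which they are not), shows how quickly the empty cells add up even before accounting for the clustering of land stations:

```python
# Rough estimate: how many 3x3-degree cells could even contain a land station?
n_cells = 7200            # 120 x 60 grid from the sketch above
ocean_fraction = 0.71     # commonly cited share of Earth's surface that is ocean

ocean_cells = n_cells * ocean_fraction
land_cells = n_cells - ocean_cells
print(f"Cells that are mostly ocean: ~{ocean_cells:.0f}")
print(f"Cells over land:             ~{land_cells:.0f}")
# Even if every land cell held a station -- and Figure 3 shows they do not --
# roughly 70 percent of the grid would still have no surface station data.
```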


Figure 4.

Figure 4(a) shows length of record. Only 1000 stations have records of 100 years, and almost all of them are in the heavily populated areas of the northeastern US or Western Europe and so are subject to the urban heat island effect (UHIE). Figure 4(b) shows the decline in stations around 1960. This was partly related to the anticipated increased coverage by satellites, which did not happen effectively until 2003-04. The surface record remained the standard for the IPCC Reports. Figure 5 shows a CRU-produced map for the Arctic Climate Impact Assessment (ACIA) report.


Figure 5.

It is a polar projection for the period from 1954 to 2003 and shows “No Data” for the Arctic Ocean (14 million km2), an area almost the size of Russia. Despite the significant decline in stations in Figure 4(b), graph 4(c) shows only a slight decline in area covered. This is because they assume each station represents “the percent of hemispheric area located within 1200 km of a reporting station.” This is absurd. Draw a 1200 km circle around any land-based station and see what is included. The claim is even sillier if a portion includes water.
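To see what a 1200 km radius actually claims, a quick calculation is instructive; the Arctic Ocean figure is the one quoted above, while the hemisphere area is a standard round number assumed here for illustration:

```python
import math

radius_km = 1200.0                      # radius GISS assumes a station can represent
circle_area = math.pi * radius_km ** 2  # area "covered" by one station, km²

arctic_ocean = 14e6   # km², the "No Data" region in Figure 5
hemisphere = 255e6    # km², roughly half of Earth's ~510 million km² surface

print(f"Area within 1200 km of one station: {circle_area / 1e6:.1f} million km²")
print(f"Stations needed to blanket the Arctic Ocean: {arctic_ocean / circle_area:.1f}")
print(f"Share of a hemisphere per station: {circle_area / hemisphere:.1%}")
```

On that assumption, a single station stands in for about 4.5 million km², and three or four coastal stations would be treated as covering an ocean the size of the “No Data” region, which is precisely the point of the figure.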

Figure 6 a shows the direct distance between Calgary and Vancouver, 670 km; the two cities are close to the same latitude.


Figure 6 a

Figure 6 b, London to Bologna, distance 1154 km.


Figure 6 b

Figure 6 c, Trondheim to Rome, distance 2403 km. Notice that this distance is roughly the diameter of a 1200 km radius circle, and such a circle includes most of Europe.


Figure 6 c
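The quoted distances are easy to reproduce with a great-circle (haversine) calculation; the city coordinates below are approximate values assumed here for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points given in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Approximate city coordinates (latitude, longitude)
cities = {
    "Calgary":   (51.05, -114.07),
    "Vancouver": (49.28, -123.12),
    "London":    (51.51,   -0.13),
    "Bologna":   (44.49,   11.34),
    "Trondheim": (63.43,   10.40),
    "Rome":      (41.90,   12.50),
}

for a, b in [("Calgary", "Vancouver"), ("London", "Bologna"), ("Trondheim", "Rome")]:
    d = haversine_km(*cities[a], *cities[b])
    print(f"{a} to {b}: ~{d:.0f} km")
```

All three come out within a few kilometres of the captioned figures, so the 1200 km radius really does span distances on the order of Trondheim to Rome.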

An example of the problems with the 1200 km claim occurred in Saskatchewan a few years ago. The Provincial Ombudsman consulted me about frost insurance claims that made no sense. The government agricultural insurance agency had decided to offer frost coverage, and each farmer was required to pick the nearest weather station as the basis for decisions. In the very first year there was a frost at the end of August. Using the weather station records, about half of the farmers received no coverage because their station showed 0.5°C, yet all of them had “black frost”, so called because green leaves turn black from cellular damage. The other half got paid even though they had no physical evidence of frost, because their station showed -0.5°C. The Ombudsman could not believe the inadequacies and inaccuracies of the temperature record, and this on an essentially isotropic plain, especially after I pointed out that the readings came from Stevenson Screens, for the most part 1.25 to 2 m above the ground and thus above the crop. Temperatures below that level are markedly different.

 

Empirical Test Of Temperature Data.

A group carrying out a mapping project, trying to use data for practical application, confronted the inadequacy of the temperature record.

The story of this project begins with coffee, we wanted to make maps that showed where in the world coffee grows best, and where it goes after it has been harvested. We explored worldwide coffee production data and discussed how to map the optimal growing regions based on the key environmental conditions: temperature, precipitation, altitude, sunlight, wind, and soil quality.

The first extensive dataset we could find contained temperature data from NOAA’s National Climatic Data Center. So we set out to draw a map of the earth based on historical monthly temperature. The dataset includes measurements as far back as the year 1701 from over 7,200 weather stations around the world.

Each climate station could be placed at a specific point on the globe by their geospatial coordinates. North America and Europe were densely packed with points, while South America, Africa, and East Asia were rather sparsely covered. The list of stations varied from year to year, with some stations coming online and others disappearing. That meant that you couldn’t simply plot the temperature for a specific location over time.
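The difficulty the mapping team describes, stations appearing and disappearing from year to year, is the routine headache with this kind of data. A minimal sketch of one common workaround (the column names, toy values, and the completeness threshold are assumptions for illustration, not the team’s actual code) is to keep only stations with enough years of record and work with each station’s departures from its own average:

```python
import pandas as pd

# Assumed layout: one row per station, year, and month with a mean temperature.
# df = pd.read_csv("station_monthly_temps.csv")  # hypothetical input file
df = pd.DataFrame({
    "station_id": ["A", "A", "A", "B", "B"],
    "year":       [1990, 1991, 1992, 1991, 1992],
    "month":      [7, 7, 7, 7, 7],
    "temp_c":     [18.2, 18.9, 17.5, 25.1, 24.3],
})

# 1. Keep only stations reporting in at least N distinct years
#    (a threshold like 30 in practice; 2 here so the toy example survives).
years_per_station = df.groupby("station_id")["year"].nunique()
keep = years_per_station[years_per_station >= 2].index
df = df[df["station_id"].isin(keep)].copy()

# 2. Convert each reading to an anomaly relative to that station's own
#    month-by-month average, so stations with different baselines are comparable.
df["anomaly"] = df["temp_c"] - df.groupby(["station_id", "month"])["temp_c"].transform("mean")

print(df)
```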


Figure 7

The map they produced illustrates the gaps even more starkly, but that was not the only issue.

At this point, we had a passable approximation of a global temperature map, (Figure 7) but we couldn’t easily find other data relating to precipitation, altitude, sunlight, wind, and soil quality. The temperature data on its own didn’t tell a compelling story to us.

The UK may have accurate temperature measures, but it covers a small area. Most larger countries have inadequate instrumentation and measurements. The US network is probably the best, and certainly the most expensive, yet Anthony Watts’ research showed that only 7.9 percent of US weather stations achieve an accuracy better than 1°C.

Precipitation Data A Bigger Problem

Water, in all its phases, is critical to movement of energy through the atmosphere. Transfer of surplus energy from the Tropics to offset deficits in Polar Regions (Figure 8) is largely in the form of latent heat. Precipitation is just one measure of this crucial variable.
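The scale of that latent heat transfer can be illustrated with a rough calculation; the roughly 1 m of global mean annual precipitation and the latent heat of vaporization used below are standard round figures, assumed here for illustration:

```python
# Rough estimate of the global mean latent heat flux implied by precipitation.
L_v = 2.5e6            # latent heat of vaporization of water, J/kg
rho_water = 1000.0     # density of water, kg/m^3
precip_m_per_yr = 1.0  # assumed global mean annual precipitation, ~1 m/yr
seconds_per_year = 3.156e7

# Every metre of rain that falls had to be evaporated somewhere first.
evap_kg_per_m2_s = rho_water * precip_m_per_yr / seconds_per_year
latent_flux = L_v * evap_kg_per_m2_s   # W/m^2

print(f"Implied global mean latent heat flux: ~{latent_flux:.0f} W/m^2")
```

That is on the order of 80 W/m², a large share of the energy the surface sheds, which is why a variable measured this poorly matters so much.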


Figure 8

It is a very difficult variable to measure accurately, and the records are completely inadequate in space and time. An example of the problem was exposed in attempts to use computer models to predict the African monsoon (Science, 4 August 2006).

As Alessandra Giannini, a climate scientist at Columbia University, noted, some models predict a wetter future and others a drier one: “They cannot all be right.”

 

One culprit identified was the inadequacy of data.

One obvious problem is a lack of data. Africa’s network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.

It is likely that very few regions meet the WMO recommended density. The problem is more complex because, while temperature changes are relatively uniform (although certainly not over 1200 km), precipitation amounts vary over a matter of meters. Much precipitation comes from showers produced by cumulus clouds that develop during the day. Most farmers in North America are familiar with one section of land getting rain while the next is missed.
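Taking the quoted figures at face value, and assuming a land area for Africa of roughly 30 million km² (a round figure used here for illustration), the gap is easy to quantify:

```python
# What the quoted Science figures imply about station density in Africa.
africa_area_km2 = 30e6      # assumed round figure for Africa's land area
actual_stations = 1152      # weather watch stations quoted in the article
density_shortfall = 8       # "one-eighth the minimum density recommended by the WMO"

km2_per_station_actual = africa_area_km2 / actual_stations
recommended_stations = actual_stations * density_shortfall
km2_per_station_recommended = africa_area_km2 / recommended_stations

print(f"Actual:      one station per ~{km2_per_station_actual:,.0f} km²")
print(f"Recommended: one station per ~{km2_per_station_recommended:,.0f} km²")
print(f"Stations implied by the WMO minimum: ~{recommended_stations:,}")
```

One station per roughly 26,000 km² means a single gauge standing in for an area larger than Sicily, and even the recommended minimum works out to one per roughly 3,300 km², far coarser than the scale at which convective showers vary.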

Temperature and precipitation, the two most important variables, are completely inadequate to establish the conditions, and therefore the formulas, for any surface grid cell of the model. As the latest IPCC Report, AR5, notes in two vague understatements,

The ability of climate models to simulate surface temperature has improved in many, though not all, important aspects relative to the generation of models assessed in the AR4.

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature.

But the atmosphere is three-dimensional, and the amount of data above the surface is almost non-existent. Just one example illustrates the problem. We had instruments every 60 m on a 304 m tower outside the urban heat island of the City of Winnipeg. The changes over that short distance were remarkable, with many more inversions than we expected.

Some think parametrization is used to substitute for basic data like temperature and precipitation. It is not. It is a,

method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process.

Even then, the IPCC acknowledges limits and variances:

The differences between parameterizations are an important reason why climate model results differ.
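To make the idea concrete, here is a toy example of the kind of “simplified process” the definition refers to: a diagnostic cloud-fraction scheme that converts a grid cell’s mean relative humidity into a cloud cover estimate. The functional form and the critical-humidity threshold below are illustrative assumptions, not the scheme of any particular model:

```python
def cloud_fraction(relative_humidity, rh_critical=0.8):
    """Toy parameterization: diagnose grid-cell cloud cover from mean humidity.

    Individual clouds are far smaller than a 3-degree grid cell, so instead of
    resolving them the model replaces them with a simple rule: no cloud below a
    critical humidity, increasing cover as the cell approaches saturation.
    """
    if relative_humidity <= rh_critical:
        return 0.0
    frac = (relative_humidity - rh_critical) / (1.0 - rh_critical)
    return min(frac, 1.0)

for rh in (0.5, 0.8, 0.9, 1.0):
    print(f"RH = {rh:.0%} -> cloud fraction {cloud_fraction(rh):.2f}")
```

Different models choose different rules and thresholds at steps like this, which is exactly why, as the IPCC notes above, the differences between parameterizations are an important reason why climate model results differ.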

Data Even More Inadequate For Dynamic Atmosphere.

They “fill in” the gaps with the 1200 km claim, which shows how meaningless it all is. They have little or no data for most of the cubes, yet those cubes are the mathematical building blocks of the computer models. It is likely that, between the surface and the upper atmosphere, there is data for about 10 percent of the total atmospheric volume. These comments apply to a static situation, but the volumes are constantly changing daily, monthly, seasonally and annually in a dynamic atmosphere, and all of this changes with climate change.
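A crude way to see where a figure like 10 percent comes from; the roughly 800 upper-air radiosonde sites and the 20 vertical levels below are assumed round numbers used only for illustration:

```python
# Crude estimate of how much of a 3-D model grid has any measurements above the surface.
surface_columns = 120 * 60   # 3x3-degree columns covering the globe (7,200)
vertical_levels = 20         # assumed number of model levels, for illustration
total_cells_3d = surface_columns * vertical_levels

radiosonde_sites = 800       # assumed order-of-magnitude count of upper-air stations

# Each sounding samples, at best, the column it is launched in.
columns_sampled = radiosonde_sites / surface_columns
print(f"3-D grid cells: {total_cells_3d:,}")
print(f"Columns with any upper-air sounding: ~{columns_sampled:.0%}")
```

On those assumptions, roughly one column in ten has any direct measurement above the surface, and the soundings themselves are typically taken only twice a day.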

Ockham’s Razor indicates that any discussion about the complexities of climate models, including methods, processes and procedures, is irrelevant. They cannot work, because the simple truth is that the data, the basic building blocks of the models, are completely inadequate. Here is Tolstoy’s comment about a simple truth.

 

“I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

Another simple truth is that the model output should never be used as the basis for anything, let alone global energy policy.

Comments
Steve R
October 18, 2014 11:40 am

I don’t necessarily agree with the premise of this essay. The reason climate models do not correctly simulate future climate is NOT because of a lack of density in initial meteorological conditions. It is because the factors which are responsible for changes in the climate are not included in the model. Even if we had top-notch meteorological data, as dense as any modeler could desire, climate models would still not work. (Of course, such data could allow us to construct meteorological models of exceptional quality).
In order for a model to correctly predict changes in the climate, it is imperative that we actually know what causes the climate to change! Take the most obvious and, in the long run, most important climate cycle: the periodic extreme variations of the Pleistocene. I would suggest that no amount of meteorological data collected today, no matter how dense, can simulate the periodic variations of the Pleistocene.
Another way of putting this: the future climate is not “sensitive” to any meteorological data we can measure and put into a model today, no matter how dense. Even the future weather, at some point in the future, is not sensitive to present meteorological conditions at any density. Run different meteorological models out far enough in time, and they ALL eventually diverge. Using this technique to simulate climate cannot work, because it is, in effect, trying to simulate climate by simulating the weather between now and very far into the future.
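The divergence described here is easy to demonstrate with the classic Lorenz (1963) system; below is a minimal sketch in which two runs of the same equations start from initial conditions differing by one part in a hundred million (the parameter values are the standard ones; the step size and run length are arbitrary choices for illustration):

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz 1963 system by one forward-Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

run_a = np.array([1.0, 1.0, 1.0])
run_b = run_a + np.array([1e-8, 0.0, 0.0])   # tiny perturbation to one variable

for step in range(40001):
    if step % 10000 == 0:
        sep = np.linalg.norm(run_a - run_b)
        print(f"t = {step * 0.001:5.1f}  separation = {sep:.2e}")
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
```

By roughly t = 40 the two trajectories are as far apart as any two random points on the attractor, even though the equations and parameters are identical; only the eighth decimal place of the starting point differs.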

Terry Oldberg
Reply to  Steve R
October 18, 2014 11:50 am

Though it would be well if the climate system could be reduced to cause-and-effect relationships, in modeling a complex system it is usually not possible to do so, for the available information is incomplete. Notwithstanding this handicap, it is often possible to build a model that yields sufficient information about the outcomes of events for the system to be controlled. Climatologists have not yet built such a model.

totuudenhenki
Reply to  Terry Oldberg
October 19, 2014 2:00 am

The only model that can model a complex non-linear system is the system itself. The climate system is complex because it has feedbacks, and heat transfer is governed by non-linear equations.
Complex non-linear systems are very sensitive to initial and boundary conditions. I would not use the word chaos, because it is often connected to randomness. Lottery balls are a typical example of these systems: the system is governed by deterministic laws of physics, but small changes in the conditions result in unpredictable outcomes that, in this case, can be statistically analysed.
I spent a lot of time thinking about Nick Stokes’s claim that GCMs are physical models that do not use weather station data. Behind his link to a simple climate model I found that initial and boundary values for each grid cell are used (look at chapter 7). I would call that deception because calculating grid cell values from weather station data is of course needed.
Googling ahead, I found a number of references claiming that the climate system is not sensitive to the initial situation. Just above in this thread: “Run different meteorological models out far enough in time, and they ALL eventually diverge”. I am not sure of that. Starting at an ice age and at a hot period in history will result in very different next 100 years. Sensitivity to initial data is definitely there, but not necessarily to the atmospheric data; rather to the data on ocean heat and ice.
A missing understanding of clouds and oceans is a showstopper in climate modelling. Without accurate and precise data it is not possible to get that understanding. Think about where HITRAN has been defined.
By definition, models are simplifications of the system. You have to leave out unimportant factors, and accuracy and precision have to be compromised because of the limits of computing power. Numerical calculation has its limits. Current climate models have overly large grid cells and weak parametrization to correct for that. Overcoming that in the foreseeable future is not probable.

Terry Oldberg
Reply to  totuudenhenki
October 19, 2014 8:23 am

totuudenhenki:
Thanks for sharing. Abstraction is an idea in the construction of a model that can be applied to a complex nonlinear system. An “abstracted state” is one that is removed from selected details of the real world. For example, in a model of biological organisms, states might be abstracted from gender differences.
An abstracted state is produced through the use of an inclusive disjunction; this produces an abstracted state such as “male OR female”, where “male,” “female” and “male OR female” are propositions as well as states.
When abstraction is used in conjunction with modern information theory, a result is sometimes the ability to discover and recognize patterns, where a “pattern” is an example of an abstracted state. Through pattern discovery and recognition one can sometimes predict the outcomes of future events, thus bringing a complex nonlinear system under a degree of control. In meteorology this approach has been tried successfully. In climatology it has not been tried.
If climatologists were to try this, they would discover that predicting changes in the global temperature over more than about one year would be impossible, for observed events would provide too little information to do better. They have convinced themselves and others that they can predict changes over 100 years by, in effect, fabricating the missing information.

totuudenhenki
Reply to  Terry Oldberg
October 19, 2014 10:38 am

Yes, just use Feynman’s scientific method: 1) guess, 2) compute the consequences, 3) compare them to empirical data. Without good data it is not easy to create good guesses, and it is impossible to compare the consequences to data.
The guess “it is CO2” is clearly wrong. The “it is the sun” guess is discussed in another thread, and Tisdale has convinced me that “it is the oceans” needs more research. It is a pity that we did not have Argo in the 1970s.

Terry Oldberg
Reply to  totuudenhenki
October 19, 2014 11:55 am

totuudenhenki:
An often misunderstood aspect of the scientific method is that the comparison is between the predicted and observed relative frequencies of the outcomes of observed events. Prior to AR5, comparisons presented by the IPCC were not of this character.

Nick Stokes
Reply to  Terry Oldberg
October 19, 2014 10:26 pm

totuudenhenki
” Behind his link to a simple climate model I found that initial and boundary values for each grid cell are used (look at chapter 7). I would call that deception because calculating grid cell values from weather station data is of course needed.”
Well, you didn’t read far enough. Of course any time-stepping program needs some initial data, even if what you want to know is insensitive to the choice. To start a CFD program, the main concern is physical consistency, so you don’t set off some early explosion. Hence they set out the initial data they use:
“In this section, we describe how the time integration is started from data consistent with the spectral truncation.”
Now if you know what that means, you’ll know that station data won’t provide it. You need some kind of model to get that consistency. They use Bonan’s LSM, described by Wiki here. For a start, it is 1D.
But I find all this special pleading about how station data might be used for validation or initialization stretched. The initial post was totally muddling GMST index computation (which does relate stations to a grid, and is done by CRU) with GCMs, which don’t.
Another contradiction is the common assertion here about how weather is chaotic, combined with the claims about initialization. Chaos means that you can’t get a reproducible result based on initial conditions. GCMs sensibly don’t try (except for the very recent decadal runs, which may or may not turn out to be sensible). They conserve what is conserved (mass, momentum, energy), and extract what averaged results they can to attenuate the unpredictable weather.

Jim G
October 18, 2014 2:21 pm

3 degrees can buy a lot in Southern California.
You could go from ocean and coastal cities, to valleys at 300 ft MSL, to high desert at 2300 ft MSL, and on to Mount Whitney at 14,505 ft.

joeldshore
October 20, 2014 6:14 pm

Nick Stokes has noted one of the major fallacies of Tim Ball’s post here. However, there is another that has gone unmentioned (at least in the comments that I have read): he confuses temperature and temperature anomaly. His example shows that temperatures can vary a great deal over 1200 km. (Actually, a much simpler example is to simply compare the temperature at the summit of Mt Washington to the temperature of the nearest valley.) However, what he is critiquing is talking about temperature anomalies, which are, in fact, correlated over a much larger region than temperatures are. See discussion here: http://data.giss.nasa.gov/gistemp/abs_temp.html
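The distinction is easy to show with synthetic numbers; the toy “valley” and “summit” series below are invented for illustration, with very different absolute temperatures but a shared year-to-year anomaly signal:

```python
import numpy as np

rng = np.random.default_rng(0)
years = 30

# A shared regional year-to-year signal plus small station-specific noise.
regional_signal = rng.normal(0.0, 0.8, years)
valley = 15.0 + regional_signal + rng.normal(0.0, 0.2, years)   # warm, low-altitude site
summit = -5.0 + regional_signal + rng.normal(0.0, 0.2, years)   # cold, high-altitude site

# Absolute temperatures differ by ~20 C, but anomalies track each other closely.
valley_anom = valley - valley.mean()
summit_anom = summit - summit.mean()

print(f"Mean difference in absolute temperature: {valley.mean() - summit.mean():.1f} C")
print(f"Correlation of anomalies: {np.corrcoef(valley_anom, summit_anom)[0, 1]:.2f}")
```

The two sites are about 20°C apart in absolute terms, yet their anomalies are highly correlated, which is the property the anomaly-based interpolation relies on.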

Lloyd Martin Hendaye
October 20, 2014 8:13 pm

Complex dynamic systems such as Planet Earth’s cyclic atmospheric/oceanic circulation patterns are governed by chaotic/fractal, random-recursive factors, wholly deterministic yet impossible to project in detail even over ultra-short time-frames.
Since Earth’s long-term global temperature is in fact a plate-tectonic artifact, not fluid-based at all, so-called Global Climate Models (GCMs) are not only viciously circular but intrinsically subject to wholly extraneous geophysical shocks. Think Chicxulub’s Cretaceous/Tertiary (K/T) boundary, the Younger Dryas impact-generated “cold shock” of c. 12,800 – 11,500 YBP, Mesopotamia’s horrific “Dark Millennium” of c. BC 4000 – 3000… all wholly unpredictable yet all-determining, with absolutely no biogenic input whatsoever.
Be warned: On any objective empirical, observational, or rational math/statistical basis, Anthropogenic Global Warming (AGW) stands with Rene Blondlot, J.B. Rhine, Trofim Lysenko, Immanuel Velikovsky… John Holdren, Keith Farnish, Kentti Linkola, Hans-Joachim Schellnhuber et al. are manifestly Luddite sociopaths whose One World Order makes the Anabaptists of Munster seem benign.

Johan
Reply to  Lloyd Martin Hendaye
October 20, 2014 10:22 pm

A small correction: Mr Linkola’s first name is Pentti. His background is interesting: his father, Professor Kaarlo Linkola, was a renowned botanist and Rector of the University of Helsinki, and his mother’s father was Hugo Suolahti, professor of German philology at Helsinki and later Chancellor of the University. Linkola himself was, among other things, a serious ornithologist. He quit formal studies after one year but continued his studies outdoors and wrote a highly respected book (with Olavi Hildén) in 1955 and another in 1967. After that, he turned into the highly pessimistic and utterly anarchistic person he is best known as. Finland being one of the most tolerant democracies in the world today, he can freely advocate his gruesome message knowing that nobody will harm him regardless of what he says.

rgbatduke
October 21, 2014 5:55 am

Another contradiction is the common assertion here about how weather is chaotic, combined with the claims about initialization. Chaos means that you can’t get a reproducible result based on initial conditions. GCMs sensibly don’t try (except for the very recent decadal runs, which may or may not turn out to be sensible). They conserve what is conserved (mass, momentum, energy), and extract what averaged results they can to attenuate the unpredictable weather.

Really? So we learn something of predictive value by averaging over microtrajectories produced by other chaotic systems? Who knew?
Somebody should publish this. It’s news!
rgb

joeldshore
Reply to  rgbatduke
October 21, 2014 3:58 pm

Robert,
I am confused about what you are saying. Are you saying, for example, that you don’t think we could get a good picture of the seasonal cycle by averaging over the trajectories in a climate model? I don’t think that it is really news that “chaos” is not the equivalent of “We can’t predict anything about the system.”

Andrew Gordon
October 22, 2014 10:24 am

A bigger problem with the models is that they are built on fundamentally wrong science.
Over 140 years ago two groups of scientists disagreed. On one side were Maxwell and Boltzmann; on the other, Laplace, Lagrange and Loschmidt, Boltzmann’s former mentor!
What they disagreed on was Loschmidt’s gravito-thermal theory.
Loschmidt suggested that a column of air would establish a thermodynamic equilibrium in which all layers had the same total energy (potential + kinetic). Air at the top would have more potential energy and therefore less kinetic energy (it would be colder), and air at the bottom would have less potential energy and therefore more kinetic energy (it would be warmer).
The air column would therefore naturally develop a thermal gradient with height.
Maxwell and Boltzmann disagreed, saying that this would violate the Second Law of Thermodynamics. Laplace, Lagrange and Loschmidt were adamant that their theory was derived from the First Law of Thermodynamics and would not violate the Second Law.
The argument was never resolved.
Today the theories of Maxwell and Boltzmann dominate.
The problem is that Loschmidt, Laplace and Lagrange were right. The value they calculated for the thermal gradient according to their gravito-thermal theory matches the dry adiabatic lapse rates we observe in reality.
The implications are significant. Earth’s energy budget in reality is based on a transfer of thermal energy to potential energy during the day and the reverse transfer of potential energy to thermal energy at night.
The current climate models are fundamentally wrong.
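The dry adiabatic lapse rate being referred to is g divided by the specific heat of air at constant pressure; a quick check with standard values for dry air (taken here as assumed round figures) reproduces the familiar number:

```python
# Dry adiabatic lapse rate from standard values for dry air.
g = 9.81       # gravitational acceleration, m/s^2
c_p = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate = g / c_p * 1000.0   # convert K/m to K/km
print(f"Dry adiabatic lapse rate: {lapse_rate:.1f} K per km")   # ~9.8 K/km
```

For comparison, dividing by the specific heat at constant volume (about 717 J/(kg K)) instead gives roughly 13.7 K per km, which is where the disagreement in the reply below comes in.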

joeldshore
Reply to  Andrew Gordon
October 23, 2014 3:37 am

No…Loschmidt is not right. There are in fact rigorous statistical mechanical arguments showing that he is wrong. I think that the value they calculated for the thermal gradient according to their gravito-thermal theory matches the dry adiabatic lapse rates only if you are careless in your derivation about the distinction between specific heat at constant pressure and volume.
And, the fact that the lapse rate is close to the adiabatic lapse rate is understood by CORRECT physics: It is true because lapse rates higher than the adiabatic lapse rate are unstable to convection, which drives the lapse rate back down to the adiabatic lapse rate.
Finally, all of this is irrelevant because proposing such a lapse rate still does not get you a surface temperature 33 K warmer than it could possibly be for an atmosphere transparent to terrestrial longwave radiation, based on simple energy balance arguments at the TOP of the atmosphere.