Guest opinion by Dr. Tim Ball –
Ockham’s Razor says, “Entities are not to be multiplied beyond necessity.” Usually applied when deciding between two competing explanations, it suggests the simplest is most likely correct. It can be applied to the debate about climate and the viability of computer climate models. An old joke claims that economists try to predict the tide by measuring one wave. Is that carrying simplification too far? It parallels the Intergovernmental Panel on Climate Change (IPCC) objective of trying to predict the climate by measuring one variable, CO2. Conversely, people trying to determine what is wrong with the IPCC climate models consider a multitude of factors, when the failure is completely explained by one thing: insufficient data to construct a model.
IPCC computer climate models are the vehicles of deception for the anthropogenic global warming (AGW) claim that human CO2 is causing global warming. They create the results they are designed to produce.
The acronym GIGO (Garbage In, Garbage Out) reflects that most people working with computer models know the problem. Some suggest that in climate science it actually stands for Gospel In, Gospel Out. This is an interesting observation, but it underscores a serious conundrum. The Gospel Out results are the IPCC predictions (projections), and they are consistently wrong. This is no surprise to me, because I have spoken out from the start about the inadequacy of the models. I watched modelers take over and dominate climate conferences as keynote presenters. It was modelers who dominated the Climatic Research Unit (CRU) and, through them, the IPCC. Society is still enamored of computers, so they attain an unjustified aura of accuracy and truth. Pierre Gallois explains,
If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.
Michael Hammer summarizes it as follows,
It is important to remember that the model output is completely and exclusively determined by the information encapsulated in the input equations. The computer contributes no checking, no additional information and no greater certainty in the output. It only contributes computational speed.
It is a good article, but it misses the most important point of all: a model is only as good as the foundation on which it is built, the weather records.
The IPCC Gap Between Data and Models Begins
This omission is not surprising. Hubert Lamb, founder of the CRU, defined the basic problem, and his successor, Tom Wigley, orchestrated the transition to the bigger problem of politically directed climate models.
Figure 1: Tom Wigley and H. H. Lamb, founder of the CRU.
Lamb’s reason for establishing the CRU appears on page 203 of his autobiography, “Through All the Changing Scenes of Life: A Meteorologist’s Tale”:
“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”
Lamb knew what was going on, because he cryptically wrote,
“My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climates being changed as a result of human activities, primarily through the burning up of wood, coal, oil and gas reserves…” “After only a few years almost all the work on historical reconstruction of past climate and weather situations, which first made the Unit well known, was abandoned.”
Lamb further explained how a grant from the Rockefeller Foundation came to grief because of,
“…an understandable difference of scientific judgment between me and the scientist, Dr. Tom Wigley, whom we have appointed to take charge of the research.”
Wigley promoted the application of computer models, but Lamb knew they were only as good as the data used for their construction. Lamb is still correct. The models are built on data that either don’t exist or are by any measure inadequate.
Climate Model Construction
Models range from simple scaled-down replicas with recognizable individual components to abstractions, such as mathematical formulas, that are far removed from reality, with symbols representing individual components. Figure 2 is a simple schematic of the divisions necessary for a computer model. Grid spacing (3° by 3° shown) varies, and reducing it is claimed as a route to improved accuracy. It doesn’t matter, because there are so few stations of adequate record length or reliability that the mathematical formula for each grid cell cannot be accurate.
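To get a sense of the scale involved, here is a back-of-the-envelope sketch (Python, purely illustrative) counting the surface cells a global grid requires, each of which needs its own formula and its own data:

```python
def surface_cells(spacing_deg):
    """Number of cells in a global latitude/longitude grid."""
    lat_bands = 180 // spacing_deg   # pole to pole
    lon_bands = 360 // spacing_deg   # around the globe
    return lat_bands * lon_bands

for spacing in (3, 1):
    print(f"{spacing} x {spacing} degree grid: {surface_cells(spacing):,} surface cells")
# 3 x 3 degree grid: 7,200 surface cells
# 1 x 1 degree grid: 64,800 surface cells
```

Reducing the spacing does not help; it simply multiplies the number of cells that must each be supplied with data that do not exist.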
Figure 3 shows the number of stations according to NASA GISS.
It is deceiving, because each dot represents a single weather station but, at the map’s scale, covers a few hundred square kilometers. Regardless, the reality is that vast areas of the world have no weather stations at all. Probably 85+ percent of the grid cells have no data. The actual problem is even greater, as NASA GISS, apparently unknowingly, illustrated in Figure 4.
Figure 4(a) shows length of record. Only 1000 stations have records of 100 years, and almost all of them are in heavily populated areas of the northeastern US or Western Europe and subject to the urban heat island effect (UHIE). Figure 4(b) shows the decline in stations around 1960. This was partly related to the anticipated increased coverage of satellites, which didn’t happen effectively until 2003-04. The surface record remained the standard for the IPCC Reports. Figure 5 shows a CRU-produced map for the Arctic Climate Impact Assessment (ACIA) report.
It is a polar projection for the period from 1954 to 2003 and shows “No Data” for the Arctic Ocean (14 million km²), almost the size of Russia. Despite the significant decline in stations in Figure 4(b), graph 4(c) shows only a slight decline in area covered. This is because they assume each station represents “the percent of hemispheric area located within 1200 km of a reporting station.” This is absurd. Draw a 1200 km circle around any land-based station and see what is included. The claim is even sillier if a portion includes water.
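The arithmetic behind the absurdity is easy to check. A minimal sketch, using the 14 million km² Arctic Ocean figure quoted above:

```python
import math

RADIUS_KM = 1200  # area each reporting station is assumed to represent
circle_area_km2 = math.pi * RADIUS_KM ** 2   # about 4.5 million km^2

ARCTIC_OCEAN_KM2 = 14_000_000                # the "No Data" region in Figure 5

print(f"One station 'covers' {circle_area_km2:,.0f} km^2")
print(f"Stations that would 'cover' the Arctic Ocean: "
      f"{ARCTIC_OCEAN_KM2 / circle_area_km2:.1f}")
```

By this logic, roughly three stations would suffice to “cover” the entire Arctic Ocean, a region that, as Figure 5 shows, has no stations at all.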
Figure 6a shows the direct distance between Calgary and Vancouver, 670 km; the two cities are at close to the same latitude.
Figure 6b shows London to Bologna, a distance of 1154 km.
Figure 6c shows Trondheim to Rome, a distance of 2403 km. Notice that this circle, roughly 2400 km across (a 1200 km radius), includes most of Europe.
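These are great-circle distances, which anyone can verify with the standard haversine formula. A short sketch, using approximate city-centre coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Approximate city-centre coordinates (latitude, longitude)
pairs = {
    "Calgary to Vancouver": ((51.05, -114.07), (49.28, -123.12)),
    "London to Bologna":    ((51.51,   -0.13), (44.49,   11.34)),
    "Trondheim to Rome":    ((63.43,   10.40), (41.90,   12.50)),
}
for name, (p, q) in pairs.items():
    print(f"{name}: {haversine_km(*p, *q):.0f} km")
```

The results agree with the distances in Figure 6 to within a few kilometers.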
An example of the problems with the 1200 km claim occurred in Saskatchewan a few years ago. The Provincial Ombudsman consulted me about frost insurance claims that made no sense. The government agricultural insurance agency had decided to offer frost coverage, and each farmer was required to pick the nearest weather station as the basis for decisions. The very first year there was a frost at the end of August. Going by the weather station records, about half of the farmers received no coverage because their station showed 0.5°C, yet all of them had “black frost”, so called because green leaves turn black from cellular damage. The other half got paid, even though they had no physical evidence of frost, because their station showed -0.5°C. The Ombudsman could not believe the inadequacies and inaccuracies of the temperature record, and this on an essentially isotropic plain, especially after I pointed out that the readings came from Stevenson Screens, for the most part 1.25 to 2 m above the ground and thus above the crop. Temperatures below that level are markedly different.
Empirical Test Of Temperature Data
A group carrying out a mapping project, trying to use data for practical application, confronted the inadequacy of the temperature record.
The story of this project begins with coffee, we wanted to make maps that showed where in the world coffee grows best, and where it goes after it has been harvested. We explored worldwide coffee production data and discussed how to map the optimal growing regions based on the key environmental conditions: temperature, precipitation, altitude, sunlight, wind, and soil quality.
The first extensive dataset we could find contained temperature data from NOAA’s National Climatic Data Center. So we set out to draw a map of the earth based on historical monthly temperature. The dataset includes measurements as far back as the year 1701 from over 7,200 weather stations around the world.
Each climate station could be placed at a specific point on the globe by their geospatial coordinates. North America and Europe were densely packed with points, while South America, Africa, and East Asia were rather sparsely covered. The list of stations varied from year to year, with some stations coming online and others disappearing. That meant that you couldn’t simply plot the temperature for a specific location over time.
The map they produced illustrates the gaps even more starkly, but that was not the only issue.
At this point, we had a passable approximation of a global temperature map, (Figure 7) but we couldn’t easily find other data relating to precipitation, altitude, sunlight, wind, and soil quality. The temperature data on its own didn’t tell a compelling story to us.
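The same station list can be used to quantify the gaps directly. A hedged sketch, assuming a simple CSV of station coordinates (the file name and column names here are hypothetical), counts how many of the 7,200 cells in a 3° by 3° grid contain even one station:

```python
import csv

SPACING = 3  # grid spacing in degrees, as in Figure 2

def cells_with_a_station(path):
    """Count distinct grid cells containing at least one station."""
    occupied = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # assumes 'lat'/'lon' columns
            cell = (int(float(row["lat"]) // SPACING),
                    int(float(row["lon"]) // SPACING))
            occupied.add(cell)
    return len(occupied)

total = (180 // SPACING) * (360 // SPACING)    # 7,200 cells
n = cells_with_a_station("stations.csv")       # hypothetical file
print(f"{n:,} of {total:,} cells ({n / total:.0%}) contain a station")
```

Even in the impossible best case of 7,200 stations spread one per cell, every cell would have a single thermometer; with stations clustered in North America and Europe, most cells are guaranteed to be empty.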
The UK may have accurate temperature measures, but it is a small area. Most larger countries have inadequate instrumentation and measurements. The US network is probably the best, and certainly the most expensive. Anthony Watts’ research showed that only 7.9 percent of US weather stations achieve an accuracy better than 1°C.
Precipitation Data: A Bigger Problem
Water, in all its phases, is critical to the movement of energy through the atmosphere. The transfer of surplus energy from the tropics to offset deficits in the polar regions (Figure 8) occurs largely in the form of latent heat. Precipitation is just one measure of this crucial variable.
It is a very difficult variable to measure accurately, and the records are completely inadequate in space and time. An example of the problem was exposed in attempts to use computer models to predict the African monsoon (Science, 4 August 2006).
Some models predict a wetter future; others, a drier one. “They cannot all be right,” says Alessandra Giannini, a climate scientist at Columbia University.
One culprit identified was the inadequacy of data.
One obvious problem is a lack of data. Africa’s network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.
It is likely that very few regions meet the WMO recommended density; the quoted one-eighth figure implies a recommended minimum of roughly 9,200 stations (8 × 1152) for Africa alone. The problem is more complex for precipitation, because while temperature changes are relatively uniform (although certainly not over 1200 km), precipitation amounts vary over a matter of meters. Much precipitation comes from showers produced by cumulus clouds that develop during the day. Most farmers in North America are familiar with one section of land getting rain while another is missed.
Temperature and precipitation, the two most important variables, are measured completely inadequately to establish the conditions, and therefore the formula, for any surface grid cell of the model. As the latest IPCC Report, AR5, notes in two vague understatements,
The ability of climate models to simulate surface temperature has improved in many, though not all, important aspects relative to the generation of models assessed in the AR4.
The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature.
But the atmosphere is three-dimensional, and the amount of data above the surface is almost non-existent. Just one example illustrates the problem. We had instruments every 60 m on a 304 m tower outside the heat island effect of the City of Winnipeg. The changes over that short distance were remarkable, with many more inversions than we expected.
Some think parameterization is used to substitute for basic data like temperature and precipitation. It is not. It is a,
method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process.
Even then, the IPCC acknowledges limits and variances:
The differences between parameterizations are an important reason why climate model results differ.
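As an illustration only, here is the kind of substitution a parameterization makes. This is a textbook-style bulk formula for surface sensible heat flux, not any particular model’s scheme: turbulence that occurs in eddies far smaller than any grid cell is replaced by one equation with a tunable transfer coefficient:

```python
RHO_AIR = 1.2    # air density, kg/m^3
CP_AIR = 1004.0  # specific heat of air at constant pressure, J/(kg K)

def sensible_heat_flux(c_h, wind_speed, t_surface, t_air):
    """Bulk parameterization of surface sensible heat flux (W/m^2).

    Sub-grid turbulent eddies are replaced by a single tunable
    transfer coefficient, c_h (dimensionless).
    """
    return RHO_AIR * CP_AIR * c_h * wind_speed * (t_surface - t_air)

# Two plausible choices of the tuning constant give different answers:
for c_h in (1.0e-3, 1.5e-3):
    flux = sensible_heat_flux(c_h, wind_speed=5.0, t_surface=290.0, t_air=288.0)
    print(f"c_h = {c_h}: flux = {flux:.1f} W/m^2")
# c_h = 0.001:  flux = 12.0 W/m^2
# c_h = 0.0015: flux = 18.1 W/m^2
```

The choice of c_h is a judgment call, which is exactly why, as the IPCC concedes, differences between parameterizations make climate model results differ.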
Data Even More Inadequate For A Dynamic Atmosphere
They “fill in” the gaps with the 1200 km claim, which shows how meaningless it all is. They have little or no data in any of the cubes, yet the cubes are the mathematical building blocks of the computer models. It is likely that, surface and upper air combined, data exist for about 10 percent of the total atmospheric volume. These comments apply to a static situation, but in a dynamic atmosphere the volumes are constantly changing daily, monthly, seasonally, and annually, and all of these change with climate change.
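To see how the problem compounds in three dimensions, extend the earlier surface-grid arithmetic with vertical layers. The layer count and the radiosonde station figure below are illustrative assumptions, not quoted values:

```python
SPACING = 3   # grid spacing in degrees
LEVELS = 20   # illustrative number of vertical layers; models vary

surface_cells = (180 // SPACING) * (360 // SPACING)   # 7,200
cubes = surface_cells * LEVELS

print(f"{cubes:,} grid cubes need data")              # 144,000
# Upper-air data come from on the order of ~1,000 radiosonde
# stations worldwide (an approximate figure), typically launched
# twice a day: a tiny sample for 144,000 cubes.
```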
Ockham’s Razor indicates that any discussion of the complexities of climate models, including methods, processes, and procedures, is irrelevant. They cannot work, because the simple truth is that the data, the basic building blocks of the models, are completely inadequate. Here is Tolstoy’s comment about a simple truth.
“I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”
Another simple truth is that the model output should never be used as the basis for anything, let alone global energy policy.