Guest Opinion: Dr. Tim Ball
In his recent article on WUWT titled “HADCRU Power and Temperature” Andy May refers to the challenges of modelling the atmosphere. He wrote,
The greenhouse effect (GHE), when calculated this way, shows an imbalance of 390-239=151 W/m2. Kiehl and Trenberth, 1997 calculated a similar overall forcing of 155 W/m2 using the same procedure. This GHE calculation makes a lot of assumptions, not the least of which is assuming the Earth has an emissivity of 1 and is a blackbody. But, here we want to consider the problem of using a global average temperature (T) for the Earth, which is a rotating sphere, with only one-half of the sphere facing the Sun at any one time.
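The arithmetic in the quoted passage can be sketched in a few lines. This is only an illustration of the textbook calculation, assuming the usual figures of a 288 K mean surface temperature and 239 W/m2 outgoing flux, and the blackbody (emissivity = 1) assumption the quote questions:

```python
# Sketch of the energy-balance arithmetic quoted above. Assumed inputs:
# a mean surface temperature of 288 K and an outgoing top-of-atmosphere
# flux of 239 W/m^2. Emissivity is taken as 1 (a blackbody), which is
# one of the assumptions being challenged.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_flux(temp_k, emissivity=1.0):
    """Radiated flux in W/m^2 for a body at temp_k kelvin."""
    return emissivity * SIGMA * temp_k ** 4

surface_emission = blackbody_flux(288)                  # ~390 W/m^2
outgoing_flux = 239.0                                   # W/m^2
greenhouse_effect = surface_emission - outgoing_flux    # ~151 W/m^2

# The effective emission temperature implied by 239 W/m^2:
effective_temp = (outgoing_flux / SIGMA) ** 0.25        # ~255 K

print(round(surface_emission), round(greenhouse_effect), round(effective_temp))
```

Note that the whole calculation hinges on treating a single global average temperature as physically meaningful, which is the very point at issue.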
Models vary from hardware models, simple scaled-down versions of reality, to complete abstractions. A model car is an example of the former; a mathematical formula with symbols replacing variables is an example of the latter. The problem with hardware models is that many things cannot be scaled down because their physical properties change. For example, it is impossible to reproduce in a hardware model the change of ice from solid and rigid to plastic and flowing, as occurs in an alpine glacier. In an abstract model, each variable loses most of its real-world properties.
Climate models are abstract models, except that they are made up of multiple models all interacting with each other. Those interactions bear little resemblance to reality.
- We have virtually no data.
- This is true even for fundamental variables like temperature, precipitation, atmospheric pressure, and atmospheric water content.
- Data is replaced by symbols that eliminate most of the properties of the natural variable.
- In many cases the “data” is generated in another model and used as ‘real’ data in the larger model.
- The models are essentially static representations of average conditions. The one thing we know with certainty is that the Earth’s atmospheric system is dynamic, changing daily, seasonally and constantly over the course of time.
- The models consistently fail the standard test of scientific understanding and accuracy by producing inaccurate predictions.
Initially, I learned the basics of weather and especially the forecasting necessary for aviation. These were expanded when I gave lectures on aviation weather as an operations officer on an anti-submarine squadron flying over the North Atlantic and then for a search and rescue squadron flying in northern and Arctic Canada. I recall one search out of Fort Chipewyan, northern Alberta, when we observed first-hand the severe limitations of knowledge and therefore of forecast skill. We took an Environment Canada forecaster with us to provide more local information. Working from data originating in Edmonton, he was unable to come even close. Eventually, we listened to his forecast and then took him flying with us to show him the reality.
After I left the military because I lost my flying category (and, as they say, I didn't want to fly a desk), these experiences prompted my return to university to try to determine the limitations of knowledge about weather and climate.
When I began studying weather and climate from an academic perspective, the first thing I realized was the paucity of data. This was reinforced when I learned about the work of H.H. Lamb, who set up the CRU with the realization that without data no understanding of the mechanisms was possible and accurate forecasting was beyond hope.
“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”
I include this quote in as many articles as logic allows because things are worse now. Sadly, this is due to many graduates of the CRU and their disciples, like Gavin Schmidt.
This awareness led to my doctoral thesis that involved reconstruction of weather and climate patterns for central Canada over a 300-year span. Serendipitously, it was while in places like Fort Chipewyan that I became aware of the remarkable weather and meteorological journals of the Hudson’s Bay Company.
In basic climatology and some meteorology texts of the time, the student learned about a single cell atmospheric model (Figure 1).
The objective is to show that in a non-rotating world a simple, single-cell system would exist. The strength of the basic circulation pattern is a function of the temperature difference (gradient) between the Equator and the Poles. This is a lesson for the Intergovernmental Panel on Climate Change (IPCC), who claim that the Poles will warm more than the Equator, thus weakening the system. The reality is that the difference is almost totally a function of the amount of insolation received, not of differences in greenhouse gases, especially CO2.
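The dependence of the gradient on insolation can be sketched with a standard textbook formula: at equinox, the daily-mean solar flux at the top of the atmosphere falls off as the cosine of latitude. This is a minimal illustration, assuming a solar constant of 1361 W/m2 and ignoring albedo, axial tilt, and the atmosphere itself:

```python
# Minimal sketch of the equator-to-pole insolation gradient.
# At equinox the daily-mean top-of-atmosphere flux is (S0/pi)*cos(latitude);
# the 1/pi factor averages the instantaneous flux over the day-night cycle.
# Albedo, axial tilt, and atmospheric absorption are deliberately ignored.
import math

S0 = 1361.0  # solar constant, W/m^2

def equinox_daily_mean_insolation(lat_deg):
    """Daily-mean top-of-atmosphere flux at equinox, in W/m^2."""
    return (S0 / math.pi) * math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 90):
    print(f"{lat:2d} deg: {equinox_daily_mean_insolation(lat):6.1f} W/m^2")
```

The flux at 60 degrees latitude is half that at the Equator, and it vanishes at the Pole, which is the gradient that drives the circulation in the single-cell picture.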
The next problem that climate change theorists and modelers face is that the Earth is rotating, changing how it presents itself to the insolation over time. George Hadley's 1735 determination, from winds recorded by British sailing ships, of the existence of a single tropical cell, now named after him (Figure 2), is a classic example of inductive reasoning.
This gradually evolved over the next 200 years into a three-cell system (Figure 3).
Note that this diagram appeared in a text titled Tropical Meteorology published in 2011. A similar pattern is used in the UK Met Office (UKMO) website diagram (Figure 4). The caption reads:
Circulating Cells: The Hadley cells have the most regular pattern of air movement, and produce extreme wet weather at the equator and extreme aridity on the deserts. The polar cells are the least well-defined.
The terminology starts the misunderstanding. The word extreme is wrong. The wet conditions at the equator and the dry conditions in the deserts are normal. The polar cell is well-defined, as evidenced by the over 90 percent persistence of the polar easterly winds, the Polar Front that separates cold polar air from warm subtropical air, and the circumpolar vortex (jet stream) (Figure 5).
Figure 5 (source: author)
Questions began to emerge about the existence of the Ferrel cell at about the time the IPCC was forming in the late 1980s. William Ferrel proposed its existence in 1856 to explain newly measured wind speeds, especially the mid-latitude westerly winds. The Encyclopedia Britannica is better informed than the UKMO because they write,
Ferrel cell, model of the mid-latitude segment of Earth’s wind circulation, proposed by William Ferrel (1856). In the Ferrel cell, air flows poleward and eastward near the surface and equatorward and westward at higher altitudes; this movement is the reverse of the airflow in the Hadley cell. Ferrel’s model was the first to account for the westerly winds between latitudes 35° and 60° in both hemispheres. The Ferrel cell, however, is still not a good representation of reality because it requires that the upper-level mid-latitude winds flow westward; actually the eastward-flowing surface winds become stronger with height and reach their maximum velocities around the 10-km (6-mile) level in the jet streams.
The problem is that they, like the UKMO and the IPCC, do not provide an alternative model. The reason is that they have no data, as they explain in AR5.
In the past few years, interest in an accurate depiction of upper air winds has grown, as they are essential for estimating the state and changes of the general atmospheric circulation and for explaining changes in the surface winds (Vautard et al., 2010).
We also learned in AR4 that,
Due to the computational cost associated with the requirement of a well-resolved stratosphere, the models employed for the current assessment do not generally include the QBO.
From a March 2015 conference in Victoria BC, we learned that
The Quasi-Biennial Oscillation is one of the most remarkable phenomena in the Earth’s atmosphere. High above the equator, in the stratosphere, strong zonal winds blow in a continuous circuit around the Earth. At a given altitude, the winds might start as westerlies, but over time they weaken and eventually reverse, becoming strong easterlies.
Why is the QBO important? It is certainly relevant for seasonal prediction, where the state of stratospheric winds affects interactions between the tropics and the mid-latitudes, and may also affect the tropical troposphere directly and possibly how the solar cycle interacts with the atmosphere.
They concluded that,
The poor representation of the QBO in climate change models means that no-one knows what will happen to the QBO in the decades ahead – will it remain largely unchanged, will its period lengthen, or will it change more radically?
The question is how and on what structure are the climate models built? The most recent representation I am familiar with is in Figure 6:
How do you build a computer model to represent this structure and all the mechanisms it encompasses? But the challenge is much greater than that because the diagram is a representation of the average, which is a fixed statistical condition. In reality, it is an extremely dynamic system that changes on an almost infinite number of time scales from hourly to millions of years. Even if you can approximate the data and mechanisms with a mathematical formula there is the problem mathematician and philosopher A. N. Whitehead identified,
“There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.”
The IPCC acknowledged this in the Third Assessment Report when they wrote,
In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.
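The sensitive dependence on initial conditions behind that IPCC statement can be illustrated with a toy example. The logistic map below is not a climate model; it is a standard minimal demonstration of why tiny input errors in a non-linear system destroy long-term predictability:

```python
# Toy illustration of sensitive dependence on initial conditions, using
# the logistic map x -> r*x*(1-x) with r = 4 (a fully chaotic regime).
# Not a climate model: it only shows why, in a non-linear chaotic system,
# an input error of one part in ten billion grows until the two
# trajectories bear no relation to each other.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)           # the "true" initial state
b = logistic_trajectory(0.2 + 1e-10, 60)   # same state, perturbed by 1e-10

divergence = max(abs(x - y) for x, y in zip(a, b))
print(f"maximum divergence over 60 steps: {divergence:.3f}")
```

Within a few dozen iterations the perturbation, which doubles roughly every step, grows to the full range of the variable, which is exactly why a long-term prediction from imperfect initial data cannot be trusted.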
There is one major component of computer models that distorts and masks the reality and allows models that consistently fail to go on shaping policy. Pierre Gallois (1911-2010) summarized it in his comment that
If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.
This parallels the well-known but somehow ignored acronym GIGO: Garbage In, Garbage Out. Somebody perceptively said that in climate models it was Gospel In, Gospel Out. However, in the case of climate computer models, the garbage in includes not only the input, or what the IPCC calls forcings, but the very structure.
I collect quotations that appear to epitomize a period such as a decade, a century, or a millennium. My prime candidate so far for the 21st century came from Alan Greenspan, former Chairman of the US Federal Reserve and thereby de facto architect of US financial policy. Greenspan was asked, in his appearance before the congressional hearing into the financial collapse of 2008, what went wrong. He simply replied, "my model was wrong." When asked how long he had been using it, he replied, "40 years."
If we consider the model developed at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) in the late 1960s (1967?) the first meaningful climate model, because it included atmosphere and ocean processes, then it is 50 years.
Now consider the IPCC situation, where they take the worst of both worlds by combining the output of climate and economic models. They have done this for 27 years now (1990-2017), and despite supposed updates and improvements their predictions, or projections, are still wrong, yet still being used to determine global environmental and energy policy.
Andy May correctly identifies that the problems of climate modeling are much greater than the Kiehl and Trenberth energy balance diagram and its numbers. They are so fundamental, as some of us have identified for decades, that it is remarkable the IPCC managed to fool the world and all the scientists affiliated with or supporting that agency and its work. It is why my book is titled "Human Caused Global Warming: The Biggest Deception in History."