Guest essay by Dr. Tim Ball
“I have no data yet. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” Arthur Conan Doyle (Sherlock Holmes)
“There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.” A. N. Whitehead
The recent article by Nancy Green at WUWT is an interesting esoteric discussion about models. Realities about climate models are much more prosaic. They don’t and can’t work because data, knowledge of atmospheric, oceanographic, and extraterrestrial mechanisms, and computer capacity are all totally inadequate. Computer climate models are a waste of time and money.
Inadequacies are confirmed by the complete failure of all forecasts, predictions, projections, prognostications, or whatever they call them. It is one thing to waste time and money playing with climate models in a laboratory, where they don’t meet minimum scientific standards; it is another to use their results as the basis for public policies whose economic and social ramifications are devastating. Equally disturbing and unconscionable is the silence of the scientists involved in the IPCC who know the vast difference between the scientific limitations and uncertainties and the certainties produced in the Summary for Policymakers (SPM).
IPCC scientists knew of the inadequacies from the start. Kevin Trenberth, responding to a US National Research Council report on the inadequacies of weather data, said,
“It’s very clear we do not have a climate observing system…” “This may come as a shock to many people who assume that we do know adequately what’s going on with the climate, but we don’t.”
The report in question, dated February 3, 1999, said,
“Deficiencies in the accuracy, quality and continuity of the records place serious limitations on the confidence that can be placed in the research results.”
Remember, this was 11 years after Hansen’s comments of certainty to the Senate and five years after the 1995 IPCC Report. It is worse now, with fewer weather stations and less data than in 1990.
Before leaked emails exposed its climate science manipulations, the Climatic Research Unit (CRU) issued a statement that said,
“GCMs are complex, three dimensional computer-based models of the atmospheric circulation. Uncertainties in our understanding of climate processes, the natural variability of the climate, and limitations of the GCMs mean that their results are not definite predictions of climate.”
Phil Jones, Director of the CRU at the time of the leaked emails, and former director Tom Wigley, both IPCC members, said,
“Many of the uncertainties surrounding the causes of climate change will never be resolved because the necessary data are lacking.”
Stephen Schneider, a prominent part of the IPCC from the start, said,
“Uncertainty about feedback mechanisms is one reason why the ultimate goal of climate modeling – forecasting reliably the future of key variables such as temperature and rainfall patterns – is not realizable.”
Schneider also set the tone and raised eyebrows when he said in Discover magazine:
Scientists need to get some broader based support, to capture the public’s imagination…that, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified dramatic statements, and make little mention of any doubts we may have…each of us has to decide what the right balance is between being effective and being honest.
The IPCC achieved his objective with devastating effect, because they chose effective over honest.
A major piece of evidence is the disparity between the Working Group I (WGI) Report (The Physical Science Basis), particularly the chapter on computer models, and the claims in the Summary for Policymakers (SPM). Why did the scientists who participated in the WGI Report remain so silent about the disparity?
Here is the IPCC procedure:
Changes (other than grammatical or minor editorial changes) made after acceptance by the Working Group or the Panel shall be those necessary to ensure consistency with the Summary for Policymakers (SPM) or the Overview Chapter.
The Summary is written first, and then the WGI Report is adjusted to be consistent with it. It is like an executive publishing findings and then asking employees to produce material to justify them. The purpose is to present a completely different reality to the press and the public.
This is done to ensure that people, especially the media, read the SPM first. It is released well before the WGI Report, which the IPCC knew few would ever read. There is only one explanation for producing the Summary first. David Wojick, an IPCC expert reviewer, explained:
Glaring omissions are only glaring to experts, so the “policymakers”—including the press and the public—who read the SPM will not realize they are being told only one side of a story. But the scientists who drafted the SPM know the truth, as revealed by the sometimes artful way they conceal it.
What is systematically omitted from the SPM are precisely the uncertainties and positive counter evidence that might negate the human interference theory. Instead of assessing these objections, the Summary confidently asserts just those findings that support its case. In short, this is advocacy, not assessment.
The Physical Basis of the Models
Here is a simple diagram of how the atmosphere is divided to create climate models.
Figure 1: Schematic of General Circulation Model (GCM).
The surface is covered with a grid and the atmosphere is divided into layers. Computer models vary in the size of the grid cells and the number of layers. Modelers claim a smaller grid provides better results. It doesn’t! If there is no data, a finer grid adds nothing: the model needs more real data for each cube, and that data simply isn’t available. There are no weather stations for at least 70 percent of the surface and virtually no data above the surface. There are few records of any length anywhere; the models are built on virtually nothing. The grid cells are so large and crude that the models can’t include major weather features like thunderstorms, tornadoes, or even small cyclonic storm systems. The IPCC 2007 Report notes,
Despite the many improvements, numerous issues remain. Many of the important processes that determine a model’s response to changes in radiative forcing are not resolved by the model’s grid. Instead, sub-grid scale parameterizations are used to parametrize the unresolved processes, such as cloud formation and the mixing due to oceanic eddies.
O’Keefe and Kueter explain how a model works:
“The climate model is run, using standard numerical modeling techniques, by calculating the changes indicated by the model’s equations over a short increment of time—20 minutes in the most advanced GCMs—for one cell, then using the output of that cell as inputs for its neighboring cells. The process is repeated until the change in each cell around the globe has been calculated.”
Interconnections mean errors are spread and amplified. Imagine the number of calculations required; even at computer speed they take a long time. Run time is a major limitation.
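A toy sketch may make the cell-by-cell stepping, and the way errors spread and grow, concrete. It is emphatically not a GCM; the “physics” and every number are invented for illustration only.

```python
# Toy illustration only, not a GCM: a ring of 100 "cells", each updated every
# time step from its own value and its two neighbours', loosely mimicking the
# cell-by-cell stepping described above. The logistic-map "physics" and every
# parameter are invented. The point: a tiny error in one cell spreads to
# neighbouring cells and grows until the two runs no longer agree anywhere.
import numpy as np

def local_physics(x, r=3.9):
    """Chaotic logistic map standing in for the unresolved per-cell physics."""
    return r * x * (1.0 - x)

def step(state, coupling=0.3):
    """One time step: each cell mixes its own update with its neighbours'."""
    fx = local_physics(state)
    return (1.0 - coupling) * fx + 0.5 * coupling * (np.roll(fx, 1) + np.roll(fx, -1))

n_cells, n_steps = 100, 60
rng = np.random.default_rng(0)
truth = rng.uniform(0.2, 0.8, n_cells)

model = truth.copy()
model[0] += 1e-9          # one cell initialised with a tiny "data error"

for t in range(1, n_steps + 1):
    truth, model = step(truth), step(model)
    if t % 15 == 0:
        diff = np.abs(truth - model)
        print(f"step {t:2d}: cells affected {np.count_nonzero(diff > 1e-12):3d}/100, "
              f"max error {diff.max():.1e}")
```

With a coupling of 0.3 the error reaches new cells one step at a time, and the chaotic per-cell map amplifies it until, by the end of the run, the two simulations disagree everywhere.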
All of this takes huge amounts of computer capacity; running a full-scale GCM for a 100-year projection of future climate requires many months of time on the most advanced supercomputer. As a result, very few full-scale GCM projections are made.
A comment at Steve McIntyre’s site, Climateaudit, illustrates the problem.
Caspar Ammann said that GCMs (General Circulation Models) took about 1 day of machine time to cover 25 years. On this basis, it is obviously impossible to model the Pliocene-Pleistocene transition (say the last 2 million years) using a GCM as this would take about 219 years of computer time.
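The arithmetic behind that estimate is easy to verify:

```python
# Quick check of the arithmetic in the Climate Audit comment quoted above.
model_years_per_machine_day = 25          # Ammann's figure: ~25 model-years per day
target_model_years = 2_000_000            # Pliocene-Pleistocene transition

machine_days = target_model_years / model_years_per_machine_day
machine_years = machine_days / 365.25
print(f"{machine_days:,.0f} machine-days, i.e. about {machine_years:.0f} years of computer time")
# -> 80,000 machine-days, i.e. about 219 years of computer time
```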
So you can only run the models if you reduce the number of variables. O’Keefe and Kueter explain:
As a result, very few full-scale GCM projections are made. Modelers have developed a variety of short cut techniques to allow them to generate more results. Since the accuracy of full GCM runs is unknown, it is not possible to estimate what impact the use of these short cuts has on the quality of model outputs.
Omitting variables allows shorter runs, but it also allows manipulation and moves the model further from reality. Which variables do you include? For the IPCC, only those that create the results they want. Besides, climate varies constantly and widely, so a variable may become more or less important over time as thresholds change.
By selectively leaving out important components of the climate system, the modelers guarantee that a human signal appears to be the cause of change. As William Kinninmonth, meteorologist and former head of Australia’s National Climate Centre, explains,
… current climate modeling is essentially to answer one question: how will increased atmospheric concentrations of CO2 (generated from human activity) change earth’s temperature and other climatological statistics? Neither cosmology nor vulcanology enter the equations. It should also be noted that observations related to sub-surface ocean circulation (oceanology), the prime source of internal variability, have only recently commenced on a consistent global scale. The bottom line is that IPCC’s view of climate has been through a narrow prism. It is heroic to assume that such a view is sufficient basis on which to predict future ‘climate’.
Static Climate Models In A Virtually Unknown Dynamic Atmosphere.
“Heroic” is polite. I suggest it is deliberately wrong. Lack of data alone justifies that position; lack of knowledge about atmospheric circulation is another reason. The atmosphere is three-dimensional and dynamic, so building a computer model that even approximates reality requires far more data than exists, much greater understanding of an extremely turbulent and complex system, and computer capacity that is unavailable for the foreseeable future. As the IPCC notes,
Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity. Poor model skill in simulating present climate could indicate that certain physical or dynamical processes have been misrepresented.
The history of understanding the atmosphere leaps 2,000 years from Aristotle, who knew there were three distinct climate zones, to George Hadley in the 18th century. The word climate comes from the Greek klima, meaning slope, referring to the angle of the sun and the climate zones it creates. Aristotle’s views dominated western science until the 16th century, but it wasn’t until the 18th century that a wider, though still narrow, understanding began.
In 1735 George Hadley used the wind patterns recorded by English sailing ships to create the first 3D diagram of the circulation.
Figure 2: Hadley Cell (Northern Hemisphere)
Restricted to the tropics, it became known as the Hadley Cell. Sadly, today we know little more than Hadley did, although Willis Eschenbach has worked hard to identify the cell’s role in the transfer of heat energy. The Intergovernmental Panel on Climate Change (IPCC) illustrates the point in Chapter 8 of the 2007 Report.
The spatial resolution of the coupled ocean-atmosphere models used in the IPCC assessment is generally not high enough to resolve tropical cyclones, and especially to simulate their intensity.
The problem for climate science and modelers is that the Earth is spherical and it rotates. Its orbit around the sun creates the seasons, but its rotation about its own axis creates even bigger geophysical dynamic problems. Because of it, a simple single-cell system (Figure 3), with heated air rising at the Equator, moving to the Poles, sinking, and returning to the Equator, breaks up. The Coriolis Effect is the single biggest influence on the atmosphere caused by rotation. It dictates that anything moving across the surface appears to be deflected to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. It appears that a force is pushing from the side, so people incorrectly refer to a Coriolis “force”. There is no force.
Figure 3: A Simple Single Cell.
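For readers who want the standard quantitative handle on the deflection described above, it scales with the Coriolis parameter; the short script below is only illustrative, though the formula itself is textbook physics.

```python
# Minimal sketch (not from the article): the Coriolis parameter
# f = 2 * Omega * sin(latitude), which sets the strength of the apparent
# deflection discussed above.
import math

OMEGA = 7.2921e-5          # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg):
    """Return f = 2*Omega*sin(latitude) in 1/s (negative in the Southern Hemisphere)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (-45, 0, 15, 45, 90):
    print(f"latitude {lat:+3d} deg: f = {coriolis_parameter(lat):+.2e} 1/s")
# f is zero at the Equator, largest at the poles, and changes sign between the
# hemispheres, which is why the apparent deflection is opposite in the two.
```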
Figure 4 shows a more recent attempt to approximate what is going on.
Figure 4: A more recent model of a cross-section through the Northern Hemisphere.
Now it is the “Indirect Ferrel Cell”. Notice the discontinuities in the Tropopause and the “Stratospheric – Tropospheric Mixing”. This is important because the IPCC models do not deal with this critical interface between the stratosphere and the upper troposphere, a major mechanism.
Due to the computational cost associated with the requirement of a well-resolved stratosphere, the models employed for the current assessment do not generally include the QBO.
This is just one example of model inadequacies provided by the IPCC.
What the IPCC Working Group I (The Physical Science Basis) Report Says About the Models.
The following quotes (italic and inset) appear under their original headings from Chapter 8 of the 2007 IPCC AR4 Report. Comments are in regular type.
There is currently no consensus on the optimal way to divide computer resources among finer numerical grids, which allow for better simulations; greater numbers of ensemble members, which allow for better statistical estimates of uncertainty; and inclusion of a more complete set of processes (e.g., carbon feedbacks, atmospheric chemistry interactions).
Most people don’t understand models or the mathematics on which they are built, a fact exploited by promoters of human-caused climate change. Models are also a major part of the IPCC’s work that has not yet been investigated by people who work outside climate science. Whenever outsiders do investigate, as with statistics and the hockey stick, gross and inappropriate misuses are exposed. The Wegman Report investigated the Hockey Stick fiasco, but also concluded,
We believe that there has not been a serious investigation to model the underlying process structures nor to model the present instrumented temperature record with sophisticated process models.
FAQ 8.1: How Reliable Are the Models Used to Make Projections of Future Climate Change?
Nevertheless, models still show significant errors. Although these are generally greater at smaller scales, important large-scale problems also remain. For example, deficiencies remain in the simulation of tropical precipitation, the El Niño- Southern Oscillation and the Madden-Julian Oscillation (an observed variation in tropical winds and rainfall with a time scale of 30 to 90 days).
Models continue to have significant limitations, such as in their representation of clouds, which lead to uncertainties in the magnitude and timing, as well as regional details, of predicted climate change. Nevertheless, over several decades of model development, they have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.
Of course they do, because that is how they are programmed.
8.2.1.1 Numerics
In this report, various models use spectral, semi-Lagrangian, and Eulerian finite-volume and finite-difference advection schemes, although there is still no consensus on which type of scheme is best.
But how different are the results and why don’t they know which is best?
8.2.1.3 Parameterizations
The climate system includes a variety of physical processes, such as cloud processes, radiative processes and boundary-layer processes, which interact with each other on many temporal and spatial scales. Due to the limited resolutions of the models, many of these processes are not resolved adequately by the model grid and must therefore be parametrized. The differences between parametrizations are an important reason why climate model results differ.
How can parameterizations vary? The variation is evidence that the modelers are simply guessing at the conditions in each grid cell, and likely choosing the guess that accentuates their bias.
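To illustrate how much room a single tunable parameterization leaves, here is a deliberately simplified sketch; the “critical relative humidity” cloud rule and all numbers are invented for illustration and are not any model’s actual scheme.

```python
# Deliberately simplified sketch, not any real model's scheme: a toy "cloud
# fraction" parameterization with one tunable constant. The humidity samples
# and all constants are invented; the point is only that the same unresolved
# process, parameterized with different tuning choices, gives different
# large-scale outputs.
import numpy as np

def cloud_fraction(rel_humidity, rh_crit):
    """Toy scheme: no cloud below a critical RH, ramping to full cover at RH = 1."""
    return np.clip((rel_humidity - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)

rng = np.random.default_rng(1)
rh = rng.uniform(0.4, 1.0, 10_000)          # pretend sub-grid humidity samples

for rh_crit in (0.6, 0.7, 0.8):             # three equally "plausible" tunings
    mean_cloud = cloud_fraction(rh, rh_crit).mean()
    print(f"rh_crit = {rh_crit:.1f}  ->  mean cloud fraction = {mean_cloud:.2f}")
```

Three equally defensible tunings give mean cloud fractions from roughly 0.17 to 0.33, which is the kind of spread the IPCC text above attributes to differing parameterizations.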
8.2.2.1 Numerics
Issues remain over the proper treatment of thermobaricity (nonlinear relationship of temperature, salinity and pressure to density), which means that in some isopycnic coordinate models the relative densities of, for example, Mediterranean and Antarctic Bottom Water masses are distorted. The merits of these vertical coordinate systems are still being established.
8.2.3.2 Soil Moisture Feedbacks in Climate Models
Since the TAR, there have been few assessments of the capacity of climate models to simulate observed soil moisture. Despite the tremendous effort to collect and homogenise soil moisture measurements at global scales (Robock et al., 2000), discrepancies between large-scale estimates of observed soil moisture remain. The challenge of modelling soil moisture, which naturally varies at small scales, linked to landscape characteristics, soil processes, groundwater recharge, vegetation type, etc., within climate models in a way that facilitates comparison with observed data is considerable. It is not clear how to compare climate-model simulated soil moisture with point-based or remotely sensed soil moisture. This makes assessing how well climate models simulate soil moisture, or the change in soil moisture, difficult.
Evaporation is a major transfer of energy (latent heat) from the surface to the atmosphere. This inadequacy alone is likely larger than the change attributed to human additions of CO2.
8.2.4.1 Terrestrial Cryosphere
Glaciers and ice caps, due to their relatively small scales and low likelihood of significant climate feedback at large scales, are not currently included interactively in any AOGCMs.
How big does an ice cap have to be to influence the parameterization in a grid? Greenland is an ice cap.
8.2.5 Aerosol Modelling and Atmospheric Chemistry
The global Aerosol Model Intercomparison project, AeroCom, has also been initiated in order to improve understanding of uncertainties of model estimates, and to reduce them (Kinne et al., 2003).
Interactive atmospheric chemistry components are not generally included in the models used in this report.
8.3 Evaluation of Contemporary Climate as Simulated by Coupled Global Models
Due to nonlinearities in the processes governing climate, the climate system response to perturbations depends to some extent on its basic state (Spelman and Manabe, 1984). Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity. Poor model skill in simulating present climate could indicate that certain physical or dynamical processes have been misrepresented.
They don’t even know which ones are misrepresented?
8.3.1.2 Moisture and Precipitation
For models to simulate accurately the seasonally varying pattern of precipitation, they must correctly simulate a number of processes (e.g., evapotranspiration, condensation, transport) that are difficult to evaluate at a global scale.
Precipitation forecasts (projections?) are worse than their temperature projections (forecasts).
8.3.1.3 Extratropical Storms
Our assessment is that although problems remain, climate models are improving in their simulation of extratropical cyclones.
This is their self-serving assessment. How much are they improving and from what baseline?
8.3.2 Ocean
Comparisons of the type performed here need to be made with an appreciation of the uncertainties in the historical estimates of radiative forcing and various sampling issues in the observations.
8.3.2.1 Simulation of Mean Temperature and Salinity Structure
Unfortunately, the total surface heat and water fluxes (see Supplementary Material, Figure S8.14) are not well observed.
8.3.2.2 Simulation of Circulation Features Important for Climate Response
The MOC (meridional overturning circulation) is an important component of present-day climate and many models indicate that it will change in the future (Chapter 10). Unfortunately, many aspects of this circulation are not well observed.
8.3.2.3 Summary of Oceanic Component Simulation
The temperature and salinity errors in the thermocline, while still large, have been reduced in many models.
How much reduction and why in only some models?
8.3.3 Sea Ice
The magnitude and spatial distribution of the high-latitude climate changes can be strongly affected by sea ice characteristics, but evaluation of sea ice in models is hampered by insufficient observations of some key variables (e.g., ice thickness) (see Section 4.4). Even when sea ice errors can be quantified, it is difficult to isolate their causes, which might arise from deficiencies in the representation of sea ice itself, but could also be due to flawed simulation of the atmospheric and oceanic fields at high latitudes that drive ice movement (see Sections 8.3.1, 8.3.2 and 11.3.8).
8.3.4 Land Surface
Vast areas of the land surface have little or no current data and even less historic data. These include 19 percent deserts, 20 percent mountains, 20 percent grasslands, 33 percent combined tropical and boreal forests and almost the entire Arctic and Antarctic regions.
8.3.4.1 Snow Cover
Evaluation of the land surface component in coupled models is severely limited by the lack of suitable observations.
Why? In 1971-2 George Kukla was producing estimates of varying snow cover as a factor in climate change. Satellite data is readily available for simple assessment of the changes through time.
8.3.4.2 Land Hydrology
The evaluation of the hydrological component of climate models has mainly been conducted uncoupled from AOGCMs (Bowling et al., 2003; Nijssen et al., 2003; Boone et al., 2004). This is due in part to the difficulties of evaluating runoff simulations across a range of climate models due to variations in rainfall, snowmelt and net radiation.
8.3.4.4 Carbon
Despite considerable effort since the TAR, uncertainties remain in the representation of solar radiation in climate models (Potter and Cess, 2004).
8.4.5 Atmospheric Regimes and Blocking
Blocking events are an important class of sectoral weather regimes (see Chapter 3), associated with local reversals of the mid-latitude westerlies.
There is also evidence of connections between North and South Pacific blocking and ENSO variability (e.g., Renwick, 1998; Chen and Yoon, 2002), and between North Atlantic blocks and sudden stratospheric warmings (e.g., Kodera and Chiba, 1995; Monahan et al., 2003) but these connections have not been systematically explored in AOGCMs.
Blocking was a significant phenomenon in the weather patterns as the circumpolar flow changed from zonal to meridional in 2013–14.
8.4.6 Atlantic Multi-decadal Variability
The mechanisms, however, that control the variations in the MOC are fairly different across the ensemble of AOGCMs. In most AOGCMs, the variability can be understood as a damped oceanic eigenmode that is stochastically excited by the atmosphere. In a few other AOGCMs, however, coupled interactions between the ocean and the atmosphere appear to be more important.
Translation: We don’t know.
8.4.7 El Niño-Southern Oscillation
Despite this progress, serious systematic errors in both the simulated mean climate and the natural variability persist. For example, the so-called ‘double ITCZ’ problem noted by Mechoso et al. (1995; see Section 8.3.1) remains a major source of error in simulating the annual cycle in the tropics in most AOGCMs, which ultimately affects the fidelity of the simulated ENSO.
8.4.8 Madden-Julian Oscillation
The MJO (Madden and Julian, 1971) refers to the dominant mode of intra-seasonal variability in the tropical troposphere. Thus, while a model may simulate some gross characteristics of the MJO, the simulation may be deemed unsuccessful when the detailed structure of the surface fluxes is examined (e.g., Hendon, 2000).
8.4.9 Quasi-Biennial Oscillation
The Quasi-Biennial Oscillation (QBO; see Chapter 3) is a quasi-periodic wave-driven zonal mean wind reversal that dominates the low-frequency variability of the lower equatorial stratosphere (3 to 100 hPa) and affects a variety of extratropical phenomena including the strength and stability of the winter polar vortex (e.g., Baldwin et al., 2001).. Due to the computational cost associated with the requirement of a well-resolved stratosphere, the models employed for the current assessment do not generally include the QBO.
8.4.10 Monsoon Variability
In short, most AOGCMs do not simulate the spatial or intra-seasonal variation of monsoon precipitation accurately.
Monsoons are defined by extreme seasonality of rainfall. They occur in many regions around the world, though most people associate them only with southern Asia. It is not clear which monsoons the IPCC means. Regardless, these are massive systems of energy transfer from the region of energy surplus to the region of deficit.
8.4.11 Shorter-Term Predictions Using Climate Models
This suggests that ongoing improvements in model formulation driven primarily by the needs of weather forecasting may lead also to more reliable climate predictions.
This appears to contradict the claim that weather and climate forecasts are different. As Norm Kalmonavitch notes,
The GCM models referred to as climate models are actually weather models only capable of predicting weather about two weeks into the future and as we are aware from our weather forecasts temperature predictions…
In 2008 Tim Palmer, a leading climate modeller at the European Centre for Medium-Range Weather Forecasts in Reading, England, said in New Scientist:
I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.
8.5.2 Extreme Precipitation
Sun et al. (2006) investigated the intensity of daily precipitation simulated by 18 AOGCMs, including several used in this report. They found that most of the models produce light precipitation (<10 mm day–1) more often than observed, too few heavy precipitation events and too little precipitation in heavy events (>10 mm day–1). The errors tend to cancel, so that the seasonal mean precipitation is fairly realistic (see Section 8.3).
Incredible: the errors cancel, and since the results appear to match reality, they must be correctly derived.
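A hedged numerical sketch of the point (the numbers are invented, not taken from Sun et al.): a model that rains too often but too lightly can still land near the observed seasonal total.

```python
# Invented numbers, purely to illustrate the "errors cancel" point above --
# not data from Sun et al. (2006). A model with too many light-rain days, too
# few heavy events, and too little rain per heavy event can still roughly
# match the observed seasonal-mean precipitation.
observed = {"light_days": 40, "light_mm": 4.0, "heavy_days": 10, "heavy_mm": 25.0}
modelled = {"light_days": 70, "light_mm": 4.0, "heavy_days": 6, "heavy_mm": 21.0}

def seasonal_total(p):
    return p["light_days"] * p["light_mm"] + p["heavy_days"] * p["heavy_mm"]

print("observed seasonal total:", seasonal_total(observed), "mm")   # 410.0 mm
print("modelled seasonal total:", seasonal_total(modelled), "mm")   # 406.0 mm
```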
8.5.3 Tropical Cyclones
The spatial resolution of the coupled ocean-atmosphere models used in the IPCC assessment is generally not high enough to resolve tropical cyclones, and especially to simulate their intensity.
8.6.2 Interpreting the Range of Climate Sensitivity Estimates Among General Circulation Models
The climate sensitivity depends on the type of forcing agents applied to the climate system and on their geographical and vertical distributions (Allen and Ingram, 2002; Sausen et al., 2002; Joshi et al., 2003). As it is influenced by the nature and the magnitude of the feedbacks at work in the climate response, it also depends on the mean climate state (Boer and Yu, 2003). Some differences in climate sensitivity will also result simply from differences in the particular radiative forcing calculated by different radiation codes (see Sections 10.2.1 and 8.6.2.3).
Climate sensitivity has consistently declined and did so further in IPCC AR5. In fact, in the SPM for AR5 the sensitivity declined in the few weeks from the first draft to the final report.
8.6.2.2 Why Have the Model Estimates Changed Since the TAR?
The current generation of GCMs[5] covers a range of equilibrium climate sensitivity from 2.1°C to 4.4°C (with a mean value of 3.2°C; see Table 8.2 and Box 10.2), which is quite similar to the TAR. Yet most climate models have undergone substantial developments since the TAR (probably more than between the Second Assessment Report and the TAR) that generally involve improved parametrizations of specific processes such as clouds, boundary layer or convection (see Section 8.2). In some cases, developments have also concerned numerics, dynamical cores or the coupling to new components (ocean, carbon cycle, etc.). Developing new versions of a model to improve the physical basis of parametrizations or the simulation of the current climate is at the heart of modelling group activities. The rationale for these changes is generally based upon a combination of process-level tests against observations or against cloud-resolving or large-eddy simulation models (see Section 8.2), and on the overall quality of the model simulation (see Sections 8.3 and 8.4). These developments can, and do, affect the climate sensitivity of models.
All this says is that climate models are a work in progress. However, it also acknowledges that they can only hope to improve the parameterizations. In reality they need more and better data, but the historic record cannot be improved and current coverage remains inadequate. Even if an adequate data-collection system were started today, it would be thirty years before the record was long enough to be statistically significant.
8.6.2.3 What Explains the Current Spread in Models’ Climate Sensitivity Estimates?
The large spread in cloud radiative feedbacks leads to the conclusion that differences in cloud response are the primary source of inter-model differences in climate sensitivity (see discussion in Section 8.6.3.2.2). However, the contributions of water vapour/lapse rate and surface albedo feedbacks to sensitivity spread are non-negligible, particularly since their impact is reinforced by the mean model cloud feedback being positive and quite strong.
What does “non-negligible” mean? Is it a double negative? Apparently. Why don’t they use the term significant? They assume their inability to produce accurate results is because of clouds and water vapor. As this review shows, there are countless other factors, especially those they ignore, such as the Sun. The 2001 TAR Report included a table of the forcings with a column labeled Level of Scientific Understanding (LOSU). Of the nine forcings, only two have a “high” rating, although that is their own assessment; one is medium and the other six are “low”. The only difference in the 2007 AR4 Report is that the LOSU column is gone.
8.6.3.2 Clouds
Despite some advances in the understanding of the physical processes that control the cloud response to climate change and in the evaluation of some components of cloud feedbacks in current models, it is not yet possible to assess which of the model estimates of cloud feedback is the most reliable.
The cloud problem is far more complicated than this summary implies. For example, clouds function differently depending on type, thickness, altitude, and whether they consist of water vapor, water droplets, ice crystals, or snowflakes.
8.6.3.3 Cryosphere Feedbacks
A number of processes, other than surface albedo feedback, have been shown to also contribute to the polar amplification of warming in models (Alexeev, 2003, 2005; Holland and Bitz, 2003; Vavrus, 2004; Cai, 2005; Winton, 2006b). An important one is additional poleward energy transport, but contributions from local high-latitude water vapour, cloud and temperature feedbacks have also been found. The processes and their interactions are complex, however, with substantial variation between models (Winton, 2006b), and their relative importance contributing to or dampening high-latitude amplification has not yet been properly resolved.
You can’t know how much energy is transported to polar regions if you can’t determine how much is moving out of the tropics. The complete lack of data for the entire Arctic Ocean and most of the surrounding land is a major limitation.
8.6.4 How to Assess Our Relative Confidence in Feedbacks Simulated by Different Models?
A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.
The IPCC chapter on climate models appears to justify the use of the models by saying they show an increase in temperature when CO2 is increased. Of course they do; that is how they are programmed. Almost every individual component of the models has, by their own admission, problems: lack of data, lack of understanding of the mechanisms, or outright omission because of inadequate computer capacity or priorities. The only possible conclusion is that the models were designed to prove the political position that human CO2 was a problem.
Scientists involved with producing this result knew the limitations were so severe that they precluded the possibility of proving it. This is clearly set out in their earlier comments and in the IPCC Science Report they produced. They remained silent when the SPM claimed, with high certainty, that they knew what was going on with the climate. They had to know this was wrong. They may not have known about the political agenda when they were inveigled into participating, but they had to know when the 1995 SPM was published, because Benjamin Santer exploited the SPM bias by rewriting Chapter 8 of the 1995 Report in contradiction to what the members of his chapter team had agreed. The gap widened in subsequent SPMs, but they remained silent and therefore complicit.
Gamecock:
My bias: My first computer system (1978) was a DEC PDP-11/45. The degreed computer scientists I worked with had a saying: “If you can’t get it done in 128k, it’s not worth doing.” Hence, all my career, I was suspicious of the value of more computing power. I considered it a crutch for those without sufficient intellect to figure out how to get it done in 128k.
Thanks for making me feel young. The correct answer is that it takes a VAX and 512K.
Excellent assessment. It really opened my eyes to the complexity of the process. You can’t simulate the process from a lack of data.
Billy Ruff’n says:
March 21, 2014 at 4:39 am
Claude Harvey said, “For those who know and remain silent, cowardly self-interest comes to mind…”
True enough, but can you blame them?
(Yes I can)
Put yourself in the shoes of a young, up and coming climate scientist who has finally arrived at a point where they’re invited to work in the bowels of the IPPC. They have invested considerable financial sums in their education and many years in training and hard work to arrive where they are. If they speak up, their careers in academia and the climate research establishment are over.
(If they remain silent, then they are not “working” in their career field, they are merely playing “follow the leader.”)
How many private sector jobs in climate science are there?
(Don’t you think this should have been a consideration BEFORE they chose their academic field?)
If they speak up, how will they pay the mortgage and feed the kids?
(You either have a soul or you don’t. You find a different job and down-size your expectations, and in so doing, you pay the mortgage and feed the kids. You don’t have to be dishonest.)
For an honest person, it must be devastating to find oneself in such a position.
(Frankly, if they are in this position, they are not an “honest person” since they are committing dishonest acts.)
Thanks, Tim. Nice post.
I think it’s a capital mistake to rely on quotes from fictional characters, made by an author who believed in patently nonsensical stuff. The twisting of facts and theories can and does occur whether or not one has data.
This is a good article, but Dr. Ball is wrong on one point. A computer model cannot prove a theory. At best a computer model can provide evidence for or against a theory. This is because a computer model is nothing more than a mathematical expression of the theory; i.e., the computer model is the theory, so any claim that it proves the theory would be a circular argument. If real-world measurements agree with the model outputs, this is evidence of the accuracy of the theory, not proof.
For an analysis of the inherent inutility of climate models, y’all should take the time to watch
Tom O writes “For an honest person, it must be devastating to find oneself in such a position.”
I completely agree. But you fail to identify what is wrong. The problem is not with junior scientists, but with senior scientists. It is the silence of the likes of the former President of the Royal Society and current Astronomer Royal, Lord Rees, who lies about the science of CAGW in public. It is these senior scientists who have remained silent and allowed this disgusting state of affairs to occur.
Don’t blame the junior scientists. Blame their bosses.
Actually it’s a bicycle, and it’s going in the opposite direction.
There is a common error in both the posted article and many of the responses that has to do with the difference between models intended to predict a specific state and models intended to simulate a physical process.
It is usually summed up as the difference between initial conditions and boundary conditions.
It is certainly true that the observational data we have is insufficient to define the initial conditions adequately to make accurate predictions of a latter state of the system.
However climate modelling is an exercise in boundary conditions, not specific states.
Physical modelling in such circumstances gives insight into the envelope of behaviour of the system. It does not give specific predictions of final states. This distinction makes many of the criticisms here about shortcomings in the initial data and in model predictions irrelevant, because they ignore this difference.
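For what it is worth, izen’s distinction can be illustrated with a toy stochastic process (purely illustrative; nothing here comes from an actual GCM): individual runs are unpredictable, but the ensemble settles into an envelope fixed by the “boundary” parameters.

```python
# Toy illustration of "envelope vs. specific state" -- not a climate model.
# Each run is a damped random walk whose drift, damping and noise play the
# role of "boundary conditions"; individual trajectories are unpredictable,
# but the ensemble mean and spread are stable.
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_steps = 1000, 200
drift, noise, damping = 0.02, 0.5, 0.05   # the "boundary conditions"

x = rng.normal(0.0, 0.01, n_runs)         # slightly different initial states
for _ in range(n_steps):
    x = x + drift - damping * x + noise * rng.normal(size=n_runs)

print(f"two individual runs end at {x[0]:+.2f} and {x[1]:+.2f}")
print(f"ensemble mean {x.mean():+.2f}, ensemble spread (std) {x.std():.2f}")
# The equilibrium mean is roughly drift/damping = 0.4 regardless of the
# initial states, even though no single run is predictable.
```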
“If you can’t get it done in 128k, it’s not worth doing.”
You had 128k? Boy, back in my day all we had was ones and zeros, and sometimes we ran out of zeros and we had to use o’s.
Climate modelling is a FARCE.
Cheers, Kevin.
The vastness and complexity of Creation should awe and humble us. Dr. Ball does a good job of demonstrating how even the hugest computers cannot grasp the intricate interactions between the numerous levels, and how we lack data to fill in many of the bulky cubes of “grids.”
In essence our most bragged-about computers are pathetic, compared to what we’d need, to capture the intricacy of the atmosphere. One might as well try to capture a hurricane with a butterfly net. People who trust climate models are in some ways like people who trust a witch doctor when he shakes a bone at the sky, commanding it to rain. There is no basis for the trust, but they trust just the same.
The best weather forecasters are the ones who are awed and humbled by the atmosphere. They seem to understand the dynamics to some degree, but also to refer back to older maps which show roughly the same situation. Because they refer back to old maps so much, they know maps which start out looking remarkably alike can lead to situations that are remarkably dissimilar in only five days. (Some butterfly flapped its wings somewhere.)
I am amazed by the skill some display in the longer term. However that skill is only general, and cannot lead to specific forecasts. Also, just because a climate cycle lasted sixty years last time around does not mean it will last sixty years this time around. The infinite variety of weather offers us infinite opportunity to be wrong. This Creation we are part of is one heck of a lot bigger and more complex than we are.
It should hardly come as a surprise that none of these models are of any value. Those building them expect the models to show them how things work, while a good model can only be constructed when the builder already knows how the system works.
Follow the money. Al Gore certainly did.
Reblogged this on Power To The People and commented:
James Lovelock in the Guardian sums up the fact that Climate Scientists are well aware their theories are little more than a house of cards.
“The great climate science centres around the world are more than well aware how weak their science is. If you talk to them privately they’re scared stiff of the fact that they don’t really know what the clouds and the aerosols are doing. They could be absolutely running the show. We haven’t got the physics worked out yet. One of the chiefs once said to me that he agreed that they should include the biology in their models, but he said they hadn’t got the physics right yet and it would be five years before they do. So why on earth are the politicians spending a fortune of our money when we can least afford it on doing things to prevent events 50 years from now? They’ve employed scientists to tell them what they want to hear.”
http://www.theguardian.com/environment/blog/2010/mar/29/james-lovelock?guni=Article:in%20body%20link
eyesonu, 3/21, 4:39 am
Tom O, 3/21, 7:51 am
I agree with you both, but at my age I guess I have a bit more empathy for the foolish decisions and mistakes of youth and the consequences thereof. The real villains in all this are the senior scientists who started and have perpetuated the fraud.
Fantastic read
And some great comments. I like short pithy ones like these two:
Quinn the Eskimo said at 4:11 am
The logic of the attribution analysis is essentially this: We don’t understand and are not able to model the climate system very well. Nevertheless, when we model the climate system, we don’t know what else could be causing the warming except for CO2, so it must be CO2. I am not making that up. It’s really that stupid.
jauntycyclist said at 5:31 am
just because the co2ers models can’t do anything doesn’t mean the processes cannot be predicted. One doesn’t have to model everything. Just the essential mechanisms. ie understanding the hierarchy. Putting co2 at the top is why they get nonsense.
David Jay says:
March 21, 2014 at 7:12 am
Thanks for making me feel young. The correct answer is that it takes a VAX and 512K.
=================================
You are welcome. I worked on VAXes for many years, too. When they came out, with their virtual memory, etc., I thought it ridiculous not to just overlay physical memory. In time, I came to appreciate virtual memory, though not as much as most did/do.
izen says:
March 21, 2014 at 8:01 am
It does not give specific predictions of final states. This distinction makes many of the criticisms here of the shortcomings of the initial data and model predictions irrelevant because of the ignorance of this difference.
=======================
Bullshit. Climate models are being used to make predictions about future temperatures. Are you daft?
Izen-
I don’t see your point about boundary conditions. The climate models are predicting temperature based on CO2 levels. Temperature is hardly a boundary condition. If you are saying that the problem is that the climate models are designed to estimate boundary conditions and are being misused to predict temperature, I could go along with that.
The bottom line: no AGW, no IPCC. So what else do you expect them to do?
Through the use of information theory, it is possible to build the best possible model from given informational resources. Experience with building models of this type supports generalization about the prospects for creating a statistically validated global warming model that successfully predicts the numerical values of probabilities of the outcomes of events.
In building such a model, the first step would be to identify the events underlying the model. This step has yet to be taken. That it has not been taken means that “predictions” cannot be made with models of the type that are currently available. These models make “projections” which, however, convey no information to a policy maker about the outcomes from his/her policy decisions. Policy makers have no information but believe they have information as a result of confusing “projection” with “prediction.” This mistake accounts for the continuing fatuous attempts by governments at controlling the climate.
If the underlying events were to be identified, each event would have a duration in time. In climatology, the canonical duration is three decades. The bare minimum number of observed, statistically independent events for construction of a statistically validated model is about 150. The time to observe these events is 30 × 150 = 4500 years. The various global temperature time series extend backward in time to the year 1850, providing us with 164 years’ worth of data. Thus, the minimum number of years that must elapse before there is the possibility of constructing the statistically validated global warming model that is maximally efficient in its use of information is 4500 – 164 = 4336 years. In 4336 years, though, our supply of fossil fuels will have long been exhausted.
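The arithmetic in that paragraph is easy to check; note that the 30-year event length and the 150-event minimum are the commenter’s premises, not established requirements.

```python
# Arithmetic behind the comment above. The 30-year event duration and the
# ~150-event minimum sample are the commenter's assumptions.
event_length_years = 30          # canonical climate "normal" period
min_independent_events = 150     # commenter's assumed minimum sample size
record_years = 2014 - 1850       # instrumental record length cited (164 years)

years_needed = event_length_years * min_independent_events      # 4500
years_remaining = years_needed - record_years                   # 4336
print(f"total observation time needed: {years_needed} years")
print(f"additional years still required: {years_remaining} years")
```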
The approach now being taken is to compute the future state of the climate at time t + delta t, given the state at t, to feed the state at t + delta t into computation of the state at t + 2 * delta t and to continue in this vein ad infinitum. The growing divergence between the computed and observed global temperature demonstrates that this approach doesn’t work. It doesn’t work because at the beginning of every time step, information about the current state is missing and the missing information grows as a function of time.
There is an urgent need for the directors of the world’s climatological research program to gain knowledge about the role of entropy in science and to factor this knowledge into the planning of the research. Their ignorance on this score has already cost us a fortune and this cost continues to rise.
A good article summarizing a few — not even all — of the many problems with trying to solve the Navier-Stokes equation on a spinning, tilted ball in an eccentric orbit around a variable star, with an inhomogeneous surface consisting of 70% ocean (necessitating a separate coupled Navier-Stokes system in a moving fluid with highly variable temperature, salinity, density, depth and surface structure) and 30% of land surface that varies in terms of height above sea level, vegetation and use, moisture content, non-oceanic water systems, albedo, geology, and distribution relative to (e.g.) the Earth’s precise tilt and position in its aforementioned eccentric orbit around the variable star.
I tend to hammer on still other ones he omits — such as using a latitude-longitude grid in the first place on a spherical surface when this coordinatization has well known statistical and mathematical inadequacies for performing unbiased sampling, numerical interpolation, numerical integration (especially of the adaptive sort) and when there are well-known e.g. icosahedral tessellations that are both adaptive (systematically rescalable to a finer resolution) and which have no polar bias — all surface tessera end up with roughly the same area. Or, the inverse of the problem he describes — the fact that they don’t have data on anything like the grid resolution they are using already — which is that in a strongly coupled, highly nonlinear non-Markovian Navier-Stokes system (let alone two coupled Navier-Stokes systems consisting of completely distinct fluids with enormously strong coupling between them and nearly independent dominant circulation patterns) there is no theoretical basis for omitting detail at any scale when attempting a forward solution because even tiny fluctuations in state can nonlinearly grow until they dominate the simulated “climate’s” evolution on basically all future time scales. This is clearly evident in the enormous spread in model results produced by any given model within the “perturbed parameter ensemble”. Any actual future time evolution of the Earth’s climate is “unlikely” within the spread of possible future evolutions any given GCM produces, although the current GCMs almost all have produced PPE results from the “predictive epoch” after the training set that are predominantly systematically much warmer than reality has turned out to be.
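On the latitude-longitude point, here is a minimal sketch (my illustration, not rgb’s) of how equal-angle cells shrink toward the poles, which is the polar bias that equal-area tessellations such as icosahedral grids avoid.

```python
# Minimal sketch (not from the comment): relative area of equal-angle
# latitude-longitude grid cells. On a sphere a cell's area scales with
# cos(latitude), so "1 degree x 1 degree" cells shrink toward the poles.
import math

def cell_area_fraction(lat_deg, dlat_deg=1.0, dlon_deg=1.0):
    """Fraction of the sphere's surface covered by one dlat x dlon cell at lat_deg."""
    lat1 = math.radians(lat_deg)
    lat2 = math.radians(lat_deg + dlat_deg)
    return math.radians(dlon_deg) * (math.sin(lat2) - math.sin(lat1)) / (4.0 * math.pi)

equator = cell_area_fraction(0.0)
for lat in (0, 30, 60, 80, 89):
    ratio = cell_area_fraction(lat) / equator
    print(f"1x1 degree cell starting at {lat:2d}N is {ratio:5.3f} of an equatorial cell's area")
```

At 60°N a cell has about half the area of an equatorial cell; next to the pole it is a tiny sliver.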
We have the paradox that in order to get reliable results, we might well need to use a much, much finer grid to get the physics right, but have much, much worse data to use to initialize the grid in a way that believably corresponds to the current or any past climate state. Catch-22, with no way around it.
Dr. Ball also omits the fact that if different GCMs are applied to the same, vastly simplified toy problem — an untilted water world in a circular orbit around a constant star — they converge to completely different solutions for the imaginary planet’s steady state. It is difficult to sufficiently emphasize what this means as far as the possible reliability of GCMs in general are concerned. In any other branch of physics (say, quantum mechanics) if one took four different computer codes all purporting to solve the same quantum problem — perhaps determining the quantum properties of a promising new semiconductor — and all four codes:
a) produced completely, significantly, different results (different band structures with different gaps at different energies);
b) none of which agreed with direct measurements of those gaps or the band structure in the laboratory,
then who would take any one of those codes, let alone some sort of “average” of their results, seriously? Seriously enough to invest a few billion dollars building a massive fabrication plant based on their collective predictions. Seriously enough to publish paper after paper on the “average prediction” of the four codes as if it had some meaningful predictive value in spite of the fact that direct comparison with experiment proves that they do not have any predictive value, either singly or collectively.
Yet that is business as usual in the world of climate modeling, which attempts to solve a problem that is much more difficult than solving a band structure problem reasonably accurately.
Finally, Dr. Ball fails to address Chapter 9 of AR5, which is arguably even more deceptive than Chapter 8. My favorite quotes there are:
From 9.2.2.1, Multi-Model Ensembles (MME):
The MME is created from existing model simulations from multiple climate modeling centers. MMEs sample structural uncertainty and internal variability. However, the sample size of MMEs is small, and is confounded because some climate models have been developed by sharing model components leading to shared biases… Thus, MME members cannot be treated as purely independent, which implies a reduction in the effective number of independent models…
Translation for those not gifted in statistics-speak: We pretend that the GCMs make up an “ensemble”:
http://en.wikipedia.org/wiki/Statistical_ensemble_%28mathematical_physics%29
Note well, not only do they not constitute such an ensemble, it is a horrendous abuse of the term, implying a kind of controlled variability that is utterly lacking. In essence, they are pretending that GCMs are being pulled out of a large hat containing “random” variations of GCM-ness, that GCMs are somehow independent, identically distributed quantities being drawn from some distribution.
However (the paragraph continues) we know that this is not correct. And besides, in addition to there only being a paltry few GCMs in the first place, we cannot even begin to pretend that they are in any meaningful sense independent, or that the differences are random (or rather, “unbiased”) and hence likely to cancel out. Rather, we have no idea how many “independent” models the collection consists of, but it is almost certainly too small and too biased for any sort of perversion of the Central Limit Theorem in ordinary statistics to apply.
Next, from 9.2.2.3, Statistical Methods Applied to Ensembles:
The most common approach to characterize MME results is to calculate the arithmetic mean of the individual model results, referred to as an unweighted multi-model mean. This approach of ‘one vote per model’ gives equal weight to each climate model regardless of (1) how many simulations each model has contributed, (2) how interdependent the models are or (3) how well each model has fared in objective evaluation. The multi-model mean will be used often in this chapter. Some climate models share a common lineage and so share common biases… As a result, collections such as the CMIP5 MME cannot be considered a random sample of independent models. This complexity creates challenges for how best to make quantitative inferences of future climate…
Translation: In spite of the fact that the MME is not, in fact an ensemble, ignoring the fact that model results are in no possible defensible sense independent and identically distributed samples drawn from a distribution of model results produced by numerically correct models that are randomly perturbed in an unbiased way from some underlying perfectly correct mean behavior, we form the simple arithmetic mean of the mean results of each contributing model, form the standard deviation of those mean predictions around the simple arithmetical mean, and then pretend that the Central Limit Theorem is valid, that is, that the mean of the individual MME mean results will be normally distributed relative to the “true climate”.
We do this in spite of the fact that some models have only a very few runs in their contributing mean while others have many — we make no attempt to correct for an error that would be grounds for flunking any introductory statistics course — treating my mean of 100 coin flips producing a probable value of getting heads of 0.51 on the same basis as your mean of a single flip, that happened to come up tails, to get a probable value of (0.51 + 0)/2 = 0.255 for getting heads. Are they serious?
We do this in spite of the fact that Timmy and Carol were too lazy to actually flip a coin 100 times, so each of them flipped it 25 times and they then pooled the results into 50 flips and flipped it another 50 times independently. Their result still goes in with the same weight as my honestly independent 100 flips.
We do this in spite of the fact that Timmy and Carol somehow got a probability of heads of only 0.18 (Timmy) and 0.24 (Carol) for 100 flips, where I got 0.51 and you got 0 (in your one flip that is still being averaged in as if it were 100). Anyone but a complete idiot would look at the disparity in flip results between me and Timmy and Carol, use the binomial probability distribution to perform a simply hypothesis test (all coins used in this experiment/simulation are unbiased) and would reject the entire experiment until the disparity was explained. But Climate Science only makes money if two-sided coins are not approximately fifty-fifty, and are indecently eager to avoid actually looking too hard at results that suggest otherwise no matter how they are obtained or how inconsistent they are with each other or (worse) with observational reality.
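Here is a minimal numeric sketch of the weighting objection, using the coin-flip numbers from the paragraphs above (the code framing is mine, not anything from AR5):

```python
# Illustrative sketch of the weighting objection, using the coin-flip numbers
# from the comment above (0.51 from 100 flips, 0 from a single flip). An
# unweighted mean of per-estimate means ignores how much evidence each
# estimate contributes.
estimates = [
    {"name": "me",  "flips": 100, "heads": 51},
    {"name": "you", "flips": 1,   "heads": 0},
]

unweighted = sum(e["heads"] / e["flips"] for e in estimates) / len(estimates)
pooled = sum(e["heads"] for e in estimates) / sum(e["flips"] for e in estimates)

print(f"unweighted mean of means: {unweighted:.3f}")   # 0.255, as in the comment
print(f"pooled (weighted) estimate: {pooled:.3f}")     # 0.505
```

The unweighted mean of the two means is 0.255, while pooling the flips gives about 0.505; the gap is entirely an artifact of ignoring how many flips each estimate is based on.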
Finally (it concludes) — doing all of these unjustifiably stupid things creates “challenges” for statistically meaningful climate prediction using GCMs.
Ya Mon! You sho’ nuff got dat right, yes you did. If you use statistical methodology that cannot be defended or derived in any sound way from the accepted principles of probability and statistics, it does indeed create “challenges”. Basically, what you are doing is doing things completely unjustifiably and/or incorrectly and hoping that they’ll work out anyway!
Mathematics being what it is, the result of using made-up methodology to compute quantities incorrectly is actually rather likely not to work out anyway. And you will have nobody to blame but yourself if nature stubbornly persists in deviating further and further away from the MME mean of many biased, broken, predictive models. One day nobody will be able to possibly convince themselves that the models are correct, and then where will you be?
At the heart of a scientific scandal that will make Piltdown Man look like a practical joke, that’s where…
rgb
@- dbakerber
I don’t see your point about boundary conditions. The climate models are predicting temperature based on CO2 levels. Temperature is hardly a boundary condition.
Models use multiple runs because temperature is a boundary condition. The multiple runs provide a range, an envelope of possible temperatures.
A comparison would be the modelling used to project the possible position of the missing airplane MH370. The initial conditions are incapable of providing a prediction of its exact position, but by modelling the physical constraints the possible area that the plane could have reached can be defined.
And where it could NOT have reached.
Billy Ruff’n says: “… If they speak up, their careers in academia and the climate research establishment are over. …If they speak up, how will they pay the mortgage and feed the kids?
No problemo. When the trials begin, the excuse “I vass only folloving ordehrs” has been extremely popular.
Bringing about the Socialist Utopia is so important, that no price is too high, even the death of millions. People are not going to willingly give up comfort, to die in poverty, so they will need to be driven to it using lies and fear.