
Guest post by Pat Frank
The summary of results from phase 5 of the Coupled Model Intercomparison Project (CMIP5) has come out. [1] CMIP5 evaluates the state-of-the-art general circulation climate models (GCMs) that will be used in the IPCC’s forthcoming 5th Assessment Report (AR5) to project climate futures. The fidelity of these models determines how much we may believe their projected climate futures. This essay presents a small step in evaluating the CMIP5 GCMs, and the reliability of the IPCC conclusions based on them.
I also wanted to see how the new GCMs compare with the CMIP3 models described in 2003. [2] The CMIP3 models were used in IPCC AR4. Those models produced an estimated 10.1% error in cloudiness, which translated into an uncertainty in cloud feedback of (+/-)2.8 Watts/m^2. [3] That uncertainty is equivalent to (+/-)100% of the excess forcing due to all the GHGs humans have released into the atmosphere since 1900. Cloudiness uncertainty, all by itself, is as large as the entire effect the IPCC is trying to resolve.
If we’re to know how reliable model projections are, the energetic uncertainty, e.g., due to cloudiness error, must be propagated into GCM calculations of future climate. From there the uncertainty should appear as error bars that condition the projected future air temperature. However, inclusion of true physical error bars never seems to happen.
Climate modelers invariably publish projections of temperature futures without any sort of physical uncertainty bars at all. Here is a very standard example, showing several GCM projections of future arctic surface air temperatures. There’s not an uncertainty bar among them.
The IPCC does the same thing, brought to you here courtesy of an uncritical US EPA. The shaded regions in the IPCC SRES graphic refer to the numerical variability of individual GCM model runs. They have nothing to do with physical uncertainty or with the reliability of the projections.
Figure S1 in Jiang, et al.’s Auxiliary Material [1] summarized how accurately the CMIP5 GCMs hindcasted global cloudiness. Here it is:
Here’s what Jiang, et al., say about their results, “Figure S1 shows the multi-year global, tropical, mid-latitude and high-latitude mean TCFs from CMIP3 and CMIP5 models, and from the MODIS and ISCCP observations. The differences between the MODIS and ISCCP are within 3%, while the spread among the models is as large as ~15%.” “TCF” is Total Cloud Fraction.
Including the CMIP3 model results with the new CMIP5 outputs conveniently allows a check for improvement over the last 9 years. It also allows a comparison of the official CMIP3 cloudiness error with the 10.1% average cloudiness error I estimated earlier from the outputs of equivalent GCMs.
First, here are tables of the CMIP3 and CMIP5 cloud projections and their associated errors. Observed cloudiness was taken as the average of the ISCCP and MODIS Aqua satellite measurements (the last two bar groupings in Jiang Figure S1), which was 67.7% cloud cover. GCM abbreviations follow the references below.
Table 1: CMIP3 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP3 GCM | Global Average Cloudiness (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | bcm2 | 67.7 | 0.00 |
| CCCMA | cgcm3.1 | 60.7 | -0.10 |
| CNRM | cm3 | 73.8 | 0.09 |
| CSIRO | mk3 | 65.8 | -0.03 |
| GFDL | cm2 | 66.3 | -0.02 |
| GISS | e-h | 57.9 | -0.14 |
| GISS | e-r | 59.8 | -0.12 |
| INM | cm3 | 67.3 | -0.01 |
| IPSL | cm4 | 62.6 | -0.08 |
| MIROC | miroc3.2 | 54.2 | -0.20 |
| NCAR | ccsm3 | 55.6 | -0.18 |
| UKMO | hadgem1 | 54.2 | -0.20 |
| Avg. | | 62.1 | R.M.S. (+/-)12.1% |
Table 2: CMIP5 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP5 GCM | Global Average Cloudiness (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | noresm | 54.2 | -0.20 |
| CCCMA | canesm2 | 61.6 | -0.09 |
| CNRM | cm5 | 57.9 | -0.14 |
| CSIRO | mk3.6 | 69.1 | 0.02 |
| GFDL | cm3 | 71.9 | 0.06 |
| GISS | e2-h | 61.2 | -0.10 |
| GISS | e2-r | 61.6 | -0.09 |
| INM | cm4 | 64.0 | -0.06 |
| IPSL | cm5a | 57.9 | -0.14 |
| MIROC | miroc5 | 57.0 | -0.16 |
| NCAR | cam5 | 63.5 | -0.06 |
| UKMO | hadgem2-a | 54.2 | -0.20 |
| Avg. | | 61.2 | R.M.S. (+/-)12.4% |
Looking at these results, some models improved between 2003 and 2012, some did not, and some apparently became less accurate. The error fractions are one-dimensional numbers condensed from errors in three dimensions. They do not mean that a given GCM predicts a constant fraction of cloudiness above or below the observed cloudiness. GCM cloud predictions weave about the observed cloudiness in all three dimensions; here predicting more cloudiness, there less. [4, 5] The GCM fractional errors are the positive and negative cloudiness errors integrated over the entire globe, compressed into single numbers. The total average error of all the GCMs is represented as the root-mean-square (R.M.S.) average of the individual GCM fractional errors.
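As a quick check on the arithmetic, the R.M.S. average can be recomputed from the rounded Table 2 entries with a few lines of Python; the small difference from the quoted (+/-)12.4% is just rounding in the tabulated error fractions.

```python
# Root-mean-square average of the per-model fractional cloudiness
# errors, using the rounded CMIP5 values as tabulated in Table 2.
cmip5_errors = [-0.20, -0.09, -0.14, 0.02, 0.06, -0.10,
                -0.09, -0.06, -0.14, -0.16, -0.06, -0.20]

rms = (sum(e**2 for e in cmip5_errors) / len(cmip5_errors)) ** 0.5
print(f"CMIP5 R.M.S. cloudiness error: +/-{rms:.1%}")  # ~ +/-12.3%
```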
CMIP5 cloud cover was averaged over 25 model years (1980-2004), while the CMIP3 averages represent 20 model years (1980-1999). GCM average cloudiness was calculated from “monthly mean grid-box averages.” So a 20-year global average included 12×20 = 240 monthly global cloudiness realizations, reducing GCM random calculational error by at least a factor of 15 (sqrt(240) ≈ 15.5).
It’s safe to surmise, therefore, that the residual errors recorded in Tables 1 and 2 represent systematic GCM cloudiness error, similar to the cloudiness error found previously. The average GCM cloudiness systematic error has not diminished between 2003 and 2012. The CMIP3 models averaged 12.1% error, which is not significantly different from my estimate of 10.1% cloudiness error for GCMs of similar sophistication.
We can now proceed to propagate GCM cloudiness error into a global air temperature projection, to see how uncertainty grows over GCM projection time. GCMs typically calculate future climate in a step-wise fashion, month-by-month and year-by-year. In each step, the climate variables calculated in the previous time period are extrapolated into the next time period. Any errors residing in those calculated variables must propagate forward with them. When cloudiness is calculationally extrapolated from year to year in a climate projection, the 12.4% average cloudiness error in CMIP5 GCMs represents an uncertainty of (+/-)3.4 Watts/m^2 in climate energy (scaling the (+/-)2.8 Watts/m^2 per 10.1% figure linearly: 2.8 × 12.4/10.1 ≈ 3.4). [3] That (+/-)3.4 Watts/m^2 of energetic uncertainty must propagate forward in any calculation of future climate.
Climate models represent bounded systems. Bounded variables are constrained to remain within certain limits set by the physics of the system. Systematic uncertainty, however, is not bounded. In a step-wise calculation, any systematic error or uncertainty in the input variables must propagate into the output variables as (+/-)sqrt[sum of (per-step error)^2], and so the accumulated uncertainty grows with each new step. This condition is summarized in the next Figure.
The left picture illustrates the way a GCM projects future climate in a step-wise fashion. [6] The white rectangles represent a climate propagated forward through a series of time steps. Each prior climate includes the variables that calculationally evolve into the climate of the next step. Errors in the prior conditions, uncertainties in parameterized quantities, and limitations of theory all produce errors in projected climates. The errors in each preceding step of the evolving climate calculation propagate into the following step.
This propagation produces the growth of error shown in the right side of the figure, illustrated with temperature. The initial conditions produce the first temperature. But errors in the calculation produce high and low uncertainty bounds (e_T1, etc.) on the first projected temperature. The next calculational step produces its own errors, which add in and expand the high and low uncertainty bounds around the next temperature.
When the error is systematic, prior uncertainties do not diminish like random error. Instead, uncertainty propagates forward, producing a widening spread of uncertainty around time-stepped climate calculations. The wedge on the far right summarizes the way propagated systematic error increases as a step-wise calculation is projected across time.
When the uncertainty due to the growth of systematic error becomes larger than the physical bound, the calculated variable no longer has any physical meaning. In the Figure above, the projected temperature would become no more meaningful than a random guess.
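To make the wedge concrete, here is a minimal numerical sketch. It assumes, for illustration only, a constant per-step temperature uncertainty of (+/-)3.4 C, the single-step value derived below from the CMIP5 cloudiness error; with identical per-step errors the root-sum-square reduces to (+/-)3.4 C × sqrt(n).

```python
# Root-sum-square growth of a constant systematic per-step uncertainty
# across a step-wise projection: u(n) = sqrt(n) * per_step.
per_step_C = 3.4  # assumed constant per-year temperature uncertainty, in C

for years in (1, 10, 30, 62):
    u = (years * per_step_C**2) ** 0.5  # sqrt of n identical squared errors
    print(f"after {years:2d} years: +/-{u:4.1f} C")
```

After 62 years, the 1958-2020 span of the Hansen projection discussed below, this illustration gives about (+/-)27 C, close to the roughly (+/-)25 C bars quoted there (the exact propagated value depends on the scenario forcings).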
With that in mind, the CMIP5 average error in global cloudiness feedback can be propagated into a time-wise temperature projection. This will show the average uncertainty in projected future air temperature due to the errors GCMs make in cloud feedback.
Here, then, is the CMIP5 12.4% average systematic cloudiness error propagated into everyone’s favorite global temperature projection: Jim Hansen’s famous doomsday Figure, the one he presented in his 1988 Congressional testimony. The error bars were calculated using the insight provided by my high-fidelity GCM temperature projection emulator, as previously described in detail (892 kB pdf download).
The GCM emulator showed that, within GCMs, global surface air temperature is merely a linear function of net GHG W/m^2. The uncertainty in calculated air temperature is then just a linear function of the uncertainty in W/m^2. The average GCM cloudiness error was propagated as:
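In the notation defined just below, and assuming each year’s (+/-)3.4 Watts/m^2 forcing uncertainty enters through the emulator’s linear slope of air temperature against forcing, the propagation takes the standard root-sum-square form:

uncertainty in Air temperature after year n = (+/-)sqrt{ sum over i = 1 to n of [ (d Air temperature/d Total forcing) × 3.4 W/m^2 ]_i^2 }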
“Air temperature” is the surface air temperature in Celsius. Air temperature increments annually with increasing GHG forcing. “Total forcing” is the forcing in W/m^2 produced by CO2, nitrous oxide, and methane in each of the “i” projection years. Forcing increments annually with the levels of GHGs. The equation is the standard propagation of systematic errors (very nice lecture notes here (197 kb pdf)). And now, the doomsday prediction:
The Figure is self-explanatory. Lines A, B, and C in part “a” are the projections of future temperature as Jim Hansen presented them in 1988. Part “b” shows the identical lines, but now with uncertainty bars propagated from the CMIP5 12.4% average systematic global cloudiness error. The uncertainty increases with each annual step. It is here safely assumed that Jim Hansen’s 1988 GISS model was not up to CMIP5 standards. So, the uncertainties produced by CMIP5 model inaccuracies are a true minimum estimate of the uncertainty bars around Jim Hansen’s 1988 temperature projections.
The large uncertainty by year 2020, about (+/-)25 C, has compressed the three scenarios so that they all appear to nearly run along the baseline. The uncertainty of (+/-)25 C does not mean I’m suggesting that the model predicts air temperatures could warm or cool by 25 C between 1958 and 2020. Instead, the uncertainty represents the pixel size resolvable by a CMIP5 GCM.
Each year, the pixel gets larger, meaning the resolution of the GCM gets worse. After only a few years, the view of a GCM is so pixelated that nothing important can be resolved. That’s the meaning of the final (+/-)25 C uncertainty bars. They mean that, at a distance of 62 years, the CMIP5 GCMs cannot resolve any air temperature change smaller than about 25 C (on average).
Under Jim Hansen’s forcing scenario, after one year the (+/-)3.4 Watts/m^2 cloudiness error already produces a single-step pixel size of (+/-)3.4 C. In short, CMIP5-level GCMs are completely unable to resolve any change in global surface air temperature that might occur at any time, due to the presence of human-produced carbon dioxide.
Likewise, Jim Hansen’s 1988 GISS model was unable to predict anything about future climate. Neither can his 2012 version. The huge uncertainty bars make his 1988 projections physically meaningless. Arguing about which of them best tracks the global anomaly trend is like arguing about angels on heads of pins. Neither argument has any factual resolution.
Similar physical uncertainty bars should surround any GCM ensemble projection of global temperature. However, the likelihood of seeing them in the AR5 (or in any peer-reviewed climate journal) is vanishingly small.
Regardless, the CMIP5 climate models are unable to predict global surface air temperature even one year in advance. Their projections have no particular physical meaning and are entirely unreliable.
Following on from the title of this piece: there will be no reason to believe any, repeat any, CMIP5-level future climate projection in the AR5.
Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.
Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about. Recent pronouncements about human culpability should be seen in that light.
The verdict against the IPCC is identical: they do not know what they’re talking about.
Does anyone think that will stop them talking?
References:
1. Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117, D14105.
2. Covey, C., et al., An overview of results from the Coupled Model Intercomparison Project. Global Planet. Change, 2003. 37, 103-133.
3. Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18, 237-273.
4. AchutaRao, K., et al., Report UCRL-TR-202550. An Appraisal of Coupled Climate Model Simulations, D. Bader, Editor. 2004, Lawrence Livermore National Laboratory: Livermore.
5. Gates, W.L., et al., An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). Bull. Amer. Met. Soc., 1999. 80(1), 29-55.
6. Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Proceedings of the 35th Intersociety Energy Conversion Engineering Conference and Exhibit (IECEC). 2000, IECEC: Las Vegas. p. 1026-1031.
Climate Model Abbreviations
BCC: Beijing Climate Center, China.
CCCMA: Canadian Centre for Climate Modeling and Analysis, Canada.
CNRM: Centre National de Recherches Météorologiques, France.
CSIRO: Commonwealth Scientific and Industrial Research Organization, Queensland Australia.
GFDL: Geophysical Fluid Dynamics Laboratory, USA.
GISS: Goddard Institute for Space Studies, USA.
INM: Institute for Numerical Mathematics, Russia.
IPSL: Institut Pierre Simon Laplace, France.
MIROC: U. Tokyo/Nat. Ins. Env. Std./Japan Agency for Marine-Earth Sci.&Tech., Japan.
NCAR: National Center for Atmospheric Research, USA.
NCC: Norwegian Climate Centre, Norway.
UKMO: Met Office Hadley Centre, UK.
In their own words: “climate models trump data” (the big lie)
“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”
– Prof. Chris Folland,
Hadley Centre for Climate Prediction and Research
“The models are convenient fictions that provide something very useful.”
– Dr David Frame,
climate modeler, Oxford University
* Source: http://www.green-agenda.com
Terry Oldberg, you wrote that, “when you claim that ‘the probability distribution is the information,’ your claim is inconsistent with the information theoretic definition of ‘information.’”
But my claim is not inconsistent with the definition of information within science. And science, after all, provides the full context of any discussion concerning the meaning of predictions and projections made by climate scientists.
As we all know, meaning (information) in science arises from the inter-relation of falsifiable theory and replicable result, where “result” is the objective outcome of an experiment or an observation. Information in science does not come from properly defined logical premises, alone.
I don’t doubt you’re right that information theoretic logic produces a result that looks exactly like the second law of thermodynamics. But consider: in the absence of referential objective data, that information theoretic outcome is no more than philosophy, and its force is limited to evaluating the logic of syllogisms.
As you noted, the second law “was first discovered in empirical data.” The second law remains irremediably embedded in empirical data. Any mathematical derivation of the second law takes its meaning in science from the inter-relation of the derived expression and the relevant empirical data. Either part alone, derived expression or empirical data, is physically meaningless. Only the outcome of the relational interplay has meaning in science.
As a theory of science, the second law remains open to refutation and falsification by experiment. As a deductive conclusion within an axiomatic Information Theory, the analogical second law expression remains forever true, regardless of experiment. This distinction — open to falsification vs. forever true — completely demarcates Thermodynamics from Information Theory, and completely distinguishes the meaning of the second law in science from the same expression in information theory.
Whatever meaning the derived second law expression has within information theory, within science that meaning is conditional upon the feedback from empirical outcomes and the explanatory coherence of the second law within the greater context of relevant physical theory.
Any part of the information theoretic meaning of the second law equation that does not map exactly onto physical meaning is irrelevant to science. With respect to science, at best, any extra-scientific information theoretic meaning might constitute a scientific hypothesis that must wait upon an empirical test for its validity.
As “information” is not a physical observable and has no observable consequences, “information” makes no appearance in the equations of Thermodynamics, has no physical meaning, and can make no contribution to Thermodynamic explanations. That doesn’t mean people will not be seduced by the power of analogy into supposing a congruence.
One may find that non-scientific information theoretic meanings are useful in assessing the logic and coherence of proposed scientific or engineering programs. That utility, however, does not validate any claim that the meanings of information theory can be directly imported unchanged into science itself. Nor can that utility support a claim that meaning in science is conditioned upon information theory.
I appreciate the power and coherence of your thinking, Terry. But the grounding of physical meaning in observation forever frees science from axiomatic systems.
Also, the non-axiomatic, observational basis of science frees it from any constraint implied by Gödel’s incompleteness theorem.