The Overselling of Climate Modeling Predictability on Multi-Decadal Time Scales in the 2013 IPCC WG1 Report – Annex 1 Is Not Scientifically Robust

Guest essay by Roger A. Pielke Sr.

Introduction

I have posted in the past on why the development of multi-decadal regional climate projections (predictions) for policymakers and the impacts communities is a huge waste of time and resources; e.g., see

The Huge Waste Of Research Money In Providing Multi-Decadal Climate Projections For The New IPCC Report

Today, I want to discuss this issue in relation to the Working Group I Contribution to the IPCC Fifth Assessment Report Climate Change 2013: The Physical Science Basis

The 2013 WG1 IPCC Report – Chapter 11 and Annex 1

Projections are presented in Annex 1:

http://www.ipcc-wg1.unibe.ch/guidancepaper/WG1AR5_AnnexI-Atlas.pdf

and

http://www.climatechange2013.org/images/uploads/WGIAR5_WGI-12Doc2b_FinalDraft_AnnexI.pdf

This is titled

Annex I: Atlas of Global and Regional Climate Projections

This Atlas is founded on the information provided in Chapter 11 of the IPCC WG1 report, titled

“Near-term Climate Change: Projections and Predictability”

Click to access WG1AR5_Chapter11_FINAL.pdf

Multi-decadal regional climate projections obviously cannot be any better than shorter-term (i.e., “near-term”) projections (e.g., decadal), since decadal time periods make up the longer period. The level of skill achieved on decadal time scales must therefore be the upper limit on what is obtainable for longer time periods.

As written in Chapter 11


Climate scientists distinguish between decadal predictions and decadal projections. Projections exploit only the predictive capacity arising from external forcing.

Projections, then, are simply model sensitivity simulations. Because they ignore internal climate dynamics, presenting them to the impacts communities as scenarios grossly overstates what they actually provide. They are useful only for improving our understanding of a subset of climate processes. Presenting results from them in the IPCC report without emphasizing this important limitation is not honest communication.

The issue of how the climate model results are presented should bother everyone, regardless of one’s view on the importance of greenhouse gases in the atmosphere.

Chapter 11, in contrast to Annex 1, is fortunately an informative chapter of the 2013 IPCC WG1 report that provides a scientific summary regarding predictability, although its discussion of the uncertainties in the “external climate forcings” and of skill is incomplete (e.g., see http://pielkeclimatesci.files.wordpress.com/2009/12/r-354.pdf).

The chapter’s focus is described this way:

This chapter describes current scientific expectations for ‘near-term’ climate. Here ‘near term’ refers to the period from the present to mid-century, during which the climate response to different emissions scenarios is generally similar. Greatest emphasis in this chapter is given to the period 2016–2035, though some information on projected changes before and after this period (up to mid-century) is also assessed.

Skilful multi-annual to decadal climate predictions (in the technical sense of ‘skilful’ as outlined in 11.2.3.2 and FAQ 11.1) are being produced although technical challenges remain that need to be overcome in order to improve skill.

Some important extracts from the chapter are [highlight added]

Near-term prediction systems have significant skill for temperature over large regions (Figure 11.4), especially over the oceans (Smith et al., 2010; Doblas-Reyes et al., 2011; Kim et al., 2012; Matei et al., 2012b; van Oldenborgh et al., 2012; Hanlon et al., 2013). It has been shown that a large part of the skill corresponds to the correct representation of the long-term trend (high confidence) as the skill decreases substantially after an estimate of the long-term trend is removed from both the predictions and the observations (e.g., Corti et al., 2012; van Oldenborgh et al., 2012; MacLeod et al., 2013).

The skill in hindcasting precipitation over land (Figure 11.6) is much lower than the skill in hindcasting temperature over land.

The skill of extreme daily temperature and precipitation in multi-annual time scales has also been assessed (Eade et al., 2012; Hanlon et al., 2013). There is little improvement in skill with the initialization beyond the first year, suggesting that skill then arises largely from the varying external forcing. The skill for extremes is generally similar to, but slightly lower than, that for the mean.
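
As an aside, the detrending test described in the first extract above is easy to illustrate. Below is a minimal sketch with synthetic data (my illustration; not CMIP output and not the IPCC's verification code): two series share a linear warming trend but have independent year-to-year variability, so the raw correlation looks skillful while the detrended correlation collapses.

```python
# Minimal sketch of trend-dominated "skill" (synthetic data, for illustration).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2010)

# Observations and "hindcast" share the same 0.02 C/yr trend but have
# independent internal variability, i.e., the hindcast has no real skill
# beyond reproducing the trend.
trend = 0.02 * (years - years[0])
obs = trend + rng.normal(0.0, 0.15, years.size)
hindcast = trend + rng.normal(0.0, 0.15, years.size)

def detrend(x, t):
    """Remove an ordinary least-squares linear trend from series x."""
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

r_raw = np.corrcoef(obs, hindcast)[0, 1]
r_det = np.corrcoef(detrend(obs, years), detrend(hindcast, years))[0, 1]
print(f"correlation, raw:       {r_raw:+.2f}")  # high, from the shared trend
print(f"correlation, detrended: {r_det:+.2f}")  # near zero: no residual skill
```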

As part of Chapter 11, there is a section on Frequently Asked Questions. I have extracted excerpts from FAQ 11.1, which is titled

If You Cannot Predict the Weather Next Month, How Can You Predict Climate for the Coming Decade?

Excerpts read [highlight added]:

Climate scientists do not attempt or claim to predict the detailed future evolution of the weather over coming seasons, years or decades. Meteorological services and other agencies … have developed seasonal-to-interannual prediction systems that enable them to routinely predict seasonal climate anomalies with demonstrable predictive skill. The skill varies markedly from place to place and variable to variable. Skill tends to diminish the further the prediction delves into the future and in some locations there is no skill at all. ‘Skill’ is used here in its technical sense: it is a measure of how much greater the accuracy of a prediction is, compared with the accuracy of some typically simple prediction method like assuming that recent anomalies will persist during the period being predicted.

Weather, seasonal-to-interannual and decadal prediction systems are similar in many ways (e.g., they all incorporate the same mathematical equations for the atmosphere, they all need to specify initial conditions to kick-start predictions, and they are all subject to limits on forecast accuracy imposed by the butterfly effect). However, decadal prediction, unlike weather and seasonal-to-interannual prediction, is still in its infancy. Decadal prediction systems nevertheless exhibit a degree of skill in hindcasting near-surface temperature over much of the globe out to at least nine years. A ‘hindcast’ is a prediction of a past event in which only observations prior to the event are fed into the prediction system used to make the prediction. The bulk of this skill is thought to arise from external forcing. ‘External forcing’ is a term used by climate scientists to refer to a forcing agent outside the climate system causing a change in the climate system. This includes increases in the concentration of long-lived greenhouse gases.

Theory indicates that skill in predicting decadal precipitation should be less than the skill in predicting decadal surface temperature, and hindcast performance is consistent with this expectation.

Finally, note that decadal prediction systems are designed to exploit both externally forced and internally generated sources of predictability. Climate scientists distinguish between decadal predictions and decadal projections. Projections exploit only the predictive capacity arising from external forcing. While previous IPCC Assessment Reports focussed exclusively on projections, this report also assesses decadal prediction research and its scientific basis.
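
To make the technical definition of ‘skill’ quoted above concrete, here is a minimal sketch of a mean-square-error skill score measured against a persistence reference (the numbers are invented for illustration and come from no real forecast system):

```python
# Skill score in the FAQ 11.1 sense: improvement over a simple reference
# method, here persistence of the last observed anomaly. Illustrative only.
import numpy as np

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def skill_score(forecast, reference, observed):
    """1 = perfect, 0 = no better than the reference, < 0 = worse."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

observed    = [0.12, 0.18, 0.05, 0.22, 0.30]  # verifying anomalies
persistence = [0.10] * 5                      # last observed anomaly, persisted
forecast    = [0.11, 0.15, 0.09, 0.19, 0.26]  # the prediction system's output

print(f"skill vs persistence: {skill_score(forecast, persistence, observed):+.2f}")
```

A score near zero or below would mean the prediction system adds nothing over the trivial baseline, which is the sense in which “no skill at all” is used in the FAQ.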

What is remarkable about this chapter is that the authors now recognize that skillful predictions are very difficult even out to a decade. Only skill in hindcasting near-surface temperature over much of the globe out to at least nine years has been emphasized. Skillful multi-decadal projections must be even more challenging.

Yet, Annex 1 provides detailed regional projections decades out into the future. It is

Annex I: Atlas of Global and Regional Climate Projections

http://www.ipcc-wg1.unibe.ch/guidancepaper/WG1AR5_AnnexI-Atlas.pdf

I have excerpted text from this Annex that explains what is provided (i.e., detailed regional multi-decadal climate projections):

Annex I: Atlas of Global and Regional Climate Projections is an integral part of the Working Group I contribution to the IPCC Fifth Assessment Report, Climate Change 2013: The Physical Science Basis. It will provide comprehensive information on a selected range of variables (e.g., temperature and precipitation) for a few selected time horizons (e.g., 2020, 2050, and 2100) for all regions and, to the extent possible, for the four basic RCP scenarios.

However, there is a fundamental flaw in the creation of Annex 1, and thus in any papers and studies on future climate impacts that build on it. Despite the widespread use of these model results, the activity is fundamentally flawed.

For this approach to be robust for impact studies, the model results (when tested in hindcast) must show skill not only in replicating the current climate (which is tested by comparison with reanalyses, where the climate model is NOT forced at the lateral boundaries or nudged toward the reanalyses), but also in predicting CHANGES in regional climate statistics. This latter requirement must be satisfied before the models can be accepted as robust projection (prediction) tools.

Necessary and Sufficient Tests of Model Prediction (Projection) Skill

To summarize

· The ability of the climate model (the GCM itself, or a higher-resolution regional model downscaled from it) to skillfully reproduce current regional climate statistics is a NECESSARY first condition.

· The REQUIRED condition is that the models must show, in hindcast runs, skill at predicting CHANGES in regional climate statistics; a minimal sketch of such a test follows below.
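
To illustrate what a test of the second condition could look like, here is a minimal sketch (my construction, with synthetic numbers standing in for real regional statistics): score the model's hindcast changes between two past periods against the observed changes, across many regions.

```python
# Sketch of a change-skill test: does the model predict the CHANGE in a
# regional climate statistic between two past periods? Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_regions = 40

# Observed change in some regional statistic (e.g., mean seasonal rainfall)
# between, say, 1961-1990 and 1981-2010 (invented numbers).
obs_change = rng.normal(0.0, 1.0, n_regions)

# A model can reproduce the mean climate (the NECESSARY condition) and still
# produce hindcast changes uncorrelated with the observed ones:
model_change = rng.normal(0.0, 1.0, n_regions)

r = np.corrcoef(obs_change, model_change)[0, 1]
print(f"predicted vs observed regional changes: r = {r:+.2f}")
# Near-zero r means no change-skill, whatever the mean-state fidelity.
```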

A common mistake is to assume that one can use reanalyses to assess model prediction skill for the future. However, as discussed, for example, in the paper

Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008

using reanalyses to drive a model places a real world constraint on the results which does not exist when the multi-decadal climate models are run for the future decades (and indeed, lateral boundary conditions and nudging from the reanalyses must not be used in true hindcast tests of model skill). This issue is discussed in the paper

Pielke Sr., R.A. 2013: Comments on “The North American Regional Climate Change Assessment Program: Overview of Phase I Results.” Bull. Amer. Meteor. Soc., 94, 1075-1077, doi: 10.1175/BAMS-D-12-00205.1.

As discussed above, unless the global climate model (dynamically and/or statistically downscaled) can be shown to skillfully predict the current climate on regional scales when run over multi-decadal time scales in hindcast mode, it cannot be accepted as a faithful representation of the real-world climate.

Examples of IPCC Model Shortcomings

Multi-decadal global model predictions, in hindcast runs, however, have major shortcomings even with respect to the current climate! Peer-reviewed examples of these shortcomings, as summarized in the Preface to

Pielke Sr., R.A., Editor in Chief, 2013: Climate Vulnerability: Understanding and Addressing Threats to Essential Resources, 1st Edition. J. Adegoke, F. Hossain, G. Kallos, D. Niyogi, T. Seastedt, K. Suding, C. Wright, Eds., Academic Press, 1570 pp. [http://pielkeclimatesci.files.wordpress.com/2013/05/b-18preface.pdf]

are

Taylor et al., 2012: Afternoon rain more likely over drier soils. Nature, doi:10.1038/nature11377. Published online 12 September 2012

“…the erroneous sensitivity of convection schemes demonstrated here is likely to contribute to a tendency for large-scale models to `lock-in’ dry conditions, extending droughts unrealistically, and potentially exaggerating the role of soil moisture feedbacks in the climate system.”

Driscoll, S., A. Bozzo, L. J. Gray, A. Robock, and G. Stenchikov (2012), Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions, J. Geophys. Res., 117, D17105, doi:10.1029/2012JD017607. published 6 September 2012.

“The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings.”

Fyfe, J. C., W. J. Merryfield, V. Kharin, G. J. Boer, W.-S. Lee, and K. von Salzen (2011), Skillful predictions of decadal trends in global mean surface temperature, Geophys. Res. Lett.,38, L22801, doi:10.1029/2011GL049508

”….for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases.”

Xu, Zhongfeng and Zong-Liang Yang, 2012: An improved dynamical downscaling method with GCM bias corrections and its validation with 30 years of climate simulations. Journal of Climate 2012 doi: http://dx.doi.org/10.1175/JCLI-D-12-00005.1

”…the traditional dynamic downscaling (TDD) [i.e., without tuning] overestimates precipitation by 0.5-1.5 mm d-1…..The 2-year return level of summer daily maximum temperature simulated by the TDD is underestimated by 2-6°C over the central United States-Canada region”.

Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110

“…. local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.”

Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532.

“…models produce precipitation approximately twice as often as that observed and make rainfall far too lightly…..The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system …….little skill in precipitation [is] calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer‐scale resolution has little foundation and relevance to the real Earth system.”

Sun, Z., J. Liu, X. Zeng, and H. Liang (2012), Parameterization of instantaneous global horizontal irradiance at the surface. Part II: Cloudy-sky component, J. Geophys. Res., doi:10.1029/2012JD017557, in press.

“Radiation calculations in global numerical weather prediction (NWP) and climate models are usually performed in 3-hourly time intervals in order to reduce the computational cost. This treatment can lead to an incorrect Global Horizontal Irradiance (GHI) at the Earth’s surface, which could be one of the error sources in modelled convection and precipitation. …… An important application of the scheme is in global climate models….It is found that these errors are very large, exceeding 800 W m-2 at many non-radiation time steps due to ignoring the effects of clouds….”

Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink, Matthew Collins and Wilco Hazeleger, 2012: SST and circulation trend biases cause an underestimation of European precipitation trends Climate Dynamics 2012, DOI: 10.1007/s00382-012-1401-5

“To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones. These mismatches are responsible for a large part of the misrepresentation of precipitation trends in climate models. The causes of the large trends in atmospheric circulation and summer SST are not known.”

I could go on with more examples. However, it is clear that the climate models underlying Annex 1 are not robust tools for predicting future climate conditions.

Annex 1 of the 2013 IPCC WG1 report, therefore, is fundamentally flawed, as it is based on multi-decadal climate model results that have not shown skill at faithfully replicating most of the basic climate dynamics, such as major atmospheric circulation features, even in the current climate. The models have also shown no skill at predicting CHANGES in regional climate statistics to the accuracy required for impact studies.

Views by Others

Now, in closing, I have extracted text below from separate e-mails from two major, well-known figures in the climate area. Both accept that CO2 is the dominant climate forcing, that man is responsible, and that urgent action is needed. These quotes, from e-mails in my possession, show that despite the topics on which we disagree, they presumably would agree with me on the gross inadequacies of Annex 1 of the IPCC WG1 report.

The relevant part of the first e-mail reads

“It is also worth pointing out that neither initialised decadal predictions nor RCMs are the entirety of what can be said about regional climate change in the next few decades – and in fact, it is arguable whether either add anything very much. ;-)”

The second e-mail reads

“The climate effects are largely warming; I cannot say today’s climate models can tell us much more with any certainty. I will add that there is probably poleward and continental intensification of the warming. One further feature that appears to be robust is the movement of the storm belts in both hemispheres polewards. This has implications for the general circulation and in particular the climatology of precipitation intensity and variability (i.e., drought and flood). This poleward shift is seen in the data. There is some model evidence that over the next century ozone recovery could cancel some of this poleward shift in the Southern Hemisphere, but probably not in the NH. I think this is about as far as we can go in forecasting climate over the next 50-100 years. Of course, sea level follows naturally from thermal expansion of the water as well as land ice melting. I think there is very little information (above noise) beyond the above in global climate models at the regional level, even including downscaling.”

Since these individuals have been publicly silent on the value of multi-decadal regional climate model predictions, I feel compelled to communicate that my flagging of the failure of this approach for the impacts and policymaker communities is shared even by some within the IPCC community.

Recommendation for Responding to this IPCC Deficiency

My recommendation is that when you hear or read of climate projections over multi-decadal time periods, you ask the authors:

What is the quantitative skill of the models used in predicting their projected CHANGES? In other words, what is the predictive skill with respect to the climate metrics that are of importance to a particular impact?

If they cannot present quantitative evidence of such skill, they are presenting their studies inappropriately. Annex 1 of the 2013 IPCC WG1 report, therefore, still needs an honest demonstration of the skill (if any) of its projections as part of a complete assessment of the state of climate science.


106 Comments
DirkH
February 8, 2014 4:40 pm

ferdberple says:
February 8, 2014 at 2:32 pm
“So you can see, no climate model can predict the future better than the climate scientists themselves, because if the scientists don’t believe the computer model, they will change the model until they do believe it.”
Possible. But this is all so basic that they can’t possibly plead ignorance. I just wanted to hear what Dr. Pielke has to say. I might have missed attempts at validation that were actually competent, but it looks like I didn’t. Not that I’m surprised. The best way to avoid disappointments is to operate with low expectations.

February 8, 2014 9:45 pm

February 7, 2014 at 12:43 pm | Steven Mosher says:
—-
Bumpkin! When you can provide real, and I mean REAL, observed data to support your excuse then you’ll be taken seriously. Until then politicians and activists, including you, have no right to impose your anti-human policies, based on nothing but suspicion, to achieve power and control over the citizenry.
You’re all just greedily chasing down whatever shekels you can … you’ll do and say anything to get your flat snouts into the public trough. Impress me, get out there and do some manual work because intellectually you are a collection of frauds.

Michel
February 9, 2014 1:33 am

On hindcasting and forecasting.
Any model can be partially validated and fully invalidated by hindcasting. A bias in the validating process, if fairly described, is not difficult to debunk.
Blind or double blind experiments are no model validation but rather enable unbiased raw data discovery, as e.g. in drug efficacy testing.
And if the model fails to hindcast then there is a probability bordering certainty that it will be inapt (show no skill) to forecast. In this case validation will have failed. Aren’t the models used by IPCC of this kind?
However, if hindcasting is successful it is not certain that forecasting will be valid, but confidence on this will increase. But only time will tell if a forecast will have been accurate.
Hindcasting is a necessary but not sufficient condition to fully validate a model.

February 9, 2014 1:58 am

“collection of frauds”
You are much too kind. What they have done goes way beyond fraud. Start with the advocacy of the totally unnecessary spending of uncounted and fraudulently confiscated billions. Continue with the suffering and death due to fuel poverty caused by that spending and associated rules and regulations. Then finish with the totally unnecessary prohibition of cheap (fossil fuel) energy for both the developed and the undeveloped nations. It adds up to causing the suffering and deaths of uncounted millions of people who would otherwise be alive and contributing to the well being of themselves, their families, their communities, and the rest of the people of the earth.
In a word, no matter what their stated intentions they are EVIL to the core. This because of the inevitable consequences of their words and deeds. Even a cursory examination of history would have demonstrated those consequences would occur and they did what they did anyway. They are not innocent in the matter! That they willfully evaded these facts only adds to their crime against the rest of us.

ferdberple
February 9, 2014 8:17 am

Michel says:
February 9, 2014 at 1:33 am
A bias in the validating process, if fairly described, is not difficult to debunk. Blind or double blind experiments are no model validation but rather enable unbiased raw data discovery, as e.g. in drug efficacy testing.
===============
The debunker introduces their own bias. This does not cancel existing bias. Validation requires careful experimental design because it is also a form of data discovery. You are seeking to discover if the model is valid. So, for example, best practice in business requires that computer software validation be done independently of the developers, with (experimental) controls to ensure the validators’ results match what actually happened, not what they expected to happen.

Michel
Reply to  ferdberple
February 9, 2014 8:42 am

saying “The debunker introduces their own bias. …”
You are right!
But we shall remember that no model can be evaluated in a blind fashion by an “unexpert” having no idea about the subject matter. And it would be quite costly to have to redo the work of the modeller.
What is needed is a clear and understandable description of the validation (or invalidation) process. Impossible to get with dishonest people, I know.
Models are no experiments. They are deterministic constructs that generate computed data that can then be compared with actual observations. If validated, and within a limited scope, they can be used, with much caution, for forecasting or for engineering designs.
This kind of critical model evaluation and use has not been presented or discussed by IPCC although, as the ultimate reviewer having the blessing of the international community, this institution is expected to have done this rather than to speculate on the meaning of various model outputs.

jose
February 9, 2014 8:35 am

I wonder how many people died because of the combination of extreme winter cold and fuel poverty in Great Britain during the last 10 years, and contrast that with how many died because of extreme heat in summer. The question “will there be more heat waves?” may be the wrong question.

ferdberple
February 9, 2014 8:38 am

Michel says:
February 9, 2014 at 1:33 am
And if the model fails to hindcast then there is a probability bordering certainty that it will be inapt (show no skill) to forecast. In this case validation will have failed. Aren’t the models used by IPCC of this kind?
=============
What if the models were indirectly trained using the hindcast data? This is easily done, even if the developers are not aware of it. For example:
I build a model, call it M1, and validate it against the hindcast. It performs poorly, so I as a developer manually adjust the response to aerosols and rename this model M2. I validate M2 against the hindcast and it now performs well, so I publish my M2 model.
This is how I understand all the IPCC climate models were built, and it is complete rubbish. It violates the experimental controls required to maintain the independence of the hindcast for validation. As soon as the developer uses the hindcast in any fashion to improve the performance of the model against the hindcast, the hindcast cannot be used for validation.
This is one of many reasons why the IPCC models have no skill at prediction. Software validation requires that neither the developers nor the software be permitted to view the validation test. These sorts of blind controls are essential to software validation; otherwise the computer may simply spew out the correct answer, unintentionally copied from the answer sheet in the validation test.
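
The leakage described in this comment is easy to demonstrate with a toy calculation. The sketch below (my illustration; a deliberately overfit polynomial stands in for repeated tuning, and nothing here describes how any actual GCM is built) shows why the error on the hindcast flatters a model tuned against it, while only unseen data gives a fair test.

```python
# Toy demonstration of hindcast leakage: tune freely against the "answer
# sheet", then compare in-sample error with error on genuinely unseen data.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(-1.0, 1.0, 30)
hindcast_obs = rng.normal(0.0, 1.0, t.size)  # the hindcast "answer sheet"
new_obs      = rng.normal(0.0, 1.0, t.size)  # data the tuning never saw

# "M1 -> M2 -> ...": keep adding adjustable parameters (polynomial
# coefficients here) until the fit to the hindcast looks good.
coeffs = np.polyfit(t, hindcast_obs, deg=10)
fitted = np.polyval(coeffs, t)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print(f"error on the hindcast it was tuned to: {mse(fitted, hindcast_obs):.2f}")
print(f"error on unseen data:                  {mse(fitted, new_obs):.2f}")
# The first number is flattering by construction; only the second is a test.
```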

ferdberple
February 9, 2014 8:55 am

Lionell Griffith says:
February 9, 2014 at 1:58 am
Then finish with the totally unnecessary prohibition of cheap (fossil fuel) energy for both the developed and the undeveloped nations.
=============
This is the ultimate goal of carbon legislation. To pay the third world leaders to keep their populations from realizing the benefits of low cost fossil fuel energy. The worry being that if everyone in the world had a living standard equal to the US, then the US standard of living would suffer.

Michel
February 9, 2014 9:01 am

saying : “What if the models were indirectly trained using the hindcast data?…”
Obviously models that would do that would just be correlation attempts, not models.
People trying to do this (as for example with ENSO oscillations) are not really modelling but only cherry-picking possible correlations.
The reason the “pause” since 1997 was not predicted by such non-models is exactly this conceptual error.
To be fair to such “modellers”, this task is so huge that sometimes some sets of parameters must be taken as an “output-to-input black box” without understanding its inner parts. Honesty then requires making these limitations explicit.
But if a mathematical model about aerosol is developed with equations describing the underlying physical-chemical phenomena, then it may improve previous models that did not take such things into account.

ferdberple
February 9, 2014 9:08 am

Michel says:
February 9, 2014 at 8:42 am
Models are no experiments.
==========
http://en.wikipedia.org/wiki/Computer_experiment
“For example, climate models are often used because experimentation on an earth sized object is impossible.”

ferdberple
February 9, 2014 9:28 am

Michel says:
February 9, 2014 at 9:01 am
Obviously models that would do that would just be correlation attempts, not models.
===========
My understanding is that all climate models used by the IPCC fall into this category, yet the model builder and the IPCC all refer to them as models. It is my understanding that the IPCC climate model relies on tunable parameters to improve its fit to the hindcast. And that the values of these tunable parameters vary widely from model to model, and thus cannot be based on known physics.
The IPCC recognized in their second report that predicting climate from first principles was impossible. So they changed the name to projections. But they still present the results as though they were predictions. They claim that there will be warming because the models show warming, and cite this as evidence.
More importantly, the scientists involved continue to promote the myth that climate models show us where climate is headed in the future. If the models cannot predict the future climate, how can their results be used as evidence of future climate, unless it is via scientific fraud?

Michel
February 9, 2014 9:36 am

OK ferdberple, failed validation, we agree!
Who else in the IPCC fan club?

richardscourtney
February 9, 2014 9:51 am

Ferdberple
At February 9, 2014 at 9:28 am you say

The IPCC recognized in their second report that predicting climate from first principles was impossible. So they changed the name to projections. But they still present the results as though they were predictions. They claim that there will be warming because the models show warming, and cite this as evidence.
More importantly, the scientists involved continue to promote the myth that climate models show us where climate is headed in the future. If the models cannot predict the future climate, how can their results be used as evidence of future climate, unless it is via scientific fraud?

I agree that claims of such “predictions” are fr@ud, but the IPCC does claim the climate models do make predictions as well as projections.
The IPCC AR5 Glossary is here. It provides these IPCC definitions:

Climate prediction
A climate prediction or climate forecast is the result of an attempt to produce (starting from a particular state of the climate system) an estimate of the actual evolution of the climate in the future, for example, at seasonal, interannual or decadal time scales. Because the future evolution of the climate system may be highly sensitive to initial conditions, such predictions are usually probabilistic in nature. See also Climate projection, Climate scenario, Model initialization and Predictability.
Climate projection
A climate projection is the simulated response of the climate system to a scenario of future emission or concentration of greenhouse gases and aerosols, generally derived using climate models. Climate projections are distinguished from climate predictions by their dependence on the emission/concentration/radiative forcing scenario used, which is in turn based on assumptions concerning, for example, future socioeconomic and technological developments that may or may not be realized. See also Climate scenario.

Richard

ferdberple
February 9, 2014 9:55 am

Michel says:
February 9, 2014 at 9:01 am
People trying to do this (as for example with ENSO oscillations) are not really modelling but only cherry-picking possible correlations.
=============
On the contrary, pattern recognition is the only currently known method of providing a partially reliable forecast of the future state of chaotic systems.
The future state of chaotic systems cannot be calculated reliably from first principles using existing mathematics. The results diverge rather than converge. Thus the Climate modelling exercise using GCM’s as a starting point is a dead end for predicting future climate, unless and until new mathematical theories and methods are developed.
However, this does not mean climate prediction is hopeless. When we look at climate data it is not random. The human eye can see events repeating. Like intersecting wave trains on the ocean, the pattern is complex, but it has elements of predictability that require no understanding of the underlying process.
Nature operates in cycles because all linear trends must ultimately lead to extinction if they persist. These cycles lead to patterns, and these patterns lead to successful predictions.

February 9, 2014 9:59 am

Hi ferdberple – You wrote
“It is my understanding that the IPCC climate model relies on tunable parameters to improve its fit to the hindcast. And that the values of these tunable parameters vary widely from model to model, and thus cannot be based on known physics”
This is not a correct interpretation as to what they do.
The parameterizations (e.g., for long- and short-wave radiation, deep cumulus convection, etc.) are individually tuned against observations, theory and/or higher resolution models, but the total model is not tuned when run in hindcast. They may make multiple model runs, but this is not an approach to adjust their results. Indeed, they have systematic biases, as I gave examples of in my post.
I discuss how models are created in my book
Pielke Sr, R.A., 2013: Mesoscale meteorological modeling. 3rd Edition, Academic Press, 760 pp. http://store.elsevier.com/Mesoscale-Meteorological-Modeling/Roger-A-Pielke-Sr/isbn-9780123852373/
with respect to mesoscale models, but the same issues apply to climate models.
Roger

ferdberple
February 9, 2014 10:28 am

richardscourtney says:
February 9, 2014 at 9:51 am
Because the future evolution of the climate system may be highly sensitive to initial conditions, such predictions are usually probabilistic in nature.
==============
Thanks Richard, is there value in a probabilistic climate forecast?
For example, a forecast that in 30 years there is a 30% chance future climate will be hotter, a 30% chance it will be cooler, and a 40% chance it will be the same seems to me a reasonable forecast that no one can disprove. On that basis we devote 30% of our budget to preparing for hotter temps, 30% to preparing for cooler, and 40% to preparing for things to stay the same.
Now we might say, OK, take 10% from our budget and spend it to change the future climate (numbers chosen for easy math). And the climate models now say there is a 20% chance future climate will be hotter, a 30% chance it will be cooler, and a 50% chance it will be the same. So now we allocate our reduced budget according to these new probabilities. We spend 18% of our budget preparing for hotter temps, 27% preparing for cooler, and 45% preparing for things to stay the same.
What is interesting is that if we divide our budget according to a probabilistic forecast of climate change, we end up spending less in preparation for change, and more on the assumption climate will not change. It looks like the end result of preparing for climate change will be to leave us less prepared for climate change.
Which in many ways seems to fit with observations. Rather than spending money to make sea walls higher, we are instead spending money to raise fuel costs, which makes everything more expensive. We end up spending more money assuming things will stay the same, rather than spending the money assuming things will change.
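
For readers following along, the allocation arithmetic in the comment above checks out; here is a quick sketch using the commenter's own numbers:

```python
# Check of the budget-allocation arithmetic (numbers from the comment).
budget = 1.00
probs_before = {"hotter": 0.30, "cooler": 0.30, "same": 0.40}
probs_after  = {"hotter": 0.20, "cooler": 0.30, "same": 0.50}

remaining = budget - 0.10 * budget  # 10% diverted to changing the climate

for outcome in probs_before:
    before = budget * probs_before[outcome]
    after = remaining * probs_after[outcome]
    print(f"{outcome:>6}: {before:.2f} -> {after:.2f} of the original budget")
# hotter: 0.30 -> 0.18, cooler: 0.30 -> 0.27, same: 0.40 -> 0.45
# Less ends up allocated to preparing for change, more to the status quo.
```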

richardscourtney
February 9, 2014 11:00 am

ferdberple:
The conclusion of your post at February 9, 2014 at 10:28 am is

It looks like the end result of preparing for climate change will be to leave us less prepared for climate change.
Which in many ways seems to fit with observations. Rather than spending money to make sea walls higher, we are instead spending money to raise fuel costs, which makes everything more expensive. We end up spending more money assuming things will stay the same, rather than spending the money assuming things will change.

Yes. And we in the British West Country are suffering because precisely that policy has been adopted and imposed on us. Many of us not flooded have lost our essential rail link. And at this moment the army has been called in and is trying to stop the floods spreading into Bridgwater tonight.
You may have noticed my outrage at Gareth Phillips who has been promoting such harmful policies on several WUWT threads. I thought he was an eco-loon but on the ‘Black Swans’ thread it has been revealed he is a shill employed by the ‘Carbon Trading’ industry to spread disinformation and propaganda. This is what science has been perverted to provide.
The ‘chickens are coming home to roost’ and I fear the damage to the reputation of science will take generations to repair.
Richard

rgbatduke
February 9, 2014 11:37 am

The debunker introduces their own bias. This does not cancel existing bias. Validation requires careful experimental design because it is also a form of data discovery. You are seeking to discover if the model is valid. So, for example, best practice in business requires that computer software validation be done independently of the developers, with (experimental) controls to ensure the validators’ results match what actually happened, not what they expected to happen.
Oh yeah. So absolutely on the money true. You also cannot validate your model using the training set, which is precisely what climate models do, especially if, when they are applied to hindcast e.g. HADCRUT4 back to 1860 as they are in figure 9.8a of AR5, they spend the entire first half of the 20th century completely missing the initial cooling and subsequent rapid warming, a warming that causes HADCRUT4 from 1900 to 1950 to almost perfectly match HADCRUT4 from 1950 to 2000. But the CMIP5 Multiple-Model Ensemble mean does nothing of the kind — the individual models all spend thirty odd years significantly warmer than past reality and fail to correctly exhibit the natural variability of the actual climate on the entire interval outside of the training set (marked in brown on this diagram).
In actual fact, one doesn’t really need expensive stats consultants to reject most of the models in CMIP5 on a criterion of matching what has actually happened — CMIP5 models individually and collectively fail the pure eyeball test of acceptable agreement with the data outside of the training interval. There are only two intervals in which it is in decent agreement — before 1900 (where the uncertainty in the data itself is so large that it is meaningless to be in reasonable agreement) and in the single stretch from maybe 1940 to 1960 where temperatures were nearly flat. Indeed, if one eyeballs only the red MME mean and black HADCRUT4 across all 150+ years of figure 9.8a, the black curve lies above the red curve only a grand total of (being generous) 25 years out of over 150, and is never significantly above it outside of a single, meaningless spike back in circa 1875. Even the 1997-1998 super-ENSO that created a record high temperature spike barely managed to shove HADCRUT4 back to the running CMIP5 MME mean prediction and barely, transiently, past it. Even with this spike and consequent jump in mean surface temperature, box 9.2 of AR5 carefully explains that 111 out of 114 CMIP5 models are currently in significant disagreement with nature and offers three distinct, equally unprovable hypotheses to explain this “hiatus” while preserving the illusion that we should pay attention to the CMIP5 MME mean as if it has some meaning as the result of “validated” models.
What I just don’t understand is how anybody can buy this as “success” of the CMIP5 models, forget about their systematic divergence from reality almost from the minute they were loosed on the wild after being initialized on the 1961-1990 reference period. They simply do not work to explain the principal features of HADCRUT4 over the thermometric era. They also badly underestimate the role of natural variability, largely because success on the training set is touted as “validation of the model” instead of “being able to successfully build a model”, with that concept of “success” then extended back in time to span the 40-year “hiatus” in the early 20th century that was all purely natural variability, as greenhouse and aerosol forcing at the time were basically neutral.
rgb

ferdberple
February 9, 2014 11:39 am

Roger A. Pielke Sr. says:
February 9, 2014 at 9:59 am
This is not a correct interpretation as to what they do.
=========
Thank you Dr. Pielke,
Doesn’t model validation against the hindcast require that the model have no knowledge about the hidncast? Thus, if one can show that the model had opportunity to gain knowledge about the hindcast, doesn’t this in itself invalidate using the hindcast to validate the model?
The reason I ask is because it seems there are a number of avenues for models to gain knowlege of the hindcast, which calls into question the notion that hindcast can be used to validate climate models. For example.
1. Information about the hindcast passed from model to model by parameter transfer.
If Model A that has been validated against the hindcast, and a parameter value from Model A is used to set a paramter in Model B, then even if Model B has never seen the hindcast, it still has received information about the hindcast. This invalidates using the hindcast to validate models that have received parameter values from other models.
2. Information about the hindcast passed from validator to model by selection bias:
Computer code goes through revision after revision. Poor performing code dies off and code that performs well is retained. If the criteria used to make the selection is performance against the hindcast, this passes information about the hindcast on to succeeding generations of models. This invalidates using the hindcast to validate succeeding generations of model.
3. Information about the hindcast passed from developer to model by unconscious bias:
Nothing says a model that performs well against a hindcast is correct. It may well have offsetting errors. However, when a model performs well against the hindcast, there is resistance on the part of the developers to change. They want to believe the model is correct. This also provides the model with information about the hindcast, allowing errors to survive and potentially pass on to other models.

ferdberple
February 9, 2014 11:53 am

richardscourtney says:
February 9, 2014 at 11:00 am
You may have noticed my outrage at Gareth Phillips who has been promoting such harmful policies on several WUWT threads. I thought he was an eco-loon but on the ‘Black Swans’ thread it has been revealed he is a shill employed by the ‘Carbon Trading’ industry to spread disinformation and propaganda. This is what science has been perverted to provide.
===========
Ah, that explains it. His writing appeared knowledgeable yet goofy at the same time. I couldn’t decide if he was an avid reader new to the subject, or a modelling shill hoping to spread FUD.

rgbatduke
February 9, 2014 11:56 am

The future state of chaotic systems cannot be calculated reliably from first principles using existing mathematics. The results diverge rather than converge. Thus the Climate modelling exercise using GCM’s as a starting point is a dead end for predicting future climate, unless and until new mathematical theories and methods are developed.
Not quite. They are a dead end at the currently accessible spatiotemporal resolution, which is almost certainly too coarse-grained to come close to spanning the relevant quasiparticles (as it were; nonlinear large-scale structures, if you prefer a different terminology) that represent dissipative processes that might contribute and be assessable via the fluctuation-dissipation theorem and a study of spatiotemporal autocorrelation.
Also, chaotic system trajectories diverge in phase space but that doesn’t mean that they are like random walks and diverge to cover the space or wander without bound away from some neighborhood. The overall climate system is de facto limited by the constraint that it is an open system poised between the sun and outer space and manifestly has sufficient negative feedbacks to remain bounded in its thermal behavior across geological time, although the range of the bounds is large especially in glacier eras like the present one.
However, you are quite correct that the statistical analysis of model results in AR5 is abysmal and inexcusable. In section 9.2.2, AR5 openly admits in 9.2.2.2 and 9.2.2.3 that problems with any sort of collective statistical analysis of the incorrectly named “ensemble” of models “creates challenges for how best to make quantitative inferences of future climate”. What an understatement. And where is this “challenge” expressed as a degradation of confidence in MME-mean model predictions anywhere in the entire document? It’s like: “Sorry, we know perfectly well that averaging over a bunch of models with equal weight when the models aren’t independent, when they contain unequal numbers of model runs that are all going to be averaged in as if they have equal weight, and where lots of the individual models suck when they are compared to actual data, is unjustified, but we’re going to do it anyway and then treat the result as if it is an average of independent, identically distributed samples drawn from a compact distribution whenever we want to talk about “confidence” in the MME mean predictions”.
It truly is shameful. A critical reading of Chapter 9 in AR5 basically leads one to the conclusion that CMIP5 models are crap, we know it, and we’re going to use them anyway and pray that another super-ENSO comes along to save our asses or that we get to retire and draw our pensions before the hiatus stretches out to 20, 30, 40 years and the deviation is so great that we cannot even pretend that it isn’t there when making up the SPM.
Right.
rgb

February 9, 2014 12:14 pm

“The worry being that if everyone in the world had a living standard equal to the US, then the US standard of living would suffer.”
That would not be the case. What would happen is that the producers in the US would finally have to compete with the rest of the world on quality, quantity, performance, and price. If they can’t, they don’t deserve to improve their standard of living. As far as I am concerned, that is the way it should be. If you can’t compete, go to the bottom of the heap and start working to improve yourself. If you can’t or won’t, you have no one to blame but yourself.
No one owes you a livelihood or standard of living. You must earn it. If you can’t, you will have to depend upon voluntary charity from those who have.

ferdberple
February 9, 2014 12:24 pm

rgbatduke says:
February 9, 2014 at 11:56 am
the fluctuation-dissipation theorem
============
Dr Brown, thank you. I was not aware of this theorem. The progress made in science in the early 20th century still continues to astound me, with the same names appearing over and over. I wonder if modern science has lost some of this with our reliance on computers and numerical methods.

rgbatduke
February 9, 2014 2:25 pm

The fluctuation-dissipation theorem is (from what I can tell) one of the most underappreciated theorems of open systems in climate science:
http://en.wikipedia.org/wiki/Fluctuation-dissipation_theorem
and its still more general variations in the generalized theory of:
http://en.wikipedia.org/wiki/Non-equilibrium_thermodynamics
Fluctuation dissipation basically says that you can learn a lot about linear/exponential processes contributing to the internal dynamics driving a stable dynamical equilibrium by watching how the system responds to a “sudden” perturbation — a delta-correlated event such as Mount Pinatubo or the 1997-1998 ENSO — and then relaxes back to equilibrium. Sadly, it has limited reach because of its assumptions of steady state behavior in the mean. However, it is the subject of much interesting work including some reasonably contemporary work even in the field of climate science in the context of self-organized criticality, a hypothesis by Prigogine about how open systems self-organize (both pro and con — to maximize entropy production and minimize energy transfer or maximize energy transfer and minimize energy production or permutations thereof). Crude examples are the appearance and disappearance of convective rolls (self-organized structures) when heating a fluid from below and cooling it at the top. A lot of the complexity of the Navier-Stokes equation derives from the fact that these self-organized structures often have long lifetimes — to the extent where fluctuation dissipation can reasonably apply to the meso-scale — but can also “suddenly” transition to a completely different structure that is also locally “stable” in the same driving regime but that has very different e.g. energy/entropy transfer rates (one large convective roll to two, to ten, to turbulence). It’s this sort of thing that makes the mathematics difficult — mathematicians cannot currently even prove that general solutions to the Navier-Stokes equation always exist.
I’m not claiming to be able to put this to use at this instant in climate science, but it is very definitely one of the areas that “should” be explored, because it is intimately related to things like how large natural variability is compared to e.g. external forcing due to CO_2 increases in the climate system. If the climate system self-organizes to more efficiently lose heat when overdriven (negative feedback) that produces a significantly different future climate than what would result from self-organization to less efficiently lose heat when overdriven (positive feedback). Right now the assumption is overwhelmingly positive feedback, a multiplier of 2 to 5 over the CO_2-only warming one might expect. But this estimate isn’t really based on analysis of the variability of the self-organized quasiparticles or structures of the climate system, it is based on assuming asymmetric linear covariance between CO_2 and water vapor as an amplifying greenhouse gas. But as many people have suggested, water vapor feedback could easily be overall negative. Fluctuation-dissipation analysis of cloud cover might actually be able to tell us at the very least the sign.
rgb
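
For reference, one standard classical statement of the theorem discussed above (a generic textbook form; the symbols are not tied to any particular climate variable) relates the linear response of an observable to the equilibrium autocorrelation of its fluctuations:

```latex
% Classical fluctuation-dissipation theorem: the response R_{AB}(t) of
% observable A to a weak perturbation conjugate to B is determined by the
% equilibrium autocorrelation C_{AB}(t), with beta = 1/(k_B T).
\[
  R_{AB}(t) \;=\; -\,\beta \,\frac{\mathrm{d}}{\mathrm{d}t}\, C_{AB}(t),
  \qquad t > 0,
  \qquad C_{AB}(t) \;=\; \bigl\langle A(t)\, B(0) \bigr\rangle_{\mathrm{eq}} .
\]
```

In the climate context sketched in the comment, the appeal of such relations is that watching the relaxation after a delta-like perturbation (a Pinatubo, a super-ENSO) constrains the same internal dynamics that govern ordinary fluctuations.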

February 10, 2014 6:54 am

Hi ferdberple – You wrote
“Doesn’t model validation against the hindcast require that the model have no knowledge about the hindcast?”
Of course, that is ideal. But even short-term weather forecast models assimilate real-world observed data as it becomes available (e.g., 4-D data assimilation). This does give them improved results.
With respect to multi-decadal predictions in hindcast, even when they know how the weather patterns evolved over that time period, they still have major problems, as I exemplified in my post. Fully coupled climate models (ocean-atmosphere-land) do not assimilate data in those studies as the model is run, nor do they have any overarching tuning.
Roger Sr.