Initialization practices disqualify UN IPCC global circulation models from use for most climate change forecast purposes

Guest essay by Michael G. Wallace Hydroclimatologist, Albuquerque, NM

The United Nations Intergovernmental Panel on Climate Change (UN IPCC) features and endorses decadal global climate forecasting products. Those in turn feed into numerous downscaled regional climate forecasts. In published representations of forecast skill to date, everything within this collective appears to include the poorly disclosed practice of annual boundary-condition re-initializations. At one time, such initializations were confined to obscure attempts to improve forecasts of seasonal climate (lead time of about 4 months). Now the initializations have become institutionalized, and they direct multi-decadal predictions throughout the field of global and regional climate forecasting. The results would be more transparent to peers and the public if the true decadal history-matching skills of the models (without initializations) were highlighted.

Most would agree that global circulation models (GCMs) are primarily designed to address a range of global and regional moisture and temperature prediction objectives for time scales ranging from decades to centuries. These models are also now being re-purposed for multi-annual to decadal climate forecasts in numerous specific sub-regions of the planet.

GCMs are notably consumptive of national funding resources. It is possible that over $1.5 billion of US Federal funds have been spent directly toward GCM objectives over the past few decades. If GCMs are to continue consuming public resources at the current burn rate, they should also be expected to show worthy predictive skill. For that reason, and simply because they are publicly funded models, GCMs and their surrogates are obliged to provide transparent calibration documentation for every application.

No typical scientist or engineer would place significant trust in models that do not provide transparent calibration documentation. Many, including myself, would argue further that even models which do show skill may merit some trust, but cannot be trusted very far. How, then, can the premier United Nations Intergovernmental Panel on Climate Change (UN IPCC) GCMs provide neither model accuracy nor transparency in association with their forecasts?

This post is written to explore the surprisingly poor predictive skills, along with the equally poor calibration documentation of these GCMs. In introducing this topic, I primarily draw upon my recent experiences in comparing my own business’s climate results to these GCMs and their regional surrogates. In this effort I also learned of the UN IPCC’s unprecedented and opaque practice of replacing model results with undisclosed data. This practice is a new variation of standard model initializations.

Those familiar with my past encounters with ocean pH data (Wallace, 2015) may appreciate the irony. In the ocean pH case, vast amounts of observation data were replaced with undisclosed model output. In this GCM case, vast amounts of model output were replaced with undisclosed observation data.

The most recent set of technical and related publications (WG1 AR5) of the UN IPCC are the de facto references for the vetting of GCM models. A related, highly cited paper (Meehl et al., 2009) also frames the new re-purposing of the GCMs. According to the authors, the models have now achieved their primary purpose of validating greenhouse gases (GHGs) as the drivers of long-term warming. The paper explores how these GCMs will next be applied toward the benefit of societies by producing regionalized products over shorter time spans ranging from years to decades. The paper also introduces readers to the associated Coupled Model Intercomparison Project (CMIP) studies. These CMIP exercises constitute the primary calibration documentation (history-matching skill exercises) for the GCMs.

In part due to these information sources, many in the relevant resource management communities now likely rely to some extent on UN IPCC based decadal climate change forecasts. This domain of climate forecasting happens to also overlap the subject matter domains of my small climate forecasting business at www.abeqas.com.

My climate forecasting approach is largely regression based (RB), in contrast to the UN IPCC’s deterministic modeling strategy. In spite of that important difference, my products and the GCMs co-exist within the same general spectrum of climate change forecasts. Figure 1 (adapted from Meehl et al., 2009) shows some scales of predictability and related climate forecast targets. My firm’s related forecasting scales are annotated by the lowermost shaded band, which I added to the original figure.

Figure 1. MWA forecasting covers similar ranges to those of the newly repurposed UN IPCC models. Our approaches are based purely on lagged correlations to natural oscillatory forcings. Figure adapted from Meehl et al. 2009

 

Figures 2 and 3 are representative of some of my firm’s climate prediction products. In this example, we have forecast a mountain stream’s 5-year trailing average flow rate 3 to 6 years in advance. In Figure 2, I’ve charted a timeline (green solid line) of the annual average flow of this subject stream, via the observational record from the Pecos River gage near the town of Pecos in north central New Mexico (PnP). The open blue circles which overlie that timeline are my exercised predictions of stream flow, based on “ocean driver” information from several years prior.
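To make the lagged-correlation idea concrete, here is a minimal sketch in Python. The synthetic "ocean driver" series, the 3-year lag, and the simple least-squares fit are illustrative assumptions only; they are not MWA's actual driver variables or regression model.

```python
import numpy as np

# Hypothetical annual "ocean driver" index and stream-flow series (synthetic data,
# standing in for the kind of lagged relationship described in the text).
rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
ocean_index = np.sin(2 * np.pi * years / 22.0) + 0.3 * rng.normal(size=years.size)
flow = 50 + 20 * np.roll(ocean_index, 3) + 5 * rng.normal(size=years.size)  # flow lags the driver by ~3 yr

# Five-year trailing average of flow: the quantity being hindcast/forecast.
flow_5yta = np.convolve(flow, np.ones(5) / 5, mode="valid")   # aligned with years[4:]

# Driver value 3 years before each trailing-average point (the "anchor" lag).
driver_lagged = ocean_index[1:years.size - 3]

# Ordinary least-squares fit of the trailing average on the lagged driver.
slope, intercept = np.polyfit(driver_lagged, flow_5yta, 1)

# "Forecast": apply the fitted relation to the most recent driver values, which
# project the trailing average roughly 3 years beyond the end of the record.
forecast = intercept + slope * ocean_index[-3:]
print(np.round(forecast, 1))
```

The anchor lag is what provides the forecast lead time: no knowledge of future ocean conditions is required at the time the forecast is issued.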

Figure 2. Forecasts and Observations for sample MWA service area in Upper Pecos River Watershed in north central New Mexico. Forecast: 6 year projection of a 5 yr trailing average. Hindcast: 5 yr trailing average based on 3 year lag to anchor variable. Forecasts beyond solid green line termination were produced no later than Dec 31, 2015.

 

Figure 3. Hindcast skill representation for 5yta using 3 yr lag to anchor. Equation for regression not shown, and based on most recent total data sample. Sample MWA service area in Upper Pecos River Watershed in north central New Mexico.

I usually like to draw people’s attention to the observation that, for this location, we have some proven predictive ability to anticipate moisture deficits and surpluses over a multi-year period. It could be said that this method can help in anticipating droughts in the subject region a few years in advance. Figure 3 shows a performance skill metric for this set of forecasts. Such scatter plots have diagnostic value as well as comparative value for evaluations against other forecasts. They can also be used to reproduce values such as the mean error and the root mean squared error, among other performance data.
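The summary statistics behind such a scatter plot reduce to a few lines of code. A minimal sketch with placeholder numbers (not the values plotted in Figure 3):

```python
import numpy as np

# Placeholder paired values (e.g., hindcast vs. observed 5-yr trailing average flows).
observed = np.array([42.0, 55.3, 61.1, 48.7, 39.5, 44.2])
forecast = np.array([45.1, 52.8, 58.9, 50.2, 41.0, 46.5])

errors = forecast - observed
mean_error = errors.mean()                    # bias of the forecasts
rmse = np.sqrt((errors ** 2).mean())          # root mean squared error
r = np.corrcoef(forecast, observed)[0, 1]     # tightness of the scatter about the 1:1 line

print(f"mean error = {mean_error:.2f}, RMSE = {rmse:.2f}, r = {r:.2f}")
```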

We’ve also developed related products, such as a one-year moving average forecast of stream flows 2 years in advance. Figures 4 and 5 follow the logical pattern of Figures 2 and 3. Coincidentally, in this case I am also able to compare our forecasts almost directly to a UN IPCC sub-model report. That report documents the use of downscaled GCM models to simulate stream flow volumes over several decades into the future, for the Gila River near Gila, New Mexico. That stream is only a few hundred miles from the PnP gage that I used, and the time frames of concern are similar.

Figure 4. Superposition of actual observation time histories of two different New Mexico watersheds (green lines) onto two competing forecast techniques. Upper right: MWA hindcast and forecast of annual flow rates for Pecos Near Pecos. Below: UN IPCC SubVendor 001 hindcast and forecast of annual flow volumes for Gila near Gila.

 

Figure 5. A representation of calibration skill. Upper right: percent errors for each year estimate, MWA 2-year-lead annual flow volume forecast for Pecos Near Pecos, New Mexico. Below: percent errors for each year estimate, UN IPCC SubVendor 001 annual flow volume forecast for Gila near Gila, New Mexico.

 

 

However, that UN IPCC sub-model report did not originally include a chart of the actual observations, and therefore there was no reproducible calibration documentation. To begin the process of evaluating the skill of these forecasts, I’ve added that observation timeline in green. From this figure one can see a multitude of model runs in light gray-blue, and an ensemble, or average, model result in bold black. In comparison to the observed flows, the models appear to consistently overestimate the minimum flow volumes for this stream. Moreover, no model or ensemble shows compelling visual correlation to the observation history.
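A first-pass version of that evaluation can be written in a few lines once the observations are digitized alongside the model runs. The arrays below are placeholders, not the actual Gila observations or the sub-model output:

```python
import numpy as np

# Placeholder annual flow observations and an ensemble of downscaled model runs (one row per run).
obs = np.array([30.0, 12.0, 55.0, 18.0, 40.0, 9.0, 35.0, 15.0])
runs = np.array([[38, 30, 50, 33, 42, 28, 37, 31],
                 [35, 27, 58, 30, 39, 25, 40, 29],
                 [41, 33, 52, 35, 45, 30, 36, 34]], dtype=float)

ensemble_mean = runs.mean(axis=0)          # the bold "average model" curve
bias = ensemble_mean - obs                 # positive values mean the ensemble overestimates flow

# Check the bias specifically in the driest observed years (the low-flow problem noted in the text).
dry_years = obs < np.percentile(obs, 40)
print("overall bias:", round(bias.mean(), 1))
print("bias in low-flow years:", np.round(bias[dry_years], 1))
print("correlation of ensemble mean with observations:", round(np.corrcoef(ensemble_mean, obs)[0, 1], 2))
```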

The calibration information for our case shows that our 2-year-lead forecasts anticipate the ups and downs of annual stream flow very well. The magnitude of these swings as predicted by our forecast could be better. The skill is best for mean flow years, and outliers are not always accurately forecast. This is not an uncommon result. We share this deficiency with the rest of the predictive methods. It appears that quantitative forecasting of extreme climate events remains beyond anyone’s solution at this time. In any event, our practice is to always disclose our skill for every forecast. Over the past 2 years, a majority of our modest regression-based forecasts have shown 90% accuracy skills. See for example: http://www.abeqas.com/wp-content/uploads/2015/10/MWA_totalPerformanceUpdate2_102015.png

I had examined that Gila example at the request of a prospective client last October. Upon seeing the results, I felt it might help me to raise awareness of our high climate forecast skill through more formal comparisons to additional vendors. I developed estimation criteria intended to assess both transparency and skill for any vendor’s climate forecasting products. I sent out related surveys to an initial vendor list and also researched their sites. My vendor list ranged from the Old Farmer’s Almanac variations, through the USDA NRCS SNOTEL forecasts, and on to the UN IPCC Vendor along with their regionally downscaled sub-models. For every case, I first reviewed the Vendor’s web site. If I failed to find disclosure of calibration reports, then I emailed the Vendor. If the Vendor responded, I updated my initial assessment with that information. If the Vendor did not respond, then I noted it and moved on.

As with my evaluations for other vendors, I started from the UN IPCC’s high level summaries in order to generally understand the content of their climate predictions and hindcasts. It was already clear from Figure 4 for example, that we both had forecast products which could be compared in time and space.

I also assumed that their calibration (hindcast skill) products were based on GCMs which had all been run continuously through decades of simulation spans and compared to historical decadal observations. Such practices can be seen, for example, in Shuhua Li et al. (2008). This assumption turned out to be incorrect, as I detail here.

Figure 6 is adapted from Chapter 9 of WG1AR5 and represents some CMIP5-related potential calibration products. The figure shows an apparent array of long-term model history-matching time series products along with a representation of an observation-based time series of the same variable as a solid black line. In this case the variable is the temperature anomaly (°C). However, the model simulations featured in the figure appear to include runs which received annual re-initialization to observations.

 

Figure 6. In this figure from Chapter 9 of WG1AR5, the results of the different global circulation models (GCMs) are shown alongside the observations, indicated by the solid black line. Cyan highlighted text refers to the UN IPCC representation of the predictive value of the GCMs shown. Cyan highlighted years simply highlight results from about 1960 through 2000.

 

These initializations are of concern. Typically, in any deterministic model, initializations of parameters are applied only once, at the beginning of the time series simulation. Few would dispute that this one-time “initiation” is the very root meaning of the word. There would need to be extensive qualifications for any initialization to be repeated through the deterministic time frame. Were such to occur, they would be re-initializations.
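The difference matters for skill reporting, and a toy example makes it visible. In the sketch below, `step` is a stand-in for one year of a deterministic model, and the "observations" are synthetic; both are assumptions for illustration only. The re-initialized run is pulled back to the data every year, so its apparent hindcast agreement says little about free-running predictive skill.

```python
import numpy as np

def step(state):
    # Stand-in for one year of a deterministic model update (illustrative only).
    return 0.9 * state + 0.3

rng = np.random.default_rng(1)
obs = np.cumsum(rng.normal(0.1, 0.5, size=30))      # synthetic "observed" record

# Run A: initialized once at the start, then free-running for the whole span.
free = np.empty_like(obs)
free[0] = obs[0]
for t in range(1, obs.size):
    free[t] = step(free[t - 1])

# Run B: re-initialized to the latest observation at the start of every year.
nudged = np.empty_like(obs)
nudged[0] = obs[0]
for t in range(1, obs.size):
    nudged[t] = step(obs[t - 1])                     # restarts from the observed state each year

rmse = lambda x: np.sqrt(np.mean((x - obs) ** 2))
print(f"free-running RMSE: {rmse(free):.2f}   annually re-initialized RMSE: {rmse(nudged):.2f}")
```

The re-initialized run will almost always look far more skillful over the hindcast period, even though the underlying model is identical in both cases.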

Merryfield et al. (2010) provides a description of a GCM re-initialization practice. In this case, the document stressed that such practices at best might lead to improvements in seasonal forecasting. In other words, the predictive value of their models was not asserted at the time to reach beyond seasonal time frames of 3 months or so. The authors summarized that much of the apparent skill expressed by their forecasts came from the fact that they re-initialized the model values:

“..model sea surface temperatures (SSTs) are nudged toward the observed values during a multi-year period preceding the beginning of a forecast.”

The adjustments were needed because they could not accurately model or predict the ocean parameters sufficiently far in advance.

Yet in spite of this deficiency, the initialization technique was gradually expanded throughout the GCM modeling community from seasonal to decadal forecasting time spans. These initializations came to dominate my concerns about GCM calibrations and skill reporting. They raised a host of additional questions: Why are these parameters continually reset to match observations, when the GCMs are supposed to be predicting them in the first place? Where do the modelers obtain the future observations they would need to keep their predictive decadal models in line? How do they leap to wholesale publication of results with high confidence?

A recent paper by Suckling and Smith (2013) covers some aspects of this new purposing and initializing of the GCMs. Notably, the article points to the practice of re-initialization of the GCMs’ boundary conditions with recent observations. The authors state in simple language that

“At present it is not clear whether initialising the model with observations at each forecast launch improves the skill of decadal forecasts… At a more basic level, the ability to provide useful decadal predictions using simulation models is yet to be firmly established.”

It’s generally agreed within model intercomparison communities that one would not adopt model forecasts which project further into the future than the span of time over which they were calibrated. But in spite of such concerns, the CMIP5 program persists in delivering

“..a framework for coordinated climate change experiments for the next five years and thus includes simulations for assessment in the AR5 as well as others that extend beyond the AR5.” (Taylor et al. 2008)

This CMIP5 Design guidance document specifies the following time scales for prediction applications:

“1) near-term decadal prediction simulations (10 to 30 years) initialized in some way to ..” and

“2) long-term (century time-scale) simulations initialized from ..”

No misgivings about these forecast time scales are communicated, other than examples such as this, buried deep within subheadings:

Chapter 3, Part 1.2, “Extend integrations with initial dates near the end of 1960, 1980 and 2005 to 30 yrs,” subsection “Further details on the core runs”: “..though the whole question of initializing the climate system presents one of the biggest scientific challenges of decadal prediction” (my emphasis in bold).

The strange sentence appears to claim that their exercise is an enormous scientific challenge. I wonder if they meant to say that it might be impossible. Certainly many scientists and engineers share their UN IPCC Chapter 9 and 11 hand-wringing about non linear dynamics and chaos, including me. But if such obstacles are the real problem, then why are my forecasts so accurate?[1]

In any case, by virtue of default acceptance of the CMIP5 Design Guidance document, the re-initializing of GCM runs now appears to be the institutional cover for all of the subsequent regional decadal forecasts. Lawrence Livermore National Laboratory (LLNL) has dedicated the Program for Climate Model Diagnosis and Intercomparison (PCMDI) to maintain operational and re-initialization capability for CMIP work at:

http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf

http://cmip-pcmdi.llnl.gov/cmip5/experiment_design.html

This program includes detailed guidance and support on how re-initializations are to be implemented. Figure 7 represents an example from Kharin et al. (2012) of four steps associated with data re-initialization.

Figure 7. From Kharin et al.: Statistical Adjustment of Predictions

 

Little is left to the imagination in this paper, except any reflection that the results are only suitable (if even then) for seasonal forecasts. The diagrams show four components to the adjustments (a schematic code sketch follows the list):

1. The models are re-initialized every year,

2. The re-initialized models are then bias-corrected,

3. The bias-corrected models are then trend-adjusted, and

4. The ensemble of results is then merged together.
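To indicate roughly what those four steps involve, here is a schematic sketch. It is not Kharin et al.'s actual procedure; the placeholder one-year "model", the synthetic observations, and the simple linear adjustments are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.linspace(0.0, 0.6, 20) + 0.1 * rng.normal(size=20)    # synthetic observed anomalies

def one_year_forecast(start):
    # Placeholder for running a model one year forward from a given starting state.
    return start + 0.05 + 0.1 * rng.normal()

# Step 1: re-initialize every ensemble member to the latest observation each year,
# then integrate it one year ahead.
members = 5
raw = np.array([[one_year_forecast(obs[t - 1]) for t in range(1, obs.size)]
                for _ in range(members)])                      # shape: (members, years - 1)

# Step 2: bias correction -- remove each member's mean error over the hindcast period.
bias = (raw - obs[1:]).mean(axis=1, keepdims=True)
bias_corrected = raw - bias

# Step 3: trend adjustment -- remove each member's residual linear trend relative to the observations.
years = np.arange(raw.shape[1])
adjusted = np.empty_like(bias_corrected)
for i, series in enumerate(bias_corrected):
    residual_trend = np.polyfit(years, series - obs[1:], 1)[0]
    adjusted[i] = series - residual_trend * (years - years.mean())

# Step 4: merge the ensemble into a single product by averaging the adjusted members.
merged = adjusted.mean(axis=0)
print(np.round(merged[:5], 2))
```

Every one of these steps uses the very observations being "predicted", which is why the resulting hindcast agreement overstates free-running skill.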

I tried a somewhat equivalent but much looser process in an informal confirmation exercise of this initialization practice (Figure 8). I found that the general practice does indeed improve the perceived skill.

Figure 8. I produced my own version of the initialization approach. Instead of running GCMs, I produced sets of random numbers, shown as blue dots in the left image. I consider these to be random walk models (RWMs). The right image documents, via the open blue circles, that I re-initialized each RWM result every year to anchor it to the observations shown by the black line. Then I blended the results together, shown by the dotted blue line. I ended up with a very nice fit!
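The Figure 8 exercise is straightforward to reproduce in outline. A minimal sketch with a synthetic "observation" record (not the series used in the figure):

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_models = 40, 10
obs = np.cumsum(rng.normal(0.02, 0.15, size=n_years))      # synthetic observation record

# Each "random walk model" is restarted from the observed value at the start of every year,
# so it can never drift more than one step away from the record it is supposedly predicting.
walks = np.empty((n_models, n_years))
walks[:, 0] = obs[0]
for t in range(1, n_years):
    walks[:, t] = obs[t - 1] + rng.normal(0, 0.15, size=n_models)

blended = walks.mean(axis=0)                                # blend of the re-initialized walks
corr = np.corrcoef(blended, obs)[0, 1]
print(f"correlation of blended re-initialized random walks with observations: {corr:.2f}")
```

Despite the "models" containing no physics at all, the blended, annually re-initialized result tracks the record closely, which is the point of the exercise.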

 

I had suggested that ocean parameters were the main subject of re-initialization. Figure 9, adapted from Gonzalez et al. (2015), provides a partial explanation for this process.

Figure 9. After Gonzalez et al., (2015). The longer a CMIP model is run, the lower the fidelity to observations. Hence the practice of initialization.

 

The modeled Pacific Ocean surface temperature signature patterns in both hemispheres, including the equatorial zones, decay rapidly and are poorly represented by the GCMs after only a year. Re-initialization appears to apply to those temperatures, if I’m not mistaken. Only then can the models lurch forward again for a few months of potentially plausible forecasting.

An apparent appeal to bring CMIP validation and calibration practices back into line with accountable and transparent skill designation may be contained in a recent paper by Fricker et al. (2013). However, the paper’s final paragraph appears to have been disregarded:

” The ideas that we have presented are somewhat underdeveloped and they suggest several directions for further investigation. We feel that greater understanding of different types of spurious skill, their causes and their implications for evaluation would be valuable. Many more fair performance measures for ensembles that are interpreted as different types of sample could be constructed too. We shall also be interested to see if more investigations of the performance of predictions across a wide range of timescales will help to improve the quality and utility of climate predictions.”

Now that the process of CMIP validation via initializations has been institutionalized, climate change scientists apparently believe that there is no further need for actual experiments, calibration reports, or any other customary transparency. Rather, so long as the money flows, the latest observations will continue to be fed into the CMIP machine at one end. The other end will continue to emit long term climate forecasts for all locations, modalities and scales. The deployment of these inaccurate results will continue then to grow in cost and scope around the world.

It appears that without transparency demands from the public, the multi-billion-dollar, CMIP-blessed GCM machine will endure. What is to be done about a titanic and misguided enterprise? I recommend, to start, that the skill of any climate change Vendor’s decadal forecasts, predictions, projections, and hindcasts be clearly disclosed both with and without initializations. Otherwise, at the very least, the playing field is not level for small independents who offer less alarming but also more accurate solutions.


References

Fricker, T.E., C.A.T. Ferro, and D.B. Stephenson, 2013, Three recommendations for evaluating climate predictions METEOROLOGICAL APPLICATIONS 20: 246 – 255 DOI: 10.1002/met.1409

Gonzalez, P.L.M., and L. Goddard, 2015, Long-lead ENSO predictability from CMIP5 decadal hindcasts. Climate Dynamics, DOI 10.1007/s00382-015-2757-0

HYDROCOIN, Sweden-hosted workshop in 1992 on groundwater model skill inter-comparisons (report 24002761.pdf).

Kharin, V.V., G.J. Boer, W.J. Merryfield, J.F. Scinocca, and W.-S. Lee, 2012, Statistical adjustment of decadal predictions in a changing climate. Geophysical Research Letters, Vol. 39, L19705, DOI:10.1029/2012GL052647

Meehl, G.A., L. Goddard, J. Murphy, R.J. Stouffer, G. Boer, G. Danabasoglu, K. Dixon, M.A. Giorgetta, A.M. Greene, E. Hawkins, G. Hegerl, D. Karoly, N. Keenlyside, M. Kimoto, B. Kirtman, A. Navarra, R. Pulwarty, D. Smith, D. Stammer, and T. Stockdale, 2009, Decadal Prediction: Can It Be Skillful? Bulletin of the American Meteorological Society, October 2009, 1467–1485

Merryfield, W.J., W.S. Lee, G.J. Boer, V.V.Kharin, P. Badal, J.F. Scinocca, and G.M. Flato, 2010, The first coupled historical forecasting project (CHFP1). Atmosphere-Ocean Vol. 48, Issue 4 pp. 263-283

Shuhua Li, Lisa Goddard, and David G. DeWitt, 2008: Predictive Skill of AGCM Seasonal Climate Forecasts Subject to Different SST Prediction Methodologies. J. Climate, 21, 2169–2186. doi: http://dx.doi.org/10.1175/2007JCLI1660.1

Suckling, E.B. and L.A. Smith, 2013, An evaluation of decadal probability forecasts from state-of-the-art climate models. Centre for the Analysis of Time Series, London School of Economics.

Taylor, K.E., R.J. Stouffer, and G.A. Meehl, 2008, A Summary of the CMIP5 Experiment Design, Lawrence Livermore National Laboratory http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf

UN Intergovernmental Panel on Climate Change (UN IPCC) home page: http://www.ipcc.ch/

UN Intergovernmental Panel on Climate Change (UN IPCC), WG1AR5 Chapter 9, Evaluation of Climate Models, 2013 (WG1AR5_Chapter09_FINAL.pdf)

Flato, G., J. Marotzke, B. Abiodun, P. Braconnot, S.C. Chou, W. Collins, P. Cox, F. Driouech, S. Emori, V. Eyring, C. Forest, P. Gleckler, E. Guilyardi, C. Jakob, V. Kattsov, C. Reason and M. Rummukainen, 2013: Evaluation of Climate Models. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

UN IPCC, 2014: Climate Change 2014 Synthesis Report, Summary for Policymakers (AR5_SYR_FINAL_SPM.pdf)

UN Intergovernmental Panel on Climate Change UN IPCC WG1AR5 Chapter 11 Near-term Climate Change: Projections and Predictability 2013

Kirtman, B., S.B. Power, J.A. Adedoyin, G.J. Boer, R. Bojariu, I. Camilloni, F.J. Doblas-Reyes, A.M. Fiore, M. Kimoto, G.A. Meehl, M. Prather, A. Sarr, C. Schär, R. Sutton, G.J. van Oldenborgh, G. Vecchi and H.J. Wang, 2013: Near-term Climate Change: Projections and Predictability. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Wallace, M.G. 2015 Ocean pH Accuracy Arguments Challenged with 80 Years of Instrumental Data.

Guest post at http://wattsupwiththat.com/2015/03/31/ocean-ph-accuracy-arguments-challenged-with-80-years-of-instrumental-data/

 


[1] For what it may be worth, I hope to get closer to an answer in my current graduate elective course in non-linear dynamics and chaos.

ChrisB
February 3, 2016 4:33 pm

Spot on. No wonder they can simulate the movement of not only the trunk but also the tail on a day by day basis.
Now move on, nothing to see here. The trunk will move and the tail will wiggle a hundred years into the future.

george e. smith
Reply to  ChrisB
February 4, 2016 9:18 am

If the “models” do not model any actual real planet, such as one that rotates once on its axis, in about 24 hours, and has about 73% of its surface area covered by oceans, all interconnected; then it is not very likely that you will get any real information about what might happen on such a real planet.
Real planets take NO notice of the average value of ANYTHING.
They only respond to the real time here and now values of EVERYTHING.
g

4TimesAYear
Reply to  george e. smith
February 4, 2016 9:09 pm

“Real planets take NO notice of the average value of ANYTHING.
They only respond to the real time here and now values of EVERYTHING”
Absolutely. I for one am so tired of the “global average” being called a “temperature” – it’s a statistic. It won’t melt ice anywhere.

RACookPE1978
Editor
Reply to  ChrisB
February 4, 2016 10:06 am

Strange thing.
For all of the thousands of Global Circulation Model “average” global average temperature vs years-in-the-future “results” we have heard about since the mid-1980’s, I have never seen their actual global “output” ( temperature, wind, pressure, and humidity results) plotted over a real globe on a daily, monthly, or yearly basis.

Aphan
Reply to  RACookPE1978
February 4, 2016 11:26 am

Sounds like a Willis project. You should suggest it to him. I’d love to see those results.

steve in Seattle
February 3, 2016 4:36 pm

WoW … just so very much to digest here, my initial comment is to thank you for your time and efforts at this topic ! I usually have to reread the post several times and then try to wade thru the references. Thanks for all the Links also !

Reply to  steve in Seattle
February 4, 2016 3:44 am

+1. So much to learn.

Janice Moore
Reply to  steve in Seattle
February 4, 2016 10:09 am

12 Key Points for Non-technical Readers (like I)
{with my edits for clarity}
1. “No typical scientist or engineer would apply significant trust to models which do not provide transparent calibration documentation.”
2. “… {The} UN IPCC’s unprecedented and opaque practice of replacing model results with undisclosed data … is {completely at odds with} standard model initialization {protocol}.”
3. “… {the author’s firm’s} 2-year lead forecasts anticipate the ups and downs of annual stream flow very well. The magnitude of these swings as predicted by our forecast could be better. The skill is best for mean flow years and outliers are not always accurately forecast. This is not an uncommon result. We share this deficiency with the rest of the predictive methods. It appears that quantitative forecasting of extreme climate events remains beyond anyone’s solution at this time.”
4. “I also assumed that {the IPCC’s} calibration (hindcast skill) products were based on GCMs which had all been run continuously through decades of simulation spans, and {then} compared to historical decadal observations. … This assumption turned out to be incorrect … .”
5. “… the model {hindcasting} simulations featured in {Figure 6 above} appear to include runs which received annual re-initialization to observations.”
6. “These initializations are of concern. Typically in any deterministic model, initializations of parameters are applied only once at the beginning of the time series simulation.”
7. “… ‘… model sea surface temperatures (SSTs) are nudged toward the observed values during a multi-year period preceding the beginning of a forecast.’” {Merryfield, et. al. (2010)}
8. “… expansion of the initialization technique {, i.e., “the practice of re-initialization of the GCMs’ boundary conditions with recent observations”} … from seasonal to decadal forecasting time spans. … raised a host of additional concerns: Why are these parameters continually reset to match observations, when the GCMs are supposed to be predicting them in the first place? Where do they now obtain those future observations they need to keep their predictive decadal models in line? How do they leap to wholesale publications of results with high confidence?”
9. “It’s generally agreed within {the} model inter-comparison {profession}, that one would not adopt model forecasts which are longer in time projection than the span of time that they are calibrated to.”
10. “But in spite of such concerns, the CMIP5 program persists in delivering: ‘..a framework for coordinated climate change experiments for the next five years … ‘ {and} ‘1) near-term decadal prediction simulations (10 to 30 years) initialized in some way …’ and ‘2) long-term (century{!} time-scale) simulations initialized from …’. No {qualification, i.e., no expressed lack of confidence in} these forecast span time scales {is} communicated … .”
11. “LLNL (Lawrence Livermore National Laboratory) has dedicated the PCMDI (Program for Climate Model Diagnosis and Intercomparison) to maintain operational and re-initialization capability for CMIP work … {Figure 7 illustrates the} four steps associated with data re-initialization: …
1. The models are reinitialized every year
2. The reinitialized models and then bias-corrected,
3. The bias-corrected models are then trend-adjusted, and
4. The ensemble of results then are merged together. …
I also found {See Figure 8} that the general practice does improve the perception of higher skill.”
12. “The modeled Pacific Ocean surface signature patterns of temperature … are poorly represented by the GCMs after only a year. Re-initialization appears to apply to those temperatures, if I’m not mistaken.

Only then can the models lurch forward again for a few months of potentially plausible forecasting.

” {favorite line 🙂 }
***************************************
Well done, Dr. Wallace!

Aphan
Reply to  Janice Moore
February 4, 2016 10:23 am

Thank you Janice! Makes this easier to share with non techies!

Janice Moore
Reply to  Janice Moore
February 4, 2016 10:36 am

Seriously, Aphan? COOL! You are a skillful, experienced, well-informed-on-this-subject-matter, teacher and, thus, your remark is praise worth smiling about! Thank you!

Curious George
Reply to  Janice Moore
February 4, 2016 11:19 am

That’s why they don’t forecast anymore. Projections only. By definition, any projection is plausible – it is a different game altogether.

February 3, 2016 4:50 pm

Covered the inability of GCMs to regionally downscale in essay Last Cup of Coffee. Touches on other deceptions, also.

Janice Moore
Reply to  ristvan
February 4, 2016 10:34 am

Mr. Rud Istvan (smile),
I think it would be okay if you also just mentioned the title of your book… . I will go farther and say: BUY RUD ISTVAN’S GREAT BOOK!
“Blowing Smoke: Essays on Energy and Climate.” Available here: http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=Rud+Istvan+%22Blowing+Smoke%22
Great effort on behalf of science truth and, thus, freedom.
Gratefully,
Janice
#(:))

1sky1
February 3, 2016 4:52 pm

Since predictive skill is very much at issue here, there’s a curious absence of any reference to well-established optimal autoregressive methods, such as Wiener or Kalman-Bucy filters.

ferdberple
Reply to  1sky1
February 4, 2016 6:06 am

The UN IPCC climate model results are “Projections”. These are not “Predictions”.
A projection has no predictive skill. If it did it would be called a prediction. The fraud and deception comes when these projections are then misrepresented as predictions.
A prediction can be falsified. Either it will turn out to be true or not. If it turns out to be false, then the hypothesis is false. Prediction is key to the scientific method.
A projection cannot be falsified. A projection is always true, because it does not predict. A projection is simply a mathematical construct that says nothing about the underlying hypothesis.
The fraud and deception occurs when the audience is misled to believe that Projection means Prediction. It does not, but this is never made clear to the public. We are misled to believe that climate projections are predictions. But they are not.

Hivemind
Reply to  ferdberple
February 6, 2016 1:37 am

projection: a calculation or guess about the future based on information that you have
prediction: a statement about what you think will happen in the future
Projection and prediction are different, but only in a minor way. To say, as the IPCC have said, that their projections shouldn’t be used as if they are predictions is pure sophistry.

Reply to  ferdberple
February 6, 2016 2:10 am

“A projection cannot be falsified.”
Nonsense. A projection is simply a conditional prediction. If the conditions are fulfilled and the projection turns out wrong, it is falsified.
In the IPCC case, they say, we can work out the consequences of prescribed GHG emissions. But we cannot predict what people will choose to emit. So we calculate scenarios. Users decide which scenario they think most probable. Afterwards, we know how the scenarios worked out, and can evaluate the projection.

george e. smith
Reply to  1sky1
February 4, 2016 9:12 am

Before you can try to predict the future, you better be able to predict the past which you already know what happened.
But you can’t predict what happened in the past unless you actually use the past observations that tell you exactly what happened in the past, and use those to predict for you what happened in the past that you already know about.
ANY sort of filter, will only assist you in hiding what you already know about what happened in the past, because any sort of filter can only throw away information that you already have.
The best way to predict what happened in the past, is to record what happened in the past, and report that. Because that is the most information about what happened in the past that you can ever have, so it is most likely to lead to an accurate prediction of what happened in the past.
As for predicting the future. Forget it; I’m sure there’s a thermodynamics law violation there somewhere.
G

1sky1
Reply to  1sky1
February 4, 2016 1:37 pm

My comment was addressed to an author who states that his “post was written to explore the surprisingly poor predictive skills…of GCMs.” Subsequent comments here miss that thrust completely.
BTW, autoregressive prediction filters do not in any way “hide what you already know about behavior the past.” On the contrary, they utilize the recorded past to construct optimal estimates of the future, taking account of noise-levels and the estimated bandwidth of the underlying physical signal.

Reply to  1sky1
February 4, 2016 1:59 pm

The UN IPCC documents do pay some lip service to auto-regression but that appears to be all. I agree with you 1sky1 that autoregression techniques have great value, and I use them myself on many occasions. However the forecasts of mine featured in this post are developed by lagged correlations to other “climate precursor” time series.

1sky1
Reply to  1sky1
February 4, 2016 3:43 pm

Mike Wallace:
While I’m aware that you’re using a different technique, it occurred to me that predictive filters would provide a valuable baseline for judging stochastic predictability

February 3, 2016 4:54 pm

very useful and well written.
thank you.
i printed it out.

goldminor
February 3, 2016 4:57 pm

I would think that the stream flow graph shown in Fig2 will show an upward spike at the end of this year, which should continue on to the next winter. If it is following the changes in the ENSO regions.

February 3, 2016 5:02 pm

Ye gods! Your random walk example is decisive, but the result of it was obvious from sentence three alone in the article telling us what they are doing. Either a total tone-deafness to science and mathematics is involved here, or deliberate, appalling dishonesty on a world-wide scale. All the more I am convinced that when this farrago finally collapses, criminal lawsuits must be launched against thousands of scientists, traders and politicians for all the damage done to humanity and to wildlife.

Reply to  Ron House
February 4, 2016 2:40 am

“Either a total tone-deafness to science and mathematics is involved here, or deliberate, appalling dishonesty on a world-wide scale.”
After watching the climate clowns for decades, I have come to the conclusion that it is a mixture of both, with deliberate, appalling dishonesty being the majority portion of the mix.
We do have to account for climate “science” drawing some of the least capable men ever to earn (earn???) a PhD. (Consider the clown from Penn State) But after accounting for the lack of any real science training or ability to think logically, the overwhelming evidence is that they are mostly just dishonest rent-seeking grant eaters.
~ Mark

Leonard Lane
Reply to  Ron House
February 4, 2016 9:52 am

Ron House, I agree the tricks and schemes revealed by the author are sad, even sickening. How can such fraudulent and spurious results be called science, science-based, or anything near to science. The trickery and lack of transparency used in the GCMs is indeed discouraging, maddening, and deserving of criminal investigation and prosecution of the perpetrators.
Re-initialization every delta t is not physical modeling, it is mere data manipulation. Astounding article and deep appreciation and gratitude to the author in Albuquerque.

February 3, 2016 5:14 pm

Various skeptics have for years been demanding engineering-quality verification and validation tests of climate models. Climate modelers have scoffed at the idea. Models would fail completely.
One might think fear of model failure is the cause of modeler scoffery. It’s not. Climate modelers have no idea about error analysis, and see no need for external validation. They are of the opinion, anyway, that taking differences from a base-state simulation subtracts away all the error (Don’t believe they’re that naïve? I have it in black-and-white).
Universities will not require validation studies, and are immune to reason on the subject. The only body that can demand model verification is Congress, and they seem too mentally opaque to grasp the need.
So, thanks for your detailed appraisal, Mr. Wallace. It’s very worth doing, and I hope your work reaches a publishable state.

Reply to  Pat Frank
February 4, 2016 12:40 pm

Pat,
After several private emails from people in climate science, I have to agree with you that they have no idea what error analysis is. I asked how you can accurately measure a value that is smaller than the accuracy of the instrument. I was told that by taking many measurements and averaging the error is reduced. They just don’t get it that there is a minimum signal to noise ratio for a measurement to be useful. Below that point, your “measurement” is mostly noise rather than signal.
Brooks

James at 48
Reply to  isthatright
February 4, 2016 12:57 pm

Back in the day on Real Climate I used to now and again raise the subject of Gage R & R. I think most of them had no clue what I was referring to.

Pat from country Vic
Reply to  isthatright
February 4, 2016 1:30 pm

I agree completely. There needs to be other uncertainties included in the error analysis of temperature records.
When calculating the “average temperature across the Earth” at a particular moment in time, the uncertainty at all the unknown points is not just the uncertainty at the known points. It has to be greater.

Reply to  Pat Frank
February 4, 2016 4:02 pm

isthatright, that’s been my experience as well, after considerable exposure.
Last August, I gave a talk at the WFS conference in Erice on systematic measurement error in the air temperature record. It’s both considerable (1 MB pdf) and completely ignored.
My last slide was on instrumental resolution of the thermometers and sensors used to measure air and SS temperature since the Cavendish min-max thermometer of 1753. It showed that the published uncertainties are equal to (before 1900) or less than (ever since) the resolution of the instruments themselves. I.e., the published uncertainty limits are literally physically impossible.
This caused considerable uproar. One physicist, from MIT no less, claimed that using a meter stick of 1 mm resolution, one could get atomic scale accuracy with enough repeated measurements. Another physicist said that resolution doesn’t matter when many measurements are averaged. And the group leader of the record I’d used to illustrate the point said later in an email that “natural dithering” from combining multiple measurements allowed resolution to be discounted.
They were all wrong (I demonstrated that for disclaimers #2 and #3). But that’s the sort of damaged thinking that is virtually universal in the climate business.

Clyde Spencer
Reply to  Pat Frank
February 5, 2016 3:38 pm

Pat,
From my old surveying text (Smirnoff, 1961):
“3-3. REPEATED MEASUREMENTS
It is possible to increase the accuracy of a measurement by making repeated measurements of the SAME quantity and taking the average of the results. The method of repeated measurements is used in all cases when the precision of the instrument is lower than the prescribed accuracy….In those instances when the readings cannot be accumulated in repeated measurements, the prerequisite condition for improving the accuracy is that measurements must be of such an order of precision that there will be some variations in recorded values…In this connection it must be pointed out that the increased accuracy of the mean value of repeated single measurements is possible only if the discrepancies in measurements are entirely due to so-called accidental errors…It should be observed in the above example that if the precision ratio in measuring 99.50 were 1:100, all ten measurements would be recorded as 99.5 and the average would still be 99.5. In other words, AT A LOW ORDER OF PRECISION NO INCREASE IN ACCURACY WILL RESULT FROM REPEATED MEASUREMENTS.”
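The numerical point in that quoted passage is easy to demonstrate. A minimal sketch with hypothetical values (a true value of 99.53 and an instrument resolution of 0.1; none of this is data from the talk or the textbook):

```python
import numpy as np

rng = np.random.default_rng(4)
true_value = 99.53
resolution = 0.1          # the instrument only reads to the nearest 0.1
n = 10_000                # number of repeated measurements

# Case 1: random error far smaller than the resolution -- every reading rounds to 99.5,
# so no amount of averaging recovers the true value.
quiet = np.round((true_value + rng.normal(0, 0.005, n)) / resolution) * resolution

# Case 2: random error comparable to the resolution -- readings dither between 99.4, 99.5 and 99.6,
# and their average does converge toward the true value.
noisy = np.round((true_value + rng.normal(0, 0.1, n)) / resolution) * resolution

print("mean of low-noise readings:", round(quiet.mean(), 3))   # stuck at 99.5
print("mean of dithered readings:", round(noisy.mean(), 3))    # close to 99.53
```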

Reply to  Pat Frank
February 5, 2016 7:10 pm

Right on, Clyde. There’s no getting past resolution. This is almost freshman basic, and one would expect them all to know it. But evidence says otherwise.
Neglect of error is rife in climate science, explaining much of what goes on there. But how the APS lost track is beyond knowing.

Aphan
February 3, 2016 5:22 pm

I wonder if we’ll see Brandon Gates on this thread or not…..:P
And thank you for all of the incredibly hard work Mr. Wallace.

Marcus
Reply to  Aphan
February 3, 2016 6:54 pm

..Sorry Aphan, you’re stuck with Miss Nicky below !! LOL

Brandon Gates
Reply to  Aphan
February 4, 2016 7:56 pm

Aphan,
It appears Mr. Wallace has concluded that the re-initializations done for decadal forecasting/downscaling were also done in the CMIP5 model runs published in AR5. I’ll start with this:
Figure 6 is adapted from Chapter 9 of WG1AR5 and represents some CMIP5 related potential calibration products. The figure shows an apparent array of long term model history matching time series products along with a representation of observation based time series of the same variable as a solid black line. In this case the variable is Temperature Anomaly deg C. However, the model simulations featured in the figure appear to include runs which received annual re-initialization to observations.
http://wattsupwiththat.files.wordpress.com/2016/02/clip_image012_thumb.jpg
Figure 6. In this figure from Chapter 9 of WG1AR5, the different global circulation model results (GCMs) are shown, alongside the observations, indicated by the solid black line. Cyan highlighted text refers to UN IPCC representation of the predictive value of the GCMs shown. Cyan highlighted years simply highlight results from about 1960 through 2000.

If he explains elsewhere why he thinks some runs as shown in that plot appear to have been re-initialized, I have not seen it. His next paragraph proceeds from that conclusion:
These initializations are of concern. Typically in any deterministic model, initializations of parameters are applied only once at the beginning of the time series simulation. Who would argue that this one time “initiation” is the very root meaning of the word? There would need to be extensive qualifications for any initialization to be repeated through the deterministic time frame. Were such to occur, they would be re-initializations.
Apart from his conclusion that any of those model runs show evidence of having been reinitialized, I agree with everything else he writes in the above paragraph. If he is still following this thread, perhaps he will describe what he is seeing in that plot that I am not.

Reply to  Brandon Gates
February 4, 2016 11:13 pm

Brandon, Is your concern that I inferred this without being able to connect the dots to your satisfaction? You appear to be an author related to this figure. In the interest of transparency of your works, please connect the dots to demonstrate they were NOT reinitialized. The description in Chapter 9 for this figure is moot on that point. My guess is still that they were initialized, but perhaps every 5 years vs every year. Same problem of course.

Brandon Gates
Reply to  Brandon Gates
February 5, 2016 1:34 am

Mike Wallace,

Is your concern that I inferred this without being able to connect the dots to your satisfaction?

Yes.

You appear to be an author related to this figure.

Such is not my occupation, and I’ve never contributed anything to any IPCC report.

In the interest of transparency of your works, please connect the dots to demonstrate they were NOT reinitialized.

Even if I had been involved in producing the data in that figure, negative claims are difficult to demonstrate. As you are the one bringing the claim, and particularly since it’s a positive claim, I’d say it’s up to you to tie it all together.

The description in Chapter 9 for this figure is moot on that point.

As we’d expect if they deliberately did something you and I both agree they should not.

My guess is still that they were initialized, but perhaps every 5 years vs every year.

Good guesses can lead to valid findings, but nothing is jumping out of Figure 6 at me to suggest re-initializations every 5 years. It might help if you described what you’re seeing in it to suggest otherwise.

EternalOptimist
February 3, 2016 6:14 pm

I am no climate scientist.
my Takeaway from this detailed post is
‘What are we getting for our money ?”

February 3, 2016 6:25 pm

“However, the model simulations featured in the figure appear to include runs which received annual re-initialization to observations.”
Where on earth do you get that from? I see no evidence. And it would be very difficult to do. You can’t “re-initialize”. You can only initialize.
“A related highly cited paper (Meehl et al., 2009) also frames the new re-purposing of the GCMs.”
“Re-purposing”?. GCM’s will continue to be used for climate modelling. The decadal forecast project is a separate enterprise. A useful reference is the 2014 Meehl paper
“Decadal Climate Prediction: An Update from the Trenches” The abstract begins
“This paper provides an update on research in the relatively new and fast-moving field of decadal climate prediction, and addresses the use of decadal climate predictions not only for potential users of such information but also for improving our understanding of processes in the climate system. External forcing influences the predictions throughout, but their contributions to predictive skill become dominant after most of the improved skill from initialization with observations vanishes after about six to nine years”
which puts it in perspective. GCM’s are versions of Numerical Weather Forecasting programs which are used beyond their time for predicting weather. The models keep generating weather that complies with the physics and forcings, but is no longer synchronised to real weather. In recognition of this, they are initially wound back to start well before the period of interest. The basis for that is that it is better to get rid of any ructions caused by unphysical aspects of initial data than to get benefit from the initial state (because no benefit is expected). Decadal prediction takes a compromise view, in trying to see what of climate relevance can be predicted, and for how long.
“I am also able to compare our forecasts almost directly to a UN IPCC sub-model report”
You should give a link or reference to that report. But I think it is odd that you should suggest that decadal prediction of climate fails because you have been able to make a regression predictor for stream flow.

Marcus
Reply to  Nick Stokes
February 3, 2016 6:51 pm

….Better to Remain Silent and Be Thought a Fool than to Speak and Remove All Doubt
…………………………— ABRAHAM LINCOLN.
…….You would do well to take heed !

Janice Moore
Reply to  Marcus
February 4, 2016 10:51 am

+1!

Curious George
Reply to  Marcus
February 4, 2016 11:26 am

“weather that complies with the physics and forcings” – do I understand correctly that forcings have little if anything to do with physics?

Hugs
Reply to  Marcus
February 4, 2016 11:39 am

Please don’t stalk Nick.

Reply to  Marcus
February 4, 2016 12:42 pm

Another +1

Don K
Reply to  Nick Stokes
February 3, 2016 8:55 pm

Nick
It’d take me days to fully understand this article, but perhaps it will help you to describe a context where reinitialization is (or was at any rate) practiced with a predictive model. And it made sense. Back in the early 1960s there were not a lot of satellite tracking stations in the world. The handful that existed didn’t have great coverage — especially for low altitude satellites. And even when the stations managed to acquire and track a satellite, it was orders of magnitude more difficult to retrieve tracking data from them than it is today. So the drill was to predict satellite positions based on launch time and launch site radar data plus the planned orbit. That’s initialization. Then when (if) tracking data turned up, it was used to adjust the satellite position and velocity estimates to observed positions. That’s reinitialization.
The article seems to be describing something along that line. Does it make sense in a GCM context? I haven’t a clue.

Reply to  Don K
February 4, 2016 9:46 am

“Does it make sense in a GCM context?”
No, nor in CFD. They are solving partial differential equations, by time stepping, where each state is developed from the previous state. And each state has to have a lot of internal consistencies, else unphysical accelerations etc will occur (blow-up). For example, winds have momentum, which you can’t arbitrarily change. Any “re-initialization” would take a long time to settle down.

Reply to  Don K
February 4, 2016 11:51 am

“Surely you are talking short-term general circulation models”
All circulation models use a time step of less than an hour. In CFD pressure is transmitted by sound waves, and you have to solve on that scale.
But the bottom line is that the author claims re-initialization, but gives absolutely no evidence. I don’t believe it.

Reply to  Nick Stokes
February 3, 2016 9:08 pm

“However, the model simulations featured in the figure appear to include runs which received annual re-initialization to observations.”
Where on earth do you get that from? I see no evidence. And it would be very difficult to do. You can’t “re-initialize”. You can only initialize.

He is talking about the “bias adjustments” that are applied to bring predictions back in line with observations. For example, Meehl (2014) explained that:

Due to issues involved with sampling model variability, trend, and conditional bias as noted above, more robust estimates of the bias adjustment are possible with more start dates for the hindcasts. This can involve start dates every year, rather than every 5 years as originally planned for CMIP5…
If you can explain what this actually meant, it would be appreciated.

Reply to  opluso
February 3, 2016 9:11 pm

Obviously, the blockquote did not end properly. That should be my question at the end:
If you can explain what this actually meant, it would be appreciated.

ferdberple
Reply to  Nick Stokes
February 4, 2016 6:52 am

“improved skill from initialization with observations vanishes after about six to nine years”
=======================================
This is a very telling quote. Very. It means that the cumulative error over six to nine years overwhelms the predictive ability of the models. They cannot be relied upon for anything more than 6 to 9 years into the future.
Since climate is generally regarded to be 30 years of weather, a model that is only useful for 6 to 9 years is useless as a climate model, as it cannot supply accurate results for a long enough period to create even 1 climate data point.

RACookPE1978
Editor
Reply to  ferdberple
February 4, 2016 10:10 am

But the world’s economies are being destroyed and very real people needlessly harmed and killed not based on even a two or three year “extrapolation” into the future, but 100 and 200 years into the future! And the worst of the “futures” are for results 500 and 800 and 1500 years out (Greenland and Antarctica “melting” )

February 3, 2016 7:39 pm

Nick, Help me out here. You say “their contributions to predictive skill become dominant after most of the improved skill from initialization with observations vanishes after about six to nine years”
After 9 years the GCM have no predictive skill, I don’t have a problem with that. But I am puzzled by the models accuracy then, when I look at the charts they produce they hindcast extremely accurately over very large time scales.
You claim that is impossible for the models to have any skill after 9 years and I agree. Who should the Justice Department start investigating first?

Reply to  jinghis
February 3, 2016 8:21 pm

“You say”. Not me, that’s a quote from Meehl’s paper. He didn’t say they have no skill after 9 years – he said they lose the improved skill gained from initialization.

Reply to  Nick Stokes
February 3, 2016 8:56 pm

Nick, “You say”. Not me, that’s a quote from Meehl’s paper. He didn’t say they have no skill after 9 years – he said they lose the improved skill gained from initialization.”
Let me try again, quoting Meehl, “External forcing influences the predictions throughout, but their contributions to predictive skill become dominant after most of the improved skill from initialization with observations vanishes after about six to nine years”
Clearly he is saying that the models are less skillful at predicting nine years after initialization.
Once again, how do you explain the long periods of astounding accuracy the models have at hind casting, when Meehl says that the improved skill from initialization last for less than 9 years?
Clearly the hind cast results are adjusted on a yearly basis, just as Mr. Wallace claims above.

Reply to  Nick Stokes
February 3, 2016 10:21 pm

“Once again, how do you explain the long periods of astounding accuracy the models have at hind casting”
Mostly people here complain of poor hind-casting. Could you show some data to substantiate?

Reply to  Nick Stokes
February 4, 2016 3:20 am

Nick, ““Once again, how do you explain the long periods of astounding accuracy the models have at hind casting”
Mostly people here complain of poor hind-casting. Could you show some data to substantiate?”
Sure, every single ensemble prediction shows very good correlation with temperature up to the year 2000 or so when they actually have to start making predictions, then the correlation rapidly collapses. Obviously predicting skill declines with time.
I have to point out though that you have now failed three times to answer the question, instead you have opted to obfuscate the issue. Apparently you have no answer, I can live with that.

Aphan
Reply to  jinghis
February 4, 2016 11:04 am

I’m not convinced what you are talking about is really hind casting. Hind casting is the opposite of forecasting. It’s inputting current data and trying to go backwards in time to see if the model can “match” the past for which we have records to compare to the model’s results. Because modelers don’t fully understand Earth’s climate system, hind casts are usually pretty poor.
But initializing a model with accurate data up to 2000, and then letting it run FORWARD in time is not “hindcasting”, it is FOREcasting.

billw1984
Reply to  Nick Stokes
February 4, 2016 6:11 am

jinghis, maybe OK correlation rather than good. There are still fairly large differences between the average GCM output and historical temperatures for the periods in which temperatures move more sharply up or down. (Which may be part of what Nick means by people complaining of poor hindcasts?)

Don
Reply to  jinghis
February 4, 2016 12:10 pm

During my pitiful attempts to learn fly-fishing in my youth, I exhibited much more skill at hindcasting than at forecasting.

601nan
February 3, 2016 8:34 pm

Most unfortunate.
The IPCC’s best and brightest …. Mmmmm … are actually Geographers armed with Microsoft Excel, and alas nothing more. There are no “Super Computers” or TOP 500 Super Computers at work here, being driven by armies of educated, skilled and trained Programmers and Statisticians. No. Just a few Geographers (Hansen, Jones and Mann for example) and a PC Microsoft Excel spreadsheet.
Sad

bw
February 3, 2016 9:11 pm

Great guest post.
Paragraph 5 says it: models have “surprisingly poor predictive skill”.
The calibrations are improper. But at least the IPCC obscures the model results with data of “opaque” origin.

4 Eyes
February 3, 2016 11:24 pm

It seems to me that the IPCC are not modelling the way the rest of us would. If I prepare a depletion model of an oil reservoir I input initial conditions and then hit the start button. If the output does not match the known history I have to revisit the model and use my knowledge and experience to determine the likely cause of the mismatch, then I modify the model to suit and re-run it using the same initial conditions. I repeat this process as necessary until the model matches history reasonably well. Then I run the model into the future and take my forecast to the boss. Being able to take a forecast to the boss is the only reason for preparing the model; if I can’t present a reliable forecast it’s adios, señor. If I had the luxury of re-initializing every year during a single model run my model would appear fantastic and oh so accurate, and I’d be promoted to chief reservoir engineer in no time. It seems the IPCC is not actually modelling.
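
For comparison, here is a minimal sketch of the workflow 4 Eyes describes, using an invented one-parameter exponential-decline "reservoir" and made-up history data (none of this is real reservoir-engineering code): parameters change between runs, every run restarts from the same initial conditions, and only the matched model is run into the future.

```python
# Minimal history-matching sketch (invented data, hypothetical one-parameter model).
# The point: each candidate run restarts from the SAME initial conditions;
# nothing is re-initialized part-way through a run.
import numpy as np

history = np.array([100.0, 92.0, 85.5, 80.2, 75.8])           # observed production rates
initial_conditions = {"rate": 100.0}                           # fixed for every run

def run_model(decline_rate, ic, n_steps):
    """Toy exponential-decline 'reservoir model'."""
    return ic["rate"] * np.exp(-decline_rate * np.arange(n_steps))

best_rmse, best_decline = np.inf, None
for decline in np.linspace(0.01, 0.15, 141):                   # stand-in for engineering judgment
    simulated = run_model(decline, initial_conditions, len(history))
    rmse = float(np.sqrt(np.mean((simulated - history) ** 2)))
    if rmse < best_rmse:
        best_rmse, best_decline = rmse, decline

forecast = run_model(best_decline, initial_conditions, 20)     # only now is the future run
print(f"history-matched decline ≈ {best_decline:.3f}, history RMSE ≈ {best_rmse:.2f}")
```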

ferdberple
Reply to  4 Eyes
February 4, 2016 6:25 am

If I had the luxury of re-initializing every year during a single model run my model would appear to be fantastic and oh so accurate and I’d be promoted to chief reservoir engineer in no time.
=====================
The only problem would be that every model would look great, no matter how wrong it was.
Which of course is why the climate modelers are so keen to adopt the technique.
The only question then becomes how often you re-initialize. Every year is probably too infrequent, as it allows the model to drift rudderless for a whole year. Daily re-initialization, however, would keep drift to a minimum, making the models incredibly accurate.
In fact, daily initialization would allow the climate models to predict climate, perhaps as much as 3 days in advance. We could call these “Weather Forecasting Models”.
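
A minimal sketch of that reductio, with invented random-walk "observations" and an equally invented model that has no skill at all: the apparent error is controlled almost entirely by how often the run is snapped back onto the observations, not by anything in the model itself.

```python
# Toy demonstration: apparent skill vs. re-initialization frequency.
# Both the "observations" and the "model" are random walks; the model has no
# real skill whatsoever. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_days = 365 * 10
obs = np.cumsum(rng.normal(size=n_days))          # pretend observed record
model_steps = rng.normal(size=n_days)             # the model's own (unrelated) increments

def apparent_rmse(reset_every):
    state, errors = obs[0], []
    for day in range(1, n_days):
        if (day - 1) % reset_every == 0:
            state = obs[day - 1]                   # snap back to yesterday's observation
        state = state + model_steps[day]           # then take one model step
        errors.append(state - obs[day])
    return float(np.sqrt(np.mean(np.square(errors))))

for label, k in [("once, at the start", n_days), ("yearly", 365), ("daily", 1)]:
    print(f"re-initialized {label:>18}: apparent RMSE = {apparent_rmse(k):7.2f}")
```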

Aphan
Reply to  ferdberple
February 4, 2016 11:08 am

I think you’re onto something here ferd! (Wink) Thanks for the grin.

Richard A. O'Keefe
February 4, 2016 12:34 am

Would it be fair to say that if your aim is to provide short-term predictions, buttressing a weak model by frequent re-initialisation is OK, but if your aim is to provide long-term predictions, you have to evaluate your model without such re-initialisation to see if it can do the job? Or have I misunderstood?

D. J. Hawkins
Reply to  Richard A. O'Keefe
February 4, 2016 6:27 am

What’s really fair to say is that by re-initializing during the hindcast period you establish a false sense of accuracy, or fitness for use, for the model, so as to bolster its supposed accuracy for future temperature profiles. It’s like that stock scam where someone takes 1,024 prospective marks and sends them a newsletter predicting the future movement of a stock, but for half he says “up” and for half he says “down”. The next round is for a different stock; rinse and repeat. Four or five iterations in, he says that if you want future editions of the newsletter, cough up $1,000 or so. This is exactly what these models are doing.
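
The arithmetic behind that scam is worth spelling out; the recipient count is taken from the comment above, and the rest is just the generic version of the con.

```python
# Newsletter scam arithmetic: start with 1,024 recipients, tell half "up" and
# half "down" each round, and keep only the half that saw a correct call.
marks = 1024
for round_number in range(1, 6):
    marks //= 2
    print(f"after round {round_number}: {marks} people have seen nothing but correct calls")
# After five rounds, 32 people have watched five straight correct "predictions"
# and are the natural targets for the $1,000 subscription pitch.
```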

Aphan
Reply to  Richard A. O'Keefe
February 4, 2016 11:39 am

Very fair and accurate Richard. But in this case, not only do the models fail, but so does the theory that inspired those models in the first place. They simply cannot let that happen.

siamiam
Reply to  Richard A. O'Keefe
February 4, 2016 1:05 pm

That’s my takeaway as well.

Dodgy Geezer
February 4, 2016 2:44 am

MINOR TYPO
“2. The reinitialized models and then bias-corrected,”
should read “2. The reinitialized models are then bias-corrected,”

Jurgen
February 4, 2016 3:10 am

There is this long tradition of fortune tellers. I guess there will always be a market for them. An important part of the selling trick of fortune-telling is a) to impress or hypnotize your victims with magical woo-haa and theatrics, b) to use vague generalizations that play on the victims’ needs without committing to specifics that can be easily falsified, and c) to make yourself immune to falsification anyway by being paid in advance without a “no cure, no pay” clause, or by simply not being there any more when the future arrives.
These are the only ways I can think of that this horde of climate modelers is making money with their theatricks. A profitable industry indeed.

ferdberple
Reply to  Jurgen
February 4, 2016 6:28 am

d) use observation of the victim to re-initialize (adjust) your fortune telling as the story progresses.

Wil Pretty
February 4, 2016 4:56 am

Looks to me like a scaled up version of the daily weather forecast.
It’s mostly accurate for the moment of its publication.

ferdberple
Reply to  Wil Pretty
February 4, 2016 6:37 am

In many places if you simply forecast yesterday’s weather for today, this is more accurate than the weather forecasting models.
Since 1850 temperatures have been increasing as we came out of the LIA. A forecast that temperatures will continue to increase is nothing special. That is the idiot prediction. The prediction of the straight line.
What would have been skillful would have been to predict a plateau. A hiatus. A pause. Not a single climate model saw it coming. Not one.
Thus it must be concluded that the climate models have no skill. Their climate forecasts were that the 150 year trend would accelerate. Instead it decelerated.
Re-initializing the climate models does nothing to improve their skill. It simply serves to hide their error.

Aphan
Reply to  ferdberple
February 4, 2016 11:15 am

Exactly!

robinedwards36
February 4, 2016 6:51 am

I like this post. Here’s someone involved in climate events (and forecasting) whose current remuneration and future prospects depend on getting the forecasts sufficiently correct to satisfy his clients and induce them to return for more. Contrast this with academia, and with the anthropogenic climate change industry in general. I’ll not mention names (I sometimes can’t bring myself to type them), but we would all come up with very similar lists. Governments and their often low-grade offspring are in the same cosy situation. No forecasting success? Never mind, we control the purse strings!

ferdberple
February 4, 2016 6:57 am

The Law of Large Numbers lets you average many wrong answers and arrive at something close to the right answer, provided the errors are random. Except of course if this is surface temperature data. In that case the errors will not randomly cancel themselves; instead the data must be continually re-adjusted, time and time again, until the results are correct and match whatever the party in power says they must match.

bobbyvalentine466921
February 4, 2016 8:39 am

So we have to wait until 2100 to see what the models say about CAGW in the year 2100?

Don V
Reply to  bobbyvalentine466921
February 4, 2016 9:25 am

No. Apparently we have to wait until 2101 when the models are “re-initialized” to see that they predicted correctly.

Aphan
Reply to  bobbyvalentine466921
February 4, 2016 11:22 am

No, we can see it now. But if you can find one that is NOT being “adjusted” or “corrected” or “reinitialized”, then document exactly what it says today, and next week, and every year until 2100 and then see how well what it said would happen, with what does happen.
I’m willing to bet that if such a model actually exists, the general public will never, ever, know about it.

Aphan
Reply to  Aphan
February 4, 2016 11:42 am

Oops…”see how well what it said would happen MATCHES what does happen. “

Janice Moore
February 4, 2016 10:26 am

Some Relevant Excerpts from Bob Tisdale’s e book, Climate Models Fail:
“Climate models are tuned using observational data. Sometimes the models use data, e.g., sea surface temperatures, as inputs, but the climate models which try to simulate past climate (for attribution studies) or future climate (for projections) contain no data within themselves. Also, climate model simulations are often initialized from data observed at a specific point in time. From that point forward, however, the models go on their merry way, blithely crunching numbers, trying to guess at the past (known as hindcasts) or trying to guess at the climate of the future (known as projections). The models make two massive assumptions: 1) that future human emissions of greenhouse gases and other anthropogenic factors turn out to be what they guess they will be; 2) that climate on Earth actually responds to those greenhouse gases and other anthropogenic factors the way the modelers guess that it does — that’s a massive assumption…an assumption that is not supported by data nor by the models.”
“This book presents how poorly climate models simulate past climate on Earth and that climate models show no skill whatsoever at hindcasting — which means climate models FAIL, for they are not realistic, not even for the last few decades.”
“Fyfe, et al. (2011) in Skillful Predictions of Decadal Trends in Global Mean Surface Temperature note that they had to adjust the model outputs for their evaluations:
….for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases.
Remarkable. The models performed so poorly that Fyfe, et al. (2011) adjusted the models’ outputs before they evaluated the models.”
{Climate Models Fail by Bob Tisdale is available here: https://bobtisdale.wordpress.com/2013/09/24/new-book-climate-models-fail/ (and on Amazon, here: http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=%22Climate+Models+Fail%22+by+Bob+Tisdale&rh=i%3Aaps%2Ck%3A%22Climate+Models+Fail%22+by+Bob+Tisdale)}
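
To make the Fyfe et al. quote above concrete, here is a minimal sketch of what a linear trend correction of hindcasts can look like. The series are invented, and this is an illustration of the general idea being criticized, not Fyfe et al.’s actual procedure.

```python
# Hedged illustration of a "linear trend correction": fit linear trends to the
# model output and to the observations, then replace the model's trend with the
# observed one. By construction, the corrected model can no longer get the
# long-term trend wrong. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1961, 2011)
obs = 0.012 * (years - 1961) + rng.normal(0, 0.08, years.size)   # pretend observations
model = 0.020 * (years - 1961) - 0.05                            # model warms too fast

obs_fit = np.polyfit(years, obs, 1)
mod_fit = np.polyfit(years, model, 1)
corrected = model - np.polyval(mod_fit, years) + np.polyval(obs_fit, years)

print(f"model trend : {mod_fit[0] * 10:.3f} per decade")
print(f"obs trend   : {obs_fit[0] * 10:.3f} per decade")
print(f"'corrected' : {np.polyfit(years, corrected, 1)[0] * 10:.3f} per decade (matches obs by construction)")
```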

Mike Wallace
Reply to  Janice Moore
February 4, 2016 1:01 pm

Ha Ha. Bob T clearly saw this as well. Funny how the Fyfe et al. content documents the exact opposite of “Skillful Predictions of Decadal Trends..”

Janice Moore
Reply to  Mike Wallace
February 4, 2016 2:34 pm

Lol, indeed. It should be: “Forced-skill Predictions …”

Kevin Kilty
February 4, 2016 10:57 am

“…The paper explores how these gcms will next be applied towards the benefit of societies by producing regionalized products over shorter time spans ranging from years to decades….”
und you vill enjoy das application und benevits!
In 1984 I attended a public hearing regarding groundwater usage in an unnamed community. The meeting was prompted by a group of citizens complaining that they had had to deepen their wells because of “declining” water levels. The state engineer had run models demonstrating the decline in water levels, but he had only very sketchy, inadequate data on which to build a model, and didn’t actually know much about the algorithm except how to type “run”. In fact, actual groundwater measurements showed a rising water table in many places. The complaining citizens lived above an unusually impermeable section of aquifer and were causing their own excessive drawdown by trying to make the prairie look like Kentucky. Never mind; the state engineer put a moratorium on new wells in place just the same.

Curious George
Reply to  Kevin Kilty
February 4, 2016 11:52 am

Modelers have not changed much since 1984.

Janice Moore
Reply to  Curious George
February 4, 2016 12:38 pm

True. And, if I may… “Modelers {Data meddlers} have not changed much since 1984.”

James at 48
February 4, 2016 12:52 pm

Initializations? We ain’t got no initializations! We don’t need no initializations! I don’t have to show you any stinking initializations!
And the corollary: Don’t like the data you’re gettin’? Tweak the boundary conditions, on the fly, to give the data you NEED! There now … doesn’t that feel better?

February 4, 2016 7:39 pm

My major professor had a funny line about hydrologic models: “Nobody but the modeler believes the model output. Everyone except the experimentalist believes the empirical results.” I do have some experience developing models under an EPA grant, and even though the model passed a validation it failed when applied to results from large-scale trials. Several other hydrologic models also failed. The flood on the Cedar River in Iowa in 2011 or 2008 was underestimated by the state hydrologic model by about threefold; the flood ended up 30 feet over flood stage rather than 10 feet. That model has been refined for at least 30 years by multiple groups. If the climate models are indeed being reinitialized, I would hope the modelers are able to figure out exactly what needs tweaking. Given their complexity and the number of nonlinear partial differential equations involved, I would say good luck with that.

4TimesAYear
February 4, 2016 9:12 pm

“No typical scientist or engineer would apply significant trust to models which do not provide transparent calibration documentation….How then do the premier United Nations Intergovernmental Panel on Climate Change (UN IPCC) GCMs provide neither model accuracy nor transparency in association with their forecasts?”
I sure hope someone raised this issue in Congressional hearings on climate change……

rxc
February 5, 2016 5:54 am

This is a very significant paper, if he has actually determined that they really do re-initialize the models periodically during the calculation. The wording of the referenced papers is very obscure, which immediately raises the suspicion that they are re-initializing. Tuning a code with model “dials” and settings that deal with particular special phenomena is generally considered to be bad form, and something you only do as a last resort. Re-initializing just produces junk output.
Engineers would NEVER accept a model for an important public purpose that involved re-initialization unless it was completely disclosed, fully explained, and there was no other way to get the result. Even then you could not really use that result to make any decisions, because it would be so contaminated by the re-initialization that you would never be sure exactly what ultimate result you were calculating.
I used to do international standard problems in which people calculated the behavior of nuclear power plants during accident scenarios, and no one would ever re-initialize and then claim that they had actually calculated the transient. They might complain because the boundary conditions along the path changed or were uncertain, or the instruments were not accurate, or the operators did something crazy, or something else changed along the way, but if those sorts of events were defined in the problem definition, then you had to live with them. No one gets to change the trajectory of the calculation along the way by re-initializing.
It is like saying that you can correct the trajectory of an artillery shell along its path to account for your inability to aim the cannon correctly. It may now be possible to do this with “smart” weapons, but calculations of a physical system are not “smart bombs”, except in the most pejorative sense.

Science or Fiction
February 5, 2016 6:05 am

This is an extremely important post. I just wish leaders of governments around the world were able to grasp that the conclusions of the fifth assessment report by the IPCC are actually falsified by this post.
This is my favorite quote from the IPCC report regarding initializations:
«When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. …»
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )
Full post here:
Model biases can be largely removed using empirical techniques a posteriori!

If you combine this with how the IPCC used the models and circular reasoning to exclude natural variation / long-term trends:
“Observed Global Mean Surface Temperature anomalies relative to 1880–1919 in recent years lie well outside the range of Global Mean Surface Temperature anomalies in CMIP5 simulations with natural forcing only, but are consistent with the ensemble of CMIP5 simulations including both anthropogenic and natural forcing … Observed temperature trends over the period 1951–2010, … are, at most observed locations, consistent with the temperature trends in CMIP5 simulations including anthropogenic and natural forcings and inconsistent with the temperature trends in CMIP5 simulations including natural forcings only.”
(Ref.: Working Group I contribution to fifth assessment report by IPCC. TS.4.2 Surface Temperature)
Full post here:
IPCC used circular reasoning to exclude natural variation!

In real science, engineering, or technology, any actions based on the IPCC report would have been suspended upon the discovery of these flaws.
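
For readers wondering what “empirical techniques a posteriori” can mean in practice, below is a minimal sketch of a lead-time-dependent drift correction of the kind commonly applied to decadal hindcasts. The data are invented, and this is the generic idea rather than any modelling centre’s actual code.

```python
# Sketch of an a posteriori drift correction: estimate the mean model-minus-
# observation error at each forecast lead time across many hindcast start
# dates, then subtract that mean drift from every hindcast. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_starts, n_leads = 46, 10                        # start dates 1960-2005, 10-year hindcasts
obs = rng.normal(0.0, 0.1, (n_starts, n_leads))   # stand-in observed anomalies

drift = 0.05 * np.arange(n_leads)                 # hindcasts drift warm with lead time
hindcasts = obs + drift + rng.normal(0.0, 0.1, (n_starts, n_leads))

mean_drift = (hindcasts - obs).mean(axis=0)       # estimated from the hindcasts themselves
corrected = hindcasts - mean_drift                # applied uniformly to every start date

print("raw RMSE      :", round(float(np.sqrt(((hindcasts - obs) ** 2).mean())), 3))
print("corrected RMSE:", round(float(np.sqrt(((corrected - obs) ** 2).mean())), 3))
```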

Mike Wallace
February 5, 2016 9:53 am

Thanks Science or Fiction, most interesting. Glad you appreciate the bizarre and tortured language they use to explain what they did without explaining what they did.
Thanks all others as well, I’ll stay tuned and comment if I think I can add further value.
Speaking of realclimate, I did comment on this issue recently there at
http://www.realclimate.org/index.php/archives/2015/11/and-the-winner-is/comment-page-1/#comment-637826
Gavin appeared to claim at the end that they did not reset anything other than solar dynamo and volcanoes. Then the post was closed to further discussion. Gavin if you are reading this, please feel free to respond.
As a climate change modeler, Gavin is also one of a number I reached out to with my original survey, but he never responded to that.

Science or Fiction
Reply to  Mike Wallace
February 5, 2016 2:55 pm

Thanks 🙂
On a mobile now. There is more of interest towards the end of 11.2.3.1, if my memory serves me right.

Aphan
Reply to  Science or Fiction
February 5, 2016 3:30 pm

“On a mobile now.”
I know I shouldn’t do it, but for some overwhelming reason I simply cannot resist the urge to say “That’s what SHE said”. 🙂
Forgive me….and now, back to science!

Aphan
Reply to  Science or Fiction
February 5, 2016 3:32 pm

And THEN I notice it’s the word MOBILE and not MODEL…..:) Laugh’s on me! (And yes, I highly amuse myself all the time. lol)

Science or Fiction
Reply to  Aphan
February 5, 2016 11:03 pm

:D

Mike Wallace
February 5, 2016 11:01 am

Brandon, you wrote earlier:
Good guesses can lead to valid findings, but nothing is jumping out of Figure 6 at me to suggest re-initializations every 5 years. It might help if you described what you’re seeing in it to suggest otherwise.
Like other commenters, I thought this seemed obvious, but in any case here is more detail, already partly covered by Science or Fiction above. First, from WG1AR5_Chapter 9:
“9.3.2.3 Relationship of Decadal and Longer-Term Simulations
The CMIP5 archive also includes a new class of decadal-prediction experiments (Meehl et al., 2009, 2013b) (Figure 9.1). The goal is to understand the relative roles of forced changes and internal variability in historical and near-term climate variables, and to assess the predictability that might be realized on decadal time scales. These experiments comprise two sets of hindcast and prediction ensembles with initial conditions spanning 1960 through 2005. The set of 10-year ensembles are initialized starting at 1960 in 1-year increments through the year 2005 while the 30-year ensembles are initialized at 1960, 1980 and 2005. The same physical models are often used for both the short-term and long-term experiments (Figure 9.1) despite the different initialization of these two sets of simulations. Results from the short-term experiments are described in detail in Chapter 11.”
Next, as SoF pointed out above, from WG1AR5_Chapter11, page 967:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time.”
Figure 11.2 below that phrase covers “CMIP5 multi-model initialized hindcasts..”
Regardless of their various claims of start dates, I remain mindful of the above statement in Chapter 9 that, “The set of 10-year ensembles are initialized starting at 1960 in 1-year increments through the year 2005”
That appears to disclose that re-initializations took place every year, not just every 5 years or every 10 years. And that is consistent with the Kharin et al. reference (my Figure 7).
Finally, a manual review of Figure 11.2b in comparison to my Figure 6 suggests the two are very consistent. I do believe they both come from the same re-initialized exercises.
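
Below is a minimal sketch of the design described in that Chapter 9 passage, with synthetic series standing in for both the observations and the model: 10-year runs launched every year from 1960 through 2005, each starting from the observed state. Even a model with no skill of its own stays close to the observations at short lead times, simply because it is never far from a fresh observed starting point.

```python
# Toy version of the CMIP5 decadal hindcast design: overlapping 10-year runs
# initialized from observations every year, 1960-2005. The "model" here is a
# skill-free random walk; its short-lead agreement with observations comes
# entirely from the frequent re-initialization. All series are synthetic.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1960, 2012)
obs = np.cumsum(rng.normal(0, 0.1, years.size))       # stand-in observed anomalies

def hindcast(start_index, n_leads=10):
    run = [obs[start_index]]                          # initialized to the observed state
    for _ in range(n_leads - 1):
        run.append(run[-1] + rng.normal(0, 0.1))      # then a skill-free random walk
    return np.array(run)

starts = [i for i, y in enumerate(years) if 1960 <= y <= 2005]
runs = {s: hindcast(s) for s in starts}

for lead in (1, 9):                                   # pooled over all start dates
    pred = np.array([runs[s][lead] for s in starts if s + lead < years.size])
    true = np.array([obs[s + lead] for s in starts if s + lead < years.size])
    print(f"lead {lead} yr: correlation with observations = {np.corrcoef(pred, true)[0, 1]:.2f}")
```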

AnonyMoose
Reply to  Mike Wallace
February 9, 2016 6:36 pm

“Next, as SoF pointed out above, from WG1AR5_Chapter11, page 967:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time.””
But this section is about the 10-year climate model experiments. Look up at page 961, Box 11.1: after initialization, the correlation drops until the 3-year point and then begins rising again. The section repeats several times that the model’s inherent behavior takes over, and that the model’s default behavior is considered to be correct for the long term. Huh, who would have thought that the IPCC’s favored models are considered by the IPCC to be correct.
https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter11_FINAL.pdf