Guest essay by Roger A. Pielke Sr.
Introduction
I have posted in the past about how the development of multi-decadal regional climate projections (predictions) for policymakers and the impacts communities is a huge waste of time and resources; e.g., see
The Huge Waste Of Research Money In Providing Multi-Decadal Climate Projections For The New IPCC Report
Today, I want to discuss this issue in relation to the Working Group I Contribution to the IPCC Fifth Assessment Report, Climate Change 2013: The Physical Science Basis.
The 2013 WG1 IPCC Report – Chapter 11 and Annex 1
Projections are presented in Annex 1:
http://www.ipcc-wg1.unibe.ch/guidancepaper/WG1AR5_AnnexI-Atlas.pdf
and
http://www.climatechange2013.org/images/uploads/WGIAR5_WGI-12Doc2b_FinalDraft_AnnexI.pdf.
This is titled
Annex I: Atlas of Global and Regional Climate Projections
The foundation of this Atlas is based on the information provided in Chapter 11 of the IPCC WG1 report titled
“Near-term Climate Change: Projections and Predictability”
Click to access WG1AR5_Chapter11_FINAL.pdf
Multi-decadal regional climate projections obviously cannot be any more skillful than shorter-term (i.e., “near-term”) projections (e.g., decadal), since decadal time periods make up the longer period! The level of skill achieved on decadal time scales must therefore be the upper limit on what is obtainable for longer time periods.
As written in Chapter 11
Climate scientists distinguish between decadal predictions and decadal projections. Projections exploit only the predictive capacity arising from external forcing.
Projections, then, are simply model sensitivity simulations. Because they ignore internal climate dynamics, their presentation to the impacts communities as scenarios is a gross overstatement of what they actually provide. They are useful only for improving our understanding of a subset of climate processes. To present results from them in the IPCC report without emphasizing this important limitation is not honest communication.
The issue of how the climate model results are presented should bother everyone, regardless of one’s view on the importance of greenhouse gases in the atmosphere.
Chapter 11, in contrast to Annex 1, is fortunately an informative chapter of the 2013 IPCC WG1 report that provides a scientific summary of predictability, although its discussion of the uncertainties in the “external climate forcings” and of skill is incomplete (e.g., see http://pielkeclimatesci.files.wordpress.com/2009/12/r-354.pdf ).
The chapter's focus is described this way:
This chapter describes current scientific expectations for ‘near-term’ climate. Here ‘near term’ refers to the period from the present to mid-century, during which the climate response to different emissions scenarios is generally similar. Greatest emphasis in this chapter is given to the period 2016–2035, though some information on projected changes before and after this period (up to mid-century) is also assessed.
Skilful multi-annual to decadal climate predictions (in the technical sense of ‘skilful’ as outlined in 11.2.3.2 and FAQ 11.1) are being produced although technical challenges remain that need to be overcome in order to improve skill.
Some important extracts from the chapter are [highlight added]
Near-term prediction systems have significant skill for temperature over large regions (Figure 11.4), especially over the oceans (Smith et al., 2010; Doblas-Reyes et al., 2011; Kim et al., 2012; Matei et al., 2012b; van Oldenborgh et al., 2012; Hanlon et al., 2013). It has been shown that a large part of the skill corresponds to the correct representation of the long-term trend (high confidence) as the skill decreases substantially after an estimate of the long-term trend is removed from both the predictions and the observations (e.g., Corti et al., 2012; van Oldenborgh et al., 2012; MacLeod et al., 2013).
The skill in hindcasting precipitation over land (Figure 11.6) is much lower than the skill in hindcasting temperature over land.
The skill of extreme daily temperature and precipitation in multi-annual time scales has also been assessed (Eade et al., 2012; Hanlon et al., 2013). There is little improvement in skill with the initialization beyond the first year, suggesting that skill then arises largely from the varying external forcing. The skill for extremes is generally similar to, but slightly lower than, that for the mean.
As part of Chapter 11, there is a section on Frequently Asked Questions. I have extracted excerpts from the FAQ 11.1 which is titled
If You Cannot Predict the Weather Next Month, How Can You Predict Climate for the Coming Decade?
Excerpts read [highlight added]:
“Climate scientists do not attempt or claim to predict the detailed future evolution of the weather over coming seasons, years or decades.”

Meteorological services and other agencies … have developed seasonal-to-interannual prediction systems that enable them to routinely predict seasonal climate anomalies with demonstrable predictive skill. The skill varies markedly from place to place and variable to variable. Skill tends to diminish the further the prediction delves into the future and in some locations there is no skill at all. ‘Skill’ is used here in its technical sense: it is a measure of how much greater the accuracy of a prediction is, compared with the accuracy of some typically simple prediction method like assuming that recent anomalies will persist during the period being predicted.

Weather, seasonal-to-interannual and decadal prediction systems are similar in many ways (e.g., they all incorporate the same mathematical equations for the atmosphere, they all need to specify initial conditions to kick-start predictions, and they are all subject to limits on forecast accuracy imposed by the butterfly effect). However, decadal prediction, unlike weather and seasonal-to-interannual prediction, is still in its infancy. Decadal prediction systems nevertheless exhibit a degree of skill in hindcasting near-surface temperature over much of the globe out to at least nine years. A ‘hindcast’ is a prediction of a past event in which only observations prior to the event are fed into the prediction system used to make the prediction. The bulk of this skill is thought to arise from external forcing. ‘External forcing’ is a term used by climate scientists to refer to a forcing agent outside the climate system causing a change in the climate system. This includes increases in the concentration of long-lived greenhouse gases.

Theory indicates that skill in predicting decadal precipitation should be less than the skill in predicting decadal surface temperature, and hindcast performance is consistent with this expectation.

Finally, note that decadal prediction systems are designed to exploit both externally forced and internally generated sources of predictability. Climate scientists distinguish between decadal predictions and decadal projections. Projections exploit only the predictive capacity arising from external forcing. While previous IPCC Assessment Reports focussed exclusively on projections, this report also assesses decadal prediction research and its scientific basis.
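The technical sense of ‘skill’ quoted above — accuracy measured against a simple reference method such as persistence — can be sketched numerically. A minimal illustration (all anomaly values below are invented purely for illustration, not real data):

```python
import numpy as np

def skill_score(obs, forecast, reference):
    """Mean-squared-error skill score: 1 = perfect, 0 = no better
    than the reference method, negative = worse than the reference."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_r

# Invented annual temperature anomalies
obs = np.array([0.10, 0.15, 0.12, 0.20, 0.25, 0.22, 0.30, 0.33])

# Persistence reference: predict that last year's anomaly repeats
persistence = np.concatenate(([obs[0]], obs[:-1]))

# A hypothetical model forecast that tracks the observations imperfectly
model = obs + np.array([0.03, -0.02, 0.04, -0.03, 0.02, 0.05, -0.04, 0.02])

print(round(skill_score(obs, model, persistence), 2))
```

A positive score means the hypothetical model beats simple persistence; a score at or below zero would mean no skill at all in the technical sense.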
What is remarkable about this chapter is that the authors now recognize that skillful predictions are very difficult even out to a decade. Only the skill in hindcasting near-surface temperature over much of the globe out to at least nine years has been emphasized. Skillful multi-decadal projections must be even more challenging.
Yet Annex 1 provides detailed regional projections decades into the future. It is
Annex I: Atlas of Global and Regional Climate Projections
http://www.ipcc-wg1.unibe.ch/guidancepaper/WG1AR5_AnnexI-Atlas.pdf
I have excerpted text from this Annex that explains what is provided (i.e., detailed regional multi-decadal climate projections):
Annex I: Atlas of Global and Regional Climate Projections is an integral part of the Working Group I contribution to the IPCC Fifth Assessment Report, Climate Change 2013: The Physical Science Basis. It will provide comprehensive information on a selected range of variables (e.g., temperature and precipitation) for a few selected time horizons (e.g., 2020, 2050, and 2100) for all regions and, to the extent possible, for the four basic RCP scenarios.
However, there is a fundamental flaw in the creation of Annex 1, and thus in any papers and studies on future climate impacts that build on it. Despite the widespread use of these model results, the activity is fundamentally flawed.
For this to be a robust approach for impact studies, the model results (when tested in hindcast) must show skill not only in replicating the current climate (tested by comparison with reanalyses in which the climate model is NOT forced by lateral boundary conditions and nudging from the reanalyses), but also in predicting CHANGES in regional climate statistics. This latter requirement must be met before the models can be accepted as robust projection (prediction) tools.
Necessary and Sufficient Tests of Model Prediction (Projection) Skill
To summarize:
· The ability of the model to skillfully reproduce the regional climate statistics from the climate model (from the GCM or downscaled by a higher resolution regional model) is a NECESSARY first condition.
· The REQUIRED condition is that they must show, in hindcast runs, skill at predicting CHANGES in regional climate statistics.
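The two conditions above can be sketched as a numeric test. Every number below is invented purely for illustration; the point is only the shape of the test — compare model and observed statistics for the current period (the necessary condition), then compare model and observed CHANGES between two periods (the stronger condition):

```python
import numpy as np

# Hypothetical regional means of some climate statistic (say, seasonal
# precipitation in mm/day) for an early and a late multi-decadal period.
# All values are invented for illustration only.
obs_early = np.array([3.1, 2.4, 5.0, 1.8, 4.2])
obs_late  = np.array([3.4, 2.1, 5.6, 1.7, 4.0])
mod_early = np.array([2.9, 2.6, 4.7, 2.0, 4.5])
mod_late  = np.array([3.0, 2.2, 5.4, 2.1, 4.1])

# Necessary condition: the model reproduces current-period statistics.
current_error = np.mean(np.abs(mod_late - obs_late))

# Stronger condition: the model reproduces the CHANGE between periods.
obs_change = obs_late - obs_early
mod_change = mod_late - mod_early
change_corr = np.corrcoef(obs_change, mod_change)[0, 1]

print(round(current_error, 2), round(change_corr, 2))
```

A model could pass the first check (small current-period error) while failing the second (changes uncorrelated with observed changes), which is exactly why the first condition alone is insufficient.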
A common mistake is to assume that one can use reanalyses to assess model prediction skill for the future. However, as discussed, for example, in the paper
Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008
using reanalyses to drive a model places a real-world constraint on the results that does not exist when the multi-decadal climate models are run for future decades (and indeed, lateral boundary conditions and nudging from the reanalyses must not be used in true hindcast tests of model skill). This issue is discussed in the paper
Pielke Sr., R.A. 2013: Comments on “The North American Regional Climate Change Assessment Program: Overview of Phase I Results.” Bull. Amer. Meteor. Soc., 94, 1075-1077, doi: 10.1175/BAMS-D-12-00205.1.
As discussed above, unless the global climate model (dynamically and/or statistically downscaled) can be shown to skillfully predict the current climate on regional scales when run over multi-decadal time scales in hindcast mode, it cannot be accepted as a faithful representation of the real-world climate.
Examples of IPCC Model Shortcomings
Multi-decadal global model predictions, in hindcast runs, however, have major shortcomings even with respect to the current climate! Peer-reviewed examples of these shortcomings, as summarized in the Preface to
Pielke Sr, R.A., Editor in Chief., 2013: Climate Vulnerability, Understanding and Addressing Threats to Essential Resources, 1st Edition. J. Adegoke, F. Hossain, G. Kallos, D. Niyoki, T. Seastedt, K. Suding, C. Wright, Eds., Academic Press, 1570 pp. [http://pielkeclimatesci.files.wordpress.com/2013/05/b-18preface.pdf]
are
Taylor et al, 2012: Afternoon rain more likely over drier soils. Nature. doi:10.1038/nature11377. Received 19 March 2012 Accepted 29 June 2012 Published online 12 September 2012
“…the erroneous sensitivity of convection schemes demonstrated here is likely to contribute to a tendency for large-scale models to `lock-in’ dry conditions, extending droughts unrealistically, and potentially exaggerating the role of soil moisture feedbacks in the climate system.”
Driscoll, S., A. Bozzo, L. J. Gray, A. Robock, and G. Stenchikov (2012), Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions, J. Geophys. Res., 117, D17105, doi:10.1029/2012JD017607. published 6 September 2012.
“The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings.”
Fyfe, J. C., W. J. Merryfield, V. Kharin, G. J. Boer, W.-S. Lee, and K. von Salzen (2011), Skillful predictions of decadal trends in global mean surface temperature, Geophys. Res. Lett.,38, L22801, doi:10.1029/2011GL049508
”….for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases.”
Xu, Zhongfeng and Zong-Liang Yang, 2012: An improved dynamical downscaling method with GCM bias corrections and its validation with 30 years of climate simulations. Journal of Climate 2012 doi: http://dx.doi.org/10.1175/JCLI-D-12-00005.1
”…the traditional dynamic downscaling (TDD) [i.e., without tuning] overestimates precipitation by 0.5-1.5 mm d-1…..The 2-year return level of summer daily maximum temperature simulated by the TDD is underestimated by 2-6°C over the central United States-Canada region”.
Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110
“…. local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.”
Stephens, G. L., T. L’Ecuyer, R. Forbes, A. Gettlemen, J.‐C. Golaz, A. Bodas‐Salcedo, K. Suzuki, P. Gabriel, and J. Haynes (2010), Dreary state of precipitation in global models, J. Geophys. Res., 115, D24211, doi:10.1029/2010JD014532.
“…models produce precipitation approximately twice as often as that observed and make rainfall far too lightly…..The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system …….little skill in precipitation [is] calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer‐scale resolution has little foundation and relevance to the real Earth system.”
Sun, Z., J. Liu, X. Zeng, and H. Liang (2012), Parameterization of instantaneous global horizontal irradiance at the surface. Part II: Cloudy-sky component, J. Geophys. Res., doi:10.1029/2012JD017557, in press.
“Radiation calculations in global numerical weather prediction (NWP) and climate models are usually performed in 3-hourly time intervals in order to reduce the computational cost. This treatment can lead to an incorrect Global Horizontal Irradiance (GHI) at the Earth’s surface, which could be one of the error sources in modelled convection and precipitation. …… An important application of the scheme is in global climate models….It is found that these errors are very large, exceeding 800 W m-2 at many non-radiation time steps due to ignoring the effects of clouds….”
Ronald van Haren, Geert Jan van Oldenborgh, Geert Lenderink, Matthew Collins and Wilco Hazeleger, 2012: SST and circulation trend biases cause an underestimation of European precipitation trends Climate Dynamics 2012, DOI: 10.1007/s00382-012-1401-5
“To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones. These mismatches are responsible for a large part of the misrepresentation of precipitation trends in climate models. The causes of the large trends in atmospheric circulation and summer SST are not known.”
I could go on with more examples. However, it is clear that these climate models are not robust tools for predicting climate conditions in the future.
Annex 1 of the 2013 IPCC WG1 report, therefore, is fundamentally flawed as it is based on multi-decadal climate model results which have not shown skill at faithfully replicating most of the basic climate dynamics, such as major atmospheric circulation features, even in the current climate. They have also shown no skill at predicting the CHANGES of regional climate statistics to the accuracy required for impact studies.
Views by Others
Now, in closing, I have extracted text below from separate e-mails from two well-known, major players in the climate area. Both accept that CO2 is the dominant climate forcing, that man is responsible, and that we need urgent action. These quotes are from e-mails that I have. They show that, despite other topics on which we disagree, they would presumably agree with me on the gross inadequacies of Annex 1 in the IPCC WG1 report.
The relevant part of the first e-mail reads
“It is also worth pointing out that neither initialised decadal predictions nor RCMs are the entirety of what can be said about regional climate change in the next few decades – and in fact, it is arguable whether either add anything very much. ;-)”
The second e-mail reads
“The climate effects are largely warming, I cannot say today’s climate models can tell us much more with any certainty. I will add that there is probably poleward and continental intensification of the warming. One further feature that appears to be robust is the movement of the storm belts in both hemispheres polewards. This has implications for the general circulation and in particular the climatology of precipitation intensity and variability (i.e., drought and flood). This poleward shift is seen in the data. There is some model evidence that over the next century ozone recovery could cancel come of this poleward shift in the Southern Hemisphere, but probably not in the NH. I think this is about as far I as we can go in forecasting climate over the next 50-100 years. Of course, sea level follows naturally from thermal expansion of the water as well as land ice melting. I think there is very little information (above noise) beyond the above in global climate models at regional level, even including downscaling.”
Since these individuals have been silent on the issue of the value of multi-decadal regional climate model predictions, I feel compelled to communicate that my flagging of the failure of this approach for the impacts and policymaker communities is shared even by some in the IPCC community.
Recommendation for Responding to this IPCC Deficiency
My recommendation is that when you hear or read of climate projections on multi-decadal time periods, ask the authors:
What is the quantitative skill of the models used in predicting their projected CHANGES? In other words, what is the predictive skill with respect to the climate metrics that are of importance to a particular impact?
If they cannot present quantitative evidence of such skill they are inappropriately presenting their studies. Annex 1 of the 2013 IPCC WG1, therefore, still needs an honest demonstration of the skill (if any) of their projections as part of a complete assessment of the state of climate science.
ferdberple –
You write
“Hindcasting is not a valid test of any model if either the model or the model builder has seen the hindcast data. The experiment is flawed because it is not double blind.”
I agree; an ideal test is blind. However, even when they do know the weather over the last several decades, they still fail to simulate significant aspects of the weather patterns, as I have documented in my post and papers. Thus, they would likely do even worse in a blind experiment.
Roger Sr.
Roger A. Pielke Sr.
Thank you. Your article and your comments in the thread are excellent.
Richard
Climate models do tell us something interesting, which is largely overlooked. When you look at the IPCC graph of climate model results, you see a spaghetti graph. The results are all over the place.
The IPCC then tries to average these results and claims that this somehow represents the future. This is simply nonsense, because the future is not an average of all possibilities.
What the spaghetti graph is really telling us is that the range of what may happen, with no change in external forcings, is quite large. That regardless of any actions we may take, it may get warmer or colder, and there is nothing we can do about it except adapt.
Thus, we are much wiser to spend money on adapting than on mitigating, because regardless of our actions, the climate may still change.
John Droz, jr.
Thank you for your comment. I agree with you but would modify slightly. The question that needs to be asked is
What is the predictive skill with respect to CHANGES in the climate metrics that are of importance to a particular social and/or environmental impact?
We can use reanalyses and other observations for the current climate. We do not need model predictions for that.
Roger Sr.
Roger, good article as usual. You’re indispensable in this debate. Marty
@Peter Taylor
“and 1957chev…
less of the ‘lefty shrills’ please – go do some homework on just who supports the scary climate story…..it commands support from both left and right….and I am left on that spectrum and have been an active critic of the IPCC’s supposed consensus for more than five years, but oddly, only the right-wing free-market press will publish my views or review my book….it is not a simple thing, this political analysis!”
This cannot be repeated often enough.
I’m going to buy your book.
@1957chev The truth does not benefit from turning this into a left-right thing. There is plenty of right wing support for AGW hysteria.
Roger A. Pielke Sr. says:
February 8, 2014 at 6:48 am
I agree; an ideal test is blind. However, even when they do know the weather over the last several decades, they still fail to simulate significant aspects of the weather patterns, as I have documented in my post and papers. Thus, they would likely do even worse in a blind experiment.
===========
Agreed, due to the lack of experimental controls, model performance only tells us how bad the models are performing. We cannot draw conclusions about how good they might be, because any ability to hindcast might simply be an artifact of the lack of experimental controls.
My point was directed at readers that are perhaps less familiar with computer models. These readers might naively assume, wrongly, that some ability to hindcast might indicate that the models had some ability to forecast. This assumption is wrong, because of the lack of experimental controls.
Personally, I appreciate your courage, leadership and huge contribution to science in speaking out about this abuse of the Public Trust being conducted in the name of Science. For more than a century scientists around the world have known the dangers of conducting science without experimental controls. The classic example being Clever Hans.
http://en.wikipedia.org/wiki/Clever_Hans
The only way to test a model that lacks rigorous experimental controls is to test it against the future as compiled by an independent observer. If however, as in the case of GISS for example, the model builder also collects the future data, then again there are no experimental controls and even the results against the future cannot be trusted.
We know from experiment after experiment that human beings unconsciously introduce bias into their work, no matter how honest their intentions. No matter how hard they try, their subconscious will cook the books to give them the answer they expect, and their conscious mind will remain completely unaware of what is going on behind the scenes. Thus, experimental design, the use of double blind controls to eliminate bias, must be at the heart of any scientific investigation.
Anyone who has ever proofread their own writing will have experienced this. You proofread an email over and over and it looks perfect. You send it out. The next day you re-read your email and discover there is a word missing. You can hardly believe it, because when you read the email the day before, the word was there. This effect is so startling that you suspect someone must somehow have erased the word from your email overnight.
But of course no one erased the word. It was your subconscious, trying to help, filling in the missing information so that the sentence made perfect sense while you were proofreading. It was only later, when you had forgotten what you were trying to say, that the missing word became visible to your conscious mind.
ferdberple:
At February 8, 2014 at 9:46 am you rightly say
Actually, there is a more fundamental reason why an ability to hindcast is not an indication of an ability to forecast.
There is only one past but there are an infinite number of ways to obtain an accurate hindcast, while there are an infinite number of possible futures but there is only one future that will evolve.
Forecast skill is demonstrated by a series of successful forecasts and nothing else. No GCM has existed for 50 years so no GCM has any demonstrated forecast skill for periods of 50 years.
Richard
richardscourtney says:
February 8, 2014 at 10:01 am
There is only one past but there are an infinite number of ways to obtain an accurate hindcast, while there are an infinite number of possible futures but there is only one future that will evolve.
==========
An interesting point, and fundamentally correct. Even a demonstrated ability to successfully predict the future is no guarantee, because some models may accidentally get it right.
ferdberple:
Thank you for your reply to me at February 8, 2014 at 10:44 am, which says
Yes, and that is why I also wrote
emphasis added: RSC
Richard
“Will we see more heat waves in the future or fewer?” ….. Wouldn’t that be better stated as “Is there going to be a change in the frequency of heat waves?” What are the proposed reasons for expecting such a change? Is there current evidence for more/fewer heat waves in the last decade as compared to historical records? Why is there no thought given to the possibility that heat-wave frequency will not deviate from the known record?
Richard – Hindcast testing of skill is a necessary condition. I agree, however, that it is
i) not a complete assessment, but it is the best that can be done (without waiting 50 years)
and
ii) the IPCC failed to satisfactorily do that test.
Roger
Roger A. Pielke Sr. says:
February 8, 2014 at 12:41 pm
“Richard – Hindcast tests of skill is a necessary condition. I agree, however, that it is
i) not a complete assessment, but it is the best that can be done (without waiting 50 years)
and
ii) the IPCC failed to satisfactorily do that test. ”
When one builds a model and one wants to use hindcasting as a validation tool, it is necessary not to use the data against which one does the hindcasting during the training (“parameterization” in climate science) of the model.
I have never seen any mention of that in any paper about climate modeling. A serious attempt at validation by hindcasting would have to detail how “knowledge pollution” is avoided during the setup; how did they make sure that no knowledge about the hindcasting period could have influenced the training?
Have any such attempts been made? I know of none.
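DirkH's requirement — that the hindcast period must never influence the training — is the ordinary train/holdout discipline of model validation. A minimal sketch with a simple linear-trend "model" fitted to synthetic data (the trend, noise level, and dates are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 60-year temperature-anomaly series: a linear trend plus noise.
years = np.arange(1950, 2010)
truth = 0.015 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

# Tune the model's free parameters ONLY on the training period;
# the holdout years are never seen during "parameterization".
train = years < 1990
holdout = ~train
coeffs = np.polyfit(years[train] - 1950, truth[train], 1)

# Validate the "hindcast" on the untouched holdout period.
pred = np.polyval(coeffs, years[holdout] - 1950)
rmse = np.sqrt(np.mean((pred - truth[holdout]) ** 2))
print(round(rmse, 3))
```

The discipline is entirely in the `train`/`holdout` split: if the held-out years leak into the fitting step, the resulting "hindcast skill" proves nothing about predictive ability.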
I think “Shut up.” is a perfectly valid answer to a stupid question.
Roger A. Pielke Sr.:
Sincere thanks for your response to my comments to ferdberple which you provide with your post at February 8, 2014 at 12:41 pm.
Please note that what I am saying is NOT a disagreement with your fine article which I applaud.
I agree with your two points in the post addressed to me, and I also agree with the resulting post made by DirkH at February 8, 2014 at 12:56 pm. But my concern at the use of the models is much, much more fundamental than yours.
I became concerned at what I saw as the misuse of the models when I studied development of the Hadley Center GCM in the 1990s
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
My concern was enhanced by reading Kiehl’s assessment in 2007. He found the same as me, but his analysis covered 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
I had published that the Hadley GCM could not model climate, and that agreement between past average global temperature and the model’s indications of average global temperature was obtained only by forcing the agreement with an input of assumed anthropogenic aerosol cooling. And my paper showed that the assumed aerosol cooling was not the cause – or at least not the sole cause – of the model’s failure to hindcast.
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
Simply, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Hence, the models are excellent heuristic tools. And they should be used as such.
But there is no reason to suppose that any of them is a predictive tool. And averaging model predictions (e.g. CMIP5) is an error because average wrong is wrong.
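The "average wrong is wrong" point can be illustrated with invented numbers: if every model runs hot by its own amount (each with its own aerosol 'fix'), the ensemble mean inherits the shared bias rather than cancelling it:

```python
import numpy as np

true_trend = 0.10  # invented "real" warming trend, degC per decade

# Five hypothetical models, each 'running hot' by a different amount
model_trends = np.array([0.18, 0.22, 0.15, 0.25, 0.20])

ensemble_mean = model_trends.mean()
errors = model_trends - true_trend

print(round(ensemble_mean, 2))   # the average of wrong answers
print(round(errors.mean(), 2))   # the shared bias survives averaging
```

Averaging only removes errors that are independent and zero-mean across models; a bias common to all members passes straight through the ensemble mean.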
But, as you say, the models are being used as predictive tools such that their projections are influencing policy. It is less problematic to know one is ignorant when formulating policy than to formulate policy while influenced by false information.
Anyway, those are my views and I hope they are useful to your thought. If so, then I have provided some repayment for the interest which you have given to me by providing your excellent article.
Richard
ferdberple says:
For any set of N+1 data points, you can mathematically solve for a polynomial of degree N that passes exactly through all the data points. This is the training run for your model, where you adjust the coefficients (weights) of the polynomial to fit the data.
From now on, your model will perfectly hindcast the past. However, it will have no skill at predicting the future. For example, consider:
ax^2 + bx + c = y
If you now have 3 data points (x, y), the computer can solve for a, b, c such that for each of those three values of x you will get the correct value of y. Now consider that x is the date and y is the temperature. Your model will now correctly ‘predict’ the past temperature (y) for any of the 3 dates (x) provided.
However, when you ask the computer to predict the temperature for any date it has not seen, it will demonstrate no more skill than a dart board.
The dart board might well do a better job, since it produces some degree of randomness, whereas the equation will always give exactly the same results. It’s unlikely to correctly predict anything other than the original known values. This is the case regardless of the number of known values used…
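ferdberple's polynomial example is easy to demonstrate concretely. The sketch below uses three invented (date, temperature) points; the fitted quadratic reproduces them exactly — a perfect "hindcast" — yet extrapolates absurdly:

```python
import numpy as np

# Three invented (date, temperature) points; x = years since 2000.
x = np.array([0.0, 1.0, 2.0])
y = np.array([14.2, 14.5, 14.3])

# A degree-2 polynomial through 3 points: a perfect "hindcast".
a, b, c = np.polyfit(x, y, 2)
hindcast = a * x**2 + b * x + c
print(np.allclose(hindcast, y))   # True: the past is reproduced exactly

# Extrapolating the same polynomial ten years ahead is meaningless.
forecast_2010 = a * 10**2 + b * 10 + c
print(round(forecast_2010, 1))    # an absurdly cold value
```

With these three points the fit is a = -0.25, b = 0.55, c = 14.2, so the year-2010 "forecast" is -5.3 °C: a perfect hindcast with no forecast skill whatsoever.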
Steven Mosher says:
“As a Policy maker…”
If that is the case, then IMHO we are in deep doo-doo. ☹
Really.
Policy should be made on the basis of real world data and empirical observations, not on computer models.
Models have their place. But when the real world conflicts with models, then we should go with what the real world is telling us.
The real world is telling us that global warming stopped about seventeen years ago. So there is no urgency; ipso facto.
Besides, “policy” seems to be that the U.S. never says one word about China, Russia, India, or a hundred other countries’ ramping up of their CO2 emissions. Therefore, ‘policy’ is merely self-serving alarmist propaganda, intended to separate American taxpayers from their money. That is surely not a good thing. Who do these policy makers represent? Americans? Or other UN countries?
Wake me when ‘policy’ becomes objective again. So far, it only serves the interests of the climate alarmist clique.
dbstealey says:
February 8, 2014 at 2:01 pm
“Steven Mosher says:
“As a Policy maker…”
If that is the case, then IMHO we are in deep doo-doo.”
Only California for the moment, and they’re there already.
DirkH says:
February 8, 2014 at 12:56 pm
how did they make sure that no knowledge about the hindcasting period could have influenced the training?
===========
The evidence suggests quite the opposite. It is entirely possible that 17 years ago a number of climate models predicted that temperatures would flat line. How many of those models would have survived that prediction?
Almost none, because the scientists involved would have said “this can’t be right”. So they would have adjusted the models until the models delivered a result the scientists believed was right. Thus, those models that predicted the right answer would have been thrown on the rubbish pile of history, and those models that predicted rubbish would have survived and been published by the IPCC.
So you can see, no climate model can predict the future better than the climate scientists themselves, because if the scientists don’t believe the computer model, they will change the model until they do believe it.
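The selection effect described above can be illustrated with a toy Monte Carlo sketch. Everything here is hypothetical: each "model" is reduced to nothing but a single random trend number, and the filtering rule is a stand-in for "this can't be right":

```python
import random

random.seed(1)

# Toy world: each "model" is just a random trend (deg C/decade).
# By construction, the true trend in this toy world is zero.
models = [random.gauss(0.0, 0.2) for _ in range(10_000)]

# Suppose researchers expect warming and discard any model whose
# hindcast trend is not clearly positive ("this can't be right").
survivors = [m for m in models if m > 0.1]

mean_all = sum(models) / len(models)
mean_surv = sum(survivors) / len(survivors)

# The full ensemble is unbiased; the surviving ensemble is not.
print(f"mean trend, all models:       {mean_all:+.3f}")
print(f"mean trend, surviving models: {mean_surv:+.3f}")
```

The surviving ensemble shows a strong warming trend even though the toy world has none at all: the filtering, not the physics, produced the result.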
Hi Richard
Thank you for your follow up. We are in complete agreement, as you wrote, that
“Hence, the models are excellent heuristic tools. And they should be used as such.
But there is no reason to suppose that any of them is a predictive tool. And averaging model predictions (e.g. CMIP5) is an error because average wrong is wrong.”
The bottom line, based on our perspective of the models, is that the IPCC Annex 1 results are fundamentally flawed.
Roger
Hi DirkH – I agree with you that they cannot train the models when the models are used to objectively test predictive skill with a hindcast approach. The papers I cited with respect to hindcast runs specifically described when they did make any adjustments to the results (e.g. see the Zhongfeng and Yang 2012 paper), which improved them as one would expect.
Indeed, showing the improvement when they do insert real data is an effective way to show the lack of skill of the original adjusted results.
Roger
I hate to pile onto Mosher. He’s really a nice guy. I mean that. But he makes it so easy:
Easy to say; impossible to do. If we could elect different people who would play fair, we would. But the system has been gamed. Even if it weren’t, votes don’t seem to matter any more. Even our Congressional Representatives no longer matter. Witness the ‘Dream Act’, which would have legalized millions of illegal aliens.
The Dream Act was thoroughly debated for months in Congress. When it was finally voted on, it lost decisively. Congress voted down the president’s Dream Act, because American citizens overwhelmingly deluged their Representatives with calls, emails and letters opposing it.
But Obama implemented it anyway. By decree! So Mosher’s “officials try to prepare for risks” disregards the plain fact that Americans Do. Not. Want. more tax money wasted on the fake “carbon” scare! But that failed Policy is being implemented anyway — witness Obama’s new ‘climate hubs’, which is yet another presidential decree — with no spending authorized, and without any Congressional approval. Therefore, I think we can disregard the failed advice to depend on elections to fix the carbon false alarm.
Mosher continues:
Yes. Self-serving policy wonks ‘want answers’. But doesn’t everyone ‘want answers’?
No. The public wants correct answers. But we aren’t getting them. We are getting wrong answers instead, based on the preconceived assumption that there is a “carbon” crisis. But the real world makes it clear that CO2 is not a problem at all. So those trying to influence policy are doing it for job security — not because they have correct answers. They don’t, as the real world makes clear.
Mosher adds:
Physician, heal thyself! You do not have “better answers” for our country. You have answers that are better for your own wallet. But those ‘answers’ are at the expense of the rest of us.
There is no climate crisis. There never was. There is no runaway global warming. Again, there never was. The whole CO2 scare is one giant head-fake, intended to separate taxpayers from their earnings. As such, it is extremely dishonest.
Every climate parameter has been exceeded in the past, and to a much greater degree. The fact is that nothing either unusual or unprecedented is happening. The reality is that the current global climate is extremely benign. Unusually benign. The past century and a half has been wonderful for the biosphere, including humanity. Yet the climate alarmist clique constantly tries to tell the public that doomsday is upon us. That is self-serving nonsense.
I keep asking: if there is any testable, measurable scientific evidence showing runaway global warming, or an approaching climate crisis, then… POST IT HERE. Show us your evidence! But there is no evidence.
I keep pointing out: computer models are not evidence. Evidence is testable, measurable raw data, and empirical [real world] observations.
The alarmist clique has no scientific evidence to support their assertions. None at all. But they keep telling the public that runaway global warming is upon us. It isn’t, as the real world makes clear. There isn’t a polite way to put it: they are lying in order to influence policy; to amass power and money for themselves. Their self-serving assertions are certainly not in the best interest of the citizens.
If the false-alarm clique decides to get its fingers out of our pockets, then we will finally be on the right track. Right now, we’re not. They can start by telling people the way things really are — not by incessantly sounding their “carbon” false alarm.
dbstealey:
In your post at February 8, 2014 at 3:16 pm you say
Indeed so.
I point out that
(a) in this thread we are discussing that the climate models are being used as predictive tools when they have no demonstrated predictive skill
and
(b) in another thread we are discussing that the statistical methods used by so-called ‘climate science’ are not fit for purpose
and
(c) in past threads we have discussed the problems with acquisition of climate data notably GASTA
and
(d) in another thread there is discussion of climate sensitivity which is a reflection of the problem of an inadequate theory of climate change.
Simply, the only thing about climate which is known with certainty is that nothing about climate behaviour is known with sufficient certainty to assist policy making. It is better to have no information than to be influenced by wrong information when formulating policy.
Richard