
Guest post by Robert E. Levine, PhD
The two principal claims of climate alarmism are human attribution, which is the assertion that human-caused emissions of carbon dioxide are warming the planet significantly, and climate danger prediction (or projection), which is the assertion that this human-caused warming will reach dangerous levels. Both claims, which rest largely on the results of climate modeling, are deceptive. As shown below, the deception is obvious and requires little scientific knowledge to discern.
The currently authoritative source for these deceptive claims was produced under the direction of the UN-sponsored Intergovernmental Panel on Climate Change (IPCC) and is titled Climate Change 2007: The Physical Science Basis (PSB). Readers can pay an outrageous price for the 996-page bound book, or view and download it by chapter on the IPCC Web site at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html
Alarming statements of attribution and prediction appear beginning on Page 1 in the widely quoted Summary for Policymakers (SPM).
Each statement is assigned a confidence level denoting the degree of confidence that the statement is correct. Heightened alarm is conveyed by using terms of trust, such as high confidence or very high confidence.
Building on an asserted confidence in climate model estimates, the PSB SPM goes on to project temperature increases under various assumed scenarios that it says will cause heat waves, dangerous melting of snow and ice, severe storms, rising sea levels, disruption of climate-moderating ocean currents, and other calamities. This alarmism, presented by the IPCC as a set of scientific conclusions, has been further amplified by others in general-audience books and films that dramatize and exaggerate the asserted climate threat derived from models.
For over two years, I have worked with other physicists in an effort to induce the American Physical Society (APS) to moderate its discussion-stifling Statement on Climate Change, and begin to facilitate normal scientific interchange on the physics of climate. In connection with this activity, I began investigating the scientific basis for the alarmist claims promulgated by the IPCC. I discovered that the detailed chapters of the IPCC document were filled with disclosures of climate model deficiencies totally at odds with the confident alarmism of the SPM. For example, here is a quote from Section 8.3, on Page 608 in Chapter 8:
“Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”
For readers inclined to accept the statistical reasoning of alarmist climatologists, here is a disquieting quote from Section 10.1, on Page 754 in Chapter 10:
“Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
The full set of climate model deficiency statements is presented in the table below. Each statement appears in the referenced IPCC document at the indicated location. I selected these particular statements from the detailed chapters of the PSB because they show deficiencies in climate modeling, conflict with the confidently alarming statements of the SPM, and can easily be understood by those who lack expertise in climatology. No special scientific expertise of any kind is required to see the deception in treating climate models as trustworthy, presenting confident statements of climate alarm derived from models in the Summary, and leaving the disclosure of climate model deficiencies hidden away in the detailed chapters of the definitive work on climate change. Climategate gave us the phrase “Hide the decline.” For questionable and untrustworthy climate models, we may need another phrase. I suggest “Conceal the flaws.”
I gratefully acknowledge encouragement and a helpful suggestion given by Dr. S. Fred Singer.
Climate Model Deficiencies in IPCC AR4 PSB

| Chapter | Section | Page | Quotation |
|---|---|---|---|
| 6 | 6.5.1.3 | 462 | “Current spatial coverage, temporal resolution and age control of available Holocene proxy data limit the ability to determine if there were multi-decadal periods of global warmth comparable to the last half of the 20th century.” |
| 6 | 6.7 | 483 | “Knowledge of climate variability over the last 1 to 2 kyr in the SH and tropics is severely limited by the lack of paleoclimatic records. In the NH, the situation is better, but there are important limitations due to a lack of tropical records and ocean records. Differing amplitudes and variability observed in available millennial-length NH temperature reconstructions, and the extent to which these differences relate to choice of proxy data and statistical calibration methods, need to be reconciled. Similarly, the understanding of how climatic extremes (i.e., in temperature and hydro-climatic variables) varied in the past is incomplete. Lastly, this assessment would be improved with extensive networks of proxy data that run up to the present day. This would help measure how the proxies responded to the rapid global warming observed in the last 20 years, and it would also improve the ability to investigate the extent to which other, non-temperature, environmental changes may have biased the climate response of proxies in recent decades.” |
| 8 | Executive Summary | 591 | “The possibility that metrics based on observations might be used to constrain model projections of climate change has been explored for the first time, through the analysis of ensembles of model simulations. Nevertheless, a proven set of model metrics that might be used to narrow the range of plausible climate projections has yet to be developed.” |
| 8 | Executive Summary | 593 | “Recent studies reaffirm that the spread of climate sensitivity estimates among models arises primarily from inter-model differences in cloud feedbacks. The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.” |
| 8 | 8.1.2.2 | 594 | “What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed, exploiting the newly available ensembles of models.” |
| 8 | 8.1.2.2 | 595 | “The above studies show promise that quantitative metrics for the likelihood of model projections may be developed, but because the development of robust metrics is still at an early stage, the model evaluations presented in this chapter are based primarily on experience and physical reasoning, as has been the norm in the past.” |
| 8 | 8.3 | 608 | “Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.” |
| 8 | 8.6.3.2.3 | 638 | “Although the errors in the simulation of the different cloud types may eventually compensate and lead to a prediction of the mean CRF in agreement with observations (see Section 8.3), they cast doubts on the reliability of the model cloud feedbacks.” |
| 8 | 8.6.3.2.3 | 638 | “Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However, the evaluation of these assumptions is just beginning (Doutraix-Boucher and Quaas, 2004; Naud et al., 2006).” |
| 8 | 8.6.4 | 640 | “A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.” |
| 9 | Executive Summary | 665 | “Difficulties remain in attributing temperature changes on smaller than continental scales and over time scales of less than 50 years. Attribution at these scales, with limited exceptions, has not yet been established.” |
| 10 | 10.1 | 754 | “Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.” |
| 10 | 10.5.4.2 | 805 | “The AOGCMs featured in Section 10.5.2 are built by selecting components from a pool of alternative parameterizations, each based on a given set of physical assumptions and including a number of uncertain parameters.” |
From http://www.ciwem.org/media/144832/WAPUG_User_Note_13.pdf
“When force-fitting the assumption is made that the field survey data [insert paleo / temperature record] is correct. Arbitrary adjustments are then made to the input data so that the models response matches as closely as possible the results of the field monitoring [insert modern temperature record]. Little or no checking of data takes place. The result is a model which appears to be an excellent representation of reality, and which requires comparatively little effort to produce. However, the force-fitted model may not in fact be a true representation of existing conditions, this is because it is not possible without checking, to ascertain whether or not the adjusted input data itself correctly reflects reality.”
Stumpy,
I think your use of a cautionary note for use with a sewer model, inserting climate terms, shows your bias, and lack of objectivity.
It is easy enough to check your source and find that the terms “[insert paleo / temperature record]” and “[insert modern temperature record]” do not exist anywhere in the document you cite.
Sewer models are not climate models!
My understanding is that IPCC reports are finalised in rather an odd way – that the IPCC scientists do not draw up the executive summary, but it is instead the subject of negotiation between politicians/diplomats/who knows who? – and that the IPCC scientists are then under an obligation to edit the main report to conform to the executive summary. Have I got this wrong? – because, if not, that process would rather explain some of the oddities noted above …
Bob Ryan compares averaging economic models with averaging climate models. Essex and McKitrick’s book ‘Taken by Storm’ (http://www.takenbystorm.info/) answers this argument – McKitrick is an economist and Essex is a physicist. The factors involved are so complex, and so subject to small events (which large-scale models can’t see) causing large effects, that averaging does not make climate model predictions any more accurate.
Petter says:
October 21, 2010 at 1:22 am
How is one supposed to interpret “ensemble of opportunity”?
To create an ensemble of models, you take 100 IPCC model programmers, each of whom selects the parameters of his individual model based on the same theories of climate and the same measurements. You then run those models, possibly keep the ones that fit the agenda best, apply statistics, and declare high confidence in the ensemble average because the variance is low.
With this approach the precision of the projections is high, but the accuracy is still low, because none of the projections (maybe with the exception of the Russian one) matches what happened in the real world.
If the notion of “programmers selecting parameters” surprises you, ask why we have no IPCC projections of decreasing CO2 levels. Shouldn’t we vary our sociological assumptions as well before we take averages?
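The precision-versus-accuracy distinction above can be sketched numerically. The following is a minimal illustration, not real data: the trend, bias, and noise figures are all invented. If 100 models share a common bias from shared assumptions and tuning, their average has a tiny spread (high precision) while still missing the real-world value (low accuracy):

```python
import random

random.seed(42)

TRUE_TREND = 0.10   # hypothetical "real world" trend, deg C / decade
SHARED_BIAS = 0.15  # bias common to all models (shared assumptions/tuning)

# 100 hypothetical models: each projection = truth + shared bias + small
# individual noise from independent parameter choices.
projections = [TRUE_TREND + SHARED_BIAS + random.gauss(0, 0.02)
               for _ in range(100)]

mean = sum(projections) / len(projections)
var = sum((p - mean) ** 2 for p in projections) / len(projections)

print(f"ensemble mean:     {mean:.3f}")               # close to 0.25, not 0.10
print(f"ensemble std:      {var ** 0.5:.3f}")         # small: high precision
print(f"error vs reality:  {mean - TRUE_TREND:.3f}")  # large: low accuracy
```

The low spread among the models says nothing about the shared bias; it only measures how similar the models are to each other.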
Professor Bob Ryan says:
October 21, 2010 at 4:24 am
Excuse my ignorance, but what does any of this waffle have to do with the causes and effects of climate as described by, for example, Professor Lindzen or Bob Carter on the one hand, and the farrago of abstractions from the IPCC on the other?
Professor Bob Ryan says: “Well not quite – there is considerable difference between [monkey] ensembles and individual [monkeys.] Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual [monkeys] which could be surmounted by a well constructed [whole-bunch-o’-monkeys] methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort…”
No, the problem is that it relies on mönkeys–Wank-O-Matic computer programs that don’t reflect even a tenth part of the reality.
Prof Ryan
I clearly have less confidence in models and modelers than you do. Part of this lack of confidence rests in the incontestability of funding for the work. It’s like the modelers belong to a Trade Union to which only signed-up alarmists can belong. The system ensures that no other potential independent provider of climate modeling services will even bother seeking funding. So the rate of penetration of new ideas into the collective mind of the climate modeling community must be slower than it should be.
At this blog we see regular comments from a number of climate scientists. I have yet to see a climate modeller step forth into this forum to discuss or defend the work they do. They seem to be a rather insular group.
Petter asked:
I interpret that to refer to the output of the set of climate models available to sample. It means they are just an uncontrolled grab bag of models.
As Dr. Levine points out, it says on page 754 of the IPCC report: “Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
In other words, all these models can’t be averaged because there is no control over how they differ. It’s just a grab bag. They do not represent all likely cases.
Professor Bob Ryan says:
October 21, 2010 at 5:04 am
Ryan, if you think this ensemble idea is so great, why not sell the models to Xbox 360 players all around the world? Imagine what an ensemble!
The IPCC is tasked to find a warming trend based on CO2 emissions, and they do just that. They are not free to be independent; it’s not in the brief. However, they also stray into advocacy, which can be heard at Pachauri lectures but goes further, viz:
“Invitation Wednesday, September 29, 2010 at 18:00-20:00
The Club of Rome EU-Chapter (Brussels) in collaboration with the Club of Rome – European Support Centre (Vienna) is pleased to invite you to the 73rd Aurelio Peccei Lecture in the presence of H.R.H. Prince Philip of Belgium, Honorary President CoR-EU. Science, the Future of Climate, and Governance by Jean-Pascal van Ypersele.” http://www.clubofrome.at/events/lectures/73/index.html
“Jean-Pascal van Ypersele
IPCC Vice-chair and professor of climatology at UCL, Belgium, has been made Honorary Member of the Club of Rome EU Chapter by H.R.H. Prince Philip of Belgium.”
http://www.ipcc.ch/
“New ideas and strategies will be needed to ensure that improved living conditions and opportunities for a growing population across the world can be reconciled with the conservation of a viable climate and of the fragile ecosystems on which all life depends. A new vision and path for world development must be conceived and adopted if humanity is to surmount the challenges ahead.”
http://www.clubofrome.org/eng/new_path/
Petter says
What is the meaning of “Ensemble of opportunity”?
Essentially what is suggested by the only other similar phrase in English I know of, which is “crime of opportunity”, such as may occur if you leave your door open and get robbed: it happened because it was easy to do. The usage may be like a Freudian slip (crime of opportunity in mind).
My thanks to Rod McLaughlin and to Geoff for their references, which I will follow up and read. To P Wilson: I think I was careful to say that a successful ensemble analysis would require bringing a broader spectrum of models into the analysis than those considered by the IPCC. I was particularly amused by jorgekafkaza’s monkey interpolation. He may not be aware of the embarrassing fact that monkeys lobbing darts at the Financial Times or the Wall Street Journal are better on average at stock picking than the City’s or Wall Street’s finest equity analysts/fund managers. I wouldn’t write off the monkeys just yet – they might be better at climate modelling than you suspect.
“You’ve left out the primary statement buried in the Appendicies:-
“We definitely possibly maybe think we might know how it all works assuming our assumptions are right provided we really think we maybe have we’ve chosen the right assumptions…”
Could someone point me to this appendix? I think this is the first thing I ever heard of from the IPCC that actually made sense… 🙂
What makes CAGW the perfect “green collar crime” is that the core perpetrators are not actually lying. When all this falls apart, the scientists will say – we always stressed the uncertainties and clearly said these were only model projections. The IPCC will say – the 90% confidence was clearly our opinion – we did not misrepresent it as a scientific probability, and look, we even reported all these uncertainties in the text – your problem if you didn’t read it properly. The politicians will say we just followed the scientists and the green industry will say we just followed government policy.
Not one misdemeanor in the whole $trillion scam.
Makes you want to have a piece of it, don’t it? Now where was that home solar power installer’s phone number.
Way back in my Physics 101 days I learned a valuable lesson about laboratory experiments from my professor. It is better to do one very good experiment to get one very good measurement result than it is to do an experiment that requires you to average 1000 values to get a result. Based on this valuable lesson, I suggest that attempts to ensemble average a bunch of different climate models is a bad approach to doing climate science. Go out and do real science, climatologists:
1. Turn OFF your computer
2. Design a real experiment.
3. Go out into the field and conduct your experiment.
4. Return to your office with real experimental data.
5. Turn ON your computer.
6. Analyze your results.
7. Does the experiment confirm the theory?
   NO? Publish it and GoTo 1.
   YES? Publish it and GoTo 1.
bob says: (October 21, 2010 at 12:32 pm)
“Sewer models are not climate models!”
I assume this is humor, as the cautionary note stumpy referred to is apparently well understood by the “Team” – hence the call to “hide the decline”!.
If not humor, then perhaps it is your bias and lack of objectivity on display.
Bob
Professor Bob Ryan says: “///I was particularly amused with jorgekafkaza’s monkey interpolation. He may not be aware of the embarrassing fact that monkey’s lobbing a dart at the Financial Times or the Wall Street Journal are better on average at stock picking than the City’s or Wall Street’s finest equity analysts/fund managers. I wouldn’t write off the monkeys just yet – they might be better at climate modelling than you suspect.”
LOL. This merely highlights the low skill of equity analysts relative to monkeys. It does not establish the Ensemble as Der Übermönkey.
“The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
Ok, so they can`t actually model climate, and they will not be able to until looking at the solar factors that are driving it.
Levine has cleverly said nothing in such a way that some readers will think he has said something. All he has done is list statements of uncertainty in the IPCC report and claim the IPCC report ignores these. He has in no way demonstrated that the IPCC conclusions fail to account sufficiently for the uncertainties. He just says they don’t; so there!
But those of you looking for confirmation bias will feel confirmed!
“”” Professor Bob Ryan says:
October 21, 2010 at 1:30 am
LabMunkey:
‘It’s worse than we thought…..’
Well not quite – there is considerable difference between model ensembles and individual models. Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort. “””
I didn’t bother to append the rest of your post Professor; just enough to identify the citation.
I can see how your (purely mathematical) analysis might establish some correlations or other statistical metrics. But it all adds not one scintilla of knowledge about causality.
And don’t plead any “overwhelming preponderance of correlation.”
You would have to beat an agreement of experiment and theory to better than a part in 10^8 to get in the game; and sadly that example from science history circa 1960-70s failed to prove causation, and the theory was shown to be pure fiction. Well of course all science theories are pure fiction, as is all of mathematics. We make it all up out of whole cloth.
But this case was totally made up with no experimental input data whatsoever.
So don’t bet on statistics to solve science mysteries; the bar for burden of proof has already been placed out of reach, at an impossible level.
But I’ll keep you in mind when my next stock investments come up for review.
I should point out to Professor Ryan that one can do a perfectly elegant statistical analysis, or ensemble of statistical analyses, on all the telephone numbers in the Manhattan telephone directory, and no doubt reach some interesting results, including a nice average telephone number.
But it has no meaning or value; unless it turns out to be the telephone number for the New York Stock Exchange; or maybe YOUR phone number.
Professor Bob Ryan,
My understanding of the theory of group prediction about which you suggest is as follows:
1. all participants are humans, not models.
2. all participants possess some information about some aspects of the problem being investigated.
3. some of the information is correct, some is in error.
4. The concept uses many independent participants, who must participate in private without knowledge of the submissions or thinking of their fellow participants.
5. The ideas expressed by many of the participants will be wrong, BUT these errors will be idiosyncratic and will therefore not be reinforced by the different errors of others; they will all remain inconsequential, scattered low-level noise.
6. The ideas expressed that happen to be correct will accumulate and will come to predominate, as using many witnesses is likely to entail gathering the same correct ideas about fragments of the truth, many times over.
The IPCC modellers in contrast come from a very small clique who constantly communicate philosophies, ideas, systems and code.
Combining their models will never do anything to further the search for knowledge about climate or anything else.
You are searching for the Wisdom of Crowds.
The method you propose will only encounter the Madness of Crowds, the evil twin brother of wisdom.
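The contrast drawn above can be sketched numerically. All figures in this sketch are invented for illustration: idiosyncratic, independent errors (steps 4-5 of the list) cancel under averaging, while an error shared across a communicating clique survives any amount of averaging:

```python
import random

random.seed(0)
TRUTH = 50.0  # the unknown true value the crowd is estimating
N = 1000      # number of participants

# Independent participants: each error is idiosyncratic noise,
# so the errors cancel when averaged.
independent = [TRUTH + random.gauss(0, 10) for _ in range(N)]

# A communicating clique: everyone inherits one shared systematic error,
# plus a little individual noise. Averaging cannot remove the shared part.
SHARED_ERROR = 8.0
clique = [TRUTH + SHARED_ERROR + random.gauss(0, 1) for _ in range(N)]

def avg(xs):
    return sum(xs) / len(xs)

print(f"independent crowd average: {avg(independent):.2f}")  # near 50
print(f"clique average:            {avg(clique):.2f}")       # near 58
```

Note that the clique's individual estimates agree closely with each other, which is exactly the kind of low spread that invites misplaced confidence.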
anna v says: at 4:23 am Climate is average weather,…
It seems more so that climate is a “pattern over time” of things we think of as weather variables. For example, the Seattle, WA area is usually warm and dry in the N. H. summer but cool and damp in the winter. Meanwhile, Charleston, SC has its peak rainfall in August when it is warmest. If you only present the “average” of these things, then you miss the “pattern.”
Climates are much harder to come by than averages.
http://en.wikipedia.org/wiki/K%C3%B6ppen_climate_classification
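The same-average, different-pattern point can be illustrated with two invented monthly rainfall series (the numbers are illustrative only, not real station records): the annual means are identical, yet the seasonal patterns, and hence the climates, are opposite:

```python
# Monthly rainfall (mm), Jan..Dec -- invented figures for illustration.
winter_wet = [140, 110, 90, 60, 40, 30, 15, 20, 40, 80, 130, 145]  # Seattle-like
summer_wet = [50, 50, 60, 60, 80, 100, 130, 150, 110, 60, 30, 20]  # Charleston-like

mean_w = sum(winter_wet) / 12
mean_s = sum(summer_wet) / 12
peak_w = winter_wet.index(max(winter_wet)) + 1  # month number of wettest month
peak_s = summer_wet.index(max(summer_wet)) + 1

print(f"monthly means: {mean_w:.0f} vs {mean_s:.0f}")  # identical: 75 vs 75
print(f"wettest month: {peak_w} vs {peak_s}")          # 12 (Dec) vs 8 (Aug)
```

The average alone cannot distinguish the two stations; the pattern over the year is what classification schemes like Köppen's actually use.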
Dave Springer says: October 21, 2010 at 7:07 am
“The first test any model must pass is an accurate reconstruction of past climate change… If the climate history the model is designed to replicate is wrong then the model, by design, is wrong too.”
A very good point and one worthy of expansion. While past climate change includes periods distant in time such as ice ages (and the attendant difficulty in achieving sufficient accuracy of knowledge of that time), it also includes relatively well-established recent phenomena such as the climate response to the Mt Pinatubo eruption. It includes observable shifts in relative temperature in recent decades such as night-time increases relative to day-time, polar increases over equatorial, winter over summer, tropospheric over stratospheric… It includes the rise in water vapour content associated with the recent warming trend, and so forth.
A model that produces the recent effects mentioned from bottom-up behaviour may still be incorrect of course, but it need not necessarily suffer from “wrong climate history”. Not perfect does not necessarily mean totally worthless.