Climate Model Deception – See It for Yourself

From a workshop being held at the University of Northern Colorado today: "Teaching About Earth's Climate Using Data and Numerical Models" - click for more info

Guest post by Robert E. Levine, PhD

The two principal claims of climate alarmism are human attribution, which is the assertion that human-caused emissions of carbon dioxide are warming the planet significantly, and climate danger prediction (or projection), which is the assertion that this human-caused warming will reach dangerous levels. Both claims, which rest largely on the results of climate modeling, are deceptive. As shown below, the deception is obvious and requires little scientific knowledge to discern.

The currently authoritative source for these deceptive claims was produced under the direction of the UN-sponsored Intergovernmental Panel on Climate Change (IPCC) and is titled Climate Change 2007: The Physical Science Basis (PSB). Readers can pay an outrageous price for the 996-page bound book, or view and download it by chapter on the IPCC Web site at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html

Alarming statements of attribution and prediction appear beginning on Page 1 in the widely quoted Summary for Policymakers (SPM).

Each statement is assigned a confidence level denoting the degree of confidence that the statement is correct. Heightened alarm is conveyed by using terms of trust, such as high confidence or very high confidence.

Building on an asserted confidence in climate model estimates, the PSB SPM goes on to project temperature increases under various assumed scenarios that it says will cause heat waves, dangerous melting of snow and ice, severe storms, rising sea levels, disruption of climate-moderating ocean currents, and other calamities. This alarmism, presented by the IPCC as a set of scientific conclusions, has been further amplified by others in general-audience books and films that dramatize and exaggerate the asserted climate threat derived from models.

For over two years, I have worked with other physicists in an effort to induce the American Physical Society (APS) to moderate its discussion-stifling Statement on Climate Change, and begin to facilitate normal scientific interchange on the physics of climate. In connection with this activity, I began investigating the scientific basis for the alarmist claims promulgated by the IPCC. I discovered that the detailed chapters of the IPCC document were filled with disclosures of climate model deficiencies totally at odds with the confident alarmism of the SPM. For example, here is a quote from Section 8.3, on Page 608 in Chapter 8:

“Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”

For readers inclined to accept the statistical reasoning of alarmist climatologists, here is a disquieting quote from Section 10.1, on Page 754 in Chapter 10:

“Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
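To see why a statistical reading of an "ensemble of opportunity" can mislead, here is a minimal Monte Carlo sketch (all numbers hypothetical, not from the IPCC): when every model inherits a common shared error, the inter-model spread reflects only the independent error component and understates the true uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
shared_bias_sigma = 0.8   # hypothetical error component common to all models
independent_sigma = 0.6   # hypothetical error component unique to each model
n_models = 20

# An "ensemble of opportunity": every model inherits the same shared error,
# so the spread across models reflects only the independent component.
shared = rng.normal(0.0, shared_bias_sigma)
ensemble = shared + rng.normal(0.0, independent_sigma, n_models)

print("inter-model spread:", ensemble.std(ddof=1))
print("true uncertainty  :", np.hypot(shared_bias_sigma, independent_sigma))
```

The spread across models comes out near 0.6, while the true uncertainty (shared plus independent components combined in quadrature) is 1.0 — a statistical interpretation of the model spread alone would miss the shared part entirely.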

The full set of climate model deficiency statements is presented in the table below. Each statement appears in the referenced IPCC document at the indicated location. I selected these particular statements from the detailed chapters of the PSB because they show deficiencies in climate modeling, conflict with the confidently alarming statements of the SPM, and can easily be understood by those who lack expertise in climatology. No special scientific expertise of any kind is required to see the deception in treating climate models as trustworthy, presenting confident statements of climate alarm derived from models in the Summary, and leaving the disclosure of climate model deficiencies hidden away in the detailed chapters of the definitive work on climate change. Climategate gave us the phrase “Hide the decline.” For questionable and untrustworthy climate models, we may need another phrase. I suggest “Conceal the flaws.”

I gratefully acknowledge encouragement and a helpful suggestion given by Dr. S. Fred Singer.

Climate Model Deficiencies in IPCC AR4 PSB
Chapter Section Page Quotation
6 6.5.1.3 462 “Current spatial coverage, temporal resolution and age control of available Holocene proxy data limit the ability to determine if there were multi-decadal periods of global warmth comparable to the last half of the 20th century.”
6 6.7 483 “Knowledge of climate variability over the last 1 to 2 kyr in the SH and tropics is severely limited by the lack of paleoclimatic records. In the NH, the situation is better, but there are important limitations due to a lack of tropical records and ocean records. Differing amplitudes and variability observed in available millennial-length NH temperature reconstructions, and the extent to which these differences relate to choice of proxy data and statistical calibration methods, need to be reconciled. Similarly, the understanding of how climatic extremes (i.e., in temperature and hydro-climatic variables) varied in the past is incomplete. Lastly, this assessment would be improved with extensive networks of proxy data that run up to the present day. This would help measure how the proxies responded to the rapid global warming observed in the last 20 years, and it would also improve the ability to investigate the extent to which other, non-temperature, environmental changes may have biased the climate response of proxies in recent decades.”
8 Executive Summary 591 “The possibility that metrics based on observations might be used to constrain model projections of climate change has been explored for the first time, through the analysis of ensembles of model simulations. Nevertheless, a proven set of model metrics that might be used to narrow the range of plausible climate projections has yet to be developed.”
8 Executive Summary 593 “Recent studies reaffirm that the spread of climate sensitivity estimates among models arises primarily from inter-model differences in cloud feedbacks. The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
8 8.1.2.2 594 “What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed, exploiting the newly available ensembles of models.”
8 8.1.2.2 595 “The above studies show promise that quantitative metrics for the likelihood of model projections may be developed, but because the development of robust metrics is still at an early stage, the model evaluations presented in this chapter are based primarily on experience and physical reasoning, as has been the norm in the past.”
8 8.3 608 “Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”
8 8.6.3.2.3 638 “Although the errors in the simulation of the different cloud types may eventually compensate and lead to a prediction of the mean CRF in agreement with observations (see Section 8.3), they cast doubts on the reliability of the model cloud feedbacks.”
8 8.6.3.2.3 638 “Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However, the evaluation of these assumptions is just beginning (Doutraix-Boucher and Quaas, 2004; Naud et al., 2006).”
8 8.6.4 640 “A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.”
9 Executive Summary 665 “Difficulties remain in attributing temperature changes on smaller than continental scales and over time scales of less than 50 years. Attribution at these scales, with limited exceptions, has not yet been established.”
10 10.1 754 “Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
10 10.5.4.2 805 “The AOGCMs featured in Section 10.5.2 are built by selecting components from a pool of alternative parameterizations, each based on a given set of physical assumptions and including a number of uncertain parameters.”

110 Comments
October 22, 2010 12:12 am

I work in business. I’ve lost count of the number of executive summaries I’ve written and read. And I know that this is NOT executive summary material:
“The possibility that metrics based on observations might be used to constrain model projections of climate change has been explored for the first time, through the analysis of ensembles of model simulations. Nevertheless, a proven set of model metrics that might be used to narrow the range of plausible climate projections has yet to be developed.”
No executive I know would read that. It’s incomprehensible on a swift scan. A receiving executive would require a rewrite into English. Could it just have said:
“We’ve started to compare our models to reality, but failed so far to use this to make our predictions more reliable”?
This would have given executives something to go on. Perhaps this indicates how little the report has actually been read by executives. I guess that instead they’ve relied on others to tell them what it means… and so we see the results.

Robert E. Levine
October 22, 2010 1:31 am

Writing on October 21 at 6:35 pm, Mike is concerned that I have not “demonstrated that the IPCC conclusions do not sufficiently account for the uncertainties,” and that this omission will appeal to those looking for confirmation.
I think it was up to the authors of the IPCC report to assert and demonstrate that they accounted for all uncertainties, particularly those stated so clearly to be applicable. The Summary for Policymakers neither acknowledges the uncertainties I cited from the main report, nor states that they are accounted for. Mike’s implied suggestion that they might be accounted for does not prove that they have been, and in no way relieves the IPCC AR4 PSB authors of their responsibility to fully account for uncertainty and not publish a document with a misleading summary.
As I read the main document, some uncertainties appear to receive significant discussion and may be partially accounted for (e.g., the cloud simulation problem stated on Pages 593 and 638), while others do not appear to be accounted for at all (e.g., the lack of proven model metrics discussed on Page 591 and the problems of climatic state and poor model skill discussed on Page 608). However, the reader of this report, with its global scientific and policy impact, should not be obliged to make any inferences whatsoever concerning the accounting for uncertainties, both individually and in their aggregate overall impact on climate attribution and projection.
Moreover, the degree of uncertainty is in most instances stated in the SPM in terms of a level of confidence that is either high (about 8 out of 10 chance) or very high (at least 9 out of 10 chance). I suggest that the statement on Page 608 that the models must simulate the current climatic state “with some as yet unknown degree of fidelity” conflicts at face value with this asserted level of confidence, and that no special scientific or statistical expertise is required to so conclude. If Mike believes otherwise, I would like to know why.

Bill Toland
October 22, 2010 2:13 am

Rob R, you won’t see any climate modellers on this forum trying to defend their models.
As far as I can tell, there are two types of climate modeller.
The first type is a climate scientist who thinks he can program. Invariably, their programs are appalling and full of bugs.
The second type is a programmer who writes the model based on the instructions of a climate scientist. These programs are much better written, but they reflect the programmer’s understanding of climate science, which is often completely lacking.
In both cases, the creator of the computer model will not try to defend their model because he knows either his programming is shaky or his science knowledge is poor. At least I have a background in astrophysics and computing and I have a lot of experience in creating computer models. Perhaps this is why my climate model shows so little warming in the future.
In an attempt to address some of the problems listed above, some models are created using a team of climate scientists and programmers. Unfortunately, this brings to mind the story about a camel being a horse designed by a committee.
The people who create these climate models know how iffy their models are; that is why they liaise so much with other modellers. If models from university B and university C both show much more warming than their model, they assume that they have done something wrong and adjust their models accordingly. People from outside the modelling clique have no idea of the amount of informal communication that goes on among modellers. Unfortunately, this has resulted in a lot of models being very similar in their assumptions and therefore in their results.

peakbear
October 22, 2010 4:46 am

Bill Toland says: October 22, 2010 at 2:13 am
Good summary, Bill. Modelling can also lead to very lazy science and circular reasoning without people actually going out into the real world to measure, observe and collect real data.
The models themselves do have very good uses, such as a 3 day weather forecast for the UK for something like typical Atlantic weather coming in. It might be possible to improve the model to theoretically give say a 1% forecast improvement (whatever that would mean) but it obviously needs to be balanced with the observations out in the Atlantic that seeded the model in the first place. I don’t have the actual details but I believe the observational network is declining while modeling budgets increase, so we’d probably get a better rate of return on 1000 new buoys to measure things, though that would involve going out into the cold wet weather. Also satellites are used much more now and really can’t see through cloud very well, so the observations themselves can be quite bad at times.
Back to climate modeling, using these weather models for a 100 year forecast isn’t a sensible thing to do, it will just give you back the assumptions and parameterisations you put into the model in the first place.

Tim Clark
October 22, 2010 9:43 am

Not being up to date on programming, I usually don't find these discussions very appealing. However, there is something that bothers me about this proposition by Dr. Levine.
My simplistic concept of these GCMs is that they contain numerous subsets of algorithms intended to quantify a host of variable parameters. Probably an extensive listing, but along the lines of [CO2 physics], [frontal/wind/jet stream patterns], [temp-historical], [atmospheric physics], etc., with additional mathematical machinations to approximate the interactions, some linear, some not.
One of these subset algorithms, herein indicated [clouds], is described by the following quote from above:
“The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
and here:
“Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However, the evaluation of these assumptions is just beginning (Doutraix-Boucher and Quaas, 2004; Naud et al., 2006).”
This results in the GCM analysis, simply stated as [subset][subset][subset][cloud][etc].
Dr. Levine, you know as well as I do that the assumed forcing value contained in subset [cloud], which involves a large number of calculations, is reported in the IPCC report as positive ([+cloud]). This positive forcing is what produces CAGW. But as the IPCC itself admits, “the evaluation of these assumptions is just beginning.” CO2 alone yields only 1 degree or so (with a host of other assumptions).
Therefore, in order to entertain a diverse assembly of runs with a wide range of values to compensate for lack of knowledge as you propose, some runs must also contain [-cloud]. So, the equation of all runs averaged would be:
(X GCM runs of [subset]I[subset]I[subset]I[etc]I[+cloud] + X GCM runs of [subset]I[subset]I[subset]I[etc]I[-cloud]) / 2 = temp change per doubling of [CO2], where the operator I denotes some mathematical interaction.
My question, of course, is: doesn't the average of these result in zero cloud effect? I understand this is a very simplistic approach to GCMs, but in theory you are proposing just this: a democratic mathematical calculation, an average of assumptions. It would probably be closer to reality than the precontrived assumptions elaborated on above by others, but the result of averaging will still be only that: an average of assumptions. Tom Fuller essentially proposes the same thing with his club of 2.5 in http://wattsupwiththat.com/2010/10/20/the-league-of-2-5/
Far better, in my opinion, to defer any policy decisions until longer-term cyclic climate approximations can be refined. In the interim, we should drop the CAGW fearmongering and spend the money on proper data collection.
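Tim Clark's cancellation point can be checked with a toy calculation (all forcing values hypothetical, not taken from any GCM): if half the runs assume a positive cloud feedback and half assume an equal-magnitude negative one, the cloud terms cancel in the average and the ensemble mean reduces to the non-cloud response.

```python
# Toy ensemble: each run's temperature response is a base (non-cloud) term
# plus a cloud-feedback term. All values are illustrative only.
base_response = 1.0                      # deg C per CO2 doubling, cloud-free (hypothetical)
cloud_terms = [+0.5, +0.5, -0.5, -0.5]   # symmetric spread of cloud-feedback assumptions

runs = [base_response + c for c in cloud_terms]
ensemble_mean = sum(runs) / len(runs)

print(ensemble_mean)  # the +/- cloud terms cancel, leaving base_response
```

This is exactly the "average of assumptions" worry: a symmetric spread of cloud assumptions yields a mean with zero net cloud effect, regardless of what the true feedback is.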

Scott Covert
October 22, 2010 6:08 pm

Awesome comments!
I see some guest posts?
John Marshall?
Prof Ryan?
John Marshall, is your model open source?

Geoff
October 22, 2010 8:58 pm

Dear Prof. Ryan,
You’re welcome for the Knutti reference. If you are an AMS member, you may want to look at the article “in press” at the Journal of Climate by Pennell. If not, I reprint the abstract of the article here:
“Projections of future climate change are increasingly based on the output of many different models. Typically, the mean over all model simulations is considered as the optimal prediction, with the underlying assumption that different models provide statistically independent information evenly distributed around the true state. However, there is reason to believe that this is not the best assumption. Coupled models are of comparable complexity and are constructed in similar ways. Some models share parts of the same code and some models are even developed at the same center. Therefore, the limitations of these models tend to be fairly similar, contributing to the well-known problem of common model biases and possibly to an unrealistically small spread in the outcomes of model predictions.
This study attempts to quantify the extent of this problem by asking how many models there effectively are and how to best determine this number. Quantifying the effective number of models is achieved by evaluating 24 state-of-the-art models and their ability to simulate broad aspects of 20th century climate. Using two different approaches, we calculate the amount of unique information in the ensemble and find that the effective ensemble size is much smaller than the actual number of models. As more models are included in an ensemble the amount of new information diminishes in proportion. Furthermore, we find that this reduction goes beyond the problem of “same-center” models and that systemic similarities exist across all models. We speculate that current methodologies for the interpretation of multi-model ensembles may lead to overly confident climate predictions”.
On the Effective Number of Climate Models, Christopher Pennell and Thomas Reichler, (Department of Atmospheric Sciences, University of Utah), Journal of Climate, in press
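One common way to quantify an "effective number of models" of the kind Pennell and Reichler describe is an eigenvalue-based effective sample size: form the inter-model correlation matrix and take the participation ratio of its eigenvalues. This is a generic sketch of that idea on synthetic data — it is not the paper's actual method or data, and all numbers here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_gridpoints = 24, 500

# Synthetic "model errors": a few shared error modes plus independent noise,
# mimicking models that partly duplicate one another (shared code, shared centers).
shared_modes = rng.normal(size=(3, n_gridpoints))
weights = rng.normal(size=(n_models, 3))
errors = weights @ shared_modes + 0.5 * rng.normal(size=(n_models, n_gridpoints))

corr = np.corrcoef(errors)                  # 24 x 24 inter-model correlation matrix
eig = np.linalg.eigvalsh(corr)
n_eff = eig.sum() ** 2 / (eig ** 2).sum()   # participation ratio of the spectrum

print(f"nominal models: {n_models}, effective models: {n_eff:.1f}")
```

Because the synthetic errors are dominated by three shared modes, the effective count comes out far below the nominal 24 — the same qualitative result the abstract reports for real multi-model ensembles.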

P Wilson
October 23, 2010 2:27 am

Professor Bob Ryan says:
October 21, 2010 at 3:20 pm
Oh, I see. Well, let's forget the monkeys, as we have to find a correlation that can be modelled based on human activity and make it appear as a causal affinity, not the correlation that it is.
I propose that the increase in mobile phone use, coupled with the increase in global TV transmission, say from 1979 to the present day, could be programmed, and a correlation with the (very nominal) increase in temperatures would definitely be found.
This would dispense with the increasingly unconvincing need for CO2 as the culprit, which could be programmed out of the models.
It would still be b*ll**** but at least it would lay the blame on humanity's doorstep.
What do you think?

Tenuc
October 23, 2010 2:39 am

Geoff says:
October 22, 2010 at 8:58 pm
Quantifying the effective number of models is achieved by evaluating 24 state-of-the-art models and their ability to simulate broad aspects of 20th century climate. Using two different approaches, we calculate the amount of unique information in the ensemble and find that the effective ensemble size is much smaller than the actual number of models. As more models are included in an ensemble the amount of new information diminishes in proportion. Furthermore, we find that this reduction goes beyond the problem of “same-center” models and that systemic similarities exist across all models. We speculate that current methodologies for the interpretation of multi-model ensembles may lead to overly confident climate predictions”.
Good find, Geoff!
All the models used by the IPCC are using the same basic assumptions about how climate operates, with CO2 and an assumed positive feedback from water vapour.
However, in the real world temperature leads CO2 and extra water vapour has a negative feedback. No surprise that the current GCMs cannot model the detail of historic climate, and have no predictive power.
This deception of politicians and the public is shameful.

Karmakaze
October 23, 2010 1:21 pm

There are nearly 50 thousand members of the APS. How many of them supported this guy's views?
Is that why he quit?
