Climate Model Deception – See It for Yourself

From a workshop being held at the University of Northern Colorado today: "Teaching About Earth's Climate Using Data and Numerical Models" - click for more info

Guest post by Robert E. Levine, PhD

The two principal claims of climate alarmism are human attribution, which is the assertion that human-caused emissions of carbon dioxide are warming the planet significantly, and climate danger prediction (or projection), which is the assertion that this human-caused warming will reach dangerous levels. Both claims, which rest largely on the results of climate modeling, are deceptive. As shown below, the deception is obvious and requires little scientific knowledge to discern.

The currently authoritative source for these deceptive claims was produced under the direction of the UN-sponsored Intergovernmental Panel on Climate Change (IPCC) and is titled Climate Change 2007: The Physical Science Basis (PSB). Readers can pay an outrageous price for the 996-page bound book, or view and download it by chapter on the IPCC Web site at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html

Alarming statements of attribution and prediction appear beginning on Page 1 in the widely quoted Summary for Policymakers (SPM).

Each statement is assigned a confidence level denoting the degree of confidence that the statement is correct. Heightened alarm is conveyed by using terms of trust, such as high confidence or very high confidence.

Building on an asserted confidence in climate model estimates, the PSB SPM goes on to project temperature increases under various assumed scenarios that it says will cause heat waves, dangerous melting of snow and ice, severe storms, rising sea levels, disruption of climate-moderating ocean currents, and other calamities. This alarmism, presented by the IPCC as a set of scientific conclusions, has been further amplified by others in general-audience books and films that dramatize and exaggerate the asserted climate threat derived from models.

For over two years, I have worked with other physicists in an effort to induce the American Physical Society (APS) to moderate its discussion-stifling Statement on Climate Change, and begin to facilitate normal scientific interchange on the physics of climate. In connection with this activity, I began investigating the scientific basis for the alarmist claims promulgated by the IPCC. I discovered that the detailed chapters of the IPCC document were filled with disclosures of climate model deficiencies totally at odds with the confident alarmism of the SPM. For example, here is a quote from Section 8.3, on Page 608 in Chapter 8:

“Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”

For readers inclined to accept the statistical reasoning of alarmist climatologists, here is a disquieting quote from Section 10.1, on Page 754 in Chapter 10:

“Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”

The full set of climate model deficiency statements is presented in the table below. Each statement appears in the referenced IPCC document at the indicated location. I selected these particular statements from the detailed chapters of the PSB because they show deficiencies in climate modeling, conflict with the confidently alarming statements of the SPM, and can easily be understood by those who lack expertise in climatology. No special scientific expertise of any kind is required to see the deception in treating climate models as trustworthy, presenting confident statements of climate alarm derived from models in the Summary, and leaving the disclosure of climate model deficiencies hidden away in the detailed chapters of the definitive work on climate change. Climategate gave us the phrase “Hide the decline.” For questionable and untrustworthy climate models, we may need another phrase. I suggest “Conceal the flaws.”

I gratefully acknowledge encouragement and a helpful suggestion given by Dr. S. Fred Singer.

Climate Model Deficiencies in IPCC AR4 PSB
Chapter Section Page Quotation
6 6.5.1.3 462 “Current spatial coverage, temporal resolution and age control of available Holocene proxy data limit the ability to determine if there were multi-decadal periods of global warmth comparable to the last half of the 20th century.”
6 6.7 483 “Knowledge of climate variability over the last 1 to 2 kyr in the SH and tropics is severely limited by the lack of paleoclimatic records. In the NH, the situation is better, but there are important limitations due to a lack of tropical records and ocean records. Differing amplitudes and variability observed in available millennial-length NH temperature reconstructions, and the extent to which these differences relate to choice of proxy data and statistical calibration methods, need to be reconciled. Similarly, the understanding of how climatic extremes (i.e., in temperature and hydro-climatic variables) varied in the past is incomplete. Lastly, this assessment would be improved with extensive networks of proxy data that run up to the present day. This would help measure how the proxies responded to the rapid global warming observed in the last 20 years, and it would also improve the ability to investigate the extent to which other, non-temperature, environmental changes may have biased the climate response of proxies in recent decades.”
8 Executive Summary 591 “The possibility that metrics based on observations might be used to constrain model projections of climate change has been explored for the first time, through the analysis of ensembles of model simulations. Nevertheless, a proven set of model metrics that might be used to narrow the range of plausible climate projections has yet to be developed.”
8 Executive Summary 593 “Recent studies reaffirm that the spread of climate sensitivity estimates among models arises primarily from inter-model differences in cloud feedbacks. The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
8 8.1.2.2 594 “What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed, exploiting the newly available ensembles of models.”
8 8.1.2.2 595 “The above studies show promise that quantitative metrics for the likelihood of model projections may be developed, but because the development of robust metrics is still at an early stage, the model evaluations presented in this chapter are based primarily on experience and physical reasoning, as has been the norm in the past.”
8 8.3 608 “Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”
8 8.6.3.2.3 638 “Although the errors in the simulation of the different cloud types may eventually compensate and lead to a prediction of the mean CRF in agreement with observations (see Section 8.3), they cast doubts on the reliability of the model cloud feedbacks.”
8 8.6.3.2.3 638 “Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However, the evaluation of these assumptions is just beginning (Doutriaux-Boucher and Quaas, 2004; Naud et al., 2006).”
8 8.6.4 640 “A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.”
9 Executive Summary 665 “Difficulties remain in attributing temperature changes on smaller than continental scales and over time scales of less than 50 years. Attribution at these scales, with limited exceptions, has not yet been established.”
10 10.1 754 “Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
10 10.5.4.2 805 “The AOGCMs featured in Section 10.5.2 are built by selecting components from a pool of alternative parameterizations, each based on a given set of physical assumptions and including a number of uncertain parameters.”
110 Comments
October 21, 2010 3:44 am

Bill Toland makes an obvious but good point. Model ensembles need to be drawn across a wide spectrum and I too suspect that the IPCC’s subset is too restrictive. However, I am not too sure that isolating one particular assumption, a priori, would be the right way to go – and I know that is not what Bill is suggesting – if we can get real diversity of input and modelling independence, then the results will be of a much higher quality than any one model could achieve. The big question is how to do the ensemble modelling properly. The referee needs to be independent too and provide a mechanism for aggregating the results, something like Craven’s Bayesian approach. I do not think the IPCC would meet the first criterion. Perhaps this is where some of the large sums spent by governments on climate research could be usefully diverted.
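For readers wondering what a Bayesian-flavoured aggregation of an ensemble might look like in the simplest possible terms, here is a minimal sketch. It is not Craven's actual method and not anything the IPCC does; every number in it is invented for illustration. The idea is simply to weight each model by how well its hindcast matches an observation, then form a weighted projection.

```python
# Minimal sketch of likelihood-weighted model aggregation (illustrative only;
# not Craven's actual method or any IPCC procedure). All numbers are invented.
import numpy as np

hindcasts   = np.array([0.9, 1.4, 0.6, 1.1])   # each model's simulated past warming (hypothetical)
projections = np.array([3.2, 4.1, 2.0, 2.8])   # each model's future projection (hypothetical)
observed    = 0.8                              # observed past warming the hindcasts are judged against
obs_sigma   = 0.2                              # assumed observational uncertainty

# Gaussian likelihood of each model given the observation, normalised to weights.
weights = np.exp(-0.5 * ((hindcasts - observed) / obs_sigma) ** 2)
weights /= weights.sum()

weighted_projection = np.dot(weights, projections)
print("weights:", np.round(weights, 3))
print("likelihood-weighted projection:", round(weighted_projection, 2))
```

Models whose hindcasts sit far from the observation contribute almost nothing to the aggregate; the hard part in practice is that the observation, its uncertainty, and the choice of metric all have to be decided by someone independent of the modelling teams.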

Brownedoff
October 21, 2010 3:48 am

Nick Stokes says:
October 21, 2010 at 2:03 am
“But even stranger – the first two “deceptions” quoted don’t seem to have anything to do with models at all.”
===================================
Really?
It seems to me that the first two “deceptions” are saying that the output from the models cannot be justified because there is not enough information to verify what the models are showing for the period leading up to the last half of the 20th century.
That seems to have everything to do with models.

Keith at hastings UK
October 21, 2010 3:49 am

How to get this known? (rhetorical question!)
UK is still rushing ahead with windmills, carbon capture technologies, carbon taxes on business, etc, despite the recession, all “protected” in the just released round of spending cuts (actually, mostly cuts in spending increases)
I fear that only a series of cold years will serve to unseat most AGW believers.

Rob R
October 21, 2010 3:53 am

Prof Ryan
Many observers of climate science have developed the opinion that the real observable climate is not playing ball. The model ensemble mean is running too hot and virtually all the individual models are as well. You see, there is a history to this. The trial runs began some time ago. In order for the models to be proven right the “share price” will need to undergo a substantial upward correction.
The problem I see with the analogy as you have expressed it is that we are not talking about the value of an individual company on a particular day. We are dealing with the trend in the price, and this is subject to a short to medium-term random walk. In the end the market price is the price that counts, not the mean of the models. You can’t sell the shares in the market based on the modelled price, as no one will buy them if the price is artificially inflated. You can’t sell them based on the modelled price if the model undervalues them. On rare occasions the model ensemble may be correct, but that may be entirely by accident/chance. The way I look at it the market (buyers and sellers) factors in the relevant information with variable lag time depending on forcing factors, some of which are not completely obvious or clear. The climate behaves rather like this. The factors which force change in the climate are not all known at present, and even many of those that have been recognised are poorly understood. If you have been reading this blog for a while you will have noticed that several potentially significant forcing factors have been discovered since 2007. These will not yet be accounted for in the models. Clouds have been recognised as a forcing factor for a while, but we still don’t know for sure if the forcing is net positive or negative.
In your analogy at least the modelers have access to some decent data. In climate science the data sucks. So we don’t even know if the models are being calibrated correctly.
In your stock price example one can be reasonably sure that markets in different countries will value the same multinational stock at similar level and react quickly to currency changes. In climate science the models can’t seem to get things consistently right at sub-continental scale let alone country-scale.
To be honest with you I would say that the climate models are not capable of accurate forecasts. If the performance of the models were put to the test by experienced well-informed traders, rather than climate scientists, and if real money was on the line, I suspect the models would be ditched rather quickly. By the way do you know many experienced and successful traders that base stock purchases on model ensembles? Traders work on rapid delivery of real-time data (the current price). Model ensemble prices are outdated before the guidance is available to traders.
In the markets it is the price that counts. In the real world it is the weather that counts. When you break it right down people want to live in places where the weather is good, rather than where the climate is good. My opinion is that the weather controls the climate and that in fact the climate is made of weather. Anything that is further into the future than near-term weather is just speculative opinion. The same is true in the financial markets. The economic weather sure dealt with Enron and more recently rather a lot of Banks and a couple of Governments. The ensemble models didn’t do too well there.

Paul Coppin
October 21, 2010 3:57 am

Nick Stokes says:
October 21, 2010 at 2:03 am
What a strange post! The “deception” is proved entirely by quotes from the AR4!
But even stranger – the first two “deceptions” quoted don’t seem to have anything to do with models at all.

What a strange post! Perhaps you should begin all of your posts with “But, but, but…” You have to be the most obtuse fellow on the ‘net.

Brian Williams
October 21, 2010 3:57 am

But that nice Mr Gore said that the science was settled! Isn’t that why he bought the beach mansion?

David, UK
October 21, 2010 4:09 am

Professor Bob Ryan says:
October 21, 2010 at 1:30 am
Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort.
It is well established through research into predictive markets that aggregation eliminates unrealistic assumptions/beliefs and gives a central moment which is much closer to the underlying reality. To give an example: a study of 43 independent groups forecasting five year EPS and DPS figures produced 43 valuations of a target company none of which were close to the actual share price. The average was, however, very accurate – being just 30c out on a $34 share price. This type of study has been replicated on numerous occasions in different contexts. Obviously ensemble modelling of stock prices is far more straight-forward than modelling climate but the underlying approach is sound. As far as I understand the construction of an ensemble such as this one needs a number of conditions in place: the modelling should be conducted independently with different research groups and should not be simple replications. They should capture a wide variety of both exogenous and endogenous uncertainties and a wide variety of initial conditions. They should also be done coterminously eliminating feedback of individual model results into subsequent modelling and so on. The research design should examine ensemble performance on a stringent back-test and finally, and most importantly the team who analyse the ensemble should not be one of the teams doing the modelling.
Given this and if the ensemble performed well – as I suspect it might – then I think we might get some really credible forecasts and ultimately understanding of the causes of climate change.

That is incredibly flawed logic. All the models, as different as they are, have some things in common: they have built into them the pre-conceived prejudices and assumptions of the IPCC-funded programmer. To suggest that as wrong as individual models are, if you get enough of them then the errors will cancel each other out, makes no sense if they’re mostly all built with those same assumptions about feedbacks and sensitivity. You might as well just roll a dice several million times and take an average of that, it would be as useful.
And when you talk about “credible forecasts,” do you mean within the average lifetime, or after we’re all dead and no one is around to validate the forecast?
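David's point about shared assumptions can be illustrated with a few lines of code: averaging many models cancels the noise that differs between them, but leaves any bias they all share completely untouched. A minimal sketch with invented numbers:

```python
# Illustration of the comment's point: averaging many models cancels independent
# noise but not a bias they all share. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_value  = 1.0    # the "real" warming (hypothetical)
shared_bias = 1.5    # bias common to every model, e.g. from a shared feedback assumption
n_models    = 100

models = true_value + shared_bias + rng.normal(0.0, 0.5, size=n_models)

print("ensemble mean:     ", round(models.mean(), 2))                # ~2.5, not 1.0
print("error of the mean: ", round(models.mean() - true_value, 2))   # ~ the shared bias
```

Adding more models shrinks the scatter around 2.5, not the distance from 1.0; whether real climate models actually share such a bias is exactly the point in dispute here.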

Steve Allen
October 21, 2010 4:22 am

jonjermy says:
“Well, of course they have to feign uncertainty, because if they let it be known that they actually know it all then they wouldn’t get funding for more research. Come to think of it, that’s probably why those trivial and very minor errors sometimes slip through into AGW papers and articles; the Illuminati just don’t want to reveal yet that, in sober truth, they know everything that is to be known. It would blow our tiny minds.”
Right. Just like the Mayan kings knew it all. And the Egyptian pharaohs. They wasted empire-sized fortunes, employed all known science and engineering (and yes, undoubtedly learned aplenty in the process) on ridiculous crap to help in their afterlife. I know, some of you archaeology types might be offended. Yeah, if only those pyramid workers with “tiny minds” knew what any modern day school child knows today, representative democracy may have gotten off to an earlier start.

anna v
October 21, 2010 4:23 am

1) Weather was the first example used when chaos theory was being developed.
2) Climate is average weather, and as the study of fractals shows, once the underlying level is fractal, the levels above are too. Phrased differently, order can come out of chaos, as, for example, thermodynamics out of statistical mechanics, only if the mathematical description changes. Climate models are not in this class because they are weather models with some change of parameters and assumed averages.
3) Tsonis et al have shown how to use neural net models to describe the chaotic nature of climate. It is a beginning but that is the way to go in describing chaotic climate. They predict cooling for the next 30 years btw.
4) I doubt that the existing GCMs can be squeezed into a chaotic model. The very basic drawback is that they use numerical methods of solving non-linear differential equations. By necessity, this means that they are taking the first terms in a putative expansion of the ideal solution, usually Constant + a*X, where a is a parameter (related to the average of some quantity) they try to get from the data and let the system develop in time. At most, they might have a b*X^2 term, if they expect symmetry from physical arguments. This means that as the iteration in time develops, the solutions differ from the true solution more and more, because non-linear systems of coupled differential equations do not have linear solutions. This is the reason they cannot predict weather, sometimes not even for two days. And they have the hubris to believe they can morph these models using average values for a lot of stuff (those a’s) and they will have a solution close to the real solution of nature, good for 100 years!
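As a toy illustration of the sensitivity anna v describes (using the logistic map, not any actual GCM's equations), two nearly identical starting states of a simple non-linear iteration diverge to order one within a few dozen steps:

```python
# Toy illustration of sensitive dependence in a non-linear iterated system
# (the logistic map), not a claim about any particular GCM's equations.
x_a, x_b = 0.400000, 0.400001    # two nearly identical initial states
r = 3.9                          # parameter in the chaotic regime

for step in range(1, 51):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x_a - x_b):.6f}")

# The trajectories agree at first, then the difference grows to order one:
# small initial or truncation errors are amplified rather than averaged away.
```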

October 21, 2010 4:24 am

Reply to Eric: Thank you for your interesting comment. There is a substantial literature on prediction markets and how they work. All of the points you make are common across a range of modelling applications. With respect to the way that ensemble modelling can work I came across a useful overview you might find interesting http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA469632&Location=U2&doc=GetTRDoc.pdf.
As far as the mechanics of the process are concerned each model is of course an abstraction of reality. Each possesses certain attributes which are valid and some which are not. The trouble is no one knows with certainty which are which. As with markets there are some who believe that their models are better than others – the problem I think you have to accept is that there is no ‘correct model’. All models are to some degree or other incorrect. What ensemble modelling should do, if properly constructed, is to reduce the significance of the incorrect. The problem is of course that even the modellers cannot say to what degree their models misspecify reality – and certainly their critics would not agree with them anyway. In the study I referred to, each investigation team knew something of the company concerned, and across the set the forecasting models had varying degrees of sophistication, different types of data inputs, and different ways of reducing forecasts into spot values. They too possess a high degree of granularity of the sort you describe. The process I describe, and what a prediction market achieves, is to simulate across as wide a variety of different models as possible. Simulating with just one model will handle uncertainty attaching to the input variables – it cannot handle misspecification within the model itself – unless of course the modeller knew where the misspecification lay, and that is clearly what they do not know. I hope that helps.

October 21, 2010 4:27 am

Flagging Charles the Moderator.
Please don’t approve this comment!!!! Guest Post submission.

I have a post that is more informational in nature, but still pretty good, I think. It is up on my site, but here it is as well. If you don’t want it, please don’t approve this comment regardless.
Thanks,
John Kehr
Ice Core Data: Truths and Misconceptions
Overview (Scientific Content below)
There have been a variety of discussions in the past week about what ice core data tells us. There are several misconceptions about what information the ice core data contains. Some people believe that it is the air bubbles trapped in the ice that tell the temperature history of the Earth and others believe that the ice cores tell the temperature history of the location that the ice core came from.
Both of these beliefs are in fact incorrect. The reason is simple. It is the water itself that tells the temperature history in the ice cores. Since ice cores are a measure of the water in the ice, what the ice core is actually measuring is the condition of the oceans that the water originally evaporated from.
There are two methods of determining the past temperature. One is to measure the ratio of heavy and light oxygen atoms in the ice and the other is to measure the ratios of heavy and light hydrogen. Since these are the two components of water there are plenty of both to measure in ice. The ratio of heavy and light oxygen is the standard measurement.
Water that is made of the light oxygen evaporates more easily. Water made of the heavy oxygen condenses more easily. This means that the warmer the oceans are near the location of the glaciers or ice sheets, the more heavy oxygen there is in the ice core.
Another way to say it: the ratio records how warm the ocean near the ice core location is. The less heavy oxygen there is, the colder the water near the glacier; the warmer the water near the location of the ice core, the higher the content of heavy water in the core.
[Figure: Heavy Oxygen Cycle – NASA Earth Observatory]
The water that evaporates and falls as snow on a glacier (or ice sheet, but I will only use glacier now) records how warm or cold the water near it was. Then the layer the year after that tells the story of that year. Each layer is recorded one on top of the other. Scientists then measure each layer and count down how many layers they are from today. Then they can know the story told for an exact year. It is like counting the rings on a tree, but it is actually much more accurate. It also tells a much larger story than a tree (or forest) possibly could.
Since Greenland is near the end of the path for the Gulf Stream and most of the water vapor in the atmosphere in that region is from the Gulf Stream, ice cores from Greenland are a good indicator of the ocean temperatures in the north Atlantic Ocean. Ice cores in Alaska tell the story of the north Pacific Ocean. Different regions of Antarctica tell different stories based on the weather patterns of that region.
The simplest reason that it works this way is that cold water does not evaporate very much. Water that is 25 °C (77°F) evaporates about twice as much water vapor as water that is 15 °C (59°F). Since there is much more light oxygen than heavy oxygen, cold water evaporates very little heavy oxygen. The warmer the water, the more heavy oxygen is released.
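As a rough sanity check on the "about twice as much" figure, one can compare saturation vapour pressures with the Magnus approximation (a standard empirical fit; real evaporation also depends on wind, humidity and other factors, so treat the ratio as indicative only):

```python
# Rough check of the "about twice as much" claim using the Magnus approximation
# for saturation vapour pressure (hPa). Evaporation also depends on wind, humidity,
# etc., so this is only an order-of-magnitude sanity check.
from math import exp

def saturation_vapour_pressure_hpa(t_celsius: float) -> float:
    """Magnus-type empirical fit, valid roughly between -40 and +50 C."""
    return 6.112 * exp(17.67 * t_celsius / (t_celsius + 243.5))

e_warm = saturation_vapour_pressure_hpa(25.0)   # ~31.7 hPa
e_cool = saturation_vapour_pressure_hpa(15.0)   # ~17.0 hPa
print(f"25 C: {e_warm:.1f} hPa, 15 C: {e_cool:.1f} hPa, ratio ~ {e_warm / e_cool:.2f}")
```

The ratio comes out near 1.9, which is consistent with the "roughly double" figure quoted above.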
So ice cores are really telling a larger story than they are often given credit for. That is why they are so useful in understanding the global climate. A coral reef can tell a story, but only at the exact location that the coral exists. Since many of the coral reefs are near the equator, they see less of the change in the Earth’s climate and so they tell a very small portion of the story. This is especially true for the past couple of million years as most of the climate changes have been far stronger in the Northern Hemisphere than they have in any other place on Earth. There are very few corals in that part of the world.
This is also why ice cores in Antarctica can measure changes that are mostly happening in the Northern Hemisphere. Temperatures in Antarctica do not change as much as the ice cores indicate. What does change is how much warm water is close to Antarctica. During a glacial period (ice age) the oceans near both poles are much colder so the amount of heavy oxygen is very small. When the Northern Hemisphere is warmer (like now), the oceans have a higher sea level and warmer water is closer to both poles.
Even if a location in Antarctica stayed exactly the same temperature for 100,000 years, the ice core at that location would tell the temperature record of the ocean that evaporated the water that fell as snow at that location. In this way ice cores do not reflect the temperature of the location they are drilled. Ice cores primarily tell the record of the ocean the snow evaporated from and how far that water vapor traveled.
Any type of record that involves the ratio of oxygen that has fallen as rain can also tell the same story. Water that drips in caves to form stalagmites and stalactites can also be used to determine information like this. In places where there are no glaciers this type of thing can be done, but it is more complicated.
Ice cores give the broadest temperature reconstruction because of how the record accumulates. Each layer is distinct and can provide a wide view of the climate for the region for that specific year. This information is recorded for the period that matches the age of the glacier. The bottom and older layers do get squeezed by the weight above, but reliable ice cores that are hundreds of thousands of years old have been recovered.
Scientific Content….
————————————————————————————–
The specific oxygen isotopes are 16O and 18O. The hydrogen isotopes are hydrogen and deuterium. These are often called the stable isotopes. They are used precisely for the reason that they are stable over time. There can also be combinations of the different isotopes in a water molecule. The heavier the overall molecule is, the less it will evaporate and the quicker it will condense. Anything that triggers condensation will drop the amount of 18O that is present. Altitude is another factor that makes a difference as it also decreases the 18O content of the ice.
[Figure: The Inconvenient Skeptic]
What this does is complicate the comparison of ice cores. One location will have a different general path that the water molecules take. An ice core from a lower location will generally have a higher ratio of 18O. This makes comparing ice cores difficult. Each one must be separately calibrated to temperature. For this reason ice core data is not often converted to temperature, but left as isotope ratios. Plotting the isotope ratios will show the temperature history, but not calibrated to temperature. So scale is a factor that can be ignored or calibrated. Precise scale calibration is usually not needed though, as the relative changes are sufficient.
This is also why different ice cores have different ranges in the isotope record. The farther they are from the ocean source, the less the range will be. Some of the Greenland ice cores have a very large range as the Gulf Stream can get warm water close to Greenland. The Taylor ice core from Antarctica has about half the oxygen isotope range that those in Greenland do.
The Vostok or EPICA ice cores deep in Antarctica use the deuterium ratios because those will resolve at that distance, since so little heavy oxygen makes it that far. That makes the oxygen isotopes less useful in those locations. The light oxygen with the deuterium can make it that far though, and those ratios can be used instead of oxygen.
This should make it clear that ice cores do not tell the local temperature. There is no mechanism for the temperature of a location on an ice sheet to dictate the stable isotope ratios in the snow that falls there. It is truly a function of the distance and path from the location of evaporation to the location that the water molecules became part of the ice sheet.
No single type of record carries as much of a global temperature signal as the ice cores do. That is why they are so often used in paleoclimatology. A single ice core reveals a broader picture than any other method. The main limiting factor for ice cores is of course location. They only exist where there are permanent ice sheets. Glaciers that have a flow are also useless, as the record from a core is not a time series. That is why only certain glaciers or parts of glaciers can be cored.
Another problem is glaciers on locations that have warm rock (i.e., on a volcano). The bottom of the glacier is often melting and the age of the glacier is difficult to determine. That makes the temperature of the bottom ice of a core important. Cold ice at the bottom is an indication that the bottom ice was the forming layer of the glacier. That information can also be very, very useful.
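For reference, the isotope measurements described above are normally reported in "delta" notation relative to a standard ocean-water ratio (VSMOW). A minimal sketch of that arithmetic follows; the sample ratio is invented for illustration.

```python
# Delta notation used for the oxygen-isotope measurements discussed above.
# The VSMOW standard 18O/16O ratio is approximately 0.0020052; the sample
# ratio below is invented for illustration.
R_STANDARD = 0.0020052          # 18O/16O of Vienna Standard Mean Ocean Water

def delta_18o_permil(r_sample: float) -> float:
    """delta-18O in per mil: 1000 * (R_sample / R_standard - 1)."""
    return 1000.0 * (r_sample / R_STANDARD - 1.0)

r_sample = 0.0019350            # hypothetical ratio measured in glacial ice
print(f"delta-18O = {delta_18o_permil(r_sample):.1f} per mil")   # negative, i.e. depleted in heavy oxygen
```

More negative delta-18O means less heavy oxygen made it to the ice, which in the framework described above points to colder source water or a longer, colder transport path.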

Curiousgeorge
October 21, 2010 4:27 am

Confucius say: “If you can’t dazzle them with brilliance, baffle them with [/snip]“. Seems to be the motto engraved on the IPCC coat of arms.
[Vulgarity snipped.. .. bl57~mod]

October 21, 2010 4:30 am

RichieP says:
October 21, 2010 at 1:51 am
Petter says:
October 21, 2010 at 1:22 am
‘How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)’
Don’t worry, English *is my mother tongue and it still sounds like bs to me.
==========================================================
Wild a**ed Guess!!!!

Ammonite
October 21, 2010 4:39 am

John Marshall says: October 21, 2010 at 1:59 am
“… by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years…”
“Effective residence time” (the 100-200 year estimate) relates to the time it would take the atmospheric concentration of CO2 to return to its initial value if a pulse of CO2 were added. “Residence time” relates to the lifetime of an individual CO2 molecule in the atmosphere (the 5-10 year figure). The overloading of terms is a common source of confusion.
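The distinction can be made concrete with a toy two-box exchange model. All reservoir sizes and rate constants below are invented round numbers, not real carbon-cycle values: individual molecules leave the atmosphere on the short exchange timescale, yet an added pulse only relaxes to a new partition between the boxes rather than disappearing.

```python
# Toy two-box (atmosphere <-> surface reservoir) sketch of why "residence time"
# and "adjustment time" are different things. All sizes and rate constants are
# invented round numbers, not real carbon-cycle values.
k_out = 1.0 / 8.0              # fraction of atmospheric CO2 exchanged out per year
A_eq, S_eq = 750.0, 2250.0     # equilibrium reservoir sizes (arbitrary units)
k_in = k_out * A_eq / S_eq     # return rate chosen so the unperturbed state is steady

print("single-molecule residence time ~", 1.0 / k_out, "years")

# Add a pulse to the atmosphere and watch the *perturbation* evolve.
pulse = 100.0
A, S = A_eq + pulse, S_eq
steps_per_year = 10
dt = 1.0 / steps_per_year
for step in range(1, 60 * steps_per_year + 1):
    net_flux = k_out * A - k_in * S          # net flow out of the atmosphere
    A, S = A - net_flux * dt, S + net_flux * dt
    if step % steps_per_year == 0 and step // steps_per_year in (5, 20, 60):
        print(f"year {step // steps_per_year:2d}: airborne fraction of pulse = {(A - A_eq) / pulse:.2f}")

# Molecules swap on the ~8-year timescale, but the added amount only falls to the
# equilibrium partition between the boxes (~25% here stays airborne); removing the
# rest requires much slower processes that this sketch deliberately leaves out.
```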

October 21, 2010 4:55 am

Interesting.
Over on her Blog Dr. Curry has been discussing this very thing, especially in part 2:
http://judithcurry.com/2010/10/17/overconfidence-in-ipccs-detection-and-attribution-part-i/
http://judithcurry.com/2010/10/19/overconfidence-in-ipccs-detection-and-attribution-part-ii/

The IPCC AR4 has this to say about the uncertainties:
“Model and forcing uncertainties are important considerations in attribution research. Ideally, the assessment of model uncertainty should include uncertainties in model parameters (e.g., as explored by multi-model ensembles), and in the representation of physical processes in models (structural uncertainty). Such a complete assessment is not yet available, although model intercomparison studies (Chapter 8) improve the understanding of these uncertainties. The effects of forcing uncertainties, which can be considerable for some forcing agents such as solar and aerosol forcing (Section 9.2), also remain difficult to evaluate despite advances in research. Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.” The last sentence provides a classical example of IPCC’s leaps of logic that contribute to its high confidence in its attribution results

October 21, 2010 5:04 am

Reply to David: Thank you for your comment. The point I would make is that we simply do not know what is incorrect and what is correct in the way the models are built. We are all arguing, expert and critic alike, from positions of relative ignorance. This surely is the sceptic’s position on climate change. It is the same as the behavioural finance people who say that because individuals are ‘irrational’, market prices must therefore be irrational. The evidence is overwhelmingly that this is not the case. We do not know all the reasons why – and there are many conflicting views – but Mark Rubinstein’s ‘Rational Markets – Yes or No? The Affirmative Case’ (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=242259) is a first-class discussion of the issue.
What you appear to be saying is that there is some commonality in the models – some systematic bias which is incorrect. First, that isn’t true – there are many commentators on this website and others who claim to have models built on other assumptions. Second, you do not actually know that the dominant bias, as you see it, is actually incorrect. You think it is, but you do not know. Now if an ensemble of models failed back-testing systematically one way or another – because of the bias that you suggest – then the climate research community would really have to think again. As I have suggested above, a diverse modelling framework, properly coordinated and summarised, is our best chance of making sense of the wide disagreement on the outcomes of the various models that the above post reflects. But an effective ensemble process would have to be much more diverse than that currently considered by the IPCC – but then that is where I may be being naive.

Frank K.
October 21, 2010 5:13 am

John Marshall says:
October 21, 2010 at 1:59 am
“I have run a GCM on my computer and by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years, the supposed critical temperature rise becomes a fall in temperature, everything else being equal. Just shows how you can confuse with rubbish inputs into models. Do they work with chaotic systems? I do not think that they do.”
Hi John – would it be possible to post a summary of your experiences running your GCM? I’m willing to bet it’s pretty easy to get them to crash and burn by changing some of the parameters. People don’t realize how many “tuning knobs” and “empirically-derived constants” these models have in them…

Murray Grainger
October 21, 2010 5:29 am

The IPCC has helpfully explained all the known unknowns but they appear to have completely forgotten to explain the unknown unknowns.

Pascvaks
October 21, 2010 5:38 am

At the present time we have a World System of science oversight and management that boils down to the inmates being in charge of the nut house while the good guys are in straitjackets, sedated, and locked in rubber rooms. Little wonder that “science” is slipping. If someone within the scientific community doesn’t come up with a new set of protocols and do something fast, “scientists” will slip below “members of congress” on the scale of “Public Confidence”. Wonder how it happened? Who or what started the great slide? Bet it was those worthless hippies and their commie gurus of the 1950’s who are “managing” the system now in their dotage. College students today are soooo tame and docile… How times change!

October 21, 2010 5:48 am

John Marshall says:
October 21, 2010 at 1:59 am
I have run a GCM on my computer and by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years, the supposed critical temperature rise becomes a fall in temperature, everything else being equal.

When you did that, what did the [CO2] look like as a function of time – did it continue to increase like the real world, or did it decrease?

MikeP
October 21, 2010 5:51 am

RichieP says:
October 21, 2010 at 1:51 am
Petter says:
October 21, 2010 at 1:22 am
‘How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)’
Don’t worry, English *is my mother tongue and it still sounds like bs to me.
***An ensemble of opportunity is a collection of whatever you can find (possibly with a bit of selection). It’s gathering what opportunity throws your way (not random and not necessarily independent). It’s something like going into an apple orchard late season and picking up what you find on the ground.***

October 21, 2010 5:52 am

@Professor Bob Ryan
I think what they’re trying to say is that if you can have no confidence in the numbers the models generate then averaging them together just results in more numbers that you can’t have confidence in.
Let’s do a thought experiment. We’ll take 3 models that generate a temperature increase of +3 ± 0.5, +2 ± 0.5, and +1 ± 0.5 at a certain span in time. If you average them together you seem to get a value of 2 as the aggregate prediction. Let’s say that the actual value turns out to be 0.5. Sure, that is a possible value at the low-end prediction, but averaging in the other models (that we eventually determine to be incorrect) only hurts us.
But just because the model that predicted +1 ± 0.5 hits the temperature, even that doesn’t make it correct. It could be that the model that would have predicted 0 ± 0.5 is the correct model. Or −2 ± 3. At a different time scale, +1 ± 0.5 could have been completely wrong.
The problem with aggregating these models is that there’s enough uncertainty in them that you may as well be picking random numbers to average together for all of the confidence that you can place in them.
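To make the thought experiment concrete, here is a direct calculation with the three hypothetical models above and the assumed truth of 0.5; the spread of the models measures only how much they disagree with each other, not how far the ensemble is from reality.

```python
# Direct calculation for the thought experiment above: three hypothetical models
# at +3, +2 and +1 (each +/- 0.5) against an assumed "true" value of 0.5.
import statistics

model_means = [3.0, 2.0, 1.0]
true_value = 0.5

ensemble_mean = statistics.mean(model_means)     # 2.0
model_spread = statistics.pstdev(model_means)    # ~0.82, the models' disagreement

print("ensemble mean:", ensemble_mean)
print("spread of the models:", round(model_spread, 2))
print("error of the ensemble mean:", ensemble_mean - true_value)   # 1.5, i.e. 3x any single model's stated +/- 0.5
```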

October 21, 2010 5:56 am

Addendum: I guess what I’m saying is that in my opinion their error bars are a complete lie [perhaps you mean error . . lie is so pejorative . . mod] because they don’t acknowledge ALL of the uncertainty in the model, only in their measurements.

October 21, 2010 6:00 am

Rob R: thank you for your excellent comment, which to a very large extent I agree with. You are right that financial models give spot prices, but they also accurately predict the drift term in the stochastic process that financial securities typically follow. Anyway, that is not really the concern of this blog. What matters is how we make sense of what the models are telling us. Appealing to the historical climate record does not provide an answer. Even if we conclusively demonstrated that we were all warmer in 1066, that would not prove that current climate changes are not driven by human emissions. The answer to this ultimately must come from the models, which in the end are how we connect our theories of climate change to what we observe on the ground. What I suspect is the best chance we all have of getting closer to the truth is through some ensemble process – more independent of vested interests than it is now. At the very least it would help the more open-minded members of the climate science community to consider alternative theoretical explanations if, as you suggest, it becomes very clear that the ensemble is running too hot. At the moment it can be put down to singular as opposed to collective misspecification. Anyway, a fascinating debate, but it’s back to the marking. I will look by later to see if there is any support for the general thrust of what I am saying – or otherwise – criticism is good.

Tom in Florida
October 21, 2010 6:02 am

I believe what we have here is scientific proof of GIGO.
