Climate Model Deception – See It for Yourself

From a workshop being held at the University of Northern Colorado today: "Teaching About Earth's Climate Using Data and Numerical Models" - click for more info

Guest post by Robert E. Levine, PhD

The two principal claims of climate alarmism are human attribution, which is the assertion that human-caused emissions of carbon dioxide are warming the planet significantly, and climate danger prediction (or projection), which is the assertion that this human-caused warming will reach dangerous levels. Both claims, which rest largely on the results of climate modeling, are deceptive. As shown below, the deception is obvious and requires little scientific knowledge to discern.

The currently authoritative source for these deceptive claims was produced under the direction of the UN-sponsored Intergovernmental Panel on Climate Change (IPCC) and is titled Climate Change 2007: The Physical Science Basis (PSB). Readers can pay an outrageous price for the 996-page bound book, or view and download it by chapter on the IPCC Web site at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html

Alarming statements of attribution and prediction appear beginning on Page 1 in the widely quoted Summary for Policymakers (SPM).

Each statement is assigned a confidence level denoting the degree of confidence that the statement is correct. Heightened alarm is conveyed by using terms of trust, such as high confidence or very high confidence.

Building on an asserted confidence in climate model estimates, the PSB SPM goes on to project temperature increases under various assumed scenarios that it says will cause heat waves, dangerous melting of snow and ice, severe storms, rising sea levels, disruption of climate-moderating ocean currents, and other calamities. This alarmism, presented by the IPCC as a set of scientific conclusions, has been further amplified by others in general-audience books and films that dramatize and exaggerate the asserted climate threat derived from models.

For over two years, I have worked with other physicists in an effort to induce the American Physical Society (APS) to moderate its discussion-stifling Statement on Climate Change, and begin to facilitate normal scientific interchange on the physics of climate. In connection with this activity, I began investigating the scientific basis for the alarmist claims promulgated by the IPCC. I discovered that the detailed chapters of the IPCC document were filled with disclosures of climate model deficiencies totally at odds with the confident alarmism of the SPM. For example, here is a quote from Section 8.3, on Page 608 in Chapter 8:

“Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”

For readers inclined to accept the statistical reasoning of alarmist climatologists, here is a disquieting quote from Section 10.1, on Page 754 in Chapter 10:

“Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”

The full set of climate model deficiency statements is presented in the table below. Each statement appears in the referenced IPCC document at the indicated location. I selected these particular statements from the detailed chapters of the PSB because they show deficiencies in climate modeling, conflict with the confidently alarming statements of the SPM, and can easily be understood by those who lack expertise in climatology. No special scientific expertise of any kind is required to see the deception in treating climate models as trustworthy, presenting confident statements of climate alarm derived from models in the Summary, and leaving the disclosure of climate model deficiencies hidden away in the detailed chapters of the definitive work on climate change. Climategate gave us the phrase “Hide the decline.” For questionable and untrustworthy climate models, we may need another phrase. I suggest “Conceal the flaws.”

I gratefully acknowledge encouragement and a helpful suggestion given by Dr. S. Fred Singer.

Climate Model Deficiencies in IPCC AR4 PSB
Chapter Section Page Quotation
6 6.5.1.3 462 “Current spatial coverage, temporal resolution and age control of available Holocene proxy data limit the ability to determine if there were multi-decadal periods of global warmth comparable to the last half of the 20th century.”
6 6.7 483 “Knowledge of climate variability over the last 1 to 2 kyr in the SH and tropics is severely limited by the lack of paleoclimatic records. In the NH, the situation is better, but there are important limitations due to a lack of tropical records and ocean records. Differing amplitudes and variability observed in available millennial-length NH temperature reconstructions, and the extent to which these differences relate to choice of proxy data and statistical calibration methods, need to be reconciled. Similarly, the understanding of how climatic extremes (i.e., in temperature and hydro-climatic variables) varied in the past is incomplete. Lastly, this assessment would be improved with extensive networks of proxy data that run up to the present day. This would help measure how the proxies responded to the rapid global warming observed in the last 20 years, and it would also improve the ability to investigate the extent to which other, non-temperature, environmental changes may have biased the climate response of proxies in recent decades.”
8 Executive Summary 591 “The possibility that metrics based on observations might be used to constrain model projections of climate change has been explored for the first time, through the analysis of ensembles of model simulations. Nevertheless, a proven set of model metrics that might be used to narrow the range of plausible climate projections has yet to be developed.”
8 Executive Summary 593 “Recent studies reaffirm that the spread of climate sensitivity estimates among models arises primarily from inter-model differences in cloud feedbacks. The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
8 8.1.2.2 594 “What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed, exploiting the newly available ensembles of models.”
8 8.1.2.2 595 “The above studies show promise that quantitative metrics for the likelihood of model projections may be developed, but because the development of robust metrics is still at an early stage, the model evaluations presented in this chapter are based primarily on experience and physical reasoning, as has been the norm in the past.”
8 8.3 608 “Consequently, for models to predict future climatic conditions reliably, they must simulate the current climatic state with some as yet unknown degree of fidelity.”
8 8.6.3.2.3 638 “Although the errors in the simulation of the different cloud types may eventually compensate and lead to a prediction of the mean CRF in agreement with observations (see Section 8.3), they cast doubts on the reliability of the model cloud feedbacks.”
8 8.6.3.2.3 638 “Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However, the evaluation of these assumptions is just beginning (Doutriaux-Boucher and Quaas, 2004; Naud et al., 2006).”
8 8.6.4 640 “A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.”
9 Executive Summary 665 “Difficulties remain in attributing temperature changes on smaller than continental scales and over time scales of less than 50 years. Attribution at these scales, with limited exceptions, has not yet been established.”
10 10.1 754 “Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
10 10.5.4.2 805 “The AOGCMs featured in Section 10.5.2 are built by selecting components from a pool of alternative parameterizations, each based on a given set of physical assumptions and including a number of uncertain parameters.”
110 Comments
richard telford
October 21, 2010 12:12 am

“Current spatial coverage, temporal resolution and age control of available Holocene proxy data limit the ability to determine if there were multi-decadal periods of global warmth comparable to the last half of the 20th century.”
This is not a criticism of the models but of the proxy-data.
I can think of much better places to conceal uncertainty than in the IPCC report.

LabMunkey
October 21, 2010 12:18 am

It’s worse than we thought…..

Adam Gallon
October 21, 2010 1:01 am

I wonder how many delegates and contributors to this circus show actually have
1) Read all 996 pages
2) Followed up all references
3) Understood it all
Reading just these selected highlights has made me go “Eh, what?” and I’ve a science degree, an IQ over 135 and am used to reading & interpreting medical clinical trial papers.
Reading this a few times leaves me with the following “take home message”
462 “We don’t know if the changes seen over the past decades are anything out of the ordinary as far as climate goes”
483 ” Ditto and we can’t even decide which, if any, set of treemometers, shellfish, layers of mud or offerings to the gods are the ones we should be using, nor can we agree how we should be analysing them, even if they are the right ones to use”
591 “Actually, we’ve been winging it ever since this circus started and we’ve still not even decided if we’re on the right track”
593 “We’re using a whole load of assumptions that we’ve really not got a shred of experimental data to support and we can’t even agree about what we should be measuring”
594 “We’ve been winging it and we’re still flapping like mad and getting nowhere fast”
595 “We’re still flapping our arms, we reckon we’ll fly because we’re doing basically what birds do and they fly”
608 “Anyone know what’s the best bird suit to wear whilst we flap?”
638 “Apparently, if we get enough monkeys typing, we’ll be able to produce the complete works of Shakespeare!”
638 “We’ve made a load of guesses, now we’ll start thinking about checking if they’re right.”
640 “However, we really don’t know how to check if our guesses are the ones we should have been using for the past, oooh, 30 years?”
665 “how warm is it anyway?”
754 “Quick, we really don’t know what we’re on about, write some management speak, pass me a copy of “The Dilbert Principle”!”
805 “Who’s for a quick game or three of Blackjack?”
I think I can sum it all up now.
A few scientists convinced themselves that we’re heading for trouble, a lot more thought it’d probably be a good idea to have a look at this, after all, there’s not a lot else to do at the moment with all these new PhD students. Some politicians decided that this looked like a good bandwagon to hitch a ride on; some of the scientists involved have been coughing discreetly, and pointing out that we really don’t know enough yet, but possibly with a bit more time, money & research we’ll have a better idea. Some more scientists have also decided that now they’ve rubbed the lamp and a Genie’s popped out, they’d better start thinking hard about which three questions they should be asking it, because knowing what Genies are like, there’s going to be tears before bedtime if they don’t get it right!
In the mean time, plenty of hangers on have realised that there’s a few easy bucks to be made and careers to be advanced here and if anyone rocks the boat, there’s going to be trouble!

Alan the Brit
October 21, 2010 1:17 am

You’ve left out the primary statement buried in the Appendices:-
“We definitely possibly maybe think we might know how it all works assuming our assumptions are right provided we really think we maybe have we’ve chosen the right assumptions to assume in the first place, assuming that the theory of manmade emissions of Carbon Dioxide really is a main driver of climate, assuming it really can produce changes in global averaged temperatures as some Swedish bloke thought over a hundred years ago, providing that he was wrong to completely change his mind 10 years later!”

Petter
October 21, 2010 1:22 am

How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)

Orson
October 21, 2010 1:24 am

It’s worse than I thought…..

kMc2
October 21, 2010 1:24 am

Thanks ever so. With this, to the local publisher, sold out to “settled science.”

Professor Bob Ryan
October 21, 2010 1:30 am

LabMunkey:
‘It’s worse than we thought…..’
Well not quite – there is considerable difference between model ensembles and individual models. Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort.
It is well established through research into predictive markets that aggregation eliminates unrealistic assumptions/beliefs and gives a central moment which is much closer to the underlying reality. To give an example: a study of 43 independent groups forecasting five year EPS and DPS figures produced 43 valuations of a target company none of which were close to the actual share price. The average was, however, very accurate – being just 30c out on a $34 share price. This type of study has been replicated on numerous occasions in different contexts. Obviously ensemble modelling of stock prices is far more straight-forward than modelling climate but the underlying approach is sound. As far as I understand the construction of an ensemble such as this one needs a number of conditions in place: the modelling should be conducted independently with different research groups and should not be simple replications. They should capture a wide variety of both exogenous and endogenous uncertainties and a wide variety of initial conditions. They should also be done coterminously eliminating feedback of individual model results into subsequent modelling and so on. The research design should examine ensemble performance on a stringent back-test and finally, and most importantly the team who analyse the ensemble should not be one of the teams doing the modelling.
Given this and if the ensemble performed well – as I suspect it might – then I think we might get some really credible forecasts and ultimately understanding of the causes of climate change.
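
The aggregation effect Professor Ryan describes is straightforward to demonstrate when, and only when, the individual forecasts err independently and carry no shared bias. Below is a minimal sketch under exactly that assumption; every number in it is invented for illustration and none comes from the study he cites.

```python
import random

random.seed(0)
TRUE_VALUE = 34.0   # the quantity being forecast (arbitrary units)
NOISE_SD = 4.0      # spread of each forecaster's independent error

def independent_forecast():
    # Each hypothetical forecaster is unbiased but noisy.
    return random.gauss(TRUE_VALUE, NOISE_SD)

for n in (1, 5, 43, 500):
    errors = []
    for _ in range(2000):
        ensemble_mean = sum(independent_forecast() for _ in range(n)) / n
        errors.append(abs(ensemble_mean - TRUE_VALUE))
    print(f"n={n:3d} independent forecasters: "
          f"mean absolute error of the ensemble ≈ {sum(errors)/len(errors):.2f}")
```

The ensemble error shrinks roughly as one over the square root of the number of independent forecasters, which is the statistical engine behind the prediction-market result. Whether climate models satisfy the independence condition is precisely what several commenters below dispute.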

jonjermey
October 21, 2010 1:32 am

Well, of course they have to feign uncertainty, because if they let it be known that they actually know it all then they wouldn’t get funding for more research. Come to think of it, that’s probably why those trivial and very minor errors sometimes slip through into AGW papers and articles; the Illuminati just don’t want to reveal yet that, in sober truth, they know everything that is to be known. It would blow our tiny minds.
Like Pooh-Bah: “I am, in point of fact, a particularly haughty and exclusive person, of pre-Adamite ancestral descent. … But I struggle hard to overcome this defect. I mortify my pride continually. When all the great officers of State resigned in a body because they were too proud to serve under an ex-tailor, did I not unhesitatingly accept all their posts at once?”

Frank
October 21, 2010 1:36 am

“each based on a given set of physical assumptions and including a number of uncertain parameters”.
Translates as: “We make it up as we go along”.

David, UK
October 21, 2010 1:51 am

@ richard telford, October 21, 2010 at 12:12 am:
Proxy data are an integral part of the models. To say that something is “not a criticism of the models but of the proxy-data” is a tad tenuous, and kind of misses the point.
I can think of much better places to conceal uncertainty than in the IPCC report.
Good, I’m glad you can, and I’m sure most of us are aware of other examples of hiding uncertainty. But the fact remains that uncertainty has been concealed in the IPCC report.

RichieP
October 21, 2010 1:51 am

Petter says:
October 21, 2010 at 1:22 am
‘How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)’
Don’t worry, English *is my mother tongue and it still sounds like bs to me.

NS
October 21, 2010 1:58 am

Professor Bob Ryan says:
October 21, 2010 at 1:30 am
LabMunkey:
‘It’s worse than we thought…..’
Well not quite – there is considerable difference between model ensembles and individual models. Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort.
It is well established through research into predictive markets that aggregation eliminates unrealistic assumptions/beliefs and gives a central moment which is much closer to the underlying reality.
—————
If there are base deficiencies in the ASSUMPTIONS IN THE THEORY underlying the models then this point is invalid. You can refer to the stock market crash of 2008 caused by CDOs based on risky sub-prime mortgages bundled together and sold as AAA debt using the statistical trickery referred to above.

John Marshall
October 21, 2010 1:59 am

I have run a GCM on my computer and by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years, the supposed critical temperature rise becomes a fall in temperature, everything else being equal. Just shows how you can confuse with rubbish inputs into models. Do they work with chaotic systems? I do not think that they do.

Tom
October 21, 2010 2:02 am

@Petter – Describing a data set as an ensemble of opportunity means that they haven’t got any plan for how to get an unbiased, representative data set – they’ve just used whatever data is available. They hope it is unbiased and representative, but who knows?
When you try to measure how reliable a product is, you pick samples on some random scheme that ensures the sample is representative. The current “ensemble of opportunity” of climate models is a bit like measuring reliability by testing all the ones that come back from customers because they don’t work – you will find that nearly 100% of them don’t work (there are some idiots out there, after all). But it’s a data set that is available to you, so unless you have access to a real sample then it might be the best you can do and you will have to infer from it what information you can.
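
Tom’s warranty-returns analogy can be written out as a toy simulation. The population size, defect rate and number of mistaken returns below are invented purely to illustrate the sampling point.

```python
import random

random.seed(1)
# Hypothetical production run: 100,000 units, 2% of them defective.
population = [random.random() < 0.02 for _ in range(100_000)]

# With a sampling protocol: draw 500 units at random.
random_sample = random.sample(population, 500)
print("random-sample defect rate:",
      sum(random_sample) / len(random_sample))      # close to 0.02

# "Ensemble of opportunity": only the units customers returned because
# they failed, plus a handful of working ones returned by mistake.
returned = [u for u in population if u] + \
           random.sample([u for u in population if not u], 50)
print("returned-units defect rate:",
      sum(returned) / len(returned))                # close to 1.0
```

The protocol-based sample recovers the true defect rate; the sample of opportunity does not, no matter how many units it contains.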

Nick Stokes
October 21, 2010 2:03 am

What a strange post! The “deception” is proved entirely by quotes from the AR4!
But even stranger – the first two “deceptions” quoted don’t seem to have anything to do with models at all.

Manfred
October 21, 2010 2:12 am

“As shown below, the deception is obvious and requires little scientific knowledge to discern.”
I think this is the most effective way to restore truth in this debate. My own selection of deceptions in climate science, which can be understood by anyone, is the following:
1. The global temperature record is false:
The assumed transition from bucket to inlet measurements did not happen in 1941 as assumed by all global temperature data sets. This assumption is proven to be false.
A simple verification of types of measurements here
http://ssrn.com/abstract=1653928 (figure 3.3 in downloadable document)
or here
http://climateaudit.org/2010/09/01/icoads-hawaii/
is sufficient to verify that the temperature adjustment ending in 1941 is false.
Temperatures need to be adjusted upwards until perhaps the 1980s and the size of this error is about the size of the complete assumed warming since the 1940s.
This is arguably the most influential error in climate science and alone capable of refuting the whole agenda.
2. The IPCC hides the lack of knowledge about climate feedbacks:
A simple comparison between the IPCC text and the referenced literature is sufficient to verify this claim.
http://climateaudit.org/2009/07/15/boundary-layer-clouds-ipcc-bowdlerizes-bony/
3. Hide the decline in context
Bringing the prima facie evidence of the climategate emails into context is an indisputable and easy-to-follow must-see lesson for everyone.
http://climateaudit.org/2009/12/10/ipcc-and-the-trick/
4. Prima facie evidence of FOI obstruction and the false statement of the Muir Russel inquiry:
2 days after David Holland’s FOI request, on the 29th May 2008, Phil Jones asked Michael Mann in an email:
“Can you delete any emails you may have had with Keith re AR4? Keith will do likewise. He’s not in at the moment – minor family crisis. Can you also email Gene and get him to do the same? I don’t have his new email address. We will be getting Caspar to do likewise”.
The Muir Russel inquiry concluded boldly:
“we have seen no evidence of any attempt to delete information in respect of a request already made.”
http://climateaudit.org/2010/07/22/blatant-misrepresentation-by-muir-russell-panel/

Bill Toland
October 21, 2010 2:17 am

As Professor Ryan has said, an ensemble of models can produce good predictions by averaging their output.
However, this only works well if the models are constructed independently of each other. Unfortunately, all of the models which the IPCC has chosen to use make the same (extremely dubious) assumption about water vapour feedback being positive. My climate model also shows a large temperature increase by using a large positive feedback in water vapour. However, water vapour feedback could just as easily be negative; in this case, my climate model actually forecasts a modest fall in temperature in the next century.
The IPCC models are also assuming a large increase in the rate of carbon dioxide accumulation in the atmosphere. This also appears to be unrealistic; the rate of increase appears to be remarkably steady and shows no sign of acceleration. This assumption has the effect of hugely exaggerating the temperature increase in the model output.

Eric (skeptic)
October 21, 2010 2:29 am

Professor Bob Ryan, can you explain how averaging incorrect models produces a correct result? It seems like you are implying that averaging models with different assumptions produces a better result than averaging multiple runs of the same model. I’m not sure even that more limited statement is true.
There are two main problems with models, both of which lead to incorrect results. The first is granularity: since models cannot resolve small-scale convective processes, those must be parameterized, and those assumptions dictate the results. Your claim may be true if different modelers are using a quantitatively full range of convective parameterizations, but I highly doubt that is the case; I have never heard of such an analysis. The second problem with models is chaos. In that case the statistics of many runs of the same model would be much more informative than the statistics of runs of different models. That still only reveals an average and an uncertainty, which will be quite large.

LJHills
October 21, 2010 3:15 am

So, to summarise, Playstation science has limited relevance to the real world.

Stephen Wilde
October 21, 2010 3:17 am

Well since we are considering models here is my latest take on the basic parameters and observations that the models need to take account of, all drawn into a reasonably coherent climate description:
http://climaterealists.com/index.php?id=6482&linkbox=true&position=4
It is best to go to the PDF version because the layout is easier to follow than the site version.

3x2
October 21, 2010 3:22 am

Petter says:
October 21, 2010 at 1:22 am
How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)

If you were to bet on every horse to win in a particular race then you have framed your “ensemble of opportunity”. At the end of the race you can, barring disaster, claim to have correctly picked the winner.
Likewise with your climate model “ensemble”. Throw in a model that indicates cooling and you can safely claim that whatever climate seems to be doing is well within your “ensemble” of “projections”.

Roger Carr
October 21, 2010 3:25 am

Petter says: (October 21, 2010 at 1:22 am) How is one supposed to interpret “ensemble of opportunity”?
Try this definition from the web, Petter: “In fluid mechanics, an ensemble is an imaginary collection of notionally identical experiments.”
After all, there is nothing much more “fluid” than climate science…

Stephen Wilde
October 21, 2010 3:26 am

About two years ago in a climate thread on a non climate site I carried out a very similar exercise, highlighting many of those hidden doubts and uncertainties for the benefit of a wider audience and pointing out the scale of the discrepancy between those doubts and uncertainties and the Summary For Policymakers.
I think this is potentially a very fruitful way for those more expert than me to dismantle the credibility of the IPCC once and for all. The Summary is completely discredited by the huge numbers of reservations and qualifications set out by the contributors in the main body of the documentation leading up to the Summary.
It has long been my opinion that the Summary is essentially a work of fiction.
In fact it was observing that very situation that gave me the incentive to try and produce something better and more in tune with real world observations.

Jim Cripwell
October 21, 2010 3:31 am

I put it a little more simply. NONE of the models used by the warmaholics has ever been validated.

Professor Bob Ryan
October 21, 2010 3:44 am

Bill Toland makes an obvious but good point. Model ensembles need to be drawn across a wide spectrum, and I too suspect that the IPCC’s subset is too restrictive. However, I am not too sure that isolating one particular assumption, a priori, would be the right way to go – and I know that is not what Bill is suggesting. If we can get real diversity of input and modelling independence, then the results will be of a much higher quality than any one model could achieve. The big question is how to do the ensemble modelling properly. The referee needs to be independent too and provide a mechanism for aggregating the results, something like Craven’s Bayesian approach. I do not think the IPCC would meet the first criterion. Perhaps this is where some of the large sums spent by governments on climate research could be usefully diverted.

Brownedoff
October 21, 2010 3:48 am

Nick Stokes says:
October 21, 2010 at 2:03 am
“But even stranger – the first two “deceptions” quoted don’t seem to have anything to do with models at all.”
===================================
Really?
It seems to me that the first two “deceptions” are saying that the output from the models cannot be justified because there is not enough information to verify what the models are showing for the period leading up to the last half of the 20th century.
That seems to have everything to do with models.

Keith at hastings UK
October 21, 2010 3:49 am

How to get this known? (rhetorical question!)
UK is still rushing ahead with windmills, carbon capture technologies, carbon taxes on business, etc, despite the recession, all “protected” in the just released round of spending cuts (actually, mostly cuts in spending increases)
I fear that only a series of cold years will serve to unseat most AGW believers.

Rob R
October 21, 2010 3:53 am

Prof Ryan
Many observers of climate science have developed the opinion that the real observable climate is not playing ball. The model ensemble mean is running too hot and virtually all the individual models are as well. You see, there is a history to this. The trial runs began some time ago. In order for the models to be proven right the “share price” will need to undergo a substantial upward correction.
The problem I see with the analogy as you have expressed it is that we are not talking about the value of an individual company on a particular day. We are dealing with the trend in the price, and this is subject to a short to medium-term random walk. In the end the market price is the price that counts, not the mean of the models. You can’t sell the shares in the market based on the modelled price, as no one will buy them if the price is artificially inflated. You can’t sell them based on the modelled price if the model undervalues them. On rare occasions the model ensemble may be correct, but that may be entirely by accident/chance. The way I look at it the market (buyers and sellers) factors in the relevant information with variable lag time depending on forcing factors, some of which are not completely obvious or clear. The climate behaves rather like this. The factors which force change in the climate are not all known at present, and even many of those that have been recognised are poorly understood. If you have been reading this blog for a while you will have noticed that several potentially significant forcing factors have been discovered since 2007. These will not yet be accounted for in the models. Clouds have been recognised as a forcing factor for a while, but we still don’t know for sure if the forcing is net positive or negative.
In your analogy at least the modelers have access to some decent data. In climate science the data sucks. So we don’t even know if the models are being calibrated correctly.
In your stock price example one can be reasonably sure that markets in different countries will value the same multinational stock at similar level and react quickly to currency changes. In climate science the models can’t seem to get things consistently right at sub-continental scale let alone country-scale.
To be honest with you I would say that the climate models are not capable of accurate forecasts. If the performance of the models were put to the test by experienced well-informed traders, rather than climate scientists, and if real money was on the line, I suspect the models would be ditched rather quickly. By the way do you know many experienced and successful traders that base stock purchases on model ensembles? Traders work on rapid delivery of real-time data (the current price). Model ensemble prices are outdated before the guidance is available to traders.
In the markets it is the price that counts. In the real world it is the weather that counts. When you break it right down people want to live in places where the weather is good, rather than where the climate is good. My opinion is that the weather controls the climate and that in fact the climate is made of weather. Anything that is further into the future than near-term weather is just speculative opinion. The same is true in the financial markets. The economic weather sure dealt with Enron and more recently rather a lot of Banks and a couple of Governments. The ensemble models didn’t do too well there.

Paul Coppin
October 21, 2010 3:57 am

Nick Stokes says:
October 21, 2010 at 2:03 am
What a strange post! The “deception” is proved entirely by quotes from the AR4!
But even stranger – the first two “deceptions” quoted don’t seem to have anything to do with models at all.

What a strange post! Perhaps you should begin all of your posts with “But, but, but…” You have to be the most obtuse fellow on the ‘net.

Brian Williams
October 21, 2010 3:57 am

But that nice Mr Gore said that the science was settled! Isn’t that why he bought the beach mansion?

David, UK
October 21, 2010 4:09 am

Professor Bob Ryan says:
October 21, 2010 at 1:30 am
Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort.
It is well established through research into predictive markets that aggregation eliminates unrealistic assumptions/beliefs and gives a central moment which is much closer to the underlying reality. To give an example: a study of 43 independent groups forecasting five year EPS and DPS figures produced 43 valuations of a target company none of which were close to the actual share price. The average was, however, very accurate – being just 30c out on a $34 share price. This type of study has been replicated on numerous occasions in different contexts. Obviously ensemble modelling of stock prices is far more straight-forward than modelling climate but the underlying approach is sound. As far as I understand the construction of an ensemble such as this one needs a number of conditions in place: the modelling should be conducted independently with different research groups and should not be simple replications. They should capture a wide variety of both exogenous and endogenous uncertainties and a wide variety of initial conditions. They should also be done coterminously eliminating feedback of individual model results into subsequent modelling and so on. The research design should examine ensemble performance on a stringent back-test and finally, and most importantly the team who analyse the ensemble should not be one of the teams doing the modelling.
Given this and if the ensemble performed well – as I suspect it might – then I think we might get some really credible forecasts and ultimately understanding of the causes of climate change.

That is incredibly flawed logic. All the models, as different as they are, have some things in common: they have built into them the pre-conceived prejudices and assumptions of the IPCC-funded programmer. To suggest that as wrong as individual models are, if you get enough of them then the errors will cancel each other out, makes no sense if they’re mostly all built with those same assumptions about feedbacks and sensitivity. You might as well just roll a dice several million times and take an average of that, it would be as useful.
And when you talk about “credible forecasts,” do you mean within the average lifetime, or after we’re all dead and no one is around to validate the forecast?
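
The objection being made here can be put in a few lines of code: if every model in a hypothetical ensemble inherits the same biased assumption, averaging removes only the model-to-model scatter and leaves the shared bias untouched. All values below are invented for illustration.

```python
import random

random.seed(2)
TRUE_VALUE = 0.5      # hypothetical "real" answer (arbitrary units)
SHARED_BIAS = 1.5     # error common to every model (the shared assumption)
SCATTER_SD = 0.6      # each model's own independent quirks

def model_run():
    # Every hypothetical model carries the shared bias plus its own noise.
    return TRUE_VALUE + SHARED_BIAS + random.gauss(0.0, SCATTER_SD)

for n in (1, 10, 100, 1000):
    ensemble_mean = sum(model_run() for _ in range(n)) / n
    print(f"n={n:4d} models: ensemble mean = {ensemble_mean:.2f} "
          f"(true value = {TRUE_VALUE})")
```

Contrast this with the independent-error sketch earlier in the thread: averaging only helps with the part of the error that differs from model to model.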

Steve Allen
October 21, 2010 4:22 am

jonjermey says:
“Well, of course they have to feign uncertainty, because if they let it be known that they actually know it all then they wouldn’t get funding for more research. Come to think of it, that’s probably why those trivial and very minor errors sometimes slip through into AGW papers and articles; the Illuminati just don’t want to reveal yet that, in sober truth, they know everything that is to be known. It would blow our tiny minds.”
Right. Just like the Mayan kings knew it all. And the Egyptian pharaohs. They wasted empire-sized fortunes, employed all known science and engineering (and yes, undoubtedly learned aplenty in the process) on ridiculous crap to help in their afterlife. I know, some of you archaeology types might be offended. Yeah, if only those pyramid workers with “tiny minds” knew what any modern day school child knows today, representative democracy might have gotten off to an earlier start.

anna v
October 21, 2010 4:23 am

1) Weather was the first example used when chaos theory was being developed.
2) Climate is average weather, and as the study of fractals shows, once the underlying level is fractal, the levels above it are too. Phrased differently, order can come out of chaos, as, for example, thermodynamics out of statistical mechanics, only if the mathematical description changes. Climate models are not in this class because they are weather models with some change of parameters and assumed averages.
3) Tsonis et al have shown how to use neural net models to describe the chaotic nature of climate. It is a beginning but that is the way to go in describing chaotic climate. They predict cooling for the next 30 years btw.
4) I doubt that the existing GCMs can be squeezed into a chaotic model. The very basic drawback is that they use numerical methods of solving non-linear differential equations. By necessity, this means that they are taking the first terms in a putative expansion of the ideal solution, usually constant + a*X, where a is a parameter (related to the average of some quantity) they try to get from the data, and let the system develop in time. At most, they might have a b*X^2 term, if they expect symmetry from physical arguments. This means that as the iteration in time develops, the solutions differ from the true solution more and more, because non-linear systems of coupled differential equations do not have linear solutions. This is the reason they sometimes cannot predict the weather even two days ahead. And they have the hubris to believe they can morph these models using average values for a lot of stuff (those a’s) and they will have a solution close to the real solution of nature, good for 100 years!
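
The sensitivity anna v refers to can be shown with the classic Lorenz (1963) system rather than any actual GCM; the sketch below is an illustration of chaotic divergence in a small nonlinear system, not a claim about any particular climate model.

```python
# Two trajectories of the Lorenz '63 system that start almost identically
# drift apart until they are as far apart as the attractor allows.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def add(s, k, scale):
        return tuple(si + scale * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(si + dt / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for si, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # perturbed by one part in a billion
dt, steps = 0.01, 4000
for i in range(steps + 1):
    if i % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * dt:5.1f}   separation = {gap:.3e}")
    a, b = rk4_step(a, dt), rk4_step(b, dt)
```

An initial difference of one part in a billion grows by many orders of magnitude and then saturates at the size of the attractor, which is the behaviour that limits deterministic weather forecasts to days.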

Professor Bob Ryan
October 21, 2010 4:24 am

Reply to Eric: Thank you for your interesting comment. There is a substantial literature on prediction markets and how they work. All of the points you make are common across a range of modelling applications. With respect to the way that ensemble modelling can work, I came across a useful overview you might find interesting: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA469632&Location=U2&doc=GetTRDoc.pdf.
As far as the mechanics of the process are concerned, each model is of course an abstraction of reality. Each possesses certain attributes which are valid and some which are not. The trouble is no one knows with certainty which are which. As with markets, there are some who believe that their models are better than others – the problem I think you have to accept is that there is no ‘correct model’. All models are to some degree or other incorrect. What ensemble modelling should do, if properly constructed, is to reduce the significance of the incorrect. The problem is of course that even the modellers cannot say to what degree their models misspecify reality – and certainly their critics would not agree with them anyway. In the study I referred to, each investigation team knew something of the company concerned, and across the set the forecasting models had varying degrees of sophistication, different types of data inputs, and different ways of reducing forecasts into spot values. They too possess a high degree of granularity of the sort you describe. The process I describe, and what a prediction market achieves, is to simulate across as wide a variety of different models as possible. Simulating with just one model will handle uncertainty attaching to the input variables – it cannot handle misspecification within the model itself – unless of course the modeller knew where the misspecification lay, and that is clearly what they do not know. I hope that helps.

October 21, 2010 4:27 am

Flagging Charles the Moderator.
Please don’t approve this comment!!!! Guest Post submission.

I have a post that is more informational in nature, but still pretty good, I think. It is up on my site, but here it is as well. If you don’t want it, please don’t approve this comment regardless.
Thanks,
John Kehr
Ice Core Data: Truths and Misconceptions
Overview (Scientific Content below)
There have been a variety of discussions in the past week about what ice core data tells us. There are several misconceptions about what information the ice core data contains. Some people believe that it is the air bubbles trapped in the ice that tell the temperature history of the Earth and others believe that the ice cores tell the temperature history of the location that the ice core came from.
Both of these beliefs are in fact incorrect. The reason is simple. It is the water itself that tells the temperature history in the ice cores. Since ice cores are a measure of the water in the ice, what an ice core actually measures is the condition of the ocean that the water originally evaporated from.
There are two methods of determining the past temperature. One is to measure the ratio of heavy and light oxygen atoms in the ice and the other is to measure the ratios of heavy and light hydrogen. Since these are the two components of water there are plenty of both to measure in ice. The ratio of heavy and light oxygen is the standard measurement.
Water that is made of the light oxygen evaporates more easily. Water made of the heavy oxygen condenses more easily. This means that the warmer the oceans are near the location of the glaciers or ice sheets, the more heavy oxygen there is in the ice core.
Another way to explain it is how warm the ocean is near the location where the ice core is from. The less heavy oxygen there is, the colder the water is near the glacier. The warmer the water is near the location of the ice core, the higher the content of heavy water there will be in the core.
Heavy Oxygen Cycle: NASA Earth Observatory
The water that evaporates and falls as snow on a glacier (or ice sheet, but I will only use glacier now) records how warm or cold the water near it was. Then the layer the year after that tells the story of that year. Each layer is recorded one on top of the other. Scientists then measure each layer and count down how many layers they are from today. Then they can know the story told for an exact year. It is like counting the rings on a tree, but it is actually much more accurate. It also tells a much larger story than a tree (or forest) possibly could.
Since Greenland is near the end of the path for the Gulf Stream and most of the water vapor in the atmosphere in that region is from the Gulf Stream, ice cores from Greenland are a good indicator of the ocean temperatures in the north Atlantic Ocean. Ice cores in Alaska tell the story of the north Pacific Ocean. Different regions of Antarctica tell different stories based on the weather patterns of that region.
The simplest reason that it works this way is that cold water does not evaporate very much. Water that is 25 °C (77°F) evaporates about twice as much water vapor as water that is 15 °C (59°F). Since there is much more light oxygen than heavy oxygen, cold water evaporates very little heavy oxygen. The warmer the water, the more heavy oxygen is released.
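
The “about twice as much” figure can be roughly checked against the saturation vapour pressure of water, which (other things being equal) scales how fast a water surface evaporates. The sketch below uses a standard Magnus-type approximation; the coefficients are a common published choice assumed here for illustration, not values taken from the post.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    # Magnus-type approximation to the saturation vapour pressure of water.
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

e15 = saturation_vapor_pressure_hpa(15.0)
e25 = saturation_vapor_pressure_hpa(25.0)
print(f"e_s(15 °C) ≈ {e15:.1f} hPa")
print(f"e_s(25 °C) ≈ {e25:.1f} hPa")
print(f"ratio ≈ {e25 / e15:.2f}")   # comes out near 1.9, i.e. roughly double
```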
So ice cores are really telling a larger story than they are often given credit for. That is why they are so useful in understanding the global climate. A coral reef can tell a story, but only at the exact location that the coral exists. Since many of the coral reefs are near the equator, they see less of the change in the Earth’s climate and so they tell a very small portion of the story. This is especially true for the past couple of million years as most of the climate changes have been far stronger in the Northern Hemisphere than they have in any other place on Earth. There are very few corals in that part of the world.
This is also why ice cores in Antarctica can measure changes that are mostly happening in the Northern Hemisphere. Temperatures in Antarctica do not change as much as the ice cores indicate. What does change is how much warm water is close to Antarctica. During a glacial period (ice age) the oceans near both poles are much colder so the amount of heavy oxygen is very small. When the Northern Hemisphere is warmer (like now) the oceans have a higher sea level and warmer water is closer to both poles.
Even if a location in Antarctica stayed exactly the same temperature for 100,000 years, the ice core at that location would tell the temperature record of the ocean that evaporated the water that fell as snow at that location. In this way ice cores do not reflect the temperature of the location they are drilled. Ice cores primarily tell the record of the ocean the snow evaporated from and how far that water vapor traveled.
Any type of record that involves the ratio of oxygen that has fallen as rain can also tell the same story. Water that drips in caves to form stalagmites and stalactites can also be used to determine information like this. In places where there are no glaciers this type of thing can be done, but it is more complicated.
Ice cores give the broadest temperature reconstruction because of how the record accumulates. Each layer is distinct and can provide a wide view of the climate for the region for that specific year. This information is recorded for the period that matches the age of the glacier. The bottom and older layers do get squeezed by the weight above, but reliable ice cores that are hundreds of thousands of years old have been recovered.
Scientific Content….
————————————————————————————–
The specific oxygen isotopes are 16O and 18O. The hydrogen isotopes are hydrogen and deuterium. These are often called the stable isotopes. They are used precisely for the reason that they are stable over time. There can also be combinations of the different isotopes in a water molecule. The heavier the overall molecule is, the less it will evaporate and the quicker it will condense. Anything that triggers condensation will drop the amount of 18O that is present. Altitude is another factor that makes a difference as it also decreases the 18O content of the ice.
The Inconvenient Skeptic
What this does is complicate the comparison of ice cores. Each location has a different general path that its water molecules take. An ice core from a lower location will generally have a higher ratio of 18O. This makes comparing ice cores difficult. Each one must be separately calibrated to temperature. For this reason ice core data is not often converted to temperature, but left as isotope ratios. Plotting the isotope ratios will show the temperature history, but not calibrated to temperature. So scale is a factor that can be ignored or calibrated. Precise scale calibration is usually not needed though, as the relative changes are sufficient.
This is also why different ice cores have different ranges in the isotope record. The farther they are from the ocean source, the less the range will be. Some of the Greenland ice cores have a very large range as the Gulf Stream can get warm water close to Greenland. The Taylor ice core from Antarctica has about half the oxygen isotope range that those in Greenland do.
The Vostok and EPICA ice cores deep in Antarctica use the deuterium ratios because those will resolve at that distance, since so little heavy oxygen makes it that far. That makes the oxygen isotopes less useful in those locations. The light oxygen with the deuterium can make it that far though, and those ratios can be used instead of oxygen.
This should make it clear that ice cores do not tell the local temperature. There is no mechanism for the temperature of a location on an ice sheet to dictate the stable isotope ratios in the snow that falls there. It is truly a function of the distance and path from the location of evaporation to the location that the water molecules became part of the ice sheet.
No single type of record carries as much of a global temperature signal as the ice cores do. That is why they are so often used in paleoclimatology. A single ice core reveals a broader picture than any other method. The main limiting factor for ice cores is of course location. They only exist where there are permanent ice sheets. Glaciers that have a flow are also useless, as the record from a core is not a time series. That is why only certain glaciers or parts of glaciers can be cored.
Another problem is glaciers on locations that have warm rock (ie on a volcano). The bottom of the glacier is often melting and the age of the glacier is difficult to determine. That makes the temperature of the bottom ice of a core important. Cold ice at the bottom is an indication that the bottom ice was the forming layer of the glacier. That information can also be very, very useful.

Curiousgeorge
October 21, 2010 4:27 am

Confucius say: “If you can’t dazzle them with brilliance, baffle them with [/snip]“. Seems to be the motto engraved on the IPCC coat of arms.
[Vulgarity snipped.. .. bl57~mod]

October 21, 2010 4:30 am

RichieP says:
October 21, 2010 at 1:51 am
Petter says:
October 21, 2010 at 1:22 am
‘How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)’
Don’t worry, English *is my mother tongue and it still sounds like bs to me.
==========================================================
Wild a**ed Guess!!!!

Ammonite
October 21, 2010 4:39 am

John Marshall says: October 21, 2010 at 1:59 am
“… by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years…”
“Effective residence time” (the 100-200 year estimate) relates to the time it would take the atmospheric concentration of CO2 to return to its initial value if a pulse of CO2 were added. “Residence time” relates to the lifetime of an individual CO2 molecule in the atmosphere (the 5-10 year figure). The overload of terms is a common source of confusion.
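
A back-of-envelope sketch of why the two numbers differ by more than an order of magnitude. Every figure below is a round, assumed value chosen only to illustrate the distinction; none is taken from the post or from the IPCC report.

```python
import math

atmosphere_gtc = 800.0          # assumed carbon stock of the atmosphere (GtC)
gross_exchange_gtc_yr = 200.0   # assumed gross flux out to ocean + land (GtC/yr)

# "Residence time" of an individual molecule: stock divided by gross outflux.
residence_time = atmosphere_gtc / gross_exchange_gtc_yr
print(f"molecular residence time ≈ {residence_time:.0f} years")

# "Effective residence (adjustment) time" of a concentration pulse: the gross
# fluxes largely cancel, so only the small NET uptake removes the excess.
# Assume the excess decays at 1% per year and time one e-folding.
net_uptake_per_year = 0.01
pulse = 100.0                   # GtC added as a one-off pulse
years = 0
while pulse > 100.0 / math.e:
    pulse *= (1.0 - net_uptake_per_year)
    years += 1
print(f"e-folding time of the pulse ≈ {years} years")
```

The gross exchange is large, so any individual molecule is swapped out of the atmosphere within years; the net uptake of an added pulse is small, so the excess concentration decays over a century or more. The two terms answer different questions.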

October 21, 2010 4:55 am

Interesting.
Over on her Blog Dr. Curry has been discussing this very thing, especially in part 2:
http://judithcurry.com/2010/10/17/overconfidence-in-ipccs-detection-and-attribution-part-i/
http://judithcurry.com/2010/10/19/overconfidence-in-ipccs-detection-and-attribution-part-ii/

The IPCC AR4 has this to say about the uncertainties:
“Model and forcing uncertainties are important considerations in attribution research. Ideally, the assessment of model uncertainty should include uncertainties in model parameters (e.g., as explored by multi-model ensembles), and in the representation of physical processes in models (structural uncertainty). Such a complete assessment is not yet available, although model intercomparison studies (Chapter 8) improve the understanding of these uncertainties. The effects of forcing uncertainties, which can be considerable for some forcing agents such as solar and aerosol forcing (Section 9.2), also remain difficult to evaluate despite advances in research. Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.” The last sentence provides a classical example of IPCC’s leaps of logic that contribute to its high confidence in its attribution results

Professor Bob Ryan
October 21, 2010 5:04 am

Reply to David: Thank you for your comment. The point I would make is that we simply do not know what is incorrect and what is correct in the way the models are built. We are all arguing, expert and critic alike, from positions of relative ignorance. This surely is the sceptics’ position on climate change. It is the same as the behavioural finance people who say that because individuals are ‘irrational’, market prices must therefore be irrational. The evidence is overwhelmingly that this is not the case. We do not know all the reasons why – and there are many conflicting views – but Mark Rubinstein’s ‘Rational Markets: Yes or No? The Affirmative Case’ (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=242259) is a first-class discussion of the issue.
What you appear to be saying is that there is some commonality in the models – some systematic bias which is incorrect. First, that isn’t true – there are many commenters on this website and others who claim to have models built on other assumptions. Second, you do not actually know that the dominant bias, as you see it, is actually incorrect. You think it is, but you do not know. Now if an ensemble of models failed back-testing systematically one way or another – because of the bias that you suggest – then the climate research community would really have to think again. As I have suggested above, a diverse modelling framework, properly coordinated and summarised, is our best chance of making sense of the wide disagreement on the outcomes of the various models that the above post reflects. But an effective ensemble process would have to be much more diverse than that currently considered by the IPCC – but then that is where I may be being naive.

Frank K.
October 21, 2010 5:13 am

John Marshall says:
October 21, 2010 at 1:59 am
“I have run a GCM on my computer and by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years, the supposed critical temperature rise becomes a fall in temperature, everything else being equal. Just shows how you can confuse with rubbish inputs into models. Do they work with chaotic systems? I do not think that they do.”
Hi John – would it be possible to post a summary your experiences running your GCM? I’m willing to bet it’s pretty easy to get them to crash and burn by changing some of the parameters. People don’t realize how many “tuning knobs” and “empirically-derived constants” these models have in them…

Murray Grainger
October 21, 2010 5:29 am

The IPCC has helpfully explained all the known unknowns but they appear to have completely forgotten to explain the unknown unknowns.

Pascvaks
October 21, 2010 5:38 am

At the present time we have a World System of science oversight and management that boils down to the inmates being in charge of the nut house while the good guys are in straitjackets, sedated, and locked in rubber rooms. Little wonder that “science” is slipping. If someone within the scientific community doesn’t come up with a new set of protocols and do something fast, “scientists” will slip below “members of congress” on the scale of “Public Confidence”. Wonder how it happened? Who or what started the great slide? Bet it was those worthless hippies and their commie gurus of the 1950’s who are “managing” the system now in their dotage. College students today are soooo tame and docile… How times change!

October 21, 2010 5:48 am

John Marshall says:
October 21, 2010 at 1:59 am
I have run a GCM on my computer and by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years, the supposed critical temperature rise becomes a fall in temperature, everything else being equal.

When you did that, what did the [CO2] look like as a function of time: did it continue to increase like the real world, or did it decrease?

MikeP
October 21, 2010 5:51 am

RichieP says:
October 21, 2010 at 1:51 am
Petter says:
October 21, 2010 at 1:22 am
‘How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)’
Don’t worry, English *is my mother tongue and it still sounds like bs to me.
***An ensemble of opportunity is a collection of whatever you can find (possibly with a bit of selection). It’s gathering what opportunity throws your way (not random and not necessarily independent). It’s something like going into an apple orchard late season and picking up what you find on the ground.***

October 21, 2010 5:52 am

@Professor Bob Ryan
I think what they’re trying to say is that if you can have no confidence in the numbers the models generate then averaging them together just results in more numbers that you can’t have confidence in.
Let’s do a thought experiment. We’ll take 3 models that generate a temperature increase of +3 ±0.5, +2 ±0.5, and +1 ±0.5 at a certain span in time. If you average them together you seem to get a value of 2 as the aggregate prediction. Let’s say that the actual value turns out to be 0.5. Sure, that is a possible value at the low-end prediction, but averaging in the other models (that we eventually determine to be incorrect) only hurts us.
But just because the model that predicted 1 ±0.5 hits the temperature, even that doesn’t make it correct. It could be that a model that would have predicted 0 ±0.5 is the correct model. Or -2 ±3. At a different time scale, 1 ±0.5 could have been completely wrong.
The problem with aggregating these models is that there’s enough uncertainty in them that you may as well be picking random numbers to average together for all of the confidence that you can place in them.
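A minimal sketch of the arithmetic in that thought experiment, using the same hypothetical numbers and an assumed “true” value of 0.5:

```python
# Minimal sketch of the thought experiment above (all numbers hypothetical).
# Three model projections with +/-0.5 uncertainty, and an assumed "true" value of 0.5 degC.

models = {"model_A": 3.0, "model_B": 2.0, "model_C": 1.0}
half_width = 0.5
truth = 0.5

ensemble_mean = sum(models.values()) / len(models)
print(f"Ensemble mean: {ensemble_mean:.1f} degC (error {abs(ensemble_mean - truth):.1f})")

for name, value in models.items():
    covered = (value - half_width) <= truth <= (value + half_width)
    print(f"{name}: {value:.1f} +/- {half_width} degC, "
          f"error {abs(value - truth):.1f}, interval covers truth: {covered}")
# The ensemble mean (2.0) misses the assumed truth by 1.5 degC, worse than the
# best single model; only model_C's interval even touches 0.5, and only at its edge.
```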

October 21, 2010 5:56 am

Addendum: I guess what I’m saying is that in my opinion their error bars are a complete lie [perhaps you mean error . . lie is so pejorative . . mod] because they don’t acknowledge ALL of the uncertainty in the model, only in their measurements.

Professor Bob Ryan
October 21, 2010 6:00 am

Rob R: thank you for your excellent comment, with which I very largely agree. You are right that financial models give spot prices, but they also accurately predict the drift term in the stochastic process that financial securities typically follow. Anyway, that is not really the concern of this blog. What matters is how we make sense of what the models are telling us. Appealing to the historical climate record does not provide an answer. Even if we conclusively demonstrated that we were all warmer in 1066, that would not prove that current climate changes are not driven by human emissions. The answer to this ultimately must come from the models, which in the end are what connect our theories of climate change to what we observe on the ground. What I suspect is the best chance we all have of getting closer to the truth is through some ensemble process – more independent of vested interests than it is now. At the very least it would help the more open-minded members of the climate science community to consider alternative theoretical explanations if you are correct and it becomes very clear that the ensemble is running too hot. At the moment that can be put down to singular as opposed to collective misspecification. Anyway, a fascinating debate, but it’s back to the marking. I will look by later to see if there is any support for the general thrust of what I am saying – or otherwise – criticism is good.

Tom in Florida
October 21, 2010 6:02 am

I believe what we have here is scientific proof of GIGO.

Pamela Gray
October 21, 2010 6:02 am

Since I believe most of our CO2 increase is being caused by human breath, I propose that those with the hottest air wear the CO2 scrubbers. That would be anyone who offered, penned, edited, or presented a single word in that book. Problem solved. Enjoy the return of cooler temperatures.

A C Osborn
October 21, 2010 6:26 am

At http://nofrakkingconsensus.wordpress.com/
Donna is now looking at the qualifications and experience of the Authors & Lead Authors that the IPCC used.
Quite an interesting approach too.

Chris B
October 21, 2010 6:38 am

Should have been the Summary for Aspiring Policymakers (SPAM).

Richard M
October 21, 2010 7:02 am

John Marshall says:
October 21, 2010 at 1:59 am
I have run a GCM on my computer and by changing the CO2 residence time from that stated by the IPCC, 100-200 years, to what current research has shown it to be, 5-10 years, the supposed critical temperature rise becomes a fall in temperature, everything else being equal. Just shows how you can confuse with rubbish inputs into models. Do they work with chaotic systems? I do not think that they do.

What do you get at 35-40 years? Have you tried doing a plot over values from 5-200 showing the trend? That would be very interesting.

Dave Springer
October 21, 2010 7:07 am

richard telford says:
October 21, 2010 at 12:12 am

“Current spatial coverage, temporal resolution and age control of available Holocene proxy data limit the ability to determine if there were multi-decadal periods of global warmth comparable to the last half of the 20th century.”
This is not a criticism of the models but of the proxy-data.
I can think of much better places to conceal uncertainty than in the IPCC report.

It indirectly indicts the models. The first test any model must pass is an accurate reconstruction of past climate change; sometimes this is called “training” the model. If it can’t reconstruct the past climate with reasonable accuracy, the model or its assumptions or both are changed until it gets the past climate right. If the climate history the model is designed to replicate is wrong, then the model, by design, is wrong too.
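A minimal sketch of that calibration point, using a toy one-parameter model and made-up numbers (no actual GCM is tuned this simply):

```python
# Minimal sketch of the "training against history" point above (all numbers hypothetical).
# A toy model T = sensitivity * forcing is calibrated by least squares against a
# synthetic "true" history and against the same history with a spurious warm drift.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)
forcing = 0.02 * years                          # hypothetical forcing ramp (arbitrary units)
true_sensitivity = 1.5
true_history = true_sensitivity * forcing + rng.normal(0, 0.05, years.size)
biased_history = true_history + 0.01 * years    # spurious drift added to the "record"

def calibrate(history):
    """Least-squares fit of the toy model's single tunable parameter."""
    return float(np.dot(forcing, history) / np.dot(forcing, forcing))

print("Fitted sensitivity against the true record:  ", round(calibrate(true_history), 2))
print("Fitted sensitivity against the biased record:", round(calibrate(biased_history), 2))
# The second fit reproduces the biased record nicely, but its tuned parameter is
# wrong by construction: a model trained to a wrong history inherits the error.
```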

Dave Springer
October 21, 2010 7:18 am

Readers here should consider that the OP author, Dr. Levine, raised a red flag about scientific integrity to his fellow APS members almost two years ago here:
http://www.aps.org/units/fps/newsletters/200901/levine.cfm
Imagine how Levine must have reacted to the revelations surrounding climategate when almost a year earlier he’d alerted the APS members of his concern and at that time was already alarmed about the potential damage that CAGW advocacy could do to public trust in scientific integrity. In modern parlance it must have been an OMG moment to say the least.

Enneagram
October 21, 2010 7:30 am

The fact is that we do not have the tools to deal with actual reality, the real universal laws – represented by analogical symbols transmitted by traditions from the distant past of humanity – now being long forgotten or rejected.
Thus we, self-conceited and self-deluded people, believe ourselves to be members of the most advanced and educated civilization in the history of the world, while not even knowing why the heck zillions of tons of water FLOAT above our heads, defying the most Holy Newton’s Law! However, there is now a positive way of knowing how much total energy is released by the earth. You can play at calculating the actual energy emitted by the whole emission system of the Earth, by using the Unified Field equation:
E= (Sin y + Cos y)(V/D)
http://www.scribd.com/doc/38598073/Unified-Field
Where Gravity/10= Sin Y= 0.981
Rest of the Field=Cos Y =-0.019, where it is added 1 (total field)- 0.981 = 0.019 x 10= 0.19 Nm (a positive emission field- 19% of the total field= 10 Nm)
V=Earth velocity around its axis in m/s
D=Earth Diameter in meters.
And, of course, the result is in Joules/second.
Now, you can have, also in consideration the Moon which “sucks” at perigee and emits at apogee:
Moon (a) at eccentricity=0,026
-2,24915291288904 Nm
Moon (b) at eccentricity=0,077
+9,40962149507112 Nm
http://www.scribd.com/doc/39678117/Planets-Moon-Field

artwest
October 21, 2010 7:31 am

More fascinating stuff about the IPCC reports here:
http://nofrakkingconsensus.wordpress.com/2010/10/19/lead-author-lacked-a-masters-degree/
http://nofrakkingconsensus.wordpress.com/2010/10/20/more-grad-student-expertise/
http://nofrakkingconsensus.wordpress.com/2010/10/21/meet-the-ipccs-youngest-lead-author/
Apparently having virtually no experience or qualifications qualified you amply for being a coordinating lead author, e.g. (last link above):
“The coordinating lead authors (usually there are two of them) are a chapter’s most senior personnel. (…) Klein was promoted to the most senior IPCC authorial role when he [was] just 28.
(…) he didn’t earn his PhD until six years after that – in 2003. So Klein served as an IPPC author four times while he was still a graduate student.
The fact that he was comically young didn’t disqualify him. The fact that he’d recently worked for Greenpeace didn’t disqualify him. While still in his twenties, while still years away from completing his doctorate, those in charge of the IPCC decided Klein was one of the worlds top experts. “

Geoff
October 21, 2010 7:44 am

Dear Professor Ryan,
I understand you don’t deal professionally with climate models, so you may be interested in the recent paper by a well known climate modeler on just this issue of ensembles. One representative quote: “averaging models leads to unwanted effects like smoothing of spatially heterogeneous patterns, so it is unclear whether an average across models is physically meaningful at all”.
You can read the full paper (fortunately open access) at http://www.springerlink.com/content/97132434001l7676/fulltext.pdf .
(The End of Model Democracy? by Reto Knutti)

AnonyMoose
October 21, 2010 7:50 am

The IPCC has repeatedly confirmed deficiencies in its science and methods. Just read any of the reports after the first one and notice how much improvement they claim over their previous report. Nobody revisits the decisions made based upon the flaws in the preceding report.

james
October 21, 2010 8:13 am

“How is one supposed to interpret “ensemble of opportunity”?
(English is not my mother tongue)”
my rough and ready translation would be “cherry picked”

james
October 21, 2010 8:16 am

averaging models where everyone is competing to show the scariest outcome does not yield truth.

Steven Kopits
October 21, 2010 8:32 am

Professor Ryan:
There is a significant difference between financial forecasts and climate forecasts. In financial forecasts, the attitude is positive: no one has a stake in any given outcome. In a climate forecast, the attitude is often–and for modelers, presumed to be uniformly–normative. Those working on these models are presumed to believe that CO2 leads to warming and that man is the cause.
Thus, for example, an equity analyst at Barclays Capital has no stake in whether the shares of Exxon go up or down (except during an offering). By contrast, a climate change modeler has every incentive to show temperatures going up. It would be like asking a broad cross section of analysts working for the Democratic Party whether Republicans are good. I would venture a guess that, even if you dropped the outliers, the answer would be ‘no’–because the sample itself is not random. If the forecasters have a stake in the outcome, the ensemble approach is unlikely to produce reliable results.

bob
October 21, 2010 8:33 am

You list self identified deficiencies listed in the AR4 and you say the IPCC is being deceptive???????
With no analysis of what problems you have with the list of deficiencies.
A rather incomplete post IMHO.
Oh, and climate models and weather models are very much beasts of a different taxonomy, if you will.
You know that there were climate models well before there were any decent weather models.

October 21, 2010 8:33 am

NS says: “Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow.”
You sound like a person lost on a hill saying: “I think this way”, and strolling boldly over a cliff.
What climate “science” needs is some basic common sense: like a walker needs a map and compass, and even better, someone who can read them, likewise climate “science” needs some basic information, like a reliable measure of local temperatures and someone who can understand what these temperatures mean.
And whether it is one person or a thousand people who haven’t got a clue where they are going because they never got the basic necessary information first, it really doesn’t matter if you average one or the entire “consensus”, the average of those who haven’t got a clue will always be: THEY HAVEN’T GOT A CLUE!

October 21, 2010 8:36 am

From what I have seen on the topic, and what common sense tells me, no model can be programmed that could possibly account for all the factors that control and dictate climate. I think computer modeling is simply a high-tech guessing game and has no place in science.

October 21, 2010 8:38 am

[snip – off color humor ~mod]

Ken Hall
October 21, 2010 8:39 am

“ensemble of opportunity”
As stated above, it is backing every eventuality, then claiming that you were right all along. So long as they have at least one model that shows cooling in some phase (enough to be safely ignored) and at least one that shows no change in temps in some phase, then they can continue the hype about runaway warming, and even if the earth cools they can ignore that and still claim that they were right all along.

October 21, 2010 8:43 am

artwest says:
http://nofrakkingconsensus.wordpress.com/2010/10/21/meet-the-ipccs-youngest-lead-author/
“…The fact that he’d recently worked for Greenpeace didn’t disqualify him. While still in his twenties, while still years away from completing his doctorate, those in charge of the IPCC decided Klein was one of the worlds top experts. ”
That is completely outrageous. It’s like in the cold war, discovering some top Russian spy was working for the CIA – not covertly, but openly having been appointed apparently for no other reason than that they were a Russian spy.
What does that tell you about the top of the organisation that appointed them? … everything!

Jean Parisot
October 21, 2010 9:04 am

There is a top-to-bottom problem in the climate modeling and weather data with regard to the spatial error components: the gridding, the land/sea relationships, the clustering of data sources in inhabited areas, etc. It needs a professional look from several people.

artwest
October 21, 2010 9:55 am

Mike Hessler, it’s not just the conflict of interest, it’s the lack of experience and qualifications of major contributors.
More examples:
“So, in the 15 years prior to earning her PhD, Kovats served once as a contributing author and twice as a lead author for the IPCC.
Which means governments around the world have been relying on the expertise of grad students when they make multi-billion-dollar climate change decisions.”
http://nofrakkingconsensus.wordpress.com/2010/10/20/more-grad-student-expertise/
and
“(…) a mere two years after Patz achieved his Masters, with no relevant publications whatsoever, he was one of nine people chosen to be a lead author of the IPCC’s health chapter. Remember, this is a report that is supposed to have been written by the world’s top experts.”
http://nofrakkingconsensus.wordpress.com/2010/09/09/the-new-graduate-who-served-as-ipcc-lead-author/
and of course experience as a climate activist was not seen as any hindrance to being an IPCC author:
http://nofrakkingconsensus.wordpress.com/2010/09/03/the-book-the-ipcc-plagiarized/
http://nofrakkingconsensus.wordpress.com/2010/08/25/ipcc-author-profile-alistair-woodward/

AJ
October 21, 2010 9:57 am

If we use Newton’s “Law” of Cooling, then the commitment is very small. This law uses the term e^(-rt) to model the unrealized change. So at t=0, 100% of the change is unrealized, and as time goes to infinity the change is fully realized.
You can estimate the “r” value by analyzing temperature lags to a cyclical forcing. For example, the hottest time of day is not high noon, but a few hours later. The approach is described here:
http://www.math.montana.edu/frankw/ccp/cases/newton/overview.htm
This method shows that with a high “r” value, the lag is essentially zero and the thermal potential is fully met. Conversely, as “r” approaches zero, the lag approaches 1/4 of the cycle period and very little of the thermal potential is met. I’ve modeled this approach in the following spreadsheet:
https://spreadsheets.google.com/ccc?key=0AiP3g3LokjjZdHcxcVhUb2NyQTdXQmpPcjZzVFpYUUE&hl=en
For my lag analysis I chose to look at mid-latitude Sea Surface Temperatures (SSTs). I chose this because at 45 degrees latitude the yearly cycle of daily solar irradiance follows a sinusoidal pattern. By contrast, at the equator there are two peaks at the equinoxes, and in the polar regions there is complete darkness in the winter. My analysis shows a seasonal lag of ~75 days, which gives r ≈ 1.80 on a yearly basis. So 99% of the forcing is realized in ln(0.01)/(-1.80) ≈ 2.5 years. Here are my spreadsheets of this analysis:
https://spreadsheets.google.com/ccc?key=0AiP3g3LokjjZdDg3UDNPR21QRk9TUmVZZWlORU5uU3c&hl=en&authkey=CL-ow8wM
https://spreadsheets.google.com/ccc?key=0AiP3g3LokjjZdG8zdW5aZ2xTbWN5Zmd4ZWdMU0ZJM3c&hl=en&authkey=CM-G48cI
This analysis also allows us to ponder two hypothetical cases. If the tilt of the earth became zero degrees, then the temperature between the north and south hemispheres would be 99% equalized within 2.5 years. The second case is that at a 75-day lag, the model amplitude is about 25% of its potential. So if the earth’s tilt became stuck at the summer solstice, the resulting temperature amplitude would be 4 times the actual amplitude. So instead of increasing 3C above the mean, the increase would be 12C, and again would be 99% realized within 2.5 years. Neither of these results seems unreasonable to me.
Newton’s “Law” works well with small scale experiments, but does it work at a global scale? As stated here, the IPCC doesn’t think so. To test their argument, however, I would present the Roe paper that shows zero lag between temperature and Milankovitch forcings:
http://motls.blogspot.com/2010/07/in-defense-of-milankovitch-by-gerard.html
The zero lag in these cycles agrees with Newton’s Law: r = 1.80 on a yearly basis translates into 1.80 × 26,000 over the earth’s precession (wobble) cycle, and such a high rate agrees with the observation that the lag is essentially zero. On the other hand, I’ll assume the IPCC models have a variable “r” value which diminishes over time. But shouldn’t that predict a measurable lag? Can their models match the observations? Maybe, but I’d be surprised.
Thanks, AJ
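For anyone who wants to check AJ’s lag figure, here is a minimal sketch of the steady-state lag and damping of Newton’s Law of Cooling driven by a sinusoidal forcing; the r values tried are illustrative only:

```python
# Minimal sketch of the lag relation AJ describes, for dT/dt = r * (F(t) - T)
# driven by a sinusoidal forcing F(t) = sin(2*pi*t / P). In steady state the
# response lags by arctan(omega/r)/omega and is damped by r/sqrt(r**2 + omega**2).
# The r values below are illustrative, not taken from any observed record.

import math

def lag_and_amplitude(r_per_year, period_days=365.25):
    omega = 2 * math.pi / period_days            # rad/day
    r = r_per_year / 365.25                      # convert to per-day
    lag_days = math.atan(omega / r) / omega
    amplitude_ratio = r / math.hypot(r, omega)
    return lag_days, amplitude_ratio

for r in (0.1, 1.8, 20.0):
    lag, amp = lag_and_amplitude(r)
    print(f"r = {r:5.1f} /yr -> lag {lag:5.1f} days, amplitude ratio {amp:.2f}")
# r = 1.8 /yr gives a lag of roughly 75 days (about what AJ reports for mid-latitude
# SSTs); large r drives the lag toward zero, small r toward a quarter of the period.
```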

james
October 21, 2010 10:58 am

“You list self identified deficiencies listed in the AR4 and you say the IPCC is being deceptive???????”
The gap between the main text and the summary allows us to “peer into the heart” of the authors. The summary allows one to pick and choose what to emphasize. The fact that the authors chose to emphasize certainty as opposed to uncertainty in the summary does seem deceptive, and provides prima facie evidence for the direction of the IPCC’s bias, imo.

Noblesse Oblige
October 21, 2010 11:15 am

As often happens with IPCC, you have to keep your eye on the pea. Let’s take the passage on the model ensemble approach (p 754) and go a little deeper. The model ensemble approach allows modelers and the IPCC to skirt the fact that different models contain different physics. These differences account for the wide variation between the calculated climate sensitivities. Normally in science, such differences are used in combination with observations to learn which assumptions, as represented by the choice of various parameters, do a better job of reproducing nature. Not so in climate modeling. In IPCC, these differences are thrown aside, and all models are treated equally. In effect, you can say that they are circling the wagons around all the models.

stumpy
October 21, 2010 11:30 am

Whilst this document is intended for hydrological/hydraulic modellers like myself, the IPCC climate modellers should also give it a read, since they have no standards of their own:
http://www.ciwem.org/media/144832/WAPUG_User_Note_13.pdf

stumpy
October 21, 2010 11:33 am

From http://www.ciwem.org/media/144832/WAPUG_User_Note_13.pdf
“When force-fitting the assumption is made that the field survey data [insert paleo / temperature record] is correct. Arbitrary adjustments are then made to the input data so that the models response matches as closely as possible the results of the field monitoring [insert modern temperature record]. Little or no checking of data takes place. The result is a model which appears to be an excellent representation of reality, and which requires comparatively little effort to produce. However, the force-fitted model may not in fact be a true representation of existing conditions, this is because it is not possible without checking, to ascertain whether or not the adjusted input data itself correctly reflects reality.”

bob
October 21, 2010 12:32 pm

Stumpy,
I think your use of a cautionary note for use with a sewer model, inserting climate terms, shows your bias, and lack of objectivity.
It is easy enough to check your source and find that the terms ” [insert paleo / temperature record]” and ” [insert modern temperature record]. ” do not exist anywhere within where you say that they do.
Sewer models are not climate models!

JohnH
October 21, 2010 12:35 pm

My understanding is that IPCC reports are finalised in rather an odd way – that the IPCC scientists do not draw up the executive summary, but it is instead the subject of negotiation between politicians/diplomats/who knows who? – and that the IPCC scientists are then under an obligation to edit the main report to conform to the executive summary. Have I got this wrong? – because, if not, that process would rather explain some of the oddities noted above …

October 21, 2010 12:35 pm

Bob Ryan compares averaging economic models with averaging climate models. Essex and McKitrick’s book ‘Taken by Storm’ (http://www.takenbystorm.info/) answers this argument – McKitrick is an economist and Essex is a physicist. The factors involved are so complex, so subject to small events (which large-scale models can’t see) causing large effects, averaging doesn’t cause climate models to make more accurate predictions.

Cherry Pick
October 21, 2010 12:42 pm

Petter says:
October 21, 2010 at 1:22 am
How is one supposed to interpret “ensemble of opportunity”?
To create an ensemble of models, you take 100 IPCC model programmers, who select the parameters of their individual models based on the same theories of climate and the same measurements. Then you run those models, possibly select the ones that fit the agenda best, apply statistics, and say that you have high confidence in the ensemble average because the variance is low.
Using this kind of approach the precision of the projections is high, but the accuracy is still low, because none of the projections (maybe with the exception of the Russian one) matches what happened in the real world.
If the notion of “programmers selecting parameters” surprises you, ask why we have no IPCC projections of decreasing CO2 levels. Shouldn’t we include variations of our sociological models as well before we take averages?
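A minimal numerical sketch of that precision-versus-accuracy point; the bias, spread, and “true” warming values are hypothetical:

```python
# Minimal sketch of the precision-vs-accuracy point above: an ensemble whose
# members all share a common bias has a small spread (looks "confident") while
# every member, and the ensemble mean, misses the assumed truth. Numbers hypothetical.

import numpy as np

rng = np.random.default_rng(1)
truth = 1.0                     # assumed true warming (degC) over some period
shared_bias = 1.5               # bias common to all members (same theory, same data)
members = truth + shared_bias + rng.normal(0, 0.2, size=100)   # 100 similar models

mean, spread = members.mean(), members.std()
print(f"Ensemble mean {mean:.2f} degC, spread {spread:.2f} degC (high precision)")
print(f"Error of ensemble mean vs assumed truth: {mean - truth:.2f} degC (low accuracy)")
# The small spread says nothing about the common bias: averaging more members
# with the same bias narrows the spread further without reducing the error.
```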

P WIlson
October 21, 2010 12:46 pm

Professor Bob Ryan says:
October 21, 2010 at 4:24 am
Excuse my ignorance, but what does any of this waffle have to do with the causes and effects of climate from either Professor Lindzen or Bob Carter, for example, on the one hand, and the farrago of abstractions from the IPCC on the other?

jorgekafkazar
October 21, 2010 2:01 pm

Professor Bob Ryan says: “Well not quite – there is considerable difference between [monkey] ensembles and individual [monkeys.] Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual [monkeys] which could be surmounted by a well constructed [whole-bunch-o’-monkeys] methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort…”
No, the problem is that it relies on mönkeys–Wank-O-Matic computer programs that don’t reflect even a tenth part of the reality.

Rob R
October 21, 2010 2:02 pm

Prof Ryan
I clearly have less confidence in models and modelers than you do. Part of this lack of confidence rests on the incontestability of funding for the work. It’s like the modelers belong to a trade union to which only signed-up alarmists can belong. The system ensures that no other potential independent provider of climate modeling services will even bother seeking funding. So the rate of penetration of new ideas into the collective mind of the climate modeling community must be slower than it should be.
At this blog we see regular comments from a number of climate scientists. I have yet to see a climate modeller step forth into this forum to discuss or defend the work they do. They seem to be a rather insular group.

DCC
October 21, 2010 2:17 pm

Petter asked:

How is one supposed to interpret “ensemble of opportunity”? (English is not my mother tongue)

I interpret that to refer to the output of the set of climate models available to sample. It means they are just an uncontrolled grab bag of models.
As Dr. Levine points out, it says on page 754 of the IPCC report: “Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
In other words, all these models can’t be averaged because there is no control over how they differ. It’s just a grab bag. They do not represent all likely cases.

kwik
October 21, 2010 2:40 pm

Professor Bob Ryan says:
October 21, 2010 at 5:04 am
Ryan, if you think this ensemble idea is so great, why not sell the models to Xbox 360 players all around the world? Imagine what an ensemble!

simpleseekeraftertruth
October 21, 2010 2:44 pm

The IPCC is tasked to find a warming trend based on CO2 emissions, and they do just that. They are not free to be independent; it’s not in the brief. However, they also stray into advocacy, which can be heard in Pachauri’s lectures but goes further, viz:
“Invitation Wednesday, September 29, 2010 at 18:00-20:00
The Club of Rome EU-Chapter (Brussels) in collaboration with the Club of Rome – European Support Centre (Vienna) is pleased to invite you to the 73rd Aurelio Peccei Lecture in the presence of H.R.H. Prince Philip of Belgium, Honorary President CoR-EU. Science, the Future of Climate, and Governance by Jean-Pascal van Ypersele.” http://www.clubofrome.at/events/lectures/73/index.html
“Jean-Pascal van Ypersele
IPCC Vice-chair and professor of climatology at UCL, Belgium, has been made Honorary Member of the Club of Rome EU Chapter by H.R.H. Prince Philip of Belgium.”
http://www.ipcc.ch/
“New ideas and strategies will be needed to ensure that improved living conditions and opportunities for a growing population across the world can be reconciled with the conservation of a viable climate and of the fragile ecosystems on which all life depends. A new vision and path for world development must be conceived and adopted if humanity is to surmount the challenges ahead.”
http://www.clubofrome.org/eng/new_path/

Gary Pearse
October 21, 2010 3:03 pm

Petter says
What is the meaning of “Ensemble of opportunity”?
Essentially it is suggested by the only other similar phrase in English I know of, which is “crime of opportunity”, such as may occur if you left your door open and got robbed. It happened because it was easy to do. This usage may be like a Freudian slip (crime of opportunity in mind).

Professor Bob Ryan
October 21, 2010 3:20 pm

My thanks to Rod McLaughlin and to Geoff for their references, which I will follow up and read. To P Wilson: I think I was careful to say that a successful ensemble analysis would require a broader spectrum of models than those considered by the IPCC. I was particularly amused by jorgekafkazar’s monkey interpolation. He may not be aware of the embarrassing fact that monkeys lobbing darts at the Financial Times or the Wall Street Journal are better on average at stock picking than the City’s or Wall Street’s finest equity analysts and fund managers. I wouldn’t write off the monkeys just yet – they might be better at climate modelling than you suspect.

Robert of Texas
October 21, 2010 3:33 pm

“You’ve left out the primary statement buried in the Appendicies:-
“We definitely possibly maybe think we might know how it all works assuming our assumptions are right provided we really think we maybe have we’ve chosen the right assumptions…”
Could someone point me to this appendix? I think this is the first thing I have ever heard from the IPCC that actually made sense… 🙂

October 21, 2010 3:38 pm

What makes CAGW the perfect “green collar crime” is that the core perpetrators are not actually lying. When all this falls apart, the scientists will say – we always stressed the uncertainties and clearly said these were only model projections. The IPCC will say – the 90% confidence was clearly our opinion – we did not misrepresent it as a scientific probability, and look, we even reported all these uncertainties in the text – your problem if you didn’t read it properly. The politicians will say we just followed the scientists and the green industry will say we just followed government policy.
Not one misdemeanor in the whole $trillion scam.
Makes you want to have a piece of it, don’t it? Now where was that home solar power installer’s phone number.

October 21, 2010 3:44 pm

Way back in my Physics 101 days I learned a valuable lesson about laboratory experiments from my professor. It is better to do one very good experiment to get one very good measurement result than it is to do an experiment that requires you to average 1000 values to get a result. Based on this valuable lesson, I suggest that attempts to ensemble average a bunch of different climate models is a bad approach to doing climate science. Go out and do real science, climatologists:
1. Turn OFF your computer
2. Design a real experiment.
3. Go out into the field and conduct your experiment.
4. Return to your office with real experimental data.
5. Turn ON your computer.
6. Analyze your results.
7. Does experiment confirm the theory?
………NO? Publish it and GoTo 1
……..YES? Publish it and GoTo 1

R. M. Lansford
October 21, 2010 4:01 pm

bob says: (October 21, 2010 at 12:32 pm)
“Sewer models are not climate models!”
I assume this is humor, as the cautionary note stumpy referred to is apparently well understood by the “Team” – hence the call to “hide the decline”!.
If not humor, then perhaps it is your bias and lack of objectivity on display.
Bob

jorgekafkazar
October 21, 2010 4:19 pm

Professor Bob Ryan says: “…I was particularly amused by jorgekafkazar’s monkey interpolation. He may not be aware of the embarrassing fact that monkeys lobbing darts at the Financial Times or the Wall Street Journal are better on average at stock picking than the City’s or Wall Street’s finest equity analysts and fund managers. I wouldn’t write off the monkeys just yet – they might be better at climate modelling than you suspect.”
LOL. This merely highlights the low skill of equity analysts relative to monkeys. It does not establish the Ensemble as Der Übermönkey.

October 21, 2010 5:29 pm

“The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
OK, so they can’t actually model climate, and they won’t be able to until they look at the solar factors that are driving it.

Mike
October 21, 2010 6:35 pm

Levine has cleverly said nothing in such a way that some readers will think he has said something. All he has done is list statements of uncertainty in the IPCC report and claim the IPCC report ignores these. He has in no way demonstrated that the IPCC conclusions do not sufficiently account for the uncertainties. He just says they don’t; so there!
But those of you looking for confirmation bias will feel confirmed!

George E. Smith
October 21, 2010 6:58 pm

“”” Professor Bob Ryan says:
October 21, 2010 at 1:30 am
LabMunkey:
‘It’s worse than we thought…..’
Well not quite – there is considerable difference between model ensembles and individual models. Much of the IPCC’s caveats highlighted in this post are comments about the deficiencies of individual models which could be surmounted by a well constructed ensemble methodology and research design. In my view this is the direction of travel that climate research should follow. The problem is that it relies upon both individual and collective, highly coordinated effort. “””
I didn’t bother to append the rest of your post Professor; just enough to identify the citation.
I can see how your (purely mathematical) analysis might establish some correlations or other statistical metrics. But it all adds not one scintilla of knowledge about causality.
And don’t plead any “overwhelming preponderance of correlation.”
You would have to beat an agreement of experiment and theory to better than a part in 10^8 to get in the game; and sadly that example from science history circa 1960-70s failed to prove causation, and the theory was shown to be pure fiction. Well of course all science theories are pure fiction, as is all of mathematics. We make it all up out of whole cloth.
But this case was totally made up with no experimental input data whatsoever.
So don’t bet on statistics to solve science mysteries; the bar for burden of proof has already been placed out of reach, at an impossible level.
But I’ll keep you in mind when my next stock investments come up for review.

George E. Smith
October 21, 2010 7:09 pm

I should point out to Professor Ryan that one can do a perfectly elegant statistical analysis, or ensemble of statistical analyses, on all the telephone numbers in the Manhattan telephone directory, and no doubt reach some interesting results, including a nice average telephone number.
But it has no meaning or value unless it turns out to be the telephone number of the New York Stock Exchange – or maybe YOUR phone number.

AusieDan
October 21, 2010 7:56 pm

Professor Bob Ryan,
My understanding of the theory of group prediction which you suggest is as follows:
1. all participants are humans, not models.
2. all participants possess some information about some aspects of the problem being investigated.
3. some of the information is correct, some is in error.
4. The concept uses many independent participants, who must participate in private without knowledge of the submissions or thinking of their fellow participants.
5. The ideas expressed by many of the participants will be wrong. BUT these errors will be idiosyncratic and will therefore not be reinforced by the different errors of others; they will all remain inconsequential, scattered, low-level noise.
6. The ideas expressed that happen to be correct will accumulate and will come to predominate, as using many witnesses is likely to entail gathering the same correct ideas about fragments of the truth, many times over.
The IPCC modellers in contrast come from a very small clique who constantly communicate philosophies, ideas, systems and code.
Combining their models will never do anything to further the search for knowledge about climate or anything else.
You are searching for the Wisdom of Crowds.
The method you propose will only encounter the Madness of Crowds, the evil twin brother of wisdom.

John F. Hultquist
October 21, 2010 9:29 pm

anna v says: at 4:23 am Climate is average weather,…
It seems more so that climate is a “pattern over time” of things we think of as weather variables. For example, the Seattle, WA area is usually warm and dry in the N. H. summer but cool and damp in the winter. Meanwhile, Charleston, SC has its peak rainfall in August when it is warmest. If you only present the “average” of these things, then you miss the “pattern.”
Climates are much harder to come by than averages.
http://en.wikipedia.org/wiki/K%C3%B6ppen_climate_classification

Ammonite
October 21, 2010 11:53 pm

Dave Springer says: October 21, 2010 at 7:07 am
“The first test any model must pass is an accurate reconstruction of past climate change… If the climate history the model is designed to replicate is wrong then the model, by design, is wrong too.”
A very good point and one worthy of expansion. While past climate change includes periods distant in time such as ice ages (and the attendant difficulty in achieving sufficient accuracy of knowledge of that time), it also includes relatively well established recent phenomena such as the climate response to the Mt Pinatubo eruption. It includes observable shifts in relative temperature in recent decades such as night-time increases relative to day-time, polar increases over equatorial, winter over summer, tropospheric over stratospheric… It includes the rise in water vapour content associated with the recent warming trend, and so forth.
A model that produces the recent effects mentioned from bottom-up behaviour may still be incorrect of course, but it need not necessarily suffer from “wrong climate history”. Not perfect does not necessarily mean totally worthless.

October 22, 2010 12:12 am

I work in business. I’ve lost count of the number of executive summaries I’ve written and read. And I know that this is NOT executive summary material:
“The possibility that metrics based on observations might be used to constrain model projections of climate change has been explored for the first time, through the analysis of ensembles of model simulations. Nevertheless, a proven set of model metrics that might be used to narrow the range of plausible climate projections has yet to be developed.”
No executive I know would read that. It’s incomprehensible on a swift scan. A receiving executive would require a rewrite into English. Could it just have said:
“We’ve started to compare our models to reality, but failed so far to use this to make our predictions more reliable”?
This would have given executives something to go on. Perhaps this indicates how little the report has actually been read by executives. I guess that instead they’ve relied on others to tell them what it means… and so we see the results.

Robert E. Levine
October 22, 2010 1:31 am

Writing on October 21 at 6:35 pm, Mike is concerned that I have not “demonstrated that the IPCC conclusions do not sufficiently account for the uncertainties,” and that this omission will appeal to those looking for confirmation.
I think it was up to the authors of the IPCC report to assert and demonstrate that they accounted for all uncertainties, particularly those stated so clearly to be applicable. The Summary for Policymakers neither acknowledges the uncertainties I cited from the main report, nor states that they are accounted for. Mike’s implied suggestion that they might be accounted for does not prove that they have been, and in no way relieves the IPCC AR4 PSB authors of their responsibility to fully account for uncertainty and not publish a document with a misleading summary.
As I read the main document, some uncertainties appear to receive significant discussion and may be partially accounted for (e.g., the cloud simulation problem stated on Pages 593 and 638), while others do not appear to be accounted for at all (e.g., the lack of proven model metrics discussed on Page 591 and the problems of climatic state and poor model skill discussed on Page 608). However, the reader of this report, with its global scientific and policy impact, should not be obliged to make any inferences whatsoever concerning the accounting for uncertainties, both individually and in their aggregate overall impact on climate attribution and projection.
Moreover, the degree of uncertainty is in most instances stated in the SPM in terms of a level of confidence that is either high (about 8 out of 10 chance) or very high (at least 9 out of 10 chance). I suggest that the statement on Page 608 that the models must simulate the current climatic state “with some as yet unknown degree of fidelity” conflicts at face value with this asserted level of confidence, and that no special scientific or statistical expertise is required to so conclude. If Mike believes otherwise, I would like to know why.

Bill Toland
October 22, 2010 2:13 am

Rob R, you won’t see any climate modellers on this forum trying to defend their models.
As far as I can tell, there are two types of climate modeller.
The first type is a climate scientist who thinks he can program. Invariably, their programs are appalling and full of bugs.
The second type is a programmer who writes the model based on the instructions of a climate scientist. These programs are much better written but the programs are written based on the programmer’s understanding of climate science which is often completely lacking.
In both cases, the creator of the computer model will not try to defend their model because he knows either his programming is shaky or his science knowledge is poor. At least I have a background in astrophysics and computing and I have a lot of experience in creating computer models. Perhaps this is why my climate model shows so little warming in the future.
In an attempt to address some of the problems listed above, some models are created using a team of climate scientists and programmers. Unfortunately, this brings to mind the story about a camel being a horse designed by a committee.
The people who create these climate models know how iffy their models are; that is why they liaise so much with other modellers. If models from university B and university C both show much more warming than their model, they assume that they have done something wrong and adjust their models accordingly. People from outside the modelling clique have no idea of the amount of informal communication that goes on among modellers. Unfortunately, this has resulted in a lot of models being very similar in their assumptions and therefore in their results.

peakbear
October 22, 2010 4:46 am

Bill Toland says: October 22, 2010 at 2:13 am
Good summary Bill. Modeling can also lead to very lazy science and circular reasoning without people actually going out in the real world to measure, observe and collect real data.
The models themselves do have very good uses, such as a 3-day weather forecast for the UK for something like typical Atlantic weather coming in. It might be possible to improve the model to give, say, a 1% forecast improvement (whatever that would mean), but that obviously needs to be balanced against the observations out in the Atlantic that seeded the model in the first place. I don’t have the actual details, but I believe the observational network is declining while modeling budgets increase, so we’d probably get a better rate of return on 1000 new buoys to measure things, though that would involve going out into the cold wet weather. Also, satellites are used much more now and really can’t see through cloud very well, so the observations themselves can be quite bad at times.
Back to climate modeling, using these weather models for a 100 year forecast isn’t a sensible thing to do, it will just give you back the assumptions and parameterisations you put into the model in the first place.

Tim Clark
October 22, 2010 9:43 am

Not being up to date on programming, I usually don’t find these discussions very appealing. However, there is something that bothers me about this proposition by Dr. Levine.
My simplistic concept of these GCMs is that they contain numerous subsets of algorithms intended to quantify a host of variable parameters. Probably an extensive listing, but along the lines of [CO2 physics], [frontal/wind/jet stream patterns], [temp-historical], [atmospheric physics], etc. Then, additional mathematical machinations approximate the interactions, some linear, some not.
One of these subset algorithms, herein indicated [clouds], is described by the following quote from above:
“The shortwave impact of changes in boundary-layer clouds, and to a lesser extent mid-level clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable.”
and here:
“Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However, the evaluation of these assumptions is just beginning (Doutraix-Boucher and Quaas, 2004; Naud et al., 2006).
This results in the GCM analysis, simply stated, as [subset][subset][subset][cloud][etc]…
Dr. Levine, you know as well as I that the assumed forcing value contained in subset [cloud], which involves a large number of calculations in the IPCC report, is reported as a positive value ([+cloud]). This positive forcing is what produces CAGW. But as the IPCC itself admits, “the evaluation of these assumptions is just beginning.” CO2 alone gives only 1 degree or so (with a host of other assumptions).
Therefore, in order to entertain a diverse assembly of runs with a wide range of values to compensate for lack of knowledge, as you propose, some runs must also contain [-cloud]. So the equation for all runs averaged would be:
(X GCM runs of [subset]I[subset]I[subset]I[etc]I[+cloud] + X GCM runs of [subset]I[subset]I[subset]I[etc]I[-cloud]) / 2 = temp change per doubling of [CO2], where “I” denotes some mathematical interaction.
My question, of course, is: doesn’t the average of these result in zero cloud effect? I understand this is a very simple approach to GCMs, but in theory you are proposing just this, a democratic mathematical calculation – an average of assumptions. It probably would be closer to reality than the precontrived assumptions elaborated on above by others, but the result will still only be that, an average of assumptions. Tom Fuller essentially is proposing the same thing with his club of 2.5 in http://wattsupwiththat.com/2010/10/20/the-league-of-2-5/
Far better, in my opinion, to defer any policy decisions until such time as longer-term cyclic climate approximations can be refined. In the interim, we should drop the CAGW fearmongering and spend the money on proper data collection.
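A minimal numerical sketch of the cancellation Tim Clark describes; the cloud-feedback values are hypothetical and symmetric by construction:

```python
# Minimal sketch of the cancellation described above (values hypothetical).
# Toy "runs": each adds a non-cloud warming term and a cloud-feedback term of
# either sign; averaging runs with symmetric +/- cloud terms zeroes the cloud part.

non_cloud_warming = 1.0                      # degC per CO2 doubling, illustrative
cloud_terms = [+1.5, -1.5, +0.8, -0.8]       # symmetric +/- cloud contributions

runs = [non_cloud_warming + c for c in cloud_terms]
ensemble_mean = sum(runs) / len(runs)

print("Individual runs:", runs)              # [2.5, -0.5, 1.8, 0.2]
print("Ensemble mean:", ensemble_mean)       # 1.0 -> the cloud effect averages to zero
```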

Scott Covert
October 22, 2010 6:08 pm

Awesome comments!
I see some guest posts?
John Marshall?
Prof Ryan?
John Marshall, is your model open source?

Geoff
October 22, 2010 8:58 pm

Dear Prof. Ryan,
You’re welcome for the Knutti reference. If you are an AMS member you may want to look at the article “in press” at the Journal of Climate by Pennell. If not, I reprint the abstract of the article here:
“Projections of future climate change are increasingly based on the output of many different models. Typically, the mean over all model simulations is considered as the optimal prediction, with the underlying assumption that different models provide statistically independent information evenly distributed around the true state. However, there is reason to believe that this is not the best assumption. Coupled models are of comparable complexity and are constructed in similar ways. Some models share parts of the same code and some models are even developed at the same center. Therefore, the limitations of these models tend to be fairly similar, contributing to the well-known problem of common model biases and possibly to an unrealistically small spread in the outcomes of model predictions.
This study attempts to quantify the extent of this problem by asking how many models there effectively are and how to best determine this number. Quantifying the effective number of models is achieved by evaluating 24 state-of-the-art models and their ability to simulate broad aspects of 20th century climate. Using two different approaches, we calculate the amount of unique information in the ensemble and find that the effective ensemble size is much smaller than the actual number of models. As more models are included in an ensemble the amount of new information diminishes in proportion. Furthermore, we find that this reduction goes beyond the problem of “same-center” models and that systemic similarities exist across all models. We speculate that current methodologies for the interpretation of multi-model ensembles may lead to overly confident climate predictions”.
On the Effective Number of Climate Models, Christopher Pennell and Thomas Reichler, (Department of Atmospheric Sciences, University of Utah), Journal of Climate, in press
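For readers curious how such an “effective number of models” might be computed, here is a minimal sketch using the participation ratio of the eigenvalues of the inter-model correlation matrix (one common diagnostic, not necessarily the method Pennell and Reichler use); the 24 synthetic “model error fields” are hypothetical:

```python
# Minimal sketch of one way to estimate an "effective number" of models from how
# correlated their outputs are. Here 24 hypothetical model error fields are built
# from a few shared patterns plus noise, and the effective ensemble size is the
# participation ratio of the eigenvalues of their correlation matrix.

import numpy as np

rng = np.random.default_rng(2)
n_models, n_gridpoints, n_shared = 24, 500, 3

shared_patterns = rng.normal(size=(n_shared, n_gridpoints))
weights = rng.normal(size=(n_models, n_shared))
model_errors = weights @ shared_patterns + 0.5 * rng.normal(size=(n_models, n_gridpoints))

corr = np.corrcoef(model_errors)                 # 24 x 24 inter-model correlation
eigvals = np.linalg.eigvalsh(corr)
n_effective = eigvals.sum() ** 2 / (eigvals ** 2).sum()

print(f"{n_models} nominal models, effective ensemble size ~ {n_effective:.1f}")
# Because the members share a few common error patterns, the effective size comes
# out far below 24, which is the qualitative result the abstract describes.
```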

P Wilson
October 23, 2010 2:27 am

Professor Bob Ryan says:
October 21, 2010 at 3:20 pm
Oh I see. Well, let’s forget the monkeys, as we have to find a correlation that can be modelled based on human activity, and make it appear as a causal affinity, not the correlation that it is.
I propose that the increase in mobile phone use, coupled with the increase in global TV transmission, say from 1979 to the present day, could be programmed, and a correlation with the (very nominal) increase in temperatures would definitely be found.
This would dispense with the increasingly unconvincing need for CO2 as the culprit, which could be programmed out of the models.
It would still be b*ll**** but at least it would lay the blame on humanity’s doorstep.
What do you think?

Tenuc
October 23, 2010 2:39 am

Geoff says:
October 22, 2010 at 8:58 pm
Quantifying the effective number of models is achieved by evaluating 24 state-of-the-art models and their ability to simulate broad aspects of 20th century climate. Using two different approaches, we calculate the amount of unique information in the ensemble and find that the effective ensemble size is much smaller than the actual number of models. As more models are included in an ensemble the amount of new information diminishes in proportion. Furthermore, we find that this reduction goes beyond the problem of “same-center” models and that systemic similarities exist across all models. We speculate that current methodologies for the interpretation of multi-model ensembles may lead to overly confident climate predictions”.
Good find, Geoff!
All the models used by the IPCC rest on the same basic assumptions about how climate operates, with CO2 as the driver and an assumed positive feedback from water vapour.
However, in the real world temperature leads CO2 and extra water vapour has a negative feedback. No surprise that the current GCMs cannot model the detail of historic climate, and have no predictive power.
This deception of politicians and the public is shameful.

Karmakaze
October 23, 2010 1:21 pm

There are nearly 50 thousand members of the APS. How many of them supported this guy’s views?
Is that why he quit?