New peer-reviewed paper shows just how bad the climate models really are

One of the biggest issues raised by climate science skeptics, if not the biggest, is over-reliance on computer model projections to suggest future outcomes. In this paper, climate models were hindcast-tested against actual surface observations and found to be seriously lacking. Just have a look at Figure 12 (mean temperature vs. models for the USA) from the paper, shown below:

Fig. 12. Various temperature time series spatially integrated over the USA (mean annual), at annual and 30-year scales.

The graph above shows temperature in the blue lines and model runs in other colors. Not only do the curve shapes fail to match; the temperature offsets are significant as well. In the study they also looked at precipitation, which fared even worse in correlation. The bottom line: if the models do a poor job of hindcasting, why would they do any better in forecasting? This from the conclusion sums it up pretty well:

…we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.

The paper, from the Hydrological Sciences Journal, is available online here as HTML and as PDF (~1.3 MB); selected sections are given below:

A comparison of local and aggregated climate model outputs with observed data

Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. and Mamassis, N. ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55(7), 1094–1110.

Abstract

We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.

Citation Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094-1110.

INTRODUCTION

According to the Intergovernmental Panel on Climate Change (IPCC), global circulation models (GCMs) are able to “reproduce features of the past climates and climate changes” (Randall et al., 2007, p. 601). Here we test whether this is indeed the case. We examine how well several model outputs fit measured temperature and rainfall in many stations around the globe. We also integrate measurements and model outputs over a large part of a continent, the contiguous USA (the USA excluding islands and Alaska), and examine the extent to which models can reproduce the past climate there. We will be referring to this as “comparison at a large scale”.

This paper is a continuation and expansion of Koutsoyiannis et al. (2008). The differences are that (a) Koutsoyiannis et al. (2008) had tested only eight points, whereas here we test 55 points for each variable; (b) we examine more variables in addition to mean temperature and precipitation; and (c) we compare at a large scale in addition to point scale. The comparison methodology is presented in the next section.

While the study of Koutsoyiannis et al. (2008) was not challenged by any formal discussion papers, or any other peer-reviewed papers, criticism appeared in science blogs (e.g. Schmidt, 2008). Similar criticism came from two reviewers of the first draft of this paper, hereinafter referred to as critics. In both cases, it was only our methodology that was challenged and not our results. Therefore, after presenting the methodology below, we include a section “Justification of the methodology”, in which we discuss all the critical comments, and explain why we disagree and why we think that our methodology is appropriate. Following that, we present the results and offer some concluding remarks.

The models they tested are listed in the paper (table omitted here).

Comparison at a large scale

We collected long time series of temperature and precipitation for 70 stations in the USA (five of which were also used in the point-scale comparison). Again the data were downloaded from the web site of the Royal Netherlands Meteorological Institute (http://climexp.knmi.nl). The stations were selected so that they are geographically distributed throughout the contiguous USA. We selected this region because of the good coverage of data series satisfying the criteria discussed above. The stations selected are shown in Fig. 2 and are listed by Anagnostopoulos (2009, pp. 12-13).

Fig. 2. Stations selected for areal integration and their contribution areas (Thiessen polygons).

In order to produce an areal time series we used the method of Thiessen polygons (also known as Voronoi cells), which assigns weights to each point measurement that are proportional to the area of influence; the weights are the “Thiessen coefficients”. The Thiessen polygons for the selected stations of the USA are shown in Fig. 2.
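
The polygons themselves need not be constructed explicitly to get the weights: since a Thiessen/Voronoi cell is simply the set of locations closer to its station than to any other, the Thiessen coefficients can be approximated by nearest-station assignment over a dense sample of points covering the region. A minimal sketch of this idea (not the authors' code; the coordinates and data are invented, and a real application would project longitude/latitude to a planar system and sample only points inside the USA boundary):

```python
import numpy as np

def thiessen_coefficients(station_xy, region_points):
    """Approximate Thiessen (Voronoi) weights: assign each region sample
    point to its nearest station; a station's weight is the fraction of
    points it captures, i.e. its approximate share of the region's area."""
    # Distance from every region sample point to every station
    d = np.linalg.norm(region_points[:, None, :] - station_xy[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(station_xy))
    return counts / counts.sum()

def areal_average(station_values, weights):
    """Thiessen-weighted areal mean; station_values is (n_years, n_stations)."""
    return station_values @ weights

# Toy usage with invented coordinates (km) and annual mean temperatures (deg C)
rng = np.random.default_rng(0)
stations = rng.uniform(0, 1000, size=(70, 2))
grid = np.stack(np.meshgrid(np.linspace(0, 1000, 200),
                            np.linspace(0, 1000, 200)), axis=-1).reshape(-1, 2)
weights = thiessen_coefficients(stations, grid)
annual_temps = rng.normal(12.0, 1.0, size=(100, 70))
areal_temps = areal_average(annual_temps, weights)
```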

The annual average temperature of the contiguous USA was initially computed as the weighted average of the mean annual temperature at each station, using the station’s Thiessen coefficient as weight. The weighted average elevation of the stations (computed by multiplying the elevation of each station with the Thiessen coefficient) is Hm = 668.7 m and the average elevation of the contiguous USA (computed as the weighted average of the elevation of each state, using the area of each state as weight) is H = 746.8 m. By plotting the average temperature of each station against elevation and fitting a straight line, we determined a temperature gradient θ = -0.0038°C/m, which implies a correction of the annual average areal temperature θ(H – Hm) = -0.3°C.
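
The arithmetic checks out: θ(H – Hm) = –0.0038 × (746.8 – 668.7) ≈ –0.3°C. A minimal sketch of the same two steps (a least-squares fit for the lapse rate, then the offset), with invented station data standing in for the real series and the paper's two elevation values:

```python
import numpy as np

# Invented per-station data: elevation (m) and long-term mean temperature (deg C)
elevation = np.array([100.0, 350.0, 700.0, 1200.0, 2000.0])
mean_temp = np.array([14.2, 13.1, 11.8, 9.9, 6.9])

# Slope of the least-squares line = temperature gradient theta (deg C per m)
theta, intercept = np.polyfit(elevation, mean_temp, 1)

H_m = 668.7  # Thiessen-weighted mean elevation of the stations (paper's value)
H = 746.8    # area-weighted mean elevation of the contiguous USA (paper's value)

correction = theta * (H - H_m)  # with theta = -0.0038 this is about -0.3 deg C
print(f"gradient = {theta:.4f} deg C/m, correction = {correction:+.2f} deg C")
```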

The annual average precipitation of the contiguous USA was calculated simply as the weighted sum of the total annual precipitation at each station, using the station’s Thiessen coefficient as weight, without any other correction, since no significant correlation could be determined between elevation and precipitation for the specific time series examined.

We verified the resulting areal time series using data from other organizations. Two organizations provide areal data for the USA: the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). Both organizations have modified the original data by making several adjustments and using homogenization methods. The time series of the two organizations have noticeable differences, probably because they used different processing methods. The reason for calculating our own areal time series is that we wanted to avoid any comparisons with modified data. As shown in Fig. 3, the temperature time series we calculated with the method described above are almost identical to the time series of NOAA, whereas in precipitation there is an almost constant difference of 40 mm per year.

Fig. 3. Comparison between areal (over the USA) time series of NOAA (downloaded from http://www.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html) and areal time series derived through the Thiessen method; for (a) mean annual temperature (adjusted for elevation), and (b) annual precipitation.

Determining the areal time series from the climate model outputs is straightforward: we simply computed a weighted average of the time series of the grid points situated within the geographical boundaries of the contiguous USA. The influence area of each grid point is a rectangle whose “vertical” (perpendicular to the equator) side is (ϕ2 – ϕ1)/2 and its “horizontal” side is proportional to cosϕ, where ϕ is the latitude of each grid point, and ϕ2 and ϕ1 are the latitudes of the adjacent “horizontal” grid lines. The weights used were thus cosϕ(ϕ2 – ϕ1); where grid latitudes are evenly spaced, the weights are simply cosϕ.
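
A rough sketch of that weighting (the grid layout and series are invented; for an evenly spaced grid the (ϕ2 – ϕ1) factor is constant and cancels when the weights are normalised, leaving plain cos ϕ):

```python
import numpy as np

def areal_model_series(grid_series, lats_deg):
    """Average model grid-point series with cos(latitude) weights.
    grid_series: (n_years, n_points); lats_deg: latitude of each point."""
    w = np.cos(np.radians(lats_deg))
    w /= w.sum()  # normalisation cancels any constant spacing factor
    return grid_series @ w

# Invented 2.5-degree grid: 9 latitude rows x 10 points per row over the USA
lats = np.repeat(np.arange(27.5, 50.0, 2.5), 10)
model_series = np.random.default_rng(1).normal(11.0, 0.8, size=(140, lats.size))
usa_model_mean = areal_model_series(model_series, lats)
```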

CONCLUSIONS AND DISCUSSION

It is claimed that GCMs provide credible quantitative estimates of future climate change, particularly at continental scales and above. Examining the local performance of the models at 55 points, we found that local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.

However, we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms. Several publications, a typical example being Rial et al. (2004), point out the difficulties that the climate system complexity introduces when we attempt to make predictions. “Complexity” in this context usually refers to the fact that there are many parts comprising the system and many interactions among these parts. This observation is correct, but we take it a step further. We think that it is not merely a matter of high dimensionality, and that it can be misleading to assume that the uncertainty can be reduced if we analyse its “sources” as nonlinearities, feedbacks, thresholds, etc., and attempt to establish causality relationships. Koutsoyiannis (2010) created a toy model with simple, fully-known, deterministic dynamics, and with only two degrees of freedom (i.e. internal state variables or dimensions); but it exhibits extremely uncertain behaviour at all scales, including trends, fluctuations, and other features similar to those displayed by the climate. It does so with a constant external forcing, which means that there is no causality relationship between its state and the forcing. The fact that climate has many orders of magnitude more degrees of freedom certainly complicates the situation further, but in the end it may be irrelevant; for, in the end, we do not have a predictable system hidden behind many layers of uncertainty which could be removed to some extent, but, rather, we have a system that is uncertain at its heart.
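
Koutsoyiannis's two-degree-of-freedom toy model is specified in the 2010 paper; as a stand-in illustration of the general point, here is the classic Hénon map: fully deterministic, two state variables, constant parameters, yet chaotic. (It is not the paper's model, and it lacks the long-range persistence that model also exhibits, so this only illustrates determinism without predictability.)

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x=0.1, y=0.1):
    """Henon map: x' = 1 - a*x^2 + y, y' = b*x. Two degrees of freedom,
    fixed parameters (constant "forcing"), fully deterministic, chaotic."""
    out = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = x
    return out

series = henon(30000)
print(series.min(), series.max())               # bounded, yet never settles
print(series.reshape(-1, 30).mean(axis=1)[:5])  # 30-step means still fluctuate
```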

Do we have something better than GCMs when it comes to establishing policies for the future? Our answer is yes: we have stochastic approaches, and what is needed is a paradigm shift. We need to recognize the fact that the uncertainty is intrinsic, and shift our attention from reducing the uncertainty towards quantifying the uncertainty (see also Koutsoyiannis et al., 2009a). Obviously, in such a paradigm shift, stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. Despite a common misconception of stochastics as black-box approaches whose blind use of data disregards the system dynamics, several celebrated examples, including statistical thermophysics and the modelling of turbulence, emphasize the opposite, i.e. the fact that stochastics is an indispensable, advanced and powerful part of physics. Other simpler examples (e.g. Koutsoyiannis, 2010) indicate how known deterministic dynamics can be fully incorporated in a stochastic framework and reconciled with the unavoidable emergence of uncertainty in predictions.

h/t to WUWT reader Don from Paradise

John Q Public

Haven’t we learned anything from models that try to forecast weather or the economy … or, even that sub-prime asset backed investments were safe?
Statistics, mathematics, and program code are just the new smoke and mirrors for the illiterate AGW believer.
Dumb, dumber, dumbest.

Earl Wood

It is about time we’ve gotten around to comparing these models to actual data. This should have been done for each model, in every publication that uses one, to give the reader an idea of its accuracy. This is the standard for other fields in science.

Joshua J

However, the GCMs are apparently very good at generating hockey stick graphs out of random data.

anna v

Great. If I were ten years younger I would have tried to weasel my way into working with their group. They obviously know their physics and statistics.

Lucia has shown the bad fit of model temperature outputs with reality, but I do not know whether she is aiming at publishing in peer review.

Lew Skannen

This is exactly where the attack needs to concentrate. I have never understood why we keep battling AGW alarmists on territory that is no use to anyone, i.e. raw data (was this year hotter? how much ice melted? what caused that flood? etc.).
The second weakest link in the whole AGW chain is the modelling. It is so clear that the models are just feeble guesses which crumble on first contact with evidence. It is quite likely that modelling is impossible given the chaotic nature of the problem.
(The first weakest link, by the way is the steering mechanism that AGW alarmists think that they have control over. It is the giant thermostat in the sky that they are currently trying to appease by sacrificing dollars in Cancun…)

rob m

The second paragraph in the conclusion pretty much sums up why I don’t believe in AGW.

Robert

I can’t think of any field where this kind of inaccuracy in modeling would be OK. No place where real money is at stake, certainly. Have these people no standards? Does nobody think they ought to check anything? Do they honestly think that trying hard is all you need?
The mind boggles.
Thank you for publishing these results. It may be painful, but not as painful as the results of taking models more seriously than they deserve.

Brian H

But … hydrologists aren’t UEA-approved climatologists! What can they possibly know?

stumpy

I have also looked at model rainfall and temp data for NZ and none matches the observed – it’s not restricted to just the USA, it’s a global problem!

anna v

It is worth noting that this paper exposes the real reason why “anomalies” have been foisted on us. Anomalies, which are a delta(T), study the shapes of terrains while ignoring the height. Under a gravitational field, for example, one expects the shapes of mountains to be similar, but can one really average the Himalayas anomaly (average height of the mountains taken as the base of the anomaly) with a hilly country’s anomaly and come out with an anomaly in height that has any meaning for anything real?

Pat Frank

Demetris’ paper showing deterministic chaos in a toy model was published in 2006 and is here, rather than at the “2010” linked above. It’s important to note that the hindcast reconstruction method Demetris and his co-authors used maximized the likelihood that the GCMs would get it right.
In a conversation I had with Demetris by email awhile ago, he agreed that hindcast tests such as he’s done really could have, and should have, been done 20 years ago. That would have spared us all a lot of trouble and expense.
It’s a fair question to ask professionals such as Gavin Schmidt or Kevin Trenberth why they didn’t think to test what they have so long asserted.
Thanks for posting the work, Anthony, and Demetris, may you and your excellent colleagues walk in the light of our admiration, and may some well-endowed and altruistic foundation take benevolent notice of your fine work. 🙂

Robert M

It’s worse than we thought!
and…
There are way too many Roberts, Rob Ms and me hanging around this board…

Roger Carr

Robert says: (December 5, 2010 at 10:00 pm) I can’t think of any field where this kind of inaccuracy in modeling would be OK. No place where real money is at stake, certainly.
There is real money at stake, Robert. That would seem to be the reason why “this kind of inaccuracy in modeling” is acceptable, or perhaps that should be welcomed, or perhaps even “necessary”…

english2016

At what point does Al Gore get charged with perpetrating a fraud, and the Nobel Prize withdrawn?

ZZZ

I don’t think that following the suggestion presented in this latest paper about how to model climate — namely, assuming that the change in climate is a stochastic, random process modulated by the physical processes involved — will affect the basics of the climate argument much. Instead of asserting that there will be, say, a 4C temperature rise in the next century, now alarmists will say something like “there is a 90% chance of a 4C or higher temperature rise in the next century” and the skeptics will retort that, to the contrary, the chance of that happening is much smaller — that it is less than, say, 10%. The essence of the argument will not go away, and the new stochastic climate forecasts will come to resemble current short-range weather forecasts with their predictions that tomorrow there is a certain percentage chance of rain, snow, etc. Another point worth remembering is that we do observe a great deal of non-randomness in how the climate changes over very long times — for example the beginning and end of the ice ages have followed a fairly regular 100,000 year to 130,000 year cycle over the last several million years.

Andrew30

Climate models are an illustration of what the ‘climate scientist’ wants you to think that they know.
Comparing a climate model to reality is a simple way of illustrating the ‘climate scientist’s’ ignorance, amorality and hubris.

Doug in Seattle

The Team will argue that Koutsoyiannis and his little band of deniers in Greece are playing in the wrong sandbox again. They are hydrologists, not climatologists, and their article is not published in a Team-controlled, approved journal.

Leon Brozyna

Computer models? Something like the models the National Weather Service used to forecast our recent snow event here in Buffalo?
Let’s see how well that one turned out.
Wednesday morning they were still calling for us to get 2 to 4 inches of snow before the band of lake effect snow drifted south to ski country. They were doing pretty good, till that evening, when the band changed direction and drifted back to its start point over us and stayed put and kept on dumping snow so that, instead of 2-4 inches, we got 2-3 feet of heavy wet snow.
The only thing the models got right was there was going to be lake effect snow. They didn’t even get the amount right. Originally they called for 1-2 feet in ski country. My neighboring town got 42″, I only got about 30″. Ski country only got a few inches.
One good thing … I was able to burn hundreds of calories an hour shoveling all that global warming.
An added word about lake effect events (snow and even rain) … they are a bear to nail just right. Wind speed, humidity, direction, temperature all have to be just right. I can sympathize with what local forecasters have to come up with. But I don’t give that kind of understanding to the climate scientists and their models which pretend to encompass the globe and cover all possible variations.

anna v

Pat Frank says:
December 5, 2010 at 10:39 pm
In a conversation I had with Demetris by email awhile ago, he agreed that hindcast tests such as he’s done really could have, and should have, been done 20 years ago. That would have spared us all a lot of trouble and expense.
It’s a fair question to ask professionals such as Gavin Schmidt or Kevin Trenberth why they didn’t think to test what they have so long asserted.

But I am sure they did have these studies. That is why they came up with the brilliant idea of anomalies, as I discuss above.
It is true that matter, when subjected to forces, behaves in a similar manner, simply because there are only so many ways the gravitational and electromagnetic forces can impact it and only so many ways matter can react: elastic, inelastic, etc. That is why, when we look at a terrain, we need an absolute scale to be able to know whether we are looking at low-level rock formations or the Alps. The scale is very important to life and limb. Similarly for waves: we need an absolute scale to know whether it is a storm in a teacup or out in the wild wild world.

anna v

ZZZ says:
December 5, 2010 at 10:46 pm
I don’t think that following the suggestion presented in this latest paper about how to model climate — namely, assuming that the change in climate is a stochastic, random process modulated by the physical processes involved — will affect the basics of the climate argument much. Instead of asserting that there will be, say, a 4C temperature rise in the next century, now alarmists will say something like “there is a 90% chance of a 4C or higher temperature rise in the next century”
The models as they are now do not propagate errors and thus cannot give a consistent probability for the expected output. The spaghetti plots are sleight of hand, a where-is-the-pea method of hiding this.
Once one gets models that have error propagation, trust will go up because they will be truly predicting and not handwaving.
I still believe that an analogue computer specifically designed for climate would be a solution. Or chaotic models along the lines of Tsonis et al.

Thanks very much, Anthony, for this post, and Pat and all for the comments.
You may find it interesting to see the accompanying Editorial by Zbigniew W. Kundzewicz and Eugene Z. Stakhiv in the same issue (go to the official journal page linked above and hit “issue 7” near the top of the page to get to the issue contents). There is also a counter-opinion paper by R. L. Wilby just below the Editorial.

Dean McAskil

With my pathetic undergraduate mathematics I thought the comments in para 2 of the conclusions were self-evident.
I am sure I have read studies and mathematical proofs before (I will have to hunt them down), relating to climate, financial markets and purely mathematical models, that proved conclusively that stochastic systems with many degrees of freedom simply cannot be modelled to predict extreme events. In particular, for financial markets, extreme events such as bubbles or busts cannot be predicted at all. This would seem analogous to runaway climate warming, or cooling for that matter.
And this doesn’t appear to me to be a difficult concept to grasp. I have never quite understood this faith that AGW proponents place in the GCM models. So I assumed it was my ignorance.

very kool Dr. K
Voronoi cells. Hydrology types seem to be the only guys who use this. Nice

Phillip Bratby

In a nutshell, climate models are unvalidated.
In a word, climate models are invalid.

davidmhoffer

Lew Skannen;
The second weakest link in the whole AGW chain is the modelling….
(The first weakest link, by the way is the steering mechanism that AGW alarmists think that they have control over>>
The weakest link is in fact the physics. The models, the temperature records, the glaciers receding, the tree ring proxies, sea ice extent…these are all distractions from the actual physics. The AGW proponents keep changing the subject from one misleading data set to the next until their argument is so warped that polar bear populations quadrupling becomes proof that they are going extinct due to global warming.
The fact is that they won’t discuss the physics because they can’t win the argument on the fundamentals, so they ignore them. But it won’t change the facts:
CO2 is logarithmic. The most warming that CO2 can cause is long behind us and it would take centuries of fossil fuel consumption at ten to a hundred times what we are using now to get another degree out of it over what we are already getting.
Almost no warming happens at the equator, the most happens at the poles. Most of what happens at the poles happens during winter. Most of what happens during the winter happens at the night time low.
So a really hot day at the equator goes from a day time high of +36 to +36.1 and a really cold night, in winter, at the pole, goes from -44 to -36. The lions and tigers aren’t likely to notice, and neither will the polar bears. Well, unless a climatologist shows up to study the polar bears and it warms up enough that they come out of hibernation. On the other hand, WE might notice less in that case due to a scarcity of climatologists.
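
For readers who want the formula behind “CO2 is logarithmic”: the commonly used simplified expression is ΔF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998), so each doubling contributes the same forcing. A minimal sketch; the 0.8°C per W/m2 sensitivity below is an assumption chosen purely for illustration, not a value from the paper or the comment:

```python
import numpy as np

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified Myhre et al. (1998) expression: dF = 5.35*ln(C/C0) W/m^2."""
    return 5.35 * np.log(c_ppm / c0_ppm)

# Each doubling adds the same increment of forcing:
for c in (280, 560, 1120, 2240):
    print(f"{c:5d} ppm -> {co2_forcing(c):5.2f} W/m^2")

# Illustrative equilibrium warming, assuming 0.8 deg C per W/m^2:
print(f"390 ppm vs preindustrial: {0.8 * co2_forcing(390.0):.2f} deg C")
```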

Phillip Bratby

Now tell me, how do we know what level of CO2 will give us 2°C of warming? An accurate figure would be appreciated.
I’m sure the guys and gals at UKCIP (http://www.ukcip.org.uk/) can tell us very accurately.
At http://ukclimateprojections.defra.gov.uk/content/view/857/500/
they provide projections of climate change, and absolute future climate for:
* Annual, seasonal and monthly climate averages.
* Individual 25 km grid squares, and for pre-defined aggregated areas.
* Seven 30 year time periods.
* Three emissions scenarios.
* Projections are based on change relative to a 1961–1990 baseline.
Now that is what I call good science.

So the models get more complicated and the computers more powerful, and all they are able to produce is just some kind of variation of the old Keeling curve.
Notice that GCMs are not able to model the PDO/AMO cycle at all, since all they are working with is the fictional “radiative forcing” concept. They are tuned to catch the 1975-2005 warming trend, but they wildly diverge from reality before and after.
Polar regions, allegedly the most sensitive areas to “increased greenhouse forcing” do not show any sign of it: Arctic shows just AMO variation and Antarctic shows even slight cooling.
http://i43.tinypic.com/14ihncp.jpg
This alone totally and unequivocally disqualifies the AGW theory. No other scientific hypothesis would survive such a discrepancy between theory and observation. Shame on all scientists who keep their mouths shut.

Brian H

Pat Frank, thank you for your comments. You say:

Demetris’ paper showing deterministic chaos in a toy model was published in 2006 and is here, rather than at the “2010” linked above.

Actually the 2010 is correct. It is a more recent toy model by Koutsoyiannis, in what we think is his best paper to date.
ZZZ:

we do observe a great deal of non-randomness in how the climate changes over very long times — for example the beginning and end of the ice ages have followed a fairly regular 100,000 year to 130,000 year cycle over the last several million years.

The fact that you have cycles that resemble periodicity does not necessarily mean that they are non-random. (See the 2010 paper by Koutsoyiannis linked above for a definition of randomness.) The toy model of Koutsoyiannis (in the same paper) has unpredictable cycles and trends without any difference in forcings. Having unpredictable cycles and trends can be more random than not having them, because if you have a constant long-term average “signal” plus “noise”, then you know more than if you don’t have even that. I also explain that in the epilogue of my HK Climate web site.

Doubting Thomas

Dear Greeks, Well done! Very well done!
U.S. climate scientists: With more than a billion a year allocated for research … who wants answers?
Anna … You’re smart. Very smart.
Davidmhoffer … You too. [snip]
Our climate may well be too chaotic to model in the fine grain. But even the proverbially “chaotic” r^2 equation is bounded. In fact it’s closely constrained between a max and a min. Make an r^2 plot and it has lots of chaotic ups and downs. But stand far enough back and all you see is a flat line. Our climate is probably like that. Certainly on multi-millennial scales. It’s almost never -20°C in LA, and never plus 35°C at the poles.
dT
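
If the “r^2 equation” above is read as the logistic map x → r·x(1 – x), which is presumably what is meant, the point about boundedness is easy to demonstrate: the iterates bounce around chaotically yet never leave a fixed interval, and their long-run mean is stable. A minimal sketch:

```python
def logistic(n, r=3.9, x=0.2):
    """Iterate the logistic map x -> r*x*(1-x): chaotic for r near 4,
    yet every iterate stays inside [0, 1] (the max of r*x*(1-x) is r/4)."""
    xs = [x]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

xs = logistic(10000)
print(min(xs), max(xs))   # bounded: never leaves [0, 1]
print(sum(xs) / len(xs))  # "stand far enough back": a stable flat mean
```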

Roger Carr

anna v says: (December 5, 2010 at 10:32 pm) It is worth noting that this paper exposes the real reason why “anomalies” have been foisted on us. Anomalies, which are a delta(T) study the shapes of terrains ignoring the height. (and at December 5, 2010 at 11:00 pm) Similarly for waves, we need an absolute scale to know whether it is a storm in a teacup or out in the wild wild world.
Thank you, Anna. I always read your posts, and always either learn or find myself puzzling — my hope is that our present scientists-in-the-making do likewise.
    Better still, perhaps our present crop will take note and learn from your wisdom (and gentle humour).

Jimmy Haigh

Is it ironic that AGW – global warming caused by human-derived CO2 – is a figment of the imagination of an SiO2-based lifeform? The next element in the group is Germanium. Then Tin and Lead.

Latimer Alder

Very interesting (and very topical) discussions with some real live climate modellers (all of whom are terrified of our esteemed host here and so won’t turn up) at Judith Curry’s blog.
Simply put they do not see verification and validation as a priority. Their work is ipso facto so brilliant that no checks (by observation or by external scrutiny) are needed.
Here’s the link.

Having failed to make accurate short-term forecasts (the Met Office have an atrocious forecasting record and, e.g., singularly failed to forecast the snow in Glasgow – I know because I started some outdoor work and removed all the gutters because the five-day forecast was “light snow” on a single day followed by clear weather; I now have 2-foot icicles hanging from the entire roof)
So … clearly the modellers tried to justify the huge amount of money the public spent on their toy computers by claiming that “whilst we can’t predict the weather … we can predict the climate”.
HA HA HA HA HA HA HA!
What a joke they are!

Alexander K

‘Duh!’ as Mr Simpson would so cogently express his frustration with and scorn for the dumb ideas that supposedly intelligent blokes with proper PhDs and everything have persuaded the alarmist and alarmed world to take seriously for far too long.
When I was a boy, I built model aeroplanes badly, being blessed with more than my share of metaphorical thumbs plus a very limited set of construction skills, but the quality of my models never stopped me dreaming they were real and capable of marvellous aeronautical feats. Sometimes they even flew.
When I grew up I realised that childhood dreams should remain in childhood, but, sadly, many clever kids never mature to full and responsible adulthood; a few of them become scientists and go on to scare the world with their clever but childish fantasies. And sadly, many ordinary people seem to like the thrill of being scared, but never willingly put themselves in any kind of actual danger, so the timid fasten on to the scaremongers for our ration of thrills and the Marxists fasten on to the models to satisfy their anti-humanity rage and control freakery. It is very interesting that many (but not all) of the alarmist tribe never go motor-racing, blue-water yachting, mountain-climbing or play vigorous contact sports and so never put themselves in a situation which might supply a good ration of adrenaline or even plain old-fashioned fun, which, apart from the joy of control or exterminating something, is not permitted by the Marxist doctrine.
And finally, we find the alarmist scientists’ juvenile climate models are c**p – wow, who woulda thunk it!!

Martin Lewitt

Demetris,
I note that you did not cite the Wentz publication in the journal Science that found that the models reproduced less than one-third to one-half of the increase in precipitation observed during the recent warming. Unfortunately, model-based studies projecting increased risk of drought in the future fail to discuss this result and its implications for model credibility in representing future warming-associated increases in precipitation. I have wondered if they’ve been able to ignore the Wentz paper because it doesn’t report the specific model results. Your work may be more difficult to ignore, but I did not see a specific comparison of the increase in precipitation during the warm decades in the models relative to the increase seen in the observations. It would help to know if your results are consistent with those of Wentz. I’d have to read the Wentz paper again, but I assume we’d expect to see this in a comparison of the 80s or 90s with the 70s or 60s. Thanx.

Roger Carr says: “There is real money at stake, Robert.“.
True. But is there a difference between the way in which money is at stake here, compared with how it is normally at stake? When organisations build models, it is usually with the aim of making money or saving money. Their own money. So the models tend to get tested carefully, and are ditched if they aren’t up to standard.
Is it possible that the climate models’ true aim is to lose vast amounts of other people’s money? If so, then accuracy in the models would not help achieve this, would therefore not be an objective, and need not be tested for. A model could then be coded to produce certain required results, and could remain in operation for as long as it can produce those results.
Nah. Absurd. Forget it.

Peter Miller

Another classic example of the difference between real science and ‘climate science’ – no different from Mannian Maths versus real statistics.
Any real branch of science would have a pre-requisite of testing a model by hindcasting, but with ‘climate science’ this is not deemed desirable or necessary, unless the raw data has been sufficiently manipulated to fit the models.

David

I think the UK Met Office £33m computer, currently called ‘Deep Black’, should be renamed ‘Deep Sh*t’ and taken to PC World to get their/our money back…

Henry Galt

Apart from the fact that this should have been done, thoroughly, before a single character of any IPCC report was struck, I don’t mind taxes spent on exposing the bleeding obvious if it saves us all money.

londo

How can there be offsets? The absolute temperature is of critical importance to the radiation balance, and it must be an initial condition of the simulation. If they don’t get that right, especially with an error of 2.5-3 degrees C, the error exceeds the so-called CO2 forcing during the 20th century several times over.
Also, I’ve spent many years fighting divergences in numerical simulations. With linear problems, it can almost be done, at least in many cases. I have always assumed that people like Gavin S. with degrees in applied math have these things under control. Now, I’m not all that confident they do.

This confirms what we knew in essence: that it’s darn hard to model, over a long period of time, any system which is in a constant state of flux as to what exactly constitutes the system, and hence where the ‘model boundary’ should exist – on so many dimensions. Models work best with well-defined and understood materials within a finite and often small ‘domain’ – as soon as you go beyond this, the usefulness of the model quickly tails off and you just end up observing what is in essence noise.
Although it is very nice to have it written up in the literature in such a precise way, I must say – well done!

Roger Knights

Latimer Alder says:
December 6, 2010 at 1:00 am
Very interesting (and very topical discussions with some real live climate modellers meddellers …

Fixed.

DirkH

Wonderful work. Drive the stake deeper into the heart of the GCMs, boys. This waste of money has to be stopped.

Mac

AGW is based on physics.
CAGW is based on computer models.
AGW may or may not be happening.
CAGW is plainly rubbish.

Mike D.

In this paper climate models were hindcast tested against actual surface observations, and found to be seriously lacking.
Let’s be clear about this. GCMs do not match the past, which is known. The modelers disdain the past. They don’t care that their models don’t fit known data, i.e. the reality of the known past. They excuse that lack of correlation as too mundane to matter. Their theoretical framework, a gross oversimplification of a complex system, is not based on empirical data. Their model outputs fail when compared to known reality.
It’s laughable to consider whether GCM “projections” into the future are “science”. Science is the study of reality (we could argue about what “reality” is, and get bogged down in epistemology, but there is a common ground we mostly all share). The GCMs do not comport with reality as is generally agreed upon, and as indeed the modelers themselves agree.
If the nut does not fit the bolt today, it is not going to suddenly fit that bolt tomorrow.

Jessie

Robert says: December 5, 2010 at 10:00 pm
All the Roberts, I can think of where modelling is useful.
Self interest.
What’s the bet that it is the parameters (ever shape-shifting) that frame the financial, gambling, voting, acquiescence, alcohol (or any other addictive substance) enterprises devised FOR and OF human nature, whichever one cares to mention. Neuro-cognitive research is in its infancy.
It was Friedman or von Mises that stated ‘own interest’ not ‘self interest’ I thought. But I will check.
Also, what is hindcast? I may have missed the original discussion, but this is a new word. A priori and a posteriori are understandable.
I would appreciate an explanation of hindcast, thanks.

Jimmy Haigh

Not CO2. Tea For 2…
TF2.

Mike

That global climate models are not very good at local and regional levels and on shorter time scales is well known. This Science News article gives an overview of work being done to improve models. Most of the article is about modeling aerosols but the last section “Getting Local” deals with efforts to improve local projections. I did not see anything shockingly new in the Greek research paper.
http://www.sciencenews.org/view/feature/id/65734/title/The_final_climate_frontiers
REPLY: It wasn’t meant to be “shockingly new”, that’s your panic descriptor. It is an update to a previous paper. – Anthony

Roger Carr

Mike D. says: (December 6, 2010 at 2:03 am) If the nut does not fit…

    Please be more specific, Mike. Which nut? There are so many…