
From the University of Gothenburg
Climate models are not good enough
Only a few climate models were able to reproduce the observed changes in extreme precipitation in China over the last 50 years. This is the finding of a doctoral thesis from the University of Gothenburg, Sweden.
Climate models are the only means to predict future changes in climate and weather.
“It is therefore extremely important that we investigate global climate models’ own performances in simulating extremes with respect to observations, in order to improve our opportunities to predict future weather changes,” says Tinghai Ou from the University of Gothenburg’s Department of Earth Sciences.
Tinghai has analysed model-simulated extreme precipitation in China over the last 50 years.
“The results show that climate models give a poor reflection of the actual changes in extreme precipitation events that took place in China between 1961 and 2000,” he says. “Only half of the 21 climate models analysed were able to reproduce the changes in some regions of China, and few models could reproduce the nationwide change.”
China is often affected by extreme climate events. The flooding of 1998 in southern and north-eastern China, for example, caused billions of dollars’ worth of financial losses and killed more than 3,000 people, and the drought of 2010-11 in southern China affected 35 million people and likewise caused billions of dollars’ worth of losses.
“Our research findings show that extreme precipitation events have increased in most areas of China since 1961, while the number of dry days – days on which there is less than one millimetre of precipitation – has increased in eastern China but decreased in western China.”
Cold surges in south-eastern China often cause severe snow, leading to significant devastation. Snow, ice and storms in January and February 2008 resulted in hundreds of deaths. Studies show that the occurrence of cold surges in southeast China significantly decreased from 1961 to 1980, but the levels have remained stable since 1980 despite global warming.
So his analysis showed that a small minority of models actually were good at hindcasting?
I’d be shocked if everybody was right. But if a few people are, that’s all it takes. I wonder if that was coincidence or skill.
Thanks for gracing us with your presence, Tinghai.
In my last comment, which I wrote before reading your comment, I noted that not every model seemed to perform badly. In your opinion, did the few models that worked better do so mainly from random chance or was there something within the models that made them an effective representation of what is happening in the real world?
So far I’m not clear about why some models are better than others; we are still trying to find out. By doing so, we can provide suggestions to the people who are working on the improvement of the models. My guess is that this is affected both by randomness (especially when it comes to small regions) and by the dynamical part of the model. Some parts of the model dynamics cannot be resolved theoretically and can only be approximated numerically, which means the result is close to the truth but is not the truth, and that makes it very difficult for a model simulation to reproduce the observations. On the other hand, there are also errors in the observations, so we cannot get the truth from the observations either. Back to the second part: some models can simulate the basic pattern of the circulation field well, while others cannot. This may be linked to the model dynamics, which I’m not familiar with. The topography in East Asia is very complicated, which is a big challenge for the models; how the topography is represented in a model, and how its effects are resolved, is very difficult.
Throughout China’s recorded history there have been catastrophic floods and droughts. Therefore anything that can help predict extreme wet or dry years will be helpful. Let us not forget that in the case of China, we’re talking of hundreds of thousands, if not millions, of lives being adversely affected by these events. Since we are talking real life scenarios here, funding for this kind of research (this includes models and their validation) is much more important than the hundreds of billions we have been pissing away on the CAGW/Climate Change hoax.
Where is the link to the thesis so we can see for ourselves what it says?
You can find the link to the thesis here (https://gupea.ub.gu.se/handle/2077/31816).
The climate models are only good for predicting the climate of models – like model railways in the basements of Hansen and Mann.
I spent several years verifying the computer model for a nuclear reactor used for accident analyses (of design-basis accidents). This entailed many days of multiple runs costing hundreds of dollars each (in 1975 dollars), fixing one or two things (never more, as that just confused matters), and then doing the same the next day. Dumpsters full of printouts were generated (this was before recycling was in vogue; we used to joke that they could heat the building with the paper). They let me take as much used paper home as I wanted; the kids loved it. This went on for over two years, and for a system with a limited, known set of measurable, quantifiable parameters. I would hazard a guess that it took over a hundred runs per parameter to get a model that was acceptable, though in no way would it be considered accurate by today’s standards. In the years since that model was completed it has been tweaked and modified, on the order of 6 to 10 times per year, to make it more accurate.
Meanwhile, in the CAGW world they haven’t run even one projection demonstrating that the model predicts what it is supposed to predict, even to the accuracy of assuring that if you shoot a pistol in a barn it will hit a wall. They haven’t identified all of the parameters involved, nor the polarity (feedback/forcing) of some of the known parameters, and they still ignore parameters that they don’t like, don’t understand, or that don’t fit their ideology.
Good Luck.
Many years ago I worked on the development of a computer model for a nuclear reactor that was used for analyzing design-basis accidents. This involved many daily runs verifying the accuracy and correcting the model: after each run we checked the output, fixed the errors and tried again. The standing joke was that we could heat the building with the paper we hauled to the dumpster. They let me take as much home as I wanted; the kids loved it. This took over two years, and we had a known, identifiable set of parameters. Still, I would guess it took over a hundred runs per parameter (times the 20-30 power levels) to get a model that was acceptable. Over the years since then, that model was tweaked and modified at least 6 times a year to improve its accuracy and correct mistakes.
However, in the CAGW world, they have an unidentifiable number of parameters: some with unknown polarity (feedback/forcing), some not identified or not included, and some ignored because of ideology. And to my knowledge they only predict the past/future with an accuracy (as stated by them) equivalent to saying that if you shoot a pistol in a barn it will hit a wall. For this new-found knowledge we are to spend thousands of trillions of dollars? Shut down all coal plants and cover the earth with wind turbines? And we have already shown that the bullet went into the ground (the 15 years with no warming) and didn’t hit a wall (their error band being large enough to cover even the accuracy of the worst model, e.g. 75% accurate with a 95% confidence level) as they projected.
What are GCMs good for, except wasting money?
There is a phenomenon, ENSO, that seems to ‘model’ the climate pretty well. Or is it ‘modulate’ the climate? I think trying to model ENSO might teach us more, today and in the short term.
Thanks, Tinghai, for your reply to my question.
I can tell you’ve given a lot of thought to these issues. Keep up the thinking, including maintaining an open mind and acknowledging such realities as:
This is not surprising. The global climate models have been shown, again and again, not to “be skilful” at prediction or hindcast at regional scales. Yet we are supposed to believe that they magically ARE skilful on a global scale. RIIIIIIGHHTTT!!
Tinghai says: March 26, 2013 at 10:30 am
I am glad you said that. I think many people were misreading your opinion related to this, based on some of the comments here.
Oh, I don’t think they’ve made any particular effort to absorb Tinghai’s position in the first place. I think most people here engaged in a highly selective reading, in something like a mass exercise in confirmation bias.
According to Tinghai, a small minority of models were either fairly accurate in hindcast or got lucky. He also believes the models are getting better over time, or at least that some of them are.
Tinghai seems like an open-minded, analytical sort to me, neither too credulous nor too conspiratorially minded. I suspect most of the attempts to model the climate are honest, but crap. However, some are less crap than others; that is one of Tinghai’s points.
As someone who has been practicing CFD for over 20 years, I wish you the best in your research.
If Frank K. has been practicing CFD (computational fluid dynamics) modeling for over 20 years, I would like his opinion on the influence of grid sizes on the results of climate models. I have done some atmospheric dispersion modeling (of air pollutants) using CFD, and have seen that weather-forecasting models are usually fairly good up to about five days into the future, after which observed weather tends to diverge from what the models predicted.
Weather forecasting models can usually use fairly small grid sizes (on the order of 1 km horizontally and 100 m vertically) because they don’t need too many time steps to predict weather 5 to 10 days later than the time the model is run. But errors tend to propagate when the model only calculates flows, temperatures, and pressures through the boundaries of each grid cell, without knowing what happens inside a grid cell.
If a “global climate model” is trying to forecast overall weather trends 50 or 100 years into the future, over the entire globe, the length and/or number of time steps must be much greater than those for a weather-forecasting model designed for a 5-to-10-day prediction. If the modeler attempts to maintain short time steps, he may need to use a smaller number of larger grid cells in order to obtain a result within a reasonable computation time.
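To put rough numbers on that, here is a back-of-envelope sketch in Python; the time steps are purely illustrative assumptions, not figures from any particular model:

    # Back-of-envelope step counts, using assumed time steps.
    weather_dt_s = 60.0  # hypothetical 1-minute step, fine-grid weather model
    climate_dt_s = 30.0 * 60.0  # hypothetical 30-minute step, coarse GCM
    weather_horizon_s = 5 * 24 * 3600  # a 5-day forecast
    climate_horizon_s = 100 * 365.25 * 24 * 3600  # a 100-year projection
    print(weather_horizon_s / weather_dt_s)  # ~7,200 steps
    print(climate_horizon_s / climate_dt_s)  # ~1,750,000 steps

So even with a time step 30 times longer, the century-scale run takes roughly 240 times as many steps as the 5-day forecast, and every step is another opportunity for error to accumulate.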
Does Frank K. know the grid cell sizes used in most of the “global climate models”? Is it possible that the use of large grid cells overlooks local effects within a grid cell, and so propagates more error than in a relatively short-range weather-forecasting model? In particular, do some of the larger grid cells in GCMs include topographic features such as mountains or coastlines, which can “generate their own weather” through the sea-breeze effect, lake-effect snow, or orographic lifting over mountains, with local precipitation on the windward side and drying (the Foehn effect) on the lee side? For example, could a model really calculate the “average” weather in a grid cell that included both Mount Rainier and Seattle?
Also, what are the time steps used in “global climate models”? Are they on the order of seconds, hours, or longer? If a weather-forecasting model starts to diverge from reality after five days or so, how many time steps does that represent, and what would be the effect of longer time steps in a GCM? Are there errors within time steps and/or grid cells which can propagate and become much larger than any effect of infrared absorption by CO2?
Hi Steve Zell,
The first thing to note about climate (and weather) modeling is that it is an initial value problem (though some in the climate modeling community claim it is a boundary value problem, which is total B.S.). For initial value problems, numerical solutions must at least satisfy Lax’s Equivalence Theorem:
Given a properly posed initial value problem and a finite-difference scheme consistent with it, stability is the necessary and sufficient condition for convergence (i.e. a valid numerical representation of the IVP).
Stability, unfortunately, can generally only be proven for linear equations, and the equations which form the basis of weather and climate models are highly non-linear. Moreover, we aren’t talking about a single equation but a system of coupled equations with significant source terms (i.e. the “forcings” that modelers like to talk about).
And also remember that coupled GCMs are solving equations which represent the evolution of BOTH the ocean and the atmosphere. That means continuity, momentum, energy, species, etc. equations for both atmosphere and ocean (along with special modeling of polar ice caps, sea ice, and so on).
In the end, the number of non-linear, coupled differential equations with source terms (fed by sub-models for radiative energy transfer, tracers for aerosol transport, and clouds and precipitation in all forms) is very large! And there is no way to guarantee that (1) the system is well-posed mathematically or numerically, and (2) the numerical scheme is stable. Which means that errors in the initial state can grow unbounded and swamp the solution with garbage.
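As a cartoon of that last point (not a GCM, just the classic Lorenz-63 toy system), here is a minimal Python sketch of how a tiny error in the initial state grows by orders of magnitude under chaotic dynamics; the step size and perturbation are arbitrary illustrative choices:

    # Two Lorenz-63 trajectories differing by 1e-8 in the initial state,
    # integrated with simple forward-Euler steps (a toy illustration only).
    def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-8, 1.0, 1.0)  # perturbed initial condition
    for n in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if n % 1000 == 0:
            err = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(n, err)  # the separation grows by orders of magnitude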
For most numerical schemes, the errors will be proportional to the size of the mesh and the time step, so they can be reduced by shrinking both the cell size and the time step. Usually you have to do both, since finer meshes reduce the stable time step through what is known as the CFL number: CFL = U*dt/dx, where dt is the time step, dx is the spatial discretization size, and U is the local flow speed. For explicit time-marching schemes you typically want CFL to be at most about 1.
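To make that relation concrete, a minimal sketch (the 50 m/s wind is an assumed jet-level value; real schemes have their own stability limits):

    # Largest time step allowed by CFL = U*dt/dx <= cfl_max.
    def max_stable_dt(dx_m, u_ms, cfl_max=1.0):
        return cfl_max * dx_m / u_ms

    u = 50.0  # assumed jet-level wind speed, m/s
    print(max_stable_dt(1_000.0, u))    # ~20 s on a 1 km weather grid
    print(max_stable_dt(100_000.0, u))  # ~2000 s (~33 min) on a 100 km GCM grid

Note how it is the coarse 100 km grid that makes a roughly half-hour climate time step possible at all.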
Finally, getting to your question, climate models are usually more coarsely resolved than weather models, and so use meshes on the order of 100 km cell size. Time steps can be as low as 30 minutes or as much as several hours, depending on the cell size and stability. Here is a good resource for you:
http://www.windows2universe.org/earth/climate/climate_modeling.html
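For a feel for why the grids stay that coarse, here is a rough cell-count sketch (surface columns only; real models multiply this by dozens of vertical levels):

    # Approximate number of surface grid columns on the globe.
    import math
    area_km2 = 4.0 * math.pi * 6371.0 ** 2  # Earth's surface, ~5.1e8 km^2
    for dx_km in (100.0, 1.0):  # GCM vs. weather-model spacing
        print(dx_km, round(area_km2 / dx_km ** 2))  # ~51,000 vs. ~510,000,000

Halving the cell size quadruples the column count and (via CFL) roughly halves the allowable time step, so each refinement multiplies the cost by about a factor of eight before the vertical direction is even considered.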
Everything I have mentioned above (and more) is the reason I hammer on these climate modelers to make sure they properly document everything they do, and not just the superficial “fluffy” junk that you see in their papers. I mean: source code, extensive documents on EVERY equation solved and EVERY numerical method employed, and a description and results for unit tests and more extensive test cases. And all in one place (not just a reference list of papers, which typically do NOT provide all of the details).
Hope this helps…
Jonathan Carter and others wrote a paper in Reliability Engineering & System Safety entitled “Our calibrated model has poor predictive value: An example from the petroleum industry” (2004). Essentially, this paper, plus follow-up work by Carter and others, demonstrated that in many complex models, even if one is able to develop a model that simulates past results quite well (which rarely happens), the model has no capability to predict the future of the system. This is, potentially, a very important result that may have application in many disciplines, both in the physical sciences and the social sciences (I realize the latter phrase is an oxymoron, but I’ll use it for clarity). I’m not sure why I’ve not seen this paper referenced in any discussion of AGW; perhaps I’ve simply overlooked it.
Reblogged this on Truth, Lies and In Between and commented:
Seems more like guesswork than solid science.
Time to stop the global warming scam. It’s just that. A scam. Look who makes money off of the lies and you can pretty much figure it out.