Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather
Berkeley Lab researcher says climate science is entering a new golden age.
Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.
What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, “The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1,” has been published online in the Journal of Advances in Modeling Earth Systems.
“I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”
Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.
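To give a sense of why finer resolution is so computationally expensive, here is a rough back-of-the-envelope sketch; the grid geometry and cost scaling are simplifying assumptions of mine, not figures from the paper. Halving the grid spacing roughly quadruples the number of cells and also forces a shorter time step, so cost grows roughly with the cube of the refinement factor.

```python
# Rough, illustrative estimate (assumptions are mine, not from the paper) of how
# cell count and cost grow as horizontal resolution is refined from 200 km to 25 km.

EARTH_SURFACE_KM2 = 5.1e8  # approximate surface area of Earth

for dx_km in (200, 100, 25):
    n_cells = EARTH_SURFACE_KM2 / dx_km**2
    # Finer grids also require shorter time steps (CFL condition), so the cost
    # scales roughly with 1/dx^3 rather than 1/dx^2.
    relative_cost = (200 / dx_km) ** 3
    print(f"{dx_km:>3} km grid: ~{n_cells:,.0f} cells, "
          f"~{relative_cost:,.0f}x the cost of the 200 km run")
```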
One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.
The higher resolution was particularly helpful in mountainous areas, since the models take an average of the altitude within each grid cell (cells 25 km across for high resolution, 200 km across for low resolution). With a more accurate representation of mountainous terrain, the higher-resolution model is better able to simulate snow and rain in those regions.
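As a minimal illustration of the grid-averaging point, the sketch below averages an invented one-dimensional terrain profile into 25 km and 200 km cells; the terrain and numbers are made up purely to show how coarse cells flatten mountains.

```python
# A minimal sketch (not from the paper) of why grid resolution matters over
# mountains: each model cell carries one average elevation, so a coarse grid
# flattens peaks and valleys that a fine grid still resolves.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D terrain profile sampled every 1 km over 200 km.
x = np.arange(200)                                                   # km
terrain = 1500 + 1200 * np.sin(x / 7.0) + rng.normal(0, 300, x.size)  # metres

def grid_average(elev, cell_km):
    """Average elevation within each model grid cell of width cell_km."""
    n = elev.size // cell_km
    return elev[: n * cell_km].reshape(n, cell_km).mean(axis=1)

for cell_km in (25, 200):
    cells = grid_average(terrain, cell_km)
    print(f"{cell_km:>3} km cells: cell-mean elevation ranges "
          f"{cells.min():.0f} to {cells.max():.0f} m")
# The single 200 km cell collapses the whole profile to one mean value, so the
# model 'sees' no mountains at all; the 25 km cells retain much of the relief.
```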
“High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”
The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.
The IPCC chapter on long-term climate change projections that Wehner was a lead author on concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”
Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.
“A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”
The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Biological and Environmental Division of the Department of Energy’s Office of Science.
# # #

Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.
This is curve fitting to known data. They keep making this exact same mistake over and over again. Show me a hindcast or a forecast from initial conditions in, say, 1880 and I’ll be more interested.
So just where the hell did they get their hands on measured, observed climate data at 25 km resolution all over the earth? There aren’t nearly enough measuring stations to produce that much data. And I won’t bother rewriting the exact same comment with 25 km replaced by 200 km.
And of course you need all those spatial observations to be taken at the same time, otherwise they don’t mean anything.
If I measure a different variable at a different place, at a different time, I really do not have the makings of a model of reality.
But I’m anxious to see some of their megaterafloppy results.
I presume that their computer model is capable of replicating the data for each of the grid points that they used to construct the model.
They didn’t; they used another model to imagine it. This is climate ‘science’ after all, where reality comes a poor second to models.
Easy. There are global datasets down to 1 km.
Depends on the metric and the area.
US temperatures and precipitation? 1 km. PRISM. That’s just one.
Even the best resolution with the MSU is not fine enough if I remember correctly.
No matter what you may want to believe, Steve Mosher, we don’t have precipitation data at 1 km resolution. Just because a dataset claims it doesn’t make it correct, because we simply don’t measure it, or even close to it for that matter. Satellites only get approximately a daily view of a region… say over the ocean. How can that possibly equate to an accurate measurement of precipitation?
Agreed!
I don’t care how big the computer is; if the model has the wrong forcings and wrong assumptions, the answer will still be WRONG.
They can be inaccurate so much faster now. That’s real progress… I guess.
Why change that which pays so well and always gives you the ‘right’ results?
This is curve fitting to known data.
wrong.
Soooo, how DO they validate/verify the model, especially with repeatedly adjusted data? I am already suspicious when they see more severe storm frequency, which has not been the case in observations. There also needs to be a method of falsifying the model results, which ain’t gonna happen, as the model will be repeatedly adjusted/updated to prevent any result except catastrophic climate disruption. Can’t have any result which would cut the paychecks off.
True. “Extremely heavy precipitation was projected to become even more extreme in a warmer world.” This hasn’t happened yet. Here is one list of the record rainfalls: http://www.nws.noaa.gov/oh/hdsc/record_precip/record_precip_world.html
Only one of the six record rainfall events falls in the 21st century, and three fall in the 19th.
From the paper (my bolds):
Abstract: “In the absence of extensive model tuning at high resolution, simulation of many of the mean fields analyzed in this study is degraded compared to the tuned lower-resolution public released version of the model.”
Section 2: “Differences among select parameters employed for the three resolutions required by tuning and stability considerations are listed in Table A1 of Appendix Details of CAM5.1.”
“Like its previous versions, the tuned, publically distributed 0.9° × 1.3° version of CAM5.1, exhibits a spurious “double ITCZ” pattern in the Pacific. In the GPCP observations, the Intertropical Convergence Zone (ITCZ) exhibits both a band of enhanced precipitation slightly north of the equator and the South Pacific Convergence Zone (SPCZ) extending from the maritime continent toward the southeast.”
“Such errors may be influenced by the lack of energy budget tuning (due to high computational costs) of the high-resolution configuration.”
Section 5: “While the simulated weather at high resolution offers more realism in terms of reproducing intense storms and in some mean fields that are strongly influenced by local orography, mean fields at large scales are often better represented by the tuned ∼100 km public release version of the model than by the high-resolution simulation presented here.”
Appendix A: “Differences among select parameters employed for the three resolutions required by tuning and stability considerations are listed in Table A1.”
Table A1: “The Stability and Tuning Parameters That Were Varied in This Study”
What is “tuning,” Steve?
That’s when you don’t like the station you’re on, so you tune it until you get what you want. When I was a youngster TVs didn’t have a lot of channels, but I could always go with Howdy Doody. Now I find we even sent him to China!
Mosher writes “wrong.”
Wrong.
Appendix A says this: “parameterizations of cloud microphysics, cloud macrophysics, orographic gravity wave drag, the radiative effects of aerosols, and parameterizations of shortwave and longwave radiations are included [Neale et al., 2010].”
Neale et al., 2010 is “Description of the NCAR Community Atmosphere Model (CAM 5.0),” NCAR Tech. Note NCAR/TN-486+STR.
Neale, et al. say, “The ∇2 diffusion coefficient has a vertical variation which has been tuned to give reasonable Northern and Southern Hemisphere polar night jets.”
They say, “At entrainment interfaces, eddy diffusivity is computed using Eqn.(4.10). … where a2 is a tuning parameter being allowed to be changed between 10 and 60, and we chose a2 = 30.”
They say, concerning eq. [4.57], “where Δp_pen is vertical overshooting distance of cumulus updraft above LNB and 1 ≤ r_pen ≤ 10 is a tunable non-dimensional penetrative entrainment coefficient. In CAM5, we chose r_pen = 10.”
They say, regarding eq. [4.142]. “C_u and C_d are tunable parameters. In the CAM 5.0 implementation we use C_u = C_d = 0.4. The value of C_u and C_d control the strength of convective momentum transport. As these coefficients increase so do the pressure gradient terms, and convective momentum transport decreases.”
They say, concerning eq. [4.180], “Originally, this empirical formula was obtained by including not only cumulus but also stratus generated by detrained cumulus condensate, which by construction results in overestimated cumulus fraction. Thus, we are using a freedom to change the two coefficients 0.04 and 675 to simulate convective updraft fractional area only. Currently these coefficients are also used as tuning parameters to obtain reasonable regional/global radiation budget and grid-mean LWC/IWC.”
One could go on; there are more tuning factors discussed. It is clear from the report of Neale et al. that CAM5 has many more tuned parameters than are mentioned by M. F. Wehner et al. in “The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1.”
The described tuning makes davidmhoffer’s description of the work as “curve fitting to known data” completely factual. There just isn’t any doubt about it.
By the way, all those tuned parameters have an uncertainty range. That’s the physical range of magnitudes that the unknown true parameter value might have, and the width of that range represents the prevailing knowledge (or ignorance) regarding the true magnitude.
Every single parameter uncertainty should be propagated through a projection to determine the reliability of that projection. However, this standard of physical science is never met in climate projection studies; probably because error propagation is never done for them.
A climate projection with no physical error bars has no knowable relevance to physical reality. Mere visual correspondence is physically meaningless.
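As a minimal sketch of the error propagation being described, the example below pushes assumed uncertainty ranges for two made-up tunable parameters through a toy projection by Monte Carlo sampling. The “model” and the ranges are invented for illustration; they are not CAM5.1 parameters or results.

```python
# A minimal sketch of propagating parameter uncertainty through a projection.
# The 'model' and parameter ranges here are made up for illustration; they are
# not CAM5.1 parameters.

import numpy as np

rng = np.random.default_rng(42)

def toy_projection(sensitivity, feedback, forcing=3.7):
    """Hypothetical stand-in for a climate projection: warming for a given forcing."""
    return sensitivity * forcing / (1.0 - feedback)

# Assumed uncertainty ranges for the two tunable parameters (illustrative only).
n_draws = 10_000
sensitivity = rng.uniform(0.4, 0.8, n_draws)   # K per (W/m^2)
feedback = rng.uniform(0.1, 0.5, n_draws)      # dimensionless

projections = toy_projection(sensitivity, feedback)

print(f"median projection: {np.median(projections):.2f} K")
print(f"5th-95th percentile: {np.percentile(projections, 5):.2f}"
      f" to {np.percentile(projections, 95):.2f} K")
# The spread of this distribution is the 'physical error bar' that the comment
# argues should accompany any projection; a single tuned run carries no such bar.
```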
Read harder, Pat Frank.
And you will figure out why you never get your crap published, and why Jeff Id and Lucia tore your arguments to shreds.
You don’t tune to observations or curve fit. Period.
Mosher writes, “You don’t tune to observations or curve fit. Period.”
Wrong again.
And the reason you don’t think it’s a curve fit is that you don’t appreciate what a curve fit actually is, Steve. Multiple inputs that approximate physical processes, are tuned, and feed back upon each other constitute a curve fit.
Now deny it.
As has been historically typical with your comments on my work, Steve, your present views continue to be long on accusation and empty of substance.
Pat Frank is essentially right about tuning. A good example is in turbulence modeling in fluid dynamics. The well-known k-epsilon model solves two general transport equations for turbulence kinetic energy (k) and the dissipation rate (epsilon). The model contains five constants – how do you obtain values for these constants so you can use the model? You TUNE THE MODEL. That’s right, you obtain data for flat plates and pipes and adjust the constants until the results (i.e. velocity distributions, friction coefficients, shear stress profiles etc.) match the data for those specific cases. You then boldly go and apply the tuned model to a 747 aircraft at 4 degrees angle of attack. Of course, these kinds of models often fail for complex flows because the constants are only good for the limited cases you tuned the model for. The turbulence literature abounds with examples of model failures and proposed remedies.
If you look at the documentation and theory for most climate models, there are hundreds of parameters(!), and most of these come from the literature, where the authors proposed a model and showed it to work under specific circumstances. And like turbulence models, there is NO guarantee that these parameterizations still apply to more complex situations. So modelers do what they like to do: tune the models and parameters to best fit the available data. Unfortunately, our ocean/atmosphere system is so complex and coupled that it is a difficult task to know how to tune one parameter without implicitly affecting something else.
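To make the tuning analogy concrete, here is a minimal sketch of what tuning a model constant against calibration data amounts to: a least-squares fit. The friction-coefficient model form and the “measurements” below are invented for illustration; they are not taken from any turbulence-model or climate-model documentation.

```python
# A minimal sketch of 'tuning' as curve fitting: adjust a model constant so
# that model output matches calibration data, then apply the model elsewhere.
# The model form and the data below are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def friction_model(reynolds, c_mu):
    """Hypothetical friction-coefficient model with one tunable constant c_mu."""
    return c_mu * reynolds ** -0.2

# 'Calibration' data: flat-plate-like measurements (made up).
re_cal = np.array([1e5, 3e5, 1e6, 3e6])
cf_cal = np.array([0.0074, 0.0059, 0.0047, 0.0037])

(c_mu_fit,), _ = curve_fit(friction_model, re_cal, cf_cal, p0=[0.05])
print(f"tuned constant c_mu = {c_mu_fit:.4f}")

# Nothing guarantees the tuned constant remains valid for a different flow
# (the '747 at 4 degrees' case); the fit only encodes the calibration data.
print(f"extrapolated Cf at Re=1e8: {friction_model(1e8, c_mu_fit):.5f}")
```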
Exactly. If my oil reservoir model doesn’t match the field history, then my boss will laugh at my forecasts. The models have to be calibrated against the past; that is the only way to reasonably verify them.
It’s good enough so that the reporter will be impressed and not ask any questions.
Higher resolution != better accuracy. This is axiomatic in the numerical modeling of physical systems. Higher resolution does mean smaller numerical errors in the limit of vanishingly small spatial and temporal increments. But accuracy is only as good as the underlying model you’re solving. (Ask turbulence modelers about this…)
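A toy illustration of that distinction, using an invented pair of decay equations rather than anything climate-related: refining the numerical resolution drives the discretization error toward zero, but the mismatch with “reality” settles at the structural error of the model itself.

```python
# A toy illustration (invented, not a climate model): refining the time step
# shrinks numerical error toward the model's own exact solution, but does
# nothing about the model being the wrong equation for 'reality'.

import math

K_MODEL = 1.0    # decay rate assumed by the model
K_TRUE = 1.2     # decay rate of the 'true' system (structural model error)
T_END = 2.0
Y0 = 1.0

def euler_solve(k, dt):
    """Forward-Euler integration of dy/dt = -k*y from t=0 to T_END."""
    y, n = Y0, int(round(T_END / dt))
    for _ in range(n):
        y += dt * (-k * y)
    return y

y_model_exact = Y0 * math.exp(-K_MODEL * T_END)
y_reality = Y0 * math.exp(-K_TRUE * T_END)

for dt in (0.5, 0.1, 0.02, 0.004):
    y_num = euler_solve(K_MODEL, dt)
    print(f"dt={dt:<5} numerical error={abs(y_num - y_model_exact):.5f}  "
          f"error vs reality={abs(y_num - y_reality):.5f}")
# The numerical error vanishes as dt shrinks, but the error versus 'reality'
# levels off at the structural model error: higher resolution != better accuracy.
```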
OT: What happened with Watts et al. 2012? It was announced two years ago that it would be published “soon.” Any news?
When they beat the Old Farmer’s Almanac they can crow. Until then (if ever), they need to shut up after the last miserable failures.
“These kinds of calculations have gone from basically intractable to heroic to now doable.”
So, do we gather from that that the many billions spent so far, and countries’ energy policies, were decided based on the aforementioned intractable and heroic dud computations that are now acknowledged to have been miserable failures?
So they simply get their garbage out much sooner.
No George, they get HIGH RESOLUTION garbage out much sooner. No more of that low resolution garbage.
+1
Exactly. It’s important to be able to calculate garbage to a resolution of .00001, and to do it faster than ever before.
Insert comment about precision vs accuracy here.
Claims have been made for this supa dupa computer stimulator. Now let it do its stuff, make projections, and let’s observe. I hope it does better than this.
http://www.energyadvocate.com/gc1.jpg
Great… That means that once improved and verified, they may be able to accurately predict anything from next week’s to next month’s weather, in three months?
The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.
Methinks Wehner is delusional. And methinks the new model is going in the wrong direction.
In what universe have we observed “stronger storms and more of them” for most seasons? Haven’t observations actually shown fewer hurricanes now than in the past?
Now see?…there ya go bringing reality into the mix.
That’ll screw up the whole play.
Yes, I think they have. But that’s not important right now! 🙂
Error: Now faster and more finely grained than ever.
Bigger anus = Larger poop
The real worry is the bigger mouth and appetite…
Serves me right for scatological imagery.
New and improved Climate Bleach makes white noise whiter.
Hmm. As an IT guy, one thing I know for sure is that more data does not necessarily equal better data. 100 TB doesn’t impress, other than to make one wonder exactly why they would need 100 TB to display a temperature trend.
They looked at other variables. Read the paper.
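For what it’s worth, a rough, illustrative calculation (every number below is my assumption, not a figure from the paper) shows that a multi-decade 25 km run with several three-dimensional output variables does plausibly land in the ~100 TB range.

```python
# Rough, illustrative arithmetic (assumptions are mine, not from the paper) for
# why a multi-decade, 25 km global run can plausibly produce ~100 TB of output.

EARTH_SURFACE_KM2 = 5.1e8
CELL_KM = 25
N_LEVELS = 30          # assumed vertical levels
N_VARIABLES = 20       # assumed 3-D output variables
OUTPUTS_PER_DAY = 4    # assumed 6-hourly output
YEARS = 27             # 1979-2005
BYTES_PER_VALUE = 4    # single precision

n_cells = EARTH_SURFACE_KM2 / CELL_KM**2
n_values = n_cells * N_LEVELS * N_VARIABLES * OUTPUTS_PER_DAY * 365 * YEARS
print(f"~{n_values * BYTES_PER_VALUE / 1e12:.0f} TB")   # order of 100 TB
```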
There is good money in this climate scam thing… Here is a new one about Al Gore:
http://www.blacklistednews.com/HOW_AL_GORE_BECAME_A_BILLIONAIRE/39156/0/38/38/Y/M.html
High tech is useless when the data, methods, and models are flawed.
It enables you to reach the wrong conclusion in record time.
‘Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.’ I am not sure being wrong faster is of much real benefit when they totally fail to understand why they were wrong, or even to admit it.
…3. Even today’s fastest supercomputers can only achieve up to 25 km resolution, and a single run at such high resolution takes 3 months.
4. No model can be properly initialized with the state of the entire climate system, since the observations are woefully insufficient to provide the initial state at a single point in time for the entire climate system.
5. Chaos theory explains why even if the huge impediments of #3 and #4 above could be overcome, it is still impossible to predict the chaotic and non-linear weather & climate system beyond 3 weeks in the future…
http://hockeyschtick.blogspot.com/2014/11/nasa-official-says-in-ny-times-if-you.html
#3: Actually, folks will be running at 1 km soon. For a single run at 25 km, one year of simulated time took a day of wall-clock time.
#4: You don’t need to initialize the ENTIRE system.
#4
You don’t need to initialize the entire system?
Possibly. However, this will give a range of particular solutions that depend on the subset of initial conditions chosen. How do you know that your subset of ICs is well posed?
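A standard textbook illustration of why initial conditions matter so much in a chaotic system (this is the logistic map, not a climate model, and the numbers are arbitrary): two nearly identical starting states diverge completely within a few dozen iterations, which is the practical content of the chaos objection in point #5 above.

```python
# A minimal illustration (standard logistic-map example, not a climate model)
# of initial-condition sensitivity: two states that start almost identically
# diverge after a few dozen steps.

R = 3.9                          # logistic-map parameter in the chaotic regime
x_a, x_b = 0.500000, 0.500001    # two nearly identical initial conditions

for step in range(1, 61):
    x_a = R * x_a * (1 - x_a)
    x_b = R * x_b * (1 - x_b)
    if step % 20 == 0:
        print(f"step {step:>2}: difference = {abs(x_a - x_b):.6f}")
# Within roughly 40-60 steps the two trajectories differ as much as two random
# states, despite an initial difference of only one part in a million.
```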
Steven Mosher November 13, 2014 at 2:06 pm
Thanks, Mosh. When you read the paper, you find that increasing the horizontal resolution actually makes many results worse. From the abstract:
“In the absence of extensive model tuning at high resolution, simulation of many of the mean fields analyzed in this study is degraded compared to the tuned lower-resolution public released version of the model.”
This is the recurring problem that such models have run into. You can physically run the models at very high resolution, but your results may be much worse.
w.
So, you give the models more resolution, and the system messes up. That ought to provide some insight.
This is the delusion that bigger, more powerful computers will solve climate modeling’s problem of inaccurate forecasts. As I have written, if you don’t have data you can’t build a model or validate the results. Meanwhile, billions are wasted while other, more pressing uses for big computers are ignored.
http://wattsupwiththat.com/2014/10/16/a-simple-truth-computer-climate-models-cannot-work/
I’m not against climate modeling. I wish them the best and hope they can make strides with improved codes as computers get more powerful. I AM against throwing money into dozens of codes, many written very poorly (e.g. NASA GISS Model E) – just focus on one or two models and make them as good as you can. Unfortunately, the Climate Industry(tm) will have none of that…
The animation starts on July 26, 1979, and there is substantial snow on the ground in the northern hemisphere. Can’t be all that, then.
Over the last century, the world warmed … but even according to the IPCC, heavy precipitation has NOT gotten more extreme. Despite the facts, Wehner has his head so far up his … model … that he can’t even tell if it’s raining.
Epic fail, the typical modeler’s conceit that their model is the real world writ small.
w.
He should have said “Observations show this to be happening” even though they don’t. 😊 Money does terrible things to the mind. It makes you see things that aren’t there.
English translation: We’re able to pull numbers out of our ass faster than ever before.
and get PAID FOR IT.
Fixed it 🙂
Claim: a “golden age” (make that “golden shower”) of climate science models is upon us.
Doesn’t he mean ‘if’ instead of ‘how’?
I thought so.
They’re lying when they say they know the answers and it’s settled science.
Gavin said it’s settled.
Mikey said it’s settled.
Uncle Al the Kiddies Pal said it’s settled.
Even the POTUS said it’s settled.
Silly me…of course they’re lying.
Everybody knows that.
Did you know you can save money on a climate quote in 15 minutes? 🙂
We know orders of magnitude more about weather than we do about climate, yet we cannot forecast two weeks out.
During the animation, which runs from July to October, there is no change in the snow on the ground. No change in snow cover anywhere: Asia, Canada, nothing. Another epic fail … these guys are better than the funny papers.
w.
That’s because the animation is drawn over a fixed background. Look at the head slide.
They are showing total column integrated water vapor.
The background scene is fixed. It doesn’t represent the ground cover.
You’re right, Mosh. The background is just a climate Potemkin village.
w.
“The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. ‘In the low-resolution models, hurricanes were far too infrequent,’ Wehner said.” Translated: “Our preferred expectations were met with more computing power.” Where did these “actual observations for most seasons” come from? These people are clinically delusional.
Never mind the output data, the graphics are to die for …
Pointman
“Berkeley Lab researcher says climate science is entering a new golden age.”
That’s funny, I thought the Neolithic, not the golden age, followed the Stone Age.
The Neolithic is the New Stone Age, which followed the Mesolithic (Middle Stone Age), which followed the Paleolithic (Old Stone Age).
Bigger computer gets the wrong results even quicker!
The Met Office here in the UK just did that: a £97m supercomputer to replace their antique (4-year-old) £33m one!
Yawn. Wake me up when they actually put ALL the relevant data into the clanking machine instead of just the bits they use at the moment.
What comprises all the relevant data?
“A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”
And after saying this, they still have the gall to create policy!!!