Claim: "golden age of climate science, models" is upon us

Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather

Berkeley Lab researcher says climate science is entering a new golden age.

Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.

What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, “The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1,” has been published online in the Journal of Advances in Modeling Earth Systems.

“I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”

Michael Wehner, Berkeley Lab climate scientist

Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.

One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.

The higher resolution was particularly helpful in mountainous areas since the models take an average of the altitude in the grid (25 square km for high resolution, 200 square km for low resolution). With more accurate representation of mountainous terrain, the higher resolution model is better able to simulate snow and rain in those regions.
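To make the grid-averaging point concrete, here is a minimal sketch in Python (a toy with synthetic terrain, not the CAM5.1 code or real topography): the coarser the grid cell, the more a mountain peak is smeared into the cell average.

```python
# A minimal sketch (not CAM5.1) of what "averaging the altitude in the grid"
# means: coarse-graining a synthetic high-resolution elevation field onto
# progressively larger model grid cells smears out the peaks.
import numpy as np

rng = np.random.default_rng(0)
elev_1km = rng.gamma(shape=2.0, scale=500.0, size=(200, 200))  # fake terrain, metres

def grid_average(field, cell_km):
    """Average a 1 km field over square cells of side cell_km."""
    ny, nx = field.shape
    trimmed = field[: ny - ny % cell_km, : nx - nx % cell_km]
    return trimmed.reshape(ny // cell_km, cell_km, nx // cell_km, cell_km).mean(axis=(1, 3))

print(f"  1 km field: highest point        = {elev_1km.max():7.1f} m")
for cell in (25, 100, 200):
    coarse = grid_average(elev_1km, cell)
    print(f"{cell:>3} km cells: highest cell average = {coarse.max():7.1f} m")
```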

“High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”

The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

The IPCC chapter on long-term climate change projections that Wehner was a lead author on concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”

Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.
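For a rough sense of why 1 km global runs remain a future goal, here is a back-of-envelope sketch; the cubic cost scaling (two horizontal dimensions plus a CFL-limited time step) is a common rule of thumb assumed here, not a figure from the article.

```python
# Back-of-envelope only: relative cost of an atmospheric model run versus
# horizontal grid spacing, assuming cost ~ (1/dx)^3 (two horizontal dimensions
# plus a proportionally shorter time step). Vertical levels, physics changes
# and I/O are ignored, so treat the numbers as indicative.
def relative_cost(dx_km, reference_dx_km=25.0):
    return (reference_dx_km / dx_km) ** 3

for dx_km in (200, 100, 25, 10, 1):
    print(f"{dx_km:>4} km grid: ~{relative_cost(dx_km):.3g}x the cost of the 25 km run")
```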

“A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”

The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Biological and Environmental Division of the Department of Energy’s Office of Science.

# # #

Source: http://newscenter.lbl.gov/2014/11/12/latest-supercomputers-enable-high-resolution-climate-models-truer-simulation-of-extreme-weather/

davidmhoffer
November 13, 2014 10:37 am

Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.
This is curve fitting to known data. They keep making this exact same mistake over and over again. Show me a hindcast or a forecast from initial conditions in, say, 1880 and I’ll be more interested.

george e. smith
Reply to  davidmhoffer
November 13, 2014 10:51 am

So just where the hell did they get their hands on measured, observed climate data at 25 km resolution all over the earth? There aren’t nearly enough measuring stations to have that much data. And I won’t bother rewriting the exact same comment, replacing 25 km with 200 km.
And of course you need all those spatial observations to be taken at the same time, otherwise they don’t mean anything.
If I measure a different variable at a different place, at a different time, I really do not have the makings of a model of reality.
But I’m anxious to see some of their megaterafloppy results.

george e. smith
Reply to  george e. smith
November 13, 2014 10:53 am

I presume that their computer model is capable of replicating the data for each of the grid points that they used to construct the model.

KNR
Reply to  george e. smith
November 13, 2014 10:57 am

They didn’t; they used another model to imagine it. This is climate ‘science’ after all, where reality comes a poor second to models.

Steven Mosher
Reply to  george e. smith
November 13, 2014 1:58 pm

easy. there are global datasets down to 1km.
depends on the metric and the area.
US temperatures and precipitation? 1km. PRISM. That’s just one.

ShrNfr
Reply to  george e. smith
November 13, 2014 2:07 pm

Even the best resolution with the MSU is not fine enough if I remember correctly.

Reply to  george e. smith
November 13, 2014 6:36 pm

No matter what you may want to believe, Steve Mosher, we don’t have precipitation data at 1 km resolution. Just because a dataset claims it doesn’t make it correct, because we simply don’t measure it, or even close to it for that matter. Satellites only get approximately a daily view of a region, say over the ocean. How can that possibly equate to an accurate measurement of precipitation?

johnmarshall
Reply to  george e. smith
November 14, 2014 3:58 am

Agreed!
I don’t care how big the computer is; if the model has the wrong forcings and wrong assumptions, the answer will still be WRONG.

latecommer2014
Reply to  george e. smith
November 16, 2014 2:57 pm

They can be inaccurate so much faster now. That’s real progress…I guess

KNR
Reply to  davidmhoffer
November 13, 2014 10:59 am

why change that which pays so well and always gives you the ‘right’ results?

Steven Mosher
Reply to  davidmhoffer
November 13, 2014 1:56 pm

This is curve fitting to known data.
wrong.

BFL
Reply to  Steven Mosher
November 13, 2014 3:14 pm

Soooo, how DO they validate/verify the model, especially with repeatedly adjusted data. I am already suspicious when they see more severe storm frequency, which has not been the case. There also needs to be a method of falsifying the model results which ain’t gonna happen as it will be repeatedly adjusted/updated to prevent any result except catastrophic climate disruption. Can’t have any result which would cut the paychecks off.

Reply to  Steven Mosher
November 13, 2014 5:07 pm

True. “Extremely heavy precipitation was projected to become even more extreme in a warmer world.” This hasn’t happened yet. Here is one list of the record rainfalls: http://www.nws.noaa.gov/oh/hdsc/record_precip/record_precip_world.html
Only one of the six record rainfall events is from the 21st century, and three are from the 19th.

Pat Frank
Reply to  Steven Mosher
November 13, 2014 6:04 pm

From the paper (my bolds):
Abstract: “ In the absence of extensive model tuning at high resolution, simulation of many of the mean fields analyzed in this study is degraded compared to the tuned lower-resolution public released version of the model.
Section 2: “ Differences among select parameters employed for the three resolutions required by tuning and stability considerations are listed in Table A1 of Appendix Details of CAM5.1.
Like its previous versions, the tuned, publically distributed 0.9° × 1.3° version of CAM5.1, exhibits a spurious “double ITCZ” pattern in the Pacific. In the GPCP observations, the Intertropical Convergence Zone (ITCZ) exhibits both a band of enhanced precipitation slightly north of the equator and the South Pacific Convergence Zone (SPCZ) extending from the maritime continent toward the southeast.
Such errors may be influenced by the lack of energy budget tuning (due to high computational costs) of the high-resolution configuration.
Section 5: “ While the simulated weather at high resolution offers more realism in terms of reproducing intense storms and in some mean fields that are strongly influenced by local orography, mean fields at large scales are often better represented by the tuned ∼100 km public release version of the model than by the high-resolution simulation presented here.
Appendix A: “Differences among select parameters employed for the three resolutions required by tuning and stability considerations are listed in Table A1.
Table A1: “The Stability and Tuning Parameters That Were Varied in This Study
What is “tuning,” Steve?

markl
Reply to  Pat Frank
November 13, 2014 6:13 pm

That’s when you don’t like the station you’re on so you tune it until you get what you want. When I was a youngster TV’s didn’t have a lot of channels but I always could go with Howdy Doody. Now I find we even sent him to China!

Reply to  Steven Mosher
November 13, 2014 6:26 pm

Mosher writes “wrong.”
Wrong.

Reply to  Steven Mosher
November 13, 2014 6:28 pm

Appendix A says this: “ parameterizations of cloud microphysics, cloud macrophysics, orographic gravity wave drag, the radiative effects of aerosols, and parameterizations of shortwave and longwave radiations are included [Neale et al., 2010].
Neale, et al, 2010 is, “Description of the NCAR Community Atmosphere Model (CAM 5.0), NCAR Tech. Note NCAR/TN-486+STR
Neale, et al. say, “The ∇2 diffusion coefficient has a vertical variation which has been tuned to give reasonable Northern and Southern Hemisphere polar night jets.
They say, “At entrainment interfaces, eddy diffusivity is computed using Eqn.(4.10). … where a2 is a tuning parameter being allowed to be changed between 10 and 60, and we chose a2 = 30.
They say, concerning eq. [4.57], “where Δp_pen is vertical overshooting distance of cumulus updraft above LNB and 1 ≤ r_pen ≤ 10 is a tunable non-dimensional penetrative entrainment coefficient. In CAM5, we chose r_pen = 10.
They say, regarding eq. [4.142]. “C_u and C_d are tunable parameters. In the CAM 5.0 implementation we use C_u = C_d = 0.4. The value of C_u and C_d control the strength of convective momentum transport. As these coefficients increase so do the pressure gradient terms, and convective momentum transport decreases.
They say, concerning eq. [4.180], “Originally, this empirical formula was obtained by including not only cumulus but also stratus generated by detrained cumulus condensate, which by construction results in overestimated cumulus fraction. Thus, we are using a freedom to change the two coefficients 0.04 and 675 to simulate convective updraft fractional area only. Currently these coefficients are also used as tuning parameters to obtain reasonable regional/global radiation budget and grid-mean LWC/IWC.
One could go on– there are more tuning factors discussed. It is clear from the Report of Neale, et al., that CAM5 has many more tuned parameters than are mentioned by M. F. Wehner, et al. in “The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1
The described tuning makes davidmhoffer’s description of the work as “curve fitting to known data” completely factual. There just isn’t any doubt about it.

Reply to  Steven Mosher
November 13, 2014 7:31 pm

By the way, all those tuned parameters have an uncertainty range. That’s the physical range of magnitudes that the unknown true parameter value might have, and the width of that range represents the prevailing knowledge (or ignorance) regarding the true magnitude.
Every single parameter uncertainty should be propagated through a projection to determine the reliability of that projection. However, this standard of physical science is never met in climate projection studies; probably because error propagation is never done for them.
A climate projection with no physical error bars has no knowable relevance to physical reality. Mere visual correspondence is physically meaningless.
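As a minimal illustration of the kind of propagation being described, the sketch below draws two made-up tuned parameters from assumed uncertainty ranges and pushes them through a deliberately simple toy “projection”; nothing here is CAM5.1 or any real GCM, the point is only that parameter uncertainty spreads the output.

```python
# Toy Monte Carlo propagation of parameter uncertainty through a projection.
# The "model" and the parameter ranges are invented for illustration; they are
# not taken from CAM5.1 or any published climate model.
import numpy as np

rng = np.random.default_rng(42)

def toy_projection(sensitivity, feedback, years=100):
    """Made-up warming estimate for a given (sensitivity, feedback) pair."""
    forcing = 3.7 * np.log2(1.01) * years        # ~1%/yr CO2 growth, in W/m^2
    return sensitivity * forcing / (1.0 - feedback)

n = 10_000
sensitivity = rng.uniform(0.3, 0.8, size=n)      # K per (W/m^2), assumed range
feedback = rng.uniform(0.0, 0.5, size=n)         # dimensionless, assumed range

runs = toy_projection(sensitivity, feedback)
print(f"median: {np.median(runs):.2f} K   "
      f"5-95%: {np.percentile(runs, 5):.2f} to {np.percentile(runs, 95):.2f} K")
```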

Reply to  Steven Mosher
November 13, 2014 7:53 pm

read harder pat frank.
and you will figure out why you never get your crap published and why jeff id and Lucia tore your arguments to shreds.
you dont tune to observations or curve fit. period

Reply to  Steven Mosher
November 13, 2014 9:05 pm

Mosher writes “you dont tune to observations or curve fit. period”
Wrong again.

Reply to  Steven Mosher
November 13, 2014 9:10 pm

And the reason you don’t think it’s a curve fit is because you don’t appreciate what a curve fit actually is, Steve. Multiple inputs that approximate physical processes, are tuned, and feed back on each other amount to a curve fit.
Now deny it.

Reply to  Steven Mosher
November 13, 2014 9:45 pm

As has been historically typical with your comments on my work, Steve, your present views continue to be long on accusation and empty of substance.

Frank K.
Reply to  Steven Mosher
November 14, 2014 8:13 am

Pat Frank is essentially right about tuning. A good example is in turbulence modeling in fluid dynamics. The well-known k-epsilon model solves two general transport equations for turbulence kinetic energy (k) and the dissipation rate (epsilon). The model contains five constants – how do you obtain values for these constants so you can use the model? You TUNE THE MODEL. That’s right, you obtain data for flat plates and pipes and adjust the constants until the results (i.e. velocity distributions, friction coefficients, shear stress profiles etc.) match the data for those specific cases. You then boldly go and apply the tuned model to a 747 aircraft at 4 degrees angle of attack. Of course, these kinds of models often fail for complex flows because the constants are only good for the limited cases you tuned the model for. The turbulence literature abounds with examples of model failures and proposed remedies.
If you look at the documentation and theory for most climate models, there are hundreds of parameters(!) and most of these come from the literature where the authors proposed a model and showed it to work under specific circumstances. And like turbulence models, there is NO guarantee that these parameterizations still apply for more complex situations. So modelers do what they like to do – tune the models and parameters to best fit the available data. Unfortunately, our ocean/atmosphere system is so complex and coupled that it is a difficult task to know how to tune one parameter without implicitly affecting something else.
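To make Frank K.’s “tune the model” step concrete, here is a sketch of the standard argument for one k-epsilon constant (an illustration of the textbook calibration, not his code): in near-equilibrium shear layers the measured ratio of turbulent shear stress to kinetic energy is about 0.3, and C_mu is simply set to the square of that ratio.

```python
# Sketch of the "tuning" step for one k-epsilon constant. In local-equilibrium
# shear flows, experiments give |u'v'| / k of roughly 0.3, and the model
# constant C_mu is chosen as the square of that measured ratio, i.e. fitted to
# data rather than derived from first principles. The other standard constants
# (C_1e ~ 1.44, C_2e ~ 1.92, sigma_k ~ 1.0, sigma_e ~ 1.3) are likewise
# calibrated against canonical flows such as decaying grid turbulence and the
# log-law wall region.
measured_stress_to_k = 0.30          # from boundary-layer / shear-flow data
C_mu = measured_stress_to_k ** 2
print(f"tuned C_mu = {C_mu:.2f}")    # the familiar 0.09
```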

4 eyes
Reply to  davidmhoffer
November 13, 2014 2:30 pm

Exactly. If my oil reservoir model doesn’t match the field history then my boss will laugh at my forecasts. The models have to be calibrated against the past – that is the only way to reasonably verify them.

PiperPaul
Reply to  davidmhoffer
November 13, 2014 4:43 pm

It’s good enough so that the reporter will be impressed and not ask any questions.

Frank K.
Reply to  davidmhoffer
November 14, 2014 7:55 am

Higher resolution != better accuracy. This is axiomatic in the numerical modeling of physical systems. Higher resolution does mean smaller numerical errors in the limit of vanishingly small spatial and temporal increments. But accuracy is only as good as the underlying model you’re solving. (Ask turbulence modelers about this…).
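A minimal sketch of that distinction, using an invented one-equation “model” rather than any real code: shrinking the step size removes the numerical error, but the run converges to the model’s answer, not to reality, when the model’s physics (here the rate constant) is wrong.

```python
# Higher resolution != better accuracy: forward-Euler runs of dy/dt = k*y with
# ever smaller steps converge to the *model's* exact solution, which is still
# wrong because the model's rate constant differs from the "true" one.
# All numbers are invented for illustration.
import math

TRUE_K, MODEL_K = 0.10, 0.12         # reality vs. the (slightly wrong) model
Y0, T_END = 1.0, 10.0

def euler(k, dt):
    y, t = Y0, 0.0
    while t < T_END - 1e-12:
        y += dt * k * y              # forward Euler step
        t += dt
    return y

reality = Y0 * math.exp(TRUE_K * T_END)
model_exact = Y0 * math.exp(MODEL_K * T_END)
for dt in (1.0, 0.1, 0.01, 0.001):
    print(f"dt={dt:<6} numerical={euler(MODEL_K, dt):.4f}  "
          f"model exact={model_exact:.4f}  reality={reality:.4f}")
```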

Ivan
November 13, 2014 10:43 am

OT: What happened with Watts et al 2012? It was announced two years ago that it would be published “soon”. Any news?

Brad Rich
November 13, 2014 10:43 am

When they beat the Old Farmer’s Almanac they can crow. Until then (if ever), they need to shut up after the last miserable failures.

Tim
Reply to  Brad Rich
November 13, 2014 5:53 pm

“These kinds of calculations have gone from basically intractable to heroic to now doable.”
So do we gather from that that the many billions spent so far, and countries’ energy policies decided, were based on the aforementioned intractable and heroic dud computers that are now acknowledged to have been miserable failures?

george e. smith
November 13, 2014 10:45 am

So they simply get their garbage out much sooner.

Tom O
Reply to  george e. smith
November 13, 2014 10:52 am

No George, they get HIGH RESOLUTION garbage out much sooner. No more of that low resolution garbage.

Reply to  Tom O
November 13, 2014 11:46 am

+1

Reply to  Tom O
November 13, 2014 12:25 pm

Exactly. It’s important to be able to calculate garbage resolution to the .00001 order, and do it faster than ever before.

PiperPaul
Reply to  Tom O
November 13, 2014 4:56 pm

Insert comment about precision vs accuracy here.

Jimbo
Reply to  george e. smith
November 13, 2014 2:07 pm

Claims have been made for this supa dupa computer stimulator. Now let it do its stuff, make projections, and let’s observe. I hope it does better than this.
http://www.energyadvocate.com/gc1.jpg

Joe Crawford
November 13, 2014 10:47 am

Great… That means that once improved and verified they may be able to accurately predict between next week’s to next month’s weather in three months?

Eustace Cranch
November 13, 2014 10:49 am

The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.
Methinks Wehner is delusional. And methinks the new model is going in the wrong direction.

Louis
Reply to  Eustace Cranch
November 13, 2014 11:23 am

In what universe have we observed “stronger storms and more of them” for most seasons? Haven’t observations actually shown fewer hurricanes now than in the past?

Reply to  Louis
November 13, 2014 12:26 pm

Now see?…there ya go bringing reality into the mix.
That’ll screw up the whole play.

Jay Hope
Reply to  Louis
November 13, 2014 3:26 pm

Yes, I think they have. But that’s not important right now! 🙂

Pat Frank
November 13, 2014 10:50 am

Error: Now faster and more finely grained than ever.

Reply to  Pat Frank
November 13, 2014 12:34 pm

Bigger anus = Larger poop

Brute
Reply to  MCourtney
November 13, 2014 3:14 pm

The real worry is the bigger mouth and appetite…

Reply to  MCourtney
November 13, 2014 3:17 pm

Serves me right for scatological imagery.

Harold
Reply to  Pat Frank
November 13, 2014 3:40 pm

New and improved Climate Bleach makes white noise whiter.

November 13, 2014 10:51 am

Hmm. As an IT guy, one thing I know for sure is that more data does not necessarily equal better data. 100TB doesnt impress, other than to wonder exactly why they would need 100TB to display a temperature trend.

Reply to  dbakerber
November 13, 2014 2:00 pm

they looked at other variables. read the paper

Andyj
November 13, 2014 10:51 am

Is good money this climate scam thing… A new one about Al Gore.
http://www.blacklistednews.com/HOW_AL_GORE_BECAME_A_BILLIONAIRE/39156/0/38/38/Y/M.html

Non Nomen
November 13, 2014 10:53 am

Hightech is useless when data, methods and models are flawed.

Reply to  Non Nomen
November 13, 2014 12:26 pm

It enables you to reach the wrong conclusion in record time.

KNR
November 13, 2014 10:54 am

‘ Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.’ I am not sure being wrong faster is of much real benefit when they totally fail to understand why they were wrong or even admit to it.

November 13, 2014 10:56 am

…3. Even today’s fastest supercomputers can only achieve up to 25 km resolution, and a single run at such high resolution takes 3 months.
4. No model can be properly initialized with the state of the entire climate system, since the observations are woefully insufficient to provide the initial state at a single point in time for the entire climate system.
5. Chaos theory explains why even if the huge impediments of #3 and #4 above could be overcome, it is still impossible to predict the chaotic and non-linear weather & climate system beyond 3 weeks in the future…
http://hockeyschtick.blogspot.com/2014/11/nasa-official-says-in-ny-times-if-you.html

Steven Mosher
Reply to  Hockey Schtick
November 13, 2014 2:06 pm

#3, actually folks will be running at 1km soon. A single run at 25km. 1 year of sim time took a day of wall clock.
#4. you dont need to initialize the ENTIRE system.

rcs
Reply to  Steven Mosher
November 13, 2014 2:37 pm

#4
You don’t need to initialize the entire system?
Possibly. However, this will give a range of particular solutions that will depend on the subset of initial conditions chosen. How do you know that your subset of ICs is well posed?

Reply to  Steven Mosher
November 13, 2014 8:13 pm

Steven Mosher November 13, 2014 at 2:06 pm

#3, actually folks will be running at 1km soon. A single run at 25km. 1 year of sim time took a day of wall clock.

Thanks, Mosh. When you read the paper, you find that increasing the horizontal resolution actually makes many results worse … from the abstract:

In the absence of extensive model tuning at high resolution, simulation of many of the mean fields analyzed in this study is degraded compared to the tuned lower-resolution public released version of the model.

This is the recurring problem that such models have run into. You can physically run the models at very high resolution, but your results may be much worse.
w.

Reply to  Steven Mosher
November 13, 2014 9:27 pm

“In the absence of extensive model tuning at high resolution, simulation of many of the mean fields analyzed in this study is degraded compared to the tuned lower-resolution public released version of the model.”
So, you give the models more resolution, and the system messes up. That ought to provide some insight.

November 13, 2014 10:56 am

This is the delusion that bigger more powerful computers will solve the climate problems of inaccurate forecasts. As I have written, if you don’t have data you can’t build a model or validate the results. Meanwhile, billions are wasted when other more pressing uses for big computers are ignored.
http://wattsupwiththat.com/2014/10/16/a-simple-truth-computer-climate-models-cannot-work/

Frank K.
Reply to  Tim Ball
November 14, 2014 8:19 am

I’m not against climate modeling. I wish them the best and hope they can make strides with improved codes as computers get more powerful. I AM against throwing money into dozens of codes, many written very poorly (e.g. NASA GISS Model E) – just focus on one or two models and make them as good as you can. Unfortunately, the Climate Industry(tm) will have none of that…

Bill Illis
November 13, 2014 10:57 am

The animation starts on July 26, 1979 and there is substantial snow on the ground in the northern hemisphere. Can’t be all that then.

Editor
November 13, 2014 10:58 am

Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said.

Over the last century, the world warmed … but even according to the IPCC, heavy precipitation has NOT gotten more extreme. Despite the facts, Wehner has his head so far up his … model … that he can’t even tell if it’s raining.
Epic fail, the typical modeler’s conceit that their model is the real world writ small.
w.

Jimbo
Reply to  Willis Eschenbach
November 13, 2014 2:56 pm

Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said.

He should have said “Observations show this to be happening” even though they don’t. 😊 Money does terrible things to the mind. It makes you see things that aren’t there.

LeeHarvey
November 13, 2014 10:58 am

this [is] a golden age for high-resolution climate modeling

English translation: We’re able to pull numbers out of our ass faster than ever before.

Reply to  LeeHarvey
November 13, 2014 12:30 pm

and get PAID FOR IT.
Fixed it 🙂

November 13, 2014 11:02 am

Claim: “golden age shower of climate science, models” is upon us

Harry Passfield
November 13, 2014 11:03 am

Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms.

Doesn’t he mean ‘if’ instead of ‘how’?

Reply to  Harry Passfield
November 13, 2014 11:52 am

I thought so.
They’re lying when they say they know the answers and it’s settled science.
Gavin said it’s settled.
Mikey said it’s settled.
Uncle Al the Kiddies Pal said it’s settled.
Even the POTUS said it’s settled.
Silly me…of course they’re lying.
Everybody knows that.

Reply to  mikerestin
November 13, 2014 12:31 pm

Did you know you can save money on a climate quote in 15 minutes? 🙂

Rob Dawg
November 13, 2014 11:03 am

We know orders of magnitude more about weather than we do about climate, yet we cannot forecast two weeks out.

Editor
November 13, 2014 11:03 am

During the animation, which runs from July to October, there is no change in the snow on the ground. No change in snow cover anywhere, Asia, Canada, nothing. Another epic fail … these guys are better than the funny papers.
w.

Steven Mosher
Reply to  Willis Eschenbach
November 13, 2014 2:12 pm

That’s because the animation is drawn over a fixed background. Look at the head slide.
They are showing total column integrated water vapor.
The background scene is fixed. It doesn’t represent the ground cover.

Reply to  Steven Mosher
November 13, 2014 8:02 pm

You’re right, Mosh. The background is just a climate Potemkin village.
w.

markl
November 13, 2014 11:03 am

“The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.” Translated = “Our preferred expectations were met with more computing power.” Where did these “actual observations for most seasons” come from? These people are clinically delusional.

November 13, 2014 11:08 am

Never mind the output data, the graphics are to die for …
Pointman

November 13, 2014 11:08 am

“Berkeley Lab researcher says climate science is entering a new golden age.”
That’s funny, I thought the Neolithic, not the golden age, followed the Stone Age.

Adam Gallon
Reply to  Dave in Canmore
November 13, 2014 12:02 pm

Neolithic is the New Stone Age, which followed the Mesolithic (Middle Stone Age), which followed the Paleolithic (Old Stone Age).
Bigger computer, gets the wrong results even quicker!
The Met Office here in the UK just did that: a £97m supercomputer to replace their antique (4-year-old) £33m one!

November 13, 2014 11:08 am

Yawn. Wake me up when they actually put ALL the relevant data into the clanking machine instead of just the bits they use at the moment.

Chris
Reply to  Oldseadog
November 13, 2014 11:33 pm

What comprises all the relevant data?

Francisco
November 13, 2014 11:10 am

“A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”
And after saying this they still have the gall to create policy!!!

Curious George
November 13, 2014 11:14 am

This model has a 2.5% error in energy transfer by water evaporation from tropical seas, http://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257

LeeHarvey
Reply to  Curious George
November 13, 2014 11:25 am

It’s the only way they could keep the water vapor feedback negative.

LeeHarvey
Reply to  LeeHarvey
November 13, 2014 11:44 am

Dammit… I meant positive.

Reply to  LeeHarvey
November 13, 2014 12:01 pm

Water vapor feedback is whatever they say it is at that moment.
Nothing more and nothing less.

Curious George
Reply to  Curious George
November 13, 2014 11:41 am

On the other hand it means that predictions become unreliable after 2 days.

Joel O'Bryan
Reply to  Curious George
November 13, 2014 11:42 am

On an initial state, iterate a 0.025 error factor several hundred times over many thousands of grid cells. What is the output state? Garbage.
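Taking that arithmetic at face value (a deliberately simplified assumption; real model errors need not compound multiplicatively like this), the numbers grow quickly:

```python
# If a 2.5% error really did compound on every iteration, the cumulative error
# factor would be 1.025**n. This is worst-case arithmetic for illustration,
# not a claim about how errors actually propagate in a GCM.
for n in (10, 100, 500):
    print(f"{n:>3} iterations: cumulative factor ~ {1.025 ** n:,.1f}x")
```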

Matthew R Marler
Reply to  Curious George
November 13, 2014 12:54 pm

Curious George: This model has a 2.5% error in energy transfer by water evaporation from tropical seas, http://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257
that’s not the half of it. You only refer here to the latent heat of vaporization. What is necessary for accurate modeling is the rate of evaporation, and the change in that rate as temperature increases. (followed by accurate modeling of cloud cover, et seq.)
It seems obvious that if they set the latent heat too high, and they have the energy input rate correct, then they must have the evaporation rate too low. Getting from there to a quantitative assessment of how much error, accumulated across the distribution of regions and temperatures, looks intractable now. And if the temperature response to a doubling of CO2 concentration is less than 1% of the baseline temperature, on average, it would seem that a 2.5% error in the latent heat is a non-negligible error.

DHR
Reply to  Curious George
November 13, 2014 5:07 pm

But hasn’t water vapor been declining for many decades?

DD More
Reply to  Curious George
November 14, 2014 7:11 am

Do they also pull most of the energy out of the water, which lowers its temperature, as actually happens in the real world?

Harry Passfield
November 13, 2014 11:16 am

I particularly loved the closing line of the graphic: “Data is freely available”. Oh boy. I’ll just go and get my old desktop out and give it a run….

November 13, 2014 11:18 am

This is not going to make any advances in the prediction of climate because this product, like all the others before it, will once again be based on incomplete, inaccurate, and missing data.
This is a waste of time, money and effort, which is the norm when it comes to climate science.
The blind leading the blind is essentially what climate science is at this juncture.

Kevin Daugherty
Reply to  Salvatore Del Prete
November 13, 2014 12:14 pm

You said incomplete, inaccurate, and missing data. This seems to imply no intent. I think fraudulent data more accurately describes much of the intent of so called climate science.

Tim
Reply to  Salvatore Del Prete
November 14, 2014 4:39 pm

More like the cunning leading the blind.

Rob Dawg
November 13, 2014 11:19 am

“Clouds aren’t important unless you have money to give us to study them then they are very important. But we will need a bigger computer…”

Quelgeek
November 13, 2014 11:19 am

“…models will help scientists to better understand how climate change will affect extreme storms.”
That seems exactly backwards to me. It’s like suggesting models can help better understand how forests cause trees.

Jimbo
Reply to  Quelgeek
November 13, 2014 3:05 pm

No no. Climate change is not the same as a changing climate. Climate change is code for man’s Satanic gases.

old44
November 13, 2014 11:20 am

Why do they need the new computer models when the old ones were 100% accurate?

Reply to  old44
November 13, 2014 12:34 pm

Simple. There was still more funding in the kitty.

Rud Istvan
November 13, 2014 11:21 am

There are two fundamental problems with this PR.
First, to even attempt to model convection cells (clouds, Tstorms) the grid resolution has to be 10km or less, not 25. Better is still not near good enough. This run still took 3 months to simulate 26 years to 2005. So a single run at 25km grid would take about 15 months to reach 2100 for ‘CMIP 7’. (And still not have the physics, only parameterizations.) Still incomputable, let alone for ensembles. IPCC AR5 said this also, using the PR example here of clouds. WG1 7.2.1.2.
Second, finer resolution just increases the initial value problem. So the nonlinear dynamic model results just diverge more and more rapidly. And attempting to convert that mathematical fact to a boundary envelope problem via ensembles is impossible owing to the computational constraints. See problem 1.
Both issues illustrated in essays Cloudy Clouds and Models all the way Down in Blowing Smoke.
Give the supercomputer to the NWS to do real weather forecasting out a few days on regional fine scales. See Cliff Mass blog. Stop wasting taxpayer money on shiny toys that are inherently not fit for purpose.
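The arithmetic behind the ‘15 months’ estimate can be reproduced roughly as follows, assuming wall-clock time scales linearly with the number of simulated years at fixed resolution (an assumption, not a benchmark):

```python
# Rough scaling of the reported 25 km run: ~26 simulated years (1979-2005)
# took ~3 months of wall clock. Linear scaling in simulated years is assumed.
months_per_sim_year = 3.0 / 26.0
for first, last in ((2006, 2100), (1979, 2100)):
    years = last - first + 1
    print(f"{first}-{last} ({years} yr): ~{years * months_per_sim_year:.0f} months of wall clock")
```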

mwh
November 13, 2014 11:24 am

Why do we spend ever more money on supercomputers and other methods to attempt to predict the unpredictable? So far the models have produced no better than a guessed line or curve. The funniest part is that the line following the 5000-year trend tends to be closest. Apart from that, there is far too short a range of accurate data: the satellite record doesn’t even manage 50 years yet and doesn’t span the warming/cooling periods experienced since the LIA, so the predictions from all these data sets are flawed right from the start.
If that wasn’t inaccurate enough, how can you take a piece of new data, like the greater-depth readings from buoys, with absolutely no historical data to speak of, and use that information to predict the future?
In 100 years’ time maybe we will make some headway into how all the forcings/feedbacks fit together; even then I doubt prediction of the future weather will improve much – it’s just too chaotic.
Shouldn’t all this money be used in streamlining the human race by improving energy efficiency and embarking on a mild depopulation programme worldwide? Now that would have benefits!! Not some mega-terabyte super ruler.

Sal Minella
November 13, 2014 11:24 am

You mean it takes them years to produce one load of crap? I can do it in about six hours.

Gary
November 13, 2014 11:30 am

More likely to be the Iron Pyrite Age of Climate Modeling.

LeeHarvey
Reply to  Gary
November 13, 2014 11:36 am

A thousand years ago, there was big money in alchemy. Apparently everything really does come back around.

Paul
Reply to  LeeHarvey
November 13, 2014 1:09 pm

You have to admit, they do a good job of turning lead into gold.
(RoHS compliant, I assume?)

Claudius
November 13, 2014 11:30 am

And in the end this matters how? Another stupor-computer climate model that does what, in comparison to what? This is nothing more than a shameful waste. I can see starting a climate model out with real-world data collected with some semblance of conformity and consistency, but that data would be, what, maybe twenty years old? The earth is how many years old? Twenty years’ worth of usable data means what?
Same case with greenhouse gas emissions: folks have thirty years’ worth of data on a feature of earth’s atmosphere that we have no idea how old it could possibly be. I think that scientific study in the end is a good thing, and developing models is of paramount importance, but the idea of attempting to regulate society with so little data and such poor tools and systems of measurement is nothing more than hysterical political tripe.

Rob Dawg
November 13, 2014 11:37 am

In all the time that model ran it never got dark, the sea ice never budged, and as noted above the snow never fell. Reflects observation? I think not.

Reply to  Rob Dawg
November 13, 2014 12:36 pm

See how nice it is, living in the land of models??? 🙂 Don’t like something?…just delete it.

looncraz
Reply to  Rob Dawg
November 13, 2014 8:18 pm

I think this is just wind current modeling over a static surface map.
One thing that seems missing is circumpolar currents, but I’m not really certain what they were trying to visualize.

catweazle666
November 13, 2014 11:37 am

” “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.”
How infrequent is infrequent?
The last major hurricane to strike the United States made landfall on Oct. 24, 2005.

Ian H
Reply to  catweazle666
November 13, 2014 11:58 am

Far too infrequent, as in most low-resolution models can’t spontaneously generate hurricanes.

Resourceguy
November 13, 2014 11:37 am

High-resolution model bias takes a lot of money and computing power.

Mark Bofill
November 13, 2014 12:03 pm

Look, faster, better computers are always nice. The trouble is the order complexity of the algorithms. A problem which can’t be solved in polynomial time is basically intractable; it doesn’t really matter how powerful the computer trying to solve it is. Simulations are always going to be approximations that diverge from reality by the nature of the math involved and the parameterization of very complicated things, and running on a better computer isn’t going to alter that fundamental reality.
Computer science majors learn this as freshmen or sophomores.
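A minimal sketch of that divergence, using the classic Lorenz 1963 system (a standard toy, not a climate model): two runs that differ by one part in a billion in their initial state end up effectively unrelated.

```python
# Two forward-Euler integrations of the Lorenz '63 system, identical except
# for a one-part-in-a-billion perturbation of the initial x. The separation
# grows until the two runs are effectively unrelated. Toy illustration only.
import numpy as np

def step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-9, 0.0, 0.0])

for i in range(1, 10001):
    a, b = step(a), step(b)
    if i % 2000 == 0:
        print(f"t = {i * 0.005:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```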

Rud Istvan
Reply to  Mark Bofill
November 13, 2014 1:29 pm

+1

Bruce Cobb
November 13, 2014 12:11 pm

“Gee-whiz science”? More like Cheez Whiz science. Lots of fluff and pretty, but no substance to it.

November 13, 2014 12:26 pm

As a plod who has the odd three decades of coding and systems experience, I wonder what they are ACTUALLY running on multi-billion dollar massively parallel Beowulf cluster, because the latest GISS model (GCM ModelE) 5.x doesn’t support such an architecture. See, being a pedantic software engineer of the olde school, I downloaded and compiled the beast, and read ALL of the VERY CLUNKY FORTRAN. Where has the $50 BILLION gone?
From the system specifications:
“Note that the parallelization used in the code is based on the OpenMP shared memory architecture. This is not appropriate for multi-processing on a distributed memory platform (such as a Beowlf cluster). For machines that share a number of processors per board, the OpenMP directives will work up to that number of processors. We are moving towards a domain decomposition/MPI approach (which will be clear if you look at the code), but this effort is not yet complete or functional.”
http://www.giss.nasa.gov/tools/modelE/HOWTO.html#part0_3
Anyone interested you can look at the code on my site. I put it all in one convenient place.
https://github.com/addinall/GISS_climate_model
Thinking about re-writing it in a sensible language. Reverse engineering, at a glance, will show how trivial this model is.
The code will run correctly on something like a SunFire with 16-32 CPUs on the same MOBO, but it WILL NOT make use of a parallel architecture.
“ModelE uses OpenMP application program interface. It consists in a set of instructions (starting with C$OMP) which tell the compiler how to parallelize the code. To be able to run the model on multiple processors one has to compile it with enabled OpenMP. This is done by appending MP=YES to the compile line, i.e.
gmake gcm RUN=E001xyz MP=YES
It is important to keep in mind that one can’t mix OpenMP objects with a non-OpenMP compilation. This means that one has to do gmake vclean when switching from OpenMP to non-OpenMP compilation. The option MP=YES can be set in ~/.modelErc file (see Part 0.2). In that case one can skip it on the command line.
In the setup stage, the number of processors is defined by the relevant environmental variable $MP_SET_NUMTHREADS. However, when using runE, it is similar to the way it is used for the non-OpenMP model, except that one has to specify the number of processors as a second argument to runE. For instance if one wants to run the model E001xyz on 4 processors one starts it with
runE E001xyz 4
” […]
Now, I am not impressed by “I can run me program an it produces 12 zillion petabytes of random numbers, all falling within pre-ordained upper and lower bounds” when the data for the priming of the model is sparse indeed.
“Boundary and initial conditions for the AR4 version can be downloaded from fixed.tar.gz (191 MB). This is a large amount of data due to things like transient 3-D aerosol concentrations etc.”
http://www.giss.nasa.gov/tools/modelE/
Boundaries and initial conditions for the WHOLE PLANET can squeeze into 191 MB???? Amazing!
My temperature start data for one automatic lathe under Statistical Process Control is about thirty times more complex than the WHOLE PLANET! Me must be doing it wrong!
Can I please have the super-computer to do some REAL WORK. Proteomics and Epigenetics could stand a nice new shiny machine(s).

Paul
Reply to  Addinall
November 13, 2014 1:12 pm

Shhhh, you’re going to ruin it for them…

Curious George
Reply to  Addinall
November 13, 2014 1:27 pm

Is the GISS Model E same as CAM5.1? I thought that this post was about CAM5.1.

Reply to  Curious George
November 13, 2014 2:15 pm

They are not the same. Every time I hear about the “gold standard” of models, GISS is trotted out. That is what piqued my curiosity initially. The code is bloody old and awful, and will run OK on an XBOX.
They have altered the run from CAM4.x to CAM5.1 so that now CAM5.1 can copy (close enough) the real observed data. Warming over the 20th century has dropped from a 0.84C mean to a 0.35C mean (both meaningless numbers).
CAM5.1 can make use of parallel operation:
“CAM makes use of both distributed memory parallelism implemented using MPI (referred to throughout this document as SPMD), and shared memory parallelism implemented using OpenMP (referred to as SMP). Each of these parallel modes may be used independently of the other, or they may be used at the same time which we refer to as “hybrid mode”. When talking about the SPMD mode we usually refer to the MPI processes as “tasks”, and when talking about the SMP mode we usually refer to the OpenMP processes as “threads”. A feature of CAM which is very helpful in code development work is that the simulation results are independent of the number of tasks and threads being used.”
But not very well.
The software was not written with parallelism in mind as an architectural model. The existing shared memory serial code has been hacked around. That is probably why a run is taking months!
They would be better served (sic) writing some decent software rather than buying shiny new boxes.

Frank K.
Reply to  Curious George
November 13, 2014 2:16 pm

No. CAM 5.1 and GISS Model E are two different models. CAM 5.1 is actually well done as far as computer models go. Great documentation and implementation. Hats off to NCAR.
GISS Model E is a piece of old FORTRAN junk that should not even be compared with CAM 5.1 or other state of the art climate models. Gavin doesn’t have time to document it properly so no one really knows what’s in it…

BFL
Reply to  Curious George
November 13, 2014 10:09 pm

Mosher: “The latest GISS model has exactly Nothing to do with this post. good engineer. not.”
Are you implying that the REAL model code is “classified” so that it is safe from programmers with 20-30 years experience who might find something wrong with it?? Shades of climategate. And yes I saw the list of lame excuses for doing this.

Steven Mosher
Reply to  Addinall
November 13, 2014 3:05 pm

The latest GISS model has exactly Nothing to do with this post.
good engineer. not.

Rick
November 13, 2014 12:26 pm

Maybe it’s about time ‘Theoretical Climate Science’ was treated as a distinctly separate discipline to ‘Actual Climate Science’.

CCM591
November 13, 2014 12:33 pm

My experience with numerical weather prediction models is that the higher-resolution models (both in the horizontal and vertical) are not necessarily any more accurate with their projections than the lower-resolution NWP models.

Rud Istvan
Reply to  CCM591
November 13, 2014 1:41 pm

ECMWF was days ahead of NWS on Hurricane Sandy. Go to the Cliff Mass blog for lots of specifics and details. The point is not that finer-resolution models are perfect (that darned Lorenz nonlinear dynamics effect), but that they often are practically better. The essay Models… in Blowing Smoke uses a concrete example of an Arizona thunderstorm weather front: real data from an event, with both radar and weather models of varying grid fineness.

CCM591
Reply to  Rud Istvan
November 13, 2014 2:59 pm

I agree that the ECMWF model is a good model (better than the GFS), but I’ve still seen it (ECMWF model) depict totally anomalous features at 240 hours that never actually materialize.
The new NAM model has a relatively fine horizontal resolution (4 km), but is really no better than the earlier 12 km NAM, based on my experience with working with it on essentially a daily basis.

Dennis Stayer
November 13, 2014 12:43 pm

GIGO – just a whole lot faster. One thing I know is if they ever get a model together that comes near to reality I’ll hear of it from WUWT.

Tom in Florida
November 13, 2014 12:59 pm

I believe the “golden age” he is speaking about is the grant money deposits in their bank accounts.

Reply to  Tom in Florida
November 13, 2014 1:05 pm

You have it in one!

November 13, 2014 1:00 pm

Golden age of stupid, he means.

Randy
November 13, 2014 1:20 pm

I think it’s cute, in a really scary way, that anyone would claim climate models are gaining accuracy when we do not even know what drives ENSO yet, which caused rates of change far surpassing the ones we are supposed to fear. We have talk of an energy imbalance yet can’t explain, or even highlight, a source for a massive transfer of energy.
We totally understand the climate! Except the predicted rate of change needed for the dangerous ends of the claims isn’t even vaguely coming to fruition, and major energy movements happen without us even having an inkling of what triggers them. Science is settled! Well, unless you look at the published work, where basically every variable is still up for debate. Hmm. No more questions!

November 13, 2014 1:27 pm

The video shows Australia as not having ANY cyclones for the entire period.
The Oz BOM shows so many cyclone tracks over the same period that it looks like the spaghetti graphs typically shown for climate model runs.

Reply to  John Trigge
November 13, 2014 1:48 pm

Ah. That must be the LOW-Res model you’re looking at. The one with not enough storms.

Jesse G.
November 13, 2014 1:33 pm

A computer model, no matter how powerful, is no better than the programming and the input data. A computer is not a magical machine.

Curious George
Reply to  Jesse G.
November 13, 2014 1:50 pm

I believe that a computer IS a magical machine. A program with input data but without a computer is only a dead model.

Dawtgtomis
November 13, 2014 1:52 pm

Just a last-ditch effort to legitimize the previous claims. Now we can put garbage in and get higher resolution garbage out, faster than ever before!

Dawtgtomis
Reply to  Dawtgtomis
November 13, 2014 1:54 pm

Still reminds me of “PAY NO ATTENTION TO THAT MAN BEHIND THE CURTAIN”…

CarlF
November 13, 2014 1:57 pm

Wow. They can generate ever more accurate garbage faster than ever. The computer power is truly amazing. That does not mean they will get any better results, of course, but it will likely increase their insistence that we can trust the projections, which likely will show even more extreme weather. Eager to see what it spits out. If it incorporates oceans as well as clouds, it could be very interesting.

Dawtgtomis
Reply to  CarlF
November 13, 2014 2:04 pm

From my view it looks like the results are predetermined. Higher tech gives just higher authority to the hypnotized masses.

Dave
November 13, 2014 2:02 pm

New model, same assumptions.

Steven Mosher
Reply to  Dave
November 13, 2014 3:06 pm

and more accurate results

Reply to  Steven Mosher
November 14, 2014 2:31 am

“Chaos: When the present determines the future, but the approximate present does not approximately determine the future.” (Lorenz 2005)

tty
November 13, 2014 2:03 pm

Whoever wrote that press release seems to have a rather vague idea of mathematics:
“Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km”.
“The higher resolution was particularly helpful in mountainous areas since the models take an average of the altitude in the grid (25 square km for high resolution, 200 square km for low resolution).”
I make that 625 and 40,000 square km respectively, unless the resolution was actually 5 and 14 kilometers.
By the way, 5 kilometers is probably a more realistic spatial resolution if you actually want to model weather reasonably realistically in mountain areas.

November 13, 2014 2:13 pm

He kept saying the models were improved and more accurate, but never quantified that. For all we know they improved from laughable to pathetic.

Steve Keohane
November 13, 2014 2:23 pm

This so called ‘Golden Age’ of turd polishing is still producing a lot of ‘Brown 25’*.
*® Uranus Corp. from “The Groove Tube”

MarkW
November 13, 2014 2:30 pm

Hooray. Our models don’t suck as badly as they used to.

Jimbo
November 13, 2014 2:42 pm

We are entering the Golden Age of electronic chicken entrails. This could be fun.
The Met Office keeps on getting bigger and faster computers and keeps failing faster. Why?
http://www.metoffice.gov.uk/news/releases/archive/2014/new-hpc

November 13, 2014 2:46 pm

I wish they would do an interactive Web version!
This is better than current video games, by far.
A little bit more expensive to develop, I gather. ;-(

November 13, 2014 3:00 pm

Latest Supercomputers Enable …. faster garbage in/garbage out.
End of story.
Their models still suck because they are still based on the same fallacious assumptions and use the same tampered data sets.

Michael Elliott
November 13, 2014 3:02 pm

Hello, if these supercomputers are so good, how come the three-day forecast is frequently wrong?
For a real test, and to be able to check the accuracy of their forecasts, why not take a known set of data, try 1880, then 1920, then 1950, and see what comes up? Easy to check how good they are. And mind you, no cheating: no slipping in the known weather data before the programme has been run.
Michael Elliott

November 13, 2014 3:06 pm

The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

Seems that the low-res models are more like REALITY.

Reply to  Streetcred
November 13, 2014 3:09 pm

Very true.
But if you are right for the wrong reason you are just setting yourself up for a bigger fall.

Dawtgtomis
Reply to  Streetcred
November 13, 2014 3:32 pm

To me it paraphrases as: “My new computer is more fun because it makes things look even scarier when I assume that CO2 drives temperature.”

TRM
November 13, 2014 3:10 pm

So let’s see their output. Pick a start date of, oh, 2000 and let’s see if it can hindcast the 1990s and forecast out to 2010. When they actually start to get close to Dr Easterbrook’s or Dr Libby’s level of skill we’ll call that a good thing.
If they start a run in 2000, does it show the “pause, peak, plateau”? If not, then back to reworking the code, boys.

garymount
Reply to  TRM
November 13, 2014 4:21 pm

It might not be a problem with the code, but more a problem with capturing the physics. As stated, a lot of cloud dynamics are missing.

Dawtgtomis
November 13, 2014 4:07 pm

“golden age of climate science, models”
This one leaves me with “Everything’s Up To Date In Kansas City” playing in my head.
Seems naive (or possibly arrogant) to assume that no further improvements will be made and perfection has already been attained. What I see as essential to the warmist dogma is the assumption that future generations will have only the present technology and understanding to solve whatever problems vex mankind. They apparently feel an urgency to predict something which is beyond our understanding at this juncture and to control its outcome, as if somehow the future won’t take care of itself.

garymount
November 13, 2014 4:18 pm

A modern, dynamically adaptive computer model should be developed: one where you can increase the resolution in areas with, for example, large terrain differences.

Roy
November 13, 2014 4:18 pm

What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones.

What an oxymoronic statement. It should say better at NOT reproducing intense storms, as there has been a decrease.
Because there is no input data for most of the world, they have to estimate/guess/fudge most of the data. It doesn’t matter how “super” your computer is, garbage in is garbage out.

garymount
November 13, 2014 4:26 pm

This reminds me of when classical physicists were getting closer to matching reality in the quantum arena as they added parameters, but still using classical physics. Yet they were wrong at the same time they fooled themselves into believing they were on the right track.

Dawtgtomis
Reply to  garymount
November 13, 2014 5:06 pm

Hmm…The Golden Age of Physics, maybe?

PaulH
November 13, 2014 4:49 pm

No doubt these new computerized Ouija boards will produce the same predictions: “We’re gonna need more money”

expat scientist
November 13, 2014 5:37 pm

The following quote is relevant here:
… there is an awful temptation to squeeze the lemon until it is dry and to present a picture of the future which through its very precision and verisimilitude carries conviction. Yet a man who uses an imaginary map, thinking it is a true one, is likely to be worse off than someone with no map at all; for he will fail to inquire whenever he can, to observe every detail on his way, and to search continuously with all his senses and all his intelligence for indications of where he should go.
From Small is Beautiful by E. F. Schumacher.

Richard M
November 13, 2014 5:45 pm

Looks like they’ve moved from infancy to the toddler stage. As long as people recognize they are far from having anything close to reality, I guess making progress is a good thing.
Also note the time period is during the warm phase of the PDO. Since they don’t model ENSO at all, any warming they find will overstate reality.

SAMURAI
November 13, 2014 6:51 pm

“To do that, they will have to have a better understanding of how clouds behave.”
Ah, yes… Ye ol’ cloud defense…
This feigned ignorance of the fact that clouds (not CO2) are the real regulators of Earth’s climate will literally be CAGW advocates’ get-out-of-jail-free card when they’re testifying before Congress, trying to defend why world governments wasted tens of trillions of dollars on the disconfirmed CAGW hypothesis…
Willis’ brilliant post a while back regarding his ONE-LINE equation that matches the HADCRUT4 temp record to an amazing R2 value of .98 shows you don’t need millions of lines of computer code and years of supercomputer time to “accurately” hindcast anything… All you need is a ONE-LINE equation that matches known results based on arbitrary assumptions…
As long as these CAGW climate models assume ECS will be anywhere near 3.0C~4.5C, model projections will be laughably off, unless natural forcing effects or natural variation just happen to generate 3~4.5C of warming by 2100, which doesn’t seem probable…
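As a generic illustration of that point (synthetic data, not HadCRUT4, and not Willis’s actual equation): a handful of fitted parameters can reproduce a temperature-like record closely without that saying anything about the underlying physics.

```python
# Fit a three-parameter model to a synthetic temperature-like series. A high
# R^2 from tuned parameters demonstrates curve-matching power, not physical
# skill. Both the series and the fit are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2015)
record = (0.006 * (years - 1880)                  # synthetic trend
          + 0.1 * np.sin((years - 1880) / 10.0)   # synthetic wiggle
          + rng.normal(0.0, 0.05, years.size))    # synthetic noise

x = years - years.mean()                 # centre to keep the fit well conditioned
coeffs = np.polyfit(x, record, deg=2)    # "tune" three parameters
fitted = np.polyval(coeffs, x)

ss_res = np.sum((record - fitted) ** 2)
ss_tot = np.sum((record - record.mean()) ** 2)
print(f"R^2 of the three-parameter fit: {1.0 - ss_res / ss_tot:.2f}")
```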

ES
November 13, 2014 7:06 pm

Maybe this is the reason that the NWS was down.
Chinese hack U.S. weather systems, satellite network.
Hackers from China breached the federal weather network recently, forcing cybersecurity teams to seal off data vital to disaster planning, aviation, shipping and scores of other crucial uses, officials said. The intrusion occurred in late September but officials gave no indication that they had a problem until Oct. 20, said three people familiar with the hack and the subsequent reaction by the National Oceanic and Atmospheric Administration, which includes the National Weather Service. Even then, NOAA did not say its systems were compromised.
http://www.washingtonpost.com/local/chinese-hack-us-weather-systems-satellite-network/2014/11/12/bef1206a-68e9-11e4-b053-65cea7903f2e_story.html

John L.
November 13, 2014 8:03 pm

GIGO still rules, I see. I'll stick with my Farmer's Almanac and the aching-bunion weather test for a while yet, thank you.

Khwarizmi
November 13, 2014 8:08 pm

It’s just a cheap imitation of the real thing:

Note the daily precipitation cycle evident over heavily forested tropical regions.
That isn’t simulated in the computer game.

Reply to  Khwarizmi
November 14, 2014 4:52 am

It's not that we can't do pretty good weather approximations; we can. Our weather forecasting on the whole is pretty good. The problem is that they think a pretty good weather approximation has anything whatsoever to do with modelling changes due to CO2 increases. We can't model climate change. It's not the same as weather, and no amount of resolution increase will change that.

Editor
November 13, 2014 8:23 pm

Steven Mosher November 13, 2014 at 1:58 pm

Easy. There are global datasets down to 1 km.
Depends on the metric and the area.
US temperatures and precipitation? 1 km. PRISM. That's just one.

Nope. That’s the output of a model, which can have any resolution you like … but which is assuredly different from the observational data which the commenter asked about.
w.

Anything is possible
Reply to  Willis Eschenbach
November 13, 2014 8:41 pm

Willis, your Callender thread does not appear to be open for comments.
Judging by your closing remark, I’m guessing this is an unintended SNAFU.

markl
Reply to  Anything is possible
November 13, 2014 8:55 pm

It’s not just me then. Great article BTW.

Steve Keohane
Reply to  Anything is possible
November 13, 2014 9:04 pm

Noticed that too. Very good article.

Reply to  Anything is possible
November 13, 2014 9:42 pm

Thanks for the heads-up, guys. It’s fixed now, your comments welcome.
w.

November 13, 2014 9:22 pm

Whoa, there is no fundamental reason models must be wrong. The notion that warming will cause more extreme weather is wrong. The geological record is crystal clear that warmer periods are more stable.
You just have to get the models right. No simple task, particularly when you start out upside down with a horrible misconception that the climate has high sensitivity to CO2. The task before the modelers is to deconstruct the “chaos” apology for failure. I do not believe in chaos. Chaos is groupspeak for “we don’t know squat”.
Resolution will help, as it will rub their noses in the dogsquat all the faster, but they are in a position to run multiple scenarios very quickly. I think of it as when I have an unknown map projection and I go into ArcMap and start cycling through possibilities until one snaps in. A bingo moment. I fully believe the golden age of modeling is coming. It just won't be tomorrow.

Ursus Augustus
November 13, 2014 9:25 pm

"Tuning parameter"? That's a "fudge factor", isn't it? In other words, they have not used the fine mesh to better model the actual climate; they have used it to introduce a new farrago of fudge factors.
Navier–Stokes with some fudge factors? Nah, too hard to get agreement with reality, and the "deniers" are onto it. Way too easy to spot the obvious flaw. Numerical Synthesis with lots of Tuning Parameters is where it's at. Upsize the Kool-Aid too?
The caterpillar undergoes its chrysalis to become… a moth drawn to the light.

Mike McMillan
November 13, 2014 11:43 pm

I don’t think Mazatlan or Los Cabos would be great tourist spots on that computer planet with as many cyclone hits as they took.

Reg Nelson
November 14, 2014 1:01 am

Wasn’t 2005 also the ending point for Mann’s model? I guess that was the year the science was settled.
One way to look at this: a chessboard has 64 squares and 32 pieces. The movement of the pieces is constrained by the rules of the game. It is only recently that supercomputers have been able to defeat the top human players.
How many squares and how many pieces does the Earth have?

Allan MacRae
November 14, 2014 1:33 am

In North America last year (2013) both Environment Canada (EC) and the USA National Weather Service (NWS) predicted a warmish winter, and winter 2013-14 was very cold in the central and eastern two-thirds of Canada and the USA.
It should be noted that certain private meteorologists made accurate predictions of a very cold winter as early as July 2013.
This year is shaping up much like last year – both Environment Canada and the USA National Weather Service have again predicted a warmish winter, and again the same private meteorologists have predicted winter 2014-15 will be very cold in the central and eastern two-thirds of Canada and the USA. So far the private prediction is proving accurate – November has been very cold.
I understand the EC and NWS government weather forecasts rely primarily on computer models that in recent years have a very poor predictive track record.
The private forecasters used analogs from previous years of actual weather history as the basis for their predictions, and recently have a very good predictive track record (I have not attempted to go back further in time).
This limited evidence suggests that the government’s best seasonal weather computer models have little or no predictive skill, whereas older methodologies that rely on historical analogues have a stronger predictive track record.
I further suggest that in science one’s predictive track record is perhaps the only objective measure of one’s competence, and in this regard both Environment Canada and the USA National Weather Service failed in their forecasts for last winter and probably again for this winter. I am a pragmatist, and I am interested in what works – it does not matter if the shiny new computer model is the biggest and the best if it has no predictive skill, and it does not matter if an analog methodology is hundreds of years old, as long as it has good predictive skill.
The reason this all matters is the Excess Winter Mortality Rate – many more people in Europe and North America die in the four Winter months than in the eight non-Winter months.
Repeating an earlier example for Europe and all of Russia:
Assume a very low Excess Winter Mortality Rate of 10% (it varies from about 10% to 30% in Europe);
About 1% of the population dies per year in Europe and Russia, or about 8 million deaths out of about 800 million people;
The Excess Winter Mortality of this population is (4 months/8 months) * 10% * 8 million = at least 400,000 Excess Winter Deaths per year – the real number probably exceeds 500,000.
This is an average number of Excess Winter Deaths across Europe and Russia – it varies depending upon flu severity, cold etc.
Many people in Europe, especially older people on pensions, cannot afford to adequately heat their homes so are especially susceptible to illness and death in winter.
The population of North America subject to cold weather is less than half the above and we have much lower energy costs due to fracking of shales to produce cheap natural gas.
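A quick numerical check of that arithmetic, as a minimal sketch using only the figures assumed above (800 million people, roughly 1% annual mortality, a 10% Excess Winter Mortality Rate, and four winter months):

```python
# Sanity check of the Excess Winter Mortality estimate above.
# All figures are the assumptions stated in the comment, not official statistics.

population = 800e6          # Europe + Russia (assumed)
annual_mortality = 0.01     # ~1% of the population dies per year (assumed)
ewm_rate = 0.10             # assumed Excess Winter Mortality Rate (10%)
winter_fraction = 4 / 8     # 4 winter months vs 8 non-winter months

annual_deaths = population * annual_mortality                 # ~8 million
excess_winter_deaths = winter_fraction * ewm_rate * annual_deaths

print(f"Annual deaths: {annual_deaths:,.0f}")
print(f"Estimated excess winter deaths: {excess_winter_deaths:,.0f}")  # ~400,000
```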
I hope I’ve slipped a decimal or two – these numbers seem daunting. However, if I am correct then here are a few suggested conclusions:
1. Winter weather forecasts matter because people can be forewarned and prepared for a cold winter, or they can be misinformed and unprepared.
2. Government organizations that frequently get their winter forecasts wrong (especially due to a warming bias) are doing a great disservice to their citizens.
3. Even a small percentage increase in Excess Winter Mortality means an increase of tens of thousands of winter deaths that may have been preventable if people were properly forewarned.
4. Excess Winter Mortality particularly strikes down the elderly and the poor.
5. Cheap abundant energy is the lifeblood of modern society, and green activists and politicians who have driven up the cost of energy have done a great disservice to their citizens, especially the elderly.
I expect that global temperatures will start to cool within a decade or less, and Excess Winter Mortality rates will increase. That will put an end to global warming mania, after trillions of dollars have been squandered and many lives lost. As usual, I hope to be wrong.
Best wishes to all, Allan

richard verney
November 14, 2014 1:46 am

I have not read the comments, and I suspect that others have noted this, but there will be no golden age in the near future, because the past database has become so horribly corrupted by endless adjustments that it will be impossible to properly tune the computer and thereby properly initialise it.
Any computer that is tuned to such past data is bound to give up garbage simply because of GIGO.
Further to Reg Nelson's comment, it is not simply a question of the number of squares and the number of pieces; it is also the different potential moves that each piece can make. Since we do not know the precise constituents, nor the upper and lower bounds, of each and every component covered by the catch-all expression 'natural variation', there is no prospect that in the near future anything worthwhile will be output by these glorified games machines.

whiten
November 14, 2014 1:57 am

Again, a proof of AGW clutching at straws.
Again, a model simulation run and its projections are claimed to show that climate models at high resolution project close enough to reality, and that therefore climate models in principle and in general should still be considered correct about AGW projections.
One problem, though: this high-resolution model simulation is not a proper climate model simulation per se.
It is simply a GW model simulation, as all climate models are simply run as AGW models of climate.
Considering that the period 1979 to 2005 was a GW period, it is no wonder, and not unexpected, that such a high-resolution model simulation could project close enough to reality.
The projections in this case will be projections of the GW impact on the weather.
The main problem with the climate models is that they are actually run as AGW models of climate, and these models are not able to project a close-to-reality climate after the 2004 point; they run too hot for comfort.
The other problem is that some, not to say many, "climatologists" still try to impose AGW as a certainty through the misconception that climate and GW are always one and the same, that there is only one possibility or only one climate outcome, a GW or an AGW one.
So a GW or AGW model simulation is considered a proper climate model simulation simply because its projections come close enough to reality for a GW period; i.e. a GW or AGW simulation that is good enough for a given GW period should by default be considered a proper and unquestionable climate model, and therefore AGW must still be considered a certainty.
To me that is high-resolution cherry-picking in support of a "scientific" obsession with a false certainty of AGW.
Cheers

Jbird
November 14, 2014 2:20 am

GIGO.

michael hart
November 14, 2014 2:49 am

It is the Golden Age of funding for models.

Doubting Rich
November 14, 2014 3:18 am

The thing about golden ages is not just that they end, but that the end usually comes about due to corruption or paucity in the underlying philosophy that was always there. In other words, the golden age was an illusion.

Bellman
November 14, 2014 4:28 am

A Computer Model is a Computer Game. A Computer Model is a Computer Game. A Computer Model is a Computer Game. There, I've said it thrice. What I tell you three times is true.

gunsmithkat
November 14, 2014 6:37 am

Super-fast BS in and super-fast BS out, or simply GIGO. There simply aren't enough data points, and never will be, to accurately predict the weather or climate.

beng
November 14, 2014 6:58 am

The golden age of climate science models….
No, it’s the golden age of academic grants.

Dawtgtomis
November 14, 2014 10:08 am

(Quote) “A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,”
That sounds simple enough, but as a layman observer of this whole thing, I get the impression that it could be a while before we have a good enough understanding of cloud formation and behavior to properly apply their effects in climate models. Even then, how does one apply the more random solar events that influence cosmic radiation’s cloud forming ability? There appear (from common sense & observation) to be many more factors that must be loaded into the “crystal ball” before the chanting is started.
I’m looking forward to Allen’s promised posting on cloud formation.

Dawtgtomis
Reply to  Dawtgtomis
November 15, 2014 10:25 am

I’m sorry, Anthony, for calling you Allen. I’m lousy at name retention.

Reply to  Dawtgtomis
November 15, 2014 12:17 pm

We need more basic research on things like clouds, as you suggest; however, the hope that this can then be incorporated into models is a waste of money and time at this point. There is so little we know, and most of their formulas are not proven facts. So it all comes down to guessing and hoping a result looks reasonable, but someone has to take the big picture and realize that any attempt to do modeling over a hundred years, when we don't have precise data to start with, means guessing at the interactions and the couplings, and missing entire data sets in some cases. They talk about a 25 km square area. We have three or four weather stations in areas like Antarctica that represent 10–20% of the globe. I can't believe we've paid billions for these people to play with computers and guess and guess and guess. Ugh, can we please have someone cut them off and get back to science.

GlynnMhor
November 14, 2014 10:28 am

Unless the underlying assumptions, algorithms, and inputs are fixed, the models will still not be a valid representation of the real world.
Faster supercomputers can just yield the wrong results in less time.

mikeishere
November 14, 2014 11:42 am

An analog computer would produce results at the speed of light. The precision of the input data is very low to begin with and there is no need for high precision results anyway. Is there really any point in predicting global temperature to anything more precise than a ~tenth of a degree?

Reply to  mikeishere
November 15, 2014 12:10 pm

If you take imprecise inputs and do a trillion trillion calculations on them, what chance is there that the result is at all "predictive"? What they do is constrain the calculations so they produce what "looks like weather." They used to have problems where temperatures would plummet to −200 °C in some places, etc… No doubt they've put in all kinds of funny things in the formulas to prevent bizarre results, but this is Tinkertoy play. It has nothing to do with reality.

Jeff F
November 14, 2014 2:30 pm

97.2873576% Consensus, in a snap!

Dave Dodd
November 14, 2014 3:00 pm

Just more bad science, REALLY, REALLY, REALLY fast!!

Ed Zuiderwijk
November 15, 2014 2:52 am

Current models: rubbish in, rubbish out.
Future models: even more sophisticated rubbish in, rubbish out.
Advice to the modellers: take time off modelling and coding and think.

Reply to  Ed Zuiderwijk
November 15, 2014 12:06 pm

I couldn't agree more, but thinking won't help. We need scientists to do basic research in this subject, not more models or even model thinking. We are spending billions on models that have no chance of being valid because they are all based on completely unproven assumptions. I can't believe we pay for this.

November 15, 2014 12:03 pm

Seriously, is this a joke? The term golden age is funny. Does anybody else see the humor in how ridiculous their statements are: "concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms." So would a colder world, and I bet that if they ran the models with no change they would show increasing storms in some places and decreasing storms in others. I can't believe we are spending money on this. Judith just published a blog post explaining how we don't know enough about the ocean to model anything very well. I believe we need to redirect funding from pointless models to basic science. We need satellites and more buoys and more measurements and more experiments, and fewer models based on guesses.

Jerry Henson
November 17, 2014 6:27 am

"Give me four parameters and I will draw an elephant for you, and with five, I will have him raise his trunk."
John von Neumann.
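For what it's worth, a minimal illustration of the point behind that quote (not von Neumann's actual elephant construction): with five free coefficients, a polynomial can be forced exactly through five arbitrary points, so a close fit by itself says nothing about the underlying physics. The data values below are made up purely for the demonstration.

```python
import numpy as np

# Five arbitrary "observations" (made-up numbers, purely for illustration).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.3, -0.7, 2.9, 0.4, 5.1])

# A degree-4 polynomial has five free parameters, so it fits the five points exactly.
coeffs = np.polyfit(x, y, deg=4)
fitted = np.polyval(coeffs, x)

print("max fit error:", np.max(np.abs(fitted - y)))  # ~0 (machine precision)
```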

george e. smith
November 21, 2014 12:20 pm

I have a rather simplistic view of computer modeling, since I do it all day long; mostly, these days, of optical systems, both imaging and non-imaging. I actually expect that once I build the real system, it will perform just as the simulations say, to within the range needed for the final product. It hasn't failed to do that yet.
So I’m intrigued by the availability of computer systems with probably one trillion times the computing power that I have to use.
Now science, and physics in particular, is quite fussy about “discrepancies.” We fret over repeatable discrepancies between the output of models, and our observations of the real universe.
Take planetary systems for example; specifically our own system.
A hundred years ago, we had a perfectly good model and theory of how it works as far as basic orbit variables are concerned. It was “Newtonian Dynamics.” Well I’m sure different people called it different things.
One thing Newton’s gravitation and his dynamics told us, was that the perihelion of Mercury (and other planets) would precess about some direction in space.
Trouble was, even then, it did not do so in agreement with Newton’s theory. There were discrepancies.
Specifically, the experimentally observed precession of Mercury's perihelion differed from the theory to the tune of 43 seconds of arc per century. Astronomers were in no doubt that their measurements were accurate and not in agreement with Newton's laws.
Now today, you can set all that orbital motion up on your cell phone, and plot exactly what Newtonian dynamics says is supposed to happen, and you can also plot the real astronomically observed motion.
So if you sit in front of your ipad/ped/pid/pod/pud/whatever, and run that Mercury simulation in fast motion for 100 years, the two positions will diverge by 43 seconds of arc over that 100 years of plotting, which will take five minutes to run on your iphone.
The problem is that the human eye, under the best visual conditions, has an angular resolution limit of about one arcminute, and 43 arcseconds is only about 0.7 arcminute, so after watching 100 years of history you still can't discern the "discrepancy" that everybody fretted about.
Eventually, Albert Einstein waved the whole problem away, and fixed the model that was broke.
So “discrepancies” are a big deal.
So I don’t swallow any claims, that so and so is close enough.
My upper limit on the magnitude of any discrepancy between theory and practice is ZERO.
Now, I still do accept that certain "simplifications" can be made in a theory or model to conveniently get reasonably close to correct, for rough estimation purposes. But when push comes to shove, if reality and theory diverge, they are NOT the same, and that should be kept in mind.
Now when it comes to modeling the climate, we are in luck, because apparently there exists about 165 years of peer-reviewed, carefully measured real data about aspects of earth's climate, and specifically what we tend to call the earth's mean Temperature.
So with recorded, guard-banded data, we do know what the earth's Temperature has been for 165 years, and we have four or five, or more, well-regarded sources of such data, overlapping each other and evidently pretty much in quite good agreement with each other. Two of those, the UAH and RSS satellite data sets, are less long-lived, but desirable in other respects.
So I think we know, within guard bands, what earth's Temperature has been for 165 years.
So enter the megateracomputer gizmo, that these “modelers” are so happy about.
Great, I’m all for it. So here is your task, should you choose to accept it.
Twiddle the knobs on YOUR model of earth’s climate, and run it on your whizzmachine.
Publish the result when your computer model of earth's mean Temperature replicates the 165-year-long data set(s) that we have observational evidence of. Of course I don't expect you to get any closer to the numbers in the data sets than the known error guard bands of those data set numbers.
At that point, we can all agree that “The science is settled”, and you do have a credible model of earth climate.

george e. smith
Reply to  george e. smith
November 21, 2014 1:27 pm

I should add that I do not expect them to reproduce the daily readings, or, for that matter, even the monthly readings that Lord Monckton uses in his monthly stoppage calculation.
A 13-month rolling average, like Dr. Roy does now and then, is fine, or even a five-year rolling average.
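For anyone who wants to try that comparison themselves, here is a minimal sketch of that kind of smoothing. The file name and column names are placeholders, not a real data set; assume a CSV of monthly anomalies with date and anomaly columns.

```python
import pandas as pd

# Hypothetical monthly anomaly series; file and column names are placeholders.
df = pd.read_csv("monthly_anomalies.csv", parse_dates=["date"])
df = df.sort_values("date").set_index("date")

# Centered 13-month rolling mean, the kind of smoothing mentioned above.
df["smooth_13mo"] = df["anomaly"].rolling(window=13, center=True).mean()

# A 5-year (60-month) rolling mean for comparison.
df["smooth_5yr"] = df["anomaly"].rolling(window=60, center=True).mean()

print(df[["anomaly", "smooth_13mo", "smooth_5yr"]].tail())
```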

george e. smith
Reply to  george e. smith
November 21, 2014 4:12 pm

Well, looncraz, of course I mean that we have that record of data which its purveyors assert is in fact a measure of the earth's temperature.
Good or no good, that is the record that purports to tell us we are slowly roasting.
So that IS the record that the modelers should be trying to replicate.

looncraz
Reply to  george e. smith
November 21, 2014 1:43 pm

“we do know what the earth’s Temperature has been for 165 years”
I’d have to say that we really don’t. Even current estimates cover a spectrum of over 3°C. The relative changes between the datasets have rather high agreement, however.
That enables a poor model to perform “well” even if it is a degree or two off. Typical use of statistical misrepresentation, frankly.
A model that draws a straight line bumbling around the mean of the "observed" temperature data reconstructions can be almost as accurate as the best model, depending on how you determine its accuracy. And since the AMO/PDO are such powerful influences, and we know when they happened in the past, models are fed with that information, with assumptions geared to that knowledge, and so forth, such that they will recreate the general features of that period somewhat decently, even if they diverge wildly outside of that range.
It will be a long time before we have demonstrably accurate models.
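A rough sketch of that straight-line point, using synthetic numbers rather than any real reconstruction: a flat line at the mean of a gently trending, noisy series scores an RMSE within roughly a factor of two of a hypothetical model that tracks the trend exactly, which illustrates how much the choice of accuracy metric matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" annual anomalies: a gentle trend plus weather noise.
# All numbers are illustrative only, not a real temperature reconstruction.
years = np.arange(1900, 2000)
trend = 0.007 * (years - years[0])            # ~0.7 C of warming over the century
obs = trend + rng.normal(0.0, 0.15, years.size)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

flat_baseline = np.full_like(obs, obs.mean())  # straight line at the mean
good_model = trend                             # hypothetical model that tracks the trend

print("RMSE, flat line at the mean:   ", round(rmse(flat_baseline, obs), 3))
print("RMSE, model tracking the trend:", round(rmse(good_model, obs), 3))
```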

george e. smith
Reply to  looncraz
November 21, 2014 4:26 pm

Well, you will never have an accurate model if you don't even model the real system. You can't assume a steady-state, continuous 342 W/m² over the earth at all points and at all times, and expect it to conform to a system that goes from around 1360 W/m² maximum at the subsolar point and drops to near zero just 12 hours later.
You cannot simply average the instantaneous variable in a nonlinear system; you always get a wrong answer.
And in the case of a blackbody-like radiator that undergoes a regular periodic temperature cycle with a variation in temperature of perhaps ±5–6% or more, the total radiated energy is always greater than what you calculate from a constant average temperature. And the excess is well in the range of these various "forcings" they keep yakking about.
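A back-of-the-envelope check of that claim, as a sketch assuming a blackbody at a 288 K mean with a ±5% sinusoidal temperature swing (the real surface is not a perfect blackbody, so this only illustrates the size of the averaging effect):

```python
import numpy as np

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T_MEAN = 288.0        # assumed mean temperature, K
AMPLITUDE = 0.05      # assumed +/-5% periodic temperature swing

t = np.linspace(0.0, 2.0 * np.pi, 100000)
temperature = T_MEAN * (1.0 + AMPLITUDE * np.sin(t))

flux_from_mean_T = SIGMA * T_MEAN**4           # ~390 W/m^2
mean_flux = np.mean(SIGMA * temperature**4)    # average of the T^4 emission over the cycle

print("Flux computed from the mean temperature:", round(flux_from_mean_T, 1))
print("Mean flux over the temperature cycle:   ", round(mean_flux, 1))
print("Excess from the nonlinearity:           ", round(mean_flux - flux_from_mean_T, 1))
# The excess (~3 W/m^2 here) is of the same order as commonly cited radiative forcings.
```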
The earth cools faster than Trenberth claims, and it isn’t because it has a higher average temperature.
I tried explaining that, but even Dr. S dismissed it.