Be a closet climate modeler with Climate@home

First there was SETI@home, where you could use spare CPU cycles to look for extraterrestrial signals. Now we have Climate@Home, which will run NASA GISS modeling software via distributed computing, though there's no release date yet. Oxford has already created climateprediction.net running on BOINC. I expect it's a case of "keeping up with the Joneses".

I wonder, though: what's the "carbon footprint" of leaving all those computers running to calculate the effects of the fossil fuels burned to run them? The Met Office supercomputer is already drawing fire for its massive electricity use, so does this simply spread the problem around?

From NASA’s CIO: Creating a Virtual Supercomputer to Model Climate

Climate@Home

NASA will be calling on people worldwide to help determine the accuracy of a computer model that scientists use to predict climate change. The initiative, called “Climate@Home,” is unprecedented in scope. Never before has NASA attempted to recruit so many people to help perform research vital to forecasting the Earth’s climate in the 21st century under a wide range of different situations.

NASA's Earth Science Division (ESD) and Office of the Chief Information Officer (OCIO) have strategically partnered to manage the Climate@Home initiative. This effort will include collaborations among the 10 NASA Centers, the 13 Federal agencies of the USGCRP (United States Global Change Research Program), and several universities and private organizations.

Goddard Space Flight Center (GSFC)’s Robert Cahalan is serving as the project scientist and has assembled an international team of scientists to help set science goals and determine which parameters to run. GSFC’s senior advisor to the CIO, Myra Bambacus, serves as the project manager and will run this initiative.

Participants need no special training to get involved in Climate@Home. All they need is a desktop computer or laptop. Volunteers will be able to download the computer model to run on their computers as a background process whenever the computers are on, but not used to their full capacity.

The climate model that volunteers download is made up of mathematical equations that quantitatively describe how atmospheric temperature, air pressure, winds, water vapor, clouds, precipitation and other factors all respond to the Sun’s heating of the Earth’s surface and atmosphere. Models help predict how the Earth’s climate might respond to small changes in Earth’s ability to absorb sunlight or radiate energy into space.

Scientists traditionally have used supercomputers to test the sensitivity and accuracy of climate models. With these powerful machines, they are able to run millions of calculations, each computing a different scenario or combination of circumstances such as varying levels of chlorine or water vapor in the atmosphere.

Instead of using supercomputers in this effort, NASA is creating a virtual supercomputing network that spreads the data-processing chores across thousands of computers. Such a task-sharing initiative eliminates the need to buy additional supercomputers, which consume enormous amounts of energy, and reduces the “carbon footprint” of running these calculations.

Prior to the initiative’s official roll out in early 2011, the project will be announced in the news. The project website will provide instructions on how to download the models and supporting computer software. The goal is to have recruited tens of thousands of participants by the time the initiative begins. Each participant will run the same model but with certain parameters slightly adjusted. Scientists will examine the resulting sensitivity of the climate predictions to those adjustments, resulting in a better understanding of the parameters that should be studied in the future.

Climate@Home will have the option of using the same high-level architecture developed for “SETI@Home,” a scientific experiment that uses Internet-connected computers in the Search for Extraterrestrial Intelligence program. The initiative also is modeled after a similar project coordinated by the Oxford e-Research Centre in the United Kingdom called Climateprediction.net.

Climate@Home will test the accuracy of a model developed by the Goddard Institute for Space Studies (GISS) in New York, and will serve as a trailblazer to explore the accuracy of other models as well.

==============

h/t to WUWT reader Bruce Foutch
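For readers wondering what "the same model but with certain parameters slightly adjusted" looks like in practice, here is a minimal sketch of a perturbed-parameter ensemble in plain Python. The toy model and the parameter names (co2_sensitivity, cloud_feedback, aerosol_forcing) are hypothetical stand-ins, not anything from GISS or BOINC; only the structure of the exercise is the point.

```python
import itertools

def toy_climate_model(co2_sensitivity, cloud_feedback, aerosol_forcing):
    """Hypothetical stand-in for one model run; returns a single diagnostic
    (say, equilibrium warming in K). Not real physics."""
    return co2_sensitivity * (1.0 + cloud_feedback) - aerosol_forcing

# Assumed baseline values and perturbations that volunteers would be handed.
perturbations = {
    "co2_sensitivity": [2.0, 2.5, 3.0, 3.5],  # K per CO2 doubling (assumed range)
    "cloud_feedback":  [-0.2, 0.0, 0.2],      # dimensionless (assumed range)
    "aerosol_forcing": [0.3, 0.5, 0.7],       # K-equivalent offset (assumed range)
}

# Every combination becomes one "work unit"; each volunteer runs one of them.
work_units = [dict(zip(perturbations, combo))
              for combo in itertools.product(*perturbations.values())]
results = [(wu, toy_climate_model(**wu)) for wu in work_units]

# Scientists then ask: how much does the answer move as each parameter moves?
for name in perturbations:
    means = {}
    for value in perturbations[name]:
        outs = [out for wu, out in results if wu[name] == value]
        means[value] = round(sum(outs) / len(outs), 2)
    print(f"{name}: mean warming by value -> {means}")
```

Climate@Home would hand each combination to a different volunteer PC instead of looping locally; the sensitivity bookkeeping on the returned results is the same either way.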

Chris F
August 25, 2010 4:50 pm

No thanks, think I'll pass on this…

Leon Brozyna
August 25, 2010 4:55 pm

You have got to be kidding!
I’m not letting GISS anywhere near my computer!

bill-tb
August 25, 2010 4:56 pm

It’s a crackpot idea to make people feel good.

pwl
August 25, 2010 4:57 pm

“NASA will be calling on people worldwide to help determine the accuracy of a computer model that scientists use to predict climate change.”
Comparisons with the Objective Reality of Nature are what would determine its accuracy, not people. For some strange reason climate scientists are reluctant to compare their models with what is really happening in Nature. Strange, that.
However, we already know that it’s a wild chicken shoot for no models can accurately predict Nature with her built in randomness.

Keith
August 25, 2010 4:57 pm

The BBC did something similar a while back. I don't remember much coming of it. Anyway, I'm not quite sure how we can "help determine the accuracy of a computer model that scientists use to predict climate change" by, er, running a program on our computers.
You don’t verify a model with another model or abstract number-crunching, but through real-world observation. Not the first time that this basic aspect of the scientific process seems to have been overlooked by those in the field of climat-“ology”.

Chris B
August 25, 2010 5:00 pm

Lots of garbage in………lots of garbage out.

DCC
August 25, 2010 5:05 pm

So what’s the point? Do they think that if I run their models on my computer that I will be more likely to believe the results? Hummph!

Jimash
August 25, 2010 5:16 pm

Seti@home was an extraordinarily unrewarding use of screensaver time.
BOINC was worse since it works in the background and steals cycles you actually might need.
Neither ever gave the slightest hint what it was doing or offered any way to examine any data, like maybe the sound (like in Contact), or the direction in which they were looking, or a star map of where the thing was pointed. Nothing. Might as well be the IRS.

Keith Minto
August 25, 2010 5:20 pm

Climate@home? More like Weather@home.
If past experience is a guide, NASA will be very selective in who they choose.

August 25, 2010 5:27 pm

SETI probably failed because the aliens got a look at what passes for Sol III intelligence, and took a pass.

latitude
August 25, 2010 5:31 pm

We have too many pot smokers in government……….

Bill Illis
August 25, 2010 5:36 pm

Skynet combined with Connolley’s Wikipedia?

pwl
August 25, 2010 5:36 pm

Stephen Wolfram proves with his New Kind of Science that certain systems generate randomness from within the system – no outside randomness needed, no randomness in initial conditions needed. The system itself generates randomness thus making it unpredictable as a first principle of science. Oh, and these systems can be incredibly simple and yet generate inherent randomness from within the systems. Let that sink in.
The implication for climate and weather prediction is that one can’t predict due to the inherent randomness of the Natural systems. No matter how elaborate you make your “climate model” you’ll never be able to predict it with any accuracy.
Wolfram makes his case in his book A New Kind of Science; he presents the key ideas in this video. Enjoy learning. Oh, Rule 30 is one simple system that generates its own randomness. You'll want to read chapter two of the book, http://www.wolframscience.com/nksonline/page-23, for the proof. The entire book is amazing. Half of it is references.
Wolfram’s work has very serious implications for climate science and weather forecasting.

Urederra
August 25, 2010 5:40 pm

There is a folding@home or something like that, where they use your computer to study protein misfolding.
http://folding.stanford.edu/
Out of the three of them (Seti, folding and Climate), folding is the closest to real science.

Kilted Mushroom
August 25, 2010 5:52 pm

Perhaps this idea could be used by skeptical scientists, with no access to super computers, to model some of the missing parameters in the standard models?

Spartacus
August 25, 2010 5:56 pm

How can this be new? I downloaded and ran this screensaver in 2002!!! Downloaded from climateprediction.net. This is very old news. Just an effort from the modellers to gather up lost computing power, such as mine, as time passes…. Don't bother.

noaaprogrammer
August 25, 2010 5:56 pm

This is similar to GIMPS – the Great Internet Mersenne Prime Search.

August 25, 2010 5:56 pm

As long as CO2 is given priority for (alleged) AGW in computer models, they will always be wrong. GI/GO

Rick Bradford
August 25, 2010 6:03 pm

I expect it’s a case of “keeping up with the Joneses”

If you’re referring to the East Anglian Joneses, I think we should be keeping away from them, not up with them….

pkatt
August 25, 2010 6:05 pm

Doh! That's it… just DOH!

wayne
August 25, 2010 6:10 pm

“The climate model that volunteers download is made up of mathematical equations that quantitatively describe how atmospheric temperature, air pressure, winds, water vapor, clouds, precipitation and other factors all respond to the Sun’s heating of the Earth’s surface and atmosphere. Models help predict how the Earth’s climate might respond to small changes in Earth’s ability to absorb sunlight or radiate energy into space.”
What, are they going to have all of these people's computers MEASURING all of these small changes? Models cannot predict when they are faulty to begin with.
How about the low-angle reflection off of the oceans, the ice, and glossy leaves at the edges of this globe as the world does spin? Are they going to measure and allow for that too? How, NASA, exactly how?
It's always the missing and mis-estimated parameters and faulty interpolation equations that get them in the end. Methinks they need 1,000 worlds, with everybody's computers, all 6.5 billion of us, interconnected, and that probably still won't be enough, let alone the missing or faulty measurements.
I have no doubt that they will come out with some world from their simulation (model); it just won't be our Earth.

Aldi
August 25, 2010 6:12 pm

Compare models with more models? I'm guessing a new headline by the communist networks would be something along the lines of… "our models have reached a consensus, and it is worse than what we previously believed was worse than we thought."

Graeme
August 25, 2010 6:22 pm

ooohhh – can I get a computer model which sets water vapour as a negative feedback?

Paul Linsay
August 25, 2010 6:25 pm

I did the ClimatePrediction.net thing. It got withdrawn because about half the time the model would plunge into an Ice Age. Can’t have that, can we?

George E. Smith
August 25, 2010 6:34 pm

So how big are the grants they are offering to become real climate scientists in this endeavor?

George E. Smith
August 25, 2010 6:36 pm

Why not just look at what Mother Gaia does, and be done with it; she always gets the right answer; she has the world’s biggest parallel computing system; with more computing nodes than you can even count.

sky
August 25, 2010 6:52 pm

NASA says: “Scientists traditionally have used supercomputers to test the sensitivity and accuracy of climate models. ”
And here I thought that real-world measurements were necessary for testing model accuracy. Silly me!

Scott
August 25, 2010 7:00 pm

I’m willing to participate in this if they also give me the raw temperature record data to analyze on my own.
-Scott

R. Shearer
August 25, 2010 7:00 pm

Gotta save the NASA mainframes for fantasy football, surfing and porn.

Frank K.
August 25, 2010 7:05 pm

“The climate model that volunteers download is made up of mathematical equations that quantitatively describe how atmospheric temperature, air pressure, winds, water vapor, clouds, precipitation and other factors all respond to the Sun’s heating of the Earth’s surface and atmosphere. Models help predict how the Earth’s climate might respond to small changes in Earth’s ability to absorb sunlight or radiate energy into space.”
Maybe GISS will share with us exactly what those "equations" are. It appears that they don't really know themselves… (and don't really care – too busy blogging, you know).
And what does it say about a code when you have to rely on some stupid scheme such as this to test its "accuracy"… Accuracy compared to what? Hindcasting again? I'd better stop before I say too much…

Frank K.
August 25, 2010 7:26 pm

noaaprogrammer says:
August 25, 2010 at 5:56 pm
“This is similar to GIMPS – the Great Internet Mersenne Prime Search.”
The difference between GIMPS and the NASA climate “modeling” effort is that GIMPS is a well-posed mathematical problem…

Eddie
August 25, 2010 7:34 pm

‘reduces the “carbon footprint” ‘
lol…how does shifting the cycles to a less efficient machine decrease the carbon footprint? FAIL

juanslayton
August 25, 2010 7:34 pm

Does this mean we get a really transparent look at their code? Or will we find an end user agreement saying that we can’t peek…?

August 25, 2010 7:42 pm

I have two Dell 1850s, each with 2 x quad-core 3.1 GHz CPUs, 8 GB RAM, 2 x 15k rpm drives on PERC controllers, and dual fibre channel HBAs, in my garage. There they will sit in pursuit of a worthy cause; this isn't one.
(Wish the ATI 5700 series graphics cards would slot into a 1U chassis)

Lew Skannen
August 25, 2010 8:02 pm

That Grauniad article is funny. I love this line:
"save thousands of lives by accurately modelling a climate-related disaster."
I am totally in favour of running big computers to do modelling… (if the modelling is worthwhile)… but I really love the idea that these guys see themselves as saviours of the universe 24/7.

noaaprogrammer
August 25, 2010 8:06 pm

noaaprogrammer says:
“This is similar to GIMPS – the Great Internet Mersenne Prime Search.”
Frank K. says:
“The difference between GIMPS and the NASA climate “modeling” effort is that GIMPS is a well-posed mathematical problem…”
Yes, I should have said that the distributed processing approach is similar.

kadaka (KD Knoebel)
August 25, 2010 8:14 pm

Is it really that hard for them to get a Beowulf cluster to do this work? They could set one up for less than a climate modeler’s annual salary.
Heck, the US Air Force has their Sony PlayStation 3 supercomputing cluster. NASA could do climate modeling on old X-Boxes picked up at yard sales.
How much does Gavin earn, er, receive in a year? 600 new PS3 consoles?

Ed Caryl
August 25, 2010 8:43 pm

When their models can predict the proverbial butterfly flapping its wings in China and causing rain in Oregon, then I’ll believe.
Where did they get the idea this would produce a smaller carbon footprint? All those individual computers left on all the time will burn 10 times the wattage that the same number of cores in a supercomputer will burn. It will just spread the carbon like peanut butter, all over the world.

Ben
August 25, 2010 8:43 pm

If they released something like this and tested models against KNOWN and measured conditions, and the winner received X, it could be worthwhile.
Like, the people testing it could input their own assumptions into the model and find out whose assumptions work out best against the known temperature record.
But no, we have to waste more CPU cycles on re-learning the CS 101 lesson known as garbage in, garbage out.
These people remind me of my classmates in college, whom I would poke fun at when I would tell them "there are 10 kinds of people in this world… those who understand binary and those who should find a new major."
Of course, I stole that one and re-arranged it for my own purposes, but let's be honest, since when has a model proven reality?
If that were even possible, I could prove the existence of God by inputting a God variable and saying, see, my model works, God exists….

Richard Keen
August 25, 2010 8:43 pm

I tried to let the climate software onto my computer, but my anti-virus stopped it.

jorgekafkazar
August 25, 2010 9:02 pm

My apathy knows no bounds!

August 25, 2010 9:12 pm

Distributed computing. Hmmm.
A little adjustment work with the right software and we could install the correct CO2 feedback value. I'm still busy doing assembler on my '286, so any volunteers?

Graeme
August 25, 2010 9:25 pm

kadaka (KD Knoebel) says:
August 25, 2010 at 8:14 pm
Is it really that hard for them to get a Beowulf cluster to do this work? They could set one up for less than a climate modeler’s annual salary.

But that would be to initiate “Cost Effective Thinking” and that option is greyed out on the Climate Programs menu…
The AGW types never think about real world cost benefit analyses – it gets in the way of their sad little dogmas.

Robert
August 25, 2010 9:43 pm

No computer cycles (although I prefer to call them T-states) on my computer will be used for this.
This and Seti are based on junk science; too many parameters are simply guessed, and those guesses have not been backed up by observational data. And the Drake equation has only a limited set of parameters compared to our climate, yet it cannot produce an accurate prediction of how many planets will have intelligent life.
A while ago a sceptic added another parameter to the equation, the average life span of an average kingdom/republic/state; we have had enough of them in our history, like the Mayans, Egyptians, Roman Empire, etc.
The outcome then was rather disturbing, because it was 0.3 I believe; that means either no intelligent life at all or just one: us! And somehow I doubt the outcome of 1.

UK Sceptic
August 26, 2010 12:05 am

If I want a model based prediction all I have to do is switch on the TV and watch the garbage put out by the Met Office. Meanwhile my computer has better things to do with its processor.

pwl
August 26, 2010 12:12 am

“On two occasions I have been asked [by members of the British Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
– Charles Babbage, The Inventor of the Analytical Engine – the first computer

Strawanzer
August 26, 2010 1:23 am

Maybe you'd better spend your (computational) time on more interesting things: e.g. http://einstein.phys.uwm.edu/

Paul Vaughan
August 26, 2010 1:38 am

spyware?

John Marshall
August 26, 2010 1:49 am

Not another climate model to get it wrong. I have never thought models of chaotic systems were any good or produced any result worth looking at. Both climate and weather are chaotic systems. Need I say more, or even waste the battery power of the good old laptop?

Jane Coles
August 26, 2010 2:51 am

“Each participant will run the same model but with certain parameters slightly adjusted.”
And this participant will disassemble the code and tweak these parameters further — thus helping ‘climate science’ to maintain the fine reputation for data and model integrity that it has earned over the years.

jmrSudbury
August 26, 2010 4:21 am

I have chosen to help out the Milkyway@Home project instead. It is mapping out a “highly accurate” 3D model of the galaxy. More info at http://milkyway.cs.rpi.edu/milkyway/ — John M Reynolds

Ken Hall
August 26, 2010 5:09 am

” Paul Linsay says:
August 25, 2010 at 6:25 pm
I did the ClimatePrediction.net thing. It got withdrawn because about half the time the model would plunge into an Ice Age. Can’t have that, can we?”
That sounds like the closest thing we have to an accurate model.

Enneagram
August 26, 2010 6:06 am

There is a business out there for programmers: make a Climate (w/ optional Warming/Cooling) Model for phones.

Frank K.
August 26, 2010 6:37 am

The main thing for people to understand about this project is that the climate "model" in question is NOT running as a parallel "supercomputer". That is, this project has nothing to do with running one large (high resolution) "model" in parallel, where small portions of the computation are divided among thousands of processors (as is the case with a true parallel code running on Linux and Windows clusters). Rather, this is a case of taking a small, cruddy little computer "model" and using someone's PC to run one or more cases. Software like BOINC basically coordinates sending you the prepackaged executable and input data, your PC runs the code, and the output data is sent back to the coordinating entity.
I don't know what they are thinking in terms of proving "accuracy". Again, accuracy relative to what? And there's no way you're going to show any kind of grid convergence (PCs with a couple of gigs of RAM are way too small) of the "solutions" either, not that they worry about such things at GISS.
This is, unfortunately, yet another boondoggle to waste more taxpayer money…
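To make the round trip Frank K. describes concrete, here is a minimal sketch of the coordinator/volunteer flow. Everything here is a local stand-in (the work queue, the "prepackaged executable", the made-up methane values are all hypothetical); a real BOINC project wraps the same three steps in its own scheduler and client software.

```python
import json
import queue

# 1. The coordinating entity packages runs as work units: just "which inputs to use".
work_queue = queue.Queue()
for run_id, methane_ppb in enumerate([1600, 1700, 1800, 1900, 2000]):  # made-up values
    work_queue.put({"run_id": run_id, "methane_ppb": methane_ppb})

def run_prepackaged_model(params):
    """Stand-in for the prepackaged executable a volunteer's PC would run in the
    background; in reality this is a compiled model, not a one-line formula."""
    return {"run_id": params["run_id"],
            "warming_K": 0.001 * params["methane_ppb"]}  # made-up response

# 2. Each volunteer PC pulls a work unit and runs it when it has spare cycles...
# 3. ...then sends the output back to the coordinator (here, just a local list).
returned_results = []
while not work_queue.empty():
    returned_results.append(run_prepackaged_model(work_queue.get()))

print(json.dumps(returned_results, indent=2))
```

The only climate-specific part is the executable being shipped around; the plumbing itself is generic distributed batch processing, not parallel supercomputing.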

August 26, 2010 6:42 am

BOINC's climateprediction.net is what got me interested in looking into the science behind AGW. After reviewing the scientific evidence, I quickly disposed of climateprediction.net and donated my computer time to Rosetta and World Community Grid. Both of these projects study the molecular structures of diseases (either the disease itself, or how drugs interact with various viruses or bacteria).
Ironically, it is climateprediction.net that led to my eventual discovery of WUWT!
However, don't throw the baby out with the bath water on this one. While SETI@home, climateprediction.net and climate@home may indeed be "junk science", there exists a wealth of worthier causes that you can painlessly help with on the BOINC network. Check out http://boinc.berkeley.edu/projects.php for a short list of projects.

Pascvaks
August 26, 2010 7:05 am

Q – How do you enlist hundreds of millions to your CAUSE and PERSPECTIVE and RELIGION?
A – Make every one of them feel as though they were part of the discovery of something NEW, REVOLUTIONARY, CRITICAL. (Glooooooooooooobal Warrrrrrrrrrrrrrrrming!!!!!!!!!!!!!)

1DandyTroll
August 26, 2010 7:33 am

I never took a fancy to SETI@Home, partly because I didn't want no super weirdo nerds crawling around my computer, but mostly because I couldn't see the whole intelligent aspect of looking for highly intelligent life, i.e. other than monkeys, dolphins, and goldfish, nor the über cleverness of connecting potentially millions of computers digitally communicating with each other to find that highly intelligent life by trying to find patterns in radio waves of an analog nature.
But sure, you guys go ahead and waste more money on an otherwise already too high electricity bill. :p

Sean Peake
August 26, 2010 7:46 am

No way I’m doing that! It will make my computer run too hot and likely burn out the fan.

Mark Nodine
August 26, 2010 7:47 am

Sounds like a great way to find the set of parameters that cooks up the scariest scenario to report to the press.

August 26, 2010 7:52 am

pwl: August 26, 2010 at 12:12 am
“…I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
– Charles Babbage, The Inventor of the Analytical Engine – the first computer

Because people believe that computers make conscious decisions. They have never been forcefully reminded that a computer is only a machine that can add and subtract 1s and 0s faster than they can.

August 26, 2010 9:57 am

Ah, people…
Testing a model for sensitivity to input parameters, EVEN BOGUS input parameters, is STANDARD practice, REQUIRED practice. Whether one is building a model that predicts how a plane will fly (they are "wrong" but we use them) or how a car will drive.
And we often use a hierarchy of models from the simple to the complex, but they all should be tested for sensitivity to input parameters. Like SO:
What happens if I assume that we got levels of methane wrong? I dunno, run the model with 100 different levels… how sensitive is the model to THAT input? Not very sensitive? Or very sensitive? It's the only way you can understand how big models perform.
What they are doing is actually good practice.
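A bare-bones version of the methane exercise described above, using numpy and a hypothetical toy model in place of a real GCM: run the same model 100 times with only one input varied (a made-up methane level and response curve), then look at how much the output moves.

```python
import numpy as np

def toy_model(methane_level, other_forcing=1.0):
    """Hypothetical stand-in for a GCM: the output responds logarithmically to
    this one input. Invented, not real climate physics."""
    return other_forcing + 0.1 * np.log1p(methane_level)

# Run the same model 100 times with only the methane level varied.
levels = np.linspace(0.0, 5.0, 100)               # assumed range for the input
outputs = np.array([toy_model(m) for m in levels])

# Two crude ways to read off "how sensitive is the model to THAT input":
print("output range over the sweep:", round(float(outputs.max() - outputs.min()), 3))
print("local slope near mid-range: ", round(float(np.gradient(outputs, levels)[50]), 3))
```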

M White
August 26, 2010 10:43 am

The BBC beat them to it (carried out in 2006)
http://www.bbc.co.uk/sn/climateexperiment/
I suspect that the results will be similar (Predicted impact on the UK)
http://www.bbc.co.uk/sn/climateexperiment/whattheymean/theuk.shtml

Frank K.
August 26, 2010 11:04 am

Steven Mosher says:
August 26, 2010 at 9:57 am
"What happens if I assume that we got levels of methane wrong? I dunno, run the model with 100 different levels… how sensitive is the model to THAT input? Not very sensitive? Or very sensitive? It's the only way you can understand how big models perform."
"What they are doing is actually good practice."
Well, OK, if it is good practice:
(1) Why haven't they done it already? The model they probably will use is of '80s to early '90s vintage. If it will run on a PC, surely they have had the computing HP to do this themselves, well before even the first IPCC report. By the way, didn't GSFC give these people a new $5 million upgraded computing cluster for this kind of work (using stimulus funding, no less)? Why not use that? (That is, if they can get GISS personnel to stop blogging long enough to work on it…)
(2) Knowing that simple-minded, low-res model A has a certain output behavior for a given input does NOT imply that your "big" model B will behave accordingly. Unless, of course, it's based on the same algorithms, numerical formulations, and coding practices (unlikely, and with GISS unknowable). So, you can't use the results of this exercise to claim that recent numerical solutions from some version of Model E will act similarly; they are two different codes.
And the cost of this boondoggle is NOT zero. The compute cycles may be "free" but there will need to be a lot of work (= money) to organize, check, and publish the findings. We simply DO NOT HAVE THE MONEY ANYMORE…

August 26, 2010 11:20 am

This is a brilliant strategy by NASA.
When the climate prediction numbers start coming in wrong with this distributed computing, NASA can say: “It wasn’t us. It must be a defective laptop out there.”

Dennis R. Cooper
August 26, 2010 11:32 am

I can help. I already have a thermometer in the shade, and the readout is in the bathroom, just under the attic door. The attic temp should be fairly close to the temp they are looking at when they look at my roof and tell everyone we're in global warming.

Dillon Allen
August 26, 2010 11:55 am

Too bad you can’t upload a “reality” virus to GISS via BOINC.

Catweazle
August 26, 2010 12:06 pm

There is only one system in the whole of the known Universe capable of modelling the Earth’s climate.
We live on it.
Anybody that thinks that a model of an open-ended non-linear system with a practically infinite number of feedbacks, most of which we don't even know yet, and even the ones we do know we aren't sure of the sign of (take clouds for example, where the sign alters continuously according to whether it's day or night), is capable of telling us anything whatsoever about the future climate either doesn't know anything about science – especially computer science – and mathematics, or is a plain downright liar.
Or a fool, of course.

kadaka (KD Knoebel)
August 26, 2010 1:19 pm

Garbage IN,
IPCC Fifth Assessment Report-acceptable peer-reviewed literature OUT.
Wait for it, it’ll happen.

1DandyTroll
August 26, 2010 2:45 pm

@Catweazle
‘Anybody’ […] ‘capable of telling us anything whatsoever about the future climate’ […] ‘is a plain downright liar.’
‘Or a fool, of course.’
Sorry I had to edit your statement, but I must say, for being a fool I'm quite happy to tell you anything whatsoever you want to hear, especially for a price! :p

August 26, 2010 6:01 pm

Frank:
"(1) Why haven't they done it already? The model they probably will use is of '80s to early '90s vintage. If it will run on a PC, surely they have had the computing HP to do this themselves, well before even the first IPCC report. By the way, didn't GSFC give these people a new $5 million upgraded computing cluster for this kind of work (using stimulus funding, no less)? Why not use that? (That is, if they can get GISS personnel to stop blogging long enough to work on it…)"
1. Why would you assume that they haven't done it?
2. Why would you assume that sensitivity testing is EVER complete?
3. The cycle hours available in a year of computing are known. They know how many runs they have to do for AR5. You prioritize your use of the assets. So, one can well imagine that they want to do less important sensitivity studies using off-campus cycles.
"(2) Knowing that simple-minded, low-res model A has a certain output behavior for a given input does NOT imply that your "big" model B will behave accordingly. Unless, of course, it's based on the same algorithms, numerical formulations, and coding practices (unlikely, and with GISS unknowable). So, you can't use the results of this exercise to claim that recent numerical solutions from some version of Model E will act similarly; they are two different codes."
Well, you don't understand how hierarchies of models work. There are many ways to benchmark low-res models to high-res. I'll give you one. In determining what level of RCS (radar cross section) a future fighter requires, very often we would use low-res models to BOUND the problem, or bound the sensitivity study space. So for example, we would study very small RCS and then vary that parameter upwards, looking at the response curves (usually for non-linear behavior); then, using a few points off that curve, we would run the high-res models and benchmark.
So for a low-res model we might vary the parameter from 0 to 100,000. And you might find out that the parameter was only sensitive within 0-50. At values beyond 50 you got the same answer.
Well, you take that information and the first thing you do is check that assumption. The threshold is 50. So with high res you might check
0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 20,000, 50,000…
You might have computer time for 100 runs, so you use low res to pick your window. You are not concerned with exact matches in the response value.
So now, hopefully, you get a confirmation of the systematic behavior. Maybe the high res has a threshold at 100 as opposed to 50. And you note that changes from 0 to 25 do nothing!
Basically you use the low-res models to get an idea about the response characteristics of various input parameters. That reduces the parameter space you have to look at. So next time, with 100 runs to split up in the parameter space, you might do 0, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 35, 45, 50, 55, ETC., and you expand the number of high-fidelity runs you do around the past break points.
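A sketch of the two-stage approach outlined above, with the "low-res" and "high-res" models, the thresholds, and the run budget all invented for illustration (the numbers are chosen to mirror the 50-versus-100 example): sweep the cheap model over the whole range to find where the response stops changing, then spend the limited expensive runs around that break point.

```python
import numpy as np

def low_res_model(x):
    """Cheap stand-in: response grows linearly, then flattens at an assumed threshold of 50."""
    return np.minimum(x, 50.0) / 50.0

def high_res_model(x):
    """Expensive stand-in: same shape, but its threshold turns out to be 100."""
    return np.minimum(x, 100.0) / 100.0

# Stage 1: bound the problem with many cheap runs over a huge parameter range.
coarse_x = np.linspace(0.0, 100_000.0, 2001)          # spacing of 50
coarse_y = low_res_model(coarse_x)
low_res_break = coarse_x[coarse_y >= 0.99 * coarse_y.max()][0]

# Stage 2: spend the limited high-res budget densely around that break point,
# plus a few sparse checks far beyond it, instead of evenly over 0..100,000.
dense = np.linspace(0.0, 2.0 * low_res_break, 80)
sparse = low_res_break * np.array([5.0, 10.0, 100.0, 1000.0])
budget = np.concatenate([dense, sparse])
high_y = high_res_model(budget)
high_res_break = budget[high_y >= 0.99 * high_y.max()][0]

print(f"low-res break point ~ {low_res_break:.0f}, "
      f"high-res break point ~ {high_res_break:.0f} "
      f"(found with only {budget.size} expensive runs)")
```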

Frank K.
August 26, 2010 7:38 pm

Steven Mosher says:
August 26, 2010 at 6:01 pm
Steve – I’ve been studying and implementing computational fluid dynamics codes for well over 20 years. I understand quite well how hierarchies of models work.
With all due respect, your example, I believe, is not valid in this case, as we're talking about systems of non-linear, coupled partial differential equations. The formulations, algorithms, numerical implementation, parameterizations, and coding are all crucial to the final result. The fact of the matter is we're talking about using PCs to examine the sensitivity of a small, crude, non-linear climate model, the internal workings of which bear little resemblance to the large, complex models in use today.
Here’s an example for you. Let’s say we look at external flow over an obstacle at a Reynolds number of 40,000. A crude model will be quite stable (dare I say robust?) and in fact yield a steady solution. Of course, as you start resolving the wake (with a finer mesh) you find that the solution no longer converges to steady-state, and in fact is highly unsteady. As a result, important quantities such as mean lift and drag coefficients can be way off the mark in the crude model.
And the above example is just a “simple” fluid dynamics problem! What if we now add cloud thermodynamics, tracers, radiation models, coupled ocean models, ice models, etc., and then attempt to integrate this complex, non-linear system over years and years of computational time? No one even knows if this problem is well-posed mathematically.
This current effort by NASA is just a colossal waste of time and money – money we simply do NOT have. The results will only be valid for the crude model on which the tests were conducted, and will be even more worthless given that I doubt we will even get to know what differential equations are being solved! Unless, of course, NASA decides to finally document their codes properly… I'm not holding my breath…

Agile Aspect
August 26, 2010 7:41 pm

SETI and CAGW have a lot in common – they’re both junk science.
That is, they formulate their theories in terms of parameters which can't be physically measured (the theories can't be falsified in the sense of Popper).
In fact, it can be argued that SETI sanitised junk science for public consumption.

E.M.Smith
Editor
August 26, 2010 10:57 pm

More ‘computes’ will never repair bad data and lousy programs.
The idea that more computes will improve their accuracy is fundamentally flawed. It may improve their precision, but not the accuracy. Better data and better code is needed for that.
In fact, the improvement in total “computes” from improved algorithms is more than that from improved hardware. The necessary corollary of that is that BAD algorithms can consume all available improvement in hardware, and then some, while producing nothing of value.
More hardware is only justified AFTER all the software and techniques questions are fixed.
With that said: I do LOVE Big Iron. I've managed a supercomputer center before and it is 'way cool'… One of my favorite machines was the "Stonesouper" computer, a Beowulf Cluster made from cast-off PCs. It inspired me to make my own little wulf from a half dozen boxes, one of which became the box I ran GIStemp on for the first time. I'd expected to need a lot of power, so I did the port to one node of the cluster so I could add however many nodes it might need. I was very surprised to find that ONE was more than enough. That box is now 'semi-retired' and I've got GIStemp running on a much newer HP Vectra now.
http://www.extremelinux.info/stonesoup/
http://chiefio.wordpress.com/2010/08/09/obsolescent-but-not-obsolete/
FWIW there are a variety of problem types characterized by how much the computation can be compartmentalized and how much communications must happen between the parts. The very highly distributed computing on platforms connected by very slow networks only really works well on highly modular problems. This implies their ‘model’ has a lot of disjoint cells that do not interact much with each other. Doesn’t sound like a really great approach to a climate model to me.
For an 8,000 cell model of the world it would be better to have an 8,000 “core” machine with fast interconnects than to have 8000 widely scattered desktops. Such machines are fairly common. (A 2000 CPU machine made with ‘quad core’ processors would do it) and would let the models run much much faster AND with much more interaction between the “cells”. I’d expect that with the interactions between the elements of weather one would need such interactions to have a valid model. If you wanted 16,000 cells, you could go to more processors, or just more memory and run the two groups in alternate time slots.
One Large CPU does not work as well for problems with lots of small cells. It works better for large arrays of similar data. One could probably code the models to work with large arrays and run a giant mondo processor on it, but that would not be my first choice of approach.
Bottom line, though, is that if they are going for the highly distributed, time asynchronous, slow communications channel approach to modeling, there will either be fairly small interaction between the cells (certainly small iterative interactions) OR they are using a very sub-optimal choice of hardware.
You could get out of that box by handing over a grid of, say, 64 cells and computing all their internal interactions and feedbacks, then “stitching it together” at the margins with the ‘neighbor’ 64 cell blocks and calculating their interactions; but that would be a bit of a pain (and less interesting than having a simultaneous solution for all the cells.) Basically: Yeah, you could make it go; but it would not be the best way to do it.
SETI works because each work unit is entirely independent of the neighbor work unit, so all computes can be done independently. A space alien in Alpha Centauri has little to do with one in Betelgeuse, so low computer inter-cell communications is fine (as there is zero inter cell dependency) But a change of ocean temps in Hawaii DOES have an impact on cells in Siberia, so the cells ought to have a fair amount of interprocess communications.
Somehow I smell a PR stunt more than a technical solution optimization.
Then again, it can be a very cost effective solution even if it is a poor one technically.
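To make Smith's "stitching it together at the margins" idea concrete, here is a toy halo-exchange sketch: a 1-D diffusion grid split into blocks, where every block needs its neighbours' edge values at every step. The grid, the physics, and the block count are all invented; the point is how much per-step inter-block communication a cell-based model demands, which is exactly what a slow volunteer network is worst at.

```python
import numpy as np

# A toy 1-D "climate" field split into blocks, one block per worker.
N_CELLS, N_BLOCKS, ALPHA = 64, 4, 0.25           # all values assumed for illustration
field = np.zeros(N_CELLS)
field[N_CELLS // 2] = 1.0                        # a single warm cell in the middle
blocks = np.split(field, N_BLOCKS)

def diffuse_block(block, left_halo, right_halo, alpha=ALPHA):
    """Advance one block one diffusion step, using the neighbours' edge cells."""
    padded = np.concatenate(([left_halo], block, [right_halo]))
    return block + alpha * (padded[:-2] - 2.0 * block + padded[2:])

for step in range(100):
    # "Stitch at the margins": every step, each block needs its neighbours' edges.
    new_blocks = []
    for i, block in enumerate(blocks):
        left = blocks[i - 1][-1] if i > 0 else block[0]             # insulated ends
        right = blocks[i + 1][0] if i < N_BLOCKS - 1 else block[-1]
        new_blocks.append(diffuse_block(block, left, right))
    blocks = new_blocks

print("heat has spread across block boundaries:",
      [round(float(b.sum()), 3) for b in blocks])
```

Run on one machine this exchange is a memory copy; run across thousands of volunteer PCs it becomes a network round trip per block per step, which is the communication cost Smith is pointing at.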

E.M.Smith
Editor
August 26, 2010 11:35 pm

Frank K. says: No one even knows if this problem is well-posed mathematically.
It isn’t.
http://chiefio.wordpress.com/2010/07/17/derivative-of-integral-chaos-is-agw/
The models run off input data that are insufficient (Nyquist and quality both), turned into results that are meaningless via temperature creation series such as GIStemp; then the models turn that into fantasies based on non-convergent math by playing forward a time series that diverges over time.
I’ve tried to get more folks interested in the math of the issue (not the arithmetic) but most folks just don’t care about the math issues. But they make the whole process just an exercise in dancing in the error bands of computer fantasies.
Oh, and per the idea of all the sensitivity testing being useful: it is, IFF you know what the correct value is supposed to be. If you presume CO2 ought to be a strong positive feedback when it isn't, doing sensitivity testing to find that yes, you got CO2 set to High Gain is not assessing the accuracy of the model, just the precision with which you have got the assumptions wrong.
FIRST get the models in touch with reality, THEN worry about how sensitive they are.

Robert
August 27, 2010 3:29 am

Agile Aspect says:
August 26, 2010 at 7:41 pm
SETI and CAGW have a lot in common – they're both junk science.
That is, they formulate their theories in terms of parameters which can't be physically measured (the theories can't be falsified in the sense of Popper).
In fact, it can be argued that SETI sanitised junk science for public consumption.

The Drake equation is junk science since it has a couple of parameters that are not known and at the moment can't be measured at all. Some can be measured, or we are in the process of measuring them, like the exoplanets; in the last 15 years we discovered about 500 of them, and the next 15 years will give us a lot more. Still, many see the outcome of the Drake equation as a fact.
SETI may look like junk science, but in fact the answer is still out there. With the Drake equation in hand, people claimed that the Milky Way must be full of intelligent life, but we have yet to find it, so at the moment it does not look good for intelligent life out there.
Does that mean that there is no intelligent life out there? We simply don't know. Even if we knew all the variables of the Drake equation, even the statistics on the possibility of intelligent life arising on a planet, we would still have to find proof that aliens are out there; ET has to phone our home.

Frank K.
August 27, 2010 6:59 am

Steven Mosher says:
August 26, 2010 at 10:30 pm
Interesting link, Steve. Thanks.
Here’s the part of the abstract that caught my interest:
“Firstly, climate model biases are still substantial, and may well be systemically related to the use of deterministic bulk-formula closure – this is an area where a much better basic understanding is needed. Secondly, deterministically formulated climate models are incapable of predicting the uncertainty in their predictions; and yet this is a crucially important prognostic variable for societal applications.”
Translation – we really don't have a handle on how to model the physics in our GCMs using simple parameterizations, and, in any case, we won't be able to determine whether the answer is correct or not (which is important before we start doing insane things like getting rid of all fossil fuels for power generation and transportation).
All the more reason to abandon this wasteful, unnecessary NASA project…

August 27, 2010 8:48 am

Frank, you actually have to watch the video:
“Secondly, deterministically formulated climate models are incapable of predicting the uncertainty in their predictions; and yet this is a crucially important prognostic variable for societal applications.”
If you watch the video you will note how the type of experiment suggested here fits into stochastic modelling of physical processes. You'd even learn about "probabilistic chips". The problem of parameterization (see the discussion of Navier-Stokes) doesn't stop folks from building useful things that work. The issue is that up to now there are really two camps of modellers. Palmer is suggesting a third path. If you run a deterministic ab initio model, you will get the same answer every time, and that answer will be wrong. As it is practiced, there are over 20 models run this way and the ensemble mean is taken. That mean is more skillful than any one model. Palmer is suggesting that if you introduce stochastic methods to the problem of bulk parameterizations, you will get a model that is non-deterministic (provides uncertainty) AND it outperforms the ensemble mean. Which means instead of 20 different models modelling the physics in slightly different ways, you get ONE MODEL, run many different times, that is more skillful. This would be a huge change, because it would mean that the research community wouldn't do some wasteful things… (like build yet another GCM – YAGCM)
The GENERALIZED rants against models are a sign of intellectual laziness. All of physics is models. When those models (equations) do very, very, very well, we call them laws.
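For what it's worth, a toy version of the contrast described above (many deterministic models averaged, versus one stochastically perturbed model run as an ensemble), with entirely made-up dynamics standing in for the real parameterizations:

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_model(feedback, noise=None, steps=50):
    """Toy 'climate': temperature nudged each step by a forcing scaled by a bulk
    'feedback' parameter, with damping. Invented dynamics, structure only."""
    t = 0.0
    for _ in range(steps):
        f = feedback if noise is None else feedback * (1.0 + next(noise))
        t += 0.02 * (1.0 + f) - 0.01 * t
    return t

# Status quo: ~20 separately tuned deterministic models, one answer each,
# and the ensemble mean of those answers is taken.
deterministic_answers = [simple_model(f) for f in np.linspace(0.2, 0.6, 20)]
ensemble_mean = np.mean(deterministic_answers)

# The suggested third path: ONE model, run many times, with its bulk parameter
# stochastically perturbed, giving a distribution (an uncertainty estimate).
def perturbations():
    while True:
        yield rng.normal(0.0, 0.3)

stochastic_answers = [simple_model(0.4, noise=perturbations()) for _ in range(200)]

print(f"multi-model ensemble mean: {ensemble_mean:.3f}")
print(f"stochastic single model:   {np.mean(stochastic_answers):.3f} "
      f"+/- {np.std(stochastic_answers):.3f}")
```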

August 27, 2010 9:15 am

"That is, they formulate their theories in terms of parameters which can't be physically measured (the theories can't be falsified in the sense of Popper)."
1. That's not true. The parameterizations are typically not about things that can't be measured. They are bulk representations of processes that can be measured and are measured. But the finer-grained implementations can't be run, say, due to compute power.
2. The theories CAN be 'falsified' (see below for what that really means) and ARE. When you do a bulk parameterization and get the 'wrong' answer, that tells you something's 'wrong'. But more to the point, there is no such thing as Popperian falsification, as Popper himself came to realize. Whenever a theory (a statement with the purity of math) comes into conflict with data (which is ALWAYS), you have these choices:
1. Reject the data (an experiment that questioned F=MA would likely not get attention)
2. Modify the theory, especially if it got many other things right
3. Ignore the theory, especially if it wasn't well connected to other science
4. Claim that the disagreement (if small) between the theory and the data was "error" or "uncertainty" and say that the theory was "verified"
Data doesn't prove theories 'false' any more than it proves them 'true'. DATA renders theories more or less useful for some purpose. At one extreme some are so useful and so skillful that we call them 'true.' At the other extreme they are so useless that we call them 'false.' If we observe how people actually respond to mismatches between data and theory and catalog the responses, we see that they do one of the 4 above. But from a logical standpoint, data cannot prove (in a logical, mathematical sense) a theory TRUE, and can't prove (in a logical sense) that it is 'wrong.'
You also need to understand what is meant by falsifiable in principle.
Start here:
http://en.wikipedia.org/wiki/Falsifiability
Read Quine.

Frank K.
August 27, 2010 10:12 am

“Frank, you actually have to watch the video:”
OK. I watched the video (it’s an hour long!).
The probabilistic computer chip idea sounded very interesting. I'll have to read up on that.
As for the rest of it, it was somewhat interesting (boring in spots), and no, I see NO resemblance between what was discussed in the video and the wasteful, uninteresting, and useless NASA project being discussed here. The toy model NASA wants to use is nowhere near the complexity and behavior of modern climate models. Again, I return to my example of the coarse-grid Navier-Stokes solution. I can perturb the dynamic viscosity, the freestream velocity, etc. all day long and it will tell me nothing of the true behavior of the system, because the model is not adequately representing the true physics.
By the way, take a look at the slide that is presented at 31:15. There, Prof. Palmer shows how much spread there is in the climate models’ predictions of global temperature when you look at the absolute temperatures rather than anomalies. Where have I heard that one before…
If our economy were in better shape and our government weren’t so burdened with massive debt, I would perhaps be more open to these kinds of projects. But we DON’T HAVE THE MONEY FOR THIS JUNK. Of course, if they really think this is important, NASA should consider canceling other projects to free up money for this…

John C
August 27, 2010 12:41 pm

I always find it funny that the anti-GW crowd chimes in with quips about "Nature being random" and "models are useless" – yet physical models are used every day. We fly planes built in simulation. We have more accurate forecasting than decades ago because of modeling.
Some of you may say, "The weatherman is always wrong." There's a big difference between micro and mesocyclonic events. Modeling of Rossby waves tracks temperature variations at the planetary surface pretty closely, and trends can be forecast several weeks in advance.
Is modeling perfect? No. However, I disagree vehemently with suggestions that models are no better than a random guess. This is absolutely not true, and it is a slap in the face of the thousands of meteorologists and climatologists who work day in and day out to save people from floods, tornadoes, and other natural disasters.
To me one thing is absolutely certain about Global Warming: nobody really knows the absolute truth. We can only base the future on current understanding of physics and some large assumptions. But at the end of the day, am I going to trust people that proclaim they know the truth without a single shred of doubt? If you're truly a scientist, there's always room for doubt. I notice that the most ardent anti-Global Warming folks like to focus on little inconsistencies, yet they gloss over the majority of scientific research that gives data contrary to their beliefs.