Modeling sunspots during times when few are seen

(h/t to Michael Ronayne)

NCAR

Sunspots Revealed in Striking Detail by Supercomputers

BOULDER—In a breakthrough that will help scientists unlock mysteries of the Sun and its impacts on Earth, an international team of scientists led by the National Center for Atmospheric Research (NCAR) has created the first-ever comprehensive computer model of sunspots. The resulting visuals capture both scientific detail and remarkable beauty.

[Image: flower-like shape; dark center, bright petals]
The interface between a sunspot's umbra (dark center) and penumbra (lighter outer region) shows a complex structure with narrow, almost horizontal (lighter to white) filaments embedded in a background having a more vertical (darker to black) magnetic field. Farther out, extended patches of horizontal field dominate. For the first time, NCAR scientists and colleagues have modeled this complex structure in a comprehensive 3D computer simulation, giving scientists their first glimpse below the visible surface to understand the underlying physical processes.

The high-resolution simulations of sunspot pairs open the way for researchers to learn more about the vast mysterious dark patches on the Sun’s surface. Sunspots are the most striking manifestations of solar magnetism on the solar surface, and they are associated with massive ejections of charged plasma that can cause geomagnetic storms and disrupt communications and navigational systems. They also contribute to variations in overall solar output, which can affect weather on Earth and exert a subtle influence on climate patterns.

The research, by scientists at NCAR and the Max Planck Institute for Solar System Research (MPS) in Germany, is being published this week in Science Express.

“This is the first time we have a model of an entire sunspot,” says lead author Matthias Rempel, a scientist at NCAR’s High Altitude Observatory. “If you want to understand all the drivers of Earth’s atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the Sun as well as connections between solar output and Earth’s atmosphere.”

Ever since outward flows from the center of sunspots were discovered 100 years ago, scientists have worked toward explaining the complex structure of sunspots, whose number waxes and wanes over the 11-year solar cycle. Sunspots encompass intense magnetic activity that is associated with solar flares and massive ejections of plasma that can buffet Earth’s atmosphere. The resulting damage to power grids, satellites, and other sensitive technological systems takes an economic toll on a rising number of industries.

Creating such detailed simulations would not have been possible even as recently as a few years ago, before the latest generation of supercomputers and a growing array of instruments to observe the Sun. Partly because of such new technology, scientists have made advances in solving the equations that describe the physics of solar processes.

The work was supported by the National Science Foundation, NCAR’s sponsor. The research team improved a computer model, developed at MPS, that built upon numerical codes for magnetized fluids that had been created at the University of Chicago.

Computer model provides a unified physical explanation

The new computer models capture pairs of sunspots with opposite polarity. In striking detail, they reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions. They also capture the convective flow and movement of energy that underlie the sunspots, and that are not directly detectable by instruments.

The models suggest that the magnetic fields within sunspots need to be inclined in certain directions in order to create such complex structures. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that is the consequence of convection in a magnetic field with varying properties.

The simulations can help scientists decipher the mysterious, subsurface forces in the Sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.

Supercomputing at 76 trillion calculations per second

To create the model, the research team designed a virtual, three-dimensional domain that simulates an area on the Sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth – an expanse as long as eight times Earth’s diameter and as deep as Earth’s radius. The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR’s new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second.
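As a rough plausibility check on those numbers (not part of the press release), a few lines of Python reproduce the quoted point count from the stated domain size, assuming roughly 20-mile horizontal and 10-mile vertical spacing, both within the stated 10-to-20-mile range:

```python
# Plausibility check of the domain described above (assumed spacings, not published values):
# ~20 miles between points horizontally, ~10 miles vertically -- both inside the
# "10 to 20 miles apart" range quoted in the press release.
domain_x_mi, domain_y_mi, depth_mi = 31_000, 62_000, 3_700

dx_horizontal_mi = 20.0   # assumed horizontal spacing
dz_vertical_mi = 10.0     # assumed vertical spacing

nx = round(domain_x_mi / dx_horizontal_mi)
ny = round(domain_y_mi / dx_horizontal_mi)
nz = round(depth_mi / dz_vertical_mi)

total_points = nx * ny * nz
print(f"{nx} x {ny} x {nz} = {total_points / 1e9:.1f} billion grid points")
# 1550 x 3100 x 370 = 1.8 billion, consistent with the figure quoted above.
```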

The work drew on increasingly detailed observations from a network of ground- and space-based instruments to verify that the model captured sunspots realistically.

The new models are far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region. The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. They can refine the model by placing the grid points even closer together, but that would require more computing power than is currently available.

“Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the Sun,” says Michael Knölker, director of NCAR’s High Altitude Observatory and a co-author of the paper. “With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the Sun’s surface.”


First view of what goes on below the surface of sunspots. Lighter/brighter colors indicate stronger magnetic field strength in this subsurface cross section of two sunspots. For the first time, NCAR scientists and colleagues have modeled this complex structure in a comprehensive 3D computer simulation, giving scientists their first glimpse below the visible surface to understand the underlying physical processes. This image has been cropped horizontally for display. (©UCAR, image courtesy Matthias Rempel, NCAR)

See a video animation of this and other sunspot visualizations as well as still “photo” images in the Sunspots Multimedia Gallery.

Leif Svalgaard
June 21, 2009 10:45 am

“With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the Sun’s surface.”
A bit of hype here: "everything". As with all models, the result depends on what you put in. Models do not create new knowledge, but help visualize existing knowledge, in combination with assumptions about what you don’t know yet. If the model does not quite explain "everything", you can try varying your assumptions to see what might have to be changed in a way consistent with the equations that you have posited from the outset. A persistent problem in solar magnetic modeling is that as you go to finer and finer scales, the less accurate the model becomes because of lack of computer power, while at the same time telling us that the most interesting things [and determining factors] happen at still finer scales.

anna v
June 21, 2009 10:52 am

To create the model, the research team designed a virtual, three-dimensional domain that simulates an area on the Sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth – an expanse as long as eight times Earth’s diameter and as deep as Earth’s radius. The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR’s new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second.
Oh dear.

D. King
June 21, 2009 11:15 am

involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction
Sounds good.
and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse,
Uh oh.

rbateman
June 21, 2009 11:17 am

while at the same time telling us that the most interesting things [and determining factors] happen at still finer scales.
That’s an eye-opener, Leif.

Mike D.
June 21, 2009 11:29 am

Maybe next they can simulate the Cubs winning the pennant.
Isn’t virtual reality wonderful? If I had a supercomputer there’s no telling what I would simulate. My first choice would be a rational world, but that’s pretty farfetched. Maybe I’d just start off with a rational neighborhood.
Thank goodness somebody is engaged in reality-based activities, such as farming, forestry, and that boring food-clothing-shelter stuff. Otherwise the Great Philosophers of Science would go hungry, naked, and wandering aimlessly in the snow.

timetochooseagain
June 21, 2009 11:31 am

Computers are the new experiments. God help the scientific method….

AnonyMoose
June 21, 2009 11:37 am

I’ll call this a good start.

crosspatch
June 21, 2009 11:39 am

Sounds like someone desperate to justify funding. Something looks oddly too symmetrical about it to me.

KimW
June 21, 2009 11:42 am

So the model just shows what we already know or infer and cannot show anything we do not. Too bad if there is anything new and unknown.
Some people might confuse this with an actual “Experiment” with real observations.

Kath
June 21, 2009 11:45 am

It doesn’t clearly explain some of the structures I’ve seen in photographs of a sunspot, but it’s a start. In my branch of the sciences, computer models have to be supported by experiments and verified.

JFA in Montreal
June 21, 2009 11:51 am

Garbage In – Garbage Out …

anna v
June 21, 2009 11:56 am

timetochooseagain (11:31:14) :
Computers are the new experiments. God help the scientific method….
When I was a graduate student back in the 1960’s, I remember having strong arguments with a computer PhD freshly arrived from England, who was adamant in maintaining that pretty soon there would be no need for experiments like the CERN experiments because computers could simulate all experiments.
Not much has changed since then in people mesmerized by computing, except the power of computers, since my laptop is now more powerful than the “supercomputer” of the time.

Douglas DC
June 21, 2009 12:05 pm

Why am I thinking of “The Hitchhiker’s Guide to the Galaxy” and its “Deep Thought” supercomputer?

MikeN
June 21, 2009 12:08 pm

Well the model is predicting a comeback for the sun soon.

pwl
June 21, 2009 12:18 pm

Pass the grains of salt please, for we need a dose… Models are excellent, as another above said, for learning and for putting our knowledge, or lack thereof, to the test to see how accurate it is against Nature. If you fail to test the model against actual objective reality then you’re not doing science; you’re in the special effects business for the next Star Trek film – Journey to the Center of the Sun.
Remember that all the limitations, inaccuracies, misrepresentations, foibles, and possibly even all the conclusions of models that apply to climates on Earth also apply to models of the super-tropical climate on Sol.
Of course if a model of the Sun’s spots was accurate the computer running the model and everything else on Earth would be instantly incinerated. Since to model the sun accurately you’d need to recreate it! Let’s not and say we did. 😉

Dave Wendt
June 21, 2009 12:36 pm

Leif Svalgaard (10:45:28)
A persistent problem in solar magnetic modeling is that as you go to finer and finer scales, the less accurate the model becomes because of lack of computer power, while at the same time telling us that the most interesting things [and determining factors] happen at still finer scales.
If computing power is really the main drawback of these models, they may indeed be more helpful in the not too distant future. Although the bluefire machine they used was delivered only a little over a year ago, it is already well off the state of the art. The best machines are now well into the petaflop range, i.e. doing 1-4 million billion calcs/sec, and I’ve seen a recent report of a new machine in the pipeline which will be pushing the 100 PFlop barrier in a few years. The only things that seem to be limiting the development of supercomputer tech right now are the massive amounts of money and the massive supplies of energy they require.

June 21, 2009 12:52 pm

Supercomputing at 76 trillion calculations per second
I could use a faster computer, but it’d still be on dial-up.

Gordon Ford
June 21, 2009 1:05 pm

Ground truthing that model will be interesting!

Robert Wood
June 21, 2009 1:28 pm

Constructing a computer model is a valuable exercise for understanding what we know, but it doesn’t create data, nor prove theories. It is merely a tool for exploring the complex interactions of mathematical equations – which are the real models.

Bill Illis
June 21, 2009 1:41 pm

There is still a way to go with this simulation I imagine.
The Swedish 1 Metre Solar Telescope seems to produce the best close-up pictures of the Sun using adaptive optics.
There are a lot of 3D effects that don’t seem to be properly captured in the simulation, judging by the different 3D structures visible in this image, which is probably the best close-up ever taken of the interior of a sunspot.
http://www.solarphysics.kva.se/gallery/images/2003/halpha_22Aug2003_AR.4996.MFBD_color.jpg
other pics.
http://www.solarphysics.kva.se/gallery/images/2003/gband_02Jun2003_AR373.2539.MFBD_color.jpg
http://www.solarphysics.kva.se/gallery/images/2002/24jul02_gcont_ai.jpg
This movie takes awhile to fully load but is quite amazing.
http://www.solarphysics.kva.se/gallery/movies/oslo-2004/movies/gband_20Aug2004_sunspot_41min_color.mpg
Home page with other pics and movies.
http://www.solarphysics.kva.se/

Leif Svalgaard
June 21, 2009 2:02 pm

Dave Wendt (12:36:31) :
I’ve seen a recent report of a new machine in the pipeline which will be pushing the 100 PFlop barrier in a few years.
The ‘modern’ supercomputers achieve their massive throughput by running thousands of threads or even CPUs in parallel. This works for problems that can be ‘parallelized’ , but does not help much for ‘serial’ problems, so there are problems that cannot benefit from that kind of supercomputer, namely those where the next step depends on the previous step.
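To illustrate the distinction described above, here is a minimal Python sketch (not from any actual solar code; the per-cell arithmetic is a placeholder): independent per-cell work spreads across many cores, while a time integration whose next step needs the previous step’s result does not.

```python
# Minimal sketch of parallelizable vs. serial work (placeholder arithmetic, not real physics).
from concurrent.futures import ProcessPoolExecutor

def per_cell_work(cell_value):
    # Each grid cell is independent of the others, so thousands of these
    # calls can be farmed out to thousands of cores at once.
    return cell_value ** 2

def parallel_step(cells):
    with ProcessPoolExecutor() as pool:
        return list(pool.map(per_cell_work, cells))

def serial_time_integration(state, n_steps):
    # Step n+1 needs the result of step n, so this loop cannot be spread
    # across cores no matter how many are available.
    for _ in range(n_steps):
        state = 0.5 * state + 1.0
    return state

if __name__ == "__main__":
    print(parallel_step(range(8)))           # order-independent work
    print(serial_time_integration(0.0, 10))  # order-dependent work
```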

Ray
June 21, 2009 2:04 pm

I can’t look at that picture… My vision throbs!

Katherine
June 21, 2009 2:12 pm

For the first time, NCAR scientists and colleagues have modeled this complex structure in a comprehensive 3D computer simulation, giving scientists their first glimpse below the visible surface to understand the underlying physical processes.

Shouldn’t that be “what they think lies below the visible surface”? It’s just a model after all, not the real thing.

KlausB
June 21, 2009 2:49 pm

re: Leif Svalgaard (14:02:18) :
The ‘modern’ supercomputers achieve their massive throughput by running thousands of threads or even CPUs in parallel. This works for problems that can be ‘parallelized’ , but does not help much for ’serial’ problems, so there are problems that cannot benefit from that kind of supercomputer, namely those where the next step depends on the previous step.
Leif, yep, that’s it.
Distributing preliminary results on massively parallel computers, to bypass the step-by-step chain, has already been tried. A hell to program, and not worth the effort. Was there, saw that. And it didn’t make sense to me.

Aelric
June 21, 2009 2:51 pm

On a lighter note:
Let us hope that the authors didn’t submit the ‘Eye of Sauron’ jpg by mistake!

kim
June 21, 2009 2:54 pm

Heh, we know the GCMs poorly model convection. Convection leads to turbulence, which becomes difficult to model. How do we know this sunspot program adequately models convection?
==========================================

kim
June 21, 2009 2:58 pm

Leif, could one of those ‘more interesting things happening at finer scales’ be the tidal movements of mere millimeters?
===============================================

Dave Wendt
June 21, 2009 3:12 pm

Bill Illis (13:41:59) :
Thanks for the links to those images, they are incredibly beautiful and fascinating!

Dave Wendt
June 21, 2009 3:22 pm

The ‘modern’ supercomputers achieve their massive throughput by running thousands of threads or even CPUs in parallel. This works for problems that can be ‘parallelized’ , but does not help much for ’serial’ problems, so there are problems that cannot benefit from that kind of supercomputer, namely those where the next step depends on the previous step.
Good point. I guess when it comes to computers, speed and power are not necessarily synonymous. Do the GCMs have this sort of unsuitability to parallelization also and might not attempting to tweak a model to run on a massively parallel computer system lead to distortions in the output?

Leon Brozyna
June 21, 2009 4:08 pm

Not another computer model. There’s almost a mystical aura surrounding this tool. And that’s all that it is – a tool, a guide. And it has limitations. As Dr. Svalgaard said, “[it] helps visualize existing knowledge.” And it’ll also help show you what you think you know is wrong when the model eventually fails to emulate reality.

tallbloke
June 21, 2009 4:19 pm

A persistent problem in solar magnetic modeling is that as you go to finer and finer scales, the less accurate the model becomes because of lack of computer power, while at the same time telling us that the most interesting things [and determining factors] happen at still finer scales.
Big fleas have little fleas
Upon their backs to bite em
And the little fleas have smaller fleas
and so ad infinitum

GlennB
June 21, 2009 4:34 pm

Looks like we have another spot middle south. Not picked up on yet, maybe because it’s Sunday.
http://sohowww.nascom.nasa.gov/data/realtime/mdi_igr/512/
An interesting find circa 1898:
“It must have been visible as a notch on the limb on the preceding afternoon, but had not come into view in the morning, when the usual photographs were taken.”
http://articles.adsabs.harvard.edu//full/1898Obs….21..375M/0000375.000.html
Does this suggest that if a spot was not visible when photographs were taken in the morning, any spots showing up later during the day and disappearing before the next morning would not have been recorded?

David Holliday
June 21, 2009 5:58 pm

As much as I have denigrated the use of Global Climate Models (GCM), my read of this research is that the application of computer modeling is entirely appropriate. The difference is in how the models are being used. GCM’s are being used to project forward in order to predict climate on the basis of unproven theories and an incomplete understanding of climate drivers. In this research the model is being used to reconstruct phenomena using well understood principles and detailed observed data. In the GCM case you have a model being used to give false confidence in an outcome the likelihood of which is far from certain. In this case you have a model being used to increase confidence in the understanding of the underlying forces causing the phenomena. This case is an example of how computer models should be used. The GCM case is an example of how computer models can be used to give the appearance of confidence when there is none.

J.Hansford
June 21, 2009 6:01 pm

Hmmm… Computer models again?
As long as they understand they are only attempting to model what they already know.
Nothing new will arise from it. It is simply an idea for a hypothesis on something new, that observation and experimentation will still have to prove…..
Once upon a time, people used to just use their imagination rather than a computer generated image to visualize a concept. Those that could conceptualize their accumulated knowledge and apply that in a good guess for uncovering the secrets yet unknown…. Became great men and women of science.
….. So the pitfall here is, instead of seeking out the secrets of the natural universe…. They’ll get sidetracked tweaking their Computer program so that their representation of a Hypothesis fits with accepted orthodoxy and looks, “real-er”.
… Sorta sounds familiar somehow?

Bobby Lane
June 21, 2009 6:24 pm

As Leif said, GIGO. Yet another computer model purporting to explain an enormously complex natural system whose complex behaviors we have little substantial research into. All they will use this to do is either: 1) explain why sunspots do affect climate and thus control the dialogue by controlling the “results,” or 2) come to a consensus that sunspots don’t have that much of an effect when the sun ramps back up and stops being a part of the 24-hour news cycle to any significant degree.

Michael Ronayne
June 21, 2009 6:29 pm

Being a very big fan of fractals, I was at first awestruck by the beauty of the images, which is undeniable. Then I realized that there is a very significant flaw in the graphics. There is no color-scale calibration legend, which should have been included with all of these graphics. We are all assuming that we know what the colors mean and the intensity scales which are being used. This was no small omission, and I have written about this practice in the past when such graphics are used for deception, which is not the case here. In this case, I suspect that the researchers were as captivated by the beauty as I was and still am. In the face of such beauty you need a really paranoid personality, such as mine, to note the omission. Hopefully the next iteration of the graphics will contain a legend.
Mike

Retired Engineer
June 21, 2009 6:33 pm

Mike D. (11:29:21) :
“Maybe next they can simulate the Cubs winning the pennant.”
They would need a bigger computer …
“Otherwise the Great Philosophers of Science would go … wondering aimlessly in the snow.”
They don’t now ?
I suspect things happen on a finer scale than 18 miles per data point. Same problem with GCMs. Coarse resolution, and insufficient data to start with. So they make assumptions. Which are the mother of all foul-ups.
But, they have to spend our tax dollars somehow.

Gary Strand
June 21, 2009 6:37 pm

The commentators who dismiss computer modeling – I wonder if any are engineers, who live by computer models these days. Consider that the next new bridge you may cross or skyscraper you go into was modeled on a computer. Do you still believe models are worthless?
Likewise, how would one run an experiment on the sun? Or, something much easier – how would you run an experiment on the climate system?

Jack Green
June 21, 2009 6:52 pm

I’m one of those engineers that lives by computer models, and I hate to attack my own profession, but it’s a function of the data input. If you don’t have good data then you don’t get good output. Garbage in = garbage out. Still, it’s the process that leads to understanding over the long term that matters, and you never get the answer, you get a range. Our job is to tell the truth and not project things that aren’t there.

Kath
June 21, 2009 7:19 pm

Found this web page “Academic phrases translated”
http://www.thesabloggers.org/2009/06/just-for-grins-academic-phrases-translated.html
Samples:
Academic phrase with (translation):
“A statistically-oriented projection of these findings…” (A scientific wild guess.)
“Additional study will be required for a more complete understanding of this phenomenon…” (I didn’t understand this, and probably never will.)
“A definite trend is evident..”‘ (This data is practically meaningless.)
“Correct within an Order of Magnitude…” (Wrong. Wrong. Wrong.)
“It is generally believed…” (A couple of others think so, too.)
…..

actuator
June 21, 2009 7:24 pm

Computer models are just fine when you are dealing with problems where the variables are all known with precision. With many engineering models the knowledge base for materials used in structures, the effects of gravity, weight loading, etc. are sufficient to predict outcomes to produce sound structures. As with climate, a computer model of solar functions involves variables and cycles that are not known or are not known with sufficient precision for the model to be a valid predictor of outcomes.

Gary Strand
June 21, 2009 7:34 pm

It would be nice to have 1km x 1km x 10m resolution weather data for the entire atmosphere and ocean, but that’s not ever going to happen. Blaming models for the “GI” part of “GO” isn’t fair.
It’s also unfortunate that the technological development required to have computers and satellites and good earth observing systems also seems to have required the use of energy sources that are altering the climate of the planet.

MarkB
June 21, 2009 7:35 pm

Regarding those who badmouth mathematical models: how about you go learn to write one, and then come back to us and share your expertise. Until you do, you’re no better than the sports fans who call into talk radio every day and complain about a decision a manager made during a baseball game. Everyone’s a baseball expert – including Joe the Hostess Twinkies delivery van driver.

red432
June 21, 2009 7:43 pm

A little modesty and careful qualification would be nice.
Some time back I read in the Economist that some computer guys were working on “modelling” a whole nematode. I was amazed that the Economist folk just bought this hook, line and sinker. At the current state of understanding no one knows how to model the folding of a single protein given any amount of computing power — not to mention how it behaves in an organism. I’m guessing the nematode model was an utter waste of time (except for extracting money from some grant giver).
Maybe the sunspot simulators could say they are not “modelling sunspots” but simulating our current hypotheses about sunspots, in order to see if they seem to reasonably reflect reality.

JamesD
June 21, 2009 7:46 pm

So let me get this straight. This simulation shows the structure of a sunspot on the order of magnitude of the sun, but it is not perfect. And it takes a week to solve on a supercomputer. Now imagine simulating this every second, for years and years, and you get an idea of the computing power it would take to model the Earth’s climate. Computer climate models are a joke. They are useless.

JamesD
June 21, 2009 7:46 pm

“order of magnitude of the EARTH”, correction

Gary Strand
June 21, 2009 7:51 pm

JamesD, would you recommend all climate modeling efforts be ended and their funding zeroed?

MikeW
June 21, 2009 7:52 pm

As a programmer myself, I can just imagine the amount of effort, debate, testing and retooling that must have gone into each aspect of this project to coax the simulation to a point where its predictive output begins to track actual observations of a recorded event. Such staggering complexity.
But then, how humiliating would it be to find that all your work is only good for studying something that the Sun used to do. Ouch!

rbateman
June 21, 2009 7:53 pm

The spot simulation needs fractalization, which is again more iterative load.
Like fusion, it’s 30 years away.

Kath
June 21, 2009 8:09 pm

Gary Strand (18:37:56) :
“Do you still believe models are worthless?”
Gary, those FEA programs that are used to model engineering structures are checked and verified against known test models and small scale experiments to ensure that the results they produce are consistent. They go through extensive quality control to ensure consistency, and even then, the resultant models have to be checked for anomalies or unexpected behaviour that could be caused by incorrect constraints, boundary conditions, element types or other factors. Designers & engineers who use these programs also apply appropriate safety factors depending on application. Where appropriate, scale or partial full scale models may be tested and supplemented with theoretical calculations.
Engineering models cannot be compared, in any way, to climate models or the sunspot models in this article.

Editor
June 21, 2009 8:14 pm

Gary Strand (19:51:23) :
JamesD, would you recommend all climate modeling efforts be ended and their funding zeroed?
I’m with Gary on this one. Models can only be as good as the science that goes in to them but building the model is important for testing the science. I WOULD argue that climate modeling has gotten too bound up with politics and a lot of us are suspicious about the spin and lack of both transparency and humility, but let’s not turn into a bunch of Luddites.

Gary Strand
June 21, 2009 8:15 pm

Kath, how does one create a “known test model”, “small scale experiment”, “scale or partial full scale model” of the earth’s climate system?

Hank Hancock
June 21, 2009 8:23 pm

“A persistent problem in solar magnetic modeling is that as you go to finer and finer scales, the less accurate the model becomes because of lack of computer power, while at the same time telling us that the most interesting things [and determining factors] happen at still finer scales.” – Leif Svalgaard (10:45:28)
Another challenge of modeling is the limited math precision of today’s binary computing systems. Even a good math co-processor or subsystem that can make qualitative (best-algorithm) decisions on how to perform operations on number sets is constrained by the size, the width (the difference between the highest and lowest values), and the density of a number set, density being the relationship between its size and width. Precision error can occur in base or type-casting conversion between different types of floating-point numbers, in importing data from lower-precision input, and in executing higher-order math operators against floating-point numbers.
While all integer values (which all have a density of 1) can be represented in a binary system, there are gaps in floating-point values in all binary math systems. Those gaps are real numbers that cannot be represented without approximation. Thus, another challenge in going to finer and finer scales is not only the limit on computing power but also the inherent limits on math precision in today’s binary computing systems.
Analog computers could solve this problem as they are capable of representing all values of real numbers in a given scale and range but are still not ready for prime [no pun intended] time. While we get all excited about faster computational speeds, what will be even more exciting is the analog computers of the future that will handle very large and very small numbers with incredible precision, perform non-linear calculations effortlessly, and all at light speed minus velocity factor.
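Those representation gaps are easy to see directly; a few lines of standard Python (3.9 or later for math.ulp) illustrate the point above. This is just an illustration, not anything from the paper or the comment:

```python
# The gaps between representable doubles described above (illustration only; Python 3.9+).
import math

x = 1.0e16
print(math.ulp(x))          # 2.0 -- the spacing between x and the next representable double
print(x + 1.0 == x)         # True -- adding 1.0 falls inside the gap and is lost

# 0.1 has no exact binary representation, so rounding error shows up immediately:
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.17f}")  # 0.30000000000000004
```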

Evan Jones
Editor
June 21, 2009 8:52 pm

It’s worse than that. You can’t model something like that from the bottom up. The errors compound. It’s like trying to simulate World War II using Squad Leader rules. Can’t be done; foolish, even silly, to attempt.
Something like that has to be modeled, if it is to be modeled in the first place, from the top down. The results will be less precise–and probably far more accurate. Such a model is also much easier to modify for newly discovered factors. (Such a model could also be run using a 1980s PC. Or, lacking that, the back of a large envelope.)
And, like judging a wargame, one must start by comparing the model with the “storyboard”, that is to say, the real past. One does not even consider using the model to project until it has first duplicated.
STAVKA actually did a top-down simulation in 1940 of a projected German invasion. Zhukov “played” the Germans. The results were almost exactly the same as what happened historically, down to the stall at Yelnya. (This did not suit Stalin, and STAVKA threw the results out and Zhukov was in the doghouse.)

Paul R
June 21, 2009 8:58 pm

Looking into that computer generated spot makes my eyes go wonky, that’s all I have to say about that.

Leif Svalgaard
June 21, 2009 8:58 pm

kim (14:58:04) :
Leif, could one of those ‘more interesting things happening at finer scales’ be the tidal movements of mere millimeters?
Fraction of millimeters 🙂
The problem with these minute movements is that they occur on top of [or swamped by] random movements millions of times larger and faster. The tidal effects are not fine scale, but the largest scale possible on the Sun.

J. Peden
June 21, 2009 9:18 pm

Gary Strand:
“Or, something much easier – how would you run an experiment on the climate system?”
You make some predictions based upon your hypothesis and see if they eventuate in the real world. You make clear what empirical conditions would falsify your hypothesis. Yes, those really are real experiments, with the whole climate, Earth, and Solar System themselves as giant sources of altering conditions, results, and data. Isn’t that enough?
But you can also run smaller real world experiments on the elements of your hypothesis, such as the temperature effect of varying concentrations of CO2 within a confined space and a given light wave input, etc., or on CO2’s effect on a simulated “ocean”.
Isn’t that how general laws came to be developed and accepted? Isn’t that what Climate Science should be doing?
Oh, and don’t forget to check your measuring equipment, wink, wink.

John F. Hultquist
June 21, 2009 9:49 pm

“They also contribute to variations in overall solar output, which can affect weather on Earth and exert a subtle influence on climate patterns.”
I’d like the authors to expand on this “subtle influence” idea. Where is it? Why is it? How is it? Haven’t we all heard that the climate is swamped by CO2 and the positive feedback of clouds?

June 21, 2009 10:00 pm

To the devotees of supermodels:
I create and use models all the time in my forest biometrics work — primarily growth-and-yield models, but also many types of mensurational tool-type models.
I don’t make pretty pictures or use fabulously expensive supercomputers to create my models. Nor do I rely on models alone to make difficult management decisions. They are tools, not scripture.
The pretty pictures are useless to a large degree, but even more useless is the expense. The economy of the entire world is reeling. We have real problems galore, such as poverty, famine, epidemic disease, ignorance, war, etc. Is it too much to ask government scientists to pinch a few pennies on work that has no practical value?
I am not opposed to “pure” science such as astronomy or particle physics. But I am on a tight budget right now. We all are. Wouldn’t it be better to pursue real wealth creation and relief of human suffering at the moment, than to squander, yes squander, precious resources on pretty pictures?
We all know that most government science excludes the truly innovative, paradigm-challenging thinkers. PC science does not advance knowledge; in fact, I would argue it is a barricade to fresh scientific inquiry. That is another reason to reduce the funding to inside-the-Establishment institutional science.
I am not a Luddite. But I do find it ironic that the real Luddites, those who would shut down civilization out of a gripe with technology and indeed humanity in general, rely on supercomputer climate models to justify their dystopianism.

Kath
June 21, 2009 10:27 pm

“Gary Strand (20:15:56) :
Kath, how does one create a “known test model”, “small scale experiment”, “scale or partial full scale model” of the earth’s climate system?”
Gary, J.Peden, above, answered your question. The climate experiment is all around us. It is happening as we speak. And, as J.Peden notes, the temperature and other measurements must be accurate and not based on urban heating, bad siting or poor equipment.
One has to ask the following of the climate models:
a) Temperatures are dropping while CO2 continues to rise. Do the models correctly reflect this fact?
b) The Arctic ice just about reached “average” thickness this winter; in fact, the ice was thick enough for the Russians to drive vehicles to the North pole. http://www.russia-ic.com/news/show/8099/. Do the models accurately predict the changes in ice thickness that we have seen from the winter of 2007 to 2008-2009?
c) Do the models adequately conform to recorded historical climate data such as the little ice age and the medieval warm period?
(There are others that I have not listed)
The problem is this: If even one of those events is not correctly modeled, can we reasonably expect the same models to predict climate decades into the future?
One other thing: Climate change is normal. It has always happened in the past and it will always happen in the future. With or without human intervention.

kristof
June 21, 2009 10:28 pm

Wow, sometimes it is worse here than in a retirement home. ‘In the good old days, there were no computers… then it was all ‘real’ science.’
So some of you are saying that computer models are useless because they can give faulty output. And let’s first call a cat a cat. Computer models are actually just calculating the physical models we already have, but that were too difficult to calculate by hand. They are not some magical new tool. They are used very extensively in many, if not all, fields of engineering and science. Does that mean that their outcome should be trusted blindly? Duh, of course not.
That is why people check them, test them, improve them, disregard them if they are wrong, and even then you remain aware of the fact that it is only a model, not reality itself.
That is why different groups make different models.
So yes, you have to be cautious about the output of computer models, but to disregard them as a whole, well, that is just plain old nostalgic whining.
But anything that helps avoid confronting the issue is good enough, right?
The reasoning that some follow is: 1. computer models can be faulty; 2. therefore nothing that comes out of the models for our climate can be trusted. But 2 does not have to follow from 1.
Or some say: 1. the climate is too complex, is not understood well enough, or the grid size should be smaller, so 2. everything the models predict is false. Here too, 2 does not have to follow from 1.
A simple model can already take you some way toward understanding what happens, even though the system is complex. It is not because the outcome of the present models has some variance in it that the results are useless.
Some people here are discussing the pros and cons of computer models, as should be done.
But please quit the whining about computer models being bad just because you don’t like their outcome.
You would think that if the models were really so useless and unreliable, and could predict anything with bad input (garbage in, garbage out), then under Bush at least one group would have come up with a model that said nothing was wrong.

June 21, 2009 10:36 pm

Katherine (14:12:45) :
Shouldn’t that be “what they think lies below the visible surface”? It’s just a model after all, not the real thing.
I was thinking exactly the same thing. It seems that modeling is taking the place of reality. Whatever the results of a computer are, they become realities in the minds of some people. At this rate, some scientists will end up believing that the Earth warms up the Sun, or that the energy in the Earth is totally endogenous, created by carbon dioxide out of nothingness… 🙂

Neil O'Rourke
June 21, 2009 11:05 pm

Consider that the next new bridge you may cross or skyscraper you go into was modeled on a computer. Do you still believe models are worthless?
The stress calculations needed for this sort of modelling are (essentially) completely solved. Also, the vast majority of modelling is some form of interpolation.
Global Climate Models are taking a subset of data and extrapolating.

hotrod
June 21, 2009 11:37 pm

Every time I see this sort of “news item” about computer models appear, I always flash back to the same two events.
One was an incident in a convenience store years ago, right after the magic computerized cash registers came out that figured your change for you. I had been shopping there for years and knew the price of certain items by heart. I walked in, pulled a soda pop out of the cooler, and set it on the counter. A new employee rang it up and announced that it would be $1.25. I smiled and said it should be $.89; she indignantly said, "No, it is $1.25," and pointed at the electronic cash register. I turned and pointed to the sign on the glass door of the cooler that clearly advertised $.89. Her response was that it would be $1.25, because that is what the computer says! It took a moment to convince her that the sign was evidence of a reality that the computer did not recognize.
I also flash back to a debate I was having with an engineer about the limits of their knowledge regarding accidents at nuclear reactors. Here in Colorado at that time, we had the only commercial high-temperature gas-cooled nuclear reactor in the U.S. (perhaps in the world – there was one other similar research reactor in Peach Bottom, PA). They were describing various maximum credible accident scenarios.
One of them was a breach of the concrete containment and the possibility of runaway heating of the graphite core (think very big charcoal grill) when it was exposed to oxygen while heated to very high temperatures (not unlike what would happen at Chernobyl years later). When the subject came up, their answer was a simple, emphatic "that is impossible". We were trying to impress on them the limits of their knowledge and get them to at least qualify that statement with something like "That is extremely unlikely, and our calculations show it is probably impossible".
The engineers would simply not budge from their flat declarative statement, until I commented, "Yes, and the engineers at de Havilland did not think the wings would fall off their Comet airplane either!"
At that point there was a pregnant pause, and you could see the light go on as they said that yes, "to the best of their knowledge that was impossible"!
Larry

RexAlan
June 22, 2009 12:09 am

Computers are a great tool for helping you think, just never let them do the thinking for you.

Gary Pearse
June 22, 2009 4:09 am

Gary Strand (18:37:56) :
“The commentators who dismiss computer modeling – I wonder if any are engineers, who live by computer models these days. Consider that the next new bridge you may cross or skyscraper you go into was modeled on a computer”
Your defense of models is a bit over the top. Complex bridges and buildings were designed for a long time with a pencil and a slide rule. The models, using the same applied science, simply do the calculations and drawings a heck of a lot faster (when you know the science! I wouldn’t walk across a bridge designed by a climate change modeler). Yes, computer models can be useful, but keep in mind what happened to the movies when computer simulation could make good car crashes and explosions. Moviecraft and even telling a good story got overpowered by this new tech. The story became written around the simulations, and a very useful tool became a toy and an end in itself. And of course, you could make the simulation do exactly what you wanted it to do. Sound familiar? I think a major contribution of modelling to science is that it has proven that the cosmos is not deterministic. With models, it appears one can fashion many alternatives for a phenomenon (and pick the one you like).

Gary Pearse
June 22, 2009 4:29 am

Re models and engineering. Engineering models are relatively simple, dealing with strength of materials, loads, force vectors, and effects of time. But even in these simple circumstances, with factors that you can count on your fingers, the engineer usually then builds in a safety factor of 50 to several hundred percent to be sure!! Surely climate models that are projecting a degree or two over a century, with measuring tools that aren’t accurate to a degree or two and dealing with so many unknowns, should have such a huge “safety factor” as to render them useless. I hope there are enough of us Luddites around to vote down the computer models’ projections of the next century.

Gary Strand
June 22, 2009 5:50 am

J. Peden:
“You make some predictions based upon your hypothesis and see if they eventuate in the real world. You make clear what empirical conditions would falsify your hypothesis. Yes, those really are real experiments, with the whole climate, Earth, and Solar System themselves as a giant sources of altering conditions, results, and data. Isn’t that enough?”
If that’s your criteria, then climate models are doing quite well. They can replicate the known 20th century climate – with the appropriate caveat on “known”. I doubt you agree, but given your metric above, perhaps you need to alter it given climate models’ successes.
“But you can also run smaller real world experiments on the elements of your hypothesis, such as the temperature effect of varying concentrations of CO2 within a confined space and a given light wave input, etc., or on CO2’s effect on a simulated “ocean”.
What of all the other factors? Climate depends on far more than just CO2 and solar input; how would we know what’s important if we relied on just one or two forcings?
“Isn’t that how general laws came to be developed and accepted? Isn’t that what Climate Science should be doing?”
That’s what “Climate Science” has been doing – for the better part of 40 years.
“Oh, and don’t forget to check your measuring equipment, wink, wink.”
Indeed.

Tim Clark
June 22, 2009 6:00 am

The simulations can help scientists decipher the mysterious, subsurface forces in the Sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.
Give the authors some credit. They could have stated, “Such work will lead to understanding that minute variations in solar output cannot account for the hockey stick.”

Gary Strand
June 22, 2009 6:01 am

Kath:
“The climate experiment is all around us. It is happening as we speak. And, as J.Peden notes, the temperature and other measurements must be accurate and not based on urban heating, bad siting or poor equipment.”
Of course – but those measurements are only a very small part of the picture. Did you see my comment about the lack of 1km x 1km x 10m resolution data for the ocean and atmosphere? Indeed, those (somewhat arbitrary) requirements for data are generous; for land surface and subsurface processes, a vertical resolution of centimeters would be appropriate – same applies for sea ice.
“One has to ask the following of the climate models:
a) Temperatures are dropping while CO2 continues to rise. Do the models correctly reflect this fact?”
Despite the fact that “temperatures are dropping” is a cherry-pick of the data, yes, climate models do replicate temperature drops as CO2 increases. Your hidden assumption that temperature must increase monotonically as CO2 increases ignores the knowledge we have of the many other factors that affect temperature.
“b) The Arctic ice just about reached “average” thickness this winter; in fact, the ice was thick enough for the Russians to drive vehicles to the North pole. http://www.russia-ic.com/news/show/8099/. Do the models accurately predict the changes in ice thickness that we have seen from the winter of 2007 to 2008-2009?”
I personally haven’t looked at the output from the IPCC AR4 models for this particular measure – I’d be quite surprised if at least one of the models, for one of its runs, didn’t show sea ice area and thickness matching observations. One thing – measuring sea ice thickness is tricky – very tricky.
“c) Do the models adequately conform to recorded historical climate data such as the little ice age and the medieval warm period.
(There are others that I have not listed)”
Provide accurate and correct forcing and boundary conditions for those two states, and climate models will give it a go. Of course, since what we do know of those two eras is quite poor and has huge error bars, expecting climate models to replicate them is asking quite a lot. Remember, GIGO…
“The problem is this: If even one of those events is not correctly modeled, can we reasonably expect the same models to predict climate decades into the future?”
That’s the kicker, isn’t it? Who, and how, defines “correctly modeled” since our knowledge is imperfect? Likewise, does our lack of omniscience mean we are truly completely ignorant? I don’t think so.
“One other thing: Climate change is normal. It has always happened in the past and it will always happen in the future. With or without human intervention.”
That’s true – but that doesn’t mean that we humans cannot be changing the climate ourselves. Just because humans have always died doesn’t mean someone cannot be charged with murder, when circumstances and the facts require it.

Gary Strand
June 22, 2009 6:04 am

Gary Pearse:
“Your defense of models is a bit over the top. Complex bridges and buildings were designed for a long time with a pencil and a slide rule.”
I recommend you examine the efforts of Lewis Fry Richardson.
“The models, using the same applied science simply do the calculations and drawings a heck of a lot faster (when you know the science! I wouldn’t walk across a bridge designed by a climate change modeler).”
Why not? Because climate change modelers don’t know the science (which is a bit unfair) or because climate change modelers deliberately manipulate and alter the science to achieve some pre-determined outcome (which is hugely unfair and unreasonably snarky)?

david_a
June 22, 2009 6:07 am

Gary Strand:
I’d say that some of the most ardent ‘disbelievers’ in the AGW hypothesis are precisely those people who have a lot of hands on experience with computer models and understand them for what they are.
The great problem with the climate models appears to be that they contain few hard and fast testable outcomes, at least on shorter timescales, or if they do then they are not made public in a clear manner. We have about 30 years of pretty decent satellite temperature series to work with and are in the process of accumulating good ocean heat content data with the deployment of the Argo network. I have yet to find a paper which says something to the effect that here are the variances of the various energy balance components of the system as predicted by the models and, given them, here is a probability distribution of what the energy balance should look like as a function of time.
Instead the ‘science’ (especially the paleo reconstructions) is presented as a public relations campaign and to anyone even remotely conversant with the math and the physics it is laughable and causes one to have an enormous distrust of those ‘doing the work’.

Gary Strand
June 22, 2009 6:29 am

david_a (06:07:21) :
“I’d say that some of the most ardent ‘disbelievers’ in the AGW hypothesis are precisely those people who have a lot of hands on experience with computer models and understand them for what they are.”
Those of us in the business aren’t as enamored of the model output as you imply.
“The great problem with the climate models appears to be that they contain few hard and fast testable outcomes at least on shorter timescales, or if they do then they are not made public in a clear manor.”
Expecting climate models to replicate “shorter timescales” is a misunderstanding of climate. Depends on what length of time you mean. Climate models will never be able to tell you weather phenomena.
As for the results not being made public, that’s directly contradicted by the existence of the CMIP3 archive.

Just Want Results...
June 22, 2009 7:12 am

Bill Illis (13:41:59) :
Thanks Bill! You’re the best commenter here at WUWT.

Steven Hill
June 22, 2009 7:25 am

Those spots on the sun, are they part of cycle 23? Trying to learn something.
thanks,
Steve

June 22, 2009 7:40 am

Models are simply tools. They are used in many different disciplines as an aid to understanding and to provide predictions which can be tested experimentally.
They are particularly suited to linear problems which have a wealth of factual information on which to develop algorithms, usually provided by experiment. They are poor to useless when used for chaotic non-linear systems where the drivers are poorly understood, as is the case for what happens to the bodies in our solar system.
Too much reliance on badly constructed models, which have been bought to provide politically motivated outcomes, could potentially drive science into a future dark age.
Science which cannot be falsified is no better than a religion regarding the prediction of future events.

Just Want Results...
June 22, 2009 7:45 am

Mike D. (11:29:21) :
Maybe next they can simulate the Cubs winning the pennant.

Probably not. No super computer has enough for those variables.
But I think you could use a small pocket computer to model Brett Favre with Adrian Peterson in the backfield winning a Super Bowl with Minnesota!

hotrod
June 22, 2009 8:35 am

Models are great once they have been validated. Modern aerodynamics and hydrodynamics also use modeling. But those models have been validated thousands of times. They put in the details of a new ship design or aircraft design, let the model crunch the numbers, then put a model of the finished design in a drag tank (for the ship) or a wind tunnel and verify that the predictions the model churned out match up with the real world. Over time they have gotten the numerical methods good enough that they narrow the possibilities down to manageable numbers, but they still test-fly the plane and do scale-model tests to verify the final values.
Here is a simple test: plug in the real-world weather data for today and run a simulation on the models for a year from now. Take the output and place it in a safe deposit box. One year from now, look at the simulation output and compare it to the real-world weather on that end date.
You cannot “test” a model against the pre-existing data it was designed to mimic; that is akin to checking a mathematical calculation by doing the exact same calculation twice and proclaiming it validated because you got the exact same result the second time. A valid check must use new data and accurately predict an unknown future event. Making a prediction 100 years into the future is like a fortune teller predicting the number of great-grandchildren you will have. It is untestable in any usable time frame.
Larry
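The “safe deposit box” test above is simple enough to write down; here is a bare-bones Python sketch of it (the file name and the RMSE score are illustrative assumptions, not anything from the comment):

```python
# Bare-bones version of the "safe deposit box" test described above (illustration only).
import json
import math

def archive_forecast(forecast, path="forecast_for_next_year.json"):
    """Write today's model output somewhere it cannot be quietly revised later."""
    with open(path, "w") as f:
        json.dump([float(v) for v in forecast], f)

def score_forecast(path, observations):
    """A year on: compare the archived forecast with what actually happened (RMSE)."""
    with open(path) as f:
        forecast = json.load(f)
    errors = [(f_i - o_i) ** 2 for f_i, o_i in zip(forecast, observations)]
    return math.sqrt(sum(errors) / len(errors))
```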

Gary Strand
June 22, 2009 9:04 am

hotrod (08:35:23):
“Here is a simple test: plug in the real-world weather data for today and run a simulation on the models for a year from now. Take the output and place it in a safe deposit box. One year from now, look at the simulation output and compare it to the real-world weather on that end date.”
Climate models aren’t NWP models, and, we cannot know the “real world weather data” to sufficient precision (the ultimate limiter being Heisenberg) to allow a one year forecast.
Your “simple test” is anything but, and no model can ever pass it. Try a different metric.

June 22, 2009 9:08 am

Steven Hill (07:25:19) :
Those spots on the sun, are they part of cycle 23?
cycle 24 because of their high latitude and magnetic signatures.

David Corcoran
June 22, 2009 9:11 am

Gary Strand, Dr. Hansen has been issuing predictions for 30 years, and now the Arctic ice is near normal and the Antarctic ice is well above normal. Most of the Stevenson screens he relies on have been shown not to meet NOAA guidelines. I think it’s foolish to be anything but skeptical given those circumstances.
It’s these wrong predictions that cause the most skepticism. And these have occurred over a long period of time. 30 years isn’t cherry-picking and it isn’t just “weather”. Then there was his fantastically wrong 2006 Super El Niño prediction. And we’re supposed to bet all of our livelihoods on this?
Environmentalists have a rich history, going back to the 70s, of crying wolf without much consequence. Paul Ehrlich said we’d be eating each other by the 1980s… eaten a neighbor lately?
Modelling the past is relatively easy. Tinker ’til it fits. Knowing the future? That’s what separates the rich from the poor on Wall Street. The AGW crowd is losing credibility with the public with every cool summer and brutal winter.

Alan the Brit
June 22, 2009 9:13 am

Gary Strand;-)
Kath;-)
Gary Pearse;-)
Engineering programmes are as Kath has stated. They are based on known behavioural characteristics of materials & structural forms, based on theory & testing. Generally a full-size model would be made for best results. As engineers, we know pretty much how steel, concrete, timber, masonry, aluminium, and glass behave as materials, although they can & do throw in the odd wobbly every now & then in practice (observed reality)! Computers only reflect in their output the input they are given.
The simplest test of an engineering computer programme/model, I am afraid to say, is to take a pencil (2B preferably) and a pad of plain paper, do a couple of sums by hand (God forbid such heretical goings-on), sketch out the bending moment, shear force and, most importantly, the deflected form you think you should get, then run the computer model with the same parameters; if they match up reasonably well, then the computer model is probably right. Remember, most structural engineering design by computer is simply number-crunching the equations that have been worked out by hand in the past. These programmes should only be used by very experienced engineers who know by “feel” whether the answer they get is in the right ball park, whereas fresh-faced graduates tend to rely entirely on their computer output for the answers, without getting the chance to develop that “feel”. (I kid you not, it’s frightening at times.) I expect this applies to many fields! So how do these climate modelers know they have the right “feel” for what they get out? How can they? Ultimate validation has to be observed reality. So-called “predictive” modeling is in its infancy & may never become a reality with things like the Sun & climate. With all these ever-decreasing, finer scales of modeling at the input end, there is a distinct possibility of disappearing up one’s own output!
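A minimal Python sketch of that hand-check for a simply supported beam under uniform load, using the textbook formulas M = wL^2/8 and delta = 5wL^4/(384EI); every number here is invented for illustration, and the “model” values stand in for whatever the program reports:

    # hand formulas vs. pretend program output for a simply supported beam
    w, L = 10e3, 6.0            # uniform load 10 kN/m, span 6 m (invented)
    E, I = 210e9, 8.0e-5        # steel modulus (Pa) and second moment of area (m^4), invented

    M_hand     = w * L**2 / 8                  # max bending moment, N*m (45 kN*m)
    delta_hand = 5 * w * L**4 / (384 * E * I)  # midspan deflection, m (about 10 mm)

    M_model, delta_model = 44.6e3, 0.0102      # pretend these came from the program

    for name, hand, model in [("moment", M_hand, M_model),
                              ("deflection", delta_hand, delta_model)]:
        ok = abs(hand - model) / hand < 0.05   # within 5% counts as matching reasonably well
        print(name, round(hand, 4), round(model, 4), "agree:", ok)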
If one were to believe what the GCM makers imply, they “know” pretty much everything about the climate, & can “accurately” estimate the behaviour of what they don’t! I say again, if these guys had a little more incentive, like loss of position/job/pension/home if they are shown to be wrong, then maybe the “uncertainties” might just get a little more front-page news!

hotrod
June 22, 2009 9:43 am

Gary Strand (09:04:17) :
hotrod (08:35:23):
“Here is a simple test: plug in the real-world weather data for today and run a simulation on the models out to a year from now. Take the output and place it in a safe-deposit box. One year from now, look at the simulation output and compare it to the real-world weather on that end date.”
Climate models aren’t NWP models, and we cannot know the “real world weather data” to sufficient precision (the ultimate limiter being Heisenberg) to allow a one-year forecast.
Your “simple test” is anything but, and no model can ever pass it. Try a different metric.

Which is precisely the point. We do not have initial information of sufficient precision to make a hundred year calculation even if the mathematics were perfect.
The mathematics are not perfect.
The granularity of the models is insufficient to allow meaningful projections that far into the future.
And last but not least, even on short runs they are not validating against reality, so we know they are broken.
Maybe in 50-100 years they will be workable, but right now all they are is SWAGs.
Larry

Gary Strand
June 22, 2009 9:52 am

hotrod (09:43:52) :
“Which is precisely the point. We do not have initial information of sufficient precision to make a hundred year calculation even if the mathematics were perfect.”
Not quite true. Consider dropping a ball from a height: you can make a very good guess at the time it will take to fall without knowing ‘g’ to infinite precision, the air resistance to infinite precision, or the mass of the ball to infinite precision, and without an infinitely precise stopwatch, and so on.
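A small numerical illustration of that point in Python; the 1-2% uncertainties on g and on the drop height are assumptions picked purely for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    g = rng.normal(9.81, 0.10, 10_000)   # gravity known only to roughly 1%
    h = rng.normal(50.0, 1.00, 10_000)   # drop height known only to roughly 2%
    t = np.sqrt(2 * h / g)               # fall time, air resistance ignored

    print(round(t.mean(), 2), "s, spread", round(t.std(), 3), "s")
    # the spread is a few hundredths of a second: sloppy inputs, still a useful answer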
Climate model projections are analogous – do we really need to know the state of the climate system infinitely precisely to make a projection of the future? Not really – which isn’t to say that just any input state will represent the real Earth.
Do we know everything we really need to know about the climate system to make perfectly accurate forecasts? No. Does that mean that we know virtually nothing and any projection is just sheer guesswork? No.

Gary Strand
June 22, 2009 9:56 am

Alan the Brit (09:13:43) :
“If one were to believe what the GCM makers imply, they “know” pretty much everything about the climate, & can “accurately” estimate the behaviour of what they don’t!”
You’re erecting a strawman. I don’t know of any modeler who claims to “know” “pretty much everything” – we do know the major drivers of climate well enough to create models of it that are reasonably correct.
One thing I’ve noticed about skeptics is that they have unreasonable expectations of proof. It’s kinda like a trial – the prosecution only has to prove its case beyond a *reasonable* doubt, not *any* doubt.

Gary Strand
June 22, 2009 9:57 am

David Corcoran (09:11:22) :
“The AGW crowd is losing credibility with the public with every cool summer and brutal winter.”
That’s because a single summer or winter, alone, does not disprove (or prove) AGW.

Gary Pearse
June 22, 2009 11:31 am

Gary Strand:
Sorry if you took my remarks about climate modelers designing bridges as snarky. I didn’t intend to offend. My point was that engineers, by the nature of their tasks, can’t afford to be wrong; their failures are dramatic parts of human history. It was you who brought engineers into the discussion on models, and I wanted the differences to be clear. I was wrong in my statement that a global warmer couldn’t design a bridge using a model. I’m sure there are structural engineers who have bought into the settled-science hypothesis. Moreover, an engineer’s model may well be usable by a layman (even if it wouldn’t be permitted) because it has been very well developed from experience. One final point: it seems to me that many of the spokespersons for the validity of climate models are often not scientists in that field. They are more believers. Al Gore is a politician, the heads of the IPCC are a railway engineer and an economist, Hansen is an astronomer… An engineer would be most surprised to have a school teacher, social worker, nuclear physicist, organic chemist, and bakery chef argue spiritedly and vehemently about the pros and cons of a structural engineer’s model.

Carsten Arnholm, Norway
June 22, 2009 12:10 pm

Alan the Brit (09:13:43) :
Remember, most structural engineering design by computer is simply number-crunching the equations that have been worked out by hand in the past. These programmes should only be used by very experienced engineers who know by “feel” whether the answer they get is in the right ball park, whereas fresh-faced graduates tend to rely entirely on their computer output for the answers, without getting the chance to develop that “feel”. (I kid you not, it’s frightening at times.)

I am a structural engineer and “computer modeler” in the sense that I have been creating Finite Element Analysis software for more than 20 years. What you say is very true. A famous example of this kind of analysis going wrong was the Sleipner A gravity-base offshore platform, which suffered catastrophic failure on August 23, 1991. The failure was caused by inappropriate use of the Finite Element Method by inexperienced engineers. The huge structure collapsed in one of our fjords and caused a magnitude 3 earthquake.
http://www.ima.umn.edu/~arnold/disasters/sleipner.html
P.S. My software was not used in this case, but it could have been. It was a case of garbage in, garbage out.

June 22, 2009 1:12 pm

Dear Colleagues… At last, my article was sent back completely reviewed. The bad news is that it was classified as an academic article, that is, a didactic article. I’m struggling to have it published as a peer-reviewed paper. In the meantime, you can see a graph of the extrapolated TSI data going back some 11,550 years…
http://www.biocab.org/Extrapolated_TSI.jpg
I still think that the ISG variable is a reliable proxy for calculating the TSI before the advent of satellites and the computation of sunspots. 🙂

June 22, 2009 2:03 pm

Gary Strand (09:56:04) :
You’re erecting a strawman. I don’t know of any modeler who claims to “know” “pretty much everything” – we do know the major drivers of climate well enough to create models of it that are reasonably correct.
One thing I’ve noticed about skeptics is that they have unreasonable expectations of proof. It’s kinda like a trial – the prosecution only has to prove its case beyond a *reasonable* doubt, not *any* doubt.

No, Gary, don’t deceive yourself. Skeptics don’t have unreasonable expectations of proof. There is no proof against a belief. What science shows, the good science of thermodynamics and heat transfer, is that CO2 is not a source of heat, that CO2 at its current partial pressure in the atmosphere cannot absorb and emit the loads of heat that AGW assumes, that CO2 does not have a total emissivity high enough to increase the atmospheric temperature by more than 0.03 K, etc.

Gary Strand
June 22, 2009 2:07 pm

Nasif Nahle (14:03:40) :
Arrhenius was wrong?

Pamela Gray
June 22, 2009 3:05 pm

Just some thoughts on modeling.
The original premise of AGW, looking back at its infancy, was made from a statistical analysis of noisy weather over time. Weather pattern variation data were submitted to statistical analysis in order to create trend lines. Some used curvy nonlinear algorithms, some used straight linear algorithms, but they were statistically generated nonetheless.
Eventually, through political or scientific processes, or both, it was assumed that this averaging and subsequent statistical analysis revealed something other than what it was: the statistical average of weather over time. It was now assumed that this artificial trend line represented different data, that of a greenhouse-gas effect signature, which eventually became the notion that the trendline was directly related to human-caused greenhouse gas emissions. So devices were set up around the world to measure surface ozone pollution, CO2, and methane. It became apparent that CO2 and methane were increasing. Sinks were not directly measured but instead were calculated, again with assumptions as part of the calculation. Since these two gases are known greenhouse gases, the jump was made that the trendline in the temperature data was not a statistical artifact of noisy weather pattern variation, but a direct measure of greenhouse gas influence on temperatures.
This assumed relationship was then mathematically modeled until the modeled trendline matched the observed trendline. A concerted and admitted assumption was made to dampen the effects of natural weather pattern variation drivers in the calculations. These models were then projected using varying levels of CO2/methane emissions, resulting in increasing temperature.
How did CO2 become the main culprit? Of the two, methane is the more powerful gas but this would lead to an uneven application of restrictions that would not be tolerated. As in the voting public would not like the price of meat being higher than their monthly house payment and ranchers would simply go on strike. Farmers would likely have joined them. CO2 was chosen as the one to concentrate on politically because the burden would be shared by everybody and would likely not trigger agricultural outrage.
The problem with this development is that assumptions were made based on lab properties of greenhouse gases, much like early mistakes were made in understanding the physics and behavior of plasma in the lab versus plasma in the cosmos.
However, I can make assumptions as well. I can construct a thought experiment that gives natural weather pattern drivers more influence and human-caused emissions less influence in my mathematical models, and end up with drawings that look very much like the CO2-modeled future if warm oceanic oscillations such as El Nino-dominant phases were to exist in varying strengths. I could also predict a downward trend if cool oceanic oscillations such as La Nina-dominant phases were to exist in varying strengths.
Which premise is correct? Both make the same degree of assumptions in terms of influence in situ, but using different variables. This would be a logical test if you simply match the current stalled temp trend with a model that matches it (and we know which set of models would win). However, the political arm of the scientific CO2 premise has already done an end run around that by saying their model is still the more correct one, because once natural cool weather pattern variation drivers cease, the temperature rise will be catastrophic as CO2-caused temperature increases crawl into bed with natural warm weather pattern variation drivers.
Given that, a very good test of this debate would be under the condition of an extended El Nino. [And, much to the consternation of my solar friends, the Sun’s effect can be dismissed in this experiment. It can do whatever it wants to do. The effects of El Nino would bury any solar influence.] My hunch is that runaway temps would not happen. Yes, it would be warmer than it is now, but it would not fry us like the end-run hypothesis mentioned above says it would.

hotrod
June 22, 2009 3:06 pm

Not quite true. Consider dropping a ball from a height: you can make a very good guess at the time it will take to fall without knowing ‘g’ to infinite precision, the air resistance to infinite precision, or the mass of the ball to infinite precision, and without an infinitely precise stopwatch, and so on.
Climate model projections are analogous – do we really need to know the state of the climate system infinitely precisely to make a projection of the future? Not really – which isn’t to say that just any input state will represent the real Earth.
Do we know everything we really need to know about the climate system to make perfectly accurate forecasts? No. Does that mean that we know virtually nothing and any projection is just sheer guesswork? No.

You are trying to imply I am asserting things I am not, and to create an indefensible argument.
I did not say they “knew nothing”, and I did not say they needed to know “everything to infinite precision”. What I did say is that they have not made even the most elementary validations of their models.
If they could put today’s climate information (SST, solar flux, air temps, etc.) into their model, run the model for 30, 60, or 90 days, and then compare the predictions of the model to reality with good results, then and only then could they assert, with any certainty at all, that they could predict, say, 6 months into the future. Once they get 10-20 acceptably accurate 6-month predictions, then they could reasonably assert that they could predict the weather 1 or 5 years in advance, and so on.
They are making extremely long-range predictions with no short-term validation tests to establish any sense of what the models’ precision is. In fact, some of their current projections are not validating even over short time periods. They are not getting the signature atmospheric warming they expect, and they did not predict the current stabilization and downturn in temps. Their Arctic ice predictions are not faring very well either, nor has the recent spate of cool, wet weather many areas have had this last winter and spring done much to show they have a good grasp on even regional climate, let alone global climate.
To take your falling ball example —
Let’s say some guy claims he can predict how long it will take for a ball to fall to the ground from a leaning tower.
First he needs to specify how he will measure the time: will it be judged by eye, by the sound of the object hitting the ground, or by some other means? Then he would need to state, before the experiment is run, what an acceptable error in that time prediction would be.
Suppose he says that he can predict the time of fall to a precision of a tenth of a second, and everyone agrees that for real world problems that is good enough.
Then he needs to drop the ball, and see if the actual fall time agrees with the predicted fall time. Then he needs to do it several more times to show it was not just a fluke. If all those drops come out acceptably close to the predicted fall time, then some other person needs to use his formula to predict the fall time of a different ball from a different tower.
Rinse and repeat.
After you have a few dozen or a few hundred successful tests that all agree with the prediction, then you can with some authority assert you have a model of a falling ball that can predict the fall time of any ball from any tower to an acceptable precision.
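A toy run-through of that protocol in Python, with an assumed 0.03 s timing noise and a rough tower height, just to show what the bookkeeping looks like:

    import numpy as np

    g, h = 9.81, 55.9                    # rough height of the leaning tower, m (assumed)
    predicted = np.sqrt(2 * h / g)       # predicted fall time, air resistance ignored

    rng = np.random.default_rng(2)
    measured = predicted + rng.normal(0, 0.03, 20)   # 20 drops with assumed timing noise

    within = np.abs(measured - predicted) <= 0.1     # the agreed tenth-of-a-second tolerance
    print(int(within.sum()), "of", within.size, "drops within 0.1 s of the prediction")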
The AGW community is making projections of events that won’t even happen for 30-100 years, with no track record of being able to predict any shorter time interval to any well-accepted degree of accuracy. Everyone is just supposed to take their word for it that they used “math” and “physics”, that they are “experts”, and that they used “computers”, so not to worry, the predictions are reliable! Please go spend a few trillion dollars, and while you are at it turn government regulations on their head, overhaul entire economies, break the back of a few industries, and create a few other industries (cap and trade) out of thin air, all on the “faith” that they got it right, even though they freely admit that some of the numbers they used in their model were “educated guesses” and they do not have a single meaningful validation test under their belt.
They need to go through a formal validation process, not unlike the flight tests a new plane goes through. Even though it was “designed on a computer” with a very reliable and trusted mathematical model that has been validated thousands of times before, occasionally the plane does not do what the engineers expect. Sometimes the wings fall off (the De Havilland Comet), sometimes fuel lines vibrate and break in flight, and sometimes the thoroughly tested automatic pilot system thinks the pilot wants to land when he does not, and flies him into the ground or does something else it is not expected to do ( http://www.thesun.co.uk/sol/homepage/news/article700633.ece ).
When they can get 9 out of 10 predictions right for climate conditions 1, 5, or 10 years in the future, I might listen to them about a 50-year forecast. After they get a few of those right, then they can start asserting they have a clue what the climate will be in 100+ years.
Larry

Gary Strand
June 22, 2009 3:22 pm

One problem – climate models do not predict weather, and as Lorenz showed in the 1960s, our knowledge of the current state of the atmosphere (much less the entire climate system) will *never* be perfect enough to go out more than two weeks or so.
Therefore, no climate model will ever be able to meet your criteria for validation, because it simply cannot be done, and never will be. Sorry.
Lastly, climate modelers don’t ask anyone to “take their word for it”. There are many papers and so forth that detail what climate models do right, what they do wrong, and ideas as to the why for both. There’s also the CMIP3 archive, in which you can access all the climate model data you could ever want, so you can look into them yourself. At least one climate model, CCSM3, also provides the entire source code as well as all necessary input datasets to do your own runs. Nothing hidden, or kept secret, or locked away, at all.
BTW, the Comet’s problem wasn’t that the wings fell off; it was that the cabin explosively depressurized, due to a lack of understanding of the effects of pressurization cycles on metal, resulting in fatigue and cracking. IIRC.

June 22, 2009 3:27 pm

Gary Strand (14:07:32) :
Nasif Nahle (14:03:40) :
Arrhenius was wrong?

Oh, yeah! Arrhenius was wrong on his sensitivity magnitude:
http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf
Even Schwartz is wrong.
Van Ness, Hottel, Stephan, etc., were right because their studies were based on observations and experimentation, not on simple speculation.

Gary Strand
June 22, 2009 3:27 pm

One problem, Pamela, with your timeline and theory about how CO2 came to be regarded as the “bad guy” for warming. Arrhenius showed in 1896 that increasing “carbonic acid” (aka CO2) in the atmosphere warms the surface.
That’s long before atmospheric CO2 concentrations were measured.

Gary Strand
June 22, 2009 3:43 pm

Nasif Nahle (15:27:45) :
You have intriguing ideas. Have you thought of submitting them for publication?

June 22, 2009 3:59 pm

Gary Strand (15:43:01) :
Nasif Nahle (15:27:45) :
You have intriguing ideas. Have you thought of submitting them for publication?

I’ve submitted and published not my ideas but my work assessing this issue, based not on ideas but on data obtained by many scientists who worked on heat transfer science and climate physics, from observation of nature and experimentation.

Pamela Gray
June 22, 2009 4:24 pm

Gary, I have no quarrel with the important role greenhouse gasses play in our environment. The hypothesis that CO2 is one of our greenhouse gasses appears to have validity. What I wonder about and question is the human-caused CO2 in situ influence in a highly variable real setting, where CO2 is also a natural and necessary variable in our set of greenhouse gasses. I think the human-caused portion of CO2’s modeled influence is over-stated and modeled endogenous weather pattern variation drivers under-stated. That would certainly explain our current weather pattern over the last 10 to 12 years and clearly shows influence in the 1998 El Nino coupled temp spike. CO2 scientists would agree that human-caused increased CO2 did not cause that spike in temps. What they do say is that CO2 may have made it slightly worse (by less than a degree).

Rob
June 22, 2009 4:25 pm

I wonder why F1 teams without access to a wind tunnel never win anything.

kurt
June 22, 2009 5:51 pm

“Gary Strand (15:22:15) :
One problem – climate models do not predict weather, and as Lorenz showed in the 1960s, our knowledge of the current state of the atmosphere (much less the entire climate system) will *never* be perfect enough to go out more than two weeks or so.
Therefore, no climate model will ever be able to meet your criteria for validation, because it simply cannot be done, and never will be. Sorry.”
I think you may be missing the point of the original argument, which is that, without this kind of validation, no conclusions can be drawn as to the accuracy of the computer models. Whether or not the criterion is impossible is irrelevant – the question is whether it is reasonable to require that a model’s output be validated before accepting it as a basis for changing policy. In my opinion, it is reasonable to have that requirement.
I believe it is foolish to divorce the question of how well we understand a system from the concurrent question of what demonstrable practical use we have put that knowledge to. We understand atmospheric phenomena well enough to predict weather three days in advance, but not well enough to predict weather more than five days out. We understand electromagnetics well enough to design long-distance power transmission lines, and gravity well enough to predict the orbits of planets around the Sun, but we don’t understand the interaction between the two well enough to predict the occurrence of sunspots. And this doesn’t just mean that you prove your knowledge of a system by applying it; it means that the applications ARE THEMSELVES the very benchmark by which you assess your understanding of a system. It is a logical fallacy to start with an unproven and entirely subjective premise of how well you understand a system (e.g., the climate) and from that premise conclude that you have the ability to perform some specific application (e.g., accurately predict the response of a climate variable to changes in CO2) before you’ve actually, verifiably, done it.
For me to accept that we understand the climate system well enough to construct a computer model that accurately predicts the long-term response of the climate to a doubling of CO2, I need a track record of accurate predictions of that kind. Moreover, it’s not good enough to show that a computer model accurately simulates the climate conditions that we know (or think we know) occurred in the past. Without even going into that cliche about elephants and their moving trunks, just think of the sunspot model announced about a year and a half ago that bragged an 85% or 90% fit to previous sunspot cycles but is being proven spectacularly wrong in its first prediction. Fitting a model to past performance data is a mathematical task, not a scientific one, and the fact that a model can be adjusted to fit past data at best shows that the model is consistent with the data. It does not prove exclusivity, i.e., that there is no other model that also fits the data yet has substantively different output. It says nothing about the likelihood that the model’s output accurately simulates the real-world system’s future behavior.
My central problem with the manmade global warming theory is that this particular field of science is inherently uncertain. We can’t even measure or observe changes in climate except over time intervals of decades, if not centuries. The assertion that we’ve somehow, in the last few decades, mastered all that needs mastering so as to quantify not only mankind’s impact on temperature but also the secondary effects that the temperature increase has on weather phenomena (droughts, hurricanes, etc.) is a ludicrous proposition – one that only the terminally gullible would accept.

Jesper
June 22, 2009 7:10 pm

Gary Strand,
Where do you draw the line between unpredictable weather and predictable climate? From your comments on this thread, presumably this scale includes only intervals longer than one year… over what time integration would you say we can reasonably hold climate models to the test? 5 years? 10 years? 20? 50? 100?

June 22, 2009 7:40 pm

Gary Strand (15:27:48) :
One problem, Pamela, with your timeline and theory about how CO2 came to be regarded as the “bad guy” for warming. Arrhenius showed in 1896 that increasing “carbonic acid” (aka CO2) in the atmosphere warms the surface.
That’s long before atmospheric CO2 concentrations were measured.

Arrhenius said that doubling the atmospheric CO2 concentration would warm the Earth by 4 °C, or that cutting CO2 to one half would cool the Earth by 4 °C. He was wrong:
ΔT = 5.35 ln(2) W/m^2 / (4 σ T^3) ≈ 0.98 K
Even when I introduce the sensitivity provided by Arrhenius, those 4 °C are nowhere to be found; hence, Arrhenius was wrong.
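For what it’s worth, the arithmetic behind that figure can be reproduced in a few lines of Python, assuming the standard simplified forcing of 5.35 ln(2) W/m^2 for a CO2 doubling and an effective radiating temperature of about 255 K (the temperature is an assumption here, since none is stated above):

    import math

    sigma = 5.670374e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
    T     = 255.0                     # assumed effective radiating temperature, K
    dF    = 5.35 * math.log(2)        # forcing for a CO2 doubling, W m^-2
    dT    = dF / (4 * sigma * T**3)   # linearized Stefan-Boltzmann response
    print(round(dT, 2))               # about 0.99 K, close to the 0.98 K quoted above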

Pamela Gray
June 22, 2009 9:14 pm

Nasif, just because he overstated his thesis does not mean he was wrong about the underlying premise that CO2 is one of our greenhouse gasses and functions as an important component regarding greenhouse gas heat retention. Without these gasses, we would probably not be here.

June 22, 2009 9:41 pm

Pamela Gray (21:14:29) :
Nasif, just because he overstated his thesis does not mean he was wrong about the underlying premise that CO2 is one of our greenhouse gasses and functions as an important component regarding greenhouse gas heat retention. Without these gasses, we would probably not be here.
Without oceans (water) and carbon dioxide, we would not be here; it’s a positive assertion.
Hi Pamela… “Greenhouse” gases don’t warm up the Earth. The Sun warms up the Earth; the “greenhouse” gases allocate the heat into more available microstates; they don’t generate heat. It’s like a volleyball court… The ball is the heat, the Sun is the player setting (setter), the defensive team (at the opposite side of the setter) is the oceans and land and the offensive team is the “greenhouse” gases. 😉

June 22, 2009 9:47 pm

@Pamela… I forgot to say that Arrhenius was wrong also regarding his underlying premise because the carbon dioxide would work as a coolant if its mass in the atmosphere increases and the load of energy incoming to the Earth doesn’t increase, i.e. if the intensity of solar radiation doesn’t increase.

Gary Strand
June 23, 2009 5:31 am

Nasif Nahle (15:59:53) :
“I’ve submitted and published not my ideas but my work assessing this issue, based not on ideas but on data obtained by many scientists who worked on heat transfer science and climate physics, from observation of nature and experimentation.”
Your ideas have been published, then – in what journal(s)?

Gary Strand
June 23, 2009 5:34 am

Jesper (19:10:23) :
“Where do you draw the line between unpredictable weather and predictable climate? From your comments on this thread, presumably this scale includes only intervals longer than one year… over what time integration would you say we can reasonably hold climate models to the test? 5 years? 10 years? 20? 50? 100?”
20 years minimum.

Gary Strand
June 23, 2009 5:35 am

To the folks requesting validation of a climate model before they accept them as reasonable tools – what are your metrics, and why?

June 23, 2009 7:25 am

Gary Strand (05:31:13) :
Your ideas have been published, then – in what journal(s)?
Nope, they’re not ideas; I didn’t invent natural processes. They’re what scientists have observed in nature and, when possible, experimented on in labs. AGW is an idea.
Every article submitted, whether didactic, theoretical or informative, is peer reviewed before publication on Biocab.org. Some of my articles have been published by universities; for example, Astrobiology, Heat Transfer, Heat Stored by Atmospheric Gases, The Abiotic Origin of Life, etc.

Gary Strand
June 23, 2009 8:19 am

I understand you’re an empiricist, not a rationalist.
When I said published, I meant in a journal, not a website. After all, if you can convincingly show that “Arrhenius was wrong also regarding his underlying premise because the carbon dioxide would work as a coolant if its mass in the atmosphere increases and the load of energy incoming to the Earth doesn’t increase, i.e. if the intensity of solar radiation doesn’t increase”, then you’re going to overturn more than a century of understanding.

June 23, 2009 9:12 am

Gary Strand (08:19:10) :
I understand you’re an empiricist, not a rationalist.
When I said published, I meant in a journal, not a website. After all, if you can convincingly show that “Arrhenius was wrong also regarding his underlying premise because the carbon dioxide would work as a coolant if its mass in the atmosphere increases and the load of energy incoming to the Earth doesn’t increase, i.e. if the intensity of solar radiation doesn’t increase”, then you’re going to overturn more than a century of understanding.

There is no need to write an article about Arrhenius’ mistakes; take any book on heat transfer and you’ll find those errors. If the source of heat doesn’t change its intensity and the mass of carbon dioxide increases, the carbon dioxide will act as a coolant:
dT = q / (m Cp)
It’s the basic formula for calculating the change of temperature of any substance.
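As a plain illustration of that formula with made-up numbers (the heat input and mass are arbitrary, and the specific heat is an approximate handbook value for CO2 near room temperature):

    q  = 1000.0    # heat added, J (arbitrary)
    m  = 1.0       # mass of CO2, kg (arbitrary)
    cp = 844.0     # approximate specific heat of CO2 near 300 K, J/(kg*K)

    dT = q / (m * cp)
    print(round(dT, 2), "K")   # about a 1.2 K rise for this parcel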

kurt
June 23, 2009 2:06 pm

“Gary Strand (05:35:37) :
To the folks requesting validation of a climate model before they accept them as reasonable tools – what are your metrics, and why?”
As of today, there are no metrics by which models can be validated. That’s why many don’t believe they can be relied upon. Perhaps after about 75 years or so, a computer model in existence today could be validated with respect to a forecast of a climate variable (e.g., temperature) if, say, its running 10-year mean predicted temperatures were within 95% of the measured running 10-year mean of temperatures over 70 of those 75 years. That would be impressive, but I would add that the important part is that you collect the metrics empirically to quantify how reliable the model is. If it turned out to be within 75% of the actual 10-year mean in 70 of 75 years, you would still have an objective way of measuring how reliable a tool the model is. But right now, there is nothing.
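A minimal Python sketch of the bookkeeping that metric implies: compute 10-year running means for model and observations and count how many fall within a tolerance. The two series and the 5% tolerance below are invented for illustration:

    import numpy as np

    def running_mean(x, window=10):
        return np.convolve(x, np.ones(window) / window, mode="valid")

    rng   = np.random.default_rng(0)
    years = 75
    obs   = 14.0 + 0.010 * np.arange(years) + rng.normal(0, 0.1, years)  # invented obs, deg C
    model = 14.0 + 0.012 * np.arange(years) + rng.normal(0, 0.1, years)  # invented model, deg C

    obs_rm, mod_rm = running_mean(obs), running_mean(model)
    within = np.abs(mod_rm - obs_rm) <= 0.05 * np.abs(obs_rm)   # within 5% of the observed mean
    print(int(within.sum()), "of", within.size, "ten-year means within tolerance")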

hotrod
June 23, 2009 2:12 pm

Gary Strand (05:35:37) :
To the folks requesting validation of a climate model before they accept them as reasonable tools – what are your metrics, and why?

The National Weather Service recognizes the need to formally validate flood forecasts due to their impact on public planning and expenditures.
http://www.nws.noaa.gov/oh/rfcdev/docs/Final_Verification_Report.pdf
Why should we not expect a similar formal review of the climate model projections?
Can you give any rationale that supports the idea that a similar organized effort to evaluate and improve climate models is unwarranted?
Public costs incurred due to faulty flood forecasts would be counted in the multimillion-dollar range. Public costs due to faulty climate forecasts would tally in the hundreds of billions to multiple trillions of dollars.
The Nuclear Regulatory Commission requires formal evaluation of a nuclear plant and the possible impact of its maximum credible accident, and formal testing and evaluation of the adequacy of emergency response planning due to the high public costs and impacts a nuclear plant accident would have.
The EPA requires similar impact studies on major industrial plants that might impact the public, and emergency response activities in communities.
It is the climate modeling community that has the burden of proof to show why their models should not be held to similar standards of formal validation and review.
To base public policy on untested computer models is pure idiocy! They are either competent and useful, or incompetent and harmful, or statistically meaningless. Until we know which of those three options is true, given the costs involved we should assume they are useless or harmful (first, do no harm).
The easiest metric to use would be to show that they perform better, to a statistically significant degree, than a naive forecast that simply forecasts more of the same we had last year or the last few years.
There are two simple variations of this. One is that the future conditions will be the same as the historical climatic variation (for example, will fall within some error of the 1971-2000 average).
The other is persistence, i.e. that the future conditions will be essentially identical to today’s conditions.
To have merit, the model projection would have to beat both those metrics by a statistically significant margin.
http://www.forecastadvisor.com/blog/
If their forecasts (projections) are within the range of historical natural variation, then they also need to prove that they are predicting something that would not have happened without increasing CO2.
If the climate change forecast makes some projection about sea surface temperature rise over the next century, for example, then unless they can show a scientifically valid reason to the contrary, it should be tested against 1/10 of that rise over 10 years. Likewise for other major features of their forecasts. If they are scientifically valid, the authors of the model should be able to state ahead of time what the error bars are for their benchmarks on key events, and the window of performance they must fly through to be meaningful.
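A rough Python sketch of that naive-baseline comparison, with invented numbers; the point is only the bookkeeping, i.e. a model forecast counts for something only if its error beats both climatology and persistence:

    import numpy as np

    rng = np.random.default_rng(1)
    obs         = 0.2 + rng.normal(0, 0.15, 30)   # invented observed anomalies, deg C
    model_fcst  = obs + rng.normal(0, 0.10, 30)   # invented model forecast for the same years
    climatology = np.zeros_like(obs)              # baseline 1: the long-term (e.g. 1971-2000) mean
    persistence = np.full_like(obs, obs[0])       # baseline 2: same as the starting year

    def rmse(forecast, truth):
        return np.sqrt(np.mean((forecast - truth) ** 2))

    for name, fcst in [("model", model_fcst),
                       ("climatology", climatology),
                       ("persistence", persistence)]:
        print(name, "RMSE =", round(rmse(fcst, obs), 3), "deg C")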
Larry

Gary Strand
June 23, 2009 3:39 pm

kurt (14:06:19) :
“[…]Perhaps after about 75 years or so, a computer model in existence today could be validated with respect to a forecast of a climate variable (e.g., temperature) if, say, its running 10-year mean predicted temperatures were within 95% of the measured running 10-year mean of temperatures over 70 of those 75 years. That would be impressive, but I would add that the important part is that you collect the metrics empirically to quantify how reliable the model is. If it turned out to be within 75% of the actual 10-year mean in 70 of 75 years, you would still have an objective way of measuring how reliable a tool the model is.”
Have you done this test using the available data from the CMIP3 archive for the 20th-century model runs, compared against your favorite observational data, for surface temperature?
That would be an interesting test.

Gary Strand
June 23, 2009 3:44 pm

hotrod (14:12:57) :
“The easiest metric to use would be to show that they perform better, to a statistically significant degree, than a naive forecast that simply forecasts more of the same we had last year or the last few years.”
As I asked Kurt, have you exploited the CMIP3 climate model data archive versus your favorite obs data and made this examination?

June 23, 2009 7:48 pm

It seems models are more real than reality… Heh!

hotrod
June 24, 2009 6:56 am

Gary Strand (15:44:41) :
hotrod (14:12:57) :
“The easiest metric to use would be to show that they perform better, to a statistically significant degree, than a naive forecast that simply forecasts more of the same we had last year or the last few years.”
As I asked Kurt, have you exploited the CMIP3 climate model data archive versus your favorite obs data and made this examination?

And just why should I do someone else’s job? It is up to the model developers to show they have a clue what is going on, not the people who are paying them to do the job.
If they care so little about the validity of their product that they will not even invest a small fraction of their time showing it has value, why should I pay the slightest attention to their projections?
Do you think it is the shopper’s job to verify prices in a store?
Do you think it is the patient’s job to certify his doctors?
Do you think it is the buyer’s job to crash-test cars?
Larry

Gary Strand
June 24, 2009 3:26 pm

Geez, don’t get so upset.

Gary Strand
June 24, 2009 9:23 pm

One other problem with the “forecast” metric – what if a tropical volcano erupts during the period? Does that invalidate the climate model? On what grounds?