(h/t to Michael Ronayne)
Sunspots Revealed in Striking Detail by Supercomputers
BOULDER—In a breakthrough that will help scientists unlock mysteries of the Sun and its impacts on Earth, an international team of scientists led by the National Center for Atmospheric Research (NCAR) has created the first-ever comprehensive computer model of sunspots. The resulting visuals capture both scientific detail and remarkable beauty.

The high-resolution simulations of sunspot pairs open the way for researchers to learn more about the vast mysterious dark patches on the Sun’s surface. Sunspots are the most striking manifestations of solar magnetism on the solar surface, and they are associated with massive ejections of charged plasma that can cause geomagnetic storms and disrupt communications and navigational systems. They also contribute to variations in overall solar output, which can affect weather on Earth and exert a subtle influence on climate patterns.
The research, by scientists at NCAR and the Max Planck Institute for Solar System Research (MPS) in Germany, is being published this week in Science Express.
“This is the first time we have a model of an entire sunspot,” says lead author Matthias Rempel, a scientist at NCAR’s High Altitude Observatory. “If you want to understand all the drivers of Earth’s atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the Sun as well as connections between solar output and Earth’s atmosphere.”
Ever since outward flows from the center of sunspots were discovered 100 years ago, scientists have worked toward explaining the complex structure of sunspots, whose number peaks and wanes during the 11-year solar cycle. Sunspots encompass intense magnetic activity that is associated with solar flares and massive ejections of plasma that can buffet Earth’s atmosphere. The resulting damage to power grids, satellites, and other sensitive technological systems takes an economic toll on a rising number of industries.
Creating such detailed simulations would not have been possible even as recently as a few years ago, before the latest generation of supercomputers and a growing array of instruments to observe the Sun. Partly because of such new technology, scientists have made advances in solving the equations that describe the physics of solar processes.
The work was supported by the National Science Foundation, NCAR’s sponsor. The research team improved a computer model, developed at MPS, that built upon numerical codes for magnetized fluids that had been created at the University of Chicago.
Computer model provides a unified physical explanation
The new computer models capture pairs of sunspots with opposite polarity. In striking detail, they reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions. They also capture the convective flow and movement of energy that underlie the sunspots, and that are not directly detectable by instruments.
The models suggest that the magnetic fields within sunspots need to be inclined in certain directions in order to create such complex structures. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that is the consequence of convection in a magnetic field with varying properties.
The simulations can help scientists decipher the mysterious, subsurface forces in the Sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.
Supercomputing at 76 trillion calculations per second
To create the model, the research team designed a virtual, three-dimensional domain that simulates an area on the Sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth – an expanse as long as eight times Earth’s diameter and as deep as Earth’s radius. The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR’s new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second.
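As a rough back-of-the-envelope check, the stated domain and the 1.8 billion grid points are consistent with a spacing near the middle of the quoted “10 to 20 miles”; the sketch below assumes a uniform 16-mile spacing, which is not stated in the article.

```python
# Rough consistency check of the simulation grid described above.
# The 16-mile spacing is an assumption; the article only says "about 10 to 20 miles".
width_mi, length_mi, depth_mi = 31_000, 62_000, 3_700
spacing_mi = 16

nx = width_mi // spacing_mi    # points across the width
ny = length_mi // spacing_mi   # points along the length
nz = depth_mi // spacing_mi    # points through the depth
points = nx * ny * nz

print(f"{nx} x {ny} x {nz} = {points:,} grid points")  # roughly 1.7 billion
```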
The work drew on increasingly detailed observations from a network of ground- and space-based instruments to verify that the model captured sunspots realistically.
The new models are far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region. The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. They can refine the model by placing the grid points even closer together, but that would require more computing power than is currently available.
“Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the Sun,” says Michael Knölker, director of NCAR’s High Altitude Observatory and a co-author of the paper. “With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the Sun’s surface.”
First view of what goes on below the surface of sunspots. Lighter/brighter colors indicate stronger magnetic field strength in this subsurface cross section of two sunspots. For the first time, NCAR scientists and colleagues have modeled this complex structure in a comprehensive 3D computer simulation, giving scientists their first glimpse below the visible surface to understand the underlying physical processes. This image has been cropped horizontally for display. (©UCAR, image courtesy Matthias Rempel, NCAR.)
See a video animation of this and other sunspot visualizations as well as still “photo” images in the Sunspots Multimedia Gallery.

Models are simply tools. They are used in many different disciplines as an aid to understanding and to provide predictions which can be tested experimentally.
They are particularly suited to linear problems which have a wealth of factual information on which to develop algorithms, usually provided by experiment. They are poor to useless when used for chaotic non-linear systems where the drivers are poorly understood, as is the case for what happens to the bodies in our solar system.
Too much reliance on badly constructed models, which have been bought to provide politically motivated outcomes, could potentially drive science into a future dark age.
Science which cannot be falsified is no better than a religion regarding the prediction of future events.
Mike D. (11:29:21) :
Maybe next they can simulate the Cubs winning the pennant.
Probably not. No supercomputer has enough power for those variables.
But I think you could use a small pocket computer to model Brett Favre with Adrian Peterson in the backfield winning a Super Bowl with Minnesota!
Models are great once they have been validated. Modern aerodynamics and hydrodynamics also use modeling, but those models have been validated thousands of times. They put in the details of a new ship or aircraft design, let the model crunch the numbers, then put a model of the finished design in a drag tank (for the ship) or a wind tunnel and verify that the predictions the model churned out match up with the real world. Over time they have gotten the numerical methods good enough to narrow the possibilities down to manageable numbers, but they still test fly the plane and do scale model tests to verify the final values.
Here is a simple test: plug in the real-world weather data for today and run a simulation on the models for a year from now. Take the output and place it in a safe deposit box. One year from now, look at the simulation output and compare it to the real-world weather on that end date.
You cannot “test” a model against the pre-existing data it was designed to mimic; that is akin to checking a mathematical calculation by doing the exact same calculation twice and proclaiming it validated because you got the same result the second time. A valid check must use new data and accurately predict an unknown future event. Making a prediction 100 years into the future is like a fortune teller predicting the number of great-grandchildren you will have. It is untestable in any usable time frame.
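A minimal sketch of that hold-out idea, using synthetic data and a simple straight-line fit rather than any actual climate model; every number and variable here is illustrative only.

```python
# Illustrative only: a trend "tuned" to past data always looks good on that
# same data; the meaningful check is against data the fit has never seen.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100.0)
series = 0.02 * t + rng.normal(0.0, 1.0, t.size)   # synthetic noisy record

t_past, t_future = t[:70], t[70:]                  # hold out the last 30 steps
past, future = series[:70], series[70:]

slope, intercept = np.polyfit(t_past, past, 1)     # fit on the past only

in_sample_mse = np.mean((slope * t_past + intercept - past) ** 2)
out_sample_mse = np.mean((slope * t_future + intercept - future) ** 2)
print(f"in-sample MSE {in_sample_mse:.2f} vs out-of-sample MSE {out_sample_mse:.2f}")
```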
Larry
hotrod (08:35:23):
“Here is a simple test: plug in the real-world weather data for today and run a simulation on the models for a year from now. Take the output and place it in a safe deposit box. One year from now, look at the simulation output and compare it to the real-world weather on that end date.”
Climate models aren’t NWP models, and we cannot know the “real world weather data” to sufficient precision (the ultimate limiter being Heisenberg) to allow a one-year forecast.
Your “simple test” is anything but, and no model can ever pass it. Try a different metric.
Steven Hill (07:25:19) :
Those spots on the sun, are they part of cycle 23?
They’re cycle 24, because of their high latitude and magnetic signatures.
Gary Strand, Dr. Hansen has been issuing predictions for 30 years, and now the Arctic ice is near normal and the Antarctic ice is well above normal. Most of the Stevenson screens he relies on have been shown not to meet NOAA guidelines. I think it’s foolish to be anything but skeptical given those circumstances.
It’s these wrong predictions that cause the most skepticism. And these have occurred over a long period of time. 30 years isn’t cherry-picking and it isn’t just “weather”. Then there was his fantastically wrong 2006 Super El Nino prediction. And we’re supposed to bet all of our livelihoods on this?
Environmentalists have a rich history going back to the 70s of crying wolf without much consequence. Paul Ehrlich said we’d be eating each other by the 1980s… eaten a neighbor lately?
Modelling the past is relatively easy. Tinker ’til it fits. Knowing the future? That’s what separates the rich from the poor on Wall Street. The AGW crowd is losing credibility with the public with every cool summer and brutal winter.
Gary Strand;-)
Kath;-)
Gary Pearse;-)
Engineering programmes are as Kath has stated. They are based on known behavioural characteristics of materials & structural forms, derived from theory & testing. Generally a full-size model would be made for best results. As engineers, we know pretty much how steel, concrete, timber, masonry, aluminium, and glass behave as materials, although they can & do throw in the odd wobbly every now & then in practice (observed reality)! Computers only reflect in their output the input they are given.
The simplest test of an engineering computer programme/model, I am afraid to say, is to take a pencil (2B preferably) and a pad of plain paper, do a couple of sums by hand (God forbid such heretical goings on), sketch out the bending moment, shear force, & most importantly the deflected form you think you should get, then run the computer model with the same parameters; if they match up reasonably well, then the computer model is probably right. Remember, most structural engineering design by computer is simply number crunching the equations that were worked out by hand in the past. These programmes should only be used by very experienced engineers who know by “feel” whether the answer they get is in the right ball park, whereas fresh-faced graduates tend to rely thoroughly on their computer output for the answers, without getting the chance to develop that “feel”. (I kid you not, it’s frightening at times.) I expect this applies to many fields!

So how do these Climate Modelers know they have the right “feel” for what they get out? How can they? Ultimate validation has to be observed reality. So-called “predictive” modeling is in its infancy & may never become a reality with things like the Sun & climate. With these ever finer scales of modeling at the input end, there is a distinct possibility of disappearing up one’s own output!
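For concreteness, that pencil-and-paper check boils down to a few textbook formulas. A minimal sketch for a simply supported beam under uniform load follows; the load, span, and section numbers are hypothetical.

```python
# Hand-check formulas for a simply supported beam under uniform load:
#   max bending moment  M = w*L^2/8
#   max shear           V = w*L/2
#   midspan deflection  d = 5*w*L^4 / (384*E*I)
# All numbers below are hypothetical, chosen only to illustrate the check.
w = 10.0      # uniform load, kN/m
L = 6.0       # span, m
E = 200e6     # Young's modulus of steel, kN/m^2 (200 GPa)
I = 8.0e-5    # second moment of area, m^4

M_max = w * L**2 / 8                    # kN*m
V_max = w * L / 2                       # kN
d_mid = 5 * w * L**4 / (384 * E * I)    # m

print(f"M_max = {M_max:.1f} kN*m, V_max = {V_max:.1f} kN, "
      f"midspan deflection = {d_mid * 1000:.1f} mm")
```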
If one were to believe what the GCM makers imply, they “know” pretty much everything about the climate, & can “accurately” estimate the behaviour of what they don’t! I say again, if these guys had a little more incentive, like loss of position/job/pension/home if they are shown to be wrong, then maybe the “uncertainties” might just get a little more front-page news!
Which is precisely the point. We do not have initial information of sufficient precision to make a hundred year calculation even if the mathematics were perfect.
The mathematics are not perfect.
The granularity of the models is insufficient to allow meaningful projections that far into the future.
And last but not least, even on short runs they are not validating against reality, so we know they are broken.
Maybe in 50-100 years they will be workable, but right now all they are is SWAGs.
Larry
hotrod (09:43:52) :
“Which is precisely the point. We do not have initial information of sufficient precision to make a hundred year calculation even if the mathematics were perfect.”
Not quite true. Consider that dropping a ball from a height can lead to a very good guess at the time it will take, without having to know ‘g’ to infinite precision, the air resistance to infinite precision, the mass of the ball to infinite precision, an infinitely-precise stopwatch, and so on.
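A minimal sketch of that dropped-ball point: with the simple free-fall estimate t = sqrt(2h/g), a couple of percent of error in the inputs barely moves the answer. The 50 m height and the error sizes are assumed for illustration.

```python
# Illustrative only: the predicted fall time changes by a few hundredths of a
# second when the height and g are known only approximately.
import math

def fall_time(height_m, g=9.81):
    """Free-fall time from rest, ignoring air resistance."""
    return math.sqrt(2.0 * height_m / g)

nominal = fall_time(50.0)                    # assumed 50 m drop
perturbed = fall_time(50.0 * 1.02, g=9.78)   # 2% height error, slightly-off g

print(f"nominal {nominal:.2f} s, perturbed {perturbed:.2f} s, "
      f"difference {abs(nominal - perturbed) * 1000:.0f} ms")
```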
Climate model projections are analogous – do we really need to know the state of the climate system infinitely precisely to make a projection of the future? Not really – which isn’t to say that just any random input state will represent the real Earth.
Do we know everything we really need to know about the climate system to make perfectly accurate forecasts? No. Does that mean that we know virtually nothing and any projection is just sheer guesswork? No.
Alan the Brit (09:13:43) :
“If one were to believe what the GCM makers imply, they “know” pretty much everything about the climate, & can “accurately” estimate the behaviour of what they don’t!”
You’re erecting a strawman. I don’t know of any modeler that claims to “know” “pretty much everything” – we do know the major drivers of climate well enough to create models of it that are reasonably correct.
One thing I’ve noticed about skeptics is that they have unreasonable expectations of proof. It’s kinda like a trial – the prosecution only has to prove its case beyond a *reasonable* doubt, not *any* doubt.
David Corcoran (09:11:22) :
“The AGW crowd is losing credibility with the public with every cool summer and brutal winter.”
That’s because a single summer or winter, alone, does not disprove (or prove) AGW.
Gary Strand:
Sorry if you took my remarks about climate modelers designing bridges as snarky. I didn’t intend to offend. My point was that engineers, by the nature of their tasks, can’t afford to be wrong – their failures are dramatic parts of human history. It was you who brought engineers into the discussion on models, and I wanted the differences to be clear. I was wrong in my statement that a global warmer couldn’t design a bridge using a model. I’m sure there are structural engineers who have bought into the settled-science hypothesis. Moreover, an engineer’s model may well be usable by a layman (even if it wouldn’t be permitted) because it has been very well developed from experience. One final point: it seems to me that many of the spokespersons for the validity of climate models are often not scientists in that field. They are more believers. Al Gore is a politician, the heads of the IPCC are a railway engineer and an economist, Hansen is an astronomer… An engineer would be most surprised to have a school teacher, social worker, nuclear physicist, organic chemist, and bakery chef argue spiritedly and vehemently about the pros and cons of a structural engineer’s model.
I am a structural engineer and “computer modeler” in the sense that I have been creating Finite Element Analysis software for more than 20 years. What you say is very true. A famous example of things going wrong with this kind of analysis was the Sleipner A gravity-base offshore platform, which suffered catastrophic failure on August 23, 1991. It was caused by inappropriate use of the Finite Element Method by inexperienced engineers. The huge structure collapsed in one of our fjords and caused a magnitude 3 earthquake.
http://www.ima.umn.edu/~arnold/disasters/sleipner.html
P.S. My software was not used in this case, but it could have been. It was a case of garbage in, garbage out.
Dear Colleagues… At last, my article was sent back completely reviewed. The bad news is that it was classified as an academic article, that is, a didactic article. I’m struggling to have it published as a peer-reviewed paper. In the meantime, you can see a graph of the extrapolated TSI data going back some 11,550 years…
http://www.biocab.org/Extrapolated_TSI.jpg
I still think that the ISG variable is a reliable proxy for calculating TSI before the advent of satellites and the counting of sunspots. 🙂
Gary Strand (09:56:04) :
You’re erecting a strawman. I don’t know of any modeler that claims to “know” “pretty much everything” – we do know the major drivers of climate well enough to create models of it that are reasonably correct.
One thing I’ve noticed about skeptics is that they have unreasonable expectations of proof. It’s kinda like a trial – the prosecution only has to prove its case beyond a *reasonable* doubt, not *any* doubt.
No, Gary, don’t deceive yourself. Skeptics don’t have unreasonable expectations of proof. There is no proof against a belief. What science shows – the good science of thermodynamics and heat transfer – is that CO2 is not a source of heat, that CO2 at its current partial pressure in the atmosphere cannot absorb and emit the loads of heat that AGW assumes, and that CO2’s total emittance is not enough to increase the atmospheric temperature by more than 0.03 K, etc.
Nasif Nahle (14:03:40) :
Arrhenius was wrong?
Just some thoughts on modeling.
The original premise of AGW was, looking back at its infancy, made from a statistical analysis of noisy weather over time. Weather pattern variation data was submitted to statistical analysis in order to create trend lines. Some used curvy nonlinear algorithms, some used straight linear algorithms, but the trend lines were statistically generated nonetheless.
Eventually, through political or scientific processes, or both, it was assumed that this averaging and subsequent statistical analysis revealed something other than what it was: the statistical average of weather over time. It was now assumed that this artificial trend line represented different data, that of a greenhouse-gas effect signature, which eventually became the notion that the trendline was directly related to human-caused greenhouse gas emissions. So devices were set up around the world to measure surface ozone pollution, CO2, and methane. It became apparent that CO2 and methane were increasing. Sinks were not directly measured but instead were calculated, again with assumptions as part of the calculation. Since these two gases are known greenhouse gases, the jump was made that the trendline in the temperature data was not a statistical artifact of noisy weather pattern variation, but was a direct measure of greenhouse gas influence on temperatures.
This assumed relationship was then mathematically modeled till the modeled trendline matched the observed trendline. A concerted and admitted assumption was made to dampen the effects of natural weather pattern variation drivers in the calculations. These models were then projected using varying levels of CO2/methane emissions, resulting in increasing temperature.
How did CO2 become the main culprit? Of the two, methane is the more powerful gas, but this would lead to an uneven application of restrictions that would not be tolerated. As in, the voting public would not like the price of meat being higher than their monthly house payment, and ranchers would simply go on strike. Farmers would likely have joined them. CO2 was chosen as the one to concentrate on politically because the burden would be shared by everybody and would likely not trigger agricultural outrage.
The problem with this development is that assumptions were made based on lab properties of greenhouse gases, much like early mistakes were made in understanding the physics and behavior of plasma in the lab versus plasma in the cosmos.
However, I can make assumptions as well. I can make a thought experiment that gives natural weather pattern drivers more influence while giving human-caused emissions less influence in my mathematical models, and end up with drawings that look very much like the CO2-modeled future if warm oceanic oscillations such as El Nino-dominant phases were to exist in varying strengths. I could also predict a downward trend if cool oceanic oscillations such as La Nina-dominant phases were to exist in varying strengths.
Which premise is correct? Both make the same degree of assumptions in terms of influence in situ, but using different variables. This would be a logical test if you simply match the current stalled temp trend with a model that matches it (and we know which set of models would win). However, the political arm of the scientific CO2 premise has already done an end run around that by saying their model is still the more correct one because once natural cool weather pattern variation drivers cease, the temperature rise will be catastrophic as CO2 caused temperature increases crawl into bed with natural warm weather pattern variation drivers.
Given that, a very good test of this debate would be under the condition of an extended El Nino. [And much to the consternation of my Solar friends, the Sun’s effect can be dismissed in this experiment. It can do whatever it wants to do. The effects of El Nino would bury any Solar influence.] My hunch is that runaway temps would not happen. Yes, it would be warmer than it is now, but it would not fry us like the end-run hypothesis mentioned above says it would.
You are trying to imply I am asserting things I am not, and create an indefensible argument.
I did not say they “knew nothing”; I did not say they needed to know “everything to infinite precision”. What I did say is that they have not made even the most elementary validations of their models.
If they could put today’s climate information (SST, solar flux, air temps, etc.) into their model, run the model for 30, 60, 90 days, and then compare the predictions of the model to reality with good results, then and only then could they assert with any certainty at all that they could predict, say, 6 months in the future. Once they get 10-20 acceptably accurate 6-month predictions, then they could reasonably assert that they could predict the weather 1 or 5 years in advance, and so on.
They are making extremely long predictions with no short-term validation tests to establish any sense of what the models’ precision is. In fact, some of their current projections are not validating even over short time periods. They are not getting the signature atmospheric warming they expect, and they did not predict the current stabilization and downturn in temps. Their Arctic ice predictions are not faring very well either, nor has the recent spate of cool, wet weather many areas have had this last winter and spring done much to show they have a good grasp on even regional climate, let alone global climate.
To take your falling ball example —
Let’s say some guy says that he can predict how long it takes for a ball to fall to the ground from a leaning tower.
First he needs to specify how he will measure the time. Will it be judged by eye, by the sound of the object hitting the ground, or by some other means? Then he would need to specify, before the experiment is run, what an acceptable error in that time prediction would be.
Suppose he says that he can predict the time of fall to a precision of a tenth of a second, and everyone agrees that for real world problems that is good enough.
Then he needs to drop the ball, and see if the actual fall time agrees with the predicted fall time. Then he needs to do it several more times to show it was not just a fluke. If all those drops come out acceptably close to the predicted fall time, then some other person needs to use his formula to predict the fall time of a different ball from a different tower.
Rinse and repeat.
After you have a few dozen or a few hundred successful tests that all agree with the prediction, then you can with some authority assert you have a model of a falling ball that can predict the fall time of any ball from any tower to an acceptable precision.
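A minimal sketch of that pass/fail bookkeeping, with made-up drop heights, made-up measured times, and an assumed tolerance of 0.1 s:

```python
# Illustrative only: compare each predicted fall time with a measurement and
# count how many fall within the agreed tolerance. All data here are invented.
import math

TOLERANCE_S = 0.1  # agreed-upon acceptable error, seconds

def predicted_fall_time(height_m, g=9.81):
    return math.sqrt(2.0 * height_m / g)

# (drop height in m, measured fall time in s) -- hypothetical trials
trials = [(45.0, 3.05), (45.0, 2.99), (30.0, 2.51), (30.0, 2.44), (56.0, 3.42)]

passes = sum(
    abs(predicted_fall_time(h) - measured) <= TOLERANCE_S
    for h, measured in trials
)
print(f"{passes}/{len(trials)} predictions within {TOLERANCE_S} s")
```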
The AGW community is making projections of events that won’t even happen for 30-100 years with no track record of being able to predict any shorter time interval to any well-accepted degree of accuracy. Everyone is just supposed to take their word for it that they used “math” and “physics” and they are “experts” and they used “computers”, so not to worry, the predictions are reliable! Please go spend a few trillion dollars, and while you are at it turn government regulations on their head, overhaul entire economies, break the back of a few industries, and create a few other industries (cap and trade) out of thin air, all on the “faith” that they got it right, even though they freely admit that some of the numbers they used in their model were “educated guesses”, and they do not have a single meaningful validation test under their belt.
They need to go through a formal validation process, not unlike the flight tests a new plane goes through. Even though it was “designed on a computer” with a very reliable and trusted mathematical model that has been validated thousands of times before, occasionally the plane does not do what the engineers expect. Sometimes the wings fall off (De Havilland Comet), sometimes fuel lines vibrate and break in flight, sometimes the thoroughly tested automatic pilot system thinks the pilot wants to land when he does not, and flies him into the ground or does something else it is not expected to do ( http://www.thesun.co.uk/sol/homepage/news/article700633.ece ).
When they can get 9 out of 10 predictions for climate conditions in 1, 5, or 10 years in the future, I might listen to them about a 50 year forecast. After they get a few of them right, then they can start asserting they have a clue what the climate will be in 100+ years.
Larry
One problem – climate models do not predict weather, and as Lorenz showed in the 1960s, our knowledge of the current state of the atmosphere (much less the entire climate system) will *never* be perfect enough to go out more than two weeks or so.
Therefore, no climate model will ever be able to meet your criteria for validation, because it simply cannot be done, and never will be. Sorry.
Lastly, climate modelers don’t ask anyone to “take their word for it”. There are many papers and so forth that detail what climate models do right, what they do wrong, and ideas as to the why for both. There’s also the CMIP3 archive, in which you can access all the climate model data you could ever want, so you can look into them yourself. At least one climate model, CCSM3, also provides the entire source code as well as all necessary input datasets to do your own runs. Nothing hidden, or kept secret, or locked away, at all.
BTW, the Comet’s problem wasn’t that the wings fell off, it’s that the cabin explosively depressurized, due to a lack of understanding of the effects of pressurization cycles on metal, resulting in fatigue and cracking. IIRC.
Gary Strand (14:07:32) :
Nasif Nahle (14:03:40) :
Arrhenius was wrong?
Oh, yeah! Arrhenius was wrong on his sensitivity magnitude:
http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf
Even Schwartz is wrong.
Van Ness, Hottel, Stephan, etc., were right because their studies were based on observations and experimentation, not on simple speculation.
One problem, Pamela, with your timeline and theory about how CO2 came to be regarded as the “bad guy” for warming. Arrhenius showed in 1896 that increasing “carbonic acid” (aka CO2) in the atmosphere warms the surface.
That’s long before atmospheric CO2 concentrations were measured.
Nasif Nahle (15:27:45) :
You have intriguing ideas. Have you thought of submitting them for publication?
Gary Strand (15:43:01) :
Nasif Nahle (15:27:45) :
You have intriguing ideas. Have you thought of submitting them for publication?
I’ve submitted and published not my ideas but my work assessing this issue, based not on ideas but on data obtained by many scientists who worked on heat transfer science and climate physics through observation of nature and experimentation.
Gary, I have no quarrel with the important role greenhouse gasses play in our environment. The hypothesis that CO2 is one of our greenhouse gasses appears to have validity. What I wonder about and question is the human-caused CO2 in situ influence in a highly variable real setting, where CO2 is also a natural and necessary variable in our set of greenhouse gasses. I think the human-caused portion of CO2’s modeled influence is overstated and the modeled endogenous weather pattern variation drivers understated. That would certainly explain our current weather pattern over the last 10 to 12 years, and clearly shows influence in the 1998 El Nino-coupled temp spike. CO2 scientists would agree that human-caused increased CO2 did not cause that spike in temps. What they do say is that CO2 may have made it slightly worse (by less than a degree).
I wonder why F1 teams without access to a wind tunnel never win anything.