(h/t to Michael Ronayne)
Sunspots Revealed in Striking Detail by Supercomputers
BOULDER—In a breakthrough that will help scientists unlock mysteries of the Sun and its impacts on Earth, an international team of scientists led by the National Center for Atmospheric Research (NCAR) has created the first-ever comprehensive computer model of sunspots. The resulting visuals capture both scientific detail and remarkable beauty.

The high-resolution simulations of sunspot pairs open the way for researchers to learn more about the vast mysterious dark patches on the Sun’s surface. Sunspots are the most striking manifestations of solar magnetism on the solar surface, and they are associated with massive ejections of charged plasma that can cause geomagnetic storms and disrupt communications and navigational systems. They also contribute to variations in overall solar output, which can affect weather on Earth and exert a subtle influence on climate patterns.
The research, by scientists at NCAR and the Max Planck Institute for Solar System Research (MPS) in Germany, is being published this week in Science Express.
“This is the first time we have a model of an entire sunspot,” says lead author Matthias Rempel, a scientist at NCAR’s High Altitude Observatory. “If you want to understand all the drivers of Earth’s atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the Sun as well as connections between solar output and Earth’s atmosphere.”
Ever since outward flows from the center of sunspots were discovered 100 years ago, scientists have worked toward explaining the complex structure of sunspots, whose number peaks and wanes during the 11-year solar cycle. Sunspots encompass intense magnetic activity that is associated with solar flares and massive ejections of plasma that can buffet Earth’s atmosphere. The resulting damage to power grids, satellites, and other sensitive technological systems takes an economic toll on a rising number of industries.
Creating such detailed simulations would not have been possible even as recently as a few years ago, before the latest generation of supercomputers and a growing array of instruments to observe the Sun. Partly because of such new technology, scientists have made advances in solving the equations that describe the physics of solar processes.
The work was supported by the National Science Foundation, NCAR’s sponsor. The research team improved a computer model, developed at MPS, that built upon numerical codes for magnetized fluids that had been created at the University of Chicago.
Computer model provides a unified physical explanation
The new computer models capture pairs of sunspots with opposite polarity. In striking detail, they reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions. They also capture the convective flow and movement of energy that underlie the sunspots, and that are not directly detectable by instruments.
The models suggest that the magnetic fields within sunspots need to be inclined in certain directions in order to create such complex structures. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that is the consequence of convection in a magnetic field with varying properties.
The simulations can help scientists decipher the mysterious, subsurface forces in the Sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.
Supercomputing at 76 trillion calculations per second
To create the model, the research team designed a virtual, three-dimensional domain that simulates an area on the Sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth – an expanse as long as eight times Earth’s diameter and as deep as Earth’s radius. The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR’s new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second.
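A quick back-of-the-envelope check of those numbers, as a minimal sketch: the ~20-mile horizontal and ~10-mile vertical spacings below are assumed values chosen from the stated 10-to-20-mile range, not figures taken from the paper.

```python
# Back-of-the-envelope check of the grid described above.
# The ~20-mile horizontal and ~10-mile vertical spacings are assumptions
# chosen from the stated 10-to-20-mile range, not values from the paper.

domain_x_mi = 31_000   # horizontal extent
domain_y_mi = 62_000   # horizontal extent (about eight Earth diameters)
domain_z_mi = 3_700    # depth (roughly Earth's radius)

dx_mi = 20.0           # assumed horizontal grid spacing
dz_mi = 10.0           # assumed vertical grid spacing

nx = domain_x_mi / dx_mi    # 1550
ny = domain_y_mi / dx_mi    # 3100
nz = domain_z_mi / dz_mi    # 370

total_points = nx * ny * nz
print(f"{nx:.0f} x {ny:.0f} x {nz:.0f} = {total_points:.2e} grid points")
# prints roughly 1.78e+09, consistent with the 1.8 billion points quoted above
```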
The work drew on increasingly detailed observations from a network of ground- and space-based instruments to verify that the model captured sunspots realistically.
The new models are far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region. The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. They can refine the model by placing the grid points even closer together, but that would require more computing power than is currently available.
“Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the Sun,” says Michael Knölker, director of NCAR’s High Altitude Observatory and a co-author of the paper. “With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the Sun’s surface.”
First view of what goes on below the surface of sunspots. Lighter/brighter colors indicate stronger magnetic field strength in this subsurface cross section of two sunspots. For the first time, NCAR scientists and colleagues have modeled this complex structure in a comprehensive 3D computer simulation, giving scientists their first glimpse below the visible surface to understand the underlying physical processes. (©UCAR, image courtesy Matthias Rempel, NCAR.)
See a video animation of this and other sunspot visualizations as well as still “photo” images in the Sunspots Multimedia Gallery.

“Gary Strand (15:22:15) :
One problem – climate models do not predict weather, and as Lorenz showed in the 1960s, our knowledge of the current state of the atmosphere (much less the entire climate system) will *never* be perfect enough to go out more than two weeks or so.
Therefore, no climate model will ever be able to meet your criteria for validation, because it simply cannot be done, and never will be. Sorry.”
I think you may be missing the point of the original argument, which is that, without this kind of validation, no conclusions can be drawn as to the accuracy of the computer models. Whether or not the criterion is impossible to meet is irrelevant – the question is whether it is reasonable to require that a model’s output be validated before accepting it as a basis for changing policy. In my opinion, it is reasonable to have that requirement.
I believe it to be foolish to divorce the question of how well we understand a system from the concurrent issue of the demonstrable practical use to which we have put that knowledge. We understand atmospheric phenomena well enough to predict weather three days in advance, but not well enough to predict weather more than five days out. We understand electromagnetics well enough to design long-distance power transmission lines, and gravity well enough to predict the orbits of planets around the sun, but we don’t understand the interaction between the two well enough to predict the occurrence of sunspots. And this doesn’t just mean that you prove your knowledge of a system by applying it; it means that the applications ARE THEMSELVES the very benchmark by which you assess your understanding of a system. It is a logical fallacy to start with an unproven and entirely subjective premise of how well you understand a system (e.g., the climate) and from that premise conclude that you have the ability to perform some specific application (e.g., accurately predict the response of a climate variable to changes in CO2) before you’ve actually, verifiably, done it.
For me to accept that we understand the climate system well enough to construct a computer model that accurately predicts the long-term response of the climate to a doubling of CO2, I need a track record of such accurate predictions. Moreover, it’s not good enough to show that a computer model accurately simulates the climate conditions that we know (or think we know) occurred in the past. Without even going into that cliché about elephants and their moving trunks, just think of the sunspot model announced about a year and a half ago that boasted an 85% or 90% fit to previous sunspot cycles, but whose first prediction is proving spectacularly wrong. Fitting a model to past performance data is a mathematical task, not a scientific one, and the fact that a model can be adjusted to fit past data at best shows that the model is consistent with the data. It does not prove exclusivity, i.e., that there is not another model that also fits the data yet has substantively different output. It says nothing about the likelihood that the model’s output accurately simulates the real-world system’s future behavior.
My central problem with the manmade global warming theory is that this particular field of science is inherently uncertain. We can’t even measure or observe changes in climate except over time intervals of decades, if not centuries. The assertion that we’ve somehow, in the last few decades, mastered all that needs mastering so as to quantify not only mankind’s impact on temperature but also the secondary effects that the temperature increase has on weather phenomena (droughts, hurricanes, etc.) is a ludicrous proposition – one that only the terminally gullible would accept.
Gary Strand,
Where do you draw the line between unpredictable weather and predictable climate? From your comments on this thread, presumably this scale includes only intervals longer than one year. Over what time integration would you say we can reasonably hold climate models to the test? 5 years? 10 years? 20? 50? 100?
Gary Strand (15:27:48) :
One problem, Pamela, with your timeline and theory about how CO2 came to be regarded as the “bad guy” for warming. Arrhenius showed in 1896 that increasing “carbonic acid” (aka CO2) in the atmosphere warms the surface.
That’s long before atmospheric CO2 concentrations were measured.
Arrhenius said that doubling the atmospheric CO2 concentration would warm the Earth by 4 °C, and that cutting CO2 to one half would cool the Earth by 4 °C. He was wrong:
ΔT = 5.35 · ln(2) W/m² / (4σT³) ≈ 0.98 K
Even when I introduced the sensitivity Arrhenius provided, those 4 °C are nowhere to be found; hence, Arrhenius was wrong.
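For readers who want to check that arithmetic, here is a minimal sketch of the no-feedback calculation the formula above expresses; the value T = 255 K (Earth’s effective radiating temperature) is an assumption inferred from the quoted 0.98 K result rather than stated in the comment.

```python
import math

# No-feedback (Planck) response to a doubling of CO2, reproducing the
# arithmetic in the formula above. T = 255 K is an assumed value inferred
# from the quoted 0.98 K result.
sigma = 5.670374e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T = 255.0                    # assumed effective radiating temperature, K

delta_F = 5.35 * math.log(2)     # forcing for 2xCO2, ~3.71 W/m^2
planck = 4 * sigma * T**3        # Planck response, ~3.76 W/m^2 per K
delta_T = delta_F / planck       # ~0.99 K, close to the 0.98 K quoted

print(f"dF = {delta_F:.2f} W/m^2, dT = {delta_T:.2f} K")
```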
Pamela Gray (21:14:29) :
Nasif, just because he overstated his thesis does not mean he was wrong about the underlying premise that CO2 is one of our greenhouse gasses and functions as an important component regarding greenhouse gas heat retention. Without these gasses, we would probably not be here.
Without oceans (water) and carbon dioxide, we would not be here; it’s a positive assertion.
Hi Pamela… “Greenhouse” gases don’t warm up the Earth. The Sun warms up the Earth; the “greenhouse” gases allocate the heat into more available microstates; they don’t generate heat. It’s like a volleyball court… the ball is the heat, the Sun is the player setting the ball (the setter), the defensive team (on the opposite side of the setter) is the oceans and land, and the offensive team is the “greenhouse” gases. 😉
@Pamela… I forgot to say that Arrhenius was wrong also regarding his underlying premise because the carbon dioxide would work as a coolant if its mass in the atmosphere increases and the load of energy incoming to the Earth doesn’t increase, i.e. if the intensity of solar radiation doesn’t increase.
Nasif Nahle (15:59:53) :
“I’ve submitted and published not my ideas, but my work on assessing this issue based, not on ideas, but on data obtained by many scientists who worked on heat transfer science and climate physics from observation of nature and experimentation.”
Your ideas have been published, then – in what journal(s)?
Jesper (19:10:23) :
“Where do you draw the line between unpredictable weather and predictable climate? From your comments on this thread, presumably this scale includes only intervals longer than one year. Over what time integration would you say we can reasonably hold climate models to the test? 5 years? 10 years? 20? 50? 100?”
20 years minimum.
To the folks requesting validation of a climate model before they accept them as reasonable tools – what are your metrics, and why?
Gary Strand (05:31:13) :
Your ideas have been published, then – in what journal(s)?
Nope, they’re not ideas; I didn’t invent natural processes. They’re what scientists have observed in nature and tested in labs, when that is possible. AGW is an idea.
Every article submitted, whether didactic, theoretical, or informative, is peer reviewed before publication on Biocab.org. Some of my articles have been published by universities; for example, Astrobiology, Heat Transfer, Heat Stored by Atmospheric Gases, The Abiotic Origin of Life, etc.
I understand you’re an empiricist, not a rationalist.
When I said published, I meant in a journal, not a website. After all, if you can convincingly show that “Arrhenius was wrong also regarding his underlying premise because the carbon dioxide would work as a coolant if its mass in the atmosphere increases and the load of energy incoming to the Earth doesn’t increase, i.e. if the intensity of solar radiation doesn’t increase”, then you’re going to overturn more than a century of understanding.
Gary Strand (08:19:10) :
I understand you’re an empiricist, not a rationalist.
When I said published, I meant in a journal, not a website. After all, if you can convincingly show that “Arrhenius was wrong also regarding his underlying premise because the carbon dioxide would work as a coolant if its mass in the atmosphere increases and the load of energy incoming to the Earth doesn’t increase, i.e. if the intensity of solar radiation doesn’t increase”, then you’re going to overturn more than a century of understanding.
There is no need to write an article about Arrhenius’ mistakes; take any book on heat transfer and you’ll find those errors. If the source of heat doesn’t change its intensity and the mass of carbon dioxide increases, the carbon dioxide will act as a coolant:
ΔT = q / (m · Cp)
It’s the basic formula for calculating the change in temperature of any substance for a given heat input.
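Read literally, the formula says that a fixed heat input spread over a larger mass produces a smaller temperature change. A minimal sketch of that arithmetic only, with purely illustrative numbers (the heat input is arbitrary, and Cp ≈ 0.844 kJ/(kg·K) is the commonly tabulated specific heat of CO2 near room temperature):

```python
# ΔT = q / (m * Cp): for a fixed heat input q, a larger mass m gives a
# smaller temperature rise. The numbers below are illustrative only.
CP_CO2 = 0.844   # kJ/(kg*K), approximate specific heat of CO2 near room temperature

def delta_t(q_kj: float, mass_kg: float, cp: float = CP_CO2) -> float:
    """Temperature change of a mass for a given heat input."""
    return q_kj / (mass_kg * cp)

q = 1.0                        # kJ of heat added (arbitrary)
print(delta_t(q, 1.0))         # ~1.18 K for 1 kg of CO2
print(delta_t(q, 2.0))         # ~0.59 K for 2 kg: same heat, more mass, smaller rise
```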
“Gary Strand (05:35:37) :
To the folks requesting validation of a climate model before they accept them as reasonable tools – what are your metrics, and why?”
As of today, there are no metrics by which models can be validated. That’s why many don’t believe they can be relied upon. Perhaps after about 75 years or so, a computer model in existence today could be validated with respect to a forecast in a climate variable (e.g., temperature) if, say, its running 10-year mean predicted temperatures were within 95% of the measured running 10-year mean of temperatures over 70 of those 75 years. That would be impressive, but I would add that the important part is that you empirically collect the metrics to quantify how reliable the model is. If it turned out to be within 75% of the actual 10-year mean in 70 of 75 years, you would still have an objective way of measuring how reliable a tool the model is. But right now, there is nothing.
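As a concrete illustration of the bookkeeping being proposed, here is a minimal sketch; the synthetic series, the 5% relative tolerance (one reading of “within 95%”), and the 10-year window are hypothetical choices, not an established validation protocol.

```python
import numpy as np

def running_mean(x, window=10):
    """Trailing running mean over `window` years."""
    return np.convolve(np.asarray(x, dtype=float),
                       np.ones(window) / window, mode="valid")

def fraction_within(predicted, observed, window=10, rel_tol=0.05):
    """Fraction of windows whose predicted running mean lies within
    `rel_tol` (relative) of the observed running mean."""
    p = running_mean(predicted, window)
    o = running_mean(observed, window)
    return float(np.mean(np.abs(p - o) <= rel_tol * np.abs(o)))

# Hypothetical 75-year annual-mean temperature series (placeholders only).
rng = np.random.default_rng(0)
years = np.arange(75)
observed  = 288.0 + 0.010 * years + rng.normal(0, 0.1, 75)
predicted = 288.0 + 0.012 * years + rng.normal(0, 0.1, 75)

print(f"{fraction_within(predicted, observed):.0%} of 10-year windows within tolerance")
```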
The National Weather Service recognizes the need to formally validate flood forecasts due to their impact on public planning and expenditures.
http://www.nws.noaa.gov/oh/rfcdev/docs/Final_Verification_Report.pdf
Why should we not expect a similar formal review of the climate model projections?
Can you give any rationale that supports the idea that a similar organized effort to evaluate and improve climate models is unwarranted?
Public costs incurred due to faulty flood forecasts would be counted in the multimillion-dollar range. Public costs due to faulty climate forecasts would tally in the hundreds of billions to multiple trillions of dollars.
The Nuclear Regulatory Commission requires formal evaluation of a nuclear plant and the possible impact of its maximum credible accident, and formal testing and evaluation of the adequacy of emergency response planning due to the high public costs and impacts a nuclear plant accident would have.
The EPA requires similar impact studies on major industrial plants that might impact the public, and emergency response activities in communities.
It is the climate modeling community that has the burden of proof to show why their models should not be held to similar standards of formal validation and review.
To base public policy on untested computer models is pure idiocy! They are either competent and useful, or incompetent and harmful, or statistically meaningless. Until we know which of those three options is true, given the costs involved we should assume they are useless or harmful (first do no harm).
The easiest metric to use would be to show that they perform better, to a statistically significant degree, than a naive forecast that simply predicts more of the same as we had last year or over the last few years.
There are two simple variations of this. One is that future conditions will be the same as the historical climatic variation (for example, they will fall within some error of the 1971-2000 average).
The other is persistence, i.e., that future conditions will be essentially identical to today’s conditions.
To have merit, the model projection would have to beat both of those metrics by a statistically significant margin.
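A minimal sketch of what beating those two baselines might look like in practice; the synthetic series and the use of mean absolute error as the score are illustrative assumptions, not a prescribed test.

```python
import numpy as np

def mae(forecast, observed):
    """Mean absolute error of a forecast against observations."""
    return float(np.mean(np.abs(np.asarray(forecast) - np.asarray(observed))))

# Hypothetical annual anomaly series (placeholders only).
rng = np.random.default_rng(1)
years = np.arange(30)
observed    = 0.020 * years + rng.normal(0, 0.1, 30)   # what actually happened
model       = 0.018 * years + rng.normal(0, 0.1, 30)   # the model's projection
climatology = np.zeros(30)               # baseline 1: stay at the long-term average
persistence = np.full(30, observed[0])   # baseline 2: the future looks like today

for name, series in [("model", model), ("climatology", climatology),
                     ("persistence", persistence)]:
    print(f"{name:12s} MAE = {mae(series, observed):.3f}")
# To claim skill, the model's error should beat BOTH baselines, and the
# margin should be shown to be statistically significant.
```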
http://www.forecastadvisor.com/blog/
If their forecasts (projections) are within the range of historical natural variation, then they also need to prove that they are predicting something that would not have happened without increasing CO2.
If the climate change forecast makes some projection about sea surface temperature rise over the next century, for example, then unless they can show a scientifically valid reason to the contrary, they should be tested against 1/10 of that rise over 10 years. Likewise for other major features of their forecasts. If they are scientifically valid, the authors of the model should be able to state ahead of time what the error bars are for their benchmarks on key events and the window of performance they must fly through to be meaningful.
Larry
kurt (14:06:19) :
“[…]Perhaps after about 75 years or so, a computer model in existence today could be validated with respect to a forecast in a climate variable (e.g., temperature) if, say, its running 10-year mean predicted temperatures were within 95% of the measured running 10-year mean of temperatures over 70 of those 75 years. That would be impressive, but I would add that the important part is that you empirically collect the metrics to quantify how reliable the model is. If it turned out to be within 75% of the actual 10-year mean in 70 of 75 years, you would still have an objective way of measuring how reliable a tool the model is.”
Have you done this test using the available CMIP3 archive of 20th-century model runs, compared against your favorite observational surface-temperature data?
That would be an interesting test.
hotrod (14:12:57) :
“The easiest metric to use would be to show that they perform better, to a statistically significant degree, than a naive forecast that simply predicts more of the same as we had last year or over the last few years.”
As I asked Kurt, have you exploited the CMIP3 climate model data archive versus your favorite obs data and made this examination?
It seems models are more real than reality… Heh!
And just why should I do someone else’s job? It is up to the model developers to show they have a clue what is going on, not the people that are paying them to do the job.
If they care so little about the validity of their product that they will not even invest a small fraction of their time showing it has value, why should I pay the slightest attention to their projections?
Do you think it is the shopper’s job to verify prices in a store?
Do you think it is the patient’s job to certify his doctors?
Do you think it is the buyer’s job to crash test cars?
Larry
Geez, don’t get so upset.
One other problem with the “forecast” metric – what if a tropical volcano erupts during the period? Does that invalidate the climate model? On what grounds?