Scientists Issue Unprecedented Forecast of Next Sunspot Cycle

This is an official NCAR News Release (National Center for Atmospheric Research). Apparently, they have solar forecasting techniques down to a “science”, as boldly demonstrated in this press release. – Anthony

Scientists Issue Unprecedented Forecast of Next Sunspot Cycle

BOULDER—The next sunspot cycle will be 30-50% stronger than the last one and begin as much as a year late, according to a breakthrough forecast using a computer model of solar dynamics developed by scientists at the National Center for Atmospheric Research (NCAR). Predicting the Sun’s cycles accurately, years in advance, will help societies plan for active bouts of solar storms, which can slow satellite orbits, disrupt communications, and bring down power systems.

The scientists have confidence in the forecast because, in a series of test runs, the newly developed model simulated the strength of the past eight solar cycles with more than 98% accuracy. The forecasts are generated, in part, by tracking the subsurface movements of the sunspot remnants of the previous two solar cycles. The team is publishing its forecast in the current issue of Geophysical Research Letters.

“Our model has demonstrated the necessary skill to be used as a forecasting tool,” says NCAR scientist Mausumi Dikpati, the leader of the forecast team at NCAR’s High Altitude Observatory that also includes Peter Gilman and Giuliana de Toma.

Understanding the cycles

The Sun goes through approximately 11-year cycles, from peak storm activity to quiet and back again. Solar scientists have tracked them for some time without being able to predict their relative intensity or timing.

NCAR scientists Mausumi Dikpati (left), Peter Gilman, and Giuliana de Toma examine results from a new computer model of solar dynamics. (Photo by Carlye Calvin, UCAR)

Forecasting the cycle may help society anticipate solar storms, which can disrupt communications and power systems and affect the orbits of satellites. The storms are linked to twisted magnetic fields in the Sun that suddenly snap and release tremendous amounts of energy. They tend to occur near dark regions of concentrated magnetic fields, known as sunspots.

The NCAR team’s computer model, known as the Predictive Flux-transport Dynamo Model, draws on research by NCAR scientists indicating that the evolution of sunspots is caused by a current of plasma, or electrified gas, that circulates between the Sun’s equator and its poles over a period of 17 to 22 years. This current acts like a conveyor belt of sunspots.

The sunspot process begins with tightly concentrated magnetic field lines in the solar convection zone (the outermost layer of the Sun’s interior). The field lines rise to the surface at low latitudes and form bipolar sunspots, which are regions of concentrated magnetic fields. When these sunspots decay, they imprint the moving plasma with a type of magnetic signature. As the plasma nears the poles, it sinks about 200,000 kilometers (124,000 miles) back into the convection zone and starts returning toward the equator at a speed of about one meter (three feet) per second or slower. The increasingly concentrated fields become stretched and twisted by the internal rotation of the Sun as they near the equator, gradually becoming less stable than the surrounding plasma. This eventually causes coiled-up magnetic field lines to rise up, tear through the Sun’s surface, and create new sunspots.
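
The numbers quoted above invite a quick sanity check. Below is a back-of-the-envelope sketch in Python of how long the equatorward return leg would take at the quoted speed; the quarter-circle path geometry and the constant ~1 m/s flow are simplifying assumptions for illustration, not part of the NCAR model.

```python
import math

R_SUN_M = 6.96e8    # solar radius in meters (standard value)
DEPTH_M = 2.0e8     # ~200,000 km sink depth quoted above
FLOW_M_S = 1.0      # ~1 m/s equatorward return flow quoted above

# Quarter-circle path from pole to equator along the bottom of the
# convection zone (radius = surface radius minus the sink depth).
path_m = (math.pi / 2.0) * (R_SUN_M - DEPTH_M)

years = path_m / FLOW_M_S / (3600 * 24 * 365.25)
print(f"Return-leg transit time: ~{years:.0f} years")  # ~25 years
```

That ~25-year figure is the right order of magnitude for the 17-to-22-year circulation quoted above; the real flow speed varies with depth and latitude, and the faster poleward surface leg shortens the full loop.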

The subsurface plasma flow used in the model has been verified with the relatively new technique of helioseismology, based on observations from both NSF– and NASA–supported instruments. This technique tracks sound waves reverberating inside the Sun to reveal details about the interior, much as a doctor might use an ultrasound to see inside a patient.
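
The time-distance flavor of helioseismology rests on a simple asymmetry: sound waves traveling with a flow arrive slightly earlier than waves traveling against it. The toy sketch below illustrates only that principle; the path length, sound speed, and flow speed are illustrative assumptions, not solar measurements.

```python
L = 1.0e8   # assumed ray-path length through the interior, in meters
c = 1.0e5   # assumed mean sound speed along the path, m/s (~100 km/s)
v = 10.0    # assumed subsurface flow speed along the path, m/s

t_with = L / (c + v)     # travel time for waves moving with the flow
t_against = L / (c - v)  # travel time for waves moving against it
dt = t_against - t_with  # measurable shift, approximately 2*L*v/c**2

v_inferred = c**2 * dt / (2.0 * L)  # invert to recover the flow speed
print(f"travel-time shift: {dt:.3f} s -> flow: {v_inferred:.1f} m/s")
```

Inverting many such travel-time measurements along crossing ray paths is, in essence, how the subsurface flows are mapped.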

NCAR scientists have succeeded in simulating the intensity of the sunspot cycle by developing a new computer model of solar processes. This figure compares observations of the past 12 cycles (above) with model results that closely match the sunspot peaks (below). The intensity level is based on the amount of the Sun’s visible hemisphere with sunspot activity. The NCAR team predicts the next cycle will be 30-50% more intense than the current cycle. (Figure by Mausumi Dikpati, Peter Gilman, and Giuliana de Toma, NCAR.)

Predicting Cycles 24 and 25

The Predictive Flux-transport Dynamo Model is enabling NCAR scientists to predict that the next solar cycle, known as Cycle 24, will produce sunspots across an area slightly larger than 2.5% of the visible surface of the Sun. The scientists expect the cycle to begin in late 2007 or early 2008, which is about 6 to 12 months later than a cycle would normally start. Cycle 24 is likely to reach its peak about 2012.

By analyzing recent solar cycles, the scientists also hope to forecast sunspot activity two solar cycles, or 22 years, into the future. The NCAR team is planning in the next year to issue a forecast of Cycle 25, which will peak in the early 2020s.

“This is a significant breakthrough with important applications, especially for satellite-dependent sectors of society,” explains NCAR scientist Peter Gilman.

The NCAR team received funding from the National Science Foundation and NASA’s Living with a Star program.

IMPORTANT NOTE:

The date of this NCAR News Release is March 6, 2006.

Source: http://www.ucar.edu/news/releases/2006/sunspot.shtml

(hat tip to WUWT reader Paul Bleicher)


213 Comments
jlc
May 31, 2009 2:08 am

C Colenaty
It would help if you could correspond in a common language so that we can respond.
Most of us here are not familiar with Klingon grammar.
Respectfully (greetings to visitors from beyond our known universe)
jlc

May 31, 2009 3:29 am

Leif Svalgaard (23:26:26) :
The polar field precursor method has now performed reasonably well for three cycles and may be on track for number four, which in any case will be yet another test.

Is there a published prediction somewhere, predating cycle 21, demonstrating this?
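
For readers unfamiliar with the method under discussion: the polar field precursor approach uses the strength of the Sun’s polar magnetic field at solar minimum as a simple predictor of the next cycle’s peak amplitude. A minimal sketch of the idea follows; the numbers are made-up placeholders, not real measurements, and the real method involves more care than a one-parameter fit.

```python
import numpy as np

# Hypothetical polar-field strengths at three past minima (arbitrary
# units) and the peak sunspot number of the cycle that followed each.
polar_field = np.array([1.0, 1.4, 0.7])
next_peak = np.array([110.0, 160.0, 80.0])

# One-parameter least-squares fit through the origin: peak ~ k * field.
k = np.sum(polar_field * next_peak) / np.sum(polar_field ** 2)

current_field = 0.5  # hypothetical field observed at the present minimum
print(f"k = {k:.1f}; predicted next-cycle peak ~ {k * current_field:.0f}")
```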

RW
May 31, 2009 3:32 am

“As we don’t understand the Sun, how can we have an accurate model of it? Or am I being pedantic?”
Ho ho. No, you’re quite right, we don’t understand the Sun. What is it made of? Why does it rise every day? How does it shine? How far away is it? All these things are completely mysterious!

Chris Wright
May 31, 2009 3:42 am

Their model was brilliant at predicting past solar cycles but failed spectacularly within a couple of years when attempting to predict the future. Does this sound familiar?
Of course, the current sleeping sun may have been caused by a step change that could not have been predicted on the basis of our current knowledge. But, becoming ever more cynical in my old age, I would lean towards another explanation: that, despite all their talk of physical processes, their model was little more than an exercise in curve fitting. If so, then their claimed success in hindcasting previous solar cycles is a perfect example of circular reasoning.
This piece is excellent because there is a clear parallel with climate models. They are brilliant at forecasting climate that has already happened, but they fall at the first hurdle when it comes to forecasting the future. All the models, including NASA Model E, predict a continuous rise in the global temperature. Of course, it hasn’t happened. The world is no warmer than it was ten years ago (I suspect that’s what they really mean when they say ‘things are even worse than we thought’). Could it be that the climate models, too, are little more than exercises in curve fitting?
I suspect it will be many decades before we understand the climate engine sufficiently to make meaningful predictions. Indeed it may never be possible due to the chaotic nature of weather and climate. Even if I believed in AGW I would still regard those MIT researchers as fools. Claiming they can predict the global climate in 2100 with a high degree of precision is such obvious nonsense that it does make me rather sad about the state of science. I think that, as science has always been self-correcting in the past, things will improve and climate science will become honest. But I’m not holding my breath.
Chris

Arthur Glass
May 31, 2009 3:48 am

“…in inaccurately measured and untested absurdities.”
How does one measure an absurdity? It would be expressed, one supposes, in scientific notation. Or is ‘measure’ used here in its musical sense?
Measure for Measure?

Paul R
May 31, 2009 3:49 am

jlc (02:08:32) :
Most of us here are not familiar with Klingon grammar.
The Klingons never even named their star, they knew it all came down to Targ gas. This model is without honour.

jh
May 31, 2009 3:58 am

In a 2006 paper in Urania, de Jager says this of the solar dynamo believed to drive the sun’s magnetic activity: “The dynamo is a non-linear process that shows chaotic elements. Phase catastrophes do occur. Therefore it is basically not possible to forecast future solar activity.”
That said, de Jager and Duhau published another paper in 2008, based on some empirical observations of the long-term oscillation around these phase transitions, and came up with this prediction:
“The regularities that we observe in the behaviour of these oscillations during the last millennium enable us to forecast solar activity. We find that the system is presently undergoing a transition from the recent Grand Maximum to another regime. This transition started in 2000 and it is expected to end around the maximum of cycle 24, foreseen for 2014, with a maximum sunspot number Rmax = 68 +/- 17. At that time a period of lower solar activity will start. That period will be one of regular oscillations, as occurred between 1730 and 1923. The first of these oscillations may even turn out to be as strongly negative as around 1810, in which case a short Grand Minimum similar to the Dalton one might develop. This moderate to low activity episode is expected to last for at least one Gleissberg cycle (60-100 years).”
In an earlier conference abstract I have seen, one of those authors, based on the three possible evolutions of the solar cycle above, predicted in the most conservative outcome (regular oscillations, one supposes) a fall in global temperature of 0.3 deg C over 20 years, i.e. very nearly half the rise we have seen in the last century, in about a fifth of the time.
So there are broadly two different predictions for the evolution of the global temperature. I understand that this is what science is all about. The old, old story of the beautiful maiden hypothesis and the ugly ogre, truth!

Boudu
May 31, 2009 4:07 am

I’m sure that once the appropriate adjustments have been made to the observed data, the predictions will be spot on.

May 31, 2009 4:09 am

Maybe Leif can tell me. Does NCAR now use my method for predicting what is going to happen during SC24? All I can do is take a SWAG (Scientific Wild Arsed Guess).

Frederick Davies
May 31, 2009 4:17 am

Kinda like the thermohaline ocean circulation theory, isn’t it? Maybe there should be a magazine for debunked scientific ideas; that way people would realize how important debunking wrong theories is to the advancement of Science. It could be called something like “Not Science”…

Micky C (MC)
May 31, 2009 4:34 am

The last bit was the best…2006. Ah well lads. Reminds me of this joke:
A bunch of mathematicians and physicists are on the train going to university. They all know each other well and are talking about things in general. A physicist looks up and sees the conductor coming through the next carriage, so he proceeds to check his wallet for his ticket. His friends start doing the same. Casually he looks up and notices that all the mathematicians aren’t doing the same; in fact, they all look very laid back and uncaring. He says to his friend, “Haven’t you got your ticket?”. His mathematician friend says “Oh, I have a ticket… but they don’t” and gestures to his laid-back friends.
“But the conductor is just about to come into this carriage” says the physicist looking up.
“Oh, well then” says the mathematician and with a nod all the other mathematicians and himself get up and walk down to the other end of the carriage where there is a toilet. They all proceed to get in. All 15 or so of them. Very geometric.
The conductor comes in, checks everyone’s ticket and moves down the carriage. He notices that the toilet is occupied and knocks on the door.
“One second” says a voice and then a ticket is slipped under the door. The conductor checks it and moves on.
The physicist is impressed. “Ah that’s a good plan”
The next week he is on the train with his friends and the same bunch of mathematicians. This time the mathematician looks round and notices the conductor in the next carriage. “Did you buy a ticket?” he asks the physicist.
“Yes…but only one” and with a nod, as the conductor is getting closer, all the other physicists move to the bottom of the carriage and into the toilet. All 10 of them.
The mathematician smiles. With a nod all the other mathematicians move down to the bottom of the carriage and into the next carriage, where there is also a toilet. The first mathematician moves down to the toilet in this carriage, waits until the conductor is half way down the carriage, then knocks on the toilet door.
“Tickets please”. A ticket is slipped under the door.
“Thank you” he says and promptly walks off with it.
As he is walking down the next carriage and getting ready to repeat the mathematicians’ stock manoeuvre, his friend asks him “Why did you do that? Is he not your friend?”
“Ah well, yes he is,” he replies, “but physicists shouldn’t meddle in methods they don’t fully understand”
I’m a physicist and a mathematician, but I think this sums up the modelling above quite nicely.

Paul Coppin
May 31, 2009 5:18 am

While the model certainly appears laughable in short hindsight, I’m willing to cut the researchers a little slack. Eccentricity and naivete know no bounds in science, and too often passion and belief are honestly mistaken for truth and rigour. The press release gives a few clues as to why this model went off the rails. Predictive models are a saleable quantity, and universities are known for their propensity in modern times to push their research efforts to market. That appears to be the motivation in this case. The release has all the earmarks of a university marketing office looking to catch cash for the school. Competitiveness amongst faculty will tend to encourage them to get on the gravy train. Prestige, cash and workload may be in the balance. Academic eccentricity only tends to be tolerated these days from those whose wackiness is profitable or who are sufficiently tenured as to be immovable.
The model does have one “marketable” feature: it demonstrates clearly how fragile rearward-looking models are when used to predict the future. Interestingly, the curve fit presented by the model also tends to correlate at some level with “global warming”, if you are to accept it exists. While this failed model doesn’t automatically point to the inherent failure of the currently fashionable climate models, it also provides no support for their utility.

George Hebbard
May 31, 2009 5:20 am

As I reread Dikpati’s theory, I was struck by the fact that they base their predictions on the sun’s predictability. Without better understanding of the underlying effects, they are doomed to failure.
I haven’t seen anything better than Landscheidt’s solar torque effects to explain variations in the solar dynamo…Dikpati’s conveyor belt is not invariable.

Juls
May 31, 2009 5:25 am

Still the same mistake, again and again, with predictive models.
1) Sorry guys, but it’s very easy to fit a model to 8 cycles… too little data. Any model can be tuned to fit 8 cycles, even a poor polynomial. It means nothing, and it is certainly not proof that the model is accurate.
2) When you get a model of a natural phenomenon (where chaos and diverging equations play a large role) which is 98% accurate, a big red sign should blink in your head: “THIS IS NOT POSSIBLE”. There is chaos, and no model can predict it. If you have a 98% accurate model, it means you have tried to model the chaos instead of only modelling the part which can be modelled. You are bound to fail in the future.
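
Juls’s first point is easy to demonstrate. In the sketch below, a degree-7 polynomial “hindcasts” eight synthetic cycle peaks perfectly yet says nothing about cycle nine; the data are random numbers, not real sunspot records.

```python
import numpy as np

rng = np.random.default_rng(0)
cycles = np.arange(1, 9)                   # eight past "cycles"
peaks = 120 + 30 * rng.standard_normal(8)  # synthetic "peak amplitudes"

# A degree-7 polynomial has 8 free coefficients, so it passes through
# all 8 points exactly: perfect "hindcast skill" with zero physics.
coeffs = np.polyfit(cycles, peaks, deg=7)
hindcast = np.polyval(coeffs, cycles)
print("max hindcast error:", np.max(np.abs(hindcast - peaks)))  # ~0

# The extrapolated "forecast" for cycle 9 is meaningless.
print("forecast for cycle 9:", np.polyval(coeffs, 9.0))
```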

May 31, 2009 5:34 am

I’ve written about the latest NOAA/NASA prediction on my weblog. Needless to say, I’m not impressed.

Jack Green
May 31, 2009 5:43 am

Maybe we need a book burning session. These scientists are now making things up instead of using the good ol’ “We just don’t know” and “we can’t get any of our models to converge”. This is getting really stupid when you have an event for which there is no data to match. I wish the media would just go away, because journalism is tabloidism now.

RW
May 31, 2009 5:44 am

“All the models, including NASA Model E, predict a continuous rise in the global temperature”
No they don’t.

Basil
Editor
May 31, 2009 6:08 am

Leif Svalgaard (21:59:58) :
Gilbert (21:51:24) :
Great review. How much attention did they actually give it?
They did follow all my requests, otherwise it would not have been accepted as I get to see it again and again, until I’m happy with it. The reviewer most of the time wields enormous power in that regard.

Then you must be a reasonable reviewer, i.e., one who recognizes that sometimes even “reasonable minds disagree,” and who understands that the task of a reviewer/referee is not to impose one’s own view of things, but to ensure that certain standards have been met. I was a referee for a couple of journals in my field “back in the day” and both of the editors that I worked most closely with insisted that my reviews contain constructive criticism, so that even if I were recommending against publication, I should be telling the authors what they could do to improve the paper to the point it might stand a better chance of getting published.
I think this is the way it probably works in fields that are not (yet?) heavily politicized. I have the feeling that “climate science” has drifted far from this ideal.

Retired Engineer
May 31, 2009 6:13 am

Descartes: “I think, therefore I am.”
Universe: “So ?”
The sun will do what the sun will do. We may observe and predict, but the big orange thing probably doesn’t much give a snip. We have more than enough problems with the few things we can control. Or the things we think we can control.
OT, smoking doesn’t kill people. It isn’t as if they’ll live forever if they don’t smoke. It may shorten their lives (in my ancestors’ cases, it probably lengthened them) but many things shorten lives. Does it really matter if SC24 is big or small? (no offense intended, Leif) Interesting, yes; crucial, maybe not. The federal deficit has a bigger impact, and I predict that will be huge. And something we don’t seem to have any control over.
Warm, cold, or in-between. As long as the sun comes up each day and doesn’t dump a big CME on me. Not sure I could do much about that.

John W.
May 31, 2009 6:25 am

FatBigot (22:11:17) :
I wonder whether we are seeing the same thing with the spewings of computer models. Once something has been spewed it seems to be presumed that it is correct. That is not an entirely illogical approach, after all the models are set-up and operated by people with strings of letters after their names. It is logical to accept the word of people far better qualified than you are, what is not logical is to accept the word of a computer. Yet we seem to find that the authority afforded to computerised results is greater than the authority given to the word of those who fed the computer in the first place.

This problem has been with us since the dawn of computer-based simulation. I agree that it’s understandable for a lay person to accept the results based on their expectation that the people “with strings of letters after their names” actually represent knowledge and integrity. Today, sadly, they don’t. In this case, with this group of clowns, all they have actually “proved” is the uselessness of models that haven’t been through IV&V (independent verification and validation). And the dishonesty, incompetence or both of those who refuse the IV&V process.
Leif Svalgaard (22:11:18) :
… I asked her [Dikpati] how she could have faith in the correctness of the programming if the code was such a mess, but never got a good answer. Another argument was something about ‘intellectual property’ [of taxpayer funded work???].
She cannot have any faith in her model, and she knew it. Whenever I began the verification part of IV&V, the very first step was to CLEAN UP THE CODE. That is what reputable scientists, such as Choudhuri, do. I wouldn’t even think about validation until I had confidence the code was actually modeling what it claimed to model. Based on what you wrote, I’ll surmise Dikpati had “patched” it to effectively “jump to desired answer” and wasn’t about to let you discover that. (I’ve seen that behavior too many times, so she’s not alone. In fact, I know of at least one major $$$$Billion program that was recently canceled because its simulation-based evaluations were caught out as junk as a result of this kind of behavior.)
You’re also absolutely correct about ownership. If she developed it on the US government’s nickel, the government owns it. Period. At one time, companies would mix their own funds with the government’s to claim proprietary rights, but acquisition regulations (both FAR and DFAR) started blocking that long ago.
Thanks for providing the background that illuminates what kind of “scientists” these people are. I look forward to the day when acquisition authorities begin looking into these “research” programs.

maz2
May 31, 2009 6:25 am

“…their results appeared difficult to compare and synthesize.”
“How Many Scientists Fabricate and Falsify Research? A Systematic Review….
PLoS One ^ | 29 May 2009 | Daniele Fanelli
The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct is a matter of controversy. Many surveys have asked scientists directly whether they have committed or know of a colleague who committed research misconduct, but their results appeared difficult to compare and synthesize. This is the first meta-analysis of these surveys.
To standardize outcomes, the number of respondents who recalled at least one incident of misconduct was calculated for each question, and the analysis was limited to behaviours that distort scientific knowledge: fabrication, falsification, “cooking” of data, etc… Survey questions on plagiarism and other forms of professional misconduct were excluded. The final sample consisted of 21 surveys that were included in the systematic review, and 18 in the meta-analysis.
A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self reports surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.
Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.”
http://www.freerepublic.com/focus/f-news/2261381/posts



Rhys Jaggar
May 31, 2009 6:35 am

One thing I have found over the past 20 years is that as soon as you think you understand the local climate characteristics you are interested in, they change… so you don’t any more.
Their programme obviously worked for several cycles, which would have been useful if it’d been around then… but it wasn’t, because the data wasn’t available…
Maybe it tells us something about the complexity of understanding the sun?

Rik Gheysens
May 31, 2009 6:37 am

Leif,
In the graphic “Old and new cycle groups during SC-transit” (see http://users.telenet.be/j.janssens/SC23web/SCweb10.pdf ),
Janssens notes that the break-even is reached in October 2008. Theoretically, the SC-minimum occurs 2 months prior to the SC23-SC24 break-even (+/- 4 months): “According to the above method, solar cycle minimum should take place in August 2008 +/- 4 months”.
Is the past break-even point a strong argument for putting the solar minimum in August 2008 rather than in December 2008, as NOAA predicts?
