Sunspot Cycle and the Global Temperature Change Anomaly

UPDATE: The author writes:

Thank you for posting my story on Sunspots and The Global Temperature Anomaly.

I was pleasantly surprised when I saw it and the amount of constructive feedback I was given.

Your readers have pointed out a fatal flaw in my correlation.

In the interests of preventing the misuse of my flawed correlation please withdraw the story.

Then I replied: Please make a statement to that effect in comments, asking the story be withdrawn.

To which he replied:

After further reflection, I have concluded that the objection to the cosine function as having no physical meaning is not valid.

I have posted my response this morning and stand by my correlation.

Personally, I think the readers have it right. While interesting, this is little more than an exercise in curve fitting. – Anthony

Guest post by R.J. Salvador

I have made an 82% correlation between the sunspot cycle and the Global Temperature Anomaly. The correlation is obtained through a nonlinear time-series summation of NASA monthly sunspot data, fitted to the NOAA monthly Global Temperature Anomaly.

[Figure: the correlation plotted against the monthly Global Temperature Anomaly]

This correlation is made without averaging, filtering, or discarding any temperature or sunspot data.

Anyone familiar with using an Excel spreadsheet can easily verify the correlation.

The equation, its parameters, and the websites for the sunspot and global temperature data used in the correlation are provided below for those who wish to make temperature predictions.

The correlation and the NOAA Global Mean Temperature graph are remarkably similar.

[Figure: the correlation alongside the NOAA Global Mean Temperature graph]

For those who like averages, the yearly averages from 1880 to 2013 reported by NOAA and the yearly averages calculated by the correlation have an r^2 of 0.91.

[Figure: NOAA yearly averages versus the yearly averages calculated by the correlation]

The model for the correlation is empirical. However, the model shows that the magnitude, the asymmetrical shape, the length, and the oscillations of each sunspot cycle appear to be the factors controlling global temperature changes. These factors have been identified before; here they are correlated by an equation to predict the Temperature Anomaly trend by month.

The graph below shows the behavior of the correlation against the actual anomaly during a heating (1986 to 1996) and a cooling (1902 to 1913) sunspot cycle. The graph after it suggests some obvious conclusions about these same two sunspot cycles.

[Figure: correlation versus actual anomaly for the 1986-1996 and 1902-1913 sunspot cycles]

[Figure: the same two solar cycles with the predicted start temperatures reset to zero]

In the graph above, the correlation-predicted start temperature for these same two solar cycles has been reset to zero to make the comparison easier to see.

High sustained sunspot peak numbers with short transitions into the next cycle correlate with temperature increases.

Low sunspot peak numbers with long transitions into the next cycle correlate with temperature decreases.

Oscillations in the sunspot number, which are chaotic, can cause increases or decreases in temperature depending on where they occur in the cycle.

The correlation equation contains just two terms. The first, a temperature-forcing term, is a constant times the sunspot number for the month raised to a power: [b*SN^c].

The second term, a stochastic term, is the cosine of a constant times the sunspot number: [cos(a*SN)]. This term is used to model random chaotic events that have a cyclical association with the magnitude of the sunspot number. No doubt this is a controversial term, as its frequency is very high. There is a very large degree of noise in the temperature anomaly, but the term finds a pattern related to the sunspot number.
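To see why this term behaves like noise: with the fitted value of a reported below (about 148.4), a change of just 1 in the sunspot number advances the cosine phase by roughly 23.6 full cycles, so consecutive sunspot numbers give effectively uncorrelated cosine values. A minimal Python sketch (the constant is taken from the parameter list later in this post):

```python
import math

a = 148.425811533409  # fitted constant from the parameter list below

# Each step of 1 in SN adds ~148 radians (~23.6 full cycles) of phase,
# so the cosine term jumps pseudo-randomly between -1 and 1.
print([round(math.cos(a * sn), 3) for sn in range(100, 106)])
```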

Each term is calculated by month and added to the prior month’s total. The summation stores the history of previous temperature changes, and this sum has an approximately straight-line relationship to the actual Global Temperature Anomaly by month, which is fitted with the constants d and e. The resulting equation is:

TA = d*[Σcos(a*SN) - Σb*SN^c] + e, with the summations running from month 1 to the present

where:

TA = the predicted Temperature Anomaly

cos = the cosine in radians

* = multiplication

^ = exponent operator

Σ = summation

a, b, c, d, e = constants

The calculation starts in January of 1880.

The correlation was made using a nonlinear time-series least-squares optimization over the entire data range from January 1880 to February 2013. The proportion of variance explained is R^2 = 0.8212 (82.12%).

The parameters for the equation are:

a = 148.425811533409

b = 0.00022670169089817989

c = 1.3299372454954419

d = -0.011857962851469542

e = -0.25878555224841393

The summations were made over 1,598 data months; therefore, use all the digits in the constants to ensure the correlation is maintained over the data set.
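For those who would rather script the check than build it in a spreadsheet, here is a minimal Python sketch of the summation as described above. Loading the data files is left out: the function assumes the NASA monthly sunspot numbers are already in a list, starting with January 1880.

```python
import math

# Fitted constants (use all the digits; the sums run over 1,598 months).
a = 148.425811533409
b = 0.00022670169089817989
c = 1.3299372454954419
d = -0.011857962851469542
e = -0.25878555224841393

def predicted_anomaly(sunspot_numbers):
    """Predicted Temperature Anomaly for each month.

    Implements TA = d*[Σcos(a*SN) - Σb*SN^c] + e, with both sums
    running from month 1 (January 1880) to the current month.
    """
    running_sum = 0.0
    anomalies = []
    for sn in sunspot_numbers:
        running_sum += math.cos(a * sn) - b * sn ** c
        anomalies.append(d * running_sum + e)
    return anomalies
```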

The correlation can be used to predict future temperature changes and reconstruct past temperature fluctuations outside the correlated data set if monthly sunspot numbers are provided as input.

If the sunspot number is zero in a month, the correlation predicts that the Global Temperature Anomaly trend will decrease by 0.0118 degrees centigrade per month. If there were no sunspots for a year, the temperature would decline 0.141 degrees. If there were no sunspots for 50 years, we would be entering an ice age, with a 7-degree-centigrade decline. While this is unlikely to happen, it may have happened in the past. The correlation implies that we live a precarious existence.
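Continuing the sketch above: with SN = 0, each month adds cos(0) - b*0^c = 1 to the running sum, so the trend changes by exactly d per month, which is where those figures come from.

```python
# Zero-sunspot limiting case: the trend falls by |d| each month.
print(d * 12)    # about -0.14 degrees centigrade per year
print(d * 600)   # about -7.1 degrees centigrade over 50 years
```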

[Figure: reconstructed Global Temperature Anomaly during the Dalton minimum, 1793 to 1830]

The correlation was used to reconstruct the global temperature change during the Dalton minimum in sunspots, from 1793 to 1830. The correlation estimates a 0.8-degree decline over those 37 years.

Australian scientists have made a prediction of sunspots by month out to 2019. The correlation estimates a decline of 0.1 degree from 2013 to 2019 using the scientists’ data.

The Global Temperature Anomaly has not risen since 1997.

[Figure: predicted anomaly from 2013 to 2019 using the Australian sunspot prediction]

The formation of sunspots is a chaotic event, and we cannot know with any certainty the exact future value of the sunspot number in any month. Limits can be assumed for the sunspot number, as it appears to take a random walk around the basic beta-type curve that forms a solar cycle. The cosine term in the modeling equation attempts to capture the chaotic nature of sunspot formation and models the temperature effect of the statistical timing of their appearance.

Some believe we are entering a Dalton-type minimum. The prediction in the graph below makes two assumptions.

First: the Australian prediction is valid to 2019.

Second: from 2020 to 2045, a replay of the Dalton minimum will have the same sunspot numbers in each month as from May 1798 to May 1823. This of course won’t happen exactly, but it gives an approximation of what the future trend of the Global Anomaly could be.

[Figure: predicted Global Temperature Anomaly assuming a Dalton-type minimum from 2020 to 2045]

If we entered another Dalton-type minimum after 2019, the present positive Global Temperature Anomaly would be completely eliminated.

See the following web page for future posts on this correlation.

http://www.facebook.com/pages/Sunspot-Global-Warming-Correlation/157381154429728

Data sources:

NASA

http://solarscience.msfc.nasa.gov/greenwch/spot_num.txt

NOAA ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_ocean.90S.90N.df_1901-2000mean.dat

Australian Government Bureau of Meteorology

http://www.ips.gov.au/Solar/1/6

166 Comments
Alvin
May 3, 2013 8:46 am

Clearly the progressives will begin to postulate that CO2 drives solar cycles.

Nylo
May 3, 2013 8:53 am

Paul Vaughan: “there’s an important learning opportunity here for anyone patient & careful enough to deeply understand and appreciate exactly why the cosine integral’s changepoints are timed as they are”.
There’s nothing to understand other than that this is an exercise in curve fitting with total disregard for the significance of the parameters. I already demonstrated that, many replies ago, by adding a tiny, imperceptible noise to the SSN and seeing what happens with the reconstruction. The formula is so sensitive to tiny variations of SSN that the results often no longer had any resemblance to the original. The author’s method of choosing the “a” parameter resembles putting a billion monkeys in front of typewriters until one writes “IPCC=SHIT”. He has then picked that “monkey” (that “a” value) and presented it to us as the great monkey of ultimate wisdom.
The big difference with the IPCC models is that the IPCC parameters are at least based on something physical. I won’t dispute that they are examples of curve fitting as well; of course they are, and that’s why I don’t believe in any of them: those parameters are chosen in preference to others just as plausible simply because they give the desired results. But they have a base. They can be explained, they can be defended, whereas “a” cannot. It is the most absolute crap.
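For anyone who wants to reproduce the perturbation test Nylo describes, here is a minimal sketch reusing the constant a from the post; the noise amplitude of ±0.01 is an assumption, chosen to be imperceptible relative to monthly sunspot counts:

```python
import math
import random

a = 148.425811533409

def perturbed(sns, eps=0.01):
    """Add imperceptible uniform noise to each monthly sunspot number."""
    return [sn + random.uniform(-eps, eps) for sn in sns]

# A perturbation of just 0.01 in SN shifts the cosine phase by ~1.5 radians,
# so the running sum (and hence the reconstruction) changes beyond recognition.
sns = [55.0, 60.0, 72.0, 81.0]  # toy values; use the real NASA series
print([round(math.cos(a * sn) - math.cos(a * p), 2)
       for sn, p in zip(sns, perturbed(sns))])
```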

Kelvin Vaughan
May 3, 2013 8:57 am

thingodonta says:
May 3, 2013 at 3:49 am
You can’t tax sunspots.
Wanna bet!

crosspatch
May 3, 2013 9:08 am

It is obvious from these data that Congress must increase its solar spending. We just aren’t spending enough on the sun.

Russ R.
May 3, 2013 9:34 am

I would also like to add that curve-fitting is how Calculus was derived.
So there is a history of getting data that is not well-understood, and finding mathematical formulas that describe it, before you understand how the variables are producing the resultant data.

Paul Vaughan
May 3, 2013 9:46 am

Nylo (May 3, 2013 at 8:53 am)
My patience for your ignorance &/or deception has expired. Do not address me again.

REPLY:
Mr. Vaughan. This isn’t your blog, and if people want to write posts to you, it is my call as to whether to allow them or not, not yours.
In this case I agree with Nylo. You would do well to learn some manners – Anthony

May 3, 2013 9:53 am

It’s interesting that the sunspot number and temperature anomaly correlation is there at all. I attempted to break the temperature anomaly down into individual temperature records and found that the correlation exists there as well. In the Northern Hemisphere there is a lag between solar activity and temperature that varies with latitude (did you know that?). Depending on the latitude the temperature reading is from, there is a lag of up to 3 months between solar activity and temperature; this would be due to seasonal variation and the state of the sun’s activity.
http://thetempestspark.wordpress.com/2013/02/15/average-november-sunspot-number-and-february-minimum-temperature-1875-2012/
This raises an important question for me: as the global temperature anomaly is from land-based stations and the majority of Earth’s land mass is in the Northern Hemisphere, is the temperature anomaly showing the seasonal variation of the Northern Hemisphere’s exposure to stronger and weaker solar activity?
If it is, then it basically means that the global temperature anomaly is not an accurate measurement of Earth’s temperature “evenly distributed globally”, but is an anomaly that represents the seasonal variation of Earth’s exposure to stronger and weaker solar activity over time.

Henry Bowman
May 3, 2013 10:14 am

The exposition is unclear to me. What are the limits of the summation? From the description, it seems as though the predictor is a recursive one, but perhaps that’s not what you meant. This is interesting, but a clarification is needed, in my view.
It might help if you used MathJax.

May 3, 2013 10:20 am

Thanks Nylo for a clear, elegant, and ultimately very convincing reminder that curve fitting has very little to do with science. I was first intrigued and taken in by the “correlation” illustrated in this post, but your noise sensitivity analysis unambiguously demonstrates that this curve fitting process is a completely artificial and meaningless exercise in noise fitting. I am a bit ticked at my initial interest and poor scientific judgment (big big difference between correlation and causation), but you set me straight (and I hope others). Again many thanks.

Zeke Hausfather
May 3, 2013 10:22 am

I still prefer my old leprechaun fit: http://rankexploits.com/musings/2009/you-can%E2%80%99t-make-this-stuff-up-ii/
I managed to get an r^2 of 0.73 with just one parameter!

Joe Crawford
May 3, 2013 10:30 am

I don’t see Salvador’s model as being any more or less valid than the current crop of GCMs. In fact they are quite similar, in that they each have many ‘tunable’ parameters that permit curve fitting to whatever historical temperature profile one chooses to match. They are all just curve fitting. At least R.J. is honest in not claiming that his parameters stand for some as-yet-undetermined physical parameter that they ‘think’ might approximate the value used. And he doesn’t throw in a few million lines of code in order to justify development costs that in reality, like his cosine term, just add a lot of noise and are negated by the tunable parameters.
In other words, at least to me, his model is no more and no less valid in predicting tomorrow’s climate than any of the current crop of GCMs that we’ve probably spent billions developing. Besides, “the fact of the matter is” (Lord I hate that phrase) that it appears to do a great job of matching the temperature record chosen.

May 3, 2013 10:41 am

E=mc^2 is just curve fitting!

May 3, 2013 10:42 am

Interestingly, I did a max temp vs bright sunshine correlation for the Central UK which predicted that the rise in max temp and bright sunshine would end in 2010 (NothingSettledNothingCertain.com).
Your first graph: it is the fit of your prediction to the actual, I gather. The linear relationship is then how your prediction times 1.0024 gives the actual, i.e. your prediction has a 2.4% underestimation of the temp as measured by the GISTemp data. This linear difference is perhaps similar to my 0.1C/century that was left out of my sunshine + PDO/AMO heat release/heat retention cycles. I thought land use OR CO2 could explain it.
Try plotting the deviation from prediction vs time and see if there is a pattern that matches the PDO/AMO signal.

May 3, 2013 10:55 am

Joe Crawford:
Your post at May 3, 2013 at 10:30 am concludes saying

In other words, at least to me, his model is no more and no less valid in predicting tomorrow’s climate than any of the current crop of GCMs that we’ve probably spent billions developing. Besides, “the fact of the matter is” (Lord I hate that phrase) that it appears to do a great job of matching the temperature record chosen.

OK, I will accept that.
No model is a perfect emulation of anything. Every model is assessed by its usefulness.
What use does this model have?
1.
The model cannot assist understanding of climate behaviour.
It is a curve fit using purely arbitrary variables, which represent no known physical parameters, to obtain agreement between sunspots and one climate indicator. Adjusting one of the parameters will alter the indicated relationship, but that adjustment indicates nothing about climate because the parameter has no relationship to anything in climate.
2.
The model has no predictive ability.
The model relates sunspots to one climate variable. Assuming it does indicate a true relationship between them (which is doubtful), neither of them can be predicted, so there is no available prediction of one of them, which is necessary for the model to predict the other.
Simply, the model is worthless: it is not even wrong.
Richard

Editor
May 3, 2013 10:57 am

First, Mr. Salvador, you have done a mountain of work. Several comments on it, in no particular order.
I fear that there is no way to sugar-coat my opinion. I see this as a meaningless curve fitting exercise. Why do I say that? I’ve read countless papers, and done countless analyses of this type myself. As with most parts of my life, I’ve developed some rules of thumb for identifying curve fitting. For me, the tell-tale indications of meaningless curve fitting are:
1. A large number of tunable parameters, in this case five (a, b, c, d, and e). For those unaware of the importance of this issue, let me quote Freeman Dyson:

Quantum electrodynamics is the theory of electrons and photons interacting through electromagnetic forces. Because the electromagnetic forces are weak, we could calculate the atomic processes precisely. By 1951, we had triumphantly finished the atomic calculations and were looking for fresh fields to conquer. We decided to use the same techniques of calculation to explore the strong nuclear forces. We began by calculating meson–proton scattering, using a theory of the strong forces known as pseudoscalar meson theory. By the spring of 1953, after heroic efforts, we had plotted theoretical graphs of meson–proton scattering.
We joyfully observed that our calculated numbers agreed pretty well with Fermi’s measured numbers. So I made an appointment to meet with Fermi and show him our results. Proudly, I rode the Greyhound bus from Ithaca to Chicago with a package of our theoretical graphs to show to Fermi.
When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism. He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”
In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students. Because it was important for the students to have their names on a published paper, we did not abandon our calculations immediately. We finished them and wrote a long paper that was duly published in the Physical Review with all our names on it. Then we dispersed to find other lines of work. I escaped to Berkeley, California, to start a new career in condensed-matter physics.

The problem is that (as in this case) if you are given free choice of any mathematical expression, combined with five tunable parameters, you can truly make the elephant wriggle his trunk in the exact shape of the historical temperature record … and regarding that, I can only bring you the “bad news” that Dyson brought back to his students. Your results mean nothing.
As someone commented above, to be fair I should note that climate models do the same thing, they have many tunable parameters … but then I don’t think they’re worth a bucket of warm spit for forecasting either … or for hind-casting, for that matter.
2. Ultra-high precision required in specifying the parameters. If you need more significant digits for your parameters than your data contains, you are in very dangerous territory.
3. Lack of an underlying physical theory. While not a fatal objection, it certainly applies to such convoluted methods as we see above. Let me give an example of why lack of a physical theory is not a fatal objection. It seems clear that the slow decay in the concentration of an injected pulse of CO2 (or other gas) into the atmosphere follows an exponential form. We do not understand all of the myriad pathways that carbon takes wandering around this marvelous planet, so we don’t have a complete unifying theory about why the average of all of those carbon pathways comes out to have an exponential form … but there it is in the records. So in that case, the assumption of exponential decay with a single time parameter may be justified despite the lack of a complete theory.
In the current instance, however, we have nothing to underlie the convoluted, counter-intuitive, claimed mathematical relationship. In such a case, the lack of a physical theory looms large.
4. An “over-good” fit. Things in the real climate are messy. There’s a lot of what climate scientists call “noise” in the data, which is a technical term meaning “we don’t have a clue”. A result as good as this one is far too tight a fit to the historical data to be believable. A fit as good as the one shown would imply that other than sunspots, almost nothing affects the global temperature … very doubtful.
5. Bad units. As Steven Mosher noted above, you have sunspots on one side of the equation, and temperature on the other. Although in some circumstances this is handled by introducing a constant “C” having some kind of imaginary units to convert one to the other, in this case we end up with degrees per sunspot number to the nth power … bad sign.
6. Small data sets. Not an issue in this case, but see e.g. Nikolov and Zeller for some high-quality curve fitting on a data set of a dozen or so.
7. Hyper-sensitivity to small changes in data. As Nylo points out above, if you add a tiny random value to the data, the formula gives a hugely different result. Any formula that sensitive to the random fluctuations of the data is over-specified.
8. Lack of testing using withheld data. As several commenters have noted, cut the data in half, and derive your parameters in the same manner using solely the first half of the data … then use those parameters to project the results to the second half (a sketch of this test follows below). I suspect you’ll be shocked.
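A minimal sketch of that withheld-data test, assuming the monthly sunspot and anomaly series are already loaded as NumPy arrays sn_all and ta_all (hypothetical names), and using SciPy’s general least-squares routine rather than whatever optimizer the author used:

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, sn):
    # TA = d*[cumsum(cos(a*SN)) - cumsum(b*SN^c)] + e
    a, b, c, d, e = params
    return d * np.cumsum(np.cos(a * sn) - b * np.power(sn, c)) + e

half = len(sn_all) // 2  # fit on the first half of the record only
fit = least_squares(lambda p: model(p, sn_all[:half]) - ta_all[:half],
                    x0=[148.4, 2.27e-4, 1.33, -0.0119, -0.259])

pred = model(fit.x, sn_all)  # apply the half-period fit to the full period
# Compare pred[half:] against ta_all[half:]; a real relationship should survive.
```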
Those are my rules of thumb for identifying what I describe as meaningless curve fitting exercises, Mr. Salvador. And I fear that your exposition above fits all of them but one: you used large data sets. As a result, and sadly, I have no hesitation in identifying your work as not being of value.
Now, Dr. Robert Brown, who posts as rgbatduke in his comments above and whose science-fu is very strong, said that if you dropped the strange cosine stuff, what remained might make sense. Unfortunately, WordPress ate the important parts of his equations, so I’m not clear what he meant; I’ll have to derive the math myself. But in general, I fear that your claimed results are totally spurious.
Finally, someone said that this paper should be retracted. I couldn’t disagree more; this is science in action. I have no problem with bad science being put up and shot down here. I’ve seen some of my own go down in flames.
However, I did like the thought of putting a comment up at the top noting the objections of some commenters; let me consider that. In a way it constitutes some kind of peer review, with the obvious pluses and minuses.
Perhaps just a brief note at the top of the post that there are objections, with links to salient comments … suggestions welcome, although they may be ignored …
w.

RomanM
May 3, 2013 12:18 pm

Nylo is absolutely correct.
If one simply plots the two components of the equation separately, one can see that the cumulative sum of the cosines provides virtually the entire fit to the variation of the temperature anomaly, while the cumulative sum of the powers is pretty close to a straight line. The function cos(148.4258*x) goes through a complete cycle for every change in x of magnitude 2*pi/148.4258 or 0.0423. In effect, it generates “random” values basically matching the residuals from a linear fit of the cumulative sum of the powers.
There is no viable predictive value to the fit.
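That decomposition is easy to check numerically. A minimal sketch, assuming the monthly sunspot series is loaded as a NumPy array sn_all (hypothetical name) and using the constants from the post:

```python
import numpy as np

a = 148.425811533409
b = 0.00022670169089817989
c = 1.3299372454954419
d = -0.011857962851469542

cos_part = d * np.cumsum(np.cos(a * sn_all))        # carries nearly all the wiggle
pow_part = -d * b * np.cumsum(np.power(sn_all, c))  # close to a straight line

# Plot the two parts separately: the "fit" lives almost entirely in cos_part.
```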

Matthew R Marler
May 3, 2013 12:21 pm

TA= d*[Σcos(a*SN)-Σb*SN^c]+e from month 1 to the present
Is there a typo in that equation? cos(a*SN) with a = appx 148 is peculiar. Also, the sum of those numbers from month 1 to present is a single number — to what does it get compared?
How about some estimated standard errors on those parameter estimates, and fewer claimed significant figures?
It looks like totally uninformed post-hoc model fitting, but perhaps a clearer presentation would clear that up.

gnarf
May 3, 2013 1:05 pm

The problem is that you can build a formula giving you any curve from any input. For example, I can build a formula giving the level of the Dow Jones index from the temperature in Arkansas. That is curve fitting… the formula works for the period it was built for… but it does not mean temperature and the Dow Jones are linked… and it will fail miserably at predicting the future.

Bart
May 3, 2013 1:20 pm

son of mulder says:
May 3, 2013 at 4:19 am
“And what was Newton’s physical explanation for the parameter ^2 in the gravitational inverse square law?”
Divergence of the force is zero – empty space is neither a source nor a sink for gravity. That leads ineluctably to a 1/r^2 dependence.
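For readers who want the intermediate step: outside the source, zero divergence plus Gauss’s theorem means the flux of the field through any concentric sphere is the same, which forces the inverse-square fall-off:

$$\oint_{S_r} \mathbf{g}\cdot d\mathbf{A} \;=\; 4\pi r^2\, g(r) \;=\; \text{constant} \quad\Longrightarrow\quad g(r) \propto \frac{1}{r^2}.$$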

Martin Hodgkins
May 3, 2013 1:38 pm

There is nowhere near the tight correlation shown in that top graph. There is far more to it than that, and even publishing this article is a disgrace. Here is SSN and CET (no, not what the article compared). https://sites.google.com/site/ralulacet/sunspots/sunspots-and-cet

May 3, 2013 2:14 pm

I agree with Willis Eschenbach, but isn’t the point being made by R.J. Salvador that it is possible to build a correlation with sunspot numbers this way? As an example of how Anthropogenic Global Warming proponents use statistical methods to build their charts; otherwise, why would R.J. Salvador admit to the process and provide the method?
I’m not a statistician. As an engineer (studying science to improve my understanding), I use precise values at a precise point in time to measure what the values are doing at any precise interval, where they are, and what the relationship is between these values and their upper and lower limits. If I were to use curve fitting to get a desired result, it could result in death or injury.
Having said that, do you think that this chart is curve fitting?
http://thetempestspark.wordpress.com/2013/02/15/average-november-sunspot-number-and-february-minimum-temperature-1875-2012/

E.M.Smith
Editor
May 3, 2013 2:53 pm

HenryP:
Read your links. Couple of comments:
Not seeing the need for a religious reference at all. You don’t have much religion in the article anyway, and it mostly just distracts.
It looks like you are leaning toward a ‘cold catastrophe’ expectation. It isn’t warranted. There is NO global shortage of food, nor shortage of excess growing capacity. Farming is already growing by leaps in Brazil and Africa. Furthermore, about 40% of US corn goes into gas tanks. We can massively increase food supply by not feeding it to cars. (Gasoline can be made from coal, as can alcohol, for centuries to come if needed – but it won’t be, as we are awash in natural gas.)
There are also massive amounts of grains fed to animals. In any real “food crisis” we can eat the grains ourselves instead. It takes about 10 lbs of grain to make one lb of beef, and about 3 pounds to make a pound of pork or chicken. As the meat is mostly water, and the grain is dry, the relative ‘food content’ of each pound is higher in the grain as well. (I can easily eat a one pound steak, but one cup of rice makes more rice than I can possibly eat in one meal… It takes about 1 dry pound of food per person per day, or 1/3 lb / meal, and you are well fed.)
At present, the global problem is over production. (Germany is dumping rye anywhere it can…)
Finally, we can easily swap to more cold tolerant crops. Oats and Barley, for example, grow in just barely defrosted dirt. The big issue isn’t cold, it’s water. Both too much and too little, and wind blowing down crops. (That’s why potatoes work better than wheat in bad times…)
So yes, it’s going to get a bit colder. Yes, far north farmers will have a harder time of it, but no, it won’t matter on a global scale.
http://chiefio.wordpress.com/2013/01/11/grains-and-why-food-will-stay-plentiful/

Chad Wozniak
May 3, 2013 3:28 pm

@Paul Vaughan –
I suspect quite a few regular posters here are fed up with YOUR ignorance/deception.
@thingodonta/kelvin vaughan – Just watch the kleptocrats find a way to tax sunspots – and transits of Mercury – and previously undiscovered comets and asteroids – and supernovas in galaxies 10 billion light years away.
– don’t let the alarmies discourage you.

Paul Vaughan
May 3, 2013 3:37 pm

The random “noise” assumptions made by several critics fail:
http://img32.imageshack.us/img32/347/rx22anim.gif
http://tallbloke.files.wordpress.com/2013/03/scd_sst_q.png
http://img201.imageshack.us/img201/4995/sunspotarea.png
http://imageshack.us/a/img692/3756/c1a6mo.gif
Suggestion: Take some weeks &/or months to more deeply understand and appreciate exactly what Salvador has captured (whether by accidental fluke or trick awareness).

Bart
May 3, 2013 4:31 pm

Personally, I think the cosine term is an unphysical kluge. But the broader point, that the area under the sunspot curves was generally increasing during the time of increasing temperature, is important. It probably explains the overall trend, even as variability is introduced by the system response.