Guest Post by Willis Eschenbach
Are there cycles in the sun and its associated electromagnetic phenomena? Assuredly. What are the lengths of the cycles? Well, there’s the question. In the process of writing my recent post about cyclomania, I came across a very interesting paper entitled “Correlation Between the Sunspot Number, the Total Solar Irradiance, and the Terrestrial Insolation” by Hempelmann and Weber, hereinafter H&W2011. It struck me as a reasonable look at cycles without the mania, so I thought I’d discuss it here.
The authors have used Fourier analysis to determine the cycle lengths of several related datasets: the sunspot count, the total solar irradiance (TSI), the Kiel neutron count (cosmic rays), the geomagnetic aa index, and the Mauna Loa insolation. One of their interesting results is the relationship between the sunspot number and the TSI. I always thought that the TSI rose and fell with the number of sunspots, but as usual, nature is not that simple. Here is their Figure 1:
They speculate that at small sunspot numbers, the TSI increases with the sunspot count. However, when the number of sunspots gets very large, the dimming from the dark spots on the surface of the sun grows faster than the accompanying brightening, so the net irradiance drops. Always more to learn … I’ve replicated their results, and determined that the curve they show is quite close to the Gaussian average of the data.
Next, they give the Fourier spectra for a variety of datasets. I find that for many purposes, there is a better alternative to Fourier analysis for understanding the makeup of a complex waveform or a time series of natural observations. Let me explain the advantages of that alternative, which is called the Periodicity Transform, developed by Sethares and Staley.
I realized in the writing of this post that in climate science we have a very common example of a periodicity transform (PT). This is the analysis of temperature data to give us the “climatology”, which is the monthly average temperature curve. What we are doing is projecting a long string of monthly data onto a periodic space, which repeats with a cycle length of 12. Then we take the average of each of those twelve columns of monthly data, and that’s the annual cycle. That’s a periodicity analysis, with a cycle length of 12.
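To make that concrete, here is a minimal R sketch (using made-up monthly data, not any real temperature record) of the climatology as a period-12 periodicity transform:

```r
# The "climatology" as a periodicity transform with cycle length 12:
# fold the monthly series into a 12-column matrix and average the columns.
set.seed(42)
months <- 1:240                                      # 20 years of monthly data
temps  <- 10 + 8 * sin(2 * pi * months / 12) + rnorm(240, sd = 1)

climatology <- colMeans(matrix(temps, ncol = 12, byrow = TRUE))
round(climatology, 1)     # twelve numbers: the average annual cycle
```

The twelve column means are the annual cycle, and exactly the same fold-and-average works for any other trial cycle length.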
By extension, we can do the same thing for a cycle length of 13 months, or 160 months. In each case, we will get the actual cycle in the data with that particular cycle length.
So given a dataset, we can look at cycles of any length in the data. The larger the swing of the cycle, of course, the more of the variation in the original data that particular cycle explains. For example, the 12-month cycle in a temperature time series explains most of the total variation in the temperature. The 13-month cycle, on the other hand, is basically nonexistent in a monthly temperature time-series.
The same is true of hourly data. We can use a periodicity transform (PT) to look at a 24-hour cycle. Here’s the 24-hour cycle for where I live:

Figure 2. Average hourly temperatures, Santa Rosa, California. This is a periodicity transform of the original hourly time series, with a period of 24.
Now, we can do a “goodness-of-fit” analysis of any given cycle against the original observational time series. There are several ways to measure that. If we’re only interested in a relative index of the fit of cycles of various lengths, we can use the root-mean-square power in the signals. Another would be to calculate the R^2 between the cycle and the original signal. The choice is not critical, because we’re looking for the strongest signal regardless of how it’s measured. I use a “Power Index”, which is the RMS power in the signal divided by the square root of the length of the signal. In the original Sethares and Staley paper, this is called a “gamma correction”. It is a relative measurement, valid only for comparing the cycles within a given dataset.
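Here is a sketch of that index for a single trial cycle length. The function name power_index is mine, for illustration; it follows the same fold-and-average logic as the full periodicity() function listed at the end of this post.

```r
# Power Index for one candidate cycle length: fold the data into a matrix
# with 'period' columns, average the columns to get the cycle shape, then
# measure the spread of that shape, normalized by the cycle length.
power_index <- function(x, period) {
  pad <- ceiling(length(x) / period) * period - length(x)
  m   <- matrix(c(x, rep(NA, pad)), ncol = period, byrow = TRUE)
  cyc <- colMeans(m, na.rm = TRUE)        # the average cycle shape
  sd(cyc, na.rm = TRUE) / sqrt(length(cyc))
}

x <- sin(2 * pi * (1:600) / 12)           # a pure 12-step cycle
power_index(x, 12) > power_index(x, 13)   # the 12-cycle scores far higher
```

A strong true cycle gives column means with a large spread; a cycle length not present in the data gives column means that average out to nearly a flat line.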
So … what are the advantages and disadvantages of periodicity analysis (Figure 2) over Fourier analysis? Advantages first, neither list is exhaustive …
Advantage: Improved resolution at all temporal scales. Fourier analysis only gives the cycle strength at specific periods, and the spacing between those periods changes across the scale. For example, I have 3,174 months of sunspot data. A Fourier analysis of that data gives sine waves with periods of 9.1, 9.4, 9.8, 10.2, 10.6, 11.0, 11.5, and 12.0 years.
Periodicity analysis, on the other hand, has the same resolution at all time scales. For example, in Figure 2, the resolution is hourly. We can investigate a 25-hour cycle as easily and as accurately as the 24-hour cycle shown. (Of course, the 25-hour cycle is basically a straight line …)
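That Fourier grid is easy to verify: an N-point record only offers periods of N/k for whole numbers k. A one-line R check, using the 3,174-month record length quoted above:

```r
# Fourier analysis of an N-point record only offers periods of N/k.
# Near the ~11-year solar cycle, the grid for 3174 months is coarse:
N <- 3174             # months of sunspot data
k <- 22:29            # harmonic numbers bracketing 11 years
round(N / k / 12, 1)  # available periods in years: 12.0, 11.5, ... down to 9.1
```

Those eight values are the only periods Fourier analysis offers in the whole nine-to-twelve-year band, while the periodicity transform can test every month-length in between.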
Advantage: A more fine-grained dataset gives better resolution. The resolution of the Fourier Transform is a function of the length of the underlying dataset. The resolution of the PT, on the other hand, is given by the resolution of the data, not the length of the dataset.
Advantage: Shows actual cycle shapes, rather than sine waves. In Figure 2, you can see that the cycle with a periodicity of 24 is not a sine wave in any sense. Instead, it is a complex repeating waveform. And often, the shape of the waveform resulting from the periodicity transform contains much valuable information. For example, in Figure 2, from 6 AM until noon, we can see how the increasing solar radiation results in a surprisingly linear increase of temperature with time. Once that peaks, the temperature drops rapidly until 11 PM. Then the cooling slows, and continues (again surprisingly linearly) from 11 PM until sunrise.
As another example, suppose that we have a triangle wave with a period of 19 and a sine wave with a period of 17. We add them together, and we get a complex wave form. Using Fourier analysis we can get the underlying sine waves making up the complex wave form … but Fourier won’t give us the triangle wave and the sine wave. Periodicity analysis does that, showing the actual shapes of the waves just as in Figure 2.
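Here is a small R sketch of that example (synthetic triangle and sine, periods 19 and 17): folding the summed signal at period 19 over whole joint periods averages the period-17 sine to zero and hands back the triangle’s actual shape.

```r
# A period-19 triangle plus a period-17 sine. Folding at period 19 over
# whole joint periods averages the sine away and recovers the triangle.
t   <- 0:(19 * 17 * 6 - 1)              # six full joint periods
tri <- abs((t %% 19) - 9)               # triangle wave, period 19
sig <- tri + sin(2 * pi * t / 17)       # the complex summed waveform

recovered <- colMeans(matrix(sig, ncol = 19, byrow = TRUE))
max(abs(recovered - tri[1:19]))         # essentially zero
```

Fourier analysis would instead report the triangle as a fundamental sine plus a ladder of harmonics.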
Advantage: Can sometimes find cycles Fourier can’t find. See the example here, and the discussion in Sethares and Staley.
Advantage: No “ringing” or aliasing from end effects. Fourier analysis suffers from the problem that the dataset is of finite length. This can cause “ringing” or aliasing when you go from the time domain to the frequency domain. Periodicity analysis doesn’t have these issues.
Advantage: Relatively resistant to missing data. As H&W2011 state, they had to use a variant of the Fourier transform to analyze the data because of missing values. The PT doesn’t care about missing data; missing values merely widen the error bars.
Advantage: Cycle strengths are actually measured. If the periodicity analysis says that there’s no strength in a certain cycle length, that’s not a theoretical statement. It’s a measurement of the strength of that actual cycle compared to the other cycles in the data.
Advantage: Computationally reasonably fast. The periodicity function posted below, written in the computer language “R” and running on my machine (a MacBook Pro), does a full periodicity transform (all cycles up to 1/3 the dataset length) on a dataset of 70,000 data points in about forty seconds. It could probably be sped up; all suggestions accepted, as my programming skills in R are … well, not impressive.
Disadvantage: Periodicity cycles are neither orthogonal nor unique. There’s only one big disadvantage, which applies to the decomposition of the signal into its cyclical components. With the Fourier Transform, the sine waves that it finds are independent of each other. When you decompose the original signal into sine waves, the order in which you remove them makes no difference. With the Periodicity Transform, on the other hand, the signals are not independent. A signal with a period of ten years, for example, will also appear at twenty and thirty years and so on. As a result, the order in which you decompose the signal becomes important. See Sethares and Staley for a full discussion of decomposition methods.
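A quick R sketch shows that non-orthogonality with a synthetic period-10 signal:

```r
# A pure period-10 signal also shows up, full strength, at period 20:
# the period-20 "cycle" is just two copies of the period-10 cycle.
pattern <- c(3, 1, 4, 1, 5, 9, 2, 6, 5, 3)
x   <- rep(pattern, times = 60)                       # 600 points, period 10
p10 <- colMeans(matrix(x, ncol = 10, byrow = TRUE))
p20 <- colMeans(matrix(x, ncol = 20, byrow = TRUE))

all(p20 == c(p10, p10))   # TRUE: the 10-cycle reappears doubled at 20
```

This is why the order of decomposition matters: once the period-10 component is removed, the apparent period-20 signal vanishes with it.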
A full periodicity analysis looks at the strength of the signal at all possible cycle lengths up to the longest practical length, which for me is a third of the length of the dataset. That gives three full cycles for the longest period. However, I don’t trust the periods at the longest end of the scale as much as those at the shorter end. The margin of error in a periodicity analysis is smaller for the shorter cycles, because they are averaged over more repetitions.
So to begin the discussion, let me look at the Fourier Transform and the Periodicity Transform of the SIDC sunspot data. In the H&W2011 paper they show the following figure for the Fourier results:
Figure 3. Fourier spectrum of SIDC daily sunspot numbers.
In this, we’re seeing the problem of the lack of resolution in the Fourier Transform. The dataset is 50 years in length, so the only periods available to the Fourier analysis are 50/2 years, 50/3 years, 50/4 years, and so on. Near the solar cycle, this only gives values at cycle lengths of around 12.5, 10, and 8.3 years. As a result, it’s missing what’s actually happening. The Fourier analysis doesn’t catch the fine detail revealed by the periodicity analysis.
Figure 4 shows the full periodicity transform of the monthly SIDC sunspot data, showing the power contained in each cycle length from 3 to 88 years (a third of the dataset length).
Figure 4. Periodicity transform of monthly SIDC sunspot numbers. The “Power Index” is the RMS power in the cycle divided by the square root of the cycle length. Vertical dotted lines show the eleven-year cycles, vertical solid lines show the ten-year cycles.
This graph is a typical periodicity transform of a dataset containing clear cycles. The length of the cycles is shown on the bottom axis, and the strength of the cycle is shown on the vertical axis.
Now as you might expect in a sunspot analysis, the strongest underlying signal is an eleven-year cycle. The second strongest is a ten-year cycle. As mentioned above, these same cycles reappear at 20 and 22 years, 30 and 33 years, and so on. However, it is clear that the main periodicity in the sunspot record is in the cluster of periods right around the 11-year mark. Figure 5 shows a closeup of the cycle lengths from nine to thirteen years:
Figure 5. Closeup of Figure 4, showing the strength of the cycles with lengths from 9 years to 13 years.
Note that in place of the single peak at around 11 years shown in the Fourier analysis, the periodicity analysis shows three clear peaks at 10 years, 11 years, and 11 years 10 months. Also, you can see the huge advantage in accuracy of the periodicity analysis over the Fourier analysis. It samples the actual cycles at a resolution of one month.
Now, before anyone points out that 11 years 10 months is the orbital period of Jupiter, yes, it is. But then ten years, and eleven years, the other two peaks, are not the orbital period of anything I know of … so that may or may not be a coincidence. In any case, it doesn’t matter whether the 11 years 10 months is Jupiter or not, any more than it matters if 10 years is the orbital period of something else. Those are just the frequencies involved to the nearest month. We’ll see below why Jupiter may not be so important.
Next, we can take another look at the sunspot data, but this time using daily sunspot data. Here are the cycles from nine to thirteen years in that dataset.
Figure 6. As in figure 5, except using daily data.
In this analysis, we see peaks at 10.1, 10.8, and 11.9 years. This analysis of daily data is much the same as the previous analysis of monthly data shown in Figure 5, albeit with greater resolution. So this should settle the size of the sunspot cycles and enshrine Jupiter in the pantheon, right?
Well … no. We’ve had the good news, here’s the bad news. The problem is that like all natural cycles, the strength of these cycles waxes and wanes over time. We can see this by looking at the periodicity transform of the first and second halves of the data individually. Figure 7 shows the periodicity analysis of the daily data seen in Figure 6, plus the identical analysis done on each half of the data individually:
Figure 7. The blue line shows the strengths of the cycles found using the entire sunspot dataset as shown in Figure 6. The other two lines are the cycles found by analyzing half of the dataset at a time.
As you can see, the strengths of the cycles of various lengths in each half of the dataset are quite dissimilar. The half-data cycles each show a single peak, not several. In one half of the data this is at 10.4 years, and in the other, 11.2 years. The same situation holds for the monthly sunspot half-datasets (not shown). The lengths of the strongest cycles in the two halves vary greatly.
Not only that, but in neither half is there any sign of the signal at 11 years 10 months, the purported signal of Jupiter.
As a result, all we can do is look at the cycles and marvel at the complexity of the sun. We can’t use the cycles of one half to predict the other half; that is the eternal curse of those who wish to make cycle-based models of the future. Cycles appear and disappear, and what seems to point to Jupiter changes and points to Saturn or to nothing at all … and meanwhile, if the fixed Fourier cycle lengths are, say, 8.0, 10.6, and 12.8 years, Fourier analysis can barely distinguish between any of those situations.
However, I was unable to replicate all of their results regarding the total solar irradiance. I suspect that this is the result of the inherent inaccuracy of the Fourier method. The text of H&W2011 says:
4.1. The ACRIM TSI Time Series
Our analysis of the ACRIM TSI time series only yields the solar activity cycle (Schwabe cycle, Figure 6). The cycle length is 10.6 years. The cycle length of the corresponding time series of the sunspot number is also 10.6 years. The close agreement of both periods is obvious.
I suggest that the agreement at 10.6 years is an artifact of the limited resolution of the two Fourier analyses. The daily ACRIM dataset is a bit over 30 years, and the daily sunspot dataset they used is 50 years of data. The Fourier periods for fifty years are 50/2=25, 50/3=16.7, 50/4=12.5, 50/5=10, and 50/6=8.3 years. For a thirty-two year dataset, the periods are 32/2=16, 32/3≈10.7, and 32/4=8 years. So finding a cycle of length around 10 in both datasets is not surprising.
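The two Fourier period grids are quick to list in R, using the record lengths quoted above:

```r
# Fourier period grids (record length / k, in years) for the two records.
sort(50 / (2:6))   # 50-year sunspot record: ~ 8.3 10.0 12.5 16.7 25.0
sort(32 / (2:4))   # ~32-year ACRIM record:  ~ 8.0 10.7 16.0
```

Each grid puts one point in the 10-to-10.7-year range, so agreement between the two analyses around that value carries little information.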
In any case, I don’t find anything like the 10.6 year cycle they report. I find the following:
Figure 8. Periodicity analysis of the ACRIM composite daily total solar irradiance data.
Note how much less defined the TSI data is. This is a result of the large variation in TSI during the period of maximum solar activity. Figure 9 shows this variation in the underlying data for the TSI:
Figure 9. ACRIM composite TSI data used in the analysis.
When the sun is at its calmest, there is little variation in the signal. This is shown in the dark blue areas in between the peaks. But when activity increases, the output begins to fluctuate wildly. This, plus the short length of the cycle, turns the signal into mush and results in the loss of everything but the underlying ~ 11 year cycle.
Finally, let’s look at the terrestrial temperature datasets to see if there is any trace of the sunspot cycle in the global temperature record. The longest general temperature dataset that we have is the BEST land temperature dataset. Here’s the BEST periodicity analysis:
Figure 10. Full-length periodicity analysis of the BEST land temperature data.
There is a suggestion of a cycle around 26 years, with an echo at 52 years … but nothing around 10-11 years, the solar cycle. Moving on, here’s the HadCRUT3 temperature data:
Figure 11. Full-length periodicity analysis of the HadCRUT3 temperature record.
Curiously, the HadCRUT3 record doesn’t show the 26- and 52-year cycle shown by the BEST data, while it does show a number of variations not shown in the BEST data. My suspicion is that this is a result of the “scalpel” method used to assemble the BEST dataset, which cuts the records at discontinuities rather than trying to “adjust” them.
Of course, life wouldn’t be complete without the satellite records. Here are the periodicity analyses of the satellite records:
Figure 12. Periodicity analysis, RSS satellite temperature record, lower troposphere.
With only a bit more than thirty years of data, we can’t determine any cycles over about ten years. The RSS data server is down, so it’s not the most recent data.
Figure 13. Periodicity analysis, UAH satellite temperature record, lower troposphere.
As one might hope, both satellite records are quite similar. Curiously, they both show a strong cycle with a period of 3 years 8 months (along with the expected echoes at twice and three times that length, about 7 years 4 months and 11 years respectively). I have no explanation for that cycle. It may represent some unremoved cyclicity in the satellite data.
SUMMARY:
To recap the bidding:
• I’ve used the Periodicity Transform to look at the sunspot record, both daily and monthly. In both cases we find the same cycles, at ~ 10 years, ~ 11 years, and ~ 11 years 10 months. Unfortunately when the data is split in half, those cycles disappear and other cycles appear in their stead. Nature wins again.
• I’ve looked at the TSI record, which contains only a single broad peak from about 10.75 to 11.75 years.
• The TSI has a non-linear relationship to the sunspots, increasing at small sunspot numbers and decreasing at high numbers. However, the total effect (averaged 24/7 over the globe) is on the order of a quarter of a watt per square metre …
• I’ve looked at the surface temperature records (BEST and HadCRUT3), which show no peaks at around 10-11 years, and thus contain no sign of Jovian (or jovial) influence. Nor do they show any sign of solar (sunspot or TSI) related influence, for that matter.
• The satellite temperatures tell the same story. Although the data is too short to be definitive, there appears to be no sign of any major peaks in the 10-11 year range.
Anyhow, that’s my look at cycles. Why isn’t this cyclomania? For several reasons:
First, because I’m not claiming that you can model the temperature by using the cycles. That way lies madness. If you don’t think so, calculate the cycles from the first half of your data, and see if you can predict the second half. Instead of attempting to predict the future, I’m looking at the cycles to try to understand the data.
Second, I’m not blindly ascribing the cycles to some labored astronomical relationship. Given the number of lunar and planetary celestial periods, synodic periods, and the periods of precessions, nutations, perigees, and individual and combined tidal cycles, almost any length of cycle can be explained.
Third, I’m using the same analysis method to look at the temperature data that I’m using on the solar phenomena (TSI, sunspots), and I’m not finding corresponding cycles. Sorry, but they are just not there. Here’s a final example. The most sensitive, responsive, and accurate global temperature observations we have are the satellite temperatures of the lower troposphere. We’ve had them for three full solar cycles at this point. So if the sunspots (or anything associated with them, TSI or cosmic rays) has a significant effect on global temperatures, we would see it in the satellite temperatures. Here’s that record:
Figure 14. A graph showing the effect of the sunspots on the lower tropospheric temperatures. There is a slight decrease in lower tropospheric temperature with increasing sunspots, but it is far from statistical significance.
The vagaries of the sun, whatever else they might be doing, and whatever they might be related to, do not seem to affect the global surface temperature or the global lower atmospheric temperature in any meaningful way.
Anyhow, that’s my wander through the heavenly cycles, and their lack of effect on the terrestrial cycles. My compliments to Hempelmann and Weber, their descriptions and their datasets were enough to replicate almost all of their results.
w.
DATA:
SIDC Sunspot Data here
ACRIM TSI Data, overview here, data here
Kiel Neutron Count Monthly here, link in H&W document is broken
BEST data here
Sethares paper on periodicity analysis of music is here.
Finally, I was unable to reproduce the H&W2011 results regarding MLO transmissivity. They have access to a daily dataset which is not on the web. I used the monthly MLO dataset, available here, and had no joy finding their claimed relationship with sunspots. Too bad, it’s one of the more interesting parts of the H&W2011 paper.
CODE: here’s the R function that does the heavy lifting. It’s called “periodicity” and it can be called with just the name of the dataset that you want to analyze, e.g. “periodicity(mydata)”. It has an option to produce a graph of the results. Everything after a “#” on a line is a comment. If you are running MATLAB (I’m not), Sethares has provided programs and examples here. Enjoy.
# The periodicity function returns the power index showing the relative strength
# of the cycles of various lengths. The input variables are:
#   tdata: the data to be analyzed
#   runstart, runend: the interval to be analyzed. By default from a cycle length of 2 to the dataset length / 3
#   doplot: a boolean to indicate whether a plot should be drawn
#   gridlines: interval between vertical gridlines, plot only
#   timeint: intervals per year (e.g. monthly data = 12), plot only
#   maintitle: title for the plot
periodicity = function(tdata, runstart = 2, runend = NA, doplot = FALSE,
                       gridlines = 10, timeint = 12,
                       maintitle = "Periodicity Analysis") {
  testdata = as.vector(tdata)            # ensure data is a vector
  datalen = length(testdata)             # get data length
  if (is.na(runend)) {                   # if largest cycle is not specified
    maxdata = floor(datalen / 3)         # set it to the data length over three
  } else {                               # otherwise
    maxdata = runend                     # set it to user's value
  }
  answerline = matrix(NA, nrow = maxdata, ncol = 1)  # make empty matrix for answers
  for (i in runstart:maxdata) {          # for each cycle length
    # pad with NA's so the data reshapes into whole rows of length i
    newdata = c(testdata, rep(NA, ceiling(datalen / i) * i - datalen))
    # fold into a matrix with i columns and take column means (the cycle shape)
    cyclemeans = colMeans(matrix(newdata, ncol = i, byrow = TRUE), na.rm = TRUE)
    # calculate and store the power index
    answerline[i] = sd(cyclemeans, na.rm = TRUE) / sqrt(length(cyclemeans))
  }
  if (doplot) {                          # if a plot is called for
    par(mgp = c(2, 1, 0))                # set locations of labels
    timeline = (1:length(answerline)) / timeint  # cycle lengths in years
    plot(answerline ~ timeline, type = "o", cex = .5, xlim = c(0, maxdata / timeint),
         xlab = "Cycle Length (years)", ylab = "Power Index")  # draw plot
    title(main = maintitle)              # add title
    abline(v = seq(0, 100, gridlines), col = "gray")  # add gridlines
  }
  answerline                             # return periodicity data
}
Leif graph sunspots versus temp. going back to 1600 ad . Show us the no connection.
Dr Norman Page says:
July 31, 2013 at 12:26 pm:
“Henry Clark Your numbers look very good to me. They would fit well with my cooling forecast at http://climatesense-norpag.blogspot.com – latest post.”
Thanks and agreed (especially with points 1-3 and 8-9 in the forecast summary).
Salvatore Del Prete says:
July 31, 2013 at 12:19 pm
“Henry, Willis does not have a clue.”
Well, I’m hoping he replies to my scatterplot post if online later today, especially since the time involved in verifying the short premade spreadsheet would be much less than he must have spent creating the original graphs. We’ll see.
@HenryP
Indeed.
Salvatore Del Prete says:
July 31, 2013 at 1:00 pm
Leif graph sunspots versus temp. going back to 1600 ad . Show us the no connection.
It is usually the one who claims a connection that has to show there is one. But to help you along here are some:
http://www.leif.org/research/On-Becoming-a-Scientist.pdf slide 19
http://www.leif.org/research/Does%20The%20Sun%20Vary%20Enough.pdf slide 20
http://www.leif.org/research/Temp-Track-Sun-Not.png the reddish upper curves show solar activity
Henry: “The article doesn’t state the software used to create that plot.” He clearly states he’s using R. R is a better and more open choice than Excel for several reasons, including: 1) it’s free and runs on Windows, Macs, Linux, and most any other system you may choose, and 2) R is a programming language so your work can be clearly and unambiguously communicated. A spreadsheet hides your calculations and forces us to click through cell-by-cell and graph-by-graph to trace out what you did.
I’d also note that Excel does not display things like confidence intervals. Point estimates are nice, but their use can be misleading. In your case, especially misleading, since you are using time series data which is serially correlated (the data in year 2000 is not independent of the data in year 1999 or year 2001). Your “unbiased” Excel regression assumes that the data is not correlated — essentially that it is not from a real-world time series.
If you can get Excel to calculate 95% confidence intervals (95% CI), you’d find that the naive 95% CI — ignoring serial correlation — is (-0.0243158, -0.0044745) which gives an idea of the plausible range of the slope and would indicate that it’s definitely negative, though perhaps larger or smaller than your single number says.
The problem is that your data is serially correlated. If you use a regression technique that takes this into account, you’ll find that the 95% CI bounds increase: the naive CI is overconfident because the way Excel calculates the regression assumes that the data is not correlated from year to year, when in fact it is.
In fact, with correct processing, the 95% CI enlarges to (-0.0214861, 0.0057255), with a central value of -0.0078803, which is bad news for you in two ways. First, the central point — if you’re naive enough to go with it — is half what Excel calculated. Second, since the CI includes zero and positive values, you cannot say with certainty that the slope actually is negative. Your regression is meaningless.
Excel’s not a good idea for many reasons, including the fact that it obscures its calculations and encourages the naive application of improper procedures to data.
Leif Svalgaard says:
July 31, 2013 at 12:51 pm
“What you have been talking about is not the Sun and has no applicability to solar activity ‘cycles’.”
Wrong.
“You are watering down the concept to the point where it is void of meaning.”
Wrong.
Bart says:
July 31, 2013 at 1:33 pm
Wrong.
Wrong.
See, you don’t get it.
Bart says:
July 31, 2013 at 1:33 pm
Wrong, etc.
To be more constructive: instead of your whining, it would be of interest if you could show some skill and demonstrate that the treatment in http://www.ann-geophys.net/26/231/2008/angeo-26-231-2008.pdf is equivalent to your notions.
Leif Svalgaard says:
July 31, 2013 at 1:40 pm
It trivially is. That just goes to show how little you understand it. I’ve tried to guide you to the path, but all I get is abuse, so what really is the point? I could give you stone tablets etched by the Creator herself, and you would still find some reason to carp.
milodonharlani says:
July 31, 2013 at 12:58 pm
====================================
No, I meant that plate tectonics fail to some extent as a mechanism for continental drift since it must still be explained why the plates move around. So plate tectonics beg the question as to why continents drift. –AGF
Wayne says:
July 31, 2013 at 1:23 pm
“Henry: “The article doesn’t state the software used to create that plot.” He clearly states he’s using R.”
R code for the specific plot I am talking about, the scatterplot, is not shown.
Perhaps he made not just some but all of the plots with R code.
Even if so, I can’t inspect and debug code which I haven’t seen, which isn’t in an uploaded file.
Wayne says:
July 31, 2013 at 1:23 pm
“I’d also note that Excel does not display things like confidence intervals. […]”
There is not a 99% or a 95% confidence interval on that particular trendline.
Checking, I did remember to include the word “around” to highlight how that numerical example is not precise. Perhaps I should have added additional qualifiers.
What I am aiming to point out, to argue, is not a refined formal analysis of the climate sensitivity to high precision. (If going for that, I would start by spending multiple times as long, to convert inconvenient monthly temperature data into suitable form and extend the dataset past 2002 with much additional work).
But I am illustrating how the scatterplot of cosmic rays (neutron count) versus temperature is vastly different from Eschenbach’s scatterplot, as would be blatant from direct visual inspection even if not adding the trendline highlight at all. And I am illustrating how a view that solar/cosmic-ray variation has essentially zero effect (the “hundredths of a degree” quote) would require believing every correlation in every related plot in http://s18.postimg.org/54ua3q255/gcrtempp.gif and http://s18.postimg.org/l3973i6hk/moreadded.jpg were probably all mere coincidences (which is way wrong).
Bart says:
July 31, 2013 at 1:56 pm
It trivially is.
In which case it should be trivial to demonstrate for the unwashed masses [like me] that it is and that their fit [coefficients, periods, etc] is the same as yours. I’m waiting. I would really like to know.
Bart
Because it is the result of a difference of similarly sized terms
To me it seems more fundamental. Leif views this as amplitude modulation, as far as I could tell from a previous discussion. A real 100 yr signal amplitude modulates a real 11 yr signal. This should produce symmetrical sidebands but it doesn’t. By real I mean fundamental, independent, not a result of other cycles like the beats.
Your explanation seems totally different and makes much more sense.
Leif
Keep on denying the Hale cycle, fine with me.
Willis Eschenbach says: July 31, 2013 at 9:55 am
Willis, do not be so naïve in your responses.
Very likely everybody reading these posts understood that you and Anthony have tried to defame my research simply because you both do not understand it, or for some other reason I do not know. And you are still insisting in your falsehoods that my papers do not contain enough information about the data and the methodologies that I have used.
I see from your answer that you both now are understanding that your defamation attempt started to backfire.
So, let me know. Do you confirm that my papers do NOT contain ANY information about the data and the methodologies that I used, or are you willing to acknowledge that perhaps the truth is that you do not have enough scientific expertise to understand which data (which are freely available in Internet and properly referenced in the paper) and methodologies (that can be obtained from common textbooks of analysis) I have used in my paper?
In my previous post
Nicola Scafetta says: July 30, 2013 at 1:05 am
http://wattsupwiththat.com/2013/07/29/cycles-without-the-mania/#comment-1374995
I pointed out such macroscopic errors in your posts that I cannot but confirm that “Anthony published another piece of junk written by Willis”.
REPLY: Nicola, there’s an old saying. You can catch more flies with honey than you can from vinegar.
If you want to believe that challenging a paper on its merits is defamation, that’s your right. I also have the right to ignore you based on your claims being mostly emotional projection. – Anthony
REPLY FROM WILLIS: Nicola, you say:
That’s an example of the “Fallacy of the Excluded Middle”. You say the choices are a) either your papers contain NO information about data and methods, or b) I don’t understand your papers. There is a third possibility, which is that the papers contain SOME BUT NOT ENOUGH of the information about the data and methods.
The problem is, you haven’t revealed your computer code and your data. I know you didn’t do that work with a slide rule. So somewhere, you took a computer and applied some kind of mathematical transformations to a dataset.
What I and others have asked for all along is the code for those transformations, and the dataset you applied that code to. If it was done in a spreadsheet, then the spreadsheet contains both data and code. We need to see that spreadsheet. If it’s done in a computer language, the code and data are separate. In that case, we need to see the actual code used, and the actual data used.
In response to repeated requests for your code and data, you keep trying to make the issue your papers and methods. The papers and methods aren’t the problem. Lack of code and data is the problem. You applied actual code to an actual dataset to get your actual results. To replicate and verify those results, we need that actual code and the actual dataset. Until you provide those, you’re not following the scientific method.
Best regards,
w.
@Leif
Not referring to oscillators – any irregular (but repeated) pattern will have a related set of harmonics when transformed into the frequency domain. Simple symmetries can lead to particular patterns.
But I agree that it is a complex, non-linear system – which is why I expect 1/f noise as my primary hypothesis (which is commonly associated with complex nonlinear systems, e.g. climate). Interesting to explore whether the subharmonics have anything behind them but not convinced at the moment.
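For readers following the harmonics point: a minimal numerical sketch (my own toy signal, not any of the thread's datasets) of how a repeated but non-sinusoidal pattern produces spectral lines at integer multiples of its repeat frequency, even though nothing in the system "oscillates" at those higher frequencies.

```python
import numpy as np

# Toy illustration: a sawtooth that repeats at f0 = 2 cycles per unit
# time. Within one cycle the shape is "irregular" (a ramp, not a sine),
# but the cycle repeats exactly, so the spectrum carries harmonics.
fs = 1000.0                       # samples per unit time (assumed)
t = np.arange(0.0, 100.0, 1.0 / fs)
f0 = 2.0                          # repeat frequency of the pattern
signal = np.mod(t * f0, 1.0)      # sawtooth wave

mag = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

# The five strongest lines (excluding the DC term) fall on f0 and its
# integer multiples, with amplitudes falling off as 1/n.
idx = np.argsort(mag[1:])[::-1][:5] + 1
peaks = sorted(freqs[idx].tolist())
print(peaks)                      # [2.0, 4.0, 6.0, 8.0, 10.0]
```

The same mechanism is why a Fourier spectrum of any sharply shaped repeating cycle shows a ladder of lines rather than a single peak.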
Bart: My demonstration above was not a model of the Sun, but a model of a simple system to explain a concept: that a change of state variables can produce an equivalent system representation which provides more insight into the fundamental properties of the system. And, when I say equivalent, I mean precisely equivalent. There exists an equivalent dynamical representation of the equations with which you are familiar which reduces to one dominated by two modes, one with a resonance period of 20 years, and one with a resonance period of 23.6 years.
Where did you show equivalent mathematical models of the sun? Sorry if I missed it.
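As an aside on the two periods quoted above: whatever the underlying dynamics, superposing two equal-amplitude modes at 20 and 23.6 years (amplitudes are my assumption, purely for illustration) beats with a period of 1 / (1/20 − 1/23.6), roughly 131 years. A quick sketch:

```python
import numpy as np

# Two assumed unit-amplitude modes with the periods mentioned above.
t = np.arange(0.0, 1000.0, 0.1)                  # years
x = np.cos(2 * np.pi * t / 20.0) + np.cos(2 * np.pi * t / 23.6)

# Beat period of the pair: 1 / (1/20 - 1/23.6) = 472/3.6 years.
beat_period = 1.0 / (1.0 / 20.0 - 1.0 / 23.6)
print(round(beat_period, 1))                     # about 131.1 years

# The envelope collapses every half beat period (~65.6 years): the
# combined signal is near zero there even though both modes persist.
near_null = np.abs(x[(t > 63.5) & (t < 67.5)]).max()
```

That century-scale envelope is the kind of long "cycle" a superposition of two decadal modes can mimic without any century-scale driver.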
Nicola Scafetta: Very likely everybody reading these posts understood that you and Anthony have tried to defame my research simply because you both do not understand it, or for some other reason I do not know.
Count me as a counterexample to your “everybody” claim. They are merely critics of your research results. As I have written before, if the next 20 years’ worth of data conform closely enough to your model, then your model will have survived “stringent tests” and will be a candidate for the claim of “truth”, or at least “accuracy”. Till then, I counsel patience and at least outward civility instead of whining.
Pamela Gray
Glad to see you participating in a positive way. I agree with you that it takes significant energy and time to change our global climate. Leif, too, thinks it is factors other than the sun that are behind the cooling every time sunspot numbers go to zero or low levels. He could be right. Yet no one seems to have come up with an alternative mechanism with wide scientific support to explain these regular coolings.
The cooling of temperatures that happened during the Maunder Minimum and during the other two minimums did not happen overnight. The sunspots were declining from probably 1600 on, reached zero or very low levels by 1645, CET temperatures started to decline by 1650 and did not bottom out until about 1690. The whole period was about 70 years. AMO and PDO were both positive, so ocean cycles seemed not to be behind the cooling. There was some volcanic activity [6 eruptions]. Cooling during the other two major minimums happened over periods of some 30 years of low solar activity. So these are small changes over long periods. The cooling only starts about 7–10 years after the last year of the solar maximum period, i.e. after long low-solar periods or cycles.
So if you can throw more light on this here on this blog, I would love to read it. If I were a solar scientist, I would be speaking to my fellow quantum physicists, because I think the sun puts out other, perhaps quantum-level, energies that may cause extra warming, or result in cooling when these energies are absent during periods of few if any solar eruptions and zero or minimal spots [personal view only].
lgl says:
July 31, 2013 at 2:28 pm
“This should produce symmetrical sidebands but it doesn’t.”
Yes, I agree. This appears to be two separable processes superimposing to create beats.
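The distinction is easy to check numerically. A sketch with arbitrary made-up frequencies (none of these numbers come from the thread's data): genuine amplitude modulation places symmetric sidebands around the carrier, while a simple sum of two independent tones leaves only the two original lines.

```python
import numpy as np

fs, T = 100.0, 200.0
t = np.arange(0.0, T, 1.0 / fs)
fc, fm = 10.0, 1.0                       # assumed carrier and modulator

# One process modulating another vs. two processes merely added:
modulated = (1.0 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
superposed = np.cos(2 * np.pi * fc * t) + np.cos(2 * np.pi * fm * t)

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def peak_freqs(x, n):
    """Frequencies of the n strongest spectral lines, excluding DC."""
    mag = np.abs(np.fft.rfft(x))
    idx = np.argsort(mag[1:])[::-1][:n] + 1
    return sorted(freqs[idx].tolist())

print(peak_freqs(modulated, 3))   # [9.0, 10.0, 11.0] -- carrier + sidebands
print(peak_freqs(superposed, 2))  # [1.0, 10.0] -- just the two tones
```

So a spectrum without symmetric sidebands is consistent with two separable processes superimposing, as suggested above.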
Leif Svalgaard says:
July 31, 2013 at 9:32 am
“A sure sign of a spurious correlation is that it holds for a while, but then falters when new data is added. This is the case here. The correlation in question also fails going back in time,”
The J-E-V syzygies stay in phase with the sunspot cycles over four centuries. Nothing has faltered.
Henry: “There is not a 99% or a 95% confidence interval on that particular trendline.”
Willis may have made a mistake in his calculations. I’m not arguing one way or the other on that point. You may well be 100% correct in that regard. But the regression you did doesn’t make your point at all.
And your trendline most certainly does have a 95% CI. To be more specific, the coefficient of the slope has a 95% CI and it’s critical that we know this in order to judge whether the point estimate you gave is actually meaningful or not.
Excel does not give us this value, so neither did you. Based on that alone, your regression fails to prove your point. But more importantly, if Excel had given us this value, it would have been wrong. When calculated correctly — as I did in my previous post — the slope of the line cannot be said to be different from zero with confidence. So your point fails: you have not proven that the line has a large negative value instead of what Willis called a tiny value.
He may be wrong, but you did not prove it and your Excel example is doubly misleading.
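For anyone who wants to see the slope-CI point in action, here is a minimal sketch. The data are deliberately artificial (a deterministic alternating series standing in for trendless noise, not the thread's actual series): ordinary least squares still returns a nonzero slope, but the 95% confidence interval on that slope straddles zero, which is exactly the information a bare Excel trendline omits.

```python
import numpy as np
from scipy import stats

# A trendless stand-in series: y = cos(pi*t) alternates +1, -1, +1, ...
t = np.arange(100.0)
y = np.cos(np.pi * t)

res = stats.linregress(t, y)              # OLS fit: slope, stderr, etc.

# 95% CI on the slope: point estimate +/- t-critical * standard error,
# with n - 2 degrees of freedom.
half_width = stats.t.ppf(0.975, t.size - 2) * res.stderr
lo, hi = res.slope - half_width, res.slope + half_width
print(f"slope = {res.slope:.5f}, 95% CI = [{lo:.5f}, {hi:.5f}]")
assert lo < 0.0 < hi                      # zero lies inside the interval
```

The fitted slope is a specific nonzero number, yet the interval shows it cannot be distinguished from zero. Quoting the point estimate alone, as a chart trendline does, proves nothing about whether a trend exists.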
Salvatore Del Prete says:
July 31, 2013 at 12:27 pm
The above post is my argument, Jim Arndt. I have an explanation with specifics. I will argue this against any theory that is out there. I would be more than glad to have an interview.
That is just hand waving. There is no substance to it. You are just calling out things with no explanation of why they happen, and no mechanism. I can say anything I want, but without why it happens, or data to back it up, it is just a strawman. Like I said, do a guess post with data, sources and mechanism so we can all see how this is possible.
UHHG Do A Guest Post …fat fingers
Some bright mind figured out a long time ago that the energy needed to raise the Sun above the horizon could not be coming from a virgin in a cave. That kind of thinking applies quite well to this thread.
There appears to be some discussion of some kind of “stuff” that good ol’ Sol sends our way that we haven’t measured yet because we don’t know it exists (quantum-level energies????). Irrelevant. As are the discussions of cosmic rays, magnetic changes, UV, etc. The proper way to search for an agent or driver is to determine how much energy is needed to create a change in weather patterns that leads to warming or cooling trends (you can’t have warming or cooling shifts without changes in the weather patterns that produce them). Then look for that energy (guess how the periodic table was filled out?). If the total TSI energy change that comes from the Sun does not rise to the required amount, you can stop looking for some subcomponent of the total energy available from the solar spectrum as a driver of temperature regime shifts on Earth.
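A back-of-envelope version of that energy bookkeeping, with round assumed numbers (my own, not measurements from this thread): take the full solar-cycle TSI swing, spread it over the sphere, and subtract what is reflected.

```python
# Assumed round numbers for a sanity check, not precise measurements.
TSI = 1361.0          # W/m^2, total solar irradiance at 1 AU
dTSI = 1.0            # W/m^2, approximate peak-to-trough solar-cycle swing
albedo = 0.3          # Earth's Bond albedo (assumed)

# Intercepted power is spread over 4x the intercepting disk area, and
# roughly 30% is reflected straight back to space.
forcing = dTSI * (1.0 - albedo) / 4.0
print(f"global-mean forcing from the solar cycle: {forcing:.3f} W/m^2")
print(f"as a fraction of TSI: {dTSI / TSI:.2%}")
```

Under these assumptions the whole solar cycle works out to a global-mean forcing of under 0.2 W/m², which is the yardstick against which any proposed solar subcomponent has to be measured.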
That should really end the conversation. That it doesn’t leaves me gobsmacked. Essentially, the folks here whose upturned noses have vaulted them to self-proclaimed positions as all-knowing Sun enthusiasts believe in butterfly disturbances leading to hurricanes.
Bullshit.