Guest Post by Willis Eschenbach
Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).
A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.
Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at the longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.
So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.
Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.
For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order, and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and we average all of the even data, it will give us the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:
Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.
As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus/minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3.
Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”
Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:
Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.
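The numbering-and-averaging step just described is easy to sketch in R (the language used in the appendix below). The helper name `avg_cycle` is mine for illustration, not part of the appendix code:

```r
# Sketch: the average cycle of length k in a numeric vector x
# (hypothetical helper; the appendix code does the same job with tapply)
avg_cycle <- function(x, k) {
  pos <- ((seq_along(x) - 1) %% k) + 1  # number the points 1,2,...,k,1,2,...
  tapply(x, pos, mean)                  # average all points with the same number
}

# Toy data with a strict cycle of length four
x <- rep(c(10, 20, 30, 40), times = 5)
avg_cycle(x, 4)  # recovers the cycle: 10 20 30 40
```

On real data the points at each position differ, and the position means give the average cycle rather than an exact repetition.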
As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:
Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.
Note that the actual 20 year cycle is not sinusoidal. Instead, it rises quite sharply, and then decays slowly.
Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the cyclical underlying signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible frequencies, we get a graph of the strength of each of the underlying cycles.
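That power measure can be sketched in R (my own condensed wording of it, not the appendix code): zero-mean the data, build the repeated average cycle for a given length, and take the ratio of the two sums of absolute values:

```r
# Fraction of the signal's "power" carried by the average cycle of length k,
# where power is the sum of absolute values, as described in the text
power_fraction <- function(x, k) {
  x <- x - mean(x)
  pos <- ((seq_along(x) - 1) %% k) + 1
  cyc <- rep(tapply(x, pos, mean), length.out = length(x))  # repeated average cycle
  sum(abs(cyc)) / sum(abs(x))
}

s <- sin(2 * pi * (0:99) / 10)  # ten full 10-point cycles
power_fraction(s, 10)           # ~1: the cycle carries essentially all the power
power_fraction(s, 5)            # ~0: half-period averaging cancels the sine
```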
For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:
Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.
Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.
To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun rotates around the center of mass of the solar system. As it rotates, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?
We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:
Figure 6. Periodicity analysis of the annual barycentric velocity data.
The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above, the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.
The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.
Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years; this cycle may be a result of the fact that the “20-year cycle” is actually slightly less than 20 years). When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years etc. And once that 13-year cycle is removed … well, there’s not much left at all.
The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
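The step-by-step procedure in Figure 6 (find the strongest cycle, subtract its repeated average cycle, repeat) can be sketched in R as follows. This is my own condensed version, not the “periods” function from the appendix:

```r
# Iteratively peel off the n strongest cycles from a zero-meaned series
decompose_cycles <- function(x, n_cycles = 4) {
  resid <- x - mean(x)
  parts <- list()
  for (i in 1:n_cycles) {
    ks <- 2:floor(length(resid) / 2)
    frac <- sapply(ks, function(k) {
      pos <- ((seq_along(resid) - 1) %% k) + 1
      cyc <- rep(tapply(resid, pos, mean), length.out = length(resid))
      sum(abs(cyc)) / sum(abs(resid))
    })
    k_best <- ks[which.max(frac)]  # cycle length with the most power
    pos <- ((seq_along(resid) - 1) %% k_best) + 1
    cyc <- rep(tapply(resid, pos, mean), length.out = length(resid))
    parts[[i]] <- list(length = k_best, cycle = cyc)
    resid <- resid - cyc           # remove the cycle and continue
  }
  list(parts = parts, residual = resid)
}
```

By construction, the extracted cycles plus the residual add back up to the original zero-meaned data, which is the reconstruction check shown in the bottom panel of Figure 6.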
Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60 year cycle that Loehle/Scafetta said was in the barycentric data was very weak. As the analysis above shows, the barycentric data does not have any kind of strong 60-year underlying cycle. Loehle/Scafetta claimed that there were ~ 20-year and ~ 60-year cycles in both the solar barycentric data and the surface temperature data. I find no such 60-year cycle in the barycentric data.
However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.
A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlation means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes, where “AR” stands for “auto-regressive” and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
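To make the idea concrete, here is a minimal red-noise generator in R, using a simple AR(1) process. The coefficient 0.9 and the seed are arbitrary choices for illustration, not the values fitted to HadCRUT3 (those appear in the appendix):

```r
# Red noise: today's value is phi times yesterday's value plus fresh noise
set.seed(1)       # arbitrary seed, for reproducibility
phi <- 0.9        # autocorrelation strength (the "memory" of the process)
x <- numeric(158) # same length as the HadCRUT3 annual record
for (t in 2:158) x[t] <- phi * x[t - 1] + rnorm(1)
```

Even though `x` is pure noise, its lag-1 autocorrelation is high, which is what lets slow pseudo-cycles appear in it.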
So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of the graph of each dataset:
Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.
Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.
So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.
That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.
How do I know that?
Well, one of the datasets shown in the figure of pseudo-temperature datasets above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.
w.
APPENDIX:
I did the work in the R computer language. Here’s the code, giving the “periods” function which does the periodicity function calculations. I’m not that fluent in R, it’s about the eighth computer language I’ve learned, so it might be kinda klutzy.
#FUNCTIONS
PI=4*atan(1) # value of pi
dsin=function(x) sin(PI*x/180) # sine function for degrees
regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line
regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line
detrend = function(x){ #detrends a line
x-(regm(x)*c(1:length(x))+regb(x))
}
meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
rep(tapply(x,modline,mean),length.out=length(x))
}
countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
rep(tapply(x,modline,length),length.out=length(x))
}
sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
rep(tapply(x,modline,sd),length.out=length(x))
}
normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal
# Function "periods" (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).
# There’s probably an easier way to do this, I’ve used a brute force method. It’s slow on big datasets
periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
x=inputx
if (detrendit==TRUE) x=detrend(as.vector(inputx))
xlen=length(x)
modmatrix=matrix(NA, xlen,xlen)
modmatrix=matrix((col(modmatrix)-1)%%row(modmatrix),xlen,xlen) # base R modulo is %%, not mod(); row r holds the cycle positions for cycle length r
countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
xpower=normmatrix(x)
powerlist=apply(meanmatrix,1,normmatrix)/xpower
plotlist=powerlist[1:floor(length(powerlist)*val_lim)]
if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}
# /////////////////////////// END OF FUNCTIONS
# TEST
# each row in the values returned represents a different period length.
myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count
#ARIMA pseudotemps
# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.
# each row is a pseudotemperature record
instances=24 # number of records
instlength=158 # length of each record
rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
n=instances*instlength),instlength,instances) #create pseudotemps
pseudotemps =(rand1-mean(rand1))*.2546/sd(rand1)
# Periodicity analysis of simple sine wave
par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window
sintest=dsin((0:157)*15)# sine function
plotx=sintest
plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")
myperiod=periods(plotx)
Richard S Courtney says:
August 4, 2011 at 8:30 am
tallbloke:
At August 4, 2011 at 8:05 am you ask me:
“Richard, I’m not stopping you discussing the aspects you want to discuss, so what’s the problem?”
I answer that there are two problems.
Firstly, the Off Topic astronomical considerations have deflected this thread from discussion of its topic. Trolls often use such deflection as a deliberate tactic to inhibit discussion of a topic in a thread. In this case it has happened inadvertently (i.e. not deliberately and not by action of trolls).
Since Loehle and Scafetta are relying to a large extent on the astronomical phenomena to support their contention of the existence of a repeating quasi cycle in temperature variation, the astronomical aspect of the debate is decidedly not off topic in my opinion. The question of causation has been a vexing issue for a long time and that rumbles on through many solar related threads. This thread isn’t the only one to have that issue as one of its dominant themes. Given the number of other threads which come to be dominated by issues such as polar ice or greenhouse effect radiative physics I don’t see that we can be fairly singled out. I did at one point on the original Loehle and Scafetta thread say that the argument shouldn’t be about causation on a thread discussing cycles in solar and terrestrial phenomena, but it will not be allowed to rest by certain parties who wish to discredit cyclic research by pointing to the lack of proven physical mechanism.
Secondly, it HAS inhibited the discussion of the subject of this thread (i.e. discussion of the argument presented by Willis as to the validity of the L&S method). But that subject is worthy of discussion.
Actually, no-one has talked much about their method. Scafetta’s own contribution went uncontested and undiscussed, for whatever reason, and this thread was all about Willis’ method, not L&S’. Willis’ null result does not prove the L&S method invalid. Compare the several studies which say “We find no causal link between Forbush decreases and cloud cover”, and the CERN CLOUD project.
As I said;
“I am sure all the astronomical discussions in this thread have great importance in their own right, and possibly warrant a thread on WUWT that specifically addresses them. But they are NOT the subject of this thread.”
If you look at the content of the graphs Willis created, half were to do with astronomical phenomena, and half were to do with random datasets representing surface temperature. Maybe there just aren’t enough interested parties who want to discuss random datasets representing surface temperature, so by default the debate has become one about real celestial phenomena, because many here are very interested in that on-topic part of the issues raised by Willis’ analysis.
Leif – I’d been trying to work out where your ideas had come from, as you know, and Tallbloke’s mentioning that Newton knew his equations were for idealised bodies is the first thing that makes sense of it.
[Tallbloke: “Newton knew his equations of motion and kinematics applied to idealised bodies with perfect elasticity.
Leif: “Complete nonsense. Newton’s laws are universal and apply to all bodies, whatsoever.”
So I think I might have another go …, but I’ll take this back to the other thread. Over the weekend.
tallbloke says:
August 4, 2011 at 8:12 am
It seems odd that such a good correlation appears when the data is averaged at that timescale of 7.5 months.
It is not a good correlation. You have been taken in by the same sleight of hand [often used by enthusiasts] as on the first version. They quote R^2 of 0.9776 and 0.952, but that is not for the correlation [that has R^2=0.026, i.e. not significant] but for the fit of the polynomial to their data points.
Geoff Sharp says:
August 4, 2011 at 8:58 am
Your analysis is lousy Leif, no grouping, no recognition of AMP strength etc.
It is not me trying to convince the world of anything. The null-hypothesis is that there is no correlation. You could improve things by providing a table [easier than a plot] that for each of your perturbation you give the time, the ‘strength’ [whatever you think it is], and its type. Then in another table the time of the central point of each grand minimum you think there is. That will put the whole thing on a numerical footing so it can be analyzed properly [and get you away from mere hand waving].
tallbloke says:
August 4, 2011 at 1:13 pm
The question of causation has been a vexing issue for a long time and that rumbles on through many solar related threads.
No, it is not vexing at all [there isn’t any – that may be vexing for people who want there to be such causation] and that it rumbles on is only because the threads get hijacked by people [misusing the hospitality of WUWT] to push their own theories [in spite of having their own blogs for such].
Myrrh says:
August 4, 2011 at 1:30 pm
Leif: “Complete nonsense. Newton’s laws are universal and apply to all bodies, whatsoever.”
So I think I might have another go …, but I’ll take this back to the other thread. Over the weekend.
I think you have already embarrassed yourself enough…
Leif Svalgaard says:
August 4, 2011 at 3:39 pm
tallbloke says:
August 4, 2011 at 1:13 pm
The question of causation has been a vexing issue for a long time and that rumbles on through many solar related threads.
No, it is not vexing at all [there isn’t any – that may be vexing for people who want there to be such causation] and that it rumbles on is only because the threads get hijacked by people [misusing the hospitality of WUWT] to push their own theories [in spite of having their own blogs for such].
Seriously out of order statement. Anyway, Wolff and Patrone are published, whereas your ramblings on the non-existence of causation are not. How’s the rebuttal coming along by the way?
You seemed to think there was something amiss with equations 2a 2b and 4. I look forward to your full rebuttal paper, if it ever gets written.
Leif Svalgaard says:
August 4, 2011 at 3:39 pm
You have been taken in by the same sleight of hand [often used by enthusiasts] as on the first version. They quote R^2 of 0.9776 and 0.952, but that is not for the correlation [that has R^2=0.026, i.e. not significant] but for the fit of the polynomial to their data points.
I haven’t been taken in this time round Leif. I said I’d think about it, and having thought about it, I’ve come to the same conclusion. The data points are averages of averages.
FTA: “While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals.”
That is incorrect. Power spectral density analysis is a mature field which is widely used in system identification. It can identify standard rational processes which describe the evolution of linear or linearized systems. Since every smoothly evolving system can be linearized about a particular operating point, the method has wide, virtually universal, applicability.
All this “periodicity analysis” does is apply a not-very-good bandpass filter to each periodicity bin. It is an unsophisticated analysis technique.
tallbloke says:
August 4, 2011 at 4:54 pm
“No, it is not vexing at all” […] “because the threads get hijacked by people [misusing the hospitality of WUWT]”
Seriously out of order statement.
But true
whereas your ramblings on the non-existence of causation are not.
Lots of papers already on that. You might enjoy my upcoming presentation at AGU Fall Meeting in December:
“Is Solar Activity Modulated by Astronomical Cycles?”
ABSTRACT BODY: When Rudolf Wolf devised the sunspot number he noted [1859] that the length of the cycle was close to the orbital period of Jupiter. He even constructed a formula involving the periods of Jupiter, Saturn, Venus, and Earth that reproduced the sunspot numbers 1826-1848. Unfortunately the formula failed for subsequent cycles and Wolf concluded at the end of his life that the attempts by himself and others to ‘explain’ solar activity by planetary influences had really never yielded any satisfactory result. Nevertheless, the hypothesis rears its head from time to time, even today. I review several recent attempts, both proposed correlations and mechanisms. The recent discovery of exoplanets and the possibility of detecting magnetic cycles on their host stars offers a near-future test of the hypothesis, based on more than the one exemplar, the solar system, we have had until now.
tallbloke says:
August 4, 2011 at 4:57 pm
I haven’t been taken in this time round Leif. I said I’d think about it, and having thought about it, I’ve come to the same conclusion. The data points are averages of averages.
If you keep taking averages of averages of averages of …, eventually the number of degrees of freedom dwindles to zero. I showed in my second plot that there is no correlation.
tallbloke says:
August 4, 2011 at 4:57 pm
I haven’t been taken in this time round Leif. I said I’d think about it, and having thought about it, I’ve come to the same conclusion. The data points are averages of averages.
Another point that complicates this is the uneven quality of the sunspot number series. Early on many values were interpolated [creating false linear correlations]. From 1882 to today [including the 20% Waldmeier jump ~1945], the record is thought to be of higher quality [no longer based solely on Wolf’s small telescope http://www.leif.org/research/Wolf-37mm.jpg ]. Using the higher-quality series 1882-2011 completely removes even the spurious correlation: http://www.leif.org/research/Jupiter-Distance-Monthly-Sunspot-Number3.png
Averages of averages will not restore a meaningful result, where there is none.
@John Day
There is a very precise relationship between the sinc function and the properties of the DFTs. When you take two sampled functions and convolve them by multiplication of their discrete spectra, what you doing is equvalent to the time domain convolution of two infinite series. This is because the DFT is a true fourier transform of an infinite signal multiplied by a a rectangular window, and multiplyied by a train of Dirac functions. The method of interpolation using a Fourier series, i.e.: a DFT, requires that the coefficients are computed correctly and to do so, the sampling period must be equal to the fundamental frequency of the signal. If this is the case, frequency domain interpolation, by taking a DFT, expanding the the spectrum with zeros above the Nyquist frequency and inverse transformation will give exact results (to numerical accuracy). This is a well known result. However, this is a convolution in frequency domain in which the spectum of the signal is multiplied by a “filter” that has a brick wall cutoff above the Nyquist Frequency. This is directly equivalent to a time domain convolution between two infinite series, both of which we know because the signal is periodic and the inverse Fourier transform of the “reconstituting filter” is the impulse response of the filter: sinc(t).
This has a number of implications. The first is that if you want to filter a signal, you take the signal and the impulse response of the filter, perform a DFT on each, multiply their spectra, perform an inverse DFT, and quite possibly get the wrong result. This is because you are convolving two infinite series, and the response to the earlier cycles will appear in the signal after you have convolved them (known as wrap-around). If the arrays containing the data are “padded” with zeros to over twice the sample period, this problem is eliminated, although only the results within the original sample period will be meaningful. This is also extremely important from the point of view of signal interpolation. If the sampling period is not exactly the fundamental period, you are convolving an infinite series of a signal containing a discontinuity with sinc(t). Consider a sine wave that is well sampled, but where the sampling period is not a whole multiple of the period of the sine wave. There is a “jump” between the first and the last sample. The next sample in the Fourier series is predictable from Taylor’s theorem. However, the derivative of the signal at the last point in the sampled signal is band-limited, since differentiation in the frequency domain is simply multiplication by -jw. Thus the derivative of the signal exceeds what its bandwidth allows, and the signal is discontinuous. When Fourier reconstruction is attempted, this results in a Gibbs phenomenon and the reconstruction oscillates between samples. In a real signal containing many different frequencies, this is likely to occur for at least some Fourier components.
You should try the following:
Compute an array containing a sine wave whose frequency is exactly 2*pi/N where N is the array length. Perform a Fourier reconstruction and calculate the error between the reconstructed signal and the true signal. The result will be exact except for small numerical error. Now increase the frequency of the sine, by say 5%, without changing the length of the array, and repeat the procedure. Repeat this by increasing the frequency in 5% steps until it is double the starting frequency. I think that you will be surprised by the results.
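The suggested experiment can be sketched in R (my code, not the commenter's): the function below zero-pads the spectrum above the Nyquist bin to interpolate, then measures the worst-case reconstruction error against the continuous sine.

```r
# Fourier (zero-padding) interpolation of a sine sampled N times in the window,
# upsampled to M points; returns the worst-case reconstruction error
fourier_interp_err <- function(freq_cycles, N = 64, M = 256) {
  x  <- sin(2 * pi * freq_cycles * (0:(N - 1)) / N)
  X  <- fft(x)
  # insert zeros between the positive- and negative-frequency halves, rescale
  Xp <- c(X[1:(N / 2)], rep(0, M - N), X[(N / 2 + 1):N]) * (M / N)
  xi <- Re(fft(Xp, inverse = TRUE)) / M   # R's inverse fft is unnormalized
  truth <- sin(2 * pi * freq_cycles * (0:(M - 1)) / M)
  max(abs(xi - truth))
}

fourier_interp_err(1)     # exactly periodic in the window: error near machine precision
fourier_interp_err(1.05)  # detuned by 5%: large Gibbs-style error near the edges
```

This reproduces the point being made: the reconstruction is exact only when the record length is a whole number of periods; otherwise the implied discontinuity produces oscillating errors.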
I’m so glad that you have attempted a sinc reconstruction in the time domain, but I am not surprised by errors at the end of the signal. More points at either end will improve the reconstruction, but how many points are needed? Will the solution converge to the correct value before resolution is degraded by numerical accuracy? Actually, I use both these methods all the time. In practice, by allowing a start-up and tail-off at either end of the signal, one can quite easily interpolate to the resolution of the ADC, but the solution is not exact.
Leif Svalgaard says:
August 4, 2011 at 6:53 pm
tallbloke says:
August 4, 2011 at 4:54 pm
“No, it is not vexing at all” […] “because the threads get hijacked by people [misusing the hospitality of WUWT]”
Seriously out of order statement.
But true
You are entitled to your opinion. I can understand why you’d want to exclude those who show your rhetoric to be empty and your position untenable.
You might enjoy my upcoming presentation at AGU Fall Meeting in December:
“Is Solar Activity Modulated by Astronomical Cycles?”
ABSTRACT BODY: When Rudolf Wolf devised the sunspot number he noted [1859] that the length of the cycle was close to the orbital period of Jupiter. He even constructed a formula involving the periods of Jupiter, Saturn, Venus, and Earth that reproduced the sunspot numbers 1826-1848. Unfortunately the formula failed for subsequent cycles and Wolf concluded at the end of his life that the attempts by himself and others to ‘explain’ solar activity by planetary influences had really never yielded any satisfactory result. Nevertheless, the hypothesis rears its head from time to time, even today. I review several recent attempts, both proposed correlations and mechanisms. The recent discovery of exoplanets and the possibility of detecting magnetic cycles on their host stars offers a near-future test of the hypothesis, based on more than the one exemplar, the solar system, we have had until now.
Excellent. Great to see you’ll be giving the topic an airing. Will you be making the rest of the presentation available in advance of the meeting so we can offer some constructive critique?
🙂
Will there be a panel discussion or just a brief Q&A afterwards?
By the way Leif, I’ve made a discovery about your 10.8 year period that you believe to be the fundamental period (along with 121 years) which is the expression of the “solar dynamo” and can explain the other periods which you claim are only coincidentally related to Jupiter and Saturn.
It is itself, apart from being a ‘sideband’ of those planetary periods, directly related to fundamental physical attributes of the solar system’s planetary orbits. The whole system is tied together as a real system, and instead of arguing about the direction of causation, we need to be discussing the feedback mechanisms which must be in operation.
I’ll be putting a new post up later which will make these relationships clear. Today, I’m filming.
tallbloke says:
August 5, 2011 at 12:26 am
By the way Leif, I’ve made a discovery about your 10.8 year period that you believe to be the fundamental period (along with 121 years) which is the expression of the “solar dynamo” and can explain the other periods which you claim are only coincidentally related to Jupiter and Saturn.
The dynamo period is not a constant 10.8 but varies with time. It was 11.3 when Wolf did his compilation and has been 10.6 for the past ~100 years [both with the usual fluctuations between cycles]. So it is not ‘fundamental’ in the sense of being invariant [as it would be if astronomical].
It is itself, apart from being a ‘sideband’ of those planetary periods, directly related to fundamental physical attributes of the solar systems planetary orbits. The whole system is tied together as a real system, and instead of arguing about the direction of causation, we need to be discussing the feedback mechanisms which must be in operation.
At the time in the 1880s when the period was believed to be 11.3 years, Charles Harrison showed that if you insert the periods p, masses m, and distances from the Sun d for the eight planets in the formula P = sum (p*m/d^2)/sum(m/d^2) you get 11.29 years, so you see, numerology was going strong even back then [unfortunately the formula doesn’t hold any longer – as is so often the case with numerology].
Heh, no. It’s much simpler than that. The factors affecting variation in the solar cycle length either side of the background main driver are a bit more complex, but we already got that worked out last year. This is deliciously direct.
tallbloke says:
August 5, 2011 at 8:56 am
solar cycle length either side of the background main driver
You mean the solar dynamo, of course.
BTW, the Harrison formula can also be expressed as P = sum(A)/sum(A/p) where A is the angular momentum. Numerology is fun at times, but ultimately unproductive.
nicola scafetta says:
July 30, 2011 at 11:06 am
To Willis Eschenbach,
Are there quasi 20 year and 60 year cycles in the temperature data? Certainly.
Are they real cycles? My Monte Carlo analysis, which you have not commented upon, shows that such long-period pseudo-cycles are very common in random autocorrelated datasets. So there is no reason to believe that the 60-year cycle in the temperature record is real. It may be real, but we have no reason to believe it is, and plenty of reason to believe it is an artifact.
(In passing, I love it when people say things like “Moreover similar cycles have been found by numerous other people in numerous climatic data and published in numerous data.” Citations are your friend, Nicola, that’s just handwaving.)
Well, yes, that’s what I did, and it’s what I said I did, it’s called “Periodicity Analysis” and it is a recognized technique. All your objection proves, Nicola, is that you did not read the paper I cited on Periodicity Analysis.
Like all other techniques, it has its strengths and weaknesses … so? Does repeating the average cycle somehow make the average cycle incorrect? This is a trivial objection.
Nicola, I fear I don’t follow you here. Periodicity analysis finds the actual 20 year cycle. If I make up a sequence made of 2 cycles, periodicity analysis can find one and subtract it out. And yes, it will extract the 20 and 15 year cycles. So I don’t understand your objection, except that it appears to show that you don’t understand periodicity analysis. It can do exactly what you say it can’t do. For example, I use periodicity analysis on the barycentric data in Figure 6, and extract the largest components. These allowed me to reconstruct the barycentric data to good accuracy using just those major cycles. If periodicity analysis doesn’t work (as you claim), then how can it deconstruct and reconstruct the barycentric data?
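For readers who want to try it, here is a minimal sketch of the folding-and-averaging step (my own illustration, not Willis’s actual code): number the points, bin them by index modulo the trial cycle length, and average each bin. On a synthetic series built from two cycles, it pulls each one out in turn:

```python
import numpy as np

def mean_cycle(x, length):
    """Average cycle of the given length: bin points by index mod length."""
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x)) % length
    cycle = np.array([x[idx == k].mean() for k in range(length)])
    return cycle - cycle.mean()     # centre it, leaving any trend behind

# Synthetic series: a 20-point cycle plus a smaller 15-point cycle
t = np.arange(300)                  # 300 is a multiple of both lengths
x = 1.0 * np.sin(2 * np.pi * t / 20) + 0.5 * np.sin(2 * np.pi * t / 15)

c20 = mean_cycle(x, 20)             # recovers the 20-point cycle
residual = x - np.tile(c20, len(x) // 20)
c15 = mean_cycle(residual, 15)      # what remains is the 15-point cycle
```

Because 300 is a common multiple of both lengths, each extraction here is exact; on real data the bins have unequal counts, so the recovered cycles are averages rather than exact components.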
But in any case, my statements regarding the problems in the L/S analysis do not depend in any sense on periodicity analysis. I use periodicity analysis because it gives me different understandings of the data, but that is a personal preference. The issues I raised with the L/S analysis are underlying problems which have nothing to do with periodicity analysis.
Now, tallbloke had excoriated me for not answering Nicola. Drawing on his experience of his own actions, I suppose, tallbloke maliciously assumed that there was a good chance that I was trying to dodge Nicola, saying:
First, accusing me of “hack and run” tactics is a joke. I am one of the most responsible climate bloggers on the web regarding answering any and all scientific objections to my claims and ideas.
Second, it is not a “compliment” for someone to answer scientific objections to their ideas, that simply shows that tallbloke misunderstands the scientific process. It is a scientist’s obligation to answer reasonable scientific objections to his theory. And indeed, he would be a fool not to answer them, as if he does not do so, people may not believe his ideas.
I have made several objections to the L/S paper, none of which have been refuted, or even disputed in some cases:
1. While there are ~20-year and ~60-year quasi-cycles in the temperature data, the ~20-year cycle barely rises out of the noise. Does anyone dispute this?
2. Monte Carlo analysis shows that the ~60-year quasi-cycles in the temperature data have a very good chance of being merely an artifact of the length of the record. Does anyone dispute this? If so, present your code, I’ve shown mine.
3. The size of the two cycles is inverted between the barycentric and the temperature records, with the ~60-year cycle being about a tenth of the size of the ~20-year cycle in the barycentric data, and the reverse size relationship (sixty-year much larger than twenty-year) holding in the temperature data. Does anyone dispute this?
4. The null hypothesis in the L/S analysis is that the temperature will continue to rise at 0.15°C per century. There is nothing in their analysis to substantiate or justify this choice, other than that it is the trend of the first 2/3 of the observations. Since the historical trend is around 0.5°C per century, the choice of this null hypothesis needs to be vigorously established. Instead, it is not established at all.
5. If you are going to claim that barycentric variations affect the climate, you have to use the actual amplitude, phase, and cycle of the barycentric observations. You can’t just say “well, there’s a quasi-20-year cycle in the data” and give it your own phase and amplitude to match it to another dataset. If you’re going to show the barycentric data affects the climate, you need to use the actual barycentric data. You can’t just pick two underlying cycles, adjust their phase and amplitude to fit the temperature, and say “TA-DA!”.
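Point 2 is easy to replicate in outline. The following is my own sketch of that kind of Monte Carlo test, not Willis’s actual code: generate AR(1) “red noise” series of the same 158-year length, measure the amplitude of the best-fit 60-year sinusoid in each, and compare against white noise:

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS, TRIALS, PERIOD = 158, 500, 60.0
t = np.arange(N_YEARS)

# Design matrix for a least-squares fit of a 60-year sinusoid plus offset
X = np.column_stack([np.sin(2 * np.pi * t / PERIOD),
                     np.cos(2 * np.pi * t / PERIOD),
                     np.ones(N_YEARS)])

def fitted_amplitude(x):
    """Amplitude of the best-fit 60-year sinusoid in the series x."""
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    return np.hypot(beta[0], beta[1])

def red_noise(phi, n, rng):
    """AR(1) 'red noise': x[t] = phi * x[t-1] + white noise."""
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

red = [fitted_amplitude(red_noise(0.9, N_YEARS, rng)) for _ in range(TRIALS)]
white = [fitted_amplitude(rng.standard_normal(N_YEARS)) for _ in range(TRIALS)]
```

In runs like this, the median spurious 60-year amplitude in the autocorrelated series comes out several times larger than in white noise of the same length, which is the substance of the objection.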
Note that none of this has anything to do with the type of signal analysis used. I could have used Fourier, or wavelet, or periodicity analysis. They’re all analyzing the same thing, and their results are largely similar. Use any of them and you’ll find that the ~60-year cycle in the barycentric data is tiny compared to the ~20-year cycle, yet Loehle and Scafetta claim that the reverse is true in the temperature data.
How does that work again? …
w.
(Written from the Kaladi Brothers coffee shop on the Kenai River, it’s a beautiful day, I’m headed outside. My best to all.)
Leif Svalgaard says:
August 5, 2011 at 10:29 am
Numerology is fun at times, but ultimately unproductive.
Except when it fits with observations, and creates predictions which turn out correct for long enough to be treated as usefully reliable.
Newton’s gravitational equations for example…
Willis Eschenbach says:
August 5, 2011 at 10:57 am
First, accusing me of “hack and run” tactics is a joke.
Willis, reread the first sentence of my comment and then take note of the if-then clause the later part is predicated on. Thanks, and apologies for getting you riled.
Regarding your reply, I’m in agreement with you regarding much of what you say. There are a couple of things I’d like to comment on.
[Periodicity analysis] Like all other techniques, it has its strengths and weaknesses … so?
You were in transit when this reply from Bart came in:
Bart says:
August 4, 2011 at 5:23 pm
FTA: “While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals.”
That is incorrect. Power spectral density analysis is a mature field which is widely used in system identification. It can identify standard rational processes which describe the evolution of linear or linearized systems. Since every smoothly evolving system can be linearized about a particular operating point, the method has wide, virtually universal, applicability.
All this “periodicity analysis” does is apply a not-very-good bandpass filter to each periodicity bin. It is an unsophisticated analysis technique.
Regarding your point 3: While it’s true the 20-year Jupiter-Saturn synodic cycle dominates the x-y barycentric data, Scafetta’s analysis isn’t completely predicated on the barycentric motion. It may (or may not) be important that every sixty years, the J-S conjunction takes place at the same place relative to the galactic centre and/or the bow shock of the heliosphere. Also, your reply to me regarding the z-axis didn’t take into account the tilt of the Sun wrt the plane of invariance. This also may be important. Hopefully, the publication of the Loehle-Scafetta paper will raise awareness and create research interest in these questions. As the saying goes – further research required. A bit of funding heading in this direction may well be better spent than paying people to fly around Alaskan coasts in choppers counting polar bears floating belly up in the briny, as I’m sure you’d agree.
Cheers
tb
Here you go Leif, you’re going to love this.
http://tallbloke.wordpress.com/2011/08/05/jackpot-jupiter-and-saturn-solar-cycle-link-confirmed/
Willis
stuff here which may be of interest
http://tinyurl.com/4ya5qkj
some of the early posts use a narrow band filter manually scanned over centre frequencies from a few months to 150 years looking for peak amplitude outputs (there was NO 60 year cycle found)
The later posts adjust these frequencies and manually adjust phase and amplitude for best fit. Further frequencies of 60 years and 118 years were added to further improve the fit. Six frequencies and a trend are all that is required to produce a good fit.
10.1y
the TSI
14.9y
21y
59.7y
118.5y
+Trend
There is no justification for any of these frequencies!
The spreadsheets are available if required
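For what it’s worth, the fitting procedure described (fixed periods, phase and amplitude adjusted for best fit, plus a trend) can be done in a single linear least-squares solve, since for a fixed period the phase and amplitude enter linearly through a sine/cosine pair. A generic sketch of my own, using the period list above and a made-up target series:

```python
import numpy as np

periods = [10.1, 14.9, 21.0, 59.7, 118.5]      # years, from the list above (TSI term omitted)
t = np.arange(158, dtype=float)                # annual steps, HadCRUT3-like length

# For a fixed period, amplitude and phase are linear in the sin/cos
# coefficients, so the whole fit is one ordinary least-squares solve.
cols = [np.ones_like(t), t]                    # offset + linear trend
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
X = np.column_stack(cols)

# Toy target: a trend plus a single 59.7-year cycle
y = 0.005 * t + 0.2 * np.sin(2 * np.pi * t / 59.7)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta

# Recover each period's amplitude from its sin/cos pair
amps = [np.hypot(beta[2 + 2 * i], beta[3 + 2 * i]) for i in range(len(periods))]
```

That such a twelve-parameter fit can track a 158-point series closely is exactly why a good fit, by itself, is no justification for the chosen frequencies.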
Willis Eschenbach says:
August 2, 2011 at 5:56 pm
As a follow-up to my earlier response, there are two major issues that divide us. The first is the role of intuition in mathematical analysis, and the second is the adequacy of red-noise models and periodicity analysis in dealing with real-world data.
In your post you state: “I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.”
Then in your reply you claim that intuition proves “historically” correct.
No doubt, intuition often is inspiring. Yet in matters mathematical, conjecture does not suffice. While highly informed intuition led Fermat to a correct conclusion that escaped proof for a very long time, that is an exceptional case. For us mere mortals, proof is necessary.
The developers of periodicity analysis intended the algorithm to be used on data known A PRIORI to be strictly periodic. There’s not a whiff of any suggestion of applying it to random data to detect embedded periodicities. The only random data components S&S consider are Gaussian white noise. I believe the authors know full well that in strongly time-limited data the mathematical conflation of very narrow-band components and strictly periodic ones is great. The unproven conjecture that the algorithm extracts “real” cycles in such cases is entirely yours.
Red noise is the very simplest of random processes that generates autocorrelated ones from white noise. Unfortunately, low-order ARIMA processes are the only ones that i.i.d.-oriented statisticians seem familiar with. While the low-frequency spectral content of red noise indeed produces long-term wanderings that may appear over short stretches similar to band-limited processes, the latter have a distinctly oscillatory, rather than monotonically decaying, autocorrelation function (acf). That structure is what produces oscillations in the records of typical real-world processes. With short records, the sample autocovariance cannot be accurately computed over lags long enough to fully capture the process structure. Decades ago, Burg developed a “maximum entropy” spectral estimation algorithm that in effect extends the lag range by fitting a HIGH-order AR approximation to the available acf estimates. That algorithm, rather than periodicity analysis, is what professional signal analysts employ. They do not rely on intuition alone.
To bring home the reality of random cycles in everyday terms, consider ocean surface waves. They are never strictly periodic, and in a raging sea not even very narrow band. Yet their forces have sunk ships large and small. That should be real enough for anybody.
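Burg’s maximum-entropy method mentioned above is compact enough to sketch. This is a textbook implementation of my own, for illustration only: it fits an AR model by minimizing the summed forward and backward prediction errors, and the angles of the resulting AR poles give the dominant periods:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: coefficients a such that
    x[t] + a[0]*x[t-1] + ... + a[order-1]*x[t-order] ~ white noise."""
    x = np.asarray(x, dtype=float)
    a = np.zeros(order)
    ef, eb = x.copy(), x.copy()                # forward / backward prediction errors
    for m in range(order):
        f, b = ef[m + 1:], eb[m:-1]
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coefficient
        a_prev = a[:m].copy()
        a[:m] = a_prev + k * a_prev[::-1]      # Levinson-style coefficient update
        a[m] = k
        ef[m + 1:], eb[m + 1:] = f + k * b, b + k * f
    return a

# A pure sinusoid with a 20-sample period is an exact AR(2) process,
# so an order-2 fit should place the pole pair at that period.
x = np.sin(2 * np.pi * np.arange(200) / 20.0)
a = burg_ar(x, 2)
roots = np.roots(np.concatenate([[1.0], a]))
period = 2 * np.pi / abs(np.angle(roots[0]))   # close to 20
```

On noisy data one would use a much higher order and read the periods off the peaks of the resulting AR spectrum; this minimal case just shows the mechanics.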
tallbloke says:
August 5, 2011 at 1:55 pm
Newton’s gravitational equations for example…
For all your superior intuition and in-depth knowledge of Newtonian mechanics, you fail to see the important point: Newton was able to deduce Kepler’s laws, the tides, and the flattening of the Earth from a few fundamental concepts. No curve fitting or numerology there.
tallbloke says:
August 5, 2011 at 3:29 pm
Here you go Leif, you’re going to love this.
All this is old hat and actually serves to debunk the whole thing. Here is an old blog post [from 2009] about this: http://www.leif.org/research/Vuk-SAM.pdf but this conclusion is actually 111 years old.
tallbloke says:
August 5, 2011 at 3:29 pm
Here you go Leif, you’re going to love this.
Here is a non-technical description of how the solar cycles are generated
http://arxiv.org/abs/1103.3385
Leif Svalgaard says:
August 5, 2011 at 9:41 pm
No curve fitting or numerology there.
Gravitational anomalies are an interesting subject. So are tides.
tallbloke says:
August 5, 2011 at 3:29 pm
Here you go Leif, you’re going to love this.
All this is old hat and actually serves to debunk the whole thing. Here is an old blog post [from 2009] about this: http://www.leif.org/research/Vuk-SAM.pdf but this conclusion is actually 111 years old.
Old truths bear repeating when continually ignored or misrepresented. There are a couple of new things I have updated the post with since you looked at it, one being a possible explanation of your 17-year solar cycle. By the way, I acknowledged your Brown 1900 reference, thanks again for that. Please come over and tell us more about the 17-year cycle. Have any careful studies been done on the appearance of opposite polarity spots which would give me curves for their appearance in Schwabe cycles?
Leif Svalgaard says:
August 5, 2011 at 9:54 pm
Here is a non-technical description of how the solar cycles are generated
http://arxiv.org/abs/1103.3385
Choudhuri is always good for light fiction reading. 🙂
tallbloke says:
August 6, 2011 at 3:23 am
Please come over and tell us more about the 17 year cycle.
Slide 40 ff of http://www.leif.org/research/SHINE-2011-The-Forgotten-Sun.pdf have something on the ‘extended cycle’.
Have any careful studies been done on the appearance of opposite polarity spots which would give me curves for their appearance in schwabe cycles?
Richardson studied this very carefully in the late 1940s [newer studies do not alter his conclusions]. The opposite polarity spots occur at random. Some 3% of all spots are ‘reversed’ with no systematic behavior. They are most likely just ordinary regions that have rotated, as all regions do to some degree.
Leif Svalgaard says:
August 6, 2011 at 5:20 am
The opposite polarity spots occur at random. Some 3% of all spots are ‘reversed’ with no systematic behavior. They are most likely just ordinary regions that have rotated as all regions do to some degree.
OK, thanks. I must have misunderstood your basis for the ‘extended solar cycle’. I’m waiting for the pdf to load so I can view slide 40.
Ah, there we go – how did I know the tandem sketch would be included? 🙂
Small typo bottom of 47 Unexpected[ly] early
Thanks, it’s a great pdf.
FYI, 17 years is the harmonic mean of the orbital periods of Jupiter and Saturn.
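That checks out with the standard sidereal periods (the values below are my own assumptions, not from the comment):

```python
# Harmonic mean of the sidereal orbital periods of Jupiter and Saturn (years)
jupiter, saturn = 11.862, 29.457
hm = 2.0 / (1.0 / jupiter + 1.0 / saturn)
print(round(hm, 1))  # → 16.9, roughly the 17 years mentioned
```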