Guest Post by Willis Eschenbach
Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).
A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.
Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at the longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.
So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.
Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.
For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order, and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and we average all of the even data points, that will give us the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:
Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.
As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus or minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3 data.
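In code, this folding-and-averaging step is just a few lines. Here is a minimal Python sketch of the idea (my own illustration, not the post's code; the post's actual implementation, in R, is in the appendix, and the name cycle_average is mine):

```python
def cycle_average(data, p):
    """Average together all data points that share the same position
    within a cycle of length p (position = index modulo p)."""
    return [sum(data[i::p]) / len(data[i::p]) for i in range(p)]

# A toy series with an exact repeating cycle of length four:
data = [1, 2, 3, 4] * 5
cycle4 = cycle_average(data, 4)   # recovers the pattern [1.0, 2.0, 3.0, 4.0]
cycle2 = cycle_average(data, 2)   # the odd/even averaging described above
```

The full-length cycle shown in the figures is just this short list repeated out to the length of the original data.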
Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”
Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:
Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.
As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:
Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.
Note that the actual 20 year cycle is not sinusoidal. Instead, it rises quite sharply, and then decays slowly.
Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the cyclical underlying signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible frequencies, we get a graph of the strength of each of the underlying cycles.
For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:
Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.
Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.
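The behaviour described above is easy to verify numerically. Below is a small Python sketch (mine, not the post's R code; power_fraction is my name for it) that folds a pure sine wave of period 24 at every candidate cycle length and measures the fraction of the signal's power captured, as in Figure 5. The fraction comes out at essentially 1.0 at lengths 24, 48, and 72, and far lower elsewhere:

```python
import math

def power_fraction(data, p):
    """Fraction of the zero-mean signal's power (sum of absolute values)
    carried by the average cycle of length p, repeated to full length."""
    n = len(data)
    m = sum(data) / n
    x = [v - m for v in data]                       # set the mean to zero
    cyc = [sum(x[i::p]) / len(x[i::p]) for i in range(p)]
    folded = [cyc[i % p] for i in range(n)]         # repeat cycle to full length
    return sum(abs(v) for v in folded) / sum(abs(v) for v in x)

n = 158                                             # same length as the HadCRUT3 annual data
sine = [math.sin(2 * math.pi * i / 24) for i in range(n)]
spectrum = {p: power_fraction(sine, p) for p in range(2, n // 2 + 1)}
```

Note that the fold at length 48 or 72 captures the 24-year cycle completely (any multiple of the true period does), which is exactly why the spurious longer-period peaks appear.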
To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun revolves around the center of mass (barycenter) of the solar system. As it does so, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?
We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:
Figure 6. Periodicity analysis of the annual barycentric velocity data.
The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above, the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.
The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.
Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years; this cycle may be a result of the fact that the “20-year cycle” is actually slightly less than 20 years). When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years, etc. And once that 13-year cycle is removed … well, there’s not much left at all.
The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
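The step-by-step extraction shown in Figure 6 can be sketched in a few lines of Python. This is my own toy illustration of the procedure, not Willis's R code (the names strongest_cycle and remove_cycle are mine): find the cycle length carrying the most power, subtract its repeated average from the data, and repeat on the residual.

```python
def cycle_means(x, p):
    """Average value at each of the p positions within a cycle of length p."""
    return [sum(x[i::p]) / len(x[i::p]) for i in range(p)]

def strongest_cycle(x, max_p):
    """Cycle length (2..max_p) whose repeated average carries the most power."""
    total = sum(abs(v) for v in x)
    best_p, best_pow = 2, -1.0
    for p in range(2, max_p + 1):
        cyc = cycle_means(x, p)
        power = sum(abs(cyc[i % p]) for i in range(len(x))) / total
        if power > best_pow:
            best_p, best_pow = p, power
    return best_p

def remove_cycle(x, p):
    """Subtract the repeated average cycle of length p from the data."""
    cyc = cycle_means(x, p)
    return [x[i] - cyc[i % p] for i in range(len(x))]

# Toy signal: a zero-mean pattern with an exact period of five.
x = [3, -1, -2, 1, -1] * 7
p = strongest_cycle(x, 7)        # expect 5
residual = remove_cycle(x, p)    # expect (near) zero everywhere
```

On real data you iterate on the residual until, as in the bottom rows of Figure 6, there's not much left at all.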
Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60 year cycle that Loehle/Scafetta said was in the barycentric data was very weak. As the analysis above shows, the barycentric data does not have any kind of strong 60-year underlying cycle. Loehle/Scafetta claimed that there were ~ 20-year and ~ 60-year cycles in both the solar barycentric data and the surface temperature data. I find no such 60-year cycle in the barycentric data.
However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.
A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlated means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes. “AR” stands for “auto-regressive”, and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
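For concreteness, here is a minimal Python sketch of such an ARMA(1,1) red-noise generator (my own code; the AR and MA coefficients are the ones used in the R appendix below, and the simple recursion stands in for R's arima.sim):

```python
import random

def arma11(n, ar, ma, burn=200, seed=None):
    """Simulate ARMA(1,1) noise: x[t] = ar*x[t-1] + e[t] + ma*e[t-1],
    with e[t] standard normal. A burn-in discards start-up transients."""
    rng = random.Random(seed)
    x, x_prev, e_prev = [], 0.0, 0.0
    for t in range(n + burn):
        e = rng.gauss(0.0, 1.0)
        x_t = ar * x_prev + e + ma * e_prev
        if t >= burn:
            x.append(x_t)
        x_prev, e_prev = x_t, e
    return x

# One pseudo-temperature record, scaled as in the post: mean zero and
# standard deviation 0.2546 (the standard deviation of HadCRUT3).
series = arma11(158, ar=0.9673, ma=-0.4591, seed=1)
m = sum(series) / len(series)
s = (sum((v - m) ** 2 for v in series) / len(series)) ** 0.5
pseudotemp = [(v - m) * 0.2546 / s for v in series]
```

The strong AR coefficient (0.9673) is what makes the series wander slowly enough to mimic multi-decadal "cycles".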
So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of the graph of each dataset:
Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.
Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.
So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.
That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.
How do I know that?
Well, one of the datasets shown in Figure 7 above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.
w.
APPENDIX:
I did the work in the R computer language. Here’s the code, including the “periods” function which does the periodicity calculations. I’m not that fluent in R (it’s about the eighth computer language I’ve learned), so it might be kinda klutzy.
#FUNCTIONS
PI=4*atan(1) # value of pi
dsin=function(x) sin(PI*x/180) # sine function for degrees
regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line
regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line
detrend = function(x){ #detrends a line
x-(regm(x)*c(1:length(x))+regb(x))
}
meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
rep(tapply(x,modline,mean),length.out=length(x))
}
countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
rep(tapply(x,modline,length),length.out=length(x))
}
sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
rep(tapply(x,modline,sd),length.out=length(x))
}
normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal
# Function “periods” (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).
# There’s probably an easier way to do this, I’ve used a brute force method. It’s slow on big datasets
periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
x=inputx
if (detrendit==TRUE) x=detrend(as.vector(inputx))
xlen=length(x)
modmatrix=matrix(NA, xlen,xlen)
modmatrix=matrix((col(modmatrix)-1) %% row(modmatrix),xlen,xlen) # row p holds each data point's position (0 to p-1) within a cycle of length p
countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
xpower=normmatrix(x)
powerlist=apply(meanmatrix,1,normmatrix)/xpower
plotlist=powerlist[1:floor(length(powerlist)*val_lim)]
if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}
# /////////////////////////// END OF FUNCTIONS
# TEST
# each row in the values returned represents a different period length.
myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count
#ARIMA pseudotemps
# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.
# each row is a pseudotemperature record
instances=24 # number of records
instlength=158 # length of each record
rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
n=instances*instlength),instlength,instances) #create pseudotemps
pseudotemps =(rand1-mean(rand1))*.2546/sd(as.vector(rand1)) # scale to HadCRUT3 standard deviation; sd() needs a vector, not a matrix
# Periodicity analysis of simple sine wave
par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window
sintest=dsin((0:157)*15) # sine wave with a 24-year period (360/15 = 24)
plotx=sintest
plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")
myperiod=periods(plotx)
Leif Svalgaard says:
August 3, 2011 at 6:11 am
The solar data does not have any 172 yr signal, but a strong 208 yr signal.
Or to be more precise, the spectra of proxy datasets from 10Be and 14C believed to be representative of solar activity contain a strong 208 year signal. The directly observed sunspot record isn’t long enough to tell if this really is a solar signal or a terrestrial artifact.
phlogiston says:
August 3, 2011 at 6:41 am
So, just to be clear – this (below) is what Willis and his sagacious companions are describing as noise or an instrumental artefact:
http://i53.tinypic.com/2s0g5te.jpg
To be fair, I think what is being said is that it could be an autocorrelated random walk (red noise).
Geoff Sharp says:
August 3, 2011 at 1:17 am
The process of Fourier analysis is not picking up a regular signal; that is because it is not regular. Leif points out the De Vries or Suess cycle; that is a product of the trident. One of the prongs is weak, so the gap increases. Look at the Dalton minimum and now: 210 years between grand minima. The last prong of the Dalton didn’t fire, and the first prong (SC20) of the current cycle was the same.
There is no 210 year gap in the recent barycenter distance: http://www.leif.org/research/Barycenter-Distance-1600AD-2100AD.png Your ‘signal’ has been marked with arrows. There are arrows precisely 179 years later. In fact, if you shift the curve 179 years you get an almost perfect match [the pink curve is 1929 plotted at 1750, etc]. Presumably the ‘last prong’ of the Dalton [which was not really a Grand Minimum, BTW] is what I have marked with a red arrow. The whole situation today is just what it was 179 years ago as far as the barycenter distance is concerned.
tallbloke says:
August 3, 2011 at 7:18 am
Or to be more precise, the spectra of proxy datasets from 10Be and 14C believed to be representative of solar activity contain a strong 208 year signal. The directly observed sunspot record isn’t long enough to tell if this really is a solar signal or a terrestrial artifact.
You can always blame the data if they don’t fit…
Or use them if they do fit. This is the noble art of cherry picking.
Leif Svalgaard says:
August 3, 2011 at 7:36 am
You can always blame the data if they don’t fit…
Or use them if they do fit. This is the noble art of cherry picking.
I think you’ve got sour grapes.
tallbloke says:
August 3, 2011 at 7:40 am
I think you’ve got sour grapes.
And what do you base that unfounded assertion on? And what is it to you?
What the S&L paper does do:
It derives the residual of an observed 20 and 60 year temperature cycle plus an underlying trend, and shows that if this residual is attributed to CO2, the sensitivity to doubling is considerably less than imputed by IPCC.
What the S&L paper does not do:
Prove that the residual has anything to do with CO2.
Pochas – spot on.
The argument here is how good their evidence is that the observed cycle really is a cycle, and whether such supporting evidence as they do have from the paleo record is bolstered or not by cycles in planetary motions which may be affecting the Sun and Earth.
Leif Svalgaard says:
August 3, 2011 at 7:42 am
And what do you base that unfounded assertion on? And what is it to you?
Easy now Leif, I should have added a Brit humour alert.
pochas says:
August 3, 2011 at 7:53 am
What the S&L paper does not do:
Prove that the residual has anything to do with CO2.
S&L claim that their paper shows [‘prove’ is inappropriate] a “Warming due to anthropogenic GHG+Aerosol of 0.66 oC/Century”. If not due to CO2, what do you think S&L would ascribe it to? What do you think their ‘GHG’ refers to?
tallbloke says:
August 3, 2011 at 8:06 am
“And what do you base that unfounded assertion on? And what is it to you?”
Easy now Leif, I should have added a Brit humour alert.
You did not answer my questions. What you should have done is to not have said anything at all if you cannot substantiate it. Capice?
tallbloke says:
August 3, 2011 at 8:02 am
The argument here is how good their evidence is that the observed cycle really is a cycle, and whether such supporting evidence as they do have from the paleo record is bolstered or not by cycles in planetary motions which may be affecting the Sun and Earth.
What S&L claims is that “The fitted components match solar model forcings within their uncertainty” and that therefore the excess is anthropogenic. So their claimed planetary effect is central to their argument.
@John Day.
I really do not see the point in continuing this conversation. I am well aware that one can generate a filtered analogue waveform from a correctly digitised signal that approximates to the original waveform, provided one uses a linear phase shift filter.
However, mathematically, it is not possible to interpolate a general signal digitised in accordance with the Sampling Theorem (Nyquist theorem, Shannon-Nyquist theorem, WKS, etc.) completely accurately without knowledge of the signal at ±infinity. This is because the interpolating function, sinc(t), has unbounded support, as is clearly shown, and discussed, in the Whittaker-Shannon theorem. If one samples a periodic signal, interpolation can be performed by computation of the Fourier series obtained by a DFT at any particular instant within the cycle, but the assumption under this condition is that the signal has existed as a periodic signal from −infinity to +infinity. However, this is not the perfect reconstruction you appear to believe, because (a) ADC samples are not Dirac functions, (b) the samples involve a phase shift, and (c) you cannot be certain what happened before and after the sampling period. What I think you mean is that a signal can be reconstructed with tolerable engineering accuracy, which is a very different issue. If one is to reproduce a signal “perfectly”, it is best to be aware of the limitations of perfection in the real world.
My general point is very simple – if a signal is not digitised with a sampling frequency greater than twice its bandwidth, it cannot be interpolated correctly. Many time series, such as temperature records, have either not been collected, or have been processed, without consideration of the Sampling theorem.
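To make that under-sampling point concrete, here is a minimal Python sketch (illustrative numbers only, not from the original comment): a 7 Hz sine sampled at 10 samples per second (Nyquist limit 5 Hz) produces exactly the same sample values as a 3 Hz sine of opposite sign, so no interpolation scheme can tell them apart.

```python
import math

fs = 10.0                      # sampling frequency: below 2 * 7 Hz
t = [k / fs for k in range(40)]
f_true, f_alias = 7.0, 3.0     # 7 = 10 - 3, so the two coincide at every sample
s_true  = [math.sin(2 * math.pi * f_true * tk) for tk in t]
s_alias = [-math.sin(2 * math.pi * f_alias * tk) for tk in t]

# The sampled values are identical to within floating-point error:
max_diff = max(abs(a - b) for a, b in zip(s_true, s_alias))
```

Once a frequency above fs/2 is present, the information needed to distinguish it from its alias is simply not in the samples.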
tallbloke says:
August 3, 2011 at 5:13 am
Look, tallbloke, I told you I didn’t see it, so you can stuff your nastiness and your protesting ladies where the sun don’t shine. I’ve been going strong getting ready for this trip, so I didn’t see it—take a Prozac if it helps you handle my not seeing it, because it seems to be disturbing you greatly, but that’s the facts in the case.
My bad for missing it. There’s an easy cure for that, however. You say “Hey, Willis, you missed it”, I say “OK, I’ll get to it as soon as I can,” and we move on.
Or, on the other hand, you could be a total jackwagon and dump the contents of your diseased brain out for all to be repulsed by, as you have just done, with its dark suspicions and ugly accusations and your creepy fantasies about what goes on in other people’s minds.
And unfortunately, there’s no cure for that, unless the person wants to be cured.
So tallbloke, yes, I will answer Scafetta. When have you ever known me not to answer someone? I am one of the few people writing on climate science who attempts to answer all serious questions.
But at present I’m in the airport waiting for a flight, so you’ll just have to wait. I’m sure that you can pass the time by letting us know more of your bizarre fantasies about what I or other people are doing that don’t have the Tallbloke Stamp of Approval™, and spewing more accusations of things like, what was it, “arithmomania” … or you could just STFU and wait, like any decent person would.
Your choice …
w.
Willis Eschenbach says:
August 3, 2011 at 9:12 am
Look, tallbloke, I told you I didn’t see it, so you can stuff your nastiness and your protesting ladies where the sun don’t shine.
Willis Eschenbach says:
August 2, 2011 at 11:07 am
And like I said, Geoff, when you do that folks just point and laugh at you.
Willis,
Most threads on possible solar cycles [and those were at the root of the L&S paper] end up being hijacked by tallbloke and Geoff [with occasional others] pushing with nastiness and insults their personal views way beyond what they are worth. No amount of sound counterarguments can restore some reasonableness into the ‘debate’ as we have seen.
tallbloke says:
August 3, 2011 at 8:02 am
“The argument here is how good their evidence is that the observed cycle really is a cycle, and whether such supporting evidence as they do have from the paleo record is bolstered or not by cycles in planetary motions which may be affecting the Sun and Earth.”
What I get from this is that Fourier analysis and Periodicity analysis may not be the right tools for analyzing cyclic but non-stationary processes. Apparently Scafetta has a better one.
pochas says:
August 3, 2011 at 10:09 am
What I get from this is that Fourier analysis and Periodicity analysis may not be the right tools for analyzing cyclic but non-stationary processes. Apparently Scafetta has a better one.
apparent from what? He doesn’t say he has a better one… Although I may have missed where he says that. Perhaps you could guide me to where.
Leif Svalgaard says:
August 3, 2011 at 7:31 am
There is no 210 year gap in the recent barycenter distance: http://www.leif.org/research/Barycenter-Distance-1600AD-2100AD.png Your ‘signal’ has been marked with arrows.
Yes you are right, I have the start of both grand minima at 210 years apart in my head. But you have demonstrated the variability of the 3 prongs and how they throw options up each cycle, the message is starting to get through. The current cycle has no third prong at all (looking at AM) which is unusual. I am now wondering where the De Vries cycle comes from.
Your distance graph is different to the one Carl did back in 2008 and looks to show more detail, are you just using the JPL distance from SSB to Sun? I produced by accident an interesting distance graph a while ago that plots the difference between Sun to Jupiter and SSB to Jupiter, it also shows the extra detail.
Carl’s distance graph with my annotation, he produced this graph after I pointed out to him the perturbations are the key not the zero crossings:
http://tinyurl.com/2dg9u22/images/995-2985.jpg
Mine:
http://tinyurl.com/2dg9u22/images/jup_dist_diff.png
I might need to extend mine out further. A point of interest, Landsch***t didn’t use any of the prongs but concentrated on the points where the graph goes below zero, this is why he was wrong with his 1990 prediction and a little late with the current minimum (if it comes to pass).
@richard Saumarez
I really do not see the point in continuing this conversation. I am well aware that one can generate a filtered analogue waveform from a correctly digitised signal that approximates to the original waveform, provided one uses a linear phase shift filter.
You’re still not getting the ‘perfect reconstruction’ part of Shannon’s theorem, are you? As a PhD you should be able to appreciate, mathematically, that a discrete set of samples from a ‘correctly digitized’ band-limited waveform contains _all_ of the information available for that signal. There is no residual error, nothing’s missing. It’s _not_ an approximation!
That’s why equation (7) in Shannon’s proof is so mind-bendingly amazing. First of all, it has an ‘equals’ sign, so it’s an exact expression, not an approximation (as you keep insisting). Secondly, it equates, on the left side, _any_ band-limited function of time f(t) with arbitrarily infinite resolution in time [down to yocto-seconds, or smaller if you wish] to, on the right side, a sinc-function interpolation formula that depends _only_ on the values of f(t) at a finite number of sample points x1,x2,..,xn etc.
You should also (as a PhD) be able to distinguish this perfect representation in ‘theory’, from ‘practice’, where measurement noise (including quantization noise) does indeed create errors in communication (hence the name of the paper).
It was this same 1949 paper, however, in sections III & ff (following section II, Sampling Theorem), where Shannon presented his thesis to the world, that these communication errors cannot be eliminated, but can be made arbitrarily small, by managing the information space in which the time signals are embedded.
Best regards!
John Day
Leif Svalgaard says:
August 3, 2011 at 6:11 am
And there is no correlation between the tridents and grand minima.
You might need to point this out?
Geoff Sharp says:
August 3, 2011 at 10:43 am
Yes you are right, I have the start of both grand minima at 210 years apart in my head. But you have demonstrated the variability of the 3 prongs and how they throw options up each cycle, the message is starting to get through.
You are too quick to jump to conclusions. The prongs do not ‘throw options’. That is in your head only. You have no clear message, and if you compare with http://www.leif.org/research/Barycenter-Distance-240AD-590AD.png [green line] you see that it is very close to 1750-2100 AD, yet there are no grand minima in that interval and solar activity is very different in the two intervals 1750-2100 and 240-590. You are just chasing shadows.
Your distance graph is different to the one Carl did back in 2008 and looks to show more detail, are you just using the JPL distance from SSB to Sun?
Just use JPL with a step size of 100 days or less [I tried 30 days – didn’t make any difference].
Geoff Sharp says:
August 3, 2011 at 10:57 am
“And there is no correlation between the tridents and grand minima.”
You might need to point this out?
It is the one making the claim that there is who has that burden. This is why I ask you to annotate my graphs with what you consider Grand Minima.
Anyway, I just did point this out:
Leif Svalgaard says:
August 3, 2011 at 11:06 am
“if you compare with http://www.leif.org/research/Barycenter-Distance-240AD-590AD.png [green line] you see that it is very close to 1750-2100Ad, yet there are no grand minima in that interval and solar activity is very different in the two intervals 1750-2100 and 240-590. “
phlogiston says:
August 3, 2011 at 6:41 am
So, just to be clear, no, that is a graph of the AMO, and I didn’t talk about the AMO at all.
This is why I ask that people quote exactly what I said that they are reacting to. phlogiston obviously thinks I said something about the AMO, or that I claimed the ~60 year cycles in the HadCRUT3 record are an “instrumental artifact”.
I made no such claim. I said that in red-noise random datasets, such long-period pseudo-cycles are quite common, and that as a result we don’t know if the ~60-year cycle in the observational data is real or not.
Quote what I said, phlogiston (and others). It will help prevent you from tilting at windmills, and will allow us to understand exactly where we may disagree.
w.
Leif Svalgaard says:
August 3, 2011 at 10:31 am
pochas says:
August 3, 2011 at 10:09 am
What I get from this is that Fourier analysis and Periodicity analysis may not be the right tools for analyzing cyclic but non-stationary processes. Apparently Scafetta has a better one.
Leif Svalgaard:
“apparent from what? He doesn’t say he has a better one… Although I may have missed where he says that. Perhaps you could guide me to where.”
see above
“Our analysis is based on the correct thecniques, that is “multiple” power spectrum analisis agaist red noise background. I would like to insist on the word “multiple” because I used three alternative methods. The quasi 20 and 60 year cycles are quite evident in the data. This tests are done in Scafetta 2010. In L&S 2011 we simply references those results”
As for myself, the 60 year cycle is easily visible to the unaided eye in the recent temperature record, although I certainly don’t expect you to agree 🙂