Riding a Pseudocycle

Guest Post by Willis Eschenbach

Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).

A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.

Figure 1. Joseph Fourier, looking like the world’s happiest mathematician

While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.

So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.

Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.

For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order, and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and separately average all of the even data points, we get the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:

Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.

As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus/minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3.
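Here is a minimal R sketch of that odd/even averaging, for anyone who wants to follow along (illustrative only; the full code is in the appendix). It assumes a numeric vector called "temps" holding the annual anomalies, with random stand-in numbers used below:

# minimal sketch: the average cycle of length 2
temps = rnorm(158)                                   # stand-in data; substitute the HadCRUT3 annual anomalies
pos = (seq_along(temps) - 1) %% 2                    # 0 = odd-numbered points, 1 = even-numbered points
cycle2 = tapply(temps, pos, mean)                    # mean of the odd points and mean of the even points
cycle2full = rep(cycle2, length.out = length(temps)) # extend the two-point cycle to the full record length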

Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”

Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:

Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.

As I mentioned above, we are not reducing the dataset to sinusoidal (sine wave shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:

Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.

Note that the actual 20 year cycle is not sinusoidal. Instead, it rises quite sharply, and then decays slowly.

Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the cyclical underlying signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible frequencies, we get a graph of the strength of each of the underlying cycles.
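As an illustration of that calculation (not the exact code used for the figures; the full "periods" function is in the appendix), here is a minimal R sketch of the power fraction for a single cycle length k, assuming x is a detrended, zero-mean series:

# minimal sketch: fraction of total power carried by the average cycle of length k
cyclepower = function(x, k) {
  pos = (seq_along(x) - 1) %% k                                 # position within the cycle: 0 .. k-1
  avgcycle = rep(tapply(x, pos, mean), length.out = length(x))  # average cycle, repeated to full length
  sum(abs(avgcycle)) / sum(abs(x))                              # cycle power relative to the whole signal
}
# e.g. cyclepower(x, 20) gives the share of the power in the 20-year cycle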

For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:

Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.

Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.

To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun revolves around the center of mass of the solar system (the barycenter). As it does so, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?

We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:

Figure 6. Periodicity analysis of the annual barycentric velocity data. 

The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above, the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.

The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.

Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years); this cycle may be a result of the "20-year cycle" actually being slightly less than 20 years long. When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years, and so on. And once that 13-year cycle is removed … well, there's not much left at all.

The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
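For the curious, the step-by-step removal can be sketched as a simple loop: find the cycle length with the most power, subtract its full-length repetition, and repeat. This is only an illustration of the idea, not the exact code used for Figure 6, and it uses a stand-in series in place of the barycentric velocity data:

# minimal sketch of the "peel off the strongest cycle" procedure
avgcycle = function(x, k) rep(tapply(x, (seq_along(x) - 1) %% k, mean), length.out = length(x))
x = sin(2 * pi * (1:172) / 20) + rnorm(172, sd = 0.2)  # stand-in series; substitute the detrended barycentric velocity
residual = x
reconstruction = 0
for (pass in 1:4) {                                    # four passes, as in Figure 6
  lens = 2:floor(length(x) / 2)
  power = sapply(lens, function(k) sum(abs(avgcycle(residual, k))) / sum(abs(residual)))
  kbest = lens[which.max(power)]                       # cycle length carrying the most power
  cyc = avgcycle(residual, kbest)
  reconstruction = reconstruction + cyc                # running sum of the extracted cycles
  residual = residual - cyc                            # remove the cycle and look again
}
plot(x, type = "l"); lines(reconstruction, col = "blue") # original (black) vs. four-cycle reconstruction (blue)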

Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60 year cycle that Loehle/Scafetta said was in the barycentric data was very weak. As the analysis above shows, the barycentric data does not have any kind of strong 60-year underlying cycle. Loehle/Scafetta claimed that there were ~ 20-year and ~ 60-year cycles in both the solar barycentric data and the surface temperature data. I find no such 60-year cycle in the barycentric data.

However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.

A "red-noise" dataset is one which is "auto-correlated". In a temperature dataset, auto-correlated means that today's temperature depends in part on yesterday's temperature. One kind of red-noise data is created by what are called "ARMA" processes. "AR" stands for "auto-regressive", and "MA" stands for "moving average". This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
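In R, this kind of series can be simulated with the built-in arima.sim function. Here is a minimal sketch of a single ARMA(1,1) red-noise series, using the AR and MA coefficients given in the appendix (which were fitted to the HadCRUT3 data):

# minimal sketch: one ARMA(1,1) red-noise series the length of the HadCRUT3 annual record
set.seed(42)                                           # arbitrary seed, just for reproducibility
rednoise = arima.sim(list(order = c(1, 0, 1), ar = 0.9673, ma = -0.4591), n = 158)
plot(as.numeric(rednoise), type = "l", xlab = "Year", ylab = "Pseudo-temperature")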

So, I made up a couple dozen random ARMA "pseudo-temperature" datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of the graph of the dataset:

Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.

Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.

So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.

That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.

How do I know that?

Well, one of the datasets shown in Figure 7 above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.

w.

APPENDIX:

I did the work in the R computer language. Here’s the code, giving the “periods” function which does the periodicity calculations. I’m not that fluent in R; it’s about the eighth computer language I’ve learned, so it might be kinda klutzy.

#FUNCTIONS

PI=4*atan(1) # value of pi

dsin=function(x) sin(PI*x/180) # sine function for degrees

regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line

regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line

detrend = function(x){ #detrends a line
  x-(regm(x)*c(1:length(x))+regb(x))
}

meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
  rep(tapply(x,modline,mean),length.out=length(x))
}

countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
  rep(tapply(x,modline,length),length.out=length(x))
}

sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
  rep(tapply(x,modline,sd),length.out=length(x))
}

normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal

# Function "periods" (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).

# There’s probably an easier way to do this, I’ve used a brute force method. It’s slow on big datasets

periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
  x=inputx
  if (detrendit==TRUE) x=detrend(as.vector(inputx))
  xlen=length(x)
  modmatrix=matrix(NA, xlen,xlen)
  modmatrix=matrix((col(modmatrix)-1) %% row(modmatrix),xlen,xlen) # row = cycle length, value = position within that cycle
  countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
  meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
  sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
  xpower=normmatrix(x)
  powerlist=apply(meanmatrix,1,normmatrix)/xpower
  plotlist=powerlist[1:(length(powerlist)*val_lim)]
  if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
  invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}

# /////////////////////////// END OF FUNCTIONS

# TEST

# each row in the values returned represents a different period length.

myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))

myreturn$vals

myreturn$means

myreturn$sds

myreturn$count

#ARIMA pseudotemps

# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.

# each row is a pseudotemperature record

instances=24 # number of records

instlength=158 # length of each record

rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
             n=instances*instlength),instlength,instances) #create pseudotemps

pseudotemps =(rand1-mean(rand1))*.2546/sd(rand1)
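# Illustrative sketch: one way to run the periodicity analysis on each pseudo-temperature
# record in turn, as described in the post. Each call draws the power-vs-cycle-length plot
# for one random series.

for (i in 1:instances) myperiods = periods(pseudotemps[,i])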

# Periodicity analysis of simple sine wave

par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window

sintest=dsin((0:157)*15)# sine function

plotx=sintest

plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")

myperiod=periods(plotx)

416 Comments
August 2, 2011 2:33 am

Willis Eschenbach says:
August 1, 2011 at 1:04 pm
I agree that there could easily be something to barycentric analysis … I just haven’t found anyone yet who could demonstrate such a connection, myself included.
Have you read my paper?

Richard Saumarez
August 2, 2011 2:55 am

Two rather boring points that have been alluded to by various posters, which are important:
1) The length of the record fixes the fundamental period for a Discrete Fourier series and the sampling frequency fixes the frequency resolution.
Is the data aliased? This could produce spurious low frequency cycles and is a problem that has bitten many people (who should know better) in the hindquarters.
2) A Fourier TRANSFORM cannot be made on a real signal. It is an analytical concept that requires knowledge of the existence of the signal between +- infinity and is continuous in frequency. It can be approximated for a time windowed signal using a Discrete Fourier transform, in which many of the manipulations between the time and frequency domains that apply to the FT are still valid, but must be carefully interpreted in signals in which there are components that have an infinite bandwidth (i.e.: impulses, a square wave). Please note that a step function, i.e.: a sudden change in base line, is not a Fourier Transformable function.
The failure to distinguish between an FT and a Fourier series, which can be computed on a real signal, is a major source of confusion.

August 2, 2011 4:21 am

Leif Svalgaard says:
August 1, 2011 at 2:22 pm
……………
Absolutely agree (as you did not read my post).
As my late grandfather used to say:
“Every nature’s puzzle has a hidden treasure far greater than the puzzle itself.”
I have removed data file portion from the link
http://www.vukcevic.talktalk.net/dGMF.htm
since there is no interest from the other contributors.

August 2, 2011 5:26 am

Saumarez
A Fourier TRANSFORM cannot be made on a real signal. It is an analytical concept that requires knowledge of the existence of the signal between +- infinity and is continuous in frequency. It can be approximated for a time windowed signal using a Discrete Fourier transform, in which many of the manipulations between the time and frequency domains that apply to the FT are still valid, but must be carefully interpreted in signals in which there are components that have an infinite bandwidth (i.e.: impulses, a square wave).
Actually, that’s not correct. Have you not heard of the Shannon sampling theorem?
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
“If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.”
I.e. a band-limited analog signal can be perfectly reconstructed from its samples, provided the highest frequency is sampled at least twice per period.
In the real world, “real signals” (as you say) tend to be automatically band-limited, due to constraining, filtering effects of electronic devices and channels. (Show me a signal with infinite frequency). So, as long as you have filtered the data somehow (or let it filter itself) observing the Nyquist limits in your sampling, the resulting reconstructions are not approximations, but “perfect”, in the same sense that a line may be “perfectly” reconstructed from two samples of the line, taken in two different locations.
Now, you may quibble that, in a practical sense, it’s not possible to sample a line perfectly, because it must be rendered with some thickness, so there’s always some uncertainty about the two samples.
Yes, agreed. That’s why draftsmen and carpenters typically make three or more samples when sampling lines, even though the mathematical theory clearly shows you only need two samples.
Same problem happens in sampling signals. Sampling can’t be done without some measurement errors, which you might decide are ‘negligible’. Also, oversampling tends to reduce the uncertainty by averaging out random measurement errors.
But do you agree with Shannon’s postulate that band-limited sampling is _not_ an approximation and mathematically captures _all_ the information in the signal? Most people, even some engineers and scientists don’t understand this at all. It is a highly non-intuitive notion that is difficult to grasp at first glance.

tallbloke
August 2, 2011 6:04 am

Leif Svalgaard says:
August 1, 2011 at 4:17 pm
tallbloke says:
August 1, 2011 at 3:52 pm
How’s the refutation of Wolff and Patrone coming along Leif? Any progress with equations 2a, 2b and 4 yet?
Still where I left it. The problem is with equation (2) as your experts in relativity will tell you. Einstein’s equivalence principle tells you “No experiment, no clever exploitation of the laws of physics can tell us whether we are in free space or in a gravitational field. One of the consequences: In a reference frame that is in free fall, the laws of physics are the same as if there were no gravity at all”
If W&P are correct, then the sun can tell that it is in a gravitational field [that of the planets] which would violate the principle.

I think you’ll find when you’ve studied a bit harder that no equation in the paper demands that the sun is not in freefall.
By the way, Wolff and Patrone have replied to my email. They confirm what I’ve been saying about your misinterpretation of the ‘potential energy PE’ they posit as being ‘gravitational potential energy’. It is as I’ve been saying to you for more than two years now, energy from fusion radiating and convecting from solar core simultaneously with barycentric motion which makes possible the barycentric effect on solar activity. Their paper confirms what I intuited from my understanding of Newtonian dynamics in 2008.
Over on the Loehle and Scafetta thread, you make this claim:
http://wattsupwiththat.com/2011/07/25/loehle-and-scafetta-calculate-0-66%c2%b0ccentury-for-agw/#comment-709669
In dynamo models of the solar cycle we actually solve the MHD equations derived from Newton’s and Maxwell’s laws. That is, we do not ‘model’ anything by curve fitting. The equations are solved by integration in time and space and result in a single cycle. The average length of that cycle is 10.8 years, depending a bit on the meridional circulation, which serves as input to the solution. Since we have not observed the latter for more than a couple of cycles, we don’t know how it varies over the longer term, except that the solar cycles vary, so we conclude that what controls the cycles will also vary.
Please could you give a link to where this derivation is demonstrated, along with links to the supporting empirical observational evidence for the validity of any parameterisations used to obtain the result.
Please could you also supply a link to a helioseismology result which shows a solar cycle length oscillation in the sun which I requested in the linked comment but never got a reply to.
Thanks.

August 2, 2011 6:20 am

tallbloke says:
August 1, 2011 at 12:06 am
Geoff Sharp says:
July 31, 2011 at 5:25 pm
Can we see the results over 3000 years to see if there is a regular cycle. I was thinking all planetary orbits with their differing precessions would vary the z axis values over time allowing no repeatable pattern.
—————————–
I’ll have to dig the graph off my backup disc, when I’ve found its power cable… It might be quicker to download it.

Thanks I would be interested to see. In relation to the precession, looking at the solar system from the side in line with the solar equator let’s say the Jupiter Z axis is at its highest point right now. In 55,000 years it will be at its lowest point (at the same timing point of the orbit) if my figures are correct. The other planets would be precessing at different rates on their inclined orbits which should mean the total Z axis data will be shifting on a constant basis. I am not sure if the plane of the planet inclined orbit shifts with the precession, but either way the total mass must change over time?
Precession in the XY plane along with the differing orbit speeds produce different planet positions every 172 years (this is the shape of the solar proxy holocene record) but this is quite different to the mass changes experienced in the Z axis.

tallbloke
August 2, 2011 8:17 am

Geoff Sharp says:
August 2, 2011 at 6:20 am
In relation to the precession, looking at the solar system from the side in line with the solar equator let’s say the Jupiter Z axis is at its highest point right now. In 55,000 years it will be at its lowest point (at the same timing point of the orbit) if my figures are correct. The other planets would be precessing at different rates on their inclined orbits which should mean the total Z axis data will be shifting on a constant basis. I am not sure if the plane of the planet inclined orbit shifts with the precession, but either way the total mass must change over time?

Hi Geoff, sorry I haven’t found the cable yet, I’ll have a look tonight. Your 110k year cycle for Jupiter sounds very interesting, and I’d like more info on that in return for digging out the plots. Over the 6000 years I looked at (annual datapoints), the 172 year signal was evident throughout, but was modulated on longer cycles too, just like the x-y data. Given the A/M exchange between (IIRC) Jupiter and Neptune at the Hallstadt cycle length, I think the precessions might be tied together, certainly for those planets, and quite likely for the others too, if perturbation theory is anywhere like right. But over the long term (millions of years) the solar system is chaotic, and some big events will occur which will change things drastically. However, the degree of order and synchronisation we see currently must come about somehow. It could be that there is a self organising principle at work which actively causes the planets to adopt as stable a pattern as possible after a disruptive event. I suspect it’s tied to, is influenced by and influences solar activity levels in a true cybernetic system of feedbacks.
Tying that down is much further down the line however.

J. Bob
August 2, 2011 8:22 am

As with any math tool, understanding how it works is a prerequisite to judging its value. In evaluating data to acquire information, the Fourier Transform allows some interesting views of the data, including methods of filtering data. The following figure shows how the record from the oldest temperature station in Europe was filtered using Fourier methods. In this case, frequencies greater than 0.02 cycles/yr were cut off. The lower part of the figure shows the Power Density per frequency, indicating the power or energy at each frequency. The greater the magnitude, the greater the energy.
http://www.4shared.com/photo/7rxAWINH/Ave1_2010_FF_50yr.html
It shows considerable energy in the 0.015-0.02 cycle/yr area, as well as in the 0.04-0.05 and 0.07 areas. However, these latter frequencies were cut off in the filtering process.
A more interesting set of plots uses anomaly data from 14 European stations. These stations have records prior to 1800. These include the CEL, as well as Debilt, Berlin, Basel, etc.
http://www.4shared.com/photo/I04JY2jI/Ave14_2010_FF_20yr.html
http://www.4shared.com/photo/cjSOIdlU/Ave14_2010_FF_25yr.html
http://www.4shared.com/photo/4FKXcwnw/Ave14_2010_FF_50yr.html
In this case, three different filters were used: 20 yr. (0.05 c/y), 25 yr. (0.04 c/y) and 50 yr. (0.02 c/y). Here one can see the various cycles (or clumps of cycles, indicating almost-periodic behavior) emerge from the raw data.

August 2, 2011 9:51 am

Geoff Sharp says:
August 2, 2011 at 12:16 am
Willis, I have clearly shown the 60 year cycle in the velocity record.
Both the 60-yr and 170-y cycles are present in the Fourier analysis, but both with minuscule and insignificant amplitude.
Geoff Sharp says:
August 1, 2011 at 11:46 am
I don’t know why you are playing these funny games but I can see that I will ultimately be wasting my time.
I’m giving you a chance to redeem yourself and present your case. If you consider that a waste of time [some might] that is your choice.
You did not stipulate that you wanted the grand minima marked, but in essence it is not required.
You often make completely unfounded statements like this one. I even gave you an example of what I wanted http://www.leif.org/research/Solar-Activity-vs.Barycenter-Distance-Annotated.png where I tried to mark the ‘strongest’ U/N cycle on both plots, but seeing that you marked just about half of all the cycles it would be better for you to simply mark the strongest [in each triple] on the distance plot and what you think are grand minima on the solar activity plots.
so you will have to wait.
Have waited a long time already, so a few more weeks won’t make any difference.
tallbloke says:
August 2, 2011 at 1:01 am
Well for a first approximation you should average the two periods involved: 19.86 years and 23.72 years. This gives the average Hale cycle length pretty closely.
There is no Hale activity cycle. Each 11-yr cycle is an entity in itself. That the polarities flip is incidental to and a consequence of how the 11-yr cycle works.
I understand your desire to see everything about methods published online, but as this is still under development,
If you can’t show it, don’t hype it.
tallbloke says:
August 2, 2011 at 1:43 am
You would expect the amplitude of the cycle representing the beat interaction period of the principal periods (19.86 and 23.72 years) to be larger than the signals from the principals, which are 19.86 and 23.72 years, since at the peak they are additive and at the base they cancel.
Physically there are no two cycles beating, and even if there were they would never cancel each other [where is the cancellation of energy (which is always positive) in the purported W&P mechanism?]
And you still don’t have any firm theory or observation on the cause of your proposed 121 year principal solar cycle either.
As I pointed out the speed of the meridional circulation sets the solar cycle and we do observe that circulation to vary. There can be many reasons for such variation [stellar variations are commonplace]. At this point we do not need to know precisely which one, it suffices to admit observed variability.
We have two thunking great big planets (Jupiter and Saturn) exhibiting just the right frequencies to explain observations (peaks in the spectral analysis at 9.93 years (J-S synodic/2) and 11.86 years (J orbital) and 10.8 (sideband you claim as principal) and ~121 years (beat frequency you claim as other principal))
Perhaps I didn’t stress strongly enough that my example was a toy-example to demonstrate how the three peaks can be explained by amplitude modulation and not only by adding two cycles. The peaks in the actual sunspot data 1820-2011 are at 10.04, 10.92, and 11.92 yrs, but those are mainly artifacts from mixing two populations: the actual data has a varying fundamental period [from 11.3 in the first half to 10.6 in the 2nd half] likely reflecting a changing speed of the circulation.
We also have two viable mechanisms
Both of which violate physical law [which doesn’t seem to bother you – hey, perhaps new physics is in the offing]
M.A.Vukcevic says:
August 2, 2011 at 4:21 am
“Every nature’s puzzle has a hidden treasure far greater than the puzzle itself.”
“For every complex phenomenon there is a simple explanation, which is wrong.”
tallbloke says:
August 1, 2011 at 3:52 pm
It is as I’ve been saying to you for more than two years now, energy from fusion radiating and convecting from solar core simultaneously with barycentric motion which makes possible the barycentric effect on solar activity. Their paper confirms what I intuited from my understanding of Newtonian dynamics in 2008.
You try to infuse some ‘high-value’ words to lend legitimacy to your ‘intuition’. Fusion is irrelevant [it doesn’t matter how the energy is produced] and it takes hundreds of thousands of years for the energy from the solar core to reach the convection zone. Your ‘understanding’ of Newtonian dynamics suffers from your denial of its universality.
Please could you give a link to where this derivation is demonstrated, along with links to the supporting empirical observational evidence for the validity of any parameterisations used to obtain the result.
Accessible explanations may be found here:
http://www.leif.org/EOS/Sunspot-Predictions-Dynamo-India.pdf
http://www.leif.org/EOS/SunMagneticCycle.pdf
http://solarphysics.livingreviews.org/Articles/lrsp-2010-3/download/lrsp-2010-3Color.pdf [see section 4.9]
http://solarphysics.livingreviews.org/Articles/lrsp-2005-1/download/lrsp-2005-1Color.pdf
http://solarphysics.livingreviews.org/Articles/lrsp-2010-6/download/lrsp-2010-6Color.pdf
These papers have references to the [much harder] technical papers describing the calculations.
Please could you also supply a link to a helioseismology result which shows a solar cycle length oscillation in the sun which I requested in the linked comment but never got a reply to.
Second time: E.g. Figure 1 of http://arxiv.org/abs/1004.2869v1
But note that helioseismology uses sound waves with travel times of a few minutes so naturally will not show longer oscillations. What it does show is solar cycle variation of the frequency of those waves. So, the interior of the sun has [ever so slightly] different characteristics as a function of phase through the 11-yr cycle.

August 2, 2011 10:29 am

Leif Svalgaard says:
August 2, 2011 at 9:51 am
………….
My granddad (life long optimist)
“Every nature’s puzzle has a hidden treasure far greater than the puzzle itself.”
Dr. S’s (life long ? ) cyclopedia of wisdom:
“For every complex phenomenon there is a simple explanation, which is wrong.”
e.g. E=mc^2, F=ma, P=VI, PV=const and so on & on & on … all wrong !
Bye bye.

August 2, 2011 11:01 am

M.A.Vukcevic says:
August 2, 2011 at 10:29 am
E=mc^2, F=ma, P=VI, PV=const and so on & on & on … all wrong !
If you believe that, then:
Bye bye.
good riddance.

August 2, 2011 11:04 am

Dr. S.
“For every complex phenomenon there is a simple explanation, which is wrong.”

“… all wrong”
Logical fallacy. You are conflating ‘existential’ vs ‘universal’ quantification.
Leif didn’t say ‘all simple explanations are wrong’, merely that ‘at least one wrong simple explanation exists’ for each complex phenomenon.
http://en.wikipedia.org/wiki/Existential_quantification
http://en.wikipedia.org/wiki/Universal_quantification

tallbloke
August 2, 2011 11:54 am

Willis Eschenbach says:
August 2, 2011 at 11:22 am
Parker spiral alignment? Solar windspeed adjusted? I thought we were discussing the barycentric cycles …

Nature isn’t neatly compartmentalised to suit your thread topics.
I have no interest in half-revealed ideas,
In short, if you want to wait until publication before dazzling us with your ideas, please go be coy somewheres else.

Fair enough.
Bye for now.

August 2, 2011 12:24 pm

John Day says:
August 2, 2011 at 11:04 am
“… all wrong”
Logical fallacy. You are conflating ‘existential’ vs ‘universal’ quantification.

He is committing another fallacy: because E=mc^2 is right, therefore his ideas must be right.

August 2, 2011 1:41 pm

John Day says:
August 2, 2011 at 11:04 am
…..
Thank you Mr. Day.
If a statement is true, then it is the explanation of the phenomenon.
If an assertion is wrong, then it is not the explanation; it is a guess, whereby: explanation = the statement describing a set of facts which clarifies the cause.
For a complex problem there may be more than one explanation (e.g. light = wave & light = particle, Newtonian and Einstein’s gravity, etc.), but they all must be true.

August 2, 2011 2:00 pm


> For a complex problem there may be more than one explanation …
True. But you seem to be denying that there could be multiple, simple wrong answers too. That was the gist of Leif’s aphorism.
In fact there can be an infinite number of simple answers, all of which are wrong. For example, ‘What is the meaning of Life, the Universe and Everything?’ (which I assume you’ll accept as a ‘complex’ issue)
Here is an infinite set of simple answers, all wrong:
ans={1..Inf} – {42}
Note how I cleverly excluded the right answer.
:-]

1sky1
August 2, 2011 3:06 pm

Well-founded signal analysis distinguishes categorically between strictly periodic and aperiodic or random oscillations. It is only the former that can be exactly decomposed by the harmonic Fourier series (provided that we have at least one complete cycle of non-aliased data). Although the decomposition is entirely in terms of sinusoids, the shape or complexity of the waveform doesn’t impact the exactitude at all.

Random signals, on the other hand, are NEVER the superposition of discrete sinusoids. They are represented not by line spectra in a series summation, but by a spectral continuum in a Fourier-Stieltjes integral, with random phases for the infinitesimal sinusoids that synthesize the signal. The Wiener-Khinchine Theorem rigorously defines their power density spectrum as the Fourier transform of the autocovariance function of the signal. Applying FFT analysis to such signals tacitly imposes the PRESUMPTION of periodicity corresponding to the available length of record. That is what leads to various short-record artifacts in the raw periodogram obtained by squaring the magnitude of the FFT coefficients, which novices commonly mistake for the power spectrum.

While “periodicity analysis” appeals strongly to primitive intuition and may be useful in distinguishing strictly periodic signals from those that are not, it provides no new analytic insight into the stochastic structure of random signals. The decomposition obtained is likewise mistakenly periodic, and lacks orthogonality to boot, as the authors of the method admit. And as others have pointed out here, its goals are readily achieved by less quaint means, such as wavelet analysis. Fourier’s place in the pantheon of science remains secure.

I wholeheartedly agreed with Willis on the original thread that L&S made a basic conceptual mistake by positing strictly periodic multidecadal oscillations in their model of GST. Unfortunately, Willis now makes the opposite-pole mistake of claiming, largely on the basis of the qualitative features of red-noise modeling, that such oscillations are “pseudo-cycles”, mere artifacts of faulty analysis.

But red noise is not at all evident in long records of yearly temperature averages, instrumental or proxy. Properly estimated power spectra of such records (some hundreds to thousands of years long) almost invariably show statistically significant spectral peaks and valleys, instead of a power density monotonically rising with decreasing frequency, as with red noise. In fact, in some regions (e.g., northeast Europe) the spectral density of vetted records consistently FALLS at the lowest frequencies. And what makes the contrary evidence even more convincing is the high cross-spectral coherence at widely separated stations.

There is an extremely wide range of stochastic signal structures in the real world that result in aperiodic, but coherent, oscillations that do not resemble any academic noise models. This is the case because real-world signals are invariably band-limited by physics. In narrow-band situations, there’s even some useful predictability of amplitude and phase (but never perfect predictability, as the prognosticators of SC24 must acknowledge). In wide-band situations, even optimal prediction filters produce no practically useful results.

Instead of jumping from one conceptual extreme to the other on the basis of inappropriate models, we should pay more attention to what the entire body of geophysical data is showing. It is showing that multidecadal and longer natural cycles may not be predictable and the mechanisms uncertain, but they are real signals rather than noise.

sky
August 2, 2011 3:17 pm

The new system of posting comments on WUWT is queer. It attached a link to a website with which I have no affiliation.

Richard Saumarez
August 2, 2011 3:28 pm

Day
I beg to disagree.
This is one of the most fundamental ideas in signal processing. I agree that if you compute a DFT of a time-windowed signal, it can be inverted. This does not detract from the analytical concept that a signal that can be measured from +- infinity has infinitesimal frequency resolution. Your comments about the Nyquist theorem (= the Shannon theorem?) are wrong. If you have a system in which the sampling is not dictated by the Nyquist theorem, i.e.: the samples are not adequately prefiltered, you will end up with an aliased signal, which cannot be reconstructed from the aliased data.
As I have pointed out, this is one of the most elementary pitfalls in signal processing. For example, consider daily records that are averaged into monthly records. The averaging process is a filter that has zeros at n/(30 days). The first zero is at 1/month. This is decimated at 1/month, giving a Nyquist frequency of 1/(2 months). Since this is aliased, the monthly record is corrupt.
As I pointed out, this is a problem that occurs with monotonous regularity.

August 2, 2011 3:29 pm

Willis Eschenbach says:
August 2, 2011 at 11:29 am
Why on Earth would I read the paper of someone who claims there are trident-headed waves that escape Fourier analysis, but who won’t give us the math for either how the trident-headed waves are formed or how they evade analysis? Sorry, I prefer to read science.
Ok Willis I am beginning to understand your brand of science, basically if the data doesn’t suit your agenda you will refuse to look at it. I will call this “3 monkey science”.
You say you have seen nothing in the barycentre data but refuse to look at the latest research that has been developing over the last 3 years….where have you been?
I provided a clock face with variable trident arm as an analogy to demonstrate how cycles can go undetected using your method. I then provided the actual graphs created by Leif showing the occurrence and strength of the so called prongs on the clock arm. You once again ignored the data.
This is not good enough from a frequent author on a science blog, you are doing this blog a disservice and in my mind have zero credibility. I will not waste any more time on you.

Richard Saumarez
August 2, 2011 3:31 pm

PS: Day,
I did my PhD in a world-class signal processing laboratory and I am familiar with the concepts that you claim to understand.

1sky1
August 2, 2011 4:12 pm

What happened to my comment of an hour ago?
[Reply: the moderator was out shopping. Comment posted now.☺ ~dbs]
