Cycles Without The Mania

Guest Post by Willis Eschenbach

Are there cycles in the sun and its associated electromagnetic phenomena? Assuredly. What are the lengths of the cycles? Well, there’s the question. In the process of writing my recent post about cyclomania, I came across a very interesting paper entitled Correlation Between the Sunspot Number, the Total Solar Irradiance, and the Terrestrial Insolation by Hempelmann and Weber, hereinafter H&W2011. It struck me as a reasonable look at cycles without the mania, so I thought I’d discuss it here.

The authors have used Fourier analysis to determine the cycle lengths of several related datasets: the sunspot count, the total solar irradiance (TSI), the Kiel neutron count (cosmic rays), the geomagnetic aa index, and the Mauna Loa insolation. One of their interesting results is the relationship between the sunspot number and the total solar irradiance (TSI). I always thought that the TSI rose and fell with the number of sunspots, but as usual, nature is not that simple. Here is their Figure 1:

[Figure 1 from H&W2011: ACRIM TSI versus sunspot number]

They speculate that at small sunspot numbers, the TSI increases with the sunspot count. However, when the number of sunspots gets very large, the area of the dark spots on the surface of the sun grows faster than the radiance rises, so the net radiance drops. Always more to learn … I’ve replicated their results, and determined that the curve they show is quite close to the Gaussian average of the data.

Next, they give the Fourier spectra for a variety of datasets. I find that for many purposes there is a better alternative to Fourier analysis for understanding the makeup of a complex waveform or a time series of natural observations. Let me explain the advantages of that alternative, which is called the Periodicity Transform, developed by Sethares and Staley.

I realized in the writing of this post that in climate science we have a very common example of a periodicity transform (PT). This is the analysis of temperature data to give us the “climatology”, which is the monthly average temperature curve. What we are doing is projecting a long string of monthly data onto a periodic space, which repeats with a cycle length of 12. Then we take the average of each of those twelve columns of monthly data, and that’s the annual cycle. That’s a periodicity analysis, with a cycle length of 12.

By extension, we can do the same thing for a cycle length of 13 months, or 160 months. In each case, we will get the actual cycle in the data with that particular cycle length.
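Here is a minimal R sketch of that folding operation, just to make the mechanics concrete (“monthlytemps” is a hypothetical vector of monthly temperatures; the same approach is used in the periodicity function posted at the end of this post):

foldcycle=function(tdata,cyclen){ # average the data onto one cycle of length "cyclen"
  tdata=as.vector(tdata) # make sure we have a plain vector
  pad=ceiling(length(tdata)/cyclen)*cyclen-length(tdata) # NAs needed to fill out the last cycle
  colMeans(matrix(c(tdata,rep(NA,pad)),ncol=cyclen,byrow=TRUE),na.rm=TRUE) # one mean per phase
}
annualcycle=foldcycle(monthlytemps,12) # the usual twelve-month climatology
oddcycle=foldcycle(monthlytemps,13) # a thirteen-month "cycle", which should come out nearly flat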

So given a dataset, we can look at cycles of any length in the data. The larger the swing of the cycle, of course, the more of the variation in the original data that particular cycle explains. For example, the 12-month cycle in a temperature time series explains most of the total variation in the temperature. The 13-month cycle, on the other hand, is basically nonexistent in a monthly temperature time-series.

The same is true about hourly data. We can use a periodicity transform (PT) to look at a 24-hour cycle. Here’s the 24-hour cycle for where I live:


Figure 2. Average hourly temperatures, Santa Rosa, California. This is a periodicity transform of the original hourly time series, with a period of 24.

Now, we can do a “goodness-of-fit” analysis of any given cycle against the original observational time series. There are several ways to measure that. If we’re only interested in a relative index of the fit of cycles of various lengths, we can use the root-mean-square power in the signals. Another would be to calculate the R^2 between the cycle and the original signal. The choice is not critical, because we’re looking for the strongest signal regardless of how it’s measured. I use a “Power Index”, which is the RMS power in the signal divided by the square root of the length of the signal. In the original Sethares and Staley paper, this is called a “gamma correction”. It is a relative measurement, valid only for comparing the cycles within a given dataset.
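In code, the index for a single trial cycle length looks something like this minimal sketch (“tdata” stands in for whatever numeric vector is being analyzed); it is the same calculation used in the periodicity function at the end of this post, with the standard deviation of the folded cycle standing in for the RMS power:

cyclen=12 # trial cycle length, in samples
folded=matrix(tdata[1:(cyclen*floor(length(tdata)/cyclen))],ncol=cyclen,byrow=TRUE) # one row per repetition
cyclemeans=colMeans(folded,na.rm=TRUE) # the average cycle shape
powerindex=sd(cyclemeans,na.rm=TRUE)/sqrt(cyclen) # relative strength of this cycle length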

So … what are the advantages and disadvantages of periodicity analysis (Figure 2) over Fourier analysis? Advantages first, neither list is exhaustive …

Advantage: Improved resolution at all temporal scales. Fourier analysis only gives the cycle strength at a specific set of periods determined by the length of the record, and the spacing between those periods changes across the scale. For example, I have 3,174 months of sunspot data. A Fourier analysis of that data gives sine waves with periods of 9.1, 9.4, 9.8, 10.2, 10.6, 11.0, 11.5, and 12.0 years.
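Those particular periods are nothing more than the record length divided by successive whole numbers. A quick sketch of the arithmetic for my 3,174 months of data, looking at the harmonics nearest the solar cycle:

nmonths=3174 # length of the monthly sunspot record
harmonics=22:29 # the harmonic numbers nearest the eleven-year region
round(nmonths/harmonics/12,1) # available Fourier periods in years: 12.0 11.5 11.0 10.6 10.2 9.8 9.4 9.1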

Periodicity analysis, on the other hand, has the same resolution at all time scales. For example, in Figure 2, the resolution is hourly. We can investigate a 25-hour cycle as easily and as accurately as the 24-hour cycle shown. (Of course, the 25-hour cycle is basically a straight line …)

Advantage: A more fine-grained dataset gives better resolution. The resolution of the Fourier Transform is a function of the length of the underlying dataset. The resolution of the PT, on the other hand, is given by the resolution of the data, not the length of the dataset.

Advantage: Shows actual cycle shapes, rather than sine waves. In Figure 2, you can see that the cycle with a periodicity of 24 is not a sine wave in any sense. Instead, it is a complex repeating waveform. And often, the shape of the waveform resulting from the periodicity transform contains much valuable information. For example, in Figure 2, from 6 AM until noon, we can see how the increasing solar radiation results in a surprisingly linear increase of temperature with time. Once that peaks, the temperature drops rapidly until 11 PM. Then the cooling slows, and continues (again surprisingly linearly) from 11 PM until sunrise.

As another example, suppose that we have a triangle wave with a period of 19 and a sine wave with a period of 17. We add them together, and we get a complex wave form. Using Fourier analysis we can get the underlying sine waves making up the complex wave form … but Fourier won’t give us the triangle wave and the sine wave. Periodicity analysis does that, showing the actual shapes of the waves just as in Figure 2.
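Here is a minimal R sketch of that example, with entirely synthetic data just for illustration. Folding the combined signal at a period of 19 averages the period-17 sine down to essentially nothing and hands back the triangle shape, and folding at 17 does the reverse:

n=1:19000 # a long record containing many repeats of both cycles
tri=2*abs((n%%19)/19-0.5) # triangle wave with a period of 19
sine=sin(2*pi*n/17) # sine wave with a period of 17
combo=tri+sine # the complex combined waveform
fold=function(x,p) colMeans(matrix(x[1:(p*floor(length(x)/p))],ncol=p,byrow=TRUE)) # fold onto period p
par(mfrow=c(2,1)) # two panels, one per recovered cycle
plot(fold(combo,19),type="o",main="Period 19 component") # recovers the triangle shape
plot(fold(combo,17),type="o",main="Period 17 component") # recovers the sine shape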

Advantage: Can sometimes find cycles Fourier can’t find. See the example here, and the discussion in Sethares and Staley.

Advantage: No “ringing” or aliasing from end effects. Fourier analysis suffers from the problem that the dataset is of finite length. This can cause “ringing” or aliasing when you go from the time domain to the frequency domain. Periodicity analysis doesn’t have these issues.

Advantage: Relatively resistant to missing data. As H&W2011 note, they had to use a variant of the Fourier transform to analyze the data because of missing values. The PT doesn’t care about missing data; missing values simply widen the error bars.

Advantage: Cycle strengths are actually measured. If the periodicity analysis says that there’s no strength in a certain cycle length, that’s not a theoretical statement. It’s a measurement of the strength of that actual cycle compared to the other cycles in the data.

Advantage: Computationally reasonably fast. The periodicity function I post below, written in the computer language “R” and running on my machine (a MacBook Pro), does a full periodicity transform (all cycles up to 1/3 the dataset length) on a dataset of 70,000 data points in about forty seconds. It could probably be sped up; all suggestions accepted, as my programming skills in R are … well, not impressive.

Disadvantage: Periodicity cycles are neither orthogonal nor unique. There’s only one big disadvantage, which applies to the decomposition of the signal into its cyclical components. With the Fourier Transform, the sine waves that it finds are independent of each other. When you decompose the original signal into sine waves, the order in which you remove them makes no difference. With the Periodicity Transform, on the other hand, the signals are not independent. A signal with a period of ten years, for example, will also appear at twenty and thirty years and so on. As a result, the order in which you decompose the signal becomes important. See Sethares and Staley for a full discussion of decomposition methods.

A full periodicity analysis looks at the strength of the signal at all possible cycle lengths up to the longest practical length, which for me is a third of the length of the dataset. That gives three full cycles for the longest period. However, I don’t trust the cycle lengths at the longest end of the scale as much as those at the shorter end. The margin of error in a periodicity analysis is less for the shorter cycles, because the estimate averages over more repetitions of the cycle.

So to begin the discussion, let me look at the Fourier Transform and the Periodicity Transform of the SIDC sunspot data. In the H&W2011 paper they show the following figure for the Fourier results:


Figure 3. Fourier spectrum of SIDC daily sunspot numbers.

In this, we’re seeing the problem of the lack of resolution in the Fourier Transform. The dataset is 50 years in length, so the only periods resolved by the Fourier analysis are 50/2 years, 50/3 years, 50/4 years, and so on. Near the solar cycle, this only gives values at cycle lengths of around 12.5, 10, and 8.3 years. As a result, it’s missing what’s actually happening. The Fourier analysis doesn’t catch the fine detail revealed by the periodicity analysis.

Figure 4 shows the full periodicity transform of the monthly SIDC sunspot data, showing the power contained in each cycle length from 3 to 88 years (a third of the dataset length).


Figure 4. Periodicity transform of monthly SIDC sunspot numbers. The “Power Index” is the RMS power in the cycle divided by the square root of the cycle length. Vertical dotted lines show the eleven-year cycles, vertical solid lines show the ten-year cycles.

This graph is a typical periodicity transform of a dataset containing clear cycles. The length of the cycles is shown on the bottom axis, and the strength of the cycle is shown on the vertical axis.

Now as you might expect in a sunspot analysis, the strongest underlying signal is an eleven-year cycle. The second strongest signal is ten years. As mentioned above, these same cycles reappear at 20 and 22 years, 30 and 33 years, and so on. However, it is clear that the main periodicity in the sunspot record is in the cluster of frequencies right around the 11-year mark. Figure 5 shows a closeup of the cycle lengths from nine to thirteen years:


Figure 5. Closeup of Figure 4, showing the strength of the cycles with lengths from 9 years to 13 years.

Note that in place of the single peak at around 11 years shown in the Fourier analysis, the periodicity analysis shows three clear peaks at 10 years, 11 years, and 11 years 10 months. Also, you can see the huge advantage in accuracy of the periodicity analysis over the Fourier analysis. It samples the actual cycles at a resolution of one month.

Now, before anyone points out that 11 years 10 months is the orbital period of Jupiter, yes, it is. But ten years and eleven years, the other two peaks, are not the orbital periods of anything I know of … so that may or may not be a coincidence. In any case, it doesn’t matter whether the 11 years 10 months is Jupiter or not, any more than it matters if 10 years is the orbital period of something else. Those are just the periods involved, to the nearest month. We’ll see below why Jupiter may not be so important.

Next, we can take another look at the sunspot data, but this time using daily sunspot data. Here are the cycles from nine to thirteen years in that dataset.


Figure 6. As in figure 5, except using daily data.

In this analysis, we see peaks at 10.1, 10.8, and 11.9 years. This analysis of daily data is much the same as the previous analysis of monthly data shown in Figure 5, albeit with greater resolution. So this should settle the lengths of the sunspot cycles and enshrine Jupiter in the pantheon, right?

Well … no. We’ve had the good news, here’s the bad news. The problem is that like all natural cycles, the strength of these cycles waxes and wanes over time. We can see this by looking at the periodicity transform of the first and second halves of the data individually. Figure 7 shows the periodicity analysis of the daily data seen in Figure 6, plus the identical analysis done on each half of the data individually:


Figure 7. The blue line shows the strengths of the cycles found using the entire sunspot dataset as shown in Figure 6. The other two lines are the cycles found by analyzing half of the dataset at a time.

As you can see, the strengths of the cycles of various lengths in each half of the dataset are quite dissimilar. The half-data cycles each show a single peak, not several. In one half of the data this is at 10.4 years, and in the other, 11.2 years. The same situation holds for the monthly sunspot half-datasets (not shown). The lengths of the strongest cycles in the two halves vary greatly.

Not only that, but in neither half is there any sign of the signal at 11 years 10 months, the purported signal of Jupiter.

As a result, all we can do is look at the cycles and marvel at the complexity of the sun. We can’t use the cycles of one half to predict the other half; that is the eternal curse of those who wish to make cycle-based models of the future. Cycles appear and disappear, and what seems to point to Jupiter changes and points to Saturn or to nothing at all … and meanwhile, a Fourier analysis whose fixed cycle lengths are, say, 8.0, 10.6, and 12.8 years can barely distinguish between any of those situations.
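For anyone who wants to repeat that split-half check, it only takes a couple of lines with the periodicity function posted at the end of this post (“dailyssn” is a hypothetical name for the vector of daily sunspot numbers):

half=floor(length(dailyssn)/2) # the split point
firsthalf=periodicity(dailyssn[1:half]) # cycle strengths in the first half of the record
secondhalf=periodicity(dailyssn[(half+1):length(dailyssn)]) # cycle strengths in the second half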

However, I was unable to replicate all of their results regarding the total solar irradiance. I suspect that this is the result of the inherent inaccuracy of the Fourier method. The text of H&W2011 says:

4.1. The ACRIM TSI Time Series

Our analysis of the ACRIM TSI time series only yields the solar activity cycle (Schwabe cycle, Figure 6). The cycle length is 10.6 years. The cycle length of the corresponding time series of the sunspot number is also 10.6 years. The close agreement of both periods is obvious.

I suggest that the agreement at 10.6 years is an artifact of the limited resolution of the two Fourier analyses. The daily ACRIM dataset is a bit over 30 years, and the daily sunspot dataset that they used is 50 years of data. The Fourier periods for fifty years are 50/2=25, 50/3=16.7, 50/4=12.5, 50/5=10, and 50/6=8.3 year cycles. For a thirty-two year dataset, the periods are 32/2=16, 32/3=10.6, and 32/4=8 years. So finding a cycle of length around 10 years in both datasets is not surprising.

In any case, I don’t find anything like the 10.6 year cycle they report. I find the following:


Figure 8. Periodicity analysis of the ACRIM composite daily total solar irradiance data.

Note how much less defined the TSI data is. This is a result of the large variation in TSI during the period of maximum solar activity. Figure 9 shows this variation in the underlying data for the TSI:


Figure 9. ACRIM composite TSI data used in the analysis.

When the sun is at its calmest, there is little variation in the signal. This is shown in the dark blue areas in between the peaks. But when activity increases, the output begins to fluctuate wildly. This, plus the short length of the cycle, turns the signal into mush and results in the loss of everything but the underlying ~ 11 year cycle.

Finally, let’s look at the terrestrial temperature datasets to see if there is any trace of the sunspot cycle in the global temperature record. The longest general temperature dataset that we have is the BEST land temperature dataset. Here’s the BEST periodicity analysis:


Figure 10. Full-length periodicity analysis of the BEST land temperature data.

There is a suggestion of a cycle around 26 years, with an echo at 52 years … but nothing around 10-11 years, the solar cycle. Moving on, here’s the HadCRUT3 temperature data:


Figure 11. Full-length periodicity analysis of the HadCRUT3 temperature record.

Curiously, the HadCRUT3 record doesn’t show the 26- and 52-year cycle shown by the BEST data, while it does show a number of variations not shown in the BEST data. My suspicion is that this is a result of the “scalpel” method used to assemble the BEST dataset, which cuts the records at discontinuities rather than trying to “adjust” them.

Of course, life wouldn’t be complete without the satellite records. Here are the periodicity analyses of the satellite records:


Figure 12. Periodicity analysis, RSS satellite temperature record, lower troposphere.

With only a bit more than thirty years of data, we can’t determine any cycles over about ten years. The RSS data server is down, so it’s not the most recent data.


Figure 13. Periodicity analysis, UAH satellite temperature record, lower troposphere.

As one might hope, both satellite records are quite similar. Curiously, they both show a strong cycle with a period of 3 years 8 months (along with the expected echoes at twice and three times that length, about 7 years 4 months and 11 years respectively). I have no explanation for that cycle. It may represent some unremoved cyclicity in the satellite data.

SUMMARY:

To recap the bidding:

• I’ve used the Periodicity Transform to look at the sunspot record, both daily and monthly. In both cases we find the same cycles, at ~ 10 years, ~ 11 years, and ~ 11 years 10 months. Unfortunately when the data is split in half, those cycles disappear and other cycles appear in their stead. Nature wins again.

• I’ve looked at the TSI record, which contains only a single broad peak from about 10.75 to 11.75 years.

• The TSI has a non-linear relationship to the sunspots, increasing at small sunspot numbers and decreasing at high numbers. However, the total effect (averaged 24/7 over the globe) is on the order of a quarter of a watt per square metre …

• I’ve looked at the surface temperature records (BEST and HadCRUT3), which show no peaks at around 10-11 years, and thus contain no sign of Jovian (or jovial) influence. Nor do they show any sign of solar (sunspot or TSI) related influence, for that matter.

• The satellite temperatures tell the same story. Although the data is too short to be definitive, there appears to be no sign of any major peaks in the 10-11 year range.

Anyhow, that’s my look at cycles. Why isn’t this cyclomania? For several reasons:

First, because I’m not claiming that you can model the temperature by using the cycles. That way lies madness. If you don’t think so, calculate the cycles from the first half of your data, and see if you can predict the second half. Instead of attempting to predict the future, I’m looking at the cycles to try to understand the data.

Second, I’m not blindly ascribing the cycles to some labored astronomical relationship. Given the number of lunar and planetary celestial periods, synoptic periods, and the periods of precessions, nutations, perigees, and individual and combined tidal cycles, any length of cycle can be explained.

Third, I’m using the same analysis method to look at the temperature data that I’m using on the solar phenomena (TSI, sunspots), and I’m not finding corresponding cycles. Sorry, but they are just not there. Here’s a final example. The most sensitive, responsive, and accurate global temperature observations we have are the satellite temperatures of the lower troposphere. We’ve had them for three full solar cycles at this point. So if the sunspots (or anything associated with them, TSI or cosmic rays) have a significant effect on global temperatures, we would see it in the satellite temperatures. Here’s that record:


Figure 14. Scatterplot of lower tropospheric temperature versus sunspot number. There is a slight decrease in lower tropospheric temperature with increasing sunspots, but it is far from statistically significant.
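That statement about significance is just an ordinary least-squares fit of the monthly lower-troposphere anomalies against the monthly sunspot numbers. Here is a minimal sketch in R, assuming the two series have already been aligned into a data frame; “sundata”, “templt”, and “ssn” are hypothetical names, not anything from the datasets listed below:

fit=lm(templt~ssn,data=sundata) # regress temperature anomaly on sunspot number
summary(fit)$coefficients # slope and p-value; the slope is the slight decrease seen in Figure 14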

The vagaries of the sun, whatever else they might be doing, and whatever they might be related to, do not seem to affect the global surface temperature or the global lower atmospheric temperature in any meaningful way.

Anyhow, that’s my wander through the heavenly cycles, and their lack of effect on the terrestrial cycles. My compliments to Hempelmann and Weber, their descriptions and their datasets were enough to replicate almost all of their results.

w.

DATA:

SIDC Sunspot Data here

ACRIM TSI Data, overview here, data here

Kiel Neutron Count Monthly here, link in H&W document is broken

BEST data here

Sethares paper on periodicity analysis of music is here.

Finally, I was unable to reproduce the H&W2011 results regarding MLO transmissivity. They have access to a daily dataset which is not on the web. I used the monthly MLO dataset, available here, and had no joy finding their claimed relationship with sunspots. Too bad, it’s one of the more interesting parts of the H&W2011 paper.

CODE: here’s the R function that does the heavy lifting. It’s called “periodicity” and it can be called with just the name of the dataset that you want to analyze, e.g. “periodicity(mydata)”. It has an option to produce a graph of the results. Everything after a “#” in a line is a comment. If you are running MatLab (I’m not), Sethares has provided programs and examples here. Enjoy.

# The periodicity function returns the power index showing the relative strength
# of the cycles of various lengths. The input variables are:
#   tdata: the data to be analyzed
#   runstart, runend: the interval to be analyzed. By default from a cycle length of 2 to the dataset length / 3
#   doplot: a boolean to indicate whether a plot should be drawn
#   gridlines: interval between vertical gridlines, plot only
#   timeint: intervals per year (e.g. monthly data = 12), plot only
#   maintitle: title for the plot

periodicity=function(tdata,runstart=2,runend=NA,doplot=FALSE,
                     gridlines=10,timeint=12,
                     maintitle="Periodicity Analysis"){
  testdata=as.vector(tdata) # ensure the data is a vector
  datalen=length(testdata) # get data length
  if (is.na(runend)) { # if largest cycle is not specified
    maxdata=floor(datalen/3) # set it to the data length over three
  } else { # otherwise
    maxdata=runend # set it to user's value
  }
  answerline=matrix(NA,nrow=maxdata,ncol=1) # make empty matrix for answers
  for (i in runstart:maxdata) { # for each cycle length
    newdata=c(testdata,rep(NA,(ceiling(length(testdata)/i)*i-length(testdata)))) # pad with NAs to a whole number of cycles
    cyclemeans=colMeans(matrix(newdata,ncol=i,byrow=TRUE),na.rm=TRUE) # fold into a matrix, take column means
    answerline[i]=sd(cyclemeans,na.rm=TRUE)/sqrt(length(cyclemeans)) # calculate and store the power index
  }
  if (doplot){ # if a plot is called for
    par(mgp=c(2,1,0)) # set locations of labels
    timeline=c(1:(length(answerline))/timeint) # calculate times in years
    plot(answerline~timeline,type="o",cex=.5,xlim=c(0,maxdata/timeint),
         xlab="Cycle Length (years)",ylab="Power Index") # draw the plot
    title(main=maintitle) # add the title
    abline(v=seq(0,100,gridlines),col="gray") # add vertical gridlines
  }
  answerline # return the periodicity data
}
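As a usage example (a minimal sketch; “sidc” is a hypothetical name for a vector holding the monthly SIDC sunspot numbers, loaded however you prefer), a call like this produces the kind of plot shown in Figure 4:

ssncycles=periodicity(sidc,doplot=TRUE,timeint=12,
                      maintitle="Periodicity Analysis, Monthly SIDC Sunspot Numbers")
head(ssncycles,24) # power index for cycle lengths of 1 to 24 months (the first entry is NA, since runstart defaults to 2)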

COMMENTS:

Cohen
July 29, 2013 2:49 pm

The wikipedia entry on solar cycles has the comment
Hale’s observations revealed that the solar cycle is a magnetic cycle with an average duration of 22 years. However, because very nearly all manifestations of the solar cycle are insensitive to magnetic polarity, it remains common usage to speak of the “11-year solar cycle”.
I notice that the BEST and HADCRUT3 periodicity curves have peaks somewhat close to the 22-ish position. Maybe this is connected to the sun.

July 29, 2013 2:55 pm

Apart from a millennial cycle such as MWP to LIA to date, the sun is pretty irregular on shorter time scales of 3 or 4 solar cycles, as witness the active sun and accompanying warmth of the late 17th and early 18th centuries before the sun became less active and temperature dropped again in the late 18th and 19th centuries.
Furthermore the modulating effect of the oceans has an effect sometimes supplementing and sometimes offsetting any solar variations.
So I don’t think there are any clear cycles on time scales of less than a millennium or so and even that varies in length due to the ocean effect.
However, the absence of clear shorter term cycles does not imply that there is no link at all between solar activity and a temperature trend at specific levels of solar activity.
It just isn’t a neat, regularly repeating relationship. Just a tendency for temperatures to rise slowly when the sun is more active than a specific (currently uncertain) level and for temperatures to fall slowly when the sun is less active than that specific level.
I think someone did suggest a specific level of the Ap index at which the system would switch from cooling to warming or vice versa.
Suggesting that such an approach is ‘cyclomania’ is a straw man argument.

July 29, 2013 2:58 pm

Much obliged for the essay, Willis. Well done. I’m going to look into PT.

Curiously, the HadCRUT3 record doesn’t show the 26- and 52-year cycle shown by the BEST data, while it does show a number of variations not shown in the BEST data. My suspicion is that this is a result of the “scalpel” method used to assemble the BEST dataset, which cuts the records at discontinuities rather than trying to “adjust” them.

I urge all readers to take this observation to heart. It is the “Smoking Gun at Berkeley Earth”.
My hobby horse has been that BEST’s scalpel process destroys low frequency data and the suture and regional homogenization creates counterfeit low frequency output — it looks real, but divorced from reality. Willis and I went several rounds on this topic in “Berkeley Earth Finally Makes Peer Review…” Jan 19-23, 2013.
Willis (1/22/13 12:45 am):

But your claim, that “Any trend longer than [12 years] in the reconstruction is apparently a result of modeling”, that’s not true. Long-period trends have noise added to them by the scalpel technique, but the scalpel technique does not lose the long-period information as you claim. The long-term trends stay in the data, they are not removed as you think.

Willis, I submit that your observations and analysis that HadCRUT3 and BEST do not show the same long period results is confirmation that the scalpel and suture is destroying important signal and substituting false artifacts. If so, climatic results with a time scale beyond 6 years from BEST work should be highly suspect and probably discarded.
I made my argument from Fourier Theorems, not because I believe long term cycles have a cause, but from an Information Content argument. A Temp vs Time time series has a certain amount of information. The Fourier Transform of that series has exactly the same information content because there is a 1 to 1 correspondence between the two. The BEST scalpel, by shortening each time series, removes the lowest possible frequency in the Fourier spectrum. Information is lost. The lowest frequency information is lost — that which is most important to climate science. High frequency information cannot be used to recreate low frequency information. Regional homogenization will not save the day, for if you look at the regional homogenization in the Fourier Space, the low frequency component of the maps has gone to NULL.
http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

And with the Latin saying “Falsus in unum, falsus in omis” (false in one, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.
Regards to all, keep fighting the good fight,
w.

July 29, 2013 3:01 pm

And of course whatever the sun does the effect is modulated by the thermostat effect which Willis has previously alluded to as a tropical phenomenon but which I would extend to the entire global air circulations.

July 29, 2013 3:07 pm

My post at 2.55pm should have referred to the F10.7 Flux and not the Ap index.

July 29, 2013 3:08 pm

Hi Willis
Some 10 years ago I produced this equation
http://www.vukcevic.talktalk.net/LFC4.htm
(on the graph numbers are rounded off) which has ~105 year period, except at the time of Maunder minimum when period halved.
I was flogging my ideas on another blog, and the well known solar expert declared it astrology, insisting that it is the well known Gleissberg cycle (70 years) that dominated the sunspot cycles.
My stance was that such a cycle did not exist and that the sunspot long term output is modulated by the ‘Maunder equation’, as I call it, and I challenged the good old doc to produce the FFT spectrum.
Instead of producing single spectral line, Dr. Svalgaard came with this:
http://www.vukcevic.talktalk.net/FFT-105y.htm
My comment reproduced below the graph clearly shows sentiments of the conversation at the time.
This particular topic may still be available on the old ‘SC24 blog’ but I couldn’t trace it; someone else may be more successful. I was banned for misbehaviour so I made a record of many of my posts.
Dr. Svalgaard’s recollection may be different and if put forward I shall not challenge it.
As far as I know this is the first time anyone defined a 105 year cycle, but I could be wrong.

Greg Goodman
July 29, 2013 3:09 pm

“It just isn’t a neat, regularly repeating relationship. Just a tendency for temperatures to rise slowly when the sun is more active than a specific (currently uncertain) level ”
First we need to understand the solar activity , then we need to understand the climate systems response to whatever part of solar activity is most important : TSI, UV, magnetism, solar wind.
Then we need to untangle anything else like a strong 9 year cycle that is probably lunar in origin. Anyone expecting a nice 1 to 1 correlation to SSN is being foolishly simplistic. Equally, those stating that solar is unimportant because of the lack of such a trivial relationship are being foolish.
There are clear signals but the relationship is not trivial.
Neither is bundling the whole global temperature data into a “global average” very helpful. Especially if we mix land and sea data. Tropics, temperate and polar regions get hit differently and have different responses and feedbacks.
Global average ‘mania’ has held back any real understanding of climate mechanisms for decades. Probably intentionally so by those wishing to analyse it as CO2 plus noise.

Spence_UK
July 29, 2013 3:19 pm

Brave of you to make so many basic errors on the capability of the Fourier Transform on a page frequented by many electrical / electronic engineers…

Advantage: Improved resolution at all temporal scales. Fourier analysis only gives the cycle strength at specific intervals. And these intervals are different across the scale. For example, I have 3,174 months of sunspot data. A Fourier analysis of that data gives sine waves with periods of 9.1, 9.4, 9.8, 10.2, 10.6, 11.0, 11.5, and 12.0 years.

A simple DFT gives those numbers because that is all of the information that is in the data set. Any other data is, by definition, interpolation (which we know from the mathematical underpinnings of the transform). But if you want to perform the interpolation using the Fourier transform, it’s trivial – you just zero pad the time domain. Then you get any frequency you want.

Advantage: A more fine-grained dataset gives better resolution.

Again, by definition, there is no additional information to give better resolution, and any method that gives better resolution must do so by interpolation – which is trivial and easy to do using the Fourier transform.

Advantage: Shows actual cycle shapes, rather than sine waves.

Cycle shapes can be trivially extracted from a Fourier transform from analysis of the harmonics.

Advantage: No “ringing” or aliasing from end effects.

Allow me to introduce you to window functions…

Advantage: Relatively resistant to missing data.

And allow me to introduce you to the Lomb-Scargle periodogram.

Advantage: Computationally reasonably fast.

A 70,000 point FFT on my creaking, 8 year old lap top in MATLAB – including generating the random numbers to feed into it – took 0.01 seconds. Not forty. Not quite sure how you’re selling this as an advantage. (Command: tic; fft(randn(70000,1)); toc)
I’m sure periodicity analysis (just like all the other variants, such as singular spectrum analysis etc) has its merits, but I’d rather hear them from someone who actually understands Fourier analysis in the first place, if a comparison is to be made at all.

July 29, 2013 3:21 pm

Global average ‘mania’ has held back any real understanding of climate mechanisms for decades.
Whole heartedly agree. I’d look for a solar effect in the desert zones. Australia has good data from a few arid desert locations. Although avoid the BoM’s Australian composite datasets. They have piled adjustment on adjustment.

LamontT
July 29, 2013 3:37 pm

I’m thinking that this shows more than anything else that a variety of tools used to examine data is best. I think too often people get locked into looking at things in only one way, which is why an outside view can sometimes reveal amazing things. And yes, other times multiple views don’t show anything, but that is data as well.

richard verney
July 29, 2013 3:53 pm

Greg Goodman says:
July 29, 2013 at 3:09 pm
/////////////////
An insightful comment.
I too have often commented upon the point you make in your final paragraph. The use of averages really hinders seeing what is going on and why.

Bart
July 29, 2013 3:55 pm

Greg Goodman says:
July 29, 2013 at 2:01 pm
“Someone called Bart did a very interesting frequency analysis on SSN that comes up with several of the the same periods you found, and not to be impolite in any way, he does seem a lot more experienced than you with fourrier type techniques. “
That was I. The Sunspot data are the result of the rectification of primarily two processes with energy concentrated in frequency ranges centered at those associated with periods of about 20 and 23.6 years. When rectified, these produce the major observed peaks associated with about 10, 10.8, 11.8, and 131 years, in accordance with the Convolution Theorem.
I make the distinction of pointing out where the energy is concentrated because these are NOT periodic signals. If they were periodic, the energy would be distributed in lines, such as appear in atomic spectra. This type of behavior is fairly ubiquitous in a wide range of physical phenomena, owing to the fact that natural continuum processes can often be described by partial differential equations on a bounded domain, and such equations can often be solved as the expansion of a series of spatial eigenmodes associated with a time dependent amplitude function which is the output of a 2nd order ordinary differential equation. Dissipation of energy produces a broadening of the lines, and the processes manifest themselves as resonant phenomena driven by wideband random forcing.
I made such a model of the Sunspots here and showed how I could use it to generate qualitatively very similar outputs to the observed Sunspot behavior here and here. A Kalman Filter/Predictor could be formulated for this model which would produce optimal estimates of future behavior and associated error bars.
This kind of stuff is pretty old hat in control systems analysis and design. Someday, it will migrate over into the climate sciences.

Philip Peake
July 29, 2013 3:56 pm

One thing to perhaps consider: overall, the sun is going to generate a pretty constant amount of energy. Given that, the TSI is going to be fairly constant too, but as surface (at least) conditions change, the energy spectrum will change.
It might be interesting to explore how the Earth reacts to a constant TSI with a varying spectral density.

July 29, 2013 4:02 pm

‘ The BEST scalpel, by shortening each time series, removes the lowest possible frequency in the Fourier spectrum. Information is lost. The lowest frequency information is lost — that which is most important to climate science. High frequency information cannot be used to recreate low frequency information. Regional homogenization will not save the day, for if you look at the regional homogenization in the Fourier Space, the low frequency component of the maps have gone to NULL.”
i suppose i could get a file done without scalpelling. However your description of it is wrong.

July 29, 2013 4:04 pm

I feel strongly that one needs to work out what each individual solar output component does to the global temp, etc., as TSI evens out individual components and is thereby not that useful. We should be seeing what effect protons, electrons, ultraviolet, X-rays, Ap, 10.7 flux, etc., each individually have on Earth’s various layers of atmosphere, jetstreams, temperature, magnetic effects, other reactions, the differing reactions at the poles and at differing latitudes, and so on. Using a broad-brush TSI is not going to achieve any real detailed meaningful results, nor help us that much in finding all the answers to Solar-Climate-Weather interactions … There is so much out there we need to research and learn.
However, the problem lies in that most of this solar component data is only very recent!

Bart
July 29, 2013 4:07 pm

Willis Eschenbach says:
July 29, 2013 at 3:47 pm
“But what Fourier analysis can’t do is give us back the original sine wave and triangle wave that were added together to make the resultant wave.”
Actually, the FFT is an isomorphism – it can give you back precisely what you put into it.
“Spence, you have listed a number of ways to get around some of the limitations of the Fourier transforms.”
It’s not a limitation of the Fourier Transform, but an aspect of the FFT implementation of it. The actual Fourier Transform is a continuous and dense function of frequency. The FFT is a sampled version of the Fourier Transform. Zero padding is merely a method to make it sample more points.
However, it really does not help with resolution in a strict sense. Fundamentally, resolution is limited by the length of the data set. E.g., you cannot generally isolate a 1000 year process in 10 years of data. But, you can use additional points of data to produce a plot which is more pleasing and recognizable to the human eye.
Anyway, the problem of looking for periodicities is that this is not really periodic data, but rather a random process with cyclical correlation, as I discussed above.

Bart
July 29, 2013 4:08 pm

“…as I discussed above.”
Once it gets through the spam filter, I suppose.

Nick Stokes
July 29, 2013 4:13 pm

Willis,
It’s an interesting analysis. But I think your list of disadvantages of the periodicity analysis is short but major.
The sunspot analysis was most interesting, so I ran your program for an exact sine of period 11 years, monthly data over 264 years. The full plot showed, rather like your Fig 4, a sequence of peaks at 22 yr, 33 yr etc. diminishing quite slowly. These are spurious. The expansion about the 11 years also showed, like your Fig 5, side lobes at about 10.4 years and 11.7 years, though not as pronounced. It looks as though these are a sinc function relating to the 264 year window.

Spence_UK
July 29, 2013 4:14 pm

Willis, I appreciate what you’re saying, but methods you describe as “getting around limitations of the Fourier transform” apply equally to the periodicity transform – they are limits of the data, e.g. Nyquist sampling theorem.
As an example, I generated your figure 5 from using a Fourier Transform:
http://i42.tinypic.com/281wj0o.png
(Apologies for the cheesy hosting). As you can see, the Fourier Transform yields an almost identical plot to the periodicity transform. The limit of the information is dictated by the data, not the signal processing method. As someone quite familiar with frequency domain analysis, it is very difficult to get past the intro to the interesting part, without thinking “this is all wrong…”

July 29, 2013 4:16 pm

Willis Eschenbach says:
July 29, 2013 at 2:32 pm
You’ve used 308 years of data from 1700-2008. This means that the only cycles that the Fourier analysis will reveal are 308/2 = 154 years, 308/3 = 103 years, 308/4 = 77 years, and so on. These individual points are clearly visible in the Fourier analysis.
As a result, you won’t find any evidence of say a 125-year cycle in the Fourier decomposition of a 308-year signal, even though such a cycle may certainly exist in the data.

It is very true that FFT is a blunt instrument for long-period cycles, as the time-resolution [really frequency] is poor for the longer periods, but it is not completely useless: the peaks will still show up, but shifted somewhat. Here is an example: I added a 70-yr period to the SSN record [with same amplitude as average SSN] and in another plot I additionally added a 125-yr period: http://www.leif.org/FFT-SSN-70-125.png so the peaks are still there but not well resolved. Of course, if you have a very long series, the peaks will show up just fine, as in the third plot of 2562 ‘years’ of a 125-yr cycle.

July 29, 2013 4:18 pm

I additionally added a 125-yr period: http://www.leif.org/research/FFT-SSN-70-125.png
This happens too often that I forget something on the long URL. I should make a really short one for this kind of stuff.

DirkH
July 29, 2013 4:19 pm

Willis Eschenbach says:
July 29, 2013 at 3:23 pm
“The most obvious difference is in the size of the peaks at around 52 years. Again, I suspect the result is because of the “scalpel” technique, but I have no way of demonstrating that.”
That looks indeed as if the BEST scalpel technique kills the low frequency periodicities. (Mosher’s defense “You’re wrong” doesn’t really cut it. I doubt he understands what he did.)