Guest Post by Willis Eschenbach
In my last post on the purported ~60-year cycle in sea levels claimed in the recent paper “Is there a 60-year oscillation in global mean sea level?”, I used a tool called “periodicity analysis” (discussed here) to investigate cycles in the sea level. However, some people said I wasn’t using the right tool for the job, and since I didn’t find the elusive 60-year cycle, I figured they might be right about periodicity analysis. In the process, however, I found a more sensitive tool, which is simply to fit a sine wave to the tidal data at each cycle length and measure the peak-to-peak amplitude of the best-fit sine wave. I call this procedure “sinusoidal periodicity” for a simple reason: I’m a self-taught mathematician, and while I’m sure this analysis method is already known, since I worked it out on my own I don’t know what it’s actually called.
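The core of the method fits in a few lines of code. My actual function (“sinepower”, linked at the end of this post) is written in R, but here is a minimal Python sketch of the same idea; the function names are mine. One useful wrinkle: fitting a sine and a cosine by linear least squares is mathematically equivalent to fitting amplitude and phase directly.

```python
import numpy as np

def sine_amplitude(y, period):
    """Peak-to-peak amplitude (in the data's own units, e.g. mm) of the
    best-fit sine wave at one trial period (in samples, e.g. months)."""
    t = np.arange(len(y))
    w = 2 * np.pi / period
    # Fitting a*sin + b*cos by linear least squares is equivalent to
    # fitting amplitude and phase, but needs no iterative optimizer.
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    a, b = np.linalg.lstsq(X, y - y.mean(), rcond=None)[0]
    return 2 * np.hypot(a, b)      # peak-to-peak = twice the amplitude

def sinusoidal_periodicity(y, periods):
    """Linearly detrend the series, then scan a range of trial periods."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    y = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.array([sine_amplitude(y, p) for p in periods])
```

For monthly data, scanning `periods = range(2, 721)` covers everything from two months out to sixty years in one-month steps.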
I like to start with a look at the rawest view of the data. In this case, here’s the long-term Stockholm tide gauge record itself, before any further analysis. This is the longest complete monthly tidal gauge record I know of, at 200 years.
Figure 1. Stockholm monthly average sea level. This is a relative sea level, measured against an arbitrary zero point.
As you can see, Stockholm is (geologically speaking) rapidly leaping upwards after the removal of the huge burden of ice and glaciers about 12,000 years ago. As a result, the relative sea level (ocean relative to the land) has been falling steadily for the last 200 years, at a surprisingly stable rate of about 4 mm per year.
In any case, here’s what the sinusoidal periodicity analysis looks like for the Stockholm tide data, both with and without the annual cycle:
Figure 1a. “Sinusoidal Periodicity” of the Stockholm tide gauge data, showing the peak-to-peak amplitude (in millimetres) of the best-fit sine wave at each period length. Upper panel shows the data including the annual variations. In all cases, the underlying dataset is linearly detrended before sinusoidal periodicity analysis. Note the different scales of the two panels.
Now, I could get fond of this kind of sinusoidal analysis. To begin with, it shares one advantage of periodicity analysis, which is that the result is linear in period, rather than linear with frequency as is the case with Fourier transforms and spectral analysis. This means that from monthly data you get results in monthly increments of cycle length. Next, it outperforms periodicity analysis in respect of the removal of the short-period signals. As you can see above, unlike with periodicity analysis, removing the annual signal does not affect the results for the longer-term cycles. The longer cycles are totally unchanged by the removal of the annual cycle. Finally, I very much like the fact that the results are in the same units as the input data, which in this case is millimetres. I can intuitively get a sense of a 150-mm (6 inch) annual swing in the Stockholm sea level as shown above, or a 40 mm (1.5 inch) swing at both ~5.5 and ~31 years.
Let me start with a few comments on the Stockholm results above. The first is that there is no significant power at the ~11-year period of the sunspot cycle or at the 22-year Hale solar cycle, despite what many people have claimed. There is a small peak at 21 years, but it is weak. After removal of the annual cycle, the next strongest cycles peak at ~5.5, 31.75, and 15 years.
Next, there are clearly cycle lengths which have very little power, such as 19.5, 26.5, and 35 years.
Finally, I don’t see much sign of the proverbial ~60-year cycle. In this record, at least, there isn’t much power in any of the longer cycles.
My tentative conclusion from the sinusoidal analysis of the Stockholm tide record is that we are looking at the resonant frequencies (and non-resonant frequencies) of the horizontal movement of the ocean within its surrounding basin.
So let me go through all of the datasets that are 120 years long or longer, using this tool, to see what we find.
So let’s move on to the other 22 long-term tidal datasets that I linked to in my last post. I chose 120 years because I’m forced to use shorter datasets than I’d like. Normally, I wouldn’t consider results significant unless the dataset is at least three times the length of the cycle in question. However, there are very few datasets that long, so the next step down is to require at least 120 years of data when looking for a 60-year cycle. Less than that and you’re just fooling yourself. So without further ado, here are the strengths of the sinusoidal cycles for the first eight of the 22 datasets …
Figure 2. Sinusoidal amplitude, first eight of the 22 long-term (>120 year) datasets in the PSMSL database. Note that the units are different in different panels.
The first thing that strikes me about these results? The incredible variety. A few examples. Brest has lots of power in the longer-term cycles, with a clear peak at ~65 years. Wismar 2, on the other hand, has very little power in the long-term cycles, but a clear cycle at ~ 28 years. San Francisco has a 55-year peak, but the strongest peak there is at 13 years. In New York, on the other hand, the ~51 year peak is the strongest cycle after the annual cycle. Cuxhaven 2 has a low spot between 55 and 65 years, as does Warnemunde 2, which goes to zero at about 56 years … go figure.
Confused yet? Here’s another eight …
Figure 3. Sinusoidal periodicity, second eight of the 22 long-term (>120 year) datasets in the PSMSL database. Note that the units are different in different panels.
Again the unifying theme is the lack of a unifying theme. Vlissingen and Ijmuiden bottom out around 50 years. Helsinki has almost no power in the longer cycles, but its shorter cycles are up to 60 mm in amplitude. Vlissingen is the reverse: the shorter cycles are down around 15-20 mm, and the longer cycles are up to 60 mm in amplitude. And so on … here’s the final group of six:
Figure 4. Sinusoidal periodicity, final six of the 22 long-term (>120 year) datasets in the PSMSL database. Note that the units are different in different panels.
Still loads of differences. As I noted in my previous post, the only one of the datasets that showed a clear peak at ~55 years was Poti, and I find the same here. Marseilles, on the other hand, has power in the longer term, but without a clear peak. And the other four all bottom out somewhere between 50 and 70 years, so no joy there.
In short, although I do think this method of analysis gives a better view, I still cannot find the elusive 60-year cycle. Here’s an overview of all 22 of the datasets, you tell me what you see:
Figure 5. Sinusoidal periodicity, all twenty-two of the long-term tide gauge datasets.
Now, I got started on this quest because of the statement in the Abstract of the underlying study, viz:
We find that there is a significant oscillation with a period around 60-years in the majority of the tide gauges examined during the 20th Century …
(As an aside, waffle-words like “a period around 60-years” drive me spare. The period that they actually tested for was 55 years … so why not state that in the abstract? Whenever one of these good cycle-folk says “a period around”, I know they are investigating the upper end of the stress-strain curve of veracity … but I digress.)
So they claim a 55-year cycle in “the majority of the tide gauges” … sorry, I’m still not seeing it. The Poti record in violet in Figure 5 is about the only tide gauge to show a significant 55-year peak.
On average (black line), for these tide gauge records, the strongest cycle is 6 years 4 months. There is another peak at 18 years 1 month. All of them have low spots at 12-14 years and at 24 years … and other than that, they have very little in common. In particular, there seem to be no common cycles longer than about thirty years or so.
So once again, I have to throw this out as an opportunity for those of you who think the authors were right and who believe that there IS a 55-year cycle “in the majority of the tide gauges”. Here’s your chance to prove me wrong, that’s the game of science. Note again that I’m not saying there is no 55-year signal in the tide data. I’m saying I’ve looked for it in a couple of different ways now, and gotten the same negative result.
I threw out this same opportunity in my last post on the subject … to date, nobody has shown such a cycle exists in the tide data. Oh, there are the usual number of people who also can’t find the signal, but who insist on telling me how smart they are and how stupid I am for not finding it. Despite that, so far, nobody has demonstrated the 55-year signal exists in a majority of the tide gauges.
So please, folks. Yes, I’m a self-taught scientist. And yes, I’ve never taken a class in signal analysis. I’ve only taken two college science classes in my life, Introductory Physics 101 and Introductory Chemistry 101. I freely admit I have little formal education.
But if you can’t find the 55-year signal either, then please don’t bother telling me how smart you are or listing all the mistakes you think I’m making. If you’re so smart, find the signal first. Then you can explain to me where I went wrong.
What’s next for me? Calculating the 95% CIs for the sinusoidal periodicity, including autocorrelation. And finding a way to calculate it faster: as usual, optimization is slow, double optimization (phase and amplitude) is slower, and each analysis requires about a thousand such optimizations. It takes about 20 seconds on my machine, which is doable, but I’d like some faster method.
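One note on a likely speedup, which I haven’t yet folded into the R code: the phase-and-amplitude fit at each period is actually linear once the sine is rewritten as a sine-plus-cosine pair, so each trial period needs a single closed-form least-squares solve and no iterative optimization at all. The identity, checked numerically in a quick Python sketch:

```python
import numpy as np

# A * sin(w*t + phi)  ==  a * sin(w*t) + b * cos(w*t)
# with  a = A*cos(phi),  b = A*sin(phi),  A = hypot(a, b).
# So the two-parameter (amplitude, phase) fit is linear in (a, b),
# and each trial period needs one least-squares solve, no optimizer.
t = np.linspace(0.0, 10.0, 500)
w, A, phi = 2 * np.pi / 3.7, 2.5, 0.8     # arbitrary test values
a, b = A * np.cos(phi), A * np.sin(phi)
assert np.allclose(A * np.sin(w * t + phi),
                   a * np.sin(w * t) + b * np.cos(w * t))
assert np.isclose(np.hypot(a, b), A)
```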
Best regards to each of you,
w.
As Always: Please quote the exact words that you disagree with, it avoids endless misunderstandings.
Also: Claims without substantiation get little traction here. Please provide links, citations, locations, observations and the like, it’s science after all. I’m tired of people popping up all breathless to tell us about something they read somewhere about what happened some unknown amount of time ago in some unspecified location … links and facts are your friend.
Data: All PSMSL stations in one large Excel file, All Tide Data.xlsx
Just the 22 longest stations as shown in Figs. 2-4 as a CSV text file, Tide Data 22 Longest.csv .
Stockholm data as an excel worksheet, eckman_2003_stockholm.xls
Code: The function I wrote to do the analysis is called “sinepower”, available here. If that link doesn’t work for you, try here. The function doesn’t call any external functions or packages … but it’s slow. There’s a worked example at the end of the file, after the function definition, that imports the 22-station CSV file. Suggestions welcome.
RGB – that’s the heartBLEED exploit…
If there is no 60 year cycle in the actual individual site data (local reality) on sea level change, is there a 60 year cycle created by methods of averaging (regional or global modelling) that are in use in analysing sea level change? You can see why people need to average to go from a mixture of local realities to something they can present academically (not that this is necessarily a good idea), but do they actually manage to ensure they are not creating a cycle as an artefact of the averaging process?
Makes me wish I actually had taken statistics beyond secondary level – might be able to answer the question.
Frank de Jong says: May 2, 2014 at 6:36 am
Being from the Netherlands, I could not help but notice that you use 7 cities from my rather modestly sized country.
___________________________
I think that is a function of the Netherlands being a great sea-trading nation, and therefore having good tidal records.
But it does leave me scratching my head as to why there is so much tidal variation in such a small area. I was thinking of gentle rippling effects in the crust, with standing waves and nodes making regions rise and fall in different fashions. But that could not result in the observed differences between tidal stations only 50 miles apart.
Any other ideas for the differences between these tidal records?
Instrument and reading errors?
Periodic dredging of the silt in harbours?
Ralph
RGB – Heartland exploit – freudian slip perhaps?
Willis, Thanks for an informative article.
You may recall that at ICCC7, Nir Shaviv presented at one of the upstairs breakout sessions, and that he made reference to his article, published in JGR: “Using the oceans as a calorimeter to quantify the solar radiative forcing”, Journal of Geophysical Research, Vol. 113, A11101, doi:10.1029/2007JA012989, 2008. The article is available free to download.
Shaviv uses the derivative of sea level height, the rate of sea level rise, and his results are available in Fig 4 from that article. Shaviv constrains his analysis to the Atlantic in an attempt to keep the effects of ENSO, presumed to dominate the Pacific Ocean domain, from contaminating solar effects and the 11-year solar cycle from the results.
Shaviv gets his sea level data from the Permanent Service for Mean Sea Level, which many of us have accessed at http://www.psmsl.org/ (a different link from the one in Shaviv’s article), and he averages and differentiates those data to get the rate of sea level rise.
Have you considered using the derivative of sea level? I had not heard of it until I spoke with Shaviv at ICCC7.
Using Shaviv’s methodology, I was wondering whether you could repeat his findings. Is there an 11-year cycle in the sea level data if you choose that method of analysis? Might there be a different signal between the Atlantic, Pacific, and other basins?
Bob Endlich
Willis Eschenbach says:
May 1, 2014 at 11:48 pm
You’re right I should have stated the point behind my sea level data. They clearly show decadal fluctuations, but don’t go back far enough to show 60 year cycles. However if the ups & downs evident in those data correspond to warmer & cooler decades, with some lag, then why not on longer time frames? Especially since on yet longer scales, sea level does definitely coincide with temperature. It was lower during the LIA than during the Holocene Optimum & other warm phases of the present interglacial, for instance, & much lower still during big ice ages, ie the glaciations during which rivers & oceans of ice weigh down the continents.
As for “show”, I meant a presentation to the mostly out of work fishers & loggers, plus plaster gull sellers & retirees of the coast, who are most familiar with the ups & downs of the Pacific.
Willis, from some of the comments on your previous analysis of sea level rise it seemed that the cycle shows up in the velocity of tide gauge changes rather than the tide levels themselves. I know that when I investigated an error in a tracking mount, a periodic error was obvious when first differences were plotted vs. time, where it was very hard to see in the original pointing data. Logically I don’t see how a cycle could show up in velocity but not position since the derivative of a sinusoid is a sinusoid, but it might be a way to amplify such an effect.
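(A quick numerical check of that amplification point, sketched here in Python: differentiating A·sin(2πt/T) multiplies the amplitude by 2π/T, so first-differencing boosts a 5-year cycle relative to a 60-year cycle of equal amplitude by roughly the ratio of the periods, about twelvefold.)

```python
import numpy as np

t = np.arange(1200)                          # 100 years of monthly data
short = 10 * np.sin(2 * np.pi * t / 60)      # 5-year cycle, 10 mm amplitude
long_ = 10 * np.sin(2 * np.pi * t / 720)     # 60-year cycle, same amplitude
# First differences approximate the derivative; each cycle's amplitude is
# scaled by ~2*pi/T, so two equal cycles end up in a ~12:1 ratio.
ratio = np.diff(short).max() / np.diff(long_).max()
# ratio is about 12 (slightly off the continuous 2*pi/T scaling
# because of the discrete monthly step)
```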
Greg says:
May 1, 2014 at 10:36 pm
“Great Willis, looks like progress. However, I think there’s one fundamental point that you’re missing about this which is leading you to misinterpret the many individual plots and the overlay where you see that all the records have very different spectra in the longer periods.”
GE Smith: “2) on a similar vein, since my (now somewhat weak) brain thinks that a linear trend oughta morph into some recognizable ‘spectral’ feature” …
Greg, I told Willis at the outset, that I was NOT criticizing his study; just a trifle baffled trying to (really) understand it.
My own degree had a Mathematical Physics major in it, but that was well over half a century ago, and sadly was somewhat unused in my industry career, so much forgotten.
But I noticed that Willis had said, that he made TWO different alterations to the RAW data, before doing his (spectral) analysis process. One was leaving out the annual cycle, and two was the linear detrending; which incidently, I had NO trouble finding in his paper.
So I was just curious as to what the result would be of just not removing those elements, and performing his “transform” on the raw data.
So that silly sentence of mine (2) was just saying, my mind could not visualize what sort of artifact would show up in his plot, if the linear trend was still in his data.
When I was in school, I was a real whizz in math (half of my degree), and I could easily do many of these transform processes at my fingertips. But my career focused on the physics hardware side, and I lost much of my math; that’s one of the reasons I’m a quantum mechanics dunderhead.
I didn’t want to burden Willis with idle work, but I was curious as to how his detrending and annual cycle removal had altered his result.
Not that I think a big 60 year signal is suddenly going to pop out of the woodwork.
But I had no problem reading that Willis said he did the linear detrend.
Thanks, Willis. Looks like uncorrelated noise….
ferdberple says:
May 2, 2014 at 5:12 am
Piers hasn’t demonstrated any ability to forecast anything, because he doesn’t do forecasting. He does handwaving in the old-fashioned way—just like Nostradamus. Here’s an example. He predicted a 50/50 chance of cyclones in a certain area, and then took credit for a successful prediction when there were no cyclones! How can you not admire that kind of bald-faced audacity?
And after claiming over and over that the London bookies wouldn’t bet against him regarding rainfall, I offered to bet against him … man, you should have seen him run from the bet. I pity anyone standing between him and the door on that day, it would have been as dangerous as standing between Richard Mueller and a microphone …
So while it’s quite possible that “no-one has a better idea than anyone else”, Piers has the best idea of anyone—simply make unbelievably vague predictions, and then claim success for any outcome. He predicted forest fires in Colorado one month, for example, and then claimed success when there were forest fires in Arizona … even Nostradamus couldn’t beat that one.
w.
ThinkingScientist says:
May 2, 2014 at 6:01 am
I don’t have a clue what that means, “equivalent” to a Fourier transform performed in the time domain. It’s not a Fourier transform of any sort, not least because it doesn’t decompose the signal into orthogonal constituents that can then be added together to reconstitute the signal.
So I fear that I’m not understanding what an “FFT … in the time domain” might be.
Mostly, I’m surprised that this procedure doesn’t already have a name. Surely I’m not the first guy to do this kind of analysis?
w.
RGB – that’s the heartBLEED exploit…
Oops. I knew that. Inadequate coffee, dashing off replies in between teaching students how to do buoyancy, oscillation, wave, statics, sundry mechanics problems pre-final exam this afternoon. Brain tired, tired…
rgb
Frank de Jong says:
May 2, 2014 at 6:36 am
Thanks for the local knowledge, Frank. In fact, I’m NOT looking for anything global at all. I’m looking to see if, as the authors claimed, there is a common ~60 year signal in the majority of the long-term tide records … so far no joy.
I agree, and it reflects my claim made way up in the head post that what we are looking at are the characteristics of the surrounding ocean basins, and not common characteristics from some purported long-term cycles.
w.
Willis, Love your work, but don’t know how you find the time or the energy! Have you not considered taking a BSc and then submitting your work for a PhD? (Yes, really!)
rgbatduke says:
May 2, 2014 at 7:37 am
Bizarre. I just tried it and it worked fine. In any case, I’ve converted it to a Word document so it can be uploaded from here, give that a try.
w.
Willis,
Is there an easy way to modify your “sinusoidal periodicity” analysis so that it allows for ‘phase shifting’? Give it, say, a +/- five year phase shift. One wouldn’t expect ‘cycles’ to be uniform throughout the oceans.
I might want Bob Tisdale to weigh in here concerning ‘water sloshing around the globe’ and phase shift.
As always …A very interesting and thought provoking post . Keep it up.
Willis, I have only read the first few paragraphs, so I apologize if others have already commented in a similar vein. Some comments before I go back to reading about how you applied your method.
I used Periodicity Transforms to try and make sense of vibration data in a missile and as with you, I like periodicity analysis. But there were/are some issues – the random waveform being one.
I think you have taken the next step. From Fourier to Periodicity to Sinusoidal Periodicity. It is the inverse of the Fourier Transform, but not in the usual sense of Inverse which is to reverse the Fourier Transform. Your transform is literally the inverse as in Period = 1/Frequency.
What to call it?
How about the “Willis Transform”?
This may or may not be relevant to the subject, but it appears that the ‘tidal potential’ of the 20 extreme tides is ‘correlated’ to the AMO
http://www.vukcevic.talktalk.net/AMO-T.htm
all necessary details are available here ( see table 1)
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI4193.1
but the calculation I used (something I did as a quick estimate a couple of years ago) may not be considered an entirely appropriate method for drawing a satisfactory conclusion.
Mostly, I’m surprised that this procedure doesn’t already have a name.
Yes, I’ve done it myself many (40+) years ago, but only for unequally spaced sample points. For equally spaced sample points, as several of us have pointed out, you can get exactly the same spectrum with an fft, just sampled at different points. It is much faster to add zeros to the detrended data, say increasing the data length to a power of two at least 8 times the data length, and do an fft. There is no information lost in the fft, and if you need more sample points, just add more zeros. You don’t get better resolution with your method, the resolution is set by the original data length, you just have more closely spaced sample points at low frequencies.
If you did a least-squares fit of a number of sines and cosines to the data, then that is mathematically equivalent to a standard discrete Fourier transform. I understand that you used constant period intervals instead of frequency intervals, but this is not a gain: the coefficients in the standard method are uncorrelated, but yours are not. The extra resolution at long periods (low frequencies) is apparent, not real.
Here is my emulation of Willis’s first plot, using just an FFT plotted with a period x-axis. The R code is below the figure. I haven’t adjusted the y-axis units to match, but the shape is right. I padded to 8192 months – more would give a smoother result.
Thanks, Nick. I did steal your code (the link worked).
Chuck says: May 2, 2014 at 11:29 am
“For equally spaced sample points, as several of us have pointed out, you can get exactly the same spectrum with an fft, just sampled at different points. It is much faster to add zeros to the detrended data, say increasing the data length to a power of two at least 8 times the data length, and do an fft.”
Indeed so. That’s what I did here. 2^16 points in all (about 32 x data length), and it takes a fraction of a second.
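For anyone who wants to try the zero-padding recipe, here is a Python sketch of it (Nick’s own code is in R; the function name and padding length here are illustrative, and the amplitude scaling is only approximate because of spectral leakage):

```python
import numpy as np

def padded_fft_periodogram(y, pad_to=8192):
    """Zero-padded FFT of a linearly detrended series, replotted vs. period.

    Padding interpolates the same spectrum onto a finer frequency grid;
    it adds no information, just more closely spaced sample points.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n)
    y = y - np.polyval(np.polyfit(t, y, 1), t)   # linear detrend first
    spec = np.fft.rfft(y, n=pad_to)              # FFT, zero-padded to pad_to
    freq = np.fft.rfftfreq(pad_to)[1:]           # cycles per sample, DC dropped
    ptp = 4 * np.abs(spec[1:]) / n               # approx. peak-to-peak amplitude
    return 1.0 / freq, ptp                       # (period in samples, amplitude)
```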
Thanks Willis, the word format link worked (clumsy format, but I just saved it back as txt, removed a few spurious characters, and it looks good). I won’t have time to try either one until I’m dead of old age — Final exam proceeding as I type, another one tomorrow morning, grading all weekend — and then I get to get ready to go to Beaufort to teach physics there by next Friday. But I am curious as to how your code differs from a suitable FFT — Nick looks like he more or less confirms your curve shapes with an ordinary FFT, but he has to pad the data etc (as one expects, actually) to get it to work.
Greg (perhaps Nick/Chris Clark)
Greg says:
May 2, 2014 at 1:52 am
One thing that is good about this pseudo-fourier approach is that it can work on data with breaks in it.
What you say really interests me.
I haven’t delved into Willis’ R code, so I don’t know exactly what he did, but if you agree with Nick – and if I understand his description correctly – then it seems that by “pseudo-Fourier approach” you mean applying a Fourier series (in terms of sine or cosine, or both?) and, in this roundabout way, using the coefficients to express the spectrum in terms of increasing period rather than, as per usual, increasing frequency. Right?
So do you mean that this type of approach (applying a suite of Fourier series rather than using the “bucket” approach of a DFT) can be used on data with gaps? I can’t see how this would work; do you have any literature you can point me to? I know there are modified DFT methods out there that can work on sparsely sampled data on irregular supports, but they appear to be a nightmare to implement and seem to be far from efficient. It would be much appreciated if you could spare a moment to elaborate or point me in the right direction.
Greg
Ah, I think it just clicked; I see from a reread of Chris’ post. It can be applied as just a general least-squares method … yes?