Guest Post by Willis Eschenbach
In my last post on the purported existence of the elusive ~60-year cycle in sea levels, as claimed in the recent paper “Is there a 60-year oscillation in global mean sea level?”, I used a tool called “periodicity analysis” (discussed here) to investigate cycles in the sea level. However, some people said I wasn’t using the right tool for the job, and since I didn’t find the elusive 60-year cycle, I figured they might be right about periodicity analysis. In the process, however, I found a more sensitive tool, which is simply to fit a sine wave to the tidal data at each cycle length and measure the peak-to-peak amplitude of the best-fit sine wave. I call this procedure “sinusoidal periodicity” for a simple reason: I’m a self-taught mathematician, and while I’m sure this analysis method is already known, I came up with it independently, so I don’t know what it’s actually called.
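For those who want to follow along, here’s a minimal sketch of the idea in Python rather than in the R code I actually used; the function names here are just illustrative. Fitting a*sin + b*cos by ordinary least squares is mathematically the same thing as optimizing the amplitude and phase of a single sine:

```python
import numpy as np

def sine_peak_to_peak(y, period):
    """Best-fit sine of a given period (in samples): fit a*sin + b*cos
    by least squares, return the peak-to-peak range of the fitted wave."""
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
    return 2.0 * np.hypot(coef[0], coef[1])  # peak-to-peak = twice the amplitude

def sinusoidal_periodicity(y, periods):
    """Linearly detrend the series, then scan every trial period."""
    t = np.arange(len(y))
    resid = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.array([sine_peak_to_peak(resid, p) for p in periods])
```

For monthly data like Stockholm’s, the trial periods simply run from two months up to whatever maximum you trust, in one-month steps.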
I like to start with a look at the rawest view of the data. In this case, here’s the long-term Stockholm tide gauge record itself, before any further analysis. This is the longest complete monthly tidal gauge record I know of, at 200 years.
Figure 1. Stockholm monthly average sea level. This is a relative sea level, measured against an arbitrary zero point.
As you can see, Stockholm is (geologically speaking) rapidly leaping upwards after the removal of the huge burden of ice and glaciers about 12,000 years ago. As a result, the relative sea level (ocean relative to the land) has been falling steadily for the last 200 years, at a surprisingly stable rate of about 4 mm per year.
In any case, here’s what the sinusoidal periodicity analysis looks like for the Stockholm tide data, both with and without the annual cycle:
Figure 1a. “Sinusoidal Periodicity” of the Stockholm tide gauge data, showing the peak-to-peak amplitude (in millimetres) of the best-fit sine wave at each period length. Upper panel shows the data including the annual variations. In all cases, the underlying dataset is linearly detrended before sinusoidal periodicity analysis. Note the different scales of the two panels.
Now, I could get fond of this kind of sinusoidal analysis. To begin with, it shares one advantage of periodicity analysis: the result is linear in period, rather than linear in frequency as with Fourier transforms and spectral analysis. This means that from monthly data you get results in monthly increments of cycle length. Next, it outperforms periodicity analysis in the removal of short-period signals. As you can see above, unlike with periodicity analysis, removing the annual signal leaves the longer-term cycles totally unchanged. Finally, I very much like the fact that the results are in the same units as the input data, in this case millimetres. I can intuitively get a sense of a 150-mm (6 inch) annual swing in the Stockholm sea level as shown above, or a 40 mm (1.5 inch) swing at both ~5.5 and ~31 years.
Let me start with a few comments on the Stockholm results above. The first is that there is no significant power at the ~11-year period of the sunspot cycle, or at the 22-year Hale solar cycle, despite what many people have claimed. There is a small peak at 21 years, but it is weak. After removal of the annual cycle, the next strongest cycles peak at ~5.5, 31.75, and 15 years.
Next, there are clearly cycle lengths which have very little power, such as 19.5, 26.5, and 35 years.
Finally, in this record I don’t see much sign of the proverbial ~60-year cycle. In this record, at least, there isn’t much power in any of the longer cycles.
My tentative conclusion from the sinusoidal analysis of the Stockholm tide record is that we are looking at the resonant frequencies (and non-resonant frequencies) of the horizontal movement of the ocean within its surrounding basin.
So let me go through all of the datasets that are 120 years long or longer, using this tool, to see what we find.
Let’s move on to the other 22 long-term tidal datasets that I linked to in my last post. Why 120 years? Because I’m forced to use shorter datasets than I’d like. Normally, I wouldn’t consider results significant unless the record is at least three times the length of the cycle in question. However, there are very few datasets that long, so the next step down is to require at least 120 years of data to look for a 60-year cycle. Less than that and you’re just fooling yourself. So without further ado, here are the strengths of the sinusoidal cycles for the first eight of the 22 datasets …
Figure 2. Sinusoidal amplitude, first eight of the 22 long-term (>120 year) datasets in the PSMSL database. Note that the units are different in different panels.
The first thing that strikes me about these results? The incredible variety. A few examples. Brest has lots of power in the longer-term cycles, with a clear peak at ~65 years. Wismar 2, on the other hand, has very little power in the long-term cycles, but a clear cycle at ~ 28 years. San Francisco has a 55-year peak, but the strongest peak there is at 13 years. In New York, on the other hand, the ~51 year peak is the strongest cycle after the annual cycle. Cuxhaven 2 has a low spot between 55 and 65 years, as does Warnemunde 2, which goes to zero at about 56 years … go figure.
Confused yet? Here’s another eight …
Figure 3. Sinusoidal periodicity, second eight of the 22 long-term (>120 year) datasets in the PSMSL database. Note that the units are different in different panels.
Again, the unifying theme is the lack of a unifying theme. Vlissingen and Ijmuiden bottom out around 50 years. Helsinki has almost no power in the longer cycles, but its shorter cycles are up to 60 mm in amplitude. Vlissingen is the reverse: the shorter cycles are down around 15-20 mm, and the longer cycles are up to 60 mm in amplitude. And so on … here’s the final group of six:
Figure 4. Sinusoidal periodicity, final six of the 22 long-term (>120 year) datasets in the PSMSL database. Note that the units are different in different panels.
Still loads of differences. As I noted in my previous post, the only one of the datasets that showed a clear peak at ~55 years was Poti, and I find the same here. Marseilles, on the other hand, has power in the longer term, but without a clear peak. And the other four all bottom out somewhere between 50 and 70 years; no joy there.
In short, although I do think this method of analysis gives a better view, I still cannot find the elusive 60-year cycle. Here’s an overview of all 22 of the datasets, you tell me what you see:
Figure 5. Sinusoidal periodicity, all twenty-two of the long-term tide gauge datasets.
Now, I got started on this quest because of this statement in the Abstract of the underlying study, viz:
We find that there is a significant oscillation with a period around 60-years in the majority of the tide gauges examined during the 20th Century …
(As an aside, waffle-words like “a period around 60-years” drive me spare. The period that they actually tested for was 55 years … so why not state that in the abstract? Whenever one of these good cycle-folk says “a period around”, I know they are investigating the upper end of the stress-strain curve of veracity … but I digress.)
So they claim a 55-year cycle in “the majority of the tide gauges” … sorry, I’m still not seeing it. The Poti record in violet in Figure 5 is about the only tide gauge to show a significant 55-year peak.
On average (black line), for these tide gauge records, the strongest cycle is 6 years 4 months. There is another peak at 18 years 1 month. All of them have low spots at 12-14 years and at 24 years … and other than that, they have very little in common. In particular, there seem to be no common cycles longer than about thirty years or so.
So once again, I have to throw this out as an opportunity for those of you who think the authors were right and who believe that there IS a 55-year cycle “in the majority of the tide gauges”. Here’s your chance to prove me wrong; that’s the game of science. Note again that I’m not saying there is no 55-year signal in the tide data. I’m saying I’ve looked for it in a couple of different ways now, and gotten the same negative result.
I threw out this same opportunity in my last post on the subject … to date, nobody has shown such a cycle exists in the tide data. Oh, there are the usual number of people who also can’t find the signal, but who insist on telling me how smart they are and how stupid I am for not finding it. Despite that, so far, nobody has demonstrated the 55-year signal exists in a majority of the tide gauges.
So please, folks. Yes, I’m a self-taught scientist. And yes, I’ve never taken a class in signal analysis. I’ve only taken two college science classes in my life, Introductory Physics 101 and Introductory Chemistry 101. I freely admit I have little formal education.
But if you can’t find the 55-year signal either, then please don’t bother telling me how smart you are or listing all the mistakes you think I’m making. If you’re so smart, find the signal first. Then you can explain to me where I went wrong.
What’s next for me? Calculating the 95% CIs for the sinusoidal periodicity, including autocorrelation. Also finding a way to calculate it faster: optimization is slow as usual, double optimization (phase and amplitude) is slower, and each analysis requires about a thousand such optimizations. A run takes about 20 seconds on my machine, which is doable, but I’d like some faster method.
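For the confidence intervals, one possible approach (a sketch only, with illustrative names; this is not a method I’ve settled on) is Monte Carlo against red noise: fit an AR(1) model to the detrended residuals, generate many synthetic series with the same lag-1 autocorrelation, and take the 95th percentile of the peak-to-peak amplitudes that noise alone produces at each period:

```python
import numpy as np

def ar1_null_p2p(resid, period, n_sim=200, seed=0):
    """95% Monte-Carlo bound on the peak-to-peak sine amplitude that
    AR(1) 'red' noise alone would produce at the given period."""
    rng = np.random.default_rng(seed)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    sigma = resid.std() * np.sqrt(max(1.0 - r1 ** 2, 1e-12))
    t = np.arange(len(resid))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    amps = np.empty(n_sim)
    for k in range(n_sim):
        e = rng.normal(0.0, sigma, len(resid))
        sim = np.empty(len(resid))
        sim[0] = e[0]
        for i in range(1, len(resid)):              # AR(1) recursion
            sim[i] = r1 * sim[i - 1] + e[i]
        coef, *_ = np.linalg.lstsq(X, sim - sim.mean(), rcond=None)
        amps[k] = 2.0 * np.hypot(coef[0], coef[1])
    return np.percentile(amps, 95)
```

A real cycle would then be one whose fitted peak-to-peak amplitude clears this red-noise bound at that period.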
Best regards to each of you,
w.
As Always: Please quote the exact words that you disagree with, it avoids endless misunderstandings.
Also: Claims without substantiation get little traction here. Please provide links, citations, locations, observations and the like, it’s science after all. I’m tired of people popping up all breathless to tell us about something they read somewhere about what happened some unknown amount of time ago in some unspecified location … links and facts are your friend.
Data: All PSMSL stations in one large Excel file, All Tide Data.xlsx
Just the 22 longest stations as shown in Figs. 2-4 as a CSV text file, Tide Data 22 Longest.csv .
Stockholm data as an excel worksheet, eckman_2003_stockholm.xls
Code: The function I wrote to do the analysis is called “sinepower”, available here. If that link doesn’t work for you, try here. The function doesn’t call any external functions or packages … but it’s slow. There’s a worked example at the end of the file, after the function definition, that imports the 22-station CSV file. Suggestions welcome.
Thanks. Fascinating. I’ve often wondered how the “60-year cycle” stands testing.
I get error (400) at dropbox… Thanks.
Edit on the error (400) above: copy the URL, paste it into the address bar, backspace over the space, replace it with another space, and hit enter. The code came up as text.
Willis, I have to admit that I don’t follow exactly what you are doing, but the results make for interesting graphs, and evidently new insights.
So that leads me to ask two questions, neither of which is a criticism.
1) you say you are removing an annual cycle “for clarity”. OK so for just any one example, what does the resultant graph look like if you don’t do that; so we can see the fog fall away ??
2) on a similar vein, since my (now somewhat weak) brain thinks that a linear trend oughta morph into some recognizable “spectral” feature ; what if you just do your mastication on the raw data; what does it look like then. No; just a single example unless you can just dismiss the question on some logical ground that escapes me, at the moment.
Maybe you have found a transform that produces a universal null at 60 years, for any and all input data !!
I wonder why the postdoc fellows aren’t doing what you are?
G
Thanks, Willis. Good try, there is possibly nothing there like you were looking for.
Theory. There is a 60 year cycle.
Tested.
Results. Negative.
Paging dr feynman.
Of course some will come along and suggest different data, different methods.
None will explain why a 60 year cycle should be found.
That is explain it with physics.
They might say. It has to because the sun.
One problem with tidal measurements is that it assumes the tide gauge at measuring point “X” is forever stable; i.e., that it sits on land not subject to any rise or fall. What justifies accepting that assumption as true? Since both land and sea are subject to change, no clean periodic reading is possible.
Ah, but there IS (or, should I be more negative, “DOES appear to be”) a recent 60-year cycle in the surface temperature record since 1820: today’s peak between 1998-2015, a peak between 1936-1945, one at 1880, etc., each superimposed on a longer 900-year cycle: down from the Roman Optimum to the Dark Ages, up from the Dark Ages into the Medieval Warm Period, down again into the Little Ice Age, and back up towards today’s Modern Warming Period.
Now, I cannot tell anybody what causes that short cycle, nor what other things “might” correlate with it, precede it, or lag after it, but the cycle itself certainly appears visible.
george e. smith says:
May 1, 2014 at 6:51 pm
Suppose we take a sine wave that is say exactly 40 years long. I adjust (“fit”) the phase and the amplitude of the signal until I get the very best fit between the sine wave and the data. I measure the amplitude (peak to peak) of that signal.
Then I do the same thing at every other length from two months to 70 years. The result is what is shown in the graphs above. They show the amplitude (total tidal range) at the various periods.
See Figure 1a, which shows the same data before and after removing the annual cycle. The problem is that the annual cycle is typically 5-10 times larger than any other cycle, so if we scale the graph to include it, it’s hard to see the details of the smaller, longer-term cycles.
Not sure what you mean by “a linear trend oughta morph into some recognizable “spectral” feature”, sorry.
Since none of the results I show above have a null at 60 years, and instead each one has a different value at 60 years, I don’t understand what you mean.
I sometimes wonder why I do what I do …
Regards,
w.
Steven Mosher says:
May 1, 2014 at 7:34 pm
Tell that to these guys:
Deser, Clara; Alexander, Michael A.; Xie, Shang-Ping; Phillips, Adam S. (January 2010). “Sea Surface Temperature Variability: Patterns and Mechanisms”. Annual Review of Marine Science 2 (1): 115–143.
And to the Pacific salmon fisheries guy who discovered the PDO, no thanks to your lying, trough-feeding, anti-scientific buddies in the pay of Big Government & the windmill & solar panel industries:
Mantua, Nathan J. et al. (1997). “A Pacific interdecadal climate oscillation with impacts on salmon production”. Bulletin of the American Meteorological Society 78 (6): 1069–1079.
Tom Asiseeitnow says:
May 1, 2014 at 7:38 pm
Thanks, Tom. Not sure what you mean. I talked in the head post about how the land under Stockholm is rising. I’m definitely assuming it’s subject to rise or fall, and in this case it’s rising.
That would certainly be true if the uplifting rate of the underlying land were greatly variable. But we’re talking about a 10,000 year adjustment to the loss of the unimaginable weight of the ice age glaciers. So as you can see from the Stockholm data in Figure 1, the uplifting is roughly linear. This definitely allows us to do real periodic analyses.
w.
you tell me what you see: Pretty colours. Always love your posts Willis.
My attempt to access the code was unsuccessful: “Error (400) Something went wrong. Don’t worry, your files are still safe and the Dropboxers have been notified.”
No, uplift is not linear. Think earthquakes … it starts and stops, and sometimes you get an 8.5 magnitude and then sometimes a 2.1. Why should uplift be linear?
You can get the same result at equally spaced points in frequency by appending zeros to the detrended data and using the FFT. Appending zeros is not going to change the sinusoidal optimization if you are doing least-squares fits for the optimization. You probably want to plot the amplitude of the result, and the peak-to-peak will be that divided by the number of original data points. As the number of points increases, the band will be better resolved, so you can also estimate the sum across the band, but that requires dividing by the total number of points, including zeros, before the sum. As you can see, the total sum will remain approximately equal if you do that, since the resolved band width is proportional to the total number of points.
I’ve left out some details. You would probably want to apodize the data, and phase-correct over the band if you use the sum. For this sort of thing, where you might be looking for the best fit with a fixed number of frequencies, maximum entropy or one of its relations might also be a candidate method. Not that I think there is much to be gained by these methods, but they will run faster.
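The zero-padding trick Chuck describes can be sketched in a few lines (illustrative code; with NumPy’s unnormalized FFT convention the amplitude at a bin is 2|X|/N and the peak-to-peak is 4|X|/N, where N is the original number of points):

```python
import numpy as np

def fft_p2p(y, period, pad_factor=8):
    """Peak-to-peak sine amplitude near 1/period, read off a zero-padded
    FFT of the mean-removed series."""
    n = len(y)
    spec = np.fft.rfft(y - y.mean(), n * pad_factor)   # appends zeros
    freqs = np.fft.rfftfreq(n * pad_factor)
    k = np.argmin(np.abs(freqs - 1.0 / period))        # nearest padded bin
    return 4.0 * np.abs(spec[k]) / n                   # p2p = 2 * amplitude
```

The padding just samples the same underlying spectrum more finely, so periods that don’t land on an FFT bin of the unpadded transform can still be read off.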
“That would certainly be true if the uplifting rate of the underlying land were greatly variable. But we’re talking about a 10,000 year adjustment to the loss of the unimaginable weight of the ice age glaciers. So as you can see from the Stockholm data in Figure 1, the uplifting is roughly linear. This definitely allows us to do real periodic analyses.”
Sorry, I didn’t quote what I disagreed with. Not that I exactly disagree, but to my understanding the earth’s tectonic processes don’t work in a linear fashion. Perhaps your averaging them out over ten thousand years might lend merit to your statement, but then again the very nature of these processes would make them much more random and nonlinear.
RACookPE1978 says:
May 1, 2014 at 7:46 pm
RA, could I ask you to restrict the issues to the question at hand, that of sea level cycles? There are a million questions about cycles, and we can’t answer them all in one thread.
I want to settle this one question at a time, and for this thread it’s the elusive ~60-year cycle in sea levels.
Thanks,
w.
Got point #1 thanks.
Other was a bit of a chain yank to keep you awake.
thanks.
g
milodonharlani says:
May 1, 2014 at 7:52 pm
Tell what to those guys? They wrote about sea surface temperature. Mosh was talking about ~ 60-year cycles in sea levels. What are you proposing that Mosh should tell them?
w.
Joe Born says:
May 1, 2014 at 8:05 pm
Fixed. Lately wordpress has added spaces at random to the end of my dropbox urls … go figure. Try it again, should work now.
w.
david says:
May 1, 2014 at 8:16 pm
Ummm … because uplift != earthquakes?
w.
Chuck says:
May 1, 2014 at 8:16 pm
Thanks, Chuck. As I mentioned above, I much prefer an analysis that is linear in period to one that is linear in frequency.
w.
Willis Eschenbach says:
May 1, 2014 at 8:57 pm
Thermal expansion from higher SST can’t help but translate into higher MSL, all other factors being equal, can it?
And the reverse for cooler SSTs.
IMO, if we could actually measure MSL changes precisely & accurately, the decadal fluctuations would be obvious. But we can’t, because MSL trend changes are so small.
PS: Even with the questionable data available, PDO contribution to 20-year MSL changes has been detected:
http://onlinelibrary.wiley.com/doi/10.1002/grl.50950/abstract
But then I might be biased, as a North Pacific salmon fisher of five decades’ standing, who has observed these changes personally.
Not really related, but when in remote Indonesia the locals often claim that ocean swells are higher with the full moon and full moon tides.
I know that in some places localised tidal currents greatly affect ocean swells in Lombok and Lembongan, because the tide acts with, or against, the incoming swells between islands, making the swell sizes change dramatically. These currents even affect the shape of the islands. I have seen ocean waves go from 6 inches to 6 feet in 1 hour with the incoming tide, and vice versa as the tide goes out, at Lembongan. I went out and sat in 6-inch waves with my surfboard and just waited, because you could predict the swell change like a clock. Half an hour later the waves jumped. So the same could be true out in the deeper ocean.
Not sure if this is relevant to the discussion, but thought I’d mention it, because if tidal cycles affect ocean swells, then perhaps ocean phases/temperatures couldn’t affect tidal cycles.
thingadonta says:
May 1, 2014 at 9:21 pm
Relevant, IMHO.
CO2 isn’t a pimple on the glutes of solar effects upon earth’s climate, among other factors orders of magnitude more important.
The cause of the 60-year cycle is the heartbeat of Gaia … no cycle, no Gaia. /sarc.
I do have an observational question: the Baltic is 416,266 km², and rising at 4 mm a year, that works out to a displaced volume of about 1.66 km³ a year. Perhaps I am wrong, as there might be some subduction elsewhere?
“I don’t know the right name for the procedure”
Isn’t it just a standard Fourier Transform graphed against period instead of frequency?
I believe the Bay of Fundy has the most amplified tides in the world. Greater and lesser degrees of harmonic amplification occur everywhere. Bottom slope, channel width, moment, blah blah. This is why you don’t look for signal in sea level.
Sure, San Francisco may have a 55 year cycle because the PDO Nina phase increases Ekman transport away from the coast. Fish know this and come in spite of the lower tides. But this is an atmospheric and not a tidal phenomenon.
Milo.
The pacific salmon are not the climate.
The pdo is not the climate.
It pains me to point this out.
Climate is long term weather statistics. Not fish.
milodonharlani says:
May 1, 2014 at 9:18 pm
Certainly. But the link you gave was to sea surface temperatures, which tells us little about the steric component.
In any case, since we still haven’t found said long-term cycles in sea level, speculation on possible causes seems premature …
Say what? We have 200 years of accurate measurements of the tide at Stockholm. It clearly reveals e.g. the annual cycles, the 6-month cycles, and longer term cycles with a variety of periods.
You go on to say:
milodonharlani says:
May 1, 2014 at 9:21 pm
Paywalled, $32.
More to the point, they’re looking at 60 years of “reconstructed” sea level data for the effects of the approximately sixty year cycle of the PDO??? Really? Sorry, not buying that one, no matter how much lipstick they might put on it.
Observed which changes? The additional half-mm of sea level rise per year? Milodon, I got my first job commercial fishing the Pacific Northwest some 46 years ago, so I’m not far behind you. And while the overall effects of the PDO on e.g. the salmon fisheries are quite evident to people in the industry like ourselves, I’m doubting that you’ve observed the sea level changes due to the PDO …
w.
Nick Stokes says:
May 1, 2014 at 9:30 pm
No, because it is linear in period where the FFT is linear in frequency. As a result, the FFT has very poor resolution at long periods.
w.
gymnosperm says:
May 1, 2014 at 9:33 pm
Hey, you’re preaching to the choir. I’m not the one that claimed that there was a 60-year cycle in the data, I’m the guy claiming I can’t find such a cycle.
w.
the FFT has very poor resolution at long periods.
The resolution is the same, you just need to interpolate the frequency spectrum using the sinc function or, alternatively, extend the data with zeros before the transform. Both spectra have the same values, they are just sampled differently by the two methods.
Willis Eschenbach says:
May 1, 2014 at 9:52 pm
Maybe “we” haven’t, but I have. I don’t care whether you or anyone else has. I know where high tide was at Seaside, Oregon in the 1920s, when my grandfather’s company built the seawall there & where it is now, as well as in the 1950s, ’60s, ’70s, ’80s, ’90s & ’00s, directly observed by me from pop bottles buried in the sand during earlier decades & since recovered. Maybe there has been some downward movement of this part of the continent from the uplift in the Puget Sound region & points north, due to the melting of glaciers, but not much. We’re talking massive continents here, not the little island of Great Britain.
How about this? You, Mosher & I go to whatever community on the coast of Oregon, Washington, BC or Alaska will endure our road show & each make our case for or against a decadal fluctuation in sea level and/or associated biological proxies. I’ll abide by whatever decision local people most familiar with sea level changes & associated effects make after hearing our respective schpiels.
Deal or no deal?
But before you & Mosher decide to take your show on the road with me, please consider these data from Astoria, near Warrenton, where my grandad & grandmother are buried:
http://tidesandcurrents.noaa.gov/sltrends/50yr.shtml?stnid=9439040
Just so that you know.
Decadal fluctuations for Coos Bay:
http://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?stnid=9432780
Frankly, I don’t know why we should be hanging our hat on sea level as the only useful metric to detect the climate cycles.
It is pretty clear that there is a wide variety of independent evidence for such cycles – see here for instance:
http://www.climate.unibe.ch/~stocker/papers/stocker92cc.pdf
Cheers, 🙂
Great Willis, looks like progress.
However, I think there’s one fundamental point that you’re missing about this, which is leading you to misinterpret the many individual plots and the overlay, where you see that all the records have very different spectra at the longer periods.
GE Smith: “2) on a similar vein, since my (now somewhat weak) brain thinks that a linear trend oughta morph into some recognizable “spectral” feature ”
I suggest you create an artificial time series that is a long steady rise plus a bit of your favourite model noise (white, red, pink, whatever), then do yourself a spectrum.
Now do a few samples with lengths that match, say, your first eight tidal records, in terms of the number of data points. Plot them side by side or overlay them.
I think you will find the results similar to what you have produced above.
The point is that a lot of the long periods are there to reproduce the steady rise. As we know, if you do a Fourier synthesis of any data and try to use it to project the future, it will just produce a repetition of the data window. In the case of a steady rise, it will produce a saw-tooth. If your data sample is shorter (longer), the teeth on the saw will be shorter (longer). Thus the frequencies that make up the series will be different and mainly dependent on the length of the data available.
http://mathworld.wolfram.com/FourierSeriesSawtoothWave.html
Note that both the frequencies and the amplitudes are a function of 1/L , where L is the length of the saw-teeth, ie the length of the tidal series in your post.
This is what the condition of stationarity (in particular stationary mean) is all about for FT methods.
If you do the test I suggested, I think it will demonstrate to you that this is what is happening.
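The test Greg suggests takes only a few lines on synthetic data (illustrative code, not from anyone’s actual analysis): a pure linear rise with no cycles in it at all still produces a large apparent sine amplitude at periods comparable to the record length, which is the saw-tooth effect he describes:

```python
import numpy as np

def sine_p2p(y, period):
    # least-squares sine fit at one trial period; peak-to-peak of the fit
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
    return 2.0 * np.hypot(coef[0], coef[1])

trend = 0.5 * np.arange(1200)        # a pure rise: "100 years" of monthly data
long_p2p = sine_p2p(trend, 1200)     # trial period = record length: large
short_p2p = sine_p2p(trend, 60)      # trial period = 5 years: much smaller
```

Note that this is exactly the leakage that linear detrending before the fit is meant to remove.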
Shawnhet says:
May 1, 2014 at 10:35 pm
I’m OK with centennial scale cycles ruled in large part by oceanic circulation, along with decennial scale by the sun & millennial to hundred millennial scale by orbital mechanics.
Your “sinusoidal periodicity analysis” seems to me to be a simplified form of wavelet analysis:
http://paos.colorado.edu/research/wavelets/
Using your “Tide Data 22 Longest.csv” I had a look at the longest fairly continuous data which was “Wismar” using CATS software available at Cycles Research Institute. This allows accurate cycles period determinations as well as Bartel’s test of significance. The longer cycles found (and p values in brackets) are listed for p<.05:
27.3 y (.020), 10.94 y (.027), 6.33 y (.049), 3.606 y (.027), 3.253 y (.004), 2.925 y (.031), 2.474 y (.0256), also 1.000 y (<10^-8), 0.500 y (<10^-8) but nothing at 1/3 or 1/4 year.
Note that 6.33 y is rather near to the Chandler wobble modulation period which is no surprise.
Of course 11 y is the sunspot cycle period and 27 years is a common cycle appearing in many natural and human series.
It would be possible to repeat this for all the 21 other data series.
What to do?
Well, Stockholm looks like it has a very linear long-term component, presumably attributable to post-glacial rebound. Fitting and removing a linear function would get rid of the saw-tooth problem and let your technique better examine the frequency content.
The other way is first differencing, the discrete form of the time derivative. A linear rise will then become a fixed constant, which sits at the zero-frequency point in the spectrum (infinite period, if you prefer), separate from the rest and not messing up the spectrum. If there is a 60-year pure harmonic in the time series it will still be there in d/dt. However, if it is a non-harmonic repetition (much more likely IMO) it will not be so simple, although some 60-year base component should still be there.
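The first-difference idea is easy to demonstrate on synthetic data (illustrative numbers: a 5 mm/month rise plus a 40 mm, 720-month “60-year” cycle). The trend collapses to the mean of the differenced series, while a harmonic cycle survives, attenuated by the factor 2·sin(π/period):

```python
import numpy as np

t = np.arange(1441)                              # 120 years of monthly data
y = 5.0 * t + 40.0 * np.sin(2 * np.pi * t / 720)

dy = np.diff(y)          # first difference: 1440 points, two full cycles
slope = dy.mean()        # the linear rise is now just a constant (5.0)

# The 720-month cycle is still there, scaled down by 2*sin(pi/720):
spec = np.fft.rfft(dy - slope)
cycle_amp = 2.0 * np.abs(spec[2]) / len(dy)      # bin 2 = 2 cycles per record
expected = 40.0 * 2.0 * np.sin(np.pi / 720)      # about 0.35
```

The strong attenuation of long periods is the practical drawback: a 40 mm cycle comes out of the differencing at about a third of a millimetre.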
What to do part II.
The other thing that can be done to see whether there is a common frequency is cross-correlation (I guess R must have some function to do that too).
For example, do a CC of Stockholm and another long record, and do your frequency analysis on the result. If there is some common variability it should come out. Similar detrending rules will probably be required.
There will be a lot of variability that is due to local resonances, as you said, but if there is a common signal this may be the best shot at finding it.
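A sketch of that cross-correlation approach on synthetic data (illustrative only): two noisy records sharing a 60-sample cycle at different phases give a cross-correlation that still oscillates at that period, while a record cross-correlated against pure noise does not:

```python
import numpy as np

def cc_cycle_strength(y1, y2, period):
    """Fit a sine of the trial period to the full-lag cross-correlation
    of two standardized records; returns the fitted amplitude."""
    a = (y1 - y1.mean()) / y1.std()
    b = (y2 - y2.mean()) / y2.std()
    cc = np.correlate(a, b, mode='full') / len(a)   # all lags, -N+1 .. N-1
    t = np.arange(len(cc))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, cc - cc.mean(), rcond=None)
    return np.hypot(coef[0], coef[1])

rng = np.random.default_rng(3)
t = np.arange(600)
common1 = np.sin(2 * np.pi * t / 60 + 0.3) + 0.5 * rng.normal(size=600)
common2 = np.sin(2 * np.pi * t / 60 + 1.1) + 0.5 * rng.normal(size=600)
noise = rng.normal(size=600)
```

A shared cycle survives the cross-correlation even when the two records have different phases, which is the attraction of the method for scattered tide gauges.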
“Note that 6.33 y is rather near to the Chandler wobble modulation period which is no surprise.”
Hi Ray, could you explain a little about how that is related to Chandler? thx.
The only 55-year feature that strikes me is the periodic low averages at Stockholm. Maybe the lack of storm surges, or calm weather for extended months. Other than that, water sloshing from currents and storm tracks from one area to another looks to me to be the most likely cause. Sea level is a fluid thing. 😉 pg
milodonharlani says:
May 1, 2014 at 10:05 pm
Maybe “we” haven’t done WHAT, and you have done WHAT? You don’t care if anyone else has done WHAT?
That’s what I asked last time. You posted an article about an effect dependent on the PDO involving half a freaking mm per year, and you claimed you had seen it … but seen WHAT?
If you are claiming you’ve seen the half-millimetre per year change in tides due to the PDO, I say no … but if you’re not claiming you’ve seen the 1/2 mm, then just what are you claiming?
w.
Cool analysis, Willis! I imagine if the yearly “resonance” peak is an order of magnitude higher than longer periods, then the monthly and daily “resonance” peaks are even that much higher still.
Can I suggest that someone plot this data in such a manner that the X-axis is not linear but rather logarithmic? Or would it be antilogarithmic? Whatever … find the geometric progression power that spaces each of the peaks out so they are about the same width. This would help the eye determine if the peaks and valleys show any pattern, not that I think any pattern will emerge.
I think you are right though, Willis, when you suggested that what we are looking at is very, very, very low frequency resonant “waves” oscillating in a fixed cavity. I suspect that if you made measurements in a circular direction away from the measurement point, you could relate the resonance peaks to specific radial distances where the water hits an opposing shore and “reflects” back. All of the different resonance peaks correspond to those distances, just like sound in a flute or any other standing-wave instrument. The ocean is producing ultra-low frequency music?
milodonharlani says:
May 1, 2014 at 10:15 pm
Good heavens, you are approaching total incoherency. Mosher and I taking a show on the road? What “show” would that be?
And what does that have to do with the sea level trends from Astoria? You post those trends as if they clearly prove me wrong about something … but what?
Sorry, Milodon, but you’re making no sense.
w.
Shawnhet says:
May 1, 2014 at 10:35 pm
You see, this is why I asked you to quote what you disagree with. Obviously, you think someone here is “hanging their hat on sea levels as the only useful metric” for detecting climate cycles.
However, I know of absolutely no one who is making that claim, and I know I’m not, so I don’t know who you think you are disagreeing with … but it ain’t me …
w.
Willis Eschenbach says:
“No, because it is linear in period where the FFT is linear in frequency.”
There’s no issue of linearity here. You are doing each optimisation at a fixed frequency (or period).
Here’s the math. You’re minimising
I = ∫ (f(t) – a*cos(ωt+b))² dt
over a and b, so setting dI/da = 0 gives
∫ (f(t) – a*cos(ωt+b)) cos(ωt+b) dt = 0
or, since ∫ cos²(ωt+b) dt ≈ N/2 over the record, a = (2/N) ∫ f(t) cos(ωt+b) dt
where N is the length of your finite integration interval. Expanding the cos:
a = (2/N)( cos(b) ∫ f(t) cos(ωt) dt – sin(b) ∫ f(t) sin(ωt) dt )
ie a linear combination of the cosine and sine transforms. When you optimise over b, you just have (2/N) times the magnitude of the complex Fourier transform. There's a fuss about the finite range, and that does lead to problems at low frequency. That's inevitable: it follows from the finite record (200 years).
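Nick's algebra is easy to verify numerically. A sketch with synthetic data (my own construction, not Willis's code): at a period that divides the record length exactly, the least-squares sine fit and the scaled FFT magnitude agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2400                      # e.g. 200 years of monthly data
t = np.arange(n)
# Synthetic signal: one known sinusoid (amplitude 3) plus noise
f = 3.0 * np.cos(2 * np.pi * t / 120 - 0.7) + rng.normal(0, 1, n)

# Best-fit sinusoid at a fixed 120-month period via linear least squares
w = 2 * np.pi / 120.0
X = np.column_stack([np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, f, rcond=None)
amp_fit = np.hypot(*coef)     # amplitude of the fitted sine wave

# Same amplitude from the DFT bin at that frequency (n/120 = bin 20)
amp_fft = 2 * np.abs(np.fft.rfft(f)[n // 120]) / n

print(amp_fit, amp_fft)       # both close to 3, and equal to each other
```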
Greg says:
May 1, 2014 at 10:36 pm
Greg, let me suggest you re-read the head post, particularly the part where I said:
Is there some part of that which is unclear? I don’t care in the slightest about your brilliant ideas about the “fundamental point” that you think I’m missing. Come back when you can find the missing signal, and I’ll pay attention. Until then … not so much.
w.
PS—What on earth does this mean?
What “steady rise” are you talking about when all of the datasets are first detrended?
Ray Tomes says:
May 1, 2014 at 10:42 pm
Thanks, Ray. I note that your method finds about the same cycles as my method … including the fact that you do not find a 60-year cycle either.
w.
Greg says:
May 1, 2014 at 10:48 pm
Um … er … was my writing not clear? I said in the head post:
Please, please, please first read what you are criticizing, and only after you do that, then get all critical. Your repeated attempts to do it in the opposite order aren’t doing your reputation any favors.
w.
Willis
there is a document at DEFRA with lots of links (some of which I can’t get to work) to UK resources which you might find useful (if you’re not aware of them already)
http://chartingprogress.defra.gov.uk/feeder/Section_3.5_Sea_Level.pdf
One link leads here
http://www.ntslf.org/
Willis what was your point anyway. Cheerio.
I know you don’t love him… but 60 years?
https://www.youtube.com/watch?v=6R26PXRrgds#t=48
Ground is dropping up to 10 times faster than the sea level is rising in coastal megacities, a new study says: http://www.dailymail.co.uk/sciencetech/article-2616714/Forget-global-warming-groundwater-extraction-causing-megacities-SINK-beneath-sea-level.html
bushbunny says:
May 2, 2014 at 12:20 am
Without a quote to indicate which of my many points you might be referring to, I fear I can’t answer that.
Regards,
w.
SØREN BUNDGAARD says:
May 2, 2014 at 12:25 am
No clue what you’re talking about, Soren … and the odds of me watching a 48-minute video of Piers Corbyn in order to find out are either zero or zero, depending on the state of the tides.
w.
Sorry, I did not see that detrending comment in the caption of the graph. Perhaps it would have been better to put it in the text of your article describing the method. The description you gave just above the graph was this:
“In any case, here’s what the sinusoidal periodicity analysis looks like for the Stockholm tide data, both with and without the annual cycle:”
You forgot to say you’d removed the trend too, so I did not pay too much attention to the graph.
What is significant is that most of the graphs still have rising energy at longer periods, which indicates that either there is still a trend or there is significant variability at periods ≥ 70 years. This can cause similar problems to a trend.
The usual remedy for this is a window function or “taper”. But if I explain what that means you’ll probably just say I’m trying to be smart, so I won’t bother.
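For readers left hanging: a taper is just a weighting that fades the record in and out so the endpoints match, which suppresses the spectral leakage that a residual trend (or end-point mismatch) smears across neighbouring frequencies. A minimal sketch on synthetic data of my own construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2400
t = np.arange(n)
# Synthetic record: a 300-month sinusoid plus a residual linear drift
f = 2.0 * np.sin(2 * np.pi * t / 300) + 0.001 * t + rng.normal(0, 0.5, n)
f = f - f.mean()

window = np.hanning(n)        # Hann taper: fades the record in and out
spec_raw = np.abs(np.fft.rfft(f))
spec_win = np.abs(np.fft.rfft(f * window))
# With the taper, the leftover trend's leakage is confined to the
# lowest bins, and the true peak at bin n/300 = 8 stands out cleanly.
```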
The 22-station data file actually has 20 European stations plus 2 U.S. stations, and none elsewhere. (That isn’t obvious from the file; it required looking up the names. For example, if anyone wonders where the heck Swinoujscie is, it is in Poland.) Entire continents are not represented in it at all. The reader should keep in mind how non-representative that is of what most people are most interested in: global average history.
As an analogy, would one judge the average temperature of planet Earth by having 10 stations in Antarctica averaged with 1 station in Alaska?
So let’s start getting into some of the problems here if one wants to evaluate global climate history:
In a way, it is appropriate that this article begins with a gauge reading a sea level fall over the past 2 centuries (due to local land rise), for such implicitly highlights how that is very, very different from the global average.
To isolate a signal on the scale of mm/year, data for each station would need to be properly calibrated relative to local land rise or fall (postglacial rebound, subsidence, tectonic effects).
As analogy, not doing so would be a little like trying to measure the history of global climate (e.g. the 1/500th absolute temperature change in Kelvin which constituted global warming over the past century) by looking solely at the readings from a thermometer on some airline jet, without even compensating for variation in how much time the plane spent in different locations and at different altitudes.
Looking at a single station is not a way to judge global sea level history. If someone thought it was, the result would be entertaining if done analogously with temperature: Someone could depict global cooling over the past century or just about anything by choosing the station, as localities vary greatly. (As a further analogy, there may be some temperature proxy in some location which, if looked at in isolation, would have noise and local factors overwhelm the detection even of entire ice ages in global climate history).
What about simple averaging, such as with the 22 station data file? How much would averaging those 22 stations give a global average? Averaging all (those) stations then would only be misleading. About a third of the total are in the tiny country of the Netherlands, for example, thus grossly overweighted in such an average. Another several are in Germany. Zero of those 22 are outside the U.S. or Europe.
For this comment, I considered doing an illustration nevertheless with stations in that file, without overweighting Europe quite so much, but it’d take more time than desired, especially since the file is riddled with gaps of years of missing data in different series, which would have to be handled (better by interpolation than by treating them as 0s).
———
So what does give the actual picture better? As an example, which is not behind a paywall but as an available full-text PDF, let’s observe what is done within a paper at http://www.joelschwartz.com/pdfs/Holgate.pdf :
First of all, look at figure 1 in that link: When creating a 9 tide gauge average, the author has only 3 of 9 stations (not 20 of 22) be from Europe. He deliberately includes others spread out and far away, such as Hawaii and New Zealand. Europe is still relatively overrepresented, due to the history of modern society and to where technology like tide gauges got first installed long ago, but such isn’t as bad.
Secondly, notice that gauge data is “corrected for glacial isostatic adjustment and inverse barometer effects.”
Thirdly, notice that averaging is used to get the decadal rates, as opposed to plotting unaveraged data. As an analogy with temperature instead of sea level, if someone plotted surface temperature history in daily values without averaging, they could create a grand mess concealing the existence of about any real patterns in global climate history. (The 0.6K of global warming over the past century, a 1 in 500 part change in absolute temperature, requires averages to even be visible on a plot).
Fourthly, note the author compares the 9 station average to a 177 station average over a time when data is available from both (figure 2). As this is real world data, sometimes a peak or trough in one dataset happens before or after that in another dataset, but they are similar enough as could be seen by an unbiased observer, which is a good sign.
———
As for a 60 year cycle, that isn’t something I emphasize in climate history, let alone in sea level in particular, so mainly it isn’t something I’m here to support. As I’ve often noted, surface temperature in the 20th century followed more a double peak appearance (in original data) than a hockey stick, but such isn’t about implying peaks were always 60 years apart indefinitely further back.
However, amusingly, although few if any commenters here seem to have even noticed, earlier this very week there was a WUWT article showing a large plot of sea level rise rate history with a high near 1940, then another high towards the end of the 20th century (around 60 years later). The plot in the following is of 18-year trends, over about 2 decades at a time, so it hides the shorter oscillations in the Holgate plot, depicting a longer average:
http://wattsupwiththat.com/2014/04/28/sea-level-rise-slows-while-satellite-temperature-pause-dominates-measurement-record/
———
Note to readers: If this post is argued with, look at what is snipped and not quoted, as that can be most revealing of all.
“they are investigating the upper end of the stress-strain curve of veracity”
I’m sorry Willis, I’m pinching this…
Willis, can you tell us how “monthly” tide gauge readings are calculated? Calendar months, with their variable length, 30.44 days on average, complicated by leap years, are not the best time units for tidal data. True periods should be related to the synodic month (29.18 – 29.93 days, 29.53 days on average) and tropical year (12.368 synodic months or 365.2422 days). Also, at some locations, like Stockholm, max tidal range is small (~40 mm) compared to haphazard effects of wind driven swells (~500 mm).
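The commenter's figures for the year/month relationship check out; a quick sanity check of the arithmetic:

```python
# A tropical year expressed in synodic (lunar phase) months,
# using the commenter's quoted mean values.
tropical_year_days = 365.2422
synodic_month_days = 29.530589   # mean synodic month

months_per_year = tropical_year_days / synodic_month_days
print(round(months_per_year, 3))  # 12.368, as stated
```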
Greg says:
May 2, 2014 at 12:49 am
Riiiight. You didn’t read all of what I wrote, and somehow your inattention is all my fault because I forgot to write it correctly …
Do you realize how foolish you look when you do that? You screwed up, and as a result you made wildly incorrect claims. Now me, if I did that I’d apologize and move on. Hey, it happens. When you’re wrong, say so, it’s the painful but honest way out, and I’ve been forced to take it more than I’ve wished. It happens.
But instead of apologizing for your incorrect assertions and getting on with life, you want to convince everyone that you not reading all the parts of a scientific work before lecturing the author is somehow my fault? What, like I control where your eyes wander?
Sorry, Greg, but that pig won’t fly …
w.
@kasuha
I looked at the wavelet site you listed, and it’s not done by someone who can write about mathematics. They are clearly regurgitating some other text. See formula 2.1: they use a complex exponential and then plot the result in a 2D space. Not possible! e^ix = cos(x) + i*sin(x). They clearly got this formula from somewhere else. The use of complex exponentials is required for electrodynamics and quantum mechanics, where there really are two components to the field being represented. However, it’s unjustifiable for representing temperature, as they propose to do. Their introduction of their terms is non-existent. This sort of poor understanding of a field is everywhere in mathematics. So, I don’t recommend Willis read this site. Any others?
Henry Clark says:
May 2, 2014 at 12:56 am
Look, Henry, if you want to “evaluate global climate history”, that’s a fine thing … but this is not the place for it.
Here, I’m asking for assistance in evaluating a claim from a scientific study which said that “there is a significant oscillation with a period around 60-years in the majority of the tide gauges examined during the 20th Century”. To investigate that claim, I simply chose all the tide stations I could find with a record at least twice as long as the 60-year cycle I’m looking for. Records 120 years long are not really long enough to reliably identify a 60-year cycle, but it’s what we’ve got.
I am NOT doing this to “evaluate global climate history”. I’m evaluating individual tide gauge records to see if there is a ~60 year cycle in a majority of the individual records as the authors claim. I am saying absolutely nothing about global climate history, or global average temperature of the planet earth, or anything global of any kind. I’m looking at the characteristics of individual tidal records.
Let me ask that you do the same, and you leave “evaluating global climate history” for some other thread. One problem at a time, please.
w.
Willis, Piers Corbyn’s explanation of a 60-year cycle starts here:
[ http://youtu.be/6R26PXRrgds?t=23m42s ]
Willis,
As an aside about the Stockholm results (and I haven’t time this morning to read all of the comments): given that the land is rising, could the periodicity of peaks be influenced by planetary gravitational forces?
Just a thought.
I sometimes wonder why I do what I do …
Regards,
Because unlike some of us old farts you really do enjoy playing with maths, and you do it really well even if you are self-taught, although I see nothing degrading (not the right word, but I can’t think of the one I want) in being so.
One thing that is good about this pseudo-Fourier approach is that it can work on data with breaks in it.
One notable difference is that you cannot take the series of fitted amplitudes and rebuild the source file. It is not a transform in the same way the FFT is.
Henry Clark says:
May 2, 2014 at 12:56 am
Thanks for that, Henry. Unfortunately, Holgate et al. have used decadal running means, which have horrendous properties, often inverting peaks. I’d take a look at the underlying data to see how badly this affected the results … but heck, Holgate et al. didn’t bother with archiving their data.
Ah, well …
w.
michaelozanne says:
May 2, 2014 at 1:01 am
Glad someone appreciates it as a description of bending the truth until it breaks …
w.
“Now me, if I did that I’d apologize and move on. Hey, it happens. When you’re wrong, say so, it’s the painful but honest way out, and I’ve been forced to take it more than I’ve wished. It happens.”
What part of the first sentence are you having trouble reading?
Sorry, I did not see that detrending comment in the caption of the graph.
The fact that you forgot to say you’d subtracted the trend when you described your method would be a good opportunity to take your own painful way out and move on. But that’s not really the way you work, is it? Despite what you like to think.
I’m not going to scrupulously read every line and caption of something that is wrong according to the description of how it was derived. At that point I start scanning.
As a result I missed what you put in the caption, and I apologised. Enough.
Get back to what the data may tell us.
Willis, Piers Corbyn’s explanation of a 60-year cycle starts at 23:40…
(I do not know why the video is not started at this point – sorry)
[it doesn’t matter, Corbyn has nothing useful to say -mod]
To find a signal it is necessary to look where it is.
As shown in this graph, mid-ocean sea level is dropping. Why is the amount of change greater than the temperature change can explain?
Changes in the solar magnetic cycle cause the ocean to expand and contract mid-ocean. The mechanism is not thermal expansion or contraction. Expansion and contraction of the mid-ocean (no change in the mass of the ocean) has minimal effect on coastal tide gauges.
The physical reason why the solar magnetic cycle causes the mid-ocean of the planet to expand and contract is also the physical reason why/how the solar magnetic cycle generates the geomagnetic field and causes geomagnetic field changes. It is a charge mechanism.
Excerpt of graph of the data that shows the mid ocean level dropped from Jo nova.
http://jonova.s3.amazonaws.com/graphs/rainfall/sea-level-rise-cazenave-s3.gif
Abstract of the paper from Curry’s blog.
http://judithcurry.com/2014/04/24/slowing-sea-level-rise/
Abstract. Present-day sea-level rise is a major indicator of climate change. Since the early 1990s, sea level rose at a mean rate of ~3.1 mm yr−1. (William: The mid-ocean level increased according to the satellite data; the tide gauge data did not increase.) However, over the last decade a slowdown of this rate, of about 30%, has been recorded. It coincides with a plateau in Earth’s mean surface temperature evolution, known as the recent pause in warming.
http://www.sciencedirect.com/science/article/pii/S1364682611002896
Geomagnetic South Atlantic Anomaly and global sea level rise: A direct connection? (William: Yes, there is a direct connection. The same mechanism that is causing the South Atlantic geomagnetic field anomaly – the magnetic field strength in a region larger than South America has dropped by more than 30% – is also causing the sudden unexplained Greenland geomagnetic field anomaly.)
We highlight the existence of an intriguing and to date unreported relationship between the surface area of the South Atlantic Anomaly (SAA) of the geomagnetic field and the current trend in global sea level rise. These two geophysical variables have been growing coherently during the last three centuries, thus strongly suggesting a causal relationship supported by some statistical tests. The monotonic increase of the SAA surface area since 1600 may have been associated with an increased inflow of radiation energy through the inner Van Allen belt with a consequent warming of the Earth’s atmosphere and finally global sea level rise. An alternative suggestive and original explanation is also offered, in which pressure changes at the core–mantle boundary cause surface deformations and relative sea level variations. Although we cannot establish a clear connection between SAA dynamics and global warming, the strong correlation between the former and global sea level supports the idea that global warming may be at least partly controlled by deep Earth processes triggering geomagnetic phenomena, such as the South Atlantic Anomaly, on a century time scale.
Stockholm is a fascinating place to speculate about ocean cycles due to “sloshing”. Locally there are many islands, then there is the Baltic Sea, then the North Sea, then the Norwegian/Barents/Greenland sea basin, then the whole Atlantic, then the whole world, all of which could be sloshing at different rates.
But would not the twice-a-day tidal sloshing of the Atlantic pretty much remove anything longer term?
The only interesting feature of the recent Lovejoy paper was its use of Haar wavelets for ‘spectral’ analysis of temperature time series. Intuition suggests these may be better for very erratic time series than smooth trigonometric functions. Worth a try?
From fig 3 I see the four Dutch stations on the RHS all have a strong peak just below 20. From the power spectrum of IJMUIDEN it seems to be 17.6, with close subharmonics, the longer periods being less reliable.
72.4
36.5
17.6
That means there is a non-harmonic 72-year (or possibly 145-year) periodicity.
The other stations do not seem to place the c. 35-year peak in the same position in Willis’ plots, but I don’t have time to check them all now.
Some of the plots in fig 4 also seem to have this c.17.6 , including Marseilles.
These shorter periods are much more reliable. So geographically dispersed sites showing commonality should be informative.
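A quick check of the arithmetic behind the "non-harmonic" observation in the quoted periods:

```python
# The quoted peak periods (years) from the IJmuiden power spectrum.
peaks = [72.4, 36.5, 17.6]

# If they were exact harmonics, successive ratios would both equal 2.
r1 = peaks[0] / peaks[1]   # about 1.98
r2 = peaks[1] / peaks[2]   # about 2.07
```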
“But would not the twice-a-day tidal sloshing of the Atlantic pretty much remove anything longer term?”
Not if the Atlantic tide was itself varying with something other than the obvious 12h tide. Any influence from N.Atl would be additive to local variations, so it may mask them but it would not remove them.
Willis
Looks pretty comprehensive to me. The only thing I might add is that perhaps some researchers are using some type of global averaging to create global datasets, then finding the c. 60-year signal in those.
Yes, I’m a self-taught scientist. And yes, I’ve never taken a class in signal analysis.
Which is probably true of most people who are actually paid to carry out and publish research on these issues in the environmental sciences. You, on the other hand, seem to have, IMHO, gone out and done a decent job of educating yourself on this type of analysis, better than the aforementioned professionals. But in saying that, and as you know, I still get corrected on some of the nuances of these techniques, so who am I to judge.
Willis
Change:
…I still get corrected…
to:
…continually get corrected…
I was being way too generous to myself.
…”As you can see, Stockholm is (geologically speaking) rapidly leaping upwards after the removal of the huge burden of ice and glaciers about 12,000 years ago. As a result, the relative sea level (ocean relative to the land) has been falling steadily for the last 200 years, at a surprisingly stable rate of about 4 mm per year.”…
============================================================
Just a quick question that occurred at the beginning of my read. The satellite measurement of sea level is problematic for many reasons, some of which are that the seas constantly vary due to tides, 16-year tide cycles, wind, and waves. Land, however, does not have these particular issues. (Tides yes, but to a much smaller degree.)
If we had a satellite record of the land movement at Stockholm, would that, in relationship to the tide gauges, give us a truer measurement of the actual sea level change at that location, or, for that matter, everywhere else tide gauges are fixed to land?
I have often seen debates about the tide gauges vs the satellites, but working symbiotically would they not produce a more accurate reflection of true sea level changes? After all, if the satellite measurements of land at Stockholm showed a flat 4 mm per year rise, then the sea level rise there would be, minus annual changes, essentially zero.
JDN says:
See formula 2.1, they use a complex exponential and then plot the result in a 2D space.
____________________________________
A complex number can be represented as a combination of real and imaginary parts (Cartesian coordinates), or as an amplitude and a phase (polar coordinates). In this type of analysis, amplitude is of much higher importance than phase, so the 2D plot contains just the amplitude values.
At least that’s how I understand it. It’s similar to the use of the Fourier transform, where phase values are also often omitted when presenting the result.
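kasuha's point can be made concrete with a few lines of numpy (synthetic signal of my own choosing): the complex Fourier coefficient carries both pieces of information, and taking its magnitude is what collapses the result onto a 2D plot.

```python
import numpy as np

n = 240
t = np.arange(n)
# Known sinusoid: amplitude 5, phase 1.0 radian, period 24 samples
f = 5.0 * np.cos(2 * np.pi * t / 24 + 1.0)

c = np.fft.rfft(f)[n // 24]      # complex coefficient at period 24
amplitude = 2 * np.abs(c) / n    # polar magnitude: recovers 5.0
phase = np.angle(c)              # polar angle: recovers 1.0 rad
print(amplitude, phase)
```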
“As you can see above, unlike with periodicity analysis, removing the annual signal does not affect the results for the longer-term cycles. The longer cycles are totally unchanged by the removal of the annual cycle.”
But, surely, that’s as a result of working in the wrong space, no? To “remove” the annual cycle you convert to frequency or period space first (whichever floats your boat, pun intended) and apply the filter *there* and not in time space
Also, once in Fourier space you can switch back and forth between period and frequency by just applying the appropriate Jacobian. You might want to do a Fourier transform, convert to period, then apply your filter and see if you don’t get results you find useful. Not that there aren’t plenty of other methods of analysis besides just the FT.
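A sketch of that frequency-space filtering on synthetic data (my own construction; the 11–13 month band is an arbitrary choice): transform, zero the bins whose period is near 12 months, transform back.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2400
t = np.arange(n)
# Synthetic monthly record: annual cycle plus a slow 120-month cycle
f = 4.0 * np.sin(2 * np.pi * t / 12) + 1.5 * np.sin(2 * np.pi * t / 120)
f = f + rng.normal(0, 0.3, n)

# Remove the annual cycle in frequency space: zero the bins whose
# period falls near 12 months, then transform back to the time domain.
F = np.fft.rfft(f)
freqs = np.fft.rfftfreq(n, d=1.0)            # cycles per month
periods = np.divide(1.0, freqs, out=np.full_like(freqs, np.inf),
                    where=freqs > 0)
F[(periods > 11) & (periods < 13)] = 0.0
f_filtered = np.fft.irfft(F, n)
# The 120-month cycle survives; the annual cycle is gone.
```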
@David
>no, uplift is not linear. Think earthquakes ….. starts and stops, and sometimes you get an 8.5 magnitude and then sometimes a 2.1. Why should uplift be linear?
Uplift is like an iceberg rising out of the sea as the top melts. It is smooth and changes the shape of the curvature of the Earth (somewhat). The mass lost (relatively rapidly geologically speaking) was about 2000 tons per sq metre which is about the same as removing 800 metres of granite (2640 ft). Losing that much in 5000 years has consequences that continue for a long time.
Hi Greg, the Chandler wobble is a motion of the Earth’s pole that takes about 433 days, due to the Earth being an oblate spheroid. The seasons play a part in Earth’s motion also, because of changing temperatures of land and sea water and the resulting movements. I once read that the monsoons blowing on the Himalayas have enough effect to change LOD (length of day). Of course these two cycles will interact, sometimes adding and sometimes subtracting. They must be of nearly equal amplitude, because the Chandler wobble almost disappears every ~6.4 years. These motions must have some effect on sea levels because of rotational inertia transfers, although it might be expected to be quite small.
An 18.6 year cycle fits a lunar/Earth interaction.
http://en.wikipedia.org/wiki/Lunar_standstill
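The ~6.4-year near-disappearance mentioned above is just the beat period of the two cycles; the arithmetic:

```python
# Beat period of the Chandler wobble (~433 days) against the annual
# cycle (~365.25 days): the two nearly cancel roughly every 6.4 years.
chandler = 433.0    # days
annual = 365.25     # days

beat_days = 1.0 / (1.0 / annual - 1.0 / chandler)
beat_years = beat_days / 365.25
print(round(beat_years, 1))   # 6.4 years
```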
Climate is long term weather statistics. Not fish.
============
Climate scientists have used tree rings, sediment, upside-down calibration, etc., as proxies for climate. About the only thing they haven’t tried is channeling the dead, so why not fish?
Willis
I think if you do an FFT on the data and extract the amplitude spectrum, you would get a similar spectrum, but the FFT uses sines and cosines, so it is higher resolution and has different phase properties, which comes in handy if you wish to filter certain frequencies out of a time series or adjust the phase. On seismic data we use the FFT power spectrum to check for 60 Hz power-line interference and other spectral anomalies, which show as a large spike at 60 Hz etc. on the spectrum plot. The cosine FFT forward transform, or something similar, is what I think you have built, and it is very good at preserving the key events in a time series. JPEG compression works by doing a two-dimensional cosine transform and aggressively quantizing the coefficients. I have used the one-dimensional cosine FFT for compression of time series and achieved 10 to 1, by quantizing from 32-bit down to 4-bit with almost no loss.
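The quantise-the-transform idea is easy to sketch. This toy version (my own construction) uses an FFT rather than the discrete cosine transform that JPEG actually uses, and a crude 4-bit quantiser; the 10-to-1 figure above is the commenter's own result, not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)
# Smooth synthetic trace: a few slow sinusoids plus a little noise
f = (np.sin(2 * np.pi * t / 256) + 0.5 * np.sin(2 * np.pi * t / 64)
     + 0.1 * rng.normal(size=n))

F = np.fft.rfft(f)
scale = np.abs(F).max()
# Crude 4-bit quantisation: np.round acts on the real and imaginary
# parts separately, mapping each to an integer in [-7, 7].
q = np.round(F / scale * 7)
F_rec = q * scale / 7
f_rec = np.fft.irfft(F_rec, n)

# Relative RMS reconstruction error stays modest despite the
# heavy quantisation, because the energy sits in a few coefficients.
err = np.sqrt(np.mean((f - f_rec) ** 2)) / np.std(f)
print(err)
```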
Corbyn has nothing useful to say
============
Unless you have full knowledge of cause and effect in climate, you cannot know that to be true. I’m not saying Corbyn has something useful to say; rather, that dismissal of competing ideas is what has caused much of the failure in climate science.
Until such time as someone has demonstrated the ability to reliably predict climate, no one has a better idea than anyone else.
Underlying physical question… what would cause the cycles, any cycle? Chandler Wobble and 11 yr sunspot cycles are obvious candidates. One possible direct physical process is storm surges http://en.wikipedia.org/wiki/Storm_surge which have significant one-time effects. Throw in the occasional Katrina and Sandy, and “average” tide statistics may be badly skewed by these events.
Greg and Nick Stokes are correct. This is equivalent to an FFT, except it’s being performed in the time domain, not the frequency domain. All Greg’s points are correct concerning requirements for padding, stationarity and so forth.
Willis,
Being from the Netherlands, I could not help but notice that you use 7 cities from my rather modestly sized country. I have two remarks:
a) Using Maassluis as a proxy for sea level is quite risky, as Maassluis lies inland along a river. I would expect that the water level there is mostly dictated by rainfall and melt in the Alps. Hoek van Holland is about 12 km downstream on the sea side, and would be more indicative of sea level. The fact that the graphs for these two towns look so different says a lot.
b) Since the maximum distance between all of these towns is only about 250 km (along the coast line), I’d say that if you want to deduce any global sea level trend (or lack of trend), it would better to look at the common trend shared by these towns. In any case, given their proximity, I think it’s quite surprising that there is so little resemblance between the 7.
Frank
Guys,
Please somebody tell me just how sea level is measured to the mm. Are there sticks stuck in the mud with little lines going up and down?
Just off the top of my head I can think of a few variables to any given daily or yearly sea level.
Barometric pressure both onsite and offshore.
Wind speed and direction both onsite and offshore.
Rainfall/snow melt onsite and offshore.
Sea temperature onsite and offshore.
Site subsidence or rise.
I suppose tides can be predicted to some reasonable degree, but to the mm?
Somebody must have worn out a bunch of pencils figuring out this model. I wonder if he has a grant?
I buy your searches Willis. I can’t see why one should expect such cycles. The causation of tides is, of course, the moon’s and sun’s gravitational pull. Without analyzing the actual data, it makes sense to me that the cycles would be short, reflecting the varying but repetitive relative positions of the sun moon and earth. What does the paper offer as a mechanism for such a long period? I think in climate science there is too much aimless sifting and smithying of data, looking for stuff, any stuff, without some potential phenomenon in mind. I won’t get into the awful things done with the data itself to shoe-horn it into IPCC theory and what it might do to any search for meaning.
I still get an error trying to download the R source. My laptop seems unable to resolve the dl.dropboxusercontent.com address. In fact, I used google to look it up, clicked the google link straight to the site, and it STILL could not resolve the address. In fact, I couldn’t get an IP address talking directly to the campus nameserver. So I went to a different system with a different domain name server and learned that the address given is an alias for duc-balancer.x.dropbox.com (used for load balancing, one presumes), and that some nameservers have either blacklisted it (perhaps for distributing protected IP illegally) or else reject aliases in general as they are often used in various man in the middle attacks.
Anyway, I’m about to try a download using the actual site name — I’ll see if that works.
rgb
Oops, worse than that. The site actually has a broken SSL certificate — one wonders if they’ve heard of the Heartland exploit — I couldn’t get the site to work even (riskily) accepting the SSL certificate it offered up. My recommendation, Willis, is to find another way to distribute the source. Dropbox appears to be a bit wonky, possibly exploited by heartland, possibly just broken at the DNS level.
rgb
Tide gauges on seas that are nearly completely surrounded by land (e.g., Mediterranean Sea, Black Sea) should probably be excluded from this analysis, since the levels of these seas are likely strongly influenced by the variability of river flows into these seas. Unfortunately, this would remove many of the world’s great historic sheltered harbours: Marseilles, Poti, Travemunde, Helsinki, Stockholm, Świnoujście, Wismar, Maassluis, San Francisco, Warnemünde, New York City, and Harlingen. There is a chance that some others would need to be removed, if they are located at or near the mouths of rivers (I don’t have the exact lat/lon of the gauges.) I’m curious what your results would show with these gauges removed …
Here is my emulation of Willis’s first plot, using just a FFT plotted with a period x-axis. The R code is below the figure. I haven’t adjusted the y-axis units to match. But the shape is right. I padded to 8192 months – more would give a smoother result.
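Nick's R listing doesn't survive in this thread, but the same emulation can be sketched in Python (synthetic series of my own; zero-padded to 8192 months as he describes, with the result indexed by period rather than frequency):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2400                       # 200 years of monthly values
t = np.arange(n)
# Synthetic record: a 150-month cycle plus noise
f = 2.0 * np.sin(2 * np.pi * t / 150) + rng.normal(0, 1, n)
f = f - np.polyval(np.polyfit(t, f, 1), t)   # linear detrend first

n_pad = 8192                   # zero-padding interpolates the spectrum
F = np.fft.rfft(f, n_pad)
freqs = np.fft.rfftfreq(n_pad, d=1.0)        # cycles per month
amp = 2 * np.abs(F) / n        # scale so a pure sine recovers its amplitude

keep = freqs > 0
periods = 1.0 / freqs[keep]    # plot amp[keep] vs periods for a period x-axis
```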
RGB – that’s the heartBLEED exploit…
If there is no 60 year cycle in the actual individual site data (local reality) on sea level change, is there a 60 year cycle created by methods of averaging (regional or global modelling) that are in use in analysing sea level change? You can see why people need to average to go from a mixture of local realities to something they can present academically (not that this is necessarily a good idea), but do they actually manage to ensure they are not creating a cycle as an artefact of the averaging process?
Makes me wish I actually had taken statistics beyond secondary level – might be able to answer the question.
Frank de Jong says: May 2, 2014 at 6:36 am
Being from the Netherlands, I could not help but notice that you use 7 cities from my rather modestly sized country.
___________________________
I think that is a function of the Netherlands being a great sea-trading nation, and therefore having good tidal records.
But it does leave me scratching my head as to why there is so much tidal variation in such a small area. I was thinking of gentle rippling effects in the crust, with standing waves and nodes making regions rise and fall in different fashions. But that could not result in the observed differences between tidal stations only 50 miles apart.
Any other ideas for the differences between these tidal records?
Instrument and reading errors?
Periodic dredging of the silt in harbours?
Ralph
RGB – Heartland exploit – freudian slip perhaps?
Willis, Thanks for an informative article.
You may recall that at ICCC7, Nir Shaviv presented at one of the upstairs breakout sessions, and that he made reference to his article, published in JGR: “Using the oceans as a calorimeter to quantify the solar radiative forcing,” Journal of Geophysical Research, Vol. 113, A11101, doi:10.1029/2007JA012989, 2008. The article is available as a free download.
Shaviv uses the derivative of sea level height, the rate of sea level rise, and his results are available in Fig 4 from that article. Shaviv constrains his analysis to the Atlantic in an attempt to keep the effects of ENSO, presumed to dominate the Pacific Ocean domain, from contaminating solar effects and the 11-year solar cycle from the results.
Shaviv gets his sea level data from the Permanent Service for Mean Sea Level, which many of us have accessed at http://www.psmsl.org/ (a different link from the one in Shaviv’s article), and he averages and differentiates those data to get the rate of sea level rise.
Have you considered using the derivative of sea level? I had not heard of it until I spoke with Shaviv at ICCC7.
Using Shaviv’s methodology, I was wondering if you could repeat his findings. Is there an 11-year cycle in the sea level data if you choose that method of analysis? Might there be a different signal between the Atlantic, Pacific, and other basins?
Bob Endlich
Willis Eschenbach says:
May 1, 2014 at 11:48 pm
You’re right I should have stated the point behind my sea level data. They clearly show decadal fluctuations, but don’t go back far enough to show 60 year cycles. However if the ups & downs evident in those data correspond to warmer & cooler decades, with some lag, then why not on longer time frames? Especially since on yet longer scales, sea level does definitely coincide with temperature. It was lower during the LIA than during the Holocene Optimum & other warm phases of the present interglacial, for instance, & much lower still during big ice ages, ie the glaciations during which rivers & oceans of ice weigh down the continents.
As for “show”, I meant a presentation to the mostly out of work fishers & loggers, plus plaster gull sellers & retirees of the coast, who are most familiar with the ups & downs of the Pacific.
Willis, from some of the comments on your previous analysis of sea level rise it seemed that the cycle shows up in the velocity of tide gauge changes rather than the tide levels themselves. I know that when I investigated an error in a tracking mount, a periodic error was obvious when first differences were plotted vs. time, where it was very hard to see in the original pointing data. Logically I don’t see how a cycle could show up in velocity but not position since the derivative of a sinusoid is a sinusoid, but it might be a way to amplify such an effect.
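The intuition about first differences can be checked numerically: the derivative of A·sin(ωt) is Aω·cos(ωt), so differencing scales each cycle's amplitude by its frequency, amplifying short periods relative to long ones rather than revealing anything new. A small numpy sketch (the function name and record lengths are invented for illustration, not taken from any tide record):

```python
import numpy as np

def first_diff_amplitude_ratio(period_months, n_months=2400):
    """Peak amplitude of the first-differenced series divided by the peak
    amplitude of the original unit-amplitude sine of the given period."""
    t = np.arange(n_months, dtype=float)
    y = np.sin(2 * np.pi * t / period_months)
    return np.diff(y).max() / y.max()

# Differencing retains much more of an annual cycle than of a 60-year one,
# since the retained fraction scales roughly as 2*sin(pi/period):
short = first_diff_amplitude_ratio(12)    # annual cycle
long_ = first_diff_amplitude_ratio(720)   # 60-year cycle, monthly samples
```

So a cycle present in position is indeed present in velocity, as the comment says, but differencing acts as a high-pass filter: it will not conjure up a 60-year cycle, it will only de-emphasize it relative to faster ones.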
“””””……Greg says:
May 1, 2014 at 10:36 pm
Great Willis, looks like progress.
However, I think there’s one fundamental point that you’re missing about this which is leading you to misinterpret the many individual plots and the overlay, where you see that all the records have very different spectra at the longer periods.
GE Smith: “2) on a similar vein, since my (now somewhat weak) brain thinks that a linear trend oughta morph into some recognizable “spectral” feature ”…….”””””””
Greg, I told Willis at the outset, that I was NOT criticizing his study; just a trifle baffled trying to (really) understand it.
My own degree had a Mathematical Physics major in it, but that was well over half a century ago, and sadly was somewhat unused in my industry career, so much forgotten.
But I noticed that Willis had said, that he made TWO different alterations to the RAW data, before doing his (spectral) analysis process. One was leaving out the annual cycle, and two was the linear detrending; which incidently, I had NO trouble finding in his paper.
So I was just curious as to what the result would be of just not removing those elements, and performing his “transform” on the raw data.
So that silly sentence of mine (2) was just saying, my mind could not visualize what sort of artifact would show up in his plot, if the linear trend was still in his data.
When I was in school, I was a real whizz in math (half of my degree), and I could easily do many of these transform processes at my fingertips. But my career focused on the physics hardware side, and I lost much of my math; one of the reasons I’m a quantum mechanics dunderhead.
I didn’t want to burden Willis with idle work, but I was curious as to how his detrending and annual cycle removal had altered his result.
Not that I think a big 60 year signal is suddenly going to pop out of the woodwork.
But I had no problem reading that Willis said he did the linear detrend.
Thanks, Willis. Looks like uncorrelated noise….
ferdberple says:
May 2, 2014 at 5:12 am
Piers hasn’t demonstrated any ability to forecast anything, because he doesn’t do forecasting. He does handwaving in the old-fashioned way—just like Nostradamus. Here’s an example. He predicted a 50/50 chance of cyclones in a certain area, and then took credit for a successful prediction when there were no cyclones! How can you not admire that kind of bald-faced audacity?
And after claiming over and over that the London bookies wouldn’t bet against him regarding rainfall, I offered to bet against him … man, you should have seen him run from the bet. I pity anyone standing between him and the door on that day, it would have been as dangerous as standing between Richard Mueller and a microphone …
So while it’s quite possible that “no-one has a better idea than anyone else”, Piers has the best idea of anyone—simply make unbelievably vague predictions, and then claim success for any outcome. He predicted forest fires in Colorado one month, for example, and then claimed success when there were forest fires in Arizona … even Nostradamus couldn’t beat that one.
w.
ThinkingScientist says:
May 2, 2014 at 6:01 am
I don’t have a clue what that means, “equivalent” to a Fourier transform performed in the time domain. It’s not a Fourier transform of any sort, not least because it doesn’t decompose the signal into orthogonal constituents that can then be added together to reconstitute the signal.
So I fear that I’m not understanding what an “FFT … in the time domain” might be.
Mostly, I’m surprised that this procedure doesn’t already have a name. Surely I’m not the first guy to do this kind of analysis?
w.
RGB – that’s the heartBLEED exploit…
Oops. I knew that. Inadequate coffee, dashing off replies in between teaching students how to do buoyancy, oscillation, wave, statics, sundry mechanics problems pre-final exam this afternoon. Brain tired, tired…
rgb
Frank de Jong says:
May 2, 2014 at 6:36 am
Thanks for the local knowledge, Frank. In fact, I’m NOT looking for anything global at all. I’m looking to see if, as the authors claimed, there is a common ~60 year signal in the majority of the long-term tide records … so far no joy.
I agree, and it reflects my claim made way up in the head post that what we are looking at are the characteristics of the surrounding ocean basins, and not common characteristics from some purported long-term cycles.
w.
Willis, Love your work, but don’t know how you find the time or the energy! Have you not considered taking a BSc and then submitting your work for a PhD? (Yes, really!)
rgbatduke says:
May 2, 2014 at 7:37 am
Bizarre. I just tried it and it worked fine. In any case, I’ve converted it to a Word document so it can be uploaded from here, give that a try.
w.
Willis,
Is there an easy way to modify your “sinusoidal periodicity” analysis in such a way that it allows for ‘phase shifting’? Give it, say, a +/- five year phase shift. One wouldn’t expect ‘cycles’ to be uniform throughout the Oceans.
I might want Bob Tisdale to weigh in here concerning ‘water sloshing around the globe’ and phase shift.
As always …A very interesting and thought provoking post . Keep it up.
Willis, I have only read the first few paragraphs, so I apologize if others have already commented in a similar vein. Some comments before I go back to reading about how you applied your method.
I used Periodicity Transforms to try and make sense of vibration data in a missile and as with you, I like periodicity analysis. But there were/are some issues – the random waveform being one.
I think you have taken the next step. From Fourier to Periodicity to Sinusoidal Periodicity. It is the inverse of the Fourier Transform, but not in the usual sense of Inverse which is to reverse the Fourier Transform. Your transform is literally the inverse as in Period = 1/Frequency.
What to call it?
How about the “Willis Transform”?
This may or may not be relevant to the subject, but it appears that the ‘tidal potential’ of the 20 extreme tides is ‘correlated’ to the AMO
http://www.vukcevic.talktalk.net/AMO-T.htm
all necessary details are available here ( see table 1)
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI4193.1
but the calculation I used (something I did as a quick estimate a couple of years ago) may not be considered an entirely appropriate method for drawing a satisfactory conclusion.
Mostly, I’m surprised that this procedure doesn’t already have a name.
Yes, I’ve done it myself many (40+) years ago, but only for unequally spaced sample points. For equally spaced sample points, as several of us have pointed out, you can get exactly the same spectrum with an fft, just sampled at different points. It is much faster to add zeros to the detrended data, say increasing the data length to a power of two at least 8 times the data length, and do an fft. There is no information lost in the fft, and if you need more sample points, just add more zeros. You don’t get better resolution with your method, the resolution is set by the original data length, you just have more closely spaced sample points at low frequencies.
If you did a least-squares fit of a number of sines and cosines to the data, then that is mathematically equivalent to a standard discrete Fourier transform. I understand that you used constant period intervals instead of frequency intervals, but this is not a gain: the coefficients in the standard method are uncorrelated, but yours are. The extra resolution at long periods (low frequencies) is apparent, not real.
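Chuck's zero-padding recipe is easy to sketch. The following is a hypothetical numpy illustration (not Willis's R code, and not Nick's either); padding the detrended record with zeros simply samples the same underlying spectrum on a finer frequency grid, adding sample points but not resolution:

```python
import numpy as np

def padded_amplitude_spectrum(y, pad_factor=8):
    """Amplitude spectrum of a linearly detrended series, zero-padded to
    the next power of two at least pad_factor times the data length."""
    n = len(y)
    t = np.arange(n)
    y = y - np.polyval(np.polyfit(t, y, 1), t)      # linear detrend first
    nfft = 1 << int(np.ceil(np.log2(pad_factor * n)))
    spec = np.abs(np.fft.rfft(y, nfft)) * 2.0 / n   # scale: pure sine -> its amplitude
    freqs = np.fft.rfftfreq(nfft, d=1.0)            # cycles per sample (e.g. per month)
    return freqs, spec

# A 100-month sine of amplitude 5, sampled for 600 months, peaks near f = 0.01:
t = np.arange(600)
freqs, spec = padded_amplitude_spectrum(5.0 * np.sin(2 * np.pi * t / 100))
peak_freq = freqs[np.argmax(spec)]
```

With the 2/n scaling the spectrum reads directly in the data's units, which addresses one of the stated advantages of the sine-fitting approach.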
Here is my emulation of Willis’s first plot, using just a FFT plotted with a period x-axis. The R code is below the figure. I haven’t adjusted the y-axis units to match. But the shape is right. I padded to 8192 months – more would give a smoother result.
Thanks, Nick. I did steal your code (the link worked).
Chuck says: May 2, 2014 at 11:29 am
“For equally spaced sample points, as several of us have pointed out, you can get exactly the same spectrum with an fft, just sampled at different points. It is much faster to add zeros to the detrended data, say increasing the data length to a power of two at least 8 times the data length, and do an fft.”
Indeed so. That’s what I did here. 2^16 points in all (about 32 x data length), and it takes a fraction of a second.
Thanks Willis, the word format link worked (clumsy format, but I just saved it back as txt, removed a few spurious characters, and it looks good). I won’t have time to try either one until I’m dead of old age — Final exam proceeding as I type, another one tomorrow morning, grading all weekend — and then I get to get ready to go to Beaufort to teach physics there by next Friday. But I am curious as to how your code differs from a suitable FFT — Nick looks like he more or less confirms your curve shapes with an ordinary FFT, but he has to pad the data etc (as one expects, actually) to get it to work.
Greg (perhaps Nick/Chris Clark)
Greg says:
May 2, 2014 at 1:52 am
One thing that is good about this pseudo-fourier approach is that it can work on data with breaks in it.
What you say really interests me.
I haven’t delved into Willis’ R code, so I don’t know exactly what he did, but if you agree with Nick – and if I understand his description correctly – then it seems that by pseudo-Fourier approach you mean applying a Fourier series (in terms of sine or cosine OR both?) and in this roundabout way, using the coefficients, expressing the spectrum in terms of increasing period rather than – as per usual – increasing frequency. Right?
So do you mean that this type of approach (applying a suite of Fourier series rather than using the “bucket” approach of a DFT) can be used on data with gaps? I can’t see how this would work; do you have any literature you can point me to? I know there are modified DFT methods out there that can work on sparsely sampled data on irregular supports, but they appear to be a nightmare to implement and seem to be far from efficient. It would be much appreciated if you could spare a moment to elaborate or point me in the right direction.
Greg
Ah, it just clicked I think, I see from a reread of Chris’ post. It can be applied as just a general least squares method… Yes?
To me, it looks like it has slowed substantially in the last decades. If we subtract the northern land rise, maybe we have some accelerating sea level rise after all? On a 50 to 100 year time scale that is.
/Jan
In a related issue involving Fourier Transforms, I spend a lot of time and effort designing lenses (imaging). Most of the work is achieved with approximations (ray optics).
When the results are good enough (the designs that is), then it is time to switch to real physics based analysis, based on diffraction, and real wave based computations.
So to that end my very expensive canned software can give me MTF plots, based on FFT processes, or I can have it do the full nine yards Fraunhofer diffraction integrals. It just goes much more slowly.
So I normally will try both the FFT and the Lamborghini method, to see if they differ (much).
Sometimes, it seems that the FFT just makes up “stuff” that isn’t really there; and it depends a lot on how one sets up some variables in the process.
Do you guys always have full faith and credit, in your FFTs; and do you also have some no holds barred fallback mathematology, you can use to verify reality ??
g
[The mods wonder if your fastest (or most expensive) results are obtained from a Mercedes analysis, a Ferrari or the traditional Lamborghini method? Mod]
The first one is that there is no significant power in the ~ 11-year period of the sunspot cycle
I also use an odd periodicity analyser (used 1801-2000 period for both data sets)
http://www.vukcevic.talktalk.net/SSL-SSN.htm
Perhaps the ocean is too slow to respond to the 11-year cycle, but it appears that it may not be for the 100+ year ‘cycle’.
I suggest that the name of the technique is just a DFT, Discrete Fourier Transform.
I’ve never commented here but FFT/DFT and signals relate to my work in the Navy hunting submarines and in my commercial and amateur radio licenses so I see it from a different perspective and might add some little bit of light to this.
I see that Chris Clark has also identified it as a DFT.
The beauty of the DFT is that almost anyone can put it together with simple tools, and it works with discontinuous data and with record lengths that are not a power of 2. It is easily comprehended and easily replicated. There’s nothing magical about it, unlike the FFT.
Theory of Operation
The idea is simple. The stream to be tested contains many different signals of varying frequency or period (take your pick). You basically GUESS at the possible existence of a sine wave, so you multiply each data point by a corresponding known point on your testing sine wave. Where they match you’ll get a positive result. When the test wave is +1, and the input signal contains a hidden sine wave of the same phase and period, it will also be +1 and the result is 1. When both waves go negative, they multiply and you still get +1. Every other wave in the stream being tested will have some other value less than 1, sometimes going negative and thus canceling any random positive.
Add them all together for one test pass and graph it as a dot on Y above your test frequency (or period) on X.
Increment the frequency (or period) of the test signal and run the data again.
What you get is the graphs shown above. Note that the peaks invariably widen as your period lengthens, because the number of integrating cycles is reduced; with fewer test cycles, random “pluses” are not always going to be cancelled by random “minuses”.
(End of theory of operation).
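As I understand the recipe above, a minimal sketch in Python might look like this (function names and test data are invented for illustration; note that, testing against a sine only, this version is phase-sensitive):

```python
import numpy as np

def correlate_with_test_wave(data, period):
    """Multiply the series by a unit test sine of the given period
    (in samples) and sum the products, per the theory of operation."""
    t = np.arange(len(data))
    return np.sum(data * np.sin(2 * np.pi * t / period))

def scan_periods(data, periods):
    """One summed product per trial period; peaks mark matching sinusoids."""
    data = data - np.mean(data)
    return np.array([correlate_with_test_wave(data, p) for p in periods])

# A hidden 40-sample sine in noise stands out among the trial periods:
rng = np.random.default_rng(0)
t = np.arange(1200)
y = np.sin(2 * np.pi * t / 40) + 0.1 * rng.standard_normal(1200)
periods = np.arange(10, 100)
best = periods[np.argmax(scan_periods(y, periods))]
```

Incrementing the trial period and re-running, as described, is exactly the `scan_periods` loop; graphing the scores against period gives the kind of plot shown above.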
WARNING: Phase relationship to the data stream is important. FFT gives both sine and cosine coefficients but DFT doesn’t unless you deliberately make it so. If your test just happens to be out of phase you’ll get a NULL instead of a PEAK at the expected period or frequency.
DFT is very good about detecting sinusoidal signals because that is what you test against.
BUT a phenomenon that happens at 60 year intervals could be simply the sum of a 20 year phenomenon and a 30 year phenomenon. DFT/FFT will show nothing at 60, but will show the 20 and the 30. It won’t show what happens when the 20 and the 30 align.
It can only show that there is, or is not, a *sinusoidal* phenomenon at some period. It could be almost anything else other than sinusoidal at 60. DFT/FFT will show such a thing as a family of harmonics. Harmonically related DFT/FFT output suggests non-linear phenomena whose period can be deduced from the harmonics.
An abrupt change in land subsidence or rise will produce a PHASE SHIFT on the signal but not change the PERIOD of the signal. However, during the rise the existence of this phase shift is indistinguishable from a change of frequency, a fact used by amateur radio VHF radios to implement FM (frequency modulation) by using phase modulation.
It will show up on DFT/FFT as a harmonic if any phase shifts have taken place, and the rate of the phase shift will produce a frequency (or period) spike in the output, or more precisely, a whole family of spikes depending on how abrupt is the shift.
In the frequency domain, harmonics are to the RIGHT of the fundamental, but in a period chart such as above, harmonics will be to the LEFT.
I see in the aggregated chart a peak at about 20 years and a peak at about 30 years. This is likely to produce a 60 year phenomenon that is simply the sum of these two coming into alignment but won’t show up on a periodic DFT.
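Michael's point about summed components is easy to verify numerically: a sum of 20-year and 30-year sinusoids repeats every 60 years (their least common multiple), yet its Fourier decomposition has power only at 20 and 30 years and none at 60. A quick numpy check with made-up monthly data:

```python
import numpy as np

# Monthly samples of a 20-yr plus a 30-yr sinusoid, over 600 years.
months = np.arange(600 * 12)
y = np.sin(2 * np.pi * months / (20 * 12)) + np.sin(2 * np.pi * months / (30 * 12))

# The waveform repeats every 60 years (LCM of 20 and 30) ...
assert np.allclose(y[: 60 * 12], y[60 * 12 : 120 * 12])

# ... but the spectrum has no line at the 60-year frequency itself.
spec = np.abs(np.fft.rfft(y)) * 2.0 / len(y)
freqs = np.fft.rfftfreq(len(y), d=1.0)          # cycles per month

def amp_at_period(period_months):
    """Spectral amplitude at the bin nearest the given period."""
    return spec[np.argmin(np.abs(freqs - 1.0 / period_months))]
```

Here `amp_at_period(240)` and `amp_at_period(360)` come out near 1, while `amp_at_period(720)` is essentially zero, even though the composite waveform has a strict 60-year repetition.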
“””””…..[The mods wonder if your fastest (or most expensive) results are obtained from a Mercedes analysis, a Ferrari or the traditional Lamborghini method? Mod]…….””””””
Well the Muddling Mercedes Method; gets me there in style, and the Fast Ferrari Transport has much better sound effects; but for sheer ecstasy, the Lamborghini Lift-off Leaves a Lot of Lemmings in the Lurch.
Who’d ever expect to find all that wonderment, hidden in a mundane computer mouse ??
It’s actually a very serious problem, to design a 2.0 mm focal length “Macro” camera lens (1:1) that focuses in a definite plane, but is deliberately very fuzzy, so that it cannot out resolve (Nyquistly) the very low resolution CMOS camera sensor; or else the cursor may decide to move backwards.
I start with the very sharpest lens, I can design, and then I add a very clever (I think so) anti-aliasing filter surface, integral with the lens (bloody invisible too), which retains the original focal plane, but now gives a Gaussian Waist like beam spot, whose size I can control by design, so it’s bigger than the pixels of the sensor chip. My first AA filter designs, were actually Tchebychev filters of a kind; but they gave spurious responses in the stop band, under certain conditions; which was revealed by the Fast Ferrari Transport.
The best (current) ones are more of a Bessel response, and that gives very low spurs in the stop band. And that’s when I have to put the Nitro in the Lamborghini. Those are very tricky, since the profile is not defined by any closed form functions. But it is defined by an integrated set of related functions, each of which produces point to point and slope continuous half cycles of a cubic polynomial function; maybe up to a dozen different functions in the family.
That makes it both tricky and time-consuming to set up the model, and even longer (multiple hours) to run the Lamborghini around the course (for maybe a hundred million laps).
UPDATE: Mr. Willis Eschenbach advises elsewhere that he is NOT using FFT/DFT but something specifically tuned to periodic analyses. Also I suspect while I accurately described a technique for signal analysis it might not be the DFT. Still, the code he uses does indeed multiply a calculated sine wave of incremented period by the data points to amplify that particular period (wavelength) if present.
The implementation code includes this at the heart:
sum((sin((seq_along(tdata)-1)*2*pi/whichline+x[2])*x[1]-(tdata-mean(tdata,na.rm=TRUE)))^2,na.rm=TRUE)
Since the implementation uses the sin() function I suspect it probably is vulnerable to phase shift nulls and this could be discovered using cos() instead of sin() and see what happens.
As I read it: For each integer in the range 1 to the number of data elements provided, multiply by 2pi (radians) then divide the resulting circle into as many parts as which line you are processing and for each such part, multiply by a normalized value in the table.
On the first pass the divisor is small and so the sine wave is going to be short and rough. If the data is monthly, a good technique is to just start with “12” — achieving the stated goal of ignoring variations less than a year in length but NOT ignoring those data points. By the time you reach higher values, the sine wave is going to be very long period with hundreds or thousands of sample points.
It looks a lot like a Sine DFT to me but like I say, I might be mistaken that this is a DFT. At any rate, it is effective, understandable and hence persuasive.
http://www.moyhu.org.s3.amazonaws.com/misc/fourier/tides/tides.html Moyhu’s implementation uses fft directly.
Willis Eschenbach says:
May 1, 2014 at 11:52 pm
“You see, this is why I asked you to quote what you disagree with. Obviously, you think someone here is “hanging their hat on sea levels as the only useful metric” for detecting climate cycles.
However, I know of absolutely no one who is making that claim, and I know I’m not, so I don’t know who you think you are disagreeing with … but it ain’t me ”
My point is that even if you are correct in every particular above, it doesn’t really tell us anything. If you assume that these cycles are well represented in a variety of other proxies but not in sea levels, then there are two possibilities: either our sea level records are somehow flawed or the climate cycles are not well represented in the oceans for some reason. Also, it is worth noting that yours is not the only approach to detecting these cycles with sea level and some of them get different answers than yours.
http://judithcurry.com/2013/09/13/contribution-of-pdo-to-global-mean-sea-level-trends/
Cheers, 🙂
Michael Gordon says:
May 2, 2014 at 5:25 pm
Outstanding, Michael. You appear to be correct that what I have done can be duplicated by a FFT which is padded with a jillion zeros, and using sines rather than cosines (which makes no practical difference, the amplitude is identical) … am I understanding you correctly?
The two analyses, mine and yours, indeed seem to be perfectly matching as far as I can tell. However, my method (while slower) has two huge advantages in climate science.
The first one is that it is impervious to any gaps in the data. I don’t think that is true of the FFT, but then I was born yesterday. Since in climate science gaps in the data are the rule rather than the exception, this allows me to utilize both more datasets, and more of a given dataset.
The second is that it gives results directly in the units of the dataset. So swings of tide are given in millimetres and swings of temperature are given in degrees C.
In any case, thanks for identifying exactly what I am doing. It seems that I’ve invented what might be called the “slow Fourier transform”, the infamous SFT … but since it keeps going over the potholes in the data, the speed seems less important.
Regards,
w.
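For what it's worth, Willis's description (fit a sine at each trial period, report its amplitude in the data's own units, skip over gaps) can be reconstructed as an ordinary least-squares fit. The following is a hedged reading of the method, not his actual code; the NaN handling in particular is an assumption:

```python
import numpy as np

def best_fit_sine_amplitude(data, period):
    """Least-squares amplitude (in the data's own units) of a sine with
    the given period (in samples), ignoring NaN gaps in the record."""
    t = np.arange(len(data), dtype=float)
    ok = ~np.isnan(data)
    y = data[ok] - np.nanmean(data)
    # Fitting a*sin + b*cos by linear least squares is equivalent to
    # fitting amplitude and phase, and gaps need no special treatment.
    X = np.column_stack([np.sin(2 * np.pi * t[ok] / period),
                         np.cos(2 * np.pi * t[ok] / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.hypot(*coef))

# Even with a quarter of the record missing, the amplitude is recovered:
rng = np.random.default_rng(1)
t = np.arange(1200, dtype=float)
y = 3.5 * np.sin(2 * np.pi * t / 132 + 0.8)
y[rng.random(1200) < 0.25] = np.nan
amp = best_fit_sine_amplitude(y, 132)
```

Doubling the returned amplitude gives the peak-to-peak swing described in the head post.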
Can I just stress that it is important to use sine AND cosine functions, not just one. Reason: if there is a pure sine wave in the data, and you multiply it by another wave of the same frequency and phase, a positive in one is always multiplied by a positive in another, likewise a negative and a negative. The result is a positive coefficient. If it is multiplied by a wave 180 degrees out of phase, a positive is always multiplied by a negative and v.v., and the sum is a negative coefficient. If it is multiplied by a wave of the same frequency but in quadrature (90 degree phase difference), the result is sometimes +ve and sometimes -ve and the sum is zero. The signal is missed. Using a cosine as well catches a signal whatever the phase. Apologies for spelling out what may already be obvious.
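Chris's caution is easy to demonstrate: project onto both a sine and a cosine and combine the coefficients as sqrt(a² + b²), and the detected amplitude no longer depends on phase. A small illustrative sketch (invented names, not anyone's production code):

```python
import numpy as np

def sine_only_score(data, period):
    """Sine-only projection; nulls out when the signal is in cosine phase."""
    t = np.arange(len(data))
    return 2.0 / len(data) * np.sum(data * np.sin(2 * np.pi * t / period))

def quadrature_amplitude(data, period):
    """Phase-independent amplitude estimate at one trial period (samples)."""
    t = np.arange(len(data))
    a = 2.0 / len(data) * np.sum(data * np.sin(2 * np.pi * t / period))
    b = 2.0 / len(data) * np.sum(data * np.cos(2 * np.pi * t / period))
    return np.hypot(a, b)

# A cosine-phase signal is invisible to the sine-only test,
# but the quadrature estimate recovers its full amplitude of 3.
t = np.arange(1200)
y = 3.0 * np.cos(2 * np.pi * t / 60)
```

Here `sine_only_score(y, 60)` is essentially zero while `quadrature_amplitude(y, 60)` returns 3, whatever the phase of the buried signal.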
There is no need to re-invent the wheel. Mathematicians have been fitting sinusoids to periodic data for more than two centuries. Nowadays it’s usually called “sinusoidal regression.” The “Willisgram” merely presents the results of such curve fitting at increments of period dictated by the data spacing. This can’t even identify precisely the period of the best-fitting sinusoid in the record.
There is a crying need, however, for comprehension of what discrete sinusoidal decomposition can and cannot accomplish in the general case, where periodic components may have incommensurable periods and random components have no strict period at all. That is the challenge facing the analysis of geophysical data. The spectral decomposition provided by the DFT smears all off-frequency periodic components into adjacent analysis frequencies, whose increments are dictated by data spacing and by record length. Meanwhile the random components are treated as if they repeat periodically at the given record length; i.e., are highly complex periodic wave-forms instead of random.
When the record is long enough to encompass scores of wave-forms of interest and the noise level is low enough, these are not insurmountable challenges. But multidecadal variations in climate records only a century long are a different matter altogether, especially when there is no physical basis for any expectation that they are periodic. Nor can zero-padding the record of a continuing geophysical signal add any low-frequency information. Unlike the periodic tides, they require different methods, akin to those used in analysis of random ocean waves. Time allowing, I’ll report some results for sea-level records next week.
1sky1 says: “There is no need to re-invent the wheel. Mathematicians have been fitting sinusoids to periodic data for more than two centuries.”
Alas, I am not one of those two-century old mathematicians. I *do* have to re-invent the wheel and sometimes get considerable joy doing so. If I succeed, I learn the underlying technique of how wheels are made, or in this case, detecting the presence of a sinusoidal signal in what otherwise looks like noise.
Naturally I am a little embarrassed when my big discovery turns out to have been discovered already countless times over the past 200 years. But to me it is a validation.
I believe value exists in the fact of independent discovery. If everyone on earth used exactly the same computer program to analyze something it is possible that the program itself is introducing artifacts. As many people as wish should be encouraged to “re-invent the wheel” and discover for themselves whatever there is to discover. Others might inspect the wheel — in this case, a computer program.
1sky1 wrote: “results of such curve fitting at increments of period dictated by the data spacing. This can’t even identify precisely the period of the best-fitting sinusoid in the record.”
Agreed, but that appears not to be the purpose of this analysis, which was merely to check for a 60-year period. I think we agree that an “about” 60-year in-phase sinusoidal phenomenon has not been detected.
But there’s a more sinister problem: Aliasing
Consider sampling a 28-day sine wave at 30-day intervals. At each sample, you are 2 days later into the cycle of the wave. Eventually your samples describe a perfect sine wave with a 420-day (roughly 14-month) period — even though no such wave actually exists.
A typical source: http://redwood.berkeley.edu/bruno/npb261/aliasing.pdf
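The arithmetic behind that example: the aliased frequency is |1/28 - 1/30| = 1/420 cycles per day, i.e., a 420-day apparent period. A quick numerical confirmation (illustrative only):

```python
import numpy as np

# A 28-day sine, sampled once every 30 days, for 8400 days
# (exactly 20 cycles of the expected 420-day alias).
sample_days = np.arange(0, 8400, 30, dtype=float)
samples = np.sin(2 * np.pi * sample_days / 28.0)

# Dominant frequency of the sampled series, in cycles per day:
spec = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=30.0)
apparent_period_days = 1.0 / freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
```

The 28-day wave is far above the Nyquist limit of the 30-day sampling (1/60 cycles per day), so it folds down to the spurious 420-day wave, exactly as described.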
18 years 1 month is very near one of the lunar / tidal cycles. 3 x that is 54 years 3 months. I’d speculate that folks finding a 55 year (ish) cycle are finding a harmonic of lunar / tidal activity.
https://chiefio.wordpress.com/2013/01/04/lunar-cycles-more-than-one/
https://chiefio.wordpress.com/2013/01/24/why-weather-has-a-60-year-lunar-beat/
I’d also suspect that the tide gauge data is not well suited to finding sea level cycles. Tides are highly variable dependent on local geography and variations in lunar influence (latitude and more). Frankly, I’m not sure what data would work well enough other than a global satellite daily survey for the whole planet, but we don’t have that for 200 years.
https://chiefio.wordpress.com/2013/01/29/is-there-a-sea-level/
So I’m not surprised that any one place doesn’t show a 55 – 60 year cycle. It might be interesting if the global average did, but even then I’d not think it “means much”. Probably more to do with some artifact of where the measurements were taken than actual changes in sea level / ocean depth.
Aah, the joys of being a pattern-recognising animal! In the Good Old Days (good because they’re gone) we used to run away from patterns that frightened us: apply cobblers, fierce sabre-rattling tiggers, dread pylons and such. This meant we lived to pass on our pattern-recognising tendencies to our offspring. I call this survival of the frittest. [Frit was the past tense of frighten in the dialect of my youth.] These days we pay a carbon tax so that we don’t all suffer mass thermageddon. But many people seem to still be frit! So it goes…
Michael Gordon:
My comment was addressed to no one in particular, being simply an attempt to correct a common misconception: when geophysicists speak of a “quasi-periodic oscillation of ~x yrs”, they do NOT imply that it is periodic. Other than those processes determined by periodic astronomical forces or the rotation of the Earth, we are dealing with RANDOM processes characterized by CONTINUOUS power densities, rather than by the LINE spectra that FFT-jockeys often presume. And the problem of aliasing is well-known and adequately controlled by competent analysts; it, along with the “Slutsky effect” oft-mentioned by novices, is a red herring.
Tomorrow, I’ll report what multi-decadal oscillations I found in some sea level data.
1sky1: Thank you for your reply of May 5, 2014 at 5:28 pm.
Over the weekend I have taken some time to study WAVELETS as recommended by at least a couple of readers (or the same reader several times) and readily see that wavelets produce information more useful in this context — phenomena that seem periodic but come and go (more or less chaotic in other words).
While I’ve heard of wavelets over the years, the link provided by a reader proved to explain it quite simply, which I will here summarize. Consider a moving-window DFT/FFT that can detect periodic signals that might vanish with time. The problem with a moving-window FFT is that the abrupt window edges generate harmonics and false signals (ringing). So, shape the window with a Gaussian envelope; that eliminates the ringing. The next problem is varying resolution: the window may hold only two cycles of the longest period but dozens of the shortest. So the wavelet is chosen to have the same number of cycles regardless of its test wavelength.
The result is going to be a 3D plot: wavelength, timeline, and intensity or energy at that wavelength and timeline.
It’s a little harder to implement than a DFT, but not by a lot. Its utility is that it can readily reveal periodicity AND it can reveal the comings and goings of that periodicity, which are signs of chaotic behavior.
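The constant-cycle-count idea above can be sketched in a few lines of numpy. This is my own minimal illustration (a hypothetical `wavelet_power` function, not anyone's production code; a real analysis would use a tested package such as PyWavelets): each test wavelength gets a complex sinusoid under a Gaussian envelope spanning the same number of cycles, and the squared magnitude of the correlation gives the intensity at each (wavelength, time) point.

```python
import numpy as np

def wavelet_power(signal, dt, periods, n_cycles=6):
    """Wavelet power for each (test period, time) pair.

    Each wavelet is a complex sinusoid under a Gaussian envelope whose
    width spans ~n_cycles oscillations, so every test wavelength is
    analyzed with the same number of cycles.
    """
    n = len(signal)
    power = np.empty((len(periods), n))
    for i, p in enumerate(periods):
        sigma = n_cycles * p / (2.0 * np.pi)            # envelope width, time units
        half = int(min((n - 1) // 2, 4.0 * sigma / dt))  # truncate at ~4 sigma
        tau = np.arange(-half, half + 1) * dt
        wavelet = np.exp(2j * np.pi * tau / p) * np.exp(-tau**2 / (2.0 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))   # normalize to unit energy
        # The wavelet is Hermitian-symmetric, so convolution here equals
        # correlation with the conjugate wavelet.
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same"))**2
    return power
```

Rows index the test periods and columns the timeline, so plotting `power` as a heat map gives exactly the 3D picture described above: wavelength, timeline, and energy.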
The only time I’ve ever seen real line spectra in FFT is hunting for ships and submarines, sometimes with harmonics if a bearing is going out 🙂
Michael Gordon:
Wavelet analysis is very useful when dealing with data that, for one reason or another, are statistically non-stationary. That is the case with internal-wave packets that may appear only sporadically, or with slow, wholesale changes of signal characteristics observed in changing sea-states. But those are not manifestations of periodicity, as such. Indeed, aside from tides, signals characterized by line spectra are quite rare in geophysics; they are usually associated with diurnal or annual cycles. In the context of sea-level data, the less said about ship-hunting applications, the better.
Because the .csv format of PSMSL data presents obstacles to loading it into my analysis programs, I’ve managed to take only a cursory look at the monthly San Francisco record and the annual data for Cascais. As expected, the acf of the former shows the persistent periodicity of the annual cycle. Upon detrending, the Cascais data immediately reveals a strong, albeit irregular, multidecadal oscillation to the naked eye. The maximum departure from the trend occurs in 1895, followed by a second strong peak in 1963. Resorting provisionally to simple interpolation, instead of predictive filters, to fill the gaps produces an acf with a strong negative minimum at a lag of 28yrs, followed by an upward zero-crossing at the 44yr lag. This is entirely consistent with an irregular, narrow-band “quasi-periodic oscillation” in the ~60yr range.
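The interpolate/detrend/acf procedure described above can be sketched as follows. This is my own illustration (the `detrended_acf` name and the use of linear interpolation and a least-squares linear trend are assumptions; the original analysis may differ in detail):

```python
import numpy as np

def detrended_acf(years, values, max_lag):
    """Fill gaps in an annual series by linear interpolation, remove the
    linear trend, and return the normalized sample autocorrelation out
    to max_lag."""
    full_years = np.arange(years.min(), years.max() + 1)
    filled = np.interp(full_years, years, values)        # simple gap filling
    trend = np.polyval(np.polyfit(full_years, filled, 1), full_years)
    resid = filled - trend
    resid -= resid.mean()
    n = len(resid)
    acf = np.array([np.dot(resid[:n - k], resid[k:]) for k in range(max_lag + 1)])
    return acf / acf[0]
```

For a ~60yr oscillation in a record of this length, one expects exactly the signature described above: a strong negative acf minimum near half the period, followed by an upward zero-crossing.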
More on this tomorrow.
Michael Gordon:
Below is the estimated power density spectrum of the detrended and interpolated Cascais data series, normalized to express the fractional contribution of each frequency band to total sample variance. The frequency index k denotes the number of cycles per 66 years, and the spectral estimates vary as chi-squared with only ~6 degrees of freedom. The miserably wide confidence interval is a consequence of attempting to resolve multidecadal components in a record of only 112yrs. Obviously, greater resolution could be obtained via the raw periodogram with 2dof, but the confidence interval then would be horrendous. For brevity, only the first half of the frequency baseband is presented here; it accounts for 90% of total variance, and the density at higher frequencies is down to the noise level.
k	fraction of variance
0	0.156349
1	0.26658
2	0.121605
3	0.063852
4	0.093805
5	0.064397
6	0.036742
7	0.021372
8	0.021294
9	0.019332
10	0.007961
11	0.006061
12	0.00624
13	0.003018
14	0.003083
15	0.005073
16	0.00362
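For readers who want to reproduce this kind of normalization, here is a minimal sketch of the idea. Note the assumptions: this is a raw periodogram with only 2 dof per estimate, indexing k in cycles per record length, not the smoothed ~6-dof estimator tabulated above; the function name is my own.

```python
import numpy as np

def variance_fractions(x):
    """Fraction of total sample variance contributed by each one-sided
    frequency band of the raw periodogram (mean removed first)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.fft(x))**2 / len(x)   # two-sided periodogram
    half = len(x) // 2 + 1
    onesided = spec[:half].copy()
    # Fold negative frequencies onto the positives (DC and, for even
    # lengths, the Nyquist bin have no mirror image).
    if len(x) % 2 == 0:
        onesided[1:-1] += spec[-1:half - 1:-1]
    else:
        onesided[1:] += spec[-1:half - 1:-1]
    return onesided / onesided.sum()
```

By Parseval’s theorem the folded one-sided periodogram sums to the sample variance, so dividing through yields the fractional contributions; a pure sinusoid with an integer number of cycles in the record puts essentially all of its variance into a single bin.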
If you have any questions or comments, please let me know here. Otherwise, I’ll take up the subject on a different thread.