Guest Post by Willis Eschenbach
[UPDATE] In the comments, Nick Stokes pointed out that although I thought that Dr. Shaviv’s harmonic solar component was a 12.6 year sine wave with a standard deviation of 1.7 centimetres, it is actually a 12.6 year sine wave with a standard deviation of 1.7 millimetres (5 mm peak to peak) … I got 1.7 cm into my head and never questioned it because mm seemed way too small … but there it is. My thanks to Nick for pointing out my error.
So … in answer to the question, is a sine wave with a standard deviation of just under 2 mm detectable by Fourier analysis of the tidal station data … the answer is no. I’ve struck out the incorrect conclusions below.
========================================================================
I got to thinking about whether the Fourier analysis I used in my most recent post was sensitive enough to reveal the putative “harmonic solar component” which Dr. Shaviv claims to have measured. He said that he’d found a sine wave signal with a standard deviation of 1.7 mm in the satellite sea level record. So I added a solar signal with a standard deviation of 1.7 cm to the same 199 long-term tide station records. [Ten times the size of Dr. Shaviv’s signal.] Note that unlike Dr. Shaviv’s so-called “harmonic solar component”, which was actually just a sine wave, I have used the actual sunspot record, and I scaled it to the stated standard deviation (signal strength). Figure 1 shows the “before” graph of the 199 tide station records from my last post.
Figure 1. The average of the station by station periodograms of the tide station data without the solar signal. All stations detrended before periodogram is calculated.
Figure 1 shows the actual tide station data. Notice that there is no signal at around 11 years. And here’s what it looks like with an added solar (sunspot) signal with a standard deviation of 1.7 cm, a mere 3/4 of an inch [ten times the size of Dr. Shaviv’s claimed signal].
Figure 2. Average of the periodograms of all tidal stations with records longer than 60 years, to each of which have been added a copy of the sunspot signal scaled to a standard deviation of 1.7 cm. This gives a signal (sunspot data) to noise (tidal data) ratio of one part signal, seven parts noise.
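For those who want to try this at home, the scaling step is simple. Here’s a minimal R sketch, where “station” and “ssn” are hypothetical monthly vectors (a tide record, gaps and all, plus the matching chunk of the sunspot record):

# scale the sunspot chunk to a chosen standard deviation, then add it to
# the tide station record, month by month (units assumed to be mm, so 17 mm = 1.7 cm)
add_scaled_signal <- function(station, ssn, target_sd = 17) {
  s <- ssn[seq_along(station)]
  station + (s - mean(s, na.rm = TRUE)) * target_sd / sd(s, na.rm = TRUE)
}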
So in answer to the question, can my method detect a signal of the strength claimed by Dr. Shaviv mixed into the maelstrom of the individual tide station records, the answer is clearly yes, no problem. It is quite visible.
Ah, but Willis, I hear you say … surely all of these tide stations wouldn’t be affected by the solar changes at the same time. And that is true, there might be lags that differ on the order of months, seasons, or perhaps even years between the forcing change and the response in a given location. But that is the beauty of my method of averaging the periodograms. The periodogram finds the signal regardless of the phase. The phase of the signal doesn’t matter in the slightest—if the signal is there, the Fourier analysis will reveal it. As a result, the lag at any individual tidal station is immaterial.
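For instance, here’s a quick R demonstration with made-up numbers, showing that the amplitude recovered by a sine-plus-cosine fit is the same whatever the phase of the signal:

t <- seq(0, 60, by = 1/12)                 # sixty "years" of monthly samples
for (ph in c(0, pi/3, pi)) {
  y  <- 1.7 * sin(2*pi*t/11 + ph)          # an 11-year signal at an arbitrary phase
  ab <- coef(lm(y ~ sin(2*pi*t/11) + cos(2*pi*t/11)))[2:3]
  print(sqrt(sum(ab^2)))                   # prints 1.7 every time
}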
Let’s push it further. Let’s see if we can do twice as well, say a signal to noise ratio of one part signal to fifteen parts of tidal noise. That would mean a tiny signal with a standard deviation of 0.8 cm … bear in mind what I’m doing. I’m adding a tiny duplicate of the solar signal to the monthly tide data, with a standard deviation of only eight freakin’ millimetres, less than half an inch. None of the tidal records cover exactly the same time span, and many have gaps, so each record gets a different chunk of the sunspot data. So the question is … can the periodogram find a solar signal at fifteen parts noise to one part signal?
OK, here’s that graph.
Figure 3. Average of the periodograms of all tidal stations with records longer than 60 years, to each of which have been added a copy of the sunspot signal scaled to a standard deviation of 0.78 cm.
Yes, I can still see the signal. It’s clearest at the lower edge of the black error bar lines behind the gold graph line. But I’d say we’ve reached the detection limit for this size of signal in this size of dataset … one part signal to 15 parts noise, a detectability limit of a signal strength of 0.8 cm. Not bad.
Conclusions? Well, I’d say that if there is a solar signal in the sea levels, it is vanishingly small. And I’d also say that a signal with a 1.7 cm standard deviation is large enough to be found if it actually existed … see Figures 1 and 2 to decide if you think it exists.
And finally, I hope that this puts an end to the claim that Fourier analysis can’t find solar signals because they have different periods from nine to thirteen years. It not only can do so, it can do so in the face of stacks of noise and with the solar data covering different periods and often broken by gaps in individual tide station records. Consider that some of the records look like this …
Figure 4. Detrended monthly tidal data, Sheerness.
Despite the gaps, we can find a signal with a standard deviation of 8 mm in the midst of tidal data like that if we have enough tide stations … not bad.
Regards to all,
w.
For Clarity: If you disagree with someone, please quote the exact words that you disagree with. That way we can all understand what you are objecting to.
I can’t believe it’s not the sun, therefore you must have done something wrong.
Because I can’t change my mind, you must have made a mistake.
Yes…. Obviously…. It’s the SUN Stupid 🙂
Nope Seriously… Well done and well ferreted Willis, don’t let the buggers grind you down
I can’t believe it is the sun, therefore I’ll keep trying to prove a negative.
Because I can’t change my mind, it’s uncomfortable sitting on a fence looking for better data and methods.
What you think is frankly immaterial.
Fixed it for you:
I can’t believe it’s not the CO2, therefore you must have done something wrong.
Because I can’t change my mind, you must have made a mistake.
Fixed it for you:
I can’t believe the CO2 matters much or it’s the Sun, therefore you must be selling something.
Because I can change my mind I’ll smell your ready meals but likely eat elsewhere.
It’s not just the CO2. It’s methane, volcanoes, natural cycles, the sun, land use changes, HFCs and CO2.
I can’t believe it’s not CO2, therefore I change the data. No matter that my CO2’s central Q channel, 50% of its radiative and absorptive capacity, is saturated. No matter that human emissions are a mere 3% of the carbon cycle. No matter if assuming ALL of the atmospheric increase is human, the human increment is less than 4% of the planetary greenhouse gasses. No matter that fossil fuel combustion produces twice as much WATER as CO2. No matter that there is no rational signal of CO2 in historic temperature nor any proxy evidence that CO2 EVER caused any warming on the planet. I can’t believe it’s not CO2!
Check the stratosphere. Where it matters co2 is not saturated. We learned this in the 50s
Outgoing radiation in the Q channel does not get to the stratosphere. It does not even get to aircraft with spectrophotometers.
Where it matters co2 is not saturated.
Then does that not falsify the assumption that CO2 causes any measurable global warming?
And where exactly does it matter?
Then does that not falsify the assumption that CO2 causes any measurable global warming?
NO.
1. There is no such ASSUMPTION.
2. The Hypothesis put forward as early as 1896 is this:
a) if you increase CO2 and hold everything else constant the temperature will increase.
b) The effect is Logarithmic
c) The estimates of how much warming vary between 1.5C per doubling and 6C per doubling.
d) Saturation has not been reached
d1. In the early 20th century some scientists argued that saturation HAD been reached.
d2. In the 1950s the US AIR FORCE flew missions in the stratosphere to settle the matter.
d3. Settling this issue was of VITAL national interest .. read the cold war.
IF you want to have IMPACT and not be ignored read Nic Lewis. Your best argument is about sensitivity.
Notice that there is zero transmittance in the center of the CO2 spectrum. This corresponds to the Q channel shown below at wave numbers in the 660’s, and it is about half of the energy of the CO2 spectrum. The “wings” of increased absorption at higher CO2 concentrations are very real in the higher and lower wave numbers to the left and right, but they occur over a narrower range and at lower intensity than the central channel.
Steven Mosher says:
c) The estimates of how much warming vary between 1.5C per doubling and 6C per doubling.
Ok then, using the log effect, please extrapolate from the chart below. Show us how much warming would occur if CO2 was doubled from 400 ppm to 800 ppm:
As anyone can see, there would not be a 1.5ºC rise in global T, or anything close. It is very doubtful that any such warming could even be measured.
wrong again db.
1. you need to show your work,
2. Pre Industrial (1750) is taken to be 280ppm.
3. Double 280 is 560.
4. at 560ppm, you would see a 1.5C to 6C warming from the 1750’s value, PROVIDED nothing else had changed (see the sketch below).
5. We are at 400ppm
6. Temperatures are roughly 1.5C higher than in 1750.
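For concreteness, the logarithmic relation in points (b) and (c) reduces to a one-liner. A minimal R sketch (the helper name is made up; S is the assumed sensitivity in degrees C per doubling):

warming <- function(C, C0 = 280, S) S * log2(C / C0)
warming(560, S = 1.5)   # 1.5C: one full doubling at the low-end estimate
warming(400, S = 1.5)   # ~0.77C so far at the low end
warming(400, S = 6)     # ~3.1C so far at the high end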
Steven M says:
1. you need to show your work
Did you miss the graph I posted? That’s my ‘work’. Please use the graph to extrapolate from the current 400 ppm to 800 ppm. Can you see how little warming that would add? It is far too minuscule to be measurable. Certainly, CO2 rising from 400 ppm to 800 ppm would not raise global T by another 1.5ºC.
The real world is verifying that: despite the rise in (harmless, beneficial) CO2, global warming stopped a long time ago.
And:
6. Temperatures are roughly 1.5C higher than in 1750.
Non-sequitur. You need to show your work.☺
In 1750 we were at the tail end of the Little Ice Age — one of the coldest episodes of the entire Holocene. The assumption that the 1.5º warming was due to rising CO2 is a conjecture. An opinion. Which you’re entitled to.
But if that 1.5º global warming was due to CO2, then we are still in the LIA. That blows Occam’s Razor to smithereens, doesn’t it?
Ockham says the simplest explanation is almost always the correct explanation. And the simplest explanation, corroborated by the graph, is that almost all of the warming effect of CO2 happened in the first few dozen ppm. That’s why no one has been able to produce a measurement quantifying AGW. Carbon dioxide has just too small of a warming effect at current concentrations.
Billy Ockham also said we should dispense with extraneous variables whenever possible. CO2 is an extraneous variable that only confuses matters. The planet is naturally recovering from the LIA. So forget CO2, it only plays an unimportant bit part. Simples, no?
Mr. Mosher appears unaware that the logarithm he refers to is a roughly logarithmic DECREASE in radiative forcing with increased concentration. This is mostly due to saturation. He also appears unaware that the data on saturation comes from the HITRAN efforts he refers to by the air force for national security.
He clearly needs to go to the HITRAN website and download their little program. Mandatory curriculum for anyone in his position.
“Everything should be made as simple as possible, but not simpler.”
Albert Einstein
Comparing forcing to temperature is too simple. There are interesting questions being asked all the time. Here’s one: does solar control ocean heat transport?
http://link.springer.com/article/10.1007/s00343-015-3343-3
Phase shift will matter in the case of perfectly regular sine waves. If perfectly aligned with the sampling, your sine coefficient could be zero when cosine is maximum or vice versa. FFT outputs are mirrored, with one side being the sine coefficient and the other side being the cosine coefficient, if I remember right (more or less; they call one side real and the other side imaginary, but apparently it means the same thing).
http://www.mathworks.com/matlabcentral/answers/7475-fft-result-does-not-jive-with-theory-for-basic-sine-and-cosine
Apply Euler’s equation.
I was thinking about this today. So many post-Mathcad people do their FFTs and, if they even know the difference, put the result in amplitude format then throw away the phase information. Like it doesn’t matter. Half the information they just throw away. That’s how ignorant the fast in FFT has made us.
Note that there was no reply to the question posted in your mathworks link. That’s because the question is ill posed. The sines and cosines merely represent projections of a complex signal ‘vector’ (complex numbers are isomorphic with 2D vectors) onto the X and Y axes of the plot. They are not properties of the data itself, just part of a 2D to 1D transformation of a rectilinear representation.
Not sure what you mean by ‘regular’ sine waves (‘regular’ and ‘normal’ are the two most overused adjectives in mathematics). But recall that any signal can be represented as a sum of sinusoidal components (“Fourier series”). So it’s sinusoids all the way down.
Furthermore, the Fourier transform is shift invariant, which means the absolute spectrum of frequencies does not change due to a time shift of the input signal. This is a very useful property of FTs and does not hold in general for other transforms, such as wavelet transforms.
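A quick numerical check of that property in R, with made-up data:

x <- rnorm(256)
x_shifted <- c(x[51:256], x[1:50])           # circular shift by 50 samples
all.equal(Mod(fft(x)), Mod(fft(x_shifted)))  # TRUE: the amplitude spectrum is unchanged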
But recall that any periodic signal can be represented as a sum of sinusoidal components (“Fourier series”).
But what makes sunspot data periodic? [Not a trick question and not because there are 11 year cycles.]
Michael 2 August 19, 2015 at 9:04 pm
There are two ways of looking at Fourier. One is using imaginary numbers. The other is using the sum of sine and cosine waves, which was the method used by Fourier. I’m a practical guy, and although I can do the former, the latter makes more sense to me.
In either case, what we get from the analysis is both amplitude and phase information at each given period (or frequency). In one case the two variables are the real and imaginary components of a complex number, which together encode the amplitude and phase.
In the case I prefer, the two variables are the amplitudes of the sine and cosine waves with the given period. A sinusoidal wave of arbitrary amplitude and phase can be described as the sum of properly scaled sine and cosine waves. So the two variables are the amplitudes of the sine and cosine waves, and from these we can calculate the resulting amplitude and phase angle.
In either case, the periodogram is unconcerned with the phase. It throws away the phase information, and simply shows the amplitude of the sinusoidal wave with that period.
I use a brute-force Fourier method that I developed myself, and later found out (thanks to Tamino) was first described in the literature in the 1970s. To determine the amplitude and the phase of a Fourier component of a given period, simply do an iterative or other fit of a sine wave to the data. This has the enormous advantage in climate studies of being tolerant of missing data.
At first I used an iterative fit … sloooow. Far too slow. My big insight into doing it was to note that I could use a linear model of the form
data = a * sin(2*pi*t/P) + b * cos(2*pi*t/P)
where P is the period.
Using any linear model solver, say LINEST in Excel, or lm in the computer language R, set up sin(2*pi*t/P) and cos(2*pi*t/P) as the variables, and solve for a and b using your data. Then calculate the amplitude as sqrt(a^2 + b^2). Done.
Of course, you’ll want to use a Hanning or Hamming or other filter on the data first, but there you are.
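In R the whole thing is only a few lines. A minimal sketch (the names are arbitrary; y is the windowed, detrended series, t the times in years, gaps and NAs allowed):

sft_amplitude <- function(y, t, P) {
  si <- sin(2*pi*t/P)
  co <- cos(2*pi*t/P)
  ab <- coef(lm(y ~ si + co))[c("si", "co")]   # solve for a and b; lm drops NAs
  sqrt(sum(ab^2))                              # amplitude = sqrt(a^2 + b^2)
}
# the periodogram is then just this at each period of interest, e.g.
# sapply(seq(2, 30, by = 0.1), function(P) sft_amplitude(y, t, P))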
w.
Willis Eschenbach writes “My big insight into doing it was to note that I could use a linear model of the form
data = a * sin(2*pi*t/P) + b * cos(2*pi*t/P) where P is the period.”
Yes, that is how I do it (more or less) particularly when I have some idea what I am looking for since, as you point out, your datapoints don’t have to be regular and neither do your frequency bins. Also, the number of datapoints does not have to be a power-of-two as it is for FFT if I remember right.
But I think merely adding sine and cosine could result in cancellation of a signal in some circumstances.
I assume datapoints are distributed over y=0, half negative and half positive. If not, the algorithm raises the “floor” since it all looks like signal to varying degree.
Adding sines and cosines produces a peak (all points between 0 and about +1.41) when the signal being discovered is at a 45 degree phase shift relative to the sine term (pi/4). At 7pi/4 (315 degrees phase shift) the resulting addition of sine and cosine produces a sine centered on y=0, thus the sum will also be zero and you have cancellation.
Plot: sin(x) sin(x+7*pi/4) + cos(x) sin(x+7*pi/4)
The term “sin(x+7*pi/4)” produces the “signal” and sin(x) + cos(x) is the sampler or correlator sine and cosine wave.
As it happens, you can correlate with pretty much anything and that’s how GPS works. Each satellite talks on the same frequency but each has a digital pattern unique to that satellite. By multiplying the incoming stream of datapoints by a copy of the expected pattern you will get a peak when the pattern is phase matched to the very noisy signal. Phase is everything to GPS since it derives timing from phase, and from timing it derives location on Earth.
Similarly, if you find a signal at a period or frequency, you then start phase matching for a peak if you really want to know the amplitude of the signal. Vector addition will somewhat give you absolute amplitude but phase matching with just a sine wave is ideal for the purpose and substantially more sensitive. To actually do that you run multiple passes over the data with the same “P” but tweak “t” slightly each pass, effectively phase shifting the sine wave used for correlation.
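In R, the phase-scanning pass might look like this sketch (names invented; y and t as above, and it assumes the data spans whole numbers of cycles):

scan_phase <- function(y, t, P, n_steps = 360) {
  phases <- seq(0, 2*pi, length.out = n_steps)
  r <- sapply(phases, function(ph) mean(y * sin(2*pi*t/P + ph), na.rm = TRUE))
  list(amplitude = 2 * max(r),     # for a phase-matched sinusoid the mean product is A/2
       phase = phases[which.max(r)])
}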
Thanks for going the extra mile–er, centimeter, Willis. 🙂
Willis Eschenbach:
In your article you make repeated reference to a “signal” including: “is the signal detectable” and “despite the gaps, we can find a signal with a standard deviation of 8 mm…” You make reference to “noise” in stating that “…this gives a signal (sunspot data) to noise (tidal data) ratio of one part signal, seven parts noise.”
“Signal” and “noise” refer to concepts that are appropriate in the context of telecommunication but inappropriate in the context of control. Control (of Earth’s climate) is the objective of global warming research. Hence use of “signal” or “noise” is inappropriate in this context.
Whether use of either term is appropriate is a consequence of relativity theory. For telecommunication the signal must travel from the past to the future; this can occur at less than or equal to the speed of light, satisfying a requirement of relativity theory. For control the signal would have to travel from the future to the past at greater than the speed of light in a vacuum, but this would be impossible under relativity theory.
Terry,
Is this a joke? It sure reads like it. The problem being that you’re using two completely different meanings for “control.” I hope you don’t actually think political control (via a dubious appeal to controlling some aspects of the weather) is the same as control theory as a scientific enterprise.
Terry,
applications for the post of Director of Communications for the IPCC closed 2 days ago, which only 3 days ago was in the future by 1 complete diurnal sine wave function, but due to a transient time wave function, at a rate approximately equivalent to 1 day per 1/365 of a year (ignoring the minor 0.25 `noise` signal), now means the forward date has changed function, or possibly phase changed, to a backward function.
Good luck with the job search.
Love it.
Excellent. Don’t tell me you worked on Hijri date conversion too. Visibility is everything 🙂
Oldberg, you are completely wrong about the use of the terms ‘signal’ and ‘noise’. The term ‘signal’ is used by practitioners of data processing to denote any useful or informative waveforms found in a dataset. ‘Noise’ refers to the components (temporal or spectral) that are deemed to be not of interest, useless, or errors of process and observation.
Terry, what terms would you prefer were used in this context?
“He said that he’d found a sine wave signal with a standard deviation of 1.7 cm in the satellite sea level record.”
Is there a link or citation? It sounds like a lot, relative to the model results that were plotted, eg in Fig 4 in your original post.
Thanks, Nick. Sadly, you are right. I remembered the scaling parameter of his purported signal as being 2.5 cm, and never questioned it, because 2.5 mm seemed ridiculous … but now I’ve gone back to take a look and sure enough, it’s 2.5 mm.
This means that he claims that the satellite data are accurate enough to detect a sinusoidal signal with a standard deviation of 1.7 mm, not 1.7 cm as I’d believed.
Hmmm … well, time to edit the head post and point out that you found my error … then I’ll add some data and see if it is detectable in the satellite data.
Thanks for finding the error,
w.
Willis, from your article: ” bear in mind what I’m doing. I’m adding a tiny duplicate of the solar signal to the monthly tide data, with a standard deviation of only eight freakin’ millimetres, less than half an inch.”
8 mm is indeed less than 1/2 inch; in fact it is even less than 1/3 of an inch (one inch = 25.4 mm, so half would be 12.7 mm, and 8 mm is less than a third, but I guess I am nitpicking here LOL).
Thanks, asybot. Yeah, 8 mm is about 5/16 of an inch, but “less than half an inch” scanned better and was equally true.
Numbers are always trouble in a post. I figure for every number I put in I lose one reader … so I’d rather use something like “less than half an inch” rather than “about 5/16 of an inch”.
Plus which “less than half an inch” has more impact and is more memorable, while being just as true.
As you can see, I do put thought into these small details, particularly with numbers …
All the best.
w.
On global sea level:
its yearly rate of change looks like an irregular periodic swing, as can be seen on this diagram obtained from a running average over 3 months, thus eliminating the yearly periods:
http://blog.mr-int.ch/wp-content/uploads/2015/07/gmsl_rise_rate.png
Source: John A. Church, Neil J. White, “Sea-Level Rise from the Late 19th to the Early 21st Century”, Surveys in Geophysics, Volume 32, Issue 4-5 , pp 585-602
and Jevrejeva, S., J. C. Moore, A. Grinsted, and P. L. Woodworth, “Recent global sea level acceleration started over 200 years ago?” (2008), Geophys. Res. Lett., 35
Satellites : Combined TOPEX/Poseidon, Jason-1 and Jason-2/OSTM
The coefficient of correlation R^2 for the Jevrejeva series is 0.0555, which indicates that no statistically significant acceleration of the rate of rise can be determined over a long period of time.
Sorry: over 13 (thirteen), not 3 months
And sorry again: for the Jevrejeva set the running average was calculated over 7 years (it contains only yearly data points)
Would be interesting to see a frequency spectrum of the Jevrejeva data.
Heads up…
http://www.bloomberg.com/news/articles/2015-08-19/weather-channel-said-to-hire-morgan-stanley-pjt-to-seek-sale
http://www.weather.com/science/environment/news/global-warming-weather-channel-position-statement-20141029
Could it be Jupiter? Heaviest of the planets. Orbital period 11.86 years.
Earth overtakes Jupiter roughly every 399 days. Seems to me that would have a larger impact than Jupiter’s own orbit.
Agreed, but that effect could be ‘hidden’ in the peak at 1 year.
I’m just floating an idea, not necessarily arguing that it is true.
Distance varies between 4 and 6 AU every 399 days, but the gravity pull is very small. Both the Earth’s and Jupiter’s magnetospheres are regularly wrapped by the solar ‘magnetic ropes’
http://i.space.com/images/i/000/043/034/original/solar-eruption-model.jpg
The Earth’s 399 day transit is picked up by the N. Atlantic SST oscillations.
Vuk, citation please.
The Earth’s 399 day transit is picked up by the N. Atlantic SST oscillations.
====================
Why not post to WUWT showing the data and analysis? It would throw the large body of science that believes there is no effect on its ear. Seriously.
No; unless the data is very messy, FT would resolve 12 mo from 13 mo. However, such a signal would be totally removed by the ever popular 13-month runny mean.
The tidal effect of Jupiter is almost certainly negligible, so I would not expect to see anything at around 396d from Jupiter. However J does affect the perihelion distance of Earth from the sun, and that is related to the Jovian year of 11.85y. It is also implicated in the complex lunar “months” of 27.323, 27.545 and 29.5 days, and the moon is the primary mover of tides.
Jupiter’s relationship to those periods is fascinating but far too long for an elevator-speech presentation.
Mr. FB
I have been looking at the CET and the Earth’s magnetic field for a number of years now; the data, such as they are, show an indisputable nonstationary correlation (the two data sets’ timings are ‘intertwined’)
http://www.vukcevic.talktalk.net/HmL.gif
which is hard to explain. Detailed analysis of the N. Atlantic SST – Arctic link revealed a strong ~400 day factor. I know of only one reason for it, and it is unlikely to be the tidal gravity pull, since that is almost negligible compared to the lunar one.
I will write a full description and hopefully get it published; then anyone would be able to reproduce the results.
“it would throw the large body of science that believes there is no effect on its ear.”
I doubt it; it will be declared just another coincidence and duly ignored.
Vuk’, interesting as ever but not much use until you either do publish or provide enough information about the data source for someone else to look at it.
How significant is your R^2 value with such heavy filtering? Does it look any better with a proper non-distorting filter? You are using a crappy running mean that will invert part of the data and let a lot of LF noise through. Try a better filter (e.g. Gaussian) with a shorter period like 10y.
Your delta(Bz) is basically a 30-year running mean on the rate of change as well.
What is the R^2 without the filter? That is the first thing to report, with an estimation of significance for the number of degrees of freedom.
The number of degrees of freedom with a 30y filter will be quite low, so you need to estimate significance there too. 0.95 may not be that impressive. It’s not much use quoting r^2 values on their own.
You presumably know that but I don’t recall you ever showing a significance value with your results.
This is interesting, please do it more rigorously.
A quick look at CET with a 120mo triple running mean (120, 104, 74 mo) has a peak around 1730 that better matches your Bz (wherever that comes from) and is far “smoother” than your 30y ma.
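In R a non-distorting Gaussian smooth is only a couple of lines; a sketch (the kernel width is a guess at roughly a 10y equivalent on monthly data):

gauss_smooth <- function(y, sigma = 40) {         # sigma in months
  k <- dnorm(seq(-3*sigma, 3*sigma), sd = sigma)  # truncated Gaussian kernel
  as.numeric(stats::filter(y, k / sum(k), sides = 2))
}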
Hi Mike
Thanks for the observations; the above graph is the reason I decided to look further. Delta Bz comes from the Arctic, and since the CET is highly responsive to the N. Atlantic SST, the above correlation, however crude, was the trigger for further research, which revealed a strong ~400 day factor in the AMO – Arctic link (data from NOAA), in which neither the CET nor Bz features as such.
p.s. for filtering I often use Butterworth low pass filter.
Vuk.
Please explain what your N Atlantic hydro magnetic loop is.
Tx
Well I wish I could.
Ryskin’s paper claims that ocean circulation in the N. Atlantic modulates the Earth’s magnetic field.
http://iopscience.iop.org/1367-2630/11/6/063015/pdf/1367-2630_11_6_063015.pdf
On the other hand, ocean currents are a saline electric conductor; any change in the magnetic field could affect the velocity of the circulation. If so, one could speculate about the existence of a feedback loop between saline water circulation and magnetic field change, hence: hydro-magnetic loop.
Just the idle mind’s speculation.
The linear regression dot distribution for Loehle’s global temperatures (right hand portion) clearly suggests some kind of feedback in action
http://www.vukcevic.talktalk.net/LGt-Bz.gif
For time series analysis Butterworth is not a good choice. Butterworth filters are used for frequency domain analysis. For time domain analysis Butterworth filters have undesirable effects such as impulse responses, long-distance correlations, and so on.
Also you have edge effects you need to deal with. Do you use reflection, extension, zero padding or something else?
Good discussion here: http://climateaudit.org/2008/10/07/are-butterworth-filters-a-good-idea-for-climate-series/
If you post the source code we will stop asking you questions and provide you with constructive feedback to improve your analysis.
Peter
Mr. Sable
Thank you for your remarks. There is nothing there I am not aware of; the Butterworth filter’s +s and –s are well known, as are the methods and length of padding necessary (I use a ‘trend function’ for both back and front).
Having all that in mind, it is easy to use, fast, and satisfactory for most practical purposes, including time data series.
Here is what the CRUTEM4 end looks like:
http://www.vukcevic.talktalk.net/BF.gif
Now you can write your own code if so inclined
I google “trend function” and I get Excel’s linear trend function that returns a line. Your padding certainly does not look like a line.
So once again, you leave us guessing, instead of just publishing source+data+tests. It just means we don’t believe you for the most part. It’s bad rhetoric style in the 21st century.
Also, you typically test a filter function with known signals such as step, impulse, ramp, pink noise, sine wave, etc, not crutem4. crutem4 may or may not contain the signal components that those test signals have that validate edge conditions, step responses, etc.
Peter.
It is very simple: for each data point the data is extended, the trend function being the linear trend of the previous 10% of the total data points in the data set. Since CRUTEM4 has ~160 data points, it is the linear trend as determined by the previous 16 data points; the padding eventually ends in a straight line. At the start of the data the ‘TF’ works in reverse.
Jupiter makes the Sun wobble:
From memory, it looks like the Sun’s position changes by one Sun diameter, varying the Earth to Sun distance by about 1%. It seems reasonable that this distance change could affect the Earth/Moon system.
The sun and the planets all orbit around the center of mass of the solar system (mass centroid). And this centroid wobbles due to the changing distribution of planetary mass. Jupiter of course is the primary player. The planets cause tidal waves to develop within the sun, just as the sun causes tidal effects on the planets. These tidal perturbations are thought to be the underlying cause of the 11 yr solar cycles. (From an Isaac Asimov book, The Stars In their Courses, that I read decades ago… )
-Brian in central Nevada
This is a good example of a Lorenz oscillation.
Since radiation drops off as 1/r^2, does this mean that the incident radiation from the sun changes by (1/(0.99r)^2 - 1/r^2) / (1/r^2) ≈ 2%? At 340 W/m^2 that’s a fluctuation of 6.8 W/m^2, which is pretty significant. It should show up in some measurement somewhere.
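Back-of-envelope in R (assuming the distance change really is 1%):

r <- 1
r_near <- 0.99
dflux <- (1/r_near^2 - 1/r^2) / (1/r^2)   # ~0.0203, i.e. about 2%
340 * dflux                               # ~6.9 W/m^2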
Peter
Willis, this isn’t quite a kosher way to test whether your method works. You need to add the cycle to data you know for sure doesn’t have one, then see if your method picks it up. If you add it to the tidal gauge data you are assuming the tidal gauge data doesn’t have a cyclical signal in it. You may say “yes, but I already showed that it doesn’t” with your previous analysis, which ordinarily would be fine, except you are testing the method you used in your previous analysis.
Here’s a suggestion for the right way to do it: create synthetic data with similar characteristics to the tide gauge data, then add a cycle to the synthetic data. If you’re right, this shouldn’t change the result, but since whether or not you’re right is the very thing you want to test, that’s what you need to do.
You could always use the NOAA methodology. If something is not there that you want to be there, such as warming oceans, you just change the data so it is there (in the data as least), then say -“oh look what we’ve found”.
I don’t understand why you think Willis’ method isn’t kosher. If there is an underlying signal, the addition merely adds to it. If there isn’t, it just adds itself. It’s pretty clear from what Willis did that there’s no signal of the type and amplitude that Dr. Shaviv claims. There should be a bump. There’s no bump.
D.J. If there’s no signal there isn’t a problem, except that you can’t test something by first assuming that it’s true, except by contradiction.
The reason it isn’t kosher is that if the signal is present but not detectable (which is the question we want to test whether or not we can reliably answer), adding a signal on top of a signal artificially lowers the threshold below which we conclude a signal isn’t detectable.
Look at it this way. Willis wants to know “how large could a solar cycle signal be and my method give a false negative result as to its presence?” To answer that we need to take data we know doesn’t have any such signal at all, and add the cycle we want to detect in varying magnitudes.
We have N + C where N has no cycle in it, and C is a cyclical component added to it.
But the test Willis actually ran was U + C, a series which may or may not have a cyclical component. It therefore consists of a series N which doesn’t have a cycle in it and a series kC where k is a number between zero and one inclusive.
Let’s say for the sake of argument that the actual data do contain a signal that’s, say, 30% the size of the signal Willis artificially adds to the data. When his method detects a cycle, Willis concludes that it can detect a cycle of magnitude C, but actually, he’s found that it can detect a cycle of magnitude 1.3C! And say he finds that the signal has to be at least of magnitude .5C to be detected; no, what he’s actually found is that it can detect a signal as long as it’s at least .8C in magnitude!
These numbers are arguendo. If k happens to be zero, there’s no problem! But that k is zero is the proposition Willis wishes to know whether his test can successfully…test!
Willis has made a mistake that exaggerates the power of his statistical test to avoid false negatives.
I’ve multi-linear fitted the Brest data to a detrended Fourier type series and do not find a statistically significant cycle around eleven or twelve years. Cycles at one year, half year, 65 years, and 3.1 years, were statistically significant, in descending order.
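A sketch of that kind of fit in R (hypothetical names: y is the detrended monthly series, t the time in years):

periods <- c(1, 0.5, 65, 3.1, 11)    # candidate periods in years
X <- do.call(cbind, lapply(periods, function(P)
  cbind(sin(2*pi*t/P), cos(2*pi*t/P))))
summary(lm(y ~ X))                   # the t-tests flag which sine/cosine pairs are significant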
I can hear the data screaming … UNCLE
I like this kind of trial-and-error approach to obtaining detection thresholds. Equations exist to give quick and easy estimates, but my ex-employer is likely to get burned shortly by promising something based on equations that in reality cannot be achieved.
I don’t think anyone said Fourier methods would not work, just that they are a bit sub-optimal due to the varying period, which spreads the signal energy over multiple Fourier cells, i.e. the peak cell amplitude will be less than for a pure sinewave, a loss that can be avoided.
Good point with respect to varying periods and FT. I would do a scatter plot of sea level rise versus SSN and see if there are any hints of correlation.
I like the use of tide data for this analysis. Some have complained that there is a seasonal signal that is creating too much noise, and some complained about using the rate of change of sea level, but what this shows is how insignificant the signal must truly be compared to everything else that the tide gauge is measuring.
Thanks to everyone on both sides of this ongoing thread for an interesting exercise.
Tide gauges measure Relative Sea Level. That is, they measure the local sea level relative to the nearby land surface. That relative measurement is affected by things like the rebound of the Earth’s crust after the end of the last ice age (Glacial Isostatic Adjustment), tectonic uplift, self attraction, loading, subsidence, etc. If you just average together a collection of tide gauge data, then the result is not the Global Mean Sea Level (GMSL). See the University of Colorado’s Sea Level Research Group FAQ: Why is the GMSL different than local tide gauge measurements?
When different research groups use the tide gauge data to estimate GMSL, they come up with significantly different answers. The University of Colorado’s Tide Gauge Sea Level page lists eleven different results for estimates of the GMSL. They range from 1.2 mm/yr to 2.8 mm/yr. The differences result from different ways of considering vertical crustal movement, the number of tide gauges used, and the length of the tide gauge records. Basically, estimating GMSL from tide gauge data is a very involved and very tricky business.
The tide gauge data presented in this post are not an estimate of Global Mean Sea Level. Therefore, the tide gauge data presented in this post cannot address the question of whether or not there is an 11 year cycle in the GMSL.
Willis,
The steric plus eustatic effect claimed by Howard, Shaviv and Svensmark amounts to a total amplitude of 2.4mm, repeat MILLIMETERS, over the solar cycle. The total peak to trough change is less than 5mm. If you convert that to a standard deviation I would estimate something close to sd = 1.7 millimeters (2.4 mm / sqrt(2)). I cannot therefore see how your sd = 1.7 centimeters is derived or where it fits in. It looks way too big for a detection test of the solar cycle.
As I explained in the previous thread, you cannot find the solar cycle in the data unless you apply a reasonable methodology to eliminate the massive seasonal variation. Once you have filtered out the very high frequency variation, the solar cycle (and other multiannual cycles) does then become visible with a variety of methods of spectral analysis.
You should perhaps be asking yourself why the well-documented 18.6 year lunar-tidal cycle is not visible in your data (either) – especially in individual local records where it should show up as a notable feature with higher amplitude than solar cycle effects.
Indeed, 18.6y is one of the first things I was expecting to see. It was certainly pretty visible in the Brest data from the last post.
Paul, can you remind me what the freq response is for the annual monthly difference you suggested? I seem to recall it is some odd notch filter.
I agree with using a difference to detrend the data and remove autocorrelation, but I’d be wary of this annual difference. More inclined to use a low pass after the diff. ( or a diff-of-gauss to do both at once ).
Just had a quick look at the power spectrum of the rate of change of PSMSL data for Honolulu that I did a year or two ago.
There is a clear, though small, peak at 10.8 years, another similar one at 20.4y, and a much stronger one at 5.38y. Obviously the diff will be attenuating the longer periods.
Clear 10.8 in PSMSL Ijmuiden (Netherlands) too.
I think Willis needs to look at why he is not finding what should be there.
very clear 10.7 and 18.1y in Hoek van Holland too.
These results were obtained by using a low pass filter to remove the annual and sub-annual variations from the monthly rate of change as I suggested above.
@Mike,
Paul, can you remind me what the freq response is for the annual monthly difference you suggested? I seem to recall it is some odd notch filter.
Yes. It applies a crude band-stop or notch filter to the time-differentiated series to reduce the obscuring effect of the very high amplitude intra-annual change in tide levels. It also has the less desirable effect of frequency-weighting the amplitudes of the original series, which causes relative attenuation of the longer wavelengths in the subsequent Fourier analysis, but the quasi 11 year cycle is still visible. (cf your Honolulu analysis)
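For the record, a sketch of that response in R (assuming the 12-month difference y[t] - y[t-12] on monthly data):

f <- seq(0, 0.5, length.out = 500)   # frequency in cycles per month, up to Nyquist
gain <- 2 * abs(sin(12 * pi * f))    # gain of a 12-month difference
plot(f, gain, type = "l", xlab = "cycles/month", ylab = "gain")
# zeroes at f = 0 and at the annual frequency (1/12) and its harmonics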
A low pass filter is probably better. Additionally there are empirical spectral analysis methods which allow for some variation of periodicities within a bandwidth. But Willis is a do-it-yourself kind of guy and I did not know whether he had ready access to signal processing software. I wanted to give him something which he could very easily test for himself with tools to hand.
I hope that your results prompt Willis to re-think what he is doing here.
Mike,
My first response to you vanished into the ether.
Yes, it is a crude notch filter or bandstop filter applied to the differentiated series. It has the unfortunate effect of attenuating the longer wavelength amplitudes in relative terms, but it is still sufficient to render the solar cycle visible.
A low-pass filter is probably better. Additionally, simple Fourier analysis is not necessarily the most sophisticated way to isolate the quasi-11-year signal when some variations in periodicity and amplitude are expected. But Willis is a do-it-yourself kind of guy and I did not know whether he has access to more sophisticated signal processing software.
Willis –
You make good points. You did exactly the right test. It is a misconception to suppose that the Fourier Transform is trying to “detect” a frequency component. It is displaying a signal in a dual domain. Often this is useful. But it is not going to “dig out” a tiny signal. If you don’t “almost see” it in the time domain itself, don’t expect anything overwhelming in the FT. Noise is noise.
Further, given that most “Fourier Transforms” are achieved with an FFT, and this involves sampling, end effects (“windowing”) and many issues with resolution and so on, it is easy to get fooled. That’s why most of us who use FFTs run and rerun “toy” examples before even loading any real data.
Bernie
I’m assuming that Willis is using his “slow FT”; FFT requires continuous, evenly spaced data, and that is not what is being used. He should be more explicit about what “FT” means.
Mike –
I think Willis developed what he called a “slow Fourier transform” as a kind of template matching to a sinusoid. But there is no traditional “slow Fourier transform”. This is all there is:
http://electronotes.netfirms.com/AN410.pdf
Bernie
Yes that’s it. I did say “his” SFT. It seemed to give results compatible with FFT but just took a lot longer. The huge advantage being that it does not require continuous, equally spaced data like FFT does.
Also since it does not effectively wrap the end back onto the start, it does not require the usual window functions, which are themselves distorting.
I’m only guessing that’s what he did, he really should have explained what he was doing in the article.
Willis fixed his lack of windowing a while back after I and a couple of others nagged him… AFAICT his “slow FT” is quite robust at this point. I independently developed a matlab/octave model and my test data agrees mostly with his test data as well as agreeing with my experience in signal processing. (sine wave, step function, ramps, impulse, pink noise, gaussian noise… am I missing a test?)
Proper software engineering methods would require posting the source every time for every submission, and the test results should be available in that post of course. Willis didn’t do that, so I can’t tell if his code is still robust for this analysis (it was a bit messy, thus easy to introduce new bugs).
I continue to be amazed that the signal processing folks and stats folks don’t talk more. Linear interpolation and filtering can easily fix continuous data “problems” with FFT. (Heck if you’re careful zero padding can do it as well if you are only interested in low frequency components, it’s the default in matlab/octave). Equally spaced data is a “depends” but is also addressable with conventional signal processing techniques. FFT is used by signal processing folks, and stats folks use oddball (to me) tools like the SFT, loess filters, etc.
Peter
Peter Sable writes “I continue to be amazed that the signal processing folks and stats folks don’t talk more.”
I get quite a bit out of this conversation.
Peter said:
“Linear interpolation and filtering can easily fix continuous data “problems” with FFT.”
Really! Interpolation adds NO actual information. You must mean something else.
Please do elaborate.
Bernie
Bernie:
You say, “But there is no traditional ‘slow Fourier transform'”, but even the link you give shows the “discrete Fourier transform” as the standard transform for a sampled data series, with the FFT as just a more computationally efficient version of this.
Compared to the FFT, the DFT is exactly a traditional ‘slow Fourier transform’.
Curt – Not at all !
The DFT is just N-equations in N-unknowns (often way too slow in practice). Please read the notes that follow my “map”. The FFT is a Fast algorithm. They both compute the EXACT same RESULT – one is just obtained faster.
And I think the DFT is unrelated to the procedure Willis calls a SFT.
Bernie
Precisely! There is no information to be added; it’s missing, and you don’t want to add any. That’s called “drylabbing”. I hate drylabbing; it’s one of the reasons I think the CAGW folks are wrong, they do it all too often.
However, FFT/DFT does require a continuously sampled function; it’s the nature of how it works.
So to fill in empty points to satisfy the requirements of FFT, you generally have the following options. Keep in mind the requirement is to add no information to the PERIODOGRAM that wasn’t there already, while also not removing any useful information. You can modify the time domain signal in a manner where it doesn’t affect the periodogram in a significant manner. Tricks generally derived from FT properties:
https://en.wikipedia.org/wiki/Fourier_transform#Properties_of_the_Fourier_transform
(1) just assume the points are zero, then low-pass filter to remove the resulting signal/energy spike. This has problems because it introduces significant anomalies that are proportional to how far the signal is from zero. You can move the filter to a lower frequency to compensate, but that can start destroying useful information elsewhere in the signal. Unfortunately this is the default in matlab/octave for e.g. its upsample functions.
(2) linear interpolate. This introduces a fairly small high frequency blip that is easily low pass filtered with a filter that doesn’t destroy information in the rest of the signal. I tend to use this method, it’s fairly fast (see the sketch after this list).
(3) spline or polynomial interpolate, likely don’t need a low pass filter. I have found I get almost no different result between this and linear interpolate+filter.
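To practice what I preach, here is a minimal R sketch of option (2), where y is a hypothetical regularly sampled series with NA gaps:

gap_fill_periodogram <- function(y) {
  t  <- seq_along(y)
  ok <- !is.na(y)
  yf <- approx(t[ok], y[ok], xout = t, rule = 2)$y  # linear interpolation; rule = 2 holds the ends
  k  <- c(1, 2, 3, 2, 1) / 9                        # small kernel as a crude low-pass
  ys <- as.numeric(stats::filter(yf, k, sides = 2))
  ys <- ys[!is.na(ys)]                              # drop the filter's edge NAs
  Mod(fft(ys - mean(ys)))^2 / length(ys)            # raw periodogram
}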
For those of you who might have noticed, I insist on source code to back claims. I’m working on cleaning up the above observations, I’ll publish one of these days… trying to get the same data through R and octave is driving me nuts (I’m very new to R), and a lot of the stats functions aren’t available in octave, only in R.
Also Willis stopped publishing his source code, and he writes in a style where it’s hard to test. So just testing his code and comparing it to other related methods has its own headaches. That’s how I found his (lack of) windowing problem, which he subsequently addressed (though to my eye I can see sinc smearing pretty easily just by looking at the graph).
Peter
Peter – thanks for that. I think we would agree completely if standing around a chalkboard, coffee cup in hand. I like the “how is the FFT trying to fool me this time” school! Of course, “subtle but not malicious” applies. Best wishes.
Bernie
Not in the 11 year so-called sunspot cycle (no signal will be apparent), because it is obscured by noise and many other climatic items that run counter to the solar activity.
In addition, the solar activity’s duration of time in a typical 11 year cycle is way too short and cancels itself out.
Besides, typical 11 year sunspot cycles are going to act to keep the climate stable, not unstable.
Willis is entitled to write about any solar study he wishes but this particular one is a big waste of time.
Simple analyses disprove consensus assertions.
Proof that CO2 has no effect on climate and identification of the two factors that do cause reported average global temperature change (sunspot number is the only independent variable) are at http://agwunveiled.blogspot.com (a new update with 5-year running-average smoothing of measured average global temperature (AGT) results in a near-perfect explanation of AGT since before 1900; R^2 = 0.97+).
Willis,
Thanks for continuing on this topic with another post. I think the most valuable part of this is the discussions that have sprung up regarding the meaning (or lack thereof) of this signal if it is there.
I am a bit confused though. I don’t know where you came up with the 1.7cm number for standard deviation??? Are you still talking about the paper entitled “The solar and Southern Oscillation components in the satellite altimetry data” by Howard, Shaviv and Svensmark?
I’ve searched this paper carefully, and the number of 1.7cm is nowhere to be found. I agree with you that a sine wave (or your sunspot data) with a standard deviation of 1.7 cm would be easily detectable. Did you make a mistake on units — perhaps changing mm to cm?
In the paper’s equation (1), table (1) and also in figure (2) the claimed solar component is clearly stated to be a 2.5 MM peak sine wave so it will have an RMS value (roughly the same as standard deviation) of about 1.76 MM. Your signal with a 1.7 CM standard deviation is equivalent to a sine wave with a 2.4 CM (or 24 MM) peak value.
I do agree that a signal this large (24 mm peak sine wave, or equivalent sunspot data) would be easily detectable. But that’s not what the quoted paper is claiming is there — they are only claiming a 2.5 mm peak sine wave (equivalent to 1.76mm standard deviation).
So unless I’ve made my own mistakes here (that’s not impossible either), the bottom line is this:
It is not possible to determine the presence of the solar signal in the above quoted paper using the indicated tide gauge data. It might be there, it might not, but you cannot tell by looking at tide gauge data.
In an article that I posted last year, I did a Fourier transform of HADSST3gl sea surface temperatures http://wattsupwiththat.com/2014/07/26/solar-cycle-driven-ocean-temperature-variations/, Fig. 4
and found a variation of about 0.13C peak to trough over the last four solar cycles. I also found that it would take about 0.62 Watt/m^2 peak to trough on a global average to produce that level of temperature variation. That level of solar heating variation is very considerably more than the averaged variation of solar irradiance at the earth surface during a solar cycle. Nevertheless, the thermosteric contribution to sea level rise would only be about 1.7 mm peak to trough (http://wattsupwiththat.files.wordpress.com/2013/10/clip_image005_thumb1.jpg?w=936&h=669 integrated). This would likely be lost in the noise of individual tide gauge records, but might show up in carefully averaged data, as in Holgate’s work http://i1244.photobucket.com/albums/gg580/stanrobertson/sealevel3_zpsc79e6748.jpg
To be honest I would say that Nir Shaviv has a good case for libel; I hope Willis Eschenbach has a good lawyer.
OK, I can see it now: “Yer honor, he bludgeoned my hypothesis with a blunt transform.”
The detectability of any signal of a given variance depends upon the signal-to-noise ratio. With tide gauge records, where the least-count-unit round-off noise is typically more than a centimeter RMS, the prospect of detecting a stochastic signal of a few millimeters RMS is slim to none.
Agreed, your post (which is more concise) crossed with my comment below.
Anyone who has been involved in shipping and has first-hand experience in obtaining/assessing accurate draft information for ports/port approaches/river estuaries etc., or the ballast/laden draft of vessels, would readily appreciate the difficulty (I would say absurdity) of hoping to ascertain a small signal pertaining to a claimed variation of 1 to 3 mm annually from such noisy data.
Leaving aside the tectonic movement and isostatic rebound problem, how does one know that there has not been minor settlement of the structure to which the tide gauge is attached (how good is the foundation into the seabed?), or silting in and around the relevant datum point? How many tide gauges are scaled in millimetres? Certainly not those put in place 50 years ago!
How do you know that the measurements are taken at precisely the same point in the tide cycle (not simply daily, but also lunar and seasonal changes of exceptionally strong neap tide or spring tide)? What TOB adjustments are being done? After all, TOB is claimed to be extremely material to the proper assessment of temperatures and the global anomaly assessment.
How can a satellite which measures a continuously moving surface (which movement can amount to 10s of metres depending upon swell and storm conditions over the oceans) realistically measure to a millimetre accuracy? Even measurement to a fixed land point to millimetre accuracy is difficult, but the surface of the ocean; come on, someone is having a laugh when they claim to be able to measure change to within 1 to 2 millimetres.
Further, what is the annual variation in rainfall? How much is annual variation in rainfall a reflection of the amount of solar insolation being absorbed by the oceans and/or minor differences in the amount of evaporation from the oceans?
Before embarking on an analysis of this kind of data, one should first conduct a quality audit of the data, to see whether anything useful could be found from the data given its quality issues.
Willis
You often rhetorically ask whether something passes the ‘smell/sniff test’.
Like most data sets in Climate Science, the tidal data sets are extremely noisy and have huge error margins, and when coupled with tectonic movement and changes in the ocean bed they render it impossible to eke out a small signal.
Given the extremely modest variation of TSI over the solar cycle, why anyone would expect to be able to eke out the signal from the noise is beyond me. Quite simply, looking for this solar cycle signal in the sea level data does not pass the smell/sniff test.
Further, as far as I understand matters, those who consider it probable that the sun is a major player in natural variation on a long term basis (ie., one measured on say a centennial basis if not longer) do not argue that the small changes in TSI over the solar cycle are the drivers for climate change on a decadal basis. It is the prolonged and cumulative effect of a ‘quiet’ or ‘strong’ sun which is claimed to drive temperature/climate change over a multi-decadal basis (aided by ocean cycles and the amount of energy that solar has imparted into the oceans over the long term).