**Guest Post by Willis Eschenbach**

Among the papers in the Copernicus Special Issue of Pattern Recognition in Physics we find a paper from R. J. Salvador in which he says he has developed “A mathematical model of the sunspot cycle for the past 1000 yr”. Setting aside the difficulties of verifying sunspot numbers for, say, the year 1066, let’s look at how well his model can replicate the more recent record of the last few centuries.

*Figure 1. The comparison of the Salvador model (red line) and the sunspot record since 1750. Sunspot data is from NASA; kudos to the author for identifying the data.*

Dang, that’s impressive … so what’s not to like?

Well, what’s not to like is that this is just another curve-fitting exercise. As old Joe Fourier pointed out, any arbitrary waveform can be broken down into a superposition (addition) of a number of underlying sine waves. So it should not be a surprise that Mr. Salvador has been able to do just that …

However, it should also not be a surprise that this doesn’t mean anything. The problem is that no matter how well we can replicate the past with this method, it doesn’t mean that we can then predict the future. As the advertisements for stock brokers say, “Past performance is no guarantee of future success”.
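Fourier’s point is easy to demonstrate for yourself. Here’s a rough sketch (Python with numpy; the “record” is just random noise, not sunspot data): keep enough sinusoids and you can replicate essentially any past record, which tells you nothing about its future.

```python
import numpy as np

rng = np.random.default_rng(0)
record = rng.normal(size=256)          # an arbitrary "past record": pure noise

coeffs = np.fft.rfft(record)           # decompose into underlying sinusoids
keep = 40                              # retain only the 40 strongest components
weak = np.argsort(np.abs(coeffs))[:-keep]
coeffs[weak] = 0.0
fitted = np.fft.irfft(coeffs, n=record.size)

# Even pure noise is "replicated" respectably by a few dozen sinusoids,
# yet the fit has no predictive content whatsoever.
r = float(np.corrcoef(record, fitted)[0, 1])
print(r > 0.5)
```

The correlation between the “model” and the noise comes out well above 0.5, despite there being nothing to model.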

One interesting question in all of this is the following: how many independent tunable parameters did the author have to use in order to get this fit?

Well, here’s the equation that he used … the sunspot number is the absolute value of

*Figure 2. The Salvador Model. Unfortunately, in the paper he does not reveal the secret values of the parameters. However, he says you can email him if you want to know them. I passed on the opportunity.*

So … how many parameters is he using? Well, we have P1, P2, P3, P4, F1, F2, F3, F4, N1, N2, N3, N4, N5, N6, N7, N8, L1, L2, L3, and L4 … plus the six decimal parameters, 0.322, 0.316, 0.284, 0.299, 0.00501, and 0.0351.

Now, that’s twenty tunable parameters, plus the six decimal parameters … plus of course the free choice of the form of the equation.

With twenty tunable parameters plus free choice of equation, is there anyone who is still surprised that he can get a fairly good match to the past? With that many degrees of freedom, you could make the proverbial elephant dance …
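To see the elephant in miniature: with as many tunable amplitudes as data points, a “perfect” fit to any record whatsoever is guaranteed. A toy sketch (Python/numpy, using a fixed cosine basis as a stand-in; this is not Salvador’s actual basis):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=20)                 # any 20 "observations" whatsoever
t = np.arange(20.0)

# 20 fixed cosine "cycles", 20 tunable amplitudes (a DCT-II basis)
A = np.cos(np.pi * np.outer(t + 0.5, np.arange(20)) / 20)
amps = np.linalg.solve(A, y)            # "tune" the 20 parameters

print(np.allclose(A @ amps, y))         # a perfect hindcast, guaranteed
```

The linear solve always succeeds here because the basis matrix is square and well conditioned, so the elephant dances for any input at all.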

Now, could it actually be possible that his magic method will predict the future? Possible, I suppose so. Probable? No way. Look, I’ve done dozens and dozens and dozens of such analyses … and what I’ve found out is that past performance is assuredly no guarantee of future success.

So, is there a way to determine if such a method is any good? Sure. Not only is there such a method, but it’s a simple method, and we have discussed the method here on WUWT. And not only have we discussed the testing method, we’ve discussed the method with various of the authors of the Special Issue … to no avail, so it seems.

The way to test this kind of model is bozo-simple. Divide the data into the first half and the second half. Train your model using only the first half of the data. Then see how it performs on the second half, what’s called the “out of sample” data.

Then do it the other way around. You train the model on the second half, and see how it does on the first half, the new out-of-sample data. If you want, as a final check you can do the training on the middle half, and see how it works on the early and late data.
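The split-half test is easy to sketch in code. Here is a toy version (Python/numpy), using a 20-parameter polynomial as a stand-in for any heavily tuned model and noise as a stand-in for the record:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = rng.normal(size=200)                 # stand-in "sunspot record": pure noise

first, second = slice(0, 100), slice(100, 200)

# Train a 20-parameter model on the first half only
# (numpy may warn the fit is poorly conditioned -- that is rather the point)
params = np.polyfit(t[first], y[first], 19)

def rms(idx):
    return float(np.sqrt(np.mean((np.polyval(params, t[idx]) - y[idx]) ** 2)))

in_sample, out_sample = rms(first), rms(second)
print(out_sample > in_sample)            # the tuned fit collapses out of sample
```

In-sample the fit looks splendid; on the out-of-sample half the error explodes, which is exactly the signature of a curve-fitting exercise.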

I would be shocked if the author’s model could pass that test. Why? Because if it could be done, it could be done easily and cleanly by a simple Fourier analysis. And if you think scientists haven’t tried Fourier analysis to predict the future evolution of the sunspot record, think again. Humans are much more curious than that.

In fact, the Salvador model shown in Figure 2 above is like a stone-age version of a Fourier analysis. But instead of simply decomposing the data into the simple underlying orthogonal sine waves, it decomposes the data into some incredibly complex function of cosines of the ratio of cosines and the like … which of course could be replaced by the equivalent and much simpler Fourier sine waves.

But neither one of them, the Fourier model or the Salvador model, can predict the future evolution of the sunspot cycles. Nature is simply not that simple.

I bring up this study in part to point out that it’s like a Fred Flintstone version of a Fourier analysis, using no less than twenty tunable parameters, that has not been tested out-of-sample.

More importantly, I bring it up to show the appalling lack of peer review in the Copernicus Special Issue. There is no way that such a tuned, adjustable parameter model should have been published without being tested using out of sample data. The fact that the reviewers did not require that testing shows the abysmal level of peer review for the Special Issue.

w.

UPDATE: Greg Goodman in the comments points out that they appear to have done out-of-sample tests … but unfortunately, either they didn’t measure or they didn’t report any results of the tests, which means the method is still untested. At least where I come from, “test” in this sense means measure, compare, and report the results for the in-sample and the out-of-sample tests. Unless I missed it, nothing like that appears in the paper.

NOTE: If you disagree with me or anyone else, please QUOTE WHAT YOU DISAGREE WITH, and let us know exactly where you think it went off the rails.

NOTE: The equation I show above is the complete all-in-one equation. In the Salvador paper, it is not shown in that form, but as a set of equations that are composed of the overall equation, plus equations for each of the underlying composite parameters. The Mathematica code to convert his set of equations into the single equation shown in Figure 2 is here.

BONUS QUESTION: What the heck does the note in Figure 1 mean when it says “The R^2 for the data from 1749 to 2013 is 0.85 with radiocarbon dating in the correlation.”? Where is the radiocarbon dating? All I see is the NASA data and the model.

BONUS MISTAKE: In the abstract, not buried in the paper but in the abstract, the author makes the following astounding claim:

The model is a slowly changing chaotic system with patterns that are never repeated in exactly the same way.

Say what? His model is not chaotic in the slightest. It is totally deterministic, and will assuredly repeat in exactly the same way after some unknown period of time.

Sheesh … they claim this was edited and peer reviewed? The paper says:

Edited by: N.-A. Mörner

Reviewed by: H. Jelbring and one anonymous referee

Ah, well … as I said before, I’d have pulled the plug on the journal for scientific reasons, and that’s just one more example.

In fact I could probably get a curve to closely match the red one with only 3 or so periods: one of 11 years, one of around 100 years, and one longer than the whole time period, plus offsets, etc.

Would be just as meaningless though!

Strange then that various commentators did actually predict the current solar quietness whilst the establishment was still predicting that cycle 24 would be another strong one.

http://personal.inet.fi/tiede/tilmari/sunspots.html

No doubt mention of Timo’s work will cause apoplexy in some quarters but he wasn’t the only one.

Stephen Wilde says:

January 22, 2014 at 2:44 am

Not sure how this relates to the lack of peer-review and the lack of out-of-sample testing of the Salvador model …

w.

PS—there are a whole lot of folks out there guessing the size of the next solar cycle, based on various things. One thing is for sure … the next solar cycle will be either larger or smaller than this one.

And that means that in a general sense, your best bet is that half of the prognosticators will be right and half wrong.

I am sure that there is a chaotic input into solar activity that would throw this. Also, the sun does not have an inexhaustible supply of fuel. Its fuel is gradually being consumed by nuclear fusion reactions, so its output will alter as time goes by.

Willis,

That equation doesn’t appear like that in the published paper. It looks like your own expansion of the SNC, which somewhat obfuscates the origin of the (decimal) numbers.

Salvador describes the origin of each of the constants in his equations, including their physical basis, and the methods of derivation of the phase parameters and scalars from physical observations via non-linear least squares optimisation of Salvador’s SNC equation.

There actually is quite a bit of evidence supporting a millennial-scale Holocene climate cycle (quasi-periodic fluctuation if you prefer). A power spectrum of the GISP2 ice core indicates a very significant 950–1100 year “cycle”…

This doesn’t necessarily validate the solar model in question.

von Neumann’s observation regarding elephants is valid. A model based on the physics of a phenomenon will allow maximum precision of ‘forecasting’ with a minimum number of ‘adjustable parameters’.

A mathematical model not based entirely on physics is not empirically testable. If it is not testable, it is not science.

In the ’90s I published a model for calculating the anomalous viscosity of mixing for mixtures of gases. Because I based the model on kinetic molecular theory, I needed only one parameter which I could not adequately justify theoretically – an exponent of exactly 1/3 in the mixing term. The model gave results that removed all secular variability from the residuals.

The procedure you outline for testing the model on out-of-sample data may be useful for determining whether or not the model is reflective of a real-world physical process, but provides only hints at best of where to look for the physics involved.

Huh Willis, you’ve gone into overdrive…

If the constants (fixed numbers) have now physical interpretation then forget it; if they do it may mean something but not implicitly so

http://www.vukcevic.talktalk.net/PF.htm

Correction: in the above comment it should be NO for

now

Stephen Wilde says:

January 22, 2014 at 2:44 am

Actually, most “establishment” predictions weren’t for a strong cycle, they were for a weak cycle. There is a list of them here, along with an interesting analysis of the various methods. See Table 1.

w.

“It is totally deterministic, and will assuredly repeat in exactly the same way after some unknown period of time.”

Deterministic it is. Periodic, it’s not. Except if N1~N8 are zero.

It’s obvious it’s just a curve-fitting exercise, but I believe the number of free parameters is not the main argument here. It’s just the easiest argument to reach for. Even such a number of parameters could be excused if the formula made physical sense. But it does not.

Yes sure. Salvador only needs to tell Jupiter, Uranus, Earth and Venus to change the rates they orbit at in order to tune his parameters.

It must be great playing god. Willis should know.

Bernd Felsche says:

January 22, 2014 at 3:14 am

It “looks like my own expansion”? I can put the words up on the silver screen … but you have to read them. In the head post I pointed out that the equation doesn’t appear like that in the paper. I discussed the exact expansion I used. I posted a link for the Mathematica code for the expansion … and now you come along to repeat what I said, like you’ve made some discovery?

Yes, I know that the “phase parameters and scalars” are fit, I discussed that as well. You seem impressed that he used twenty fitted parameters. Did you read the link about the elephant?

Next, while there is a “physical basis” and an “origin” of the constants in that they represent real astronomical ratios, given that there are hundreds and hundreds of equally real astronomical ratios, their choice of which ones to use is equally arbitrary.

Finally, they’ve used, not the exact timing of the astronomical cycles, but a series of slightly different values near to the exact timing … which of course is how they got the beat frequencies you see in Figure 1.

Are you really impressed by this curve fitting exercise? Why not just use Fourier analysis? Do you believe, as the author does, that his is a chaotic, non-repeating model? Do you think his method will work on out-of-sample data?

w.

Kasuha says:

January 22, 2014 at 3:29 am

Thanks, Kasuha. Each of the individual sine and cosine functions that make up the equation repeats in a regular periodic manner. How can their sum not be periodic?

If what you are saying is true, seems like it would make a theoretically perfect random number generator, one that never, ever repeats… and I doubt that.

Seems to me that the sum/product/difference whatever of a finite number of infinitely repeating cyclical functions has to be a repeating cyclical function, and that that is a recurring and big problem in random number generators … but I’ve been wrong before …

w.

No (computer) model or analysis will ever give correct results if (key) parameters are missing. Whether those are missing due to ignorance or deliberate behaviour does not matter (obviously). That is the reason why you (Willis) have not found something that works in this matter. As you point out, the Salvador model is too simple to generate any substance for any conclusion (other than crap).

Reverse engineering works in general, but it requires a fundamental understanding, i.e. knowledge of the big picture, and nobody today is in that position yet. To perform reverse engineering of a chaotic system, however, is impossible in practice due to its complexity.

(Extremely/Very) Low-frequency parameters can be very difficult to identify, but can not be ignored.

What is the most complex – the climate or the human brain? (The human brain is still not fully mapped yet …)

tallbloke says:

January 22, 2014 at 3:30 am

Tallbloke, first, I made a clear distinction between the “tunable parameters” and the decimal constants. So your objection makes no sense, the tunable parameters have nothing to do with Jupiter or anything at all … that’s why they are “tunable”.

Next, given twenty tunable parameters, plus an infinite choice of forms for the equation, I don’t care what six astronomical constants you might pick—with 20 tunable parameters and my choice of equations, I guarantee you I can make the curves fit no matter what the six other constants you might hand me.

It’s easy because I can just do what Salvador did. It appears that you didn’t notice that he doesn’t actually use the astronomical constants themselves. Instead, he uses the astronomical constants either increased or decreased by the value of one of the many tunable parameters. That’s how he gets the beat frequencies shown in Figure 1 … and since I have free choice of the form of my equation, the choice of astronomical constants doesn’t matter. I’ll just change the tunable parameters to make up the difference. If you choose a parameter that is 178.8 years, and I need 76.4 years to make my formula work, I’ll just multiply it by an appropriately sized parameter.

Regards,

w.

He seems to have reduced 260+ years of data points to only 26 values with his rather lossy compression. He has indeed modeled the past, but I think its predictive ability is somewhat worse than how a random 30-second clip of MP3 lossy-compressed music can “model” the next 30 seconds.

Perhaps off topic, but do the global climate models also suffer from lack of out of sample testing?

Reg. Blank says:

January 22, 2014 at 4:12 am

—————————————–

LoL! I just spit coffee all over my keyboard. I need to avoid Willis posts first thing in the morning; they end up being too entertaining!

So the ‘official’ SIDC sunspot numbers can be fitted with an expression with many parameters. But there is more than one sunspot series. There is the Group Sunspot Series and there is the Wolf Numbers corrected for Waldmeier’s weighting of sunspots since 1947. These series are different from the SIDC series, so presumably each must have its own fitting expression. If so, the whole thing is just different curve fittings with no physical content. http://www.leif.org/research/Long-term-Variation-Solar-Activity.pdf

Willis says: “I guarantee you I can make the curves fit no matter what the six other constants you might hand me.”

OK, game on.

278 days

1.3 years

8.4 years

98 years

for the four planets’ orbital periods. Now, not all the other parameters are completely free, as you would know if you’d read R.J.’s paper carefully. Some of their cyclicities are bound to the planetary orbital periods. So bearing that in mind, off you go, play fair, and don’t forget to show your working.

lsvalgaard says:

January 22, 2014 at 5:22 am

……………..

Hi Dr. S. Thanks for the reply (Danish data).

Agree with the above, that is why ‘superior elegance’ (??!!) of my formula doesn’t fit any of the above, but tells in its crude simplicity what the ‘mother nature’ had in mind some (was it ?) 4 billion of years ago, but again things have moved on since then.

:) :)

By the way, R.J.s model’s latest iteration is up to R^2=0.91

The paper lays out the physics that were used to derive the parameters in the math. So you appear to be misrepresenting the paper. So then, please show us your math where you successfully reduce the physics of the entire planetary system to fewer than twenty or so parameters. Oh, you can’t do that? So then why attempt to ridicule a paper that has twenty or so parameters that are linked to physical attributes of the planetary system? I’ve seen some poor arguments against papers before, but your “arguments” against this paper aren’t scientific.

AFAIK, or probably best to say AFAINK, the sun is a stochastic process at least to a certain unknown extent.

Those equations may be useful to detect the deterministic component, but it is still just a nice numerology example.

RJ actually did do a forecast with a fraction of the data and he doesn’t claim his model is right (as it isn’t — it fails the simplest diagnostics). RJ does these models for fun. He’s not a political activist. It’s unfortunate that he got tangled in this whole PRP mess. I would have advised him to steer well-clear of publishing in PRP had I known he was doing so, as it has been obvious for many months that a blow-out like this was going to be inevitable. (Anthony: I don’t know how you didn’t see it coming. You must have had blinders on.)

Dr. Svalgaard and Willis, since the purpose of these exercises, ultimately, is to predict temperature changes by using solar phenomena, I ask, as a non-scientist: how accurate a proxy for solar activity are sunspots? If they are not accurate, are there other proxies for solar activity besides sunspots? If such phenomena exist, have there been efforts to try to correlate these other solar phenomena with temperatures, and if so, how credible are they? Thanks.

Chuck L says:

January 22, 2014 at 6:28 am

how accurate a proxy for solar activity are sunspots?

The microwave flux from the Sun is a good index of solar activity and the modern sunspot number is a good proxy for the flux: http://www.leif.org/research/SHINE-2010-Microwave-Flux.pdf

There are indications that over the past decade the official sunspot number has been a bit too low compared to the flux, but that is a second order effect.

Underlying physical model — No

A few well defined fitting parameters — No

A load of feces — Yes

Willis says “The way to test this kind of model is bozo-simple. Divide the data into the first half and the second half. Train your model using only the first half of the data. Then see how it performs on the second half, what’s called the “out of sample” data.

I bring up this study in part to point out that it’s like a Fred Flintstone version of a Fourier analysis, using no less than twenty tunable parameters, that has not been tested out-of-sample.

More importantly, I bring it up to show the appalling lack of peer review in the Copernicus Special Issue. ”

From the paper :

4 Forecasting

To test if the model has forecasting ability, we can redo the correlation with data only up to the years 1950 and 1900 and determine the forecast for the next 50 and 100 yr to see if the model can predict the sunspot data we have already experienced.

Figure 5 gives a forecast for the period 1950 to 2050 made from the correlation of the model with data up to 1950.

“Figure 6. A comparison of monthly sunspot numbers from 1900 to 2000 (in blue) with the absolute value of the correlation model (in red), derived using data only up to 1900 and the extended forecast to 2000.”

Jeezus Willis you’re at it again. Read the frigging paper before shouting off on WUWT.

I’m not saying I find this very convincing but if you want to rip something apart at least read it first.

“In fact, the Salvador model shown in Figure 2 above is like a stone-age version of a Fourier analysis. But instead of simply decomposing the data into the simple underlying orthogonal sine waves, it decomposes the data into some incredibly complex function of cosines of the ratio of cosines and the like … which of course could be replaced by the equivalent and much simpler Fourier sine waves.”

Willis, you are in danger of talking above your pay grade.

If you have a modulation of two cosines, it will appear as three peaks in a Fourier spectrum, each with its own phase, amplitude and frequency. You could then link some of the phase and amplitude terms to avoid needing extra parameters, but it certainly would not be “much simpler”.

http://climategrog.wordpress.com/2013/09/08/amplitude-modulation-triplets/

Willis,

You stated in the main text of your article that the formula you wrote was the one used by Salvador. Putting a contrary statement in the footnote? Without even a marker in the main text that there was a note to your “handiwork”?

And it’s my problem that I missed the note in the end credits following the listings of Best Boy and Fluffers?

You couldn’t simply have written: “The author’s equations can be expanded to …” in the main text.

I don’t need no steenking Mathematica to do simple algebraic substitution and expansion. Not that I would begin to do so in this case because it’s superfluous effort and it obscures the physical parameters. Parameters which, if left “pristine”, provide additional insight while working with equations. (*)

FWIW: I’d have been happier if Salvador had left symbols to represent each of the period components in his equation and if he’d not referred to “years” as frequencies.

I wasn’t “impressed” by the number of parameters. After all, you need that many to describe the simplified physical behaviour of the system with that many “degrees of freedom”. They’re not arbitrary parameters; they’re derived from measurements of the physical world.

Salvador was looking for particular “spectral content” within the sunspot record at the frequencies of interest. If the number of significant degrees of freedom and their relative “directions” (characteristic frequencies and phases) is wrong, then the other parameters are likely to be substantially different for different sample sets of sunspot data. Salvador mentions that his model won’t work without considering the 21.005 quarter Uranus period.

He mentions the difficulty of being accurate regarding the longer cycles, as the detailed sunspot record is comparatively short … the longest period in his analysis is 1253 years.

A Fourier analysis would have told Salvador nothing about the physical world. His method tests a hypothesis which has some physical basis in the real world. It’s not just a “wiggle match”.

(*) I knew the origin of the elephant quote from “offline” sources. IIRC, von Neumann also urged Feynman not to lose sight of the physics when ploughing through formulae.

P.S. The Figure 1 carbon-14 data (not dating) illustrates a proxy. (Let Willlliam Connnnolllley illustrate it and Tallbloke set it straight.)

Notwithstanding the above, I agree with Willis’ view that this is largely curve fitting. However, the out-of-sample tests do suggest there may be something worth further study.

As I’ve pointed out to a couple of this team in personal communication (not this author), there is a danger of this kind of approach becoming numerology if you are ready to arbitrarily accept any combination of harmonics, sub-harmonics, beats, resonances and amplitude modulations of any planetary periods.

I might buy the possibility of Ian Wilson’s VEJ idea but when I see

“– one-quarter Uranus orbital frequency equal to 21.005

– two modulating frequencies of 178.8 and 1253 (forming a beat frequency of 208 yr).”

I start to think, hang on. Without a concrete reason to suggest a link with Uranus, and then why the 4th harmonic but not the 1st, 2nd or 3rd, the word numerology springs to mind.

You need to write out all the base frequencies and all the possible permutations within your scheme of combinations, and you’d probably realise that you have enough numbers to start your own number system. In that context the “planetary” constants are essentially random numbers.

¹⁴C has been used as a proxy for solar activity for a while. (Science: Changes in atmospheric carbon-14 attributed to a variable sun. 1980)

It’s data. Not dating.

Greg Goodman says:

January 22, 2014 at 6:48 am

Figure 5 gives a forecast for the period 1950 to 2050…

“Figure 6. A comparison of monthly sunspot numbers from 1900 to 2000 (in blue)…

Both look like failures to me.

Thanks Willis. A good article on a difficult subject.

I know of many methods to get a small signal out of a lot of noise, and it’s a good thing that they can be rapidly changed and transformed until one seems to work.

To prove that your noise-reduction black box is working you extract something meaningful, like a conversation. But if to extract that conversation you must have a transcript of it before it occurs, then your black box is useless.

How about getting funding for your black box factory?

lsvalgaard says: Both look like failures to me.

Clearly the fit is not as good. My point was that W was ripping into both the authors and the editors in no uncertain terms, for not doing something that was in fact in the paper.

I should add that most probably, predicting solar cycles is as difficult as predicting weather or climate cycles.

Also, “difficult” is not enough of a word; this is a convoluted and often obfuscated subject.

Willis has Occam’s Axe.

It’s an “open access” journal. This means it is a way of extracting “publication fees” from people who otherwise can’t survive legitimate peer review. In any case, the journal is being pulled off the virtual “shelf”, probably because it is too obviously fraudulent

http://www.pattern-recogn-phys.net/volumes_and_issues.html

The standard model has a similar problem.

Tallbloke, as you’re watching, can you enlighten us on the Uranus thing?

Dan, no, it doesn’t necessarily mean that, there are excellent open access journals that are free for both author and reader, e.g. http://jmlr.org/ .

Excellent demolition job! Unfortunately, it means CAT scans and MRI’s don’t mean anything, either.

This is an excellent example of curve fitting. This article should be used as an example for students in mathematics and statistics.

Curve fitting is a very common error to make. It is easy to believe that you have found a good formula for predicting the future when you in reality have just performed curve fitting on historical data.

I remember when I worked in Telecom that a small company called us and told us that they had found an unbelievably accurate formula for projecting future telephone traffic for each telephone trunk.

I watched them present their solution, and to no surprise, it was a curve fitting exercise just like this.

A journal which publishes things like this cannot be called scientific.

/Jan

Greg Goodman says:

January 22, 2014 at 6:51 am

In that context the “planetary” constants are essentially random numbers.

=============

Kepler tells us otherwise.

To the extent the solar cycles are cyclical, curve fitting can have predictive value. The three best-known examples are perhaps the day–night cycle, the cycle of annual seasons, and the cycle of ocean tides. These were all understood first as a result of curve fitting, long before we understood the underlying processes.

The ocean tides are perhaps the most informative, whereby we are able to accurately calculate the future state of a chaotic system years in advance. This doesn’t mean the current paper has it right. Only that it might. My suspicion is that the curves may be over fit. To my eye there likely should be more noise in the observations vs the model.

dikranmarsupial,

“Open Access” is a “wild west” environment for authors and readers alike. Maybe some are OK, but many are just scams that extract “fees” and publish junk – a lucrative business model given the oversupply of Ph.D.s who will sell their soul to beef up their CVs. My problem with “open access” is the same as the internet in general: lots of interesting stuff but you have to know how to avoid wasting time on the incredibly larger amount of junk masquerading as quality material. Maybe we need Google to invent an algorithm to score “open access” papers (a substitute for peer review) in order to assist in this process of finding the quality needles in the paper glut haystack.

dikranmarsupial, fair point but the same is now true of PR with the tens of thousands of journals spewing out hundreds of papers each every year.

Even the previously “highly respected” titles now print garbage, so science is basically screwed.

fredberple: Kepler tells us otherwise.

No fred, you missed the point. I’m talking about the plethora of beats, interferences, periods, harmonics and sub-harmonics that are used as feed stock in this kind of fitting exercise.

If you generate hundreds of “planetary constants” and then pick half a dozen at will in parameter fitting, it’s just like a quantised free parameter. There will always be something near enough to get a decent fit.
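That claim is easy to check numerically. A toy sketch (Python/numpy; the “constants” here are random stand-ins, not real planetary periods): with a few hundred candidates on hand, whatever period the fit demands, one of them will be close.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hundreds of candidate "planetary" periods (beats, harmonics, ratios, ...)
constants = np.sort(rng.uniform(1.0, 300.0, size=300))

target = 76.4                                 # whatever period the fit demands
nearest = constants[np.argmin(np.abs(constants - target))]
print(abs(nearest - target) / target < 0.05)  # almost always within a few percent
```

With 300 candidates spread over 300 years, the typical gap to the nearest one is well under a year, so the “physical” constant chosen is effectively a free parameter quantised on a fine grid.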

Jan “Curve fitting is a very common error to make. It is easy to believe that you have found a good formula for predicting the future when you in reality have just performed curve fitting on historical data. ”

I can think of a much more widely known example: the models used for AR4.

It’s the same problem, except that they can’t even backcast properly.

lsvalgaard says:

January 22, 2014 at 6:36 am

“The microwave flux from the Sun is a good index of solar activity and the modern sunspot number is a good proxy for the flux: http://www.leif.org/research/SHINE-2010-Microwave-Flux.pdf

There are indications that over the past decade the official sunspot number has been a bit too low compared to the flux, but that is a second order effect.”

Thanks, Dr. S. Interesting presentation.

Dan/Greg, there are good journals, there are no so good journals, that was as true before the push for open access as it is now. Part of being a good researcher is knowing which journals to monitor for interesting papers and which journals are worth sending papers to. Science is actually in reasonably good shape, or at least it would be if the funding were a bit better, but we have had to put up with the financial downturn, just like everybody else, the money has to come from somewhere.

The real problem (IMHO) is researchers being assessed using metrics that favour quantity too strongly over quality. If there was no reward for publishing in a low-quality predatory open access journal, there would be no low-quality predatory open access journals. The problem is that quality is much less easily assessed than quantity.

Dan says: “Maybe we need Google to invent an algorithm to score “open access” papers (a substitute for peer review) in order to assist in this process of finding the quality needles in the paper glut haystack.”

Actually, it does, Google scholar will tell you how many times a paper has been cited; that is usually a pretty good indication of the value of a paper. Impact factor is a reasonable measure of the quality of a journal (although you can’t directly compare the impact factors of journals in different fields as it depends on the number of researchers publishing in the field etc.).

Willis — You write:

You are falsely conflating “periodic” with “quasiperiodic,” and conflating “nonperiodic” with “random.”

A sum/product/difference of periodic functions will only be periodic if all the periods are commensurate, i.e., have a least common multiple. If the periods are irrationally related rather than commensurate, then while the sum/product/difference will resemble a periodic function, it will never repeat exactly; hence the term “quasiperiodic.” A simple example is the sum cos(t) + cos(sqrt(2)*t): because the two periods are related by an irrational number, sqrt(2), there is no value of T such that cos(t+T) + cos(sqrt(2)*(t+T)) == cos(t) + cos(sqrt(2)*t) for all t; hence, the sum of two noncommensurate periodic functions is NOT periodic. While one can find T’s such that the average absolute error between f(t) and f(t+T) of a “quasiperiodic” function becomes smaller than some specified value “epsilon,” it can never be exactly zero; in the given example, these “quasiperiods” are related to the “continued fraction expansion” of sqrt(2).

Admittedly, any finite-precision approximation to a quasiperiodic function will be periodic, because any set of finite-precision numbers necessarily has a finite least common multiple; but that false “periodicity” is a property of the finite-precision approximation, not the underlying quasiperiodic function, which itself can never repeat exactly, but only approximately, in an “average absolute error smaller than some specified epsilon” sense.

So, get drawing the future solar activity.
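That never-exactly-repeating behaviour is easy to check numerically. A minimal sketch (plain NumPy; the time grid and the candidate periods are my own choices for illustration): the mismatch max|f(t+T) − f(t)| for f(t) = cos(t) + cos(sqrt(2)·t) shrinks when T is built from continued-fraction convergents of sqrt(2), but never reaches zero.

```python
import numpy as np

# f(t) = cos(t) + cos(sqrt(2)*t): two periods related by an irrational number
def f(t):
    return np.cos(t) + np.cos(np.sqrt(2) * t)

t = np.linspace(0.0, 200.0, 20001)

def max_mismatch(T):
    # largest |f(t+T) - f(t)| over the grid; zero only for a true period
    return np.max(np.abs(f(t + T) - f(t)))

# T = 2*pi is a period of cos(t) alone but badly misses the sqrt(2) component
err_2pi = max_mismatch(2 * np.pi)

# continued-fraction convergents of sqrt(2) (7/5, 41/29, ...): choosing
# T = 2*pi*q makes sqrt(2)*T close to a multiple of 2*pi, so f nearly repeats
err_q5  = max_mismatch(2 * np.pi * 5)
err_q29 = max_mismatch(2 * np.pi * 29)
```

Each convergent gives a smaller "quasiperiod" error than the last, yet none is exactly zero, which is precisely the "smaller than any epsilon, never exactly zero" behaviour described above.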

Let’s see how the solar tides work out.

Sure, it’s not based on scientific principles, but it’s a fair basis for a hypothesis.

If the numbers work back 1000 yr, what are the odds it will aid a reasonable guess for the next 90 solar cycles?

tallbloke says:

January 22, 2014 at 3:30 am

“Yes sure. Salvador only needs to tell Jupiter, Uranus, Earth and Venus to change the rates they orbit at in order to tune his parameters.”

If there is a physical basis underlying the “fit” equation, presumably it would be the perfect case for classical Fourier analysis. Why would any physically based generator need to be tunable? Also, with perhaps hundreds of effects, surely the lesser-order ones would give apparent ‘noise’.

Perhaps a more useful suggestion would be to gather those physicists with the greatest insights into which leading indicators of solar behaviour are most useful in predicting amplitude, length and year of maximum within the next 1, 2 or possibly 3 cycles. I get the impression that there are a few out there who got it quite right this time, so either they know the right parameters or they just got lucky.

At least it would tell the world whether or not the scientists can agree on which are the most important solar parameters to measure right now…….

Unfortunately this modelling technique is fully specious. The clusters of typically two to three weaker solar cycles that occur on average every 10 solar cycles are periodic events, and in no way are they a continuous cycle that is modulated by any other cycles. The real timing of these events can be away from the average of 110.7 yr by as much as two solar cycles, due to slips in the planetary harmonic periods producing uneven gaps between events, and the non-circularity of orbits. So any fixed period used for modelling is guaranteed to be off target at times; e.g. taking fixed 110.7 yr intervals for the start of weak solar cycle clusters, 1680, 1792, 1902 and 2013, has the 1902 minimum starting too late, as it actually started around 1880. Where these clusters occur is always at a critical breakdown in the harmony of Jupiter, Earth and Venus with one other planet.

There are a number of problems with the periods in the paper. The 178.8 yr period is a local planetary cycle and drifts well out of sync in more than one step. The 1253 yr period was not explained; if it is based on 7 × 179 then it is spurious. The 19.528 yr figure seems to be the axial period of 165.5 and 22.14 and not the beat frequency. And why Hale and not Schwabe? And as the 178.8 yr is not repeatable, it cannot produce a 208 yr beat with 1253 yr. I left an original and plausible explanation for a 207 yr period at tallbloke’s, on the thread where he banned me. It’s an event series, though, and not a cycle as such.

On the test strategy of dividing the data set (1749-2013) into two, we would have 132 years in each subset. From a 132-year set we could get a reasonable idea of patterns that have cycles of less than 132 years, but no idea of patterns of greater than 132 years. A significant pattern with a cycle in the 132-264 year range would render the predictive capability of the 132-year set invalid. On the question of whether the full dataset has predictive powers, that would depend on whether there are patterns somewhat greater than 264 years. Any patterns of very long periodicity would not affect the short-term horizon.

If the sunspot numbers are due to the interaction of a variety of constant, cyclical processes, then curve fitting may indeed serve as an excellent predictor of future behavior, especially over the near term. In addition, the longer your calibration period is, the more likely your near term predictions are to be accurate (in the absence of chaotic processes). However, if chaotic processes are also involved then Willis is right. So in my opinion this curve fitting exercise is more a test of chaos vs the interaction of repeating cyclic phenomena.

dikranmarsupial

I am not convinced that the author citation index or journal “impact factor” is a reliable metric of quality. Maybe at first they are reliable, but only before the mass of low-quality authors figure out how to game them. The only really reliable way to figure out quality is to be embedded in a field and pay close attention to your established set of peers and the occasional upstart. For outsiders, this doesn’t work and it is these outsiders who develop aversions to “the establishment” and get taken for rides by junk authors and journals.

Impact factor is not that easy to game, as it can be computed so that it discounts self-citations and citations from papers in the same journal. Google Scholar also helps you judge the citations a paper has received, because it will list for you all the papers that have cited it, and you can judge the quality from the reputations of the authors of the citing papers.

It is unlikely that predatory journals will try and game impact factors etc. There is no incentive for them to increase the quality of the journal, while performance is measured by quantity over quality there will always be a ready supply of authors wanting their papers published in the journal, no matter what its reputation.

We humans love patterns we see them everywhere.

A lot of people don’t get it. I can have a known random generator. Within each block of random numbers there will be obvious or hidden patterns.

Once I know the data, I can model the output with a few free parameters. Anybody can do it, folks; it is easy!

The trick is: will the model match the next batch of random numbers, and the one after that, etc.? The answer is “not likely”, eventually progressing to a definite “no”.

Don’t believe any model constructed on known data unless all the physical processes involved are known and agreed, and its parameters are based on observation.

Alan
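Alan’s point is easy to demonstrate. A minimal sketch (my own construction: a seeded NumPy generator supplies the random blocks, and a polynomial plays the role of the "few free parameters"): the model hugs the block it was tuned on and does worse on the very next batch from the same generator.

```python
import numpy as np

rng = np.random.default_rng(0)
y_seen = rng.standard_normal(30)   # the block of random numbers we "know"
y_next = rng.standard_normal(30)   # the next batch from the same generator

# "model the output with a few free parameters": a degree-9 polynomial
x = np.linspace(-1.0, 1.0, 30)
coeffs = np.polyfit(x, y_seen, deg=9)
fit = np.polyval(coeffs, x)

rms_in  = np.sqrt(np.mean((fit - y_seen) ** 2))   # error on the fitted block
rms_out = np.sqrt(np.mean((fit - y_next) ** 2))   # "forecast" of the next block
```

The in-sample error is noticeably smaller than the error against the next batch, which is the whole point: the fit describes the block it saw, not the process generating it.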

Nitpicking on Chaotic and Deterministic: A deterministic system can be chaotic. In fact, chaotic attractors have first been described in deterministic systems.

“His model is not chaotic in the slightest. It is totally deterministic, and will assuredly repeat in exactly the same way after some unknown period of time.”

Willis,

With all due respect and great humility, I believe your definition of “chaotic” is not universally accepted. A system can be both deterministic and chaotic. Of course, it can also be probabilistic and chaotic. It is fair to say his system is not probabilistic, but probabilistic is not interchangeable with chaotic.

“Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions…” (Wikipedia).

The three-body problem is an example of a deterministic system that is also chaotic. Every time you run the calculation you get the same results – deterministic. If you change the initial conditions slightly, you get much more than a slight change in the results – chaotic.

Hence, I believe that his model is chaotic. Any slight change in the initial conditions/parameters will dramatically change the results.
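The deterministic-yet-chaotic distinction can be shown in a few lines with a standard textbook system (the logistic map at r = 4; my example, nothing to do with Salvador's model): rerunning from the same start repeats exactly, while a change in the tenth decimal place of the start diverges completely.

```python
def logistic_trajectory(x0, n, r=4.0):
    # x_{k+1} = r * x_k * (1 - x_k): a fully deterministic iteration
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2, 60)           # identical rerun, same start
c = logistic_trajectory(0.2 + 1e-10, 60)   # start nudged in the 10th decimal

same_run = (a == b)   # deterministic: the rerun repeats exactly
# chaotic: well before step 60 the nudged trajectory has separated completely
divergence = max(abs(ai - ci) for ai, ci in zip(a[40:], c[40:]))
```

Same inputs, same outputs, every time; nearby inputs, wildly different outputs. Both properties hold at once, which is the usual meaning of deterministic chaos.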

I agree with you, Willis – this is an astonishingly gimcrack approach. I would add that the fit is not even very good (note the poor fits around 1780-1790, 1840, 1870, 1895, 1960). Perhaps it looks good because of the colours used. As you say, a Fourier transform model with the high frequencies removed would almost certainly do better.

I also agree with your statement “this model is not chaotic in the slightest. It is totally deterministic, and will assuredly repeat in exactly the same way after some unknown period of time.” For one thing, it is not sensitively dependent on initial conditions (unless he incorrectly thinks that the model parameters are initial conditions), because future states are completely determined by the equation, not by previous states.

To clarify with respect to previous commenters: agreed that deterministic systems may be chaotic, but truly periodic systems cannot. The key distinguishing fact in this case is that chaotic systems are (as far as I know) always driven by dynamical equations in which future states evolve from past states, thus enabling sensitive dependence on initial conditions and strange attractors.

Dear Willis, what is the point of this article? I have no problem with your comments on testing by using subsets of the data to train, and then testing that training against the alternate subset of data. My approach to assessing what anyone says is first to try to understand the message and whether it conveys useful truth. I do not care about the degrees or background of whoever is communicating. I look to what they say and what they conclude.

I have read several of your pieces in the past and was very impressed.

The conclusion of the author is simply:

” Fortunately because the changes to the base frequencies and phasing occur slowly in terms of human life spans, we can make forecasts that may be useful”

I would proffer that nothing you have discussed or argued would quantitatively improve the usefulness of any forecasts made. Well, by accident it might, but we both know that no model predicts with certainty. With thousands of tests and observations we may be able to statistically assign some level of confidence to what a particular model is forecasting. But such is not the case in the modelling described in R. J. Salvador’s “A mathematical model of the sunspot cycle for the past 1000 yr”.

So, in that I see no real technical issues in Salvador’s article and its conclusions, and in that you are not adding any real technical insights, my conclusion is that all your arguments seem to support, or be about, your current opinion that some sin was committed en masse by the authors of the articles in the now-cancelled Pattern Recognition in Physics, in particular assigning some vast import to proper peer review.

I am nowhere near as articulate as Jo Nova, so I would only suggest all read:

http://joannenova.com.au/2014/01/science-is-not-done-by-peer-or-pal-review-but-by-evidence-and-reason/ “Science is not done by peer or pal review, but by evidence and reason”

I would suggest Salvador has not said “my paper is true because it is published”; he, and they all, say “judge me by my work”.

In my decades in doing engineering science my work was mostly reviewed by pals, good friends and colleagues. The closer my friendship was with the reviewer the more brutal they were. When I reviewed other work I was most brutal or argumentative with my friends. To me a friend really wants to make sure I know exactly everything I am really stupid about.

Some of my own work on using models to predict the proper applications of energy are shown here. http://watman.com/PASTWORK/#ifsar

It was all about:

“Let’s follow due process in science, but that is not by review whether peer-or-pal, it’s by prediction, test, observation, and repeat.”

Alan Millar says:

January 22, 2014 at 9:20 am

“Within each block of random numbers there will be obvious or hidden patterns.”

=======================================================================

But by definition (of “random”) that is impossible. –AGF

agfosterjr you are missing the point, the patterns are “obvious” to the human observer *even though they don’t exist*. That is the problem.

Think about it again, perhaps the point was that for a finite length string of random digits there will be a pattern that allows the string to be algorithmically compressed, even though such compression is impossible for an infinite length string. In statistical modelling, this problem is called over-fitting, because it allows you to make a model that explains a particular sample of data well, but is not able to predict future data because the pattern it exploits is spurious.

Good presentation. Personally, I think the best test of predictive skill is future out-of-sample data, but splitting the sample should at least be tried.

About this:

BONUS MISTAKE: In the abstract, not buried in the paper but in the abstract, the author makes the following astounding claim: “The model is a slowly changing chaotic system with patterns that are never repeated in exactly the same way.”

Say what? His model is not chaotic in the slightest. It is totally deterministic, and will assuredly repeat in exactly the same way after some unknown period of time.

A chaotic system is deterministic (unless it is a stochastic chaotic system, but no such claim is made here), but their model is perfectly periodic. Lots of chaotic systems can appear periodic over a few to many cycles, an example being the near periodicity of Earth’s revolution about the sun; another is the near periodicity of the heartbeat.

I don’t understand why anyone would have issue with curve fitting in a paper in a journal named “Pattern Recognition”.

As of yet I have seen no one meet the challenge of finding something at least as erroneous as MBH98 in any of the papers in PRP.

Sure some people don’t like curve fitting but others do, and certainly we can find examples where curve fitting and pattern recognition were used to forecast the future long before the underlying physics was understood. (i.e.: seasons, tides)

Sure, the reviewers were in general agreement with each other on a particular issue; so what? Most papers are reviewed by people who agree the world is round.

The response by the publisher is out of proportion to the supposed crime. They swatted a mosquito with a sledgehammer. It’s censorship. It’s not allowing dissent. It’s suppressing free speech. It’s wrong.

Why couldn’t counter arguments be published in other peer reviewed journals instead of obliterating the dissidents’ voice?

Is anyone forced to cite these papers?

This is craziness gone mad! We should be attacking the publisher for hypocrisy, oppression, and religious fanaticism; not supporting the oppression of people just because we think they’re wrong or dislike their methodology (even though it’s been successfully used for millennia to advance our understanding of the universe).

When they came for the sky dragon slayers, I did nothing because I wasn’t a sky dragon slayer.

When they came for the curve fitters, I did nothing because I wasn’t a curve fitter.

When they came for the low sensitivity proponents, we were too few to resist.

(With apologies to Pastor Martin Niemöller)

marsupial says: “Actually, it does, Google scholar will tell you how many times a paper has been cited; ”

citation counting is a measure of conformity, not quality.

The Met Office cites a Japanese paper that is complete bunk, and eliminates data that does not agree, before “finding” that the data “validates” Hadley “bias corrections”.

All this means is that Hadley find, in a geographically subjective paper that says their adjustments are good, a convenient citation that supports their speculative correction methods.

In engineering this would be recognised as a positive feedback, which is inherently unstable but is inevitably bounded by a negative feedback. The result is a system that latches to an extreme.

The current AGW paradigm is such a latched-in extreme.

Quality is not a key factor in such a process.

dikranmarsupial says:

January 22, 2014 at 10:33 am

=========================

You may be right about my missing Millar’s point, but about compressing random data, how is THAT possible (except by using the same generator)? –AGF

Fitting a “Fourier Series” is easy if there is a “fundamental”, which there doesn’t seem to be for planetary orbits. (Well, you can always choose a low-enough harmonic. The Earth’s orbit is the 8766th harmonic of a period of 1 hour!) You can also exactly fit a time series, GUARANTEED, to an artificially chosen (your choice) set of harmonics by a Discrete Fourier Transform (the FFT). Or choose your own basis functions in a wavelet-like approach. Gram-Schmidt them if you like. Choosing your parameters based on physical data also sounds very advisable. Whatever you like.

Now, having fit your elephant, test it. Wait 50 years for the new data! Not wishing to wait, you can simply go back, discarding say 50 years of data, and recalculate the parameters against the then-available data. Calculate forward to today. Does it fit the data you threw out? It had better. And it had better not BE diverging more and more as you approach the present.

About 45 years ago in a physics experiment I tried curve fitting. I tried graph paper from every bin in the campus store. Some results were quite lovely. My Professor (Herbert Mahr – bless his heart) was kind enough to commend my industry and my artistic effort and then remark that “this doesn’t mean anything.” Curiously, the exact same four words Willis said above.
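The discard-and-recalculate test described above can be sketched on synthetic data (entirely my own toy example: one genuine 11 yr cycle plus noise over 264 "years"). A model fitting the true period holds up in the held-out gap; a model handed 30 arbitrary periods fits the training window and then diverges, exactly the failure mode being warned about.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(264.0)   # "years" of record, as in the 1749-2013 discussions above
y = np.sin(2 * np.pi * t / 11.0) + 0.3 * rng.standard_normal(t.size)

split = 214   # discard the last 50 "years", refit, then calculate forward
t_fit, y_fit = t[:split], y[:split]
t_out, y_out = t[split:], y[split:]

def sine_design(t, periods):
    # columns sin(2*pi*t/P) and cos(2*pi*t/P) for each candidate period P
    cols = []
    for P in periods:
        w = 2 * np.pi * t / P
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

def holdout_rms(periods):
    # least-squares fit on the early data, error measured on the held-out years
    X = sine_design(t_fit, periods)
    beta, *_ = np.linalg.lstsq(X, y_fit, rcond=None)
    pred = sine_design(t_out, periods) @ beta
    return np.sqrt(np.mean((pred - y_out) ** 2))

rms_true = holdout_rms([11.0])                       # the one real cycle
rms_junk = holdout_rms(np.linspace(8.0, 300.0, 30))  # 30 arbitrary periods
```

The honest model’s holdout error sits near the noise level; the many-parameter model’s holdout error is larger, even though its in-window fit looks prettier.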

agfosterjr wrote “You may be right about my missing Millar’s point, but about compressing random data, how is THAT possible (except by using the same generator)? –AGF”

On average you can’t compress random data, but that doesn’t mean you can never compress *any* sequence of random data. Say I flip a coin and it comes down H T H T H T; that is a random sequence that I can compress, because it has a pattern. It isn’t a pattern that extends beyond the six coin flips I have observed; the next one may well be a tail.
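A concrete version of the compression point, using zlib (my illustration, with a seeded generator standing in for "random"): a perfectly alternating "HT" string compresses to almost nothing, while patternless bytes barely compress at all.

```python
import random
import zlib

patterned = b"HT" * 5000   # 10,000 bytes with an obvious repeating pattern
rng = random.Random(42)
noise = bytes(rng.randrange(256) for _ in range(10000))   # patternless bytes

small = len(zlib.compress(patterned, 9))   # the pattern compresses away
big   = len(zlib.compress(noise, 9))       # random bytes barely compress
```

The same asymmetry is the over-fitting problem in miniature: a short pattern found in one sample tells you nothing about whether it continues in the next.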

Greg, a paper that has been cited many times is much less likely to be fundamentally flawed than a paper that has received few citations. The reason for this is that it has been scrutinised by more eyes and has been tested by its use as the basis for other work. The chance of a flaw going undetected decreases the more the work is used, but never disappears entirely. Of course you can find examples where the metric doesn’t give a reliable indication, but that will be true of any metric, and is not a good reason for ignoring a metric that is amongst the best that scientists have found useful so far.

Now if you think that scientists are conformists, I have to say my experience is different, there is nothing we like more than to show something is incorrect (falsification is an important concept in science) or to demonstrate some result that our research field will find surprising (that is what makes high impact papers). It is often said that getting academics to coordinate is like herding cats – there is more than enough truth in that for the analogy to work rather well.

Are you sure the system is truly chaotic? How do you know? Are you claiming to know ALL the variables Willis? Chaos is not the appearance of disorder, chaos is the lack of knowledge to see the order. You perceive a chaotic solar output because your viewpoint is based on the “current” understanding of the sun’s dynamic processes. How do you know your understanding is really correct? You don’t and that’s where curve fitting comes into play to find a constant.

The idea that the first third of a series could be predicted by the data from the second or third set is based on the notion that there are no UNKNOWN processes. The fact is we don’t know all the processes that affect the magnetic flux output of the sun over 50 years, 100 years or 1000 years for that matter, much less to what degree for each process. Fitting the curves treats the UNKNOWN variables as a constant, a fudge factor, that seems to work most of the time. And a constant doesn’t have to be a fixed number; it could also be an output from an equation. It may take a very long series, on the order of 1000 years, just to find a close approximation of the constant. So this exercise by Salvador is valid as a predictive tool to see what his predictions are for the next two cycles, and therefore is falsifiable per Popper’s requirement.

Willis, I think you are prematurely hyperventilating.

“So this exercise by Salvador is valid as a predictive tool to see what his predictions are for the next two cycles and therefore is falsifiable per Popper’s requirement.”

true, but there is little reason to think that his model will work well, given that it has performed badly in the out-of-sample testing that has already been performed. Also, since two cycles is a rather small amount of data compared to that used in the existing out-of-sample testing, it will not be easy to draw a solid statistical conclusion either way.

toms3d says:

“The conclusion of the author is simply.

”Fortunately because the changes to the base frequencies and phasing occur slowly in terms of human life spans, we can make forecasts that may be useful””

It may be useful for predicting SSN if it worked, but not for the weather and climate. Look at solar cycle 8:

http://www.solen.info/solar/cycl8.html

and now look at CET 1833-1843, it’s dropping to LIA temperatures:

http://climexp.knmi.nl/data/tcet.dat

then compare SC 16 with CET:

http://www.solen.info/solar/cycl16.html

There is another whole side of planetary ordering of solar activity that is driving temperature deviations in the short term that can be remarkably unrelated to solar cycle size. That’s the really useful bit. There are though typically deeper and more frequent cold shots in the weakest cycles, but Salvador has not identified even the average period in which the weaker solar cycles reoccur.

Greg Goodman says:

January 22, 2014 at 6:48 am (Edit)

“Testing”, whether in or out of sample, requires measurement and comparison. Near as I can tell, they have done neither. But you say they have, and perhaps you are right … so where are the results of the out of sample tests?

As with the other papers in the series, they have waved their hands in the direction of seriously examining their claims. And I’m sure some people are impressed with the pretty pictures.

But if you say they’ve tested it out of sample, I’m sure that you can compare for us the R² and p-value of the out-of-sample forecast with the R² and p-value of the in-sample forecast that is shown in Figure 6.

I still say the reviewers did not do their job, that we have no results of out-of-sample testing, we have no code, and as such, the paper should not have been published as it stands.

w.

Thanks Salvador for your work a great read and insights, not quite sure I agree with all of it though, of course time will tell..

Obviously here in the Temple of Greatness twas not appreciated, and how the mighty Priests have enjoyed burning the heretic..

Sad Rude Poeple

Willis Eschenbach says:

January 22, 2014 at 3:43 am

“Thanks, Kasuha. Each of the individual sin and cosine functions that make up the equation repeats in a regular periodic manner. How can their sum not be periodic?

If what you are saying is true, seems like it would make a theoretically perfect random number generator, one that never, ever repeats… and I doubt that.

Seems to me that the sum/product/difference whatever of a finite number of infinitely repeating cyclical functions has to be a repeating cyclical function, and that that is a recurring and big problem in random number generators … but I’ve been wrong before …”

__________________________________________________________________

There’s no point in opinions or beliefs if we can check it:

http://www.wolframalpha.com/input/?i=periodicity+of+y+%3D+sin%28%28x+%2B+sin%28x%29%29%2F%281+%2B+sin%28x%29%29%29

Note that sin and cos functions are interchangeable if we have parameters in them which affect phase. Which is this case.

But it’s also not true that it would make perfect random number generator. It would be actually a very bad random number generator.

Georgie:

Please explain the purpose of your post at January 22, 2014 at 1:27 pm assuming it is other than to demonstrate you are a sad, rude person who cannot spell.

Richard

From the paper:

“Wilson also shows that the strength of the tidal force depends on the heliocentric latitude of Venus and the mean distance of Jupiter from the Sun, and that when these forces are weakest, solar minimums occur. This happens approximately every 165.5 yr. The frequency to produce a 165.5 yr beat with 22.14 yr is 19.528 yr.”

I corrected Ian on that figure a while back. At 14 Jupiter orbits (166.0648 sidereal years) there are 15 average-length solar cycles of 11.071 years, and the beat of 166.0648 and 11.071 years is 11.8617 years, exactly one Jupiter orbit. The quoted 19.528 yr period doesn’t exist anywhere as a “frequency”; it’s the axial period of his 165.5 and 22.14.
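Both claims are plain frequency arithmetic and can be checked directly. A short sketch (standard formulas: a beat is the difference of two frequencies, an "axial" combination is their sum), applied to the numbers quoted above:

```python
def beat_period(p1, p2):
    # beat = difference of the two frequencies: 1 / |1/p1 - 1/p2|
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

def axial_period(p1, p2):
    # "axial" = sum of the two frequencies: 1 / (1/p1 + 1/p2)
    return 1.0 / (1.0 / p1 + 1.0 / p2)

# 14 Jupiter orbits (166.0648 yr) vs solar cycles of 11.071 yr:
# their beat is one Jupiter orbit, ~11.8617 yr
jupiter_check = beat_period(11.071, 166.0648)

# the paper's 19.528 yr figure is the axial period of 22.14 yr and 165.5 yr,
# not a beat
axial_check = axial_period(22.14, 165.5)
```

Both computed values land on the figures quoted in the comment, supporting the reading that 19.528 yr is an axial period rather than a beat.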

Willis Eschenbach says:

January 22, 2014 at 3:22 am

Stephen Wilde says:

January 22, 2014 at 2:44 am

Strange then that various commentators did actually predict the current solar quietness whilst the establishment was still predicting that cycle 24 would be another strong one.

Actually, most “establishment” predictions weren’t for a strong cycle, they were for a weak cycle. There is a list of them here, along with an interesting analysis of the various methods. See Table 1.

w.

Ah, no, not quite, I figure the folks at NASA are as establishment as it gets these days.

Here is NASA prediction of cycle 24

http://www.swpc.noaa.gov/SolarCycle/SC24/index.html

click on the link Solar Cycles 24 Consensus Prediction (PPT)

It was so far off as to be laughable. And they were going to massage the data once they were sure the cycle had started. (sound familiar?)

They could have done better with darts on a wall chart.

see: http://www.landscheidt.info/?q=node/50 for a more accurate way to estimate solar cycles.

the ‘peak’ about 48 to 50 months, and the ‘max’ pretty much wasn’t.

r

Willis, the hilarious thing is this: if the IPCC published such a forecast and refused to put numbers on it, or didn’t accurately predict magnitudes but got the direction (increasing or decreasing) right, people would scream.

Here’s a good one. In 1988 Hansen’s model predicted increasing temperatures under all scenarios. Although he got the magnitude wrong he got the direction right. haha

#####

unknown period of time.

———————————————————————–

Thanks, Willis, I thought you might eventually use it! Best regards

Willis

This is an incredibly patronising post, filled with sneer. One could accuse you of “the Pot calling the Kettle black” here.

BTW the author does do your suggested “Bozo test”.

Furthermore, he qualifies what he meant by ‘chaotic’ at the end of the paper (he wasn’t referring to the model as such rather the process).

“I passed on the opportunity.” Yet you saw fit to pull him up on it.

On the paper…

It ain’t sophisticated stuff, but then neither is applying a discrete Fourier transform. He now has a parameterised function that (for testing) is a whole lot neater than trying to expand a signal in the frequency domain with extra terms beyond the sample window; and before you say it, there are a myriad of ways to deal with this, but none of them is as simple as it seems and none of them is right, only “best for case”. I don’t know enough about solar cycles to know if this all has any value, but overall I found it interesting, and it showed exactly what he wanted to show.

Steven Mosher says:

January 22, 2014 at 2:21 pm

Willis, the hilarious thing is this: if the IPCC published such a forecast and refused to put numbers on it, or didn’t accurately predict magnitudes but got the direction (increasing or decreasing) right, people would scream.

Here’s a good one. In 1988 Hansen’s model predicted increasing temperatures under all scenarios. Although he got the magnitude wrong he got the direction right. haha

#####

————————————————————————————–

No he didn’t. It hasn’t warmed for going on 17 years, ha ha!

Matthew R Marler says:

January 22, 2014 at 11:02 am

“Lots of chaotic systems can appear periodic over a few to many cycles, an example being the near periodicity of Earth’s revolution about the sun”

Yeah, I think (but I could be wrong) Willis was talking about the model, so we may all be talking at cross purposes here. But if he is assuming that chaotic systems are not periodic, as you suggest, then he is wrong and you’re right. The signal can be non-stationary (that is, the periodicity is not constant and can change randomly throughout the chronology). Of course there is a matter of scale. That point was made at the end of the paper.

Greg Goodman says:

January 22, 2014 at 7:53 am

If you generate hundreds of “planetary constants” and then pick half a dozen at will in parameter fitting, it’s just like a quantised free parameter.

=========

Agreed, that lends itself to a form of cherry picking, which this type of analysis is prone to. Thus you must check to see if the results have predictive power.

However, one cannot simply dismiss the work until a predictive test has been done. Equally one cannot embrace the results without a predictive test, because of the large number of failed results in the past using similar approaches.

FrankK.

Last I checked it’s gone up since 1988. See how easy it is when you don’t quantify things.

Matthew R Marler says:

January 22, 2014 at 11:02 am

an example being the near periodicity of Earth’s revolution about the sun

================

orbits in an N-body system are inherently unstable mathematically. It was not until the Voyager photographs of Jupiter’s rings that we began to understand why the planets in the solar system haven’t long ago been thrown out of orbit or crashed into the sun.

As Kepler proposed, there is a resonance between the objects in the solar system, such that they are always adjusting their positions relative to each other until, over time, they reach a “stable” arrangement, where their orbits oscillate within bounds to minimize the energy of the entire system.

Some planets will move inwards, some outwards. Some will spin faster, some will slow down, until over time the system stabilizes at the lowest energy. If one item then tries to change its position from this pattern, it raises the energy of the system above the minimum. The other objects will shift slightly in response, shepherding the first object back into place.

We know this happens by looking at the rings. Our math says they should not be there. Reality says they are. We see this sort of behavior everywhere in nature. Somehow the system always seeks the lowest energy level, and there it stabilizes. Ask any boat captain why, if they lose power, the boat will always turn broadside to the waves.

There is mention here of Fourier Analysis (FA) that is not specific enough. Most likely FA should not even apply here. FA is familiar in its two most basic forms as the Fourier Series (FS) [for example, … 2 Cos(f) + 7 Sin(2f) + …], which applies to periodic functions (integer harmonics), and the Fourier Transform (FT, an integral transform), which applies to non-periodic functions. The famous Fast Fourier Transform (FFT) efficiently computes the Discrete Fourier Transform (DFT) and, being a computation on discrete data, can APPROXIMATE either the FS or the FT when they can’t be solved analytically (the usual case). [Incidentally, although composed of the sum of two periodic components, Cos[f] + Cos[sqrt(2)f], for example, is not periodic.] As another reference example, the highly regarded Akasofu linear trend + sinusoidal is neither a FS nor a FT, partly periodic, partly not, but is delightfully easy to envision as just an equation. In this post, we also have, most basically, just an equation.

It is not clear that Fourier Analysis should be applied to sunspot data (it seems otherwise), which is at best quasi-periodic. Indeed, the equations Willis posts are phase-modulation or frequency-modulation equations and almost certainly can (in theory) be solved in terms of discrete frequencies (something like, but a lot more tedious than, the usual Bessel-function sideband amplitudes). An FFT could then be applied to verify these “sideband” calculations, if they were done, but it is far from certain any additional insight would result.

Before working on this however, one might want to consider what physics (if any) would suggest a modulation result. And, the projections of the model into the test gaps would need to be a lot better – like approaching the quality of the full fit.
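For what it’s worth, the Bessel-sideband point above is easy to confirm numerically. A sketch of my own (the frequencies and modulation index are illustrative choices, not values from Salvador’s model): a phase-modulated cosine resolves into discrete spectral lines at the carrier and its sidebands, with Bessel-function amplitudes.

```python
import numpy as np

# Illustrative values only: a 50 Hz carrier phase-modulated
# at 2 Hz with modulation index beta = 1.
fs = 1000.0                       # sample rate
t = np.arange(0, 20, 1 / fs)      # 20 s record -> 0.05 Hz resolution
fc, fm, beta = 50.0, 2.0, 1.0
x = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

spec = np.abs(np.fft.rfft(x)) / (len(x) / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)

def amp(f):
    """Amplitude of the spectral line nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Discrete lines appear at fc and fc +/- k*fm with Bessel amplitudes:
# amp(50) ~ J0(1) ~ 0.765, amp(48) and amp(52) ~ |J1(1)| ~ 0.440,
# and essentially nothing in between (e.g. at 51 Hz).
```

Whether doing that for the sunspot model would add insight is, as the commenter says, another question; the point is only that the modulation equations do have a well-defined discrete-frequency solution.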

tallbloke says:

January 22, 2014 at 5:54 am

OK, tallbloke, here you go. I’ve used 15 tunable parameters, plus your four fixed parameters.

That’s six fewer parameters than Salvador used. However, it is equally meaningless.

w.

Willis,

Look up the term superficial again please.

I just don’t quite understand what they teach kids these days… a few decades ago, when I was studying meteorology under one of the greats (‘Doc’ Saucier, at NC State), I had it hammered into my brain that models were quite useful, but you had to really, really respect boundary conditions. You can fit a curve to any set of data if you have enough degrees of freedom. The test was how well it behaved once you went outside the boundaries of the model. And that was just for predicting weather patterns a day or two in advance.

The Doc would have just laughed at anybody trying to predict average temperatures even a decade in the future.
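The degrees-of-freedom point is easy to demonstrate numerically. A toy sketch of my own (nothing to do with Doc Saucier’s models or Salvador’s equation): fit a 16-parameter polynomial to noisy sinusoidal “data” and it replicates the fitting window nicely, then fails badly the moment you step outside the boundary.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # "observed" record

# Plenty of degrees of freedom: degree 15, i.e. 16 tunable parameters.
p = Polynomial.fit(t, y, deg=15)    # .fit rescales the domain internally

in_sample_err = np.max(np.abs(p(t) - y))    # small: the "hindcast" looks great

t_future = np.linspace(10, 12, 20)          # step outside the fitted window
out_err = np.max(np.abs(p(t_future) - np.sin(t_future)))
# out_err exceeds the in-sample error: past performance, no future guarantee
```

The in-sample fit is at the noise level, while the extrapolation error grows as soon as the polynomial leaves the window it was tuned on.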

For what it’s worth, tallbloke, the Fourier transform of the sunspot cycle has no long-period peaks. The energy is concentrated at periods of about 10–12 years, and there is little energy at longer periods. Salvador gets the short periods from the beat frequencies of much longer periods. However, those long periods are not evident in the Fourier transform.

Regards,

w.

kuhnkat says:

January 22, 2014 at 6:09 pm

Is this a superficial analysis? Since a fitted model with 20 tunable parameters is a superficial model, I suppose any analysis of it has to be superficial.

w.

patrio says:

January 22, 2014 at 6:08 am

Not true. It specifically says that the 20 tunable parameters are just that, tuned. Nor does it “lay out the physics used to derive” the six decimal numbers. He picked several astronomical cycle lengths, and used those. A number of other people, including Scafetta, have done the same thing … but they picked different astronomical cycle lengths, or averages of two cycle lengths, or half cycle lengths … so where is the “physics” in picking astronomical cycles?

w.

Bernd Felsche says:

January 22, 2014 at 6:49 am

Since the two equations are totally identical in results, and one can be freely transformed into the other, how are they different?

My friend, what you read is up to you and you alone. When you start blaming me because you didn’t read the entire post, sorry, I say goodbye. Come back when you realize that you are in charge of your eyeballs, not me.

w.

RJ’s calculations have been available (in .xls format) for more than 3 months.

Steven Mosher says:

January 22, 2014 at 2:21 pm

“Heres a good one. In 1988 Hansens model predicted increasing temperatures under all scenarios. Although he got the magnitude wrong he got the direction right. ”

Getting the magnitude right is the critical component of any prediction. It is what makes people consider taking action. So it is easy to understand Hansen eagerly erring on the high side.

It will rain tomorrow; 1 ” OK, 4 ” rethink your plans for the day

It will be windy tomorrow; 15 mph OK, 45 mph rethink your plans for the day

Gas prices will increase next month; 2 cents/gal OK, $1/gal rethink your driving habits

You will gain weight as you get older; 5 lbs OK, 30 lbs rethink your life insurance

There will be surf tomorrow; 2 ft stay home, 6 feet nice!

Thank you Willis; thank you Anthony; thank you “Pattern Recognition” scientists. Duking it out with your different definitions — trying to find common ground or the specific failure — different hypotheses, statistics, math: this is what it is about. Continuing the scientific method. I hope the very hard, hurtful feelings can heal. It is difficult enough to slug it out over scientific principles and methods, but when one holds sacred a horribly falsified academic process called peer review, the science can be lost.

Willis –

We can perhaps expand a bit on your comment about the spectrum (6:34 pm today), although I am certain you ARE right that it will show very little except the approx 11 year cycle. To clarify: Any “beat frequency” does NOT appear in the spectrum (as you observed). It certainly looks (to everyone!) like it should be there, but it ain’t. Consider the sum of two sine waves which beat, so A and B are close to being equal:

Sin(A) + Sin(B) = 2 Sin[(A+B)/2] Cos[(A-B)/2]

The left side is exactly THE spectrum, just the two similar frequencies A and B. The right side looks like the average frequency (A+B)/2 amplitude modulated (balanced modulated to be technical) by half the difference frequency. But it beats AT the difference frequency because there are two amplitude beats for each cycle of Cos[(A-B)/2]. So the beat frequency is the difference (A-B) as traditionally stated. But (A-B) is not in the spectrum [nor is (A-B)/2 ]. (A-B) is the repetition rate of the amplitude “bumps”.
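A quick numerical check of the identity above (the frequencies are my own illustrative choices): sum two nearby sine tones and the FFT shows lines only at the two original frequencies, with nothing at the beat (difference) frequency, exactly as stated.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 200, 1 / fs)        # long record -> 0.005 Hz resolution
f1, f2 = 5.0, 5.5                    # two nearby tones; audible beat at 0.5 Hz
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

beat = spec[np.argmin(np.abs(freqs - 0.5))]   # energy at the beat frequency
tone = spec[np.argmin(np.abs(freqs - f1))]    # energy at one of the tones
# beat is essentially zero; tone is a large, sharp line -- the "beat"
# is an amplitude envelope, not a spectral component.
```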

Steven Mosher says:

January 22, 2014 at 5:25 pm

FrankK.

Last I checked its gone up since 1988. See how easy it is when u dont quantify things

——————————————————————————————-

?? Nonsense. He predicted it would keep rising beyond 2000 to 2030

https://wattsupwiththat.com/2012/06/15/james-hansens-climate-forecast-of-1988-a-whopping-150-wrong/

It hasn’t so far, from 1998 to 2013. See how easy it is if you ignore the obvious!

The headline word in all the chorus of criticism of the PRP special edition is “nepotism”. And the word is being wrongly used: it means professional favouritism toward biological family members, not friends and colleagues.

More good news,

Morner was a thesis advisor to Jelbring: http://www.pog.nu/03education/education.htm

How many publication rules does that break? Mashey is having a field day.

And for good measure, Roger did a nice post on the thesis

http://tallbloke.wordpress.com/2012/03/11/book-review-wind-driven-climate-doctoral-thesis-by-hans-jelbring/

Willis Eschenbach says: January 22, 2014 at 6:34 pm

For what it’s worth, tallbloke, the Fourier transform of the sunspot cycle has no long-period peaks.

Which sunspot cycle did you transform, Willis? You have 24 to choose from. Or do you mean the Fourier transform of the entire sunspot record? Surely you wouldn’t use such a blunt instrument on such delicate data. Actually, you would, to ‘show’ there’s ‘nothing there’. The same technique used in your analysis here:

http://tallbloke.wordpress.com/2013/08/03/blam-blam-willis-eschenbach-takes-a-scattergun-to-solar-temperature-datasets/

I saw your reconstruction, well done. How well does it hindcast back past the Maunder Minimum?

Can we see the parameters you ended up with, please?

Poptech says: January 22, 2014 at 10:18 pm

Morner was a thesis advisor to Jelbring,

http://www.pog.nu/03education/education.htm

How many publication rules does that break?

None. And Jelbring wrote his thesis a looooong time ago.

Poptech says: January 22, 2014 at 10:20 pm

And for good measure, Roger did a nice post on the thesis

http://tallbloke.wordpress.com/2012/03/11/book-review-wind-driven-climate-doctoral-thesis-by-hans-jelbring/

Yes, and very good it is. I still have some copies if you’d like to buy one.

Then you’ll be able to see that the content of the thesis and the content of the papers Hans submitted to PRP are pretty much unrelated. But don’t let that stop you making a fool of yourself.

Carry on.

pyromancer76, Poptech and tallbloke:

Please desist from disrupting this thread. There is a time and a place for everything.

This thread is about the paper by Salvador RJ which would have been in a journal (PRP) if the publisher had not withdrawn the journal. Discussion of other things is a distraction in this thread. Please note that this thread is NOT about the violations of peer review procedures which resulted in the publisher cancelling the journal. Anybody who wants to discuss that issue can do so in the still-active thread on the blog of Jo Nova which first raised that issue.

Richard

My post is short, has no links, contains no profanity, and does not mention our host but WordPress dumps it in the moderation ‘bin’. Aaaargh!

Until last week, the “official” consensus understanding of the longest of the Milankovitch cycles, eccentricity, was that both its 100,000-year period and the 400,000-year modulation of its amplitude were a direct consequence of the interaction of Earth’s orbit with those of s.a.t.u.r.n and j.u.p.i.t.e.r.

But, post PRP-gate, can we now still say this? Can we even mention the names of other planets in the s.o.l.a.r s.y.s.t.e.m at all? Are there any such things?

It looks like the PRP scandal has put the scientific community firmly on course to return to the pre-Copernican geocentric view of the universe. Who reviewed Galileo and Copernicus? A few like-minded Jesuits? All this heliocentrism will have to be rejected. Nothing affects the earth’s orbit because the earth does not orbit, instead a 2d disc sun rotates in a glassy sphere around a static 2D Narnia-diskworld earth.

It will of course be a delight for the research community with its powerful computational resources to return to the task of getting epicycles to work correctly after an anomalous and inadequately peer reviewed interval of several centuries.

Great to see science striding confidently in the right direction!

There doesn’t seem to be much awareness here of RJ’s simpler model and the Maunder Minimum constraint.

REPLY: You keep referencing this, but dare not lift a finger to provide a citation, URL, or source. Don’t be lazy, be a contributor. – Anthony

FrankK says:

January 22, 2014 at 9:12 pm

Steven Mosher says:

January 22, 2014 at 5:25 pm

FrankK.

Last I checked its gone up since 1988. See how easy it is when u dont quantify things

——————————————————————————————-

?? Nonsense. He predicted it would keep rising beyond 2000 to 2030

https://wattsupwiththat.com/2012/06/15/james-hansens-climate-forecast-of-1988-a-whopping-150-wrong/

It hasn’t so far, from 1998 to 2013. See how easy it is if you ignore the obvious!

################

It has gone up from 1988 to present. As predicted. He predicted up, not down.

http://www.woodfortrees.org/plot/gistemp/from:1988/to:2013/mean:12

2000 to present

http://www.woodfortrees.org/plot/gistemp/from:2000/to:2013/mean:12/plot/gistemp/from:2000/to:2013/trend/plot/none

You see, IF you accept the model in this solar paper, IF you accept a model that gets the DIRECTION right but the magnitude wrong, IF that standard is good enough for you,

THEN you have no choice but to accept Hansen’s model, as he gets the direction right

but the magnitude wrong.

But, IF, like me, you reject the solar model which gets the magnitude WRONG, then you ALSO get to reject Hansen who gets the magnitude wrong

This is the difference between you and me.

1. I always demand code, whether skeptic or warmist writes the paper

2. I always demand the data, whoever writes it

3. I apply the same tests, let the chips fall where they may.

In this way my individual politics is controlled for. My friends get criticized, my foes get praised. My position on taxes, on the EPA, on any other issue is put aside. Three simple rules:

1. supply your data AS USED

2. supply your code, as RUN

3. I will believe you when you show your work, else, hit the road I have no time for your BS

Without consistent principles fairly applied we are back in the dark ages.

Here Frank

http://www.woodfortrees.org/plot/gistemp/from:1997/to:2013/mean:12/plot/gistemp/from:1997/to:2013/trend/plot/none

Is that increasing or decreasing?

Here, Frank: increasing or decreasing?

http://www.woodfortrees.org/plot/uah/from:1997/to:2013/mean:12/plot/uah/from:1997/to:2013/trend/plot/none

Tom in Florida says:

January 22, 2014 at 7:07 pm

Steven Mosher says:

January 22, 2014 at 2:21 pm

“Heres a good one. In 1988 Hansens model predicted increasing temperatures under all scenarios. Although he got the magnitude wrong he got the direction right. ”

Getting the magnitude right is the critical component of any prediction. It is what makes people consider taking action. So it is easy to understand Hansen eagerly erring on the high side.

#############

So Tom, the model in this paper, by the author’s own admission, gets the direction right but the magnitude wrong.

Now, you have seen these guys attack Hansen’s model for getting the magnitude wrong

Falsified, they yell.

But when they publish a model, they forget their standards.

For me it’s easy. The standard I apply to Hansen says he’s wrong. I apply the same standard here and say: it’s also wrong. As a reviewer, why would I urge the publication of something that admits it’s wrong?

Why?

Well, there is an explanation. It’s not pretty.

In talking about using the Fourier Transform on the sunspot data and/or the model, this is hardly the “blunt instrument” Roger suggests (Jan 23, 12:14 am). But neither did Willis give any details (Jan 22, 6:34 pm). I assume we are talking about using the FFT of a time series.

First, above at 7:32 pm Jan 22, I mentioned that any “beat frequency” should not be in the spectrum. This is true, but I need to refine that comment. The reason relates to the use of the absolute value, which you would never do before taking a spectrum. In the math model, the underlying signals are clearly bipolar, being derived from cosines. But then they take an absolute value. I believe I have also seen suggestions (if not assertions) that every other cycle of actual sunspot data is (in some sense) flipped, and is accordingly sometimes “un-rectified” for proper interpretation. Thus while the cycle’s “bumps” have an 11-year periodicity, the fundamental period would be 22 years. The 11-year cycle is built into the math model, although as an artifact of the (non-linear) absolute value. By eye, if there are harmonics of the 22-year period in the actual data, they seem to be odd harmonics (i.e., a third harmonic of period 7.333 years).

It could well be that the “sign” of the alternating cycles is irrelevant to any astronomical consequences or implications for Earth’s climate. Does Nature “self-rectify” and disregard the sign? Quite possibly. But the absolute value has severe consequences for using Fourier Analysis.

Seeing an 11 year cycle and lower frequencies due to any amplitude variations (even beat-like effects) is consistent with using absolute value. Without this, we should expect a 22 year period largely devoid of lower frequency components.

This and many other issues with Fourier Analysis are familiar to electrical engineers from the early days of building power supplies to “envelope” extraction of speech and music.
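The rectification effect described above is easy to see numerically. A toy sketch of my own (a pure sinusoid, not sunspot data): take the absolute value before the FFT and the fundamental vanishes from the spectrum, replaced by DC and even harmonics.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 10, 1 / fs)     # 10 s record -> 0.1 Hz resolution
f0 = 5.0                         # stand-in for the "22-year" fundamental
x = np.sin(2 * np.pi * f0 * t)

spec_raw = np.abs(np.fft.rfft(x)) / (len(x) / 2)          # bipolar signal
spec_abs = np.abs(np.fft.rfft(np.abs(x))) / (len(x) / 2)  # rectified signal
freqs = np.fft.rfftfreq(len(x), 1 / fs)

def amp(spec, f):
    """Amplitude of the spectral line nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Before rectification: a single line at f0, amplitude 1.
# After: the f0 line is gone; energy sits at DC and the even
# harmonics 2*f0, 4*f0, ... (the "11-year bump" analogue is 2*f0).
```

This is exactly why an FFT of rectified data, like an FFT of the absolute-valued model output, shows the bump rate rather than the underlying bipolar fundamental.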

Bernie Hutchins says:

January 23, 2014 at 10:00 am

The reason relates to the use of the absolute value, which you would never do before taking a spectrum.

The sunspot number is strictly positive and there is no physical justification for introducing a sign that changes at sunspot minimum.

lsvalgaard said, January 23, 2014 at 10:08 am

“Bernie Hutchins says:

January 23, 2014 at 10:00 am

The reason relates to the use of the absolute value, which you would never do before taking a spectrum. ?

The sunspot number is strictly positive and there is no physical justification for introducing a sign that changes at sunspot minimum.”

Thanks Dr. Svalgaard. I kind of had the feeling you would be the one who would know. Agreed that the counting numbers are strictly positive. But I could suppose it might be like ocean tides: two a day for one rotation, and if you have a picnic on the beach, you are chased by the water twice a day in much the same (absolute!) way, despite a once-daily rotation.

You put a (?) after my sentence about never using absolute value before taking an FFT. My reason for stressing that is simply that it would, of course, give you the FFT of a different signal. For example, instead of a single sinusoid you would get DC plus even harmonics, with the original sinewave component gone completely. Taking the absolute value (technically the magnitude of a complex result) AFTER the FFT is common, rather than displaying real and imaginary parts (or keeping a separate phase display).

I am not by any stretch knowledgeable of, or a fan of, sunspot “counts”. (Is this particular spot one or two, and do we count this tiny one as much as this huge one? Etc.) Very noisy data at best? But I think that what I said (the cautions) about using the FFT is correct.

Again thanks for the reply.

Bernie Hutchins says:

January 23, 2014 at 12:03 pm

You put a (?) after my sentence about never using absolute value before taking an FFT.

The ? was by accident. About the cyclic nature of sunspot numbers: there is a qualitative difference between no spots and many spots. It is not just deviations from the mean number. Anyway, making the spot counts signed has no meaning.

Finally: sunspot counting is not all that subjective. Experienced observers count the same number of spots [with same telescope].

Leif –

Thanks – got it.

I was at a talk on AGW where one individual said that the skeptics were using silly notions – even counting sunspots – good for a laugh it seemed. The way he said it, he might have said tea-leaves or chicken entrails. For more than 50 years, I have known about sunspots, short-wave reception, the ionosphere, etc. This individual was a professor of electrical engineering specializing in upper atmospheric physics, space plasma physics, and radar. Nature is just subtle. People are perplexing!

Thanks for all your good work.

Bernie

tallbloke says:

January 23, 2014 at 12:14 am

No clue. Such fits are never valid out of sample for more than a cycle, and often not even for that.

Sure … as soon as Scafetta releases his code.

Or we could do it exactly as Salvador did in the paper discussed in the head post … here are my parameters:

X1

X2

X3

X4

X5

X6

X7

X8

X9

X10

X11

X12

X13

X14

X15

D1

D2

D3

D4

w.

cd:

Yeah I think, but I could be wrong, Willis was talking about the model, so we may all be talking at cross-purposes here.

The authors said “The model is a slowly changing chaotic system with patterns that are never repeated in exactly the same way,” and Willis correctly pointed out that the model is perfectly periodic. But then he contrasted “deterministic” with “chaotic”.

Matthew R Marler says:

January 23, 2014 at 2:38 pm

Thanks, Matt, you are right and I was wrong. My bad, I did use “deterministic” as an antonym of chaotic, foolishly forgetting that a model can be both deterministic (future state totally predictable) and chaotic.

Having said that, the Salvador model is not chaotic in the slightest.

w.

Steven Mosher says:

January 23, 2014 at 9:40 am

Thank you Mosh, words to live by. Couldn’t say it better.

That’s been the most depressing part of this episode to me, people who are defending junk science and hiding data and code, stuff they’d condemn in a heartbeat if some alarmist was doing it. Sauce for the goose, sauce for the gander.

Regards,

w.

The model is a slowly changing chaotic system with patterns that are never repeated in exactly the same way.

After studying that model for a while, I have changed my mind about it being perfectly periodic. The scales of the time arguments of the outer cosines are constantly changing.

Paul Vaughan says:

January 23, 2014 at 4:51 am

The problem that I had with the longer-term historical claims is the use of the Solanki data to define the minima. Among other oddities, it has negative numbers of sunspots at certain points … and it doesn’t fit for beans with the modern data. I probably should do a post on that at some point; there’s another important oddity with the data.

w.

Willis writes: “past performance is assuredly no guarantee of future success.”

Also applies to any “constant” used in a climate model that has been derived directly from observation. One might ask oneself whether that “constant” would likely be the same in, say, an ice age. If it’s unlikely to be true, then its use to predict a future changed climate is no better than this paper’s method.

Greg writes “Notwithstanding the above, I agree with Willis’ view that this is largely curve fitting. However, the out of data tests do suggest there may be something worth further study.”

Agreed. Any interesting correlations are a great starting point. Unfortunately, science is losing its way, and more and more people are treating them as end points, drawing unsupportable conclusions from them.