Congenital Cyclomania Redux

Guest Post by Willis Eschenbach

Well, I wasn’t going to mention this paper, but it seems to be getting some play in the blogosphere. Our friend Nicola Scafetta is back again, this time with a paper called “Solar and planetary oscillation control on climate change: hind-cast, forecast and a comparison with the CMIP5 GCMs”. He’s posted it up over at Tallbloke’s Talkshop. Since I’m banned over at Tallbloke’s, I thought I’d discuss it here. The paper itself is here; take your Dramamine before jumping on board. Dr. Scafetta has posted here on WUWT several times before, each time with his latest, greatest, new improved model. Here’s how well Scafetta’s even more latester, greatester new model hindcasts, as well as what it predicts, compared with HadCRUT4:

scafetta harmonic variability

Figure 1. Figure 16A from Scafetta 2013. This shows his harmonic model alone (black), plus his model added to the average of the CMIP5 models following three different future “Representative Concentration Pathways”, or RCPs. The RCPs give various specified future concentrations of greenhouse gases. HadCRUT4 global surface temperature (GST) is in gray.

So far, in each of his previous three posts on WUWT, Dr. Scafetta has said that the Earth’s surface temperature is ruled by a different combination of cycles depending on the post:

First Post: 20 and 60 year cycles. These were supposed to be related to some astronomical cycles which were never made clear, although there was much mumbling about Jupiter and Saturn.

Second Post: 9.1, 10-11, 20 and 60 year cycles. Here are the claims made for these cycles:

9.1 years: this was justified as being sort of near to a calculation of (2X+Y)/4, where X and Y are lunar precession cycles.

“10-11” years: he never said where he got this one, or why it’s so vague.

20 years: supposedly close to an average of the sun’s barycentric velocity period.

60 years: kinda like three times the synodic period of Jupiter/Saturn. Why three times? Why not?

Third Post: 9.98, 10.9, and 11.86 year cycles. These are claimed to be:

9.98 years: slightly different from a long-term average of the spring tidal period of Jupiter and Saturn.

10.9 years: may be related to a quasi 11-year solar cycle … or not.

11.86 years: Jupiter’s sidereal period.

The latest post, however, is simply unbeatable. It has no less than six different cycles, with periods of 9.1, 10.2, 21, 61, 115, and 983 years. I haven’t dared inquire too closely as to the antecedents of those choices, although I do love the “3” in the 983 year cycle. Plus there’s a mystery ingredient, of course.

Seriously, he’s adding together six different cycles. Órale, that’s a lot! Now, each of those cycles has three different parameters that totally define the cycle. These are the period (wavelength), the amplitude (size), and the phase (starting point in time) of the cycle.

This means that Scafetta is not only exercising free choice in the number of cycles that he includes (in this case six); he also has free choice over the three parameters for each cycle (period, amplitude, and phase). That gives him no less than 18 separate tunable parameters.

Just roll that around in your mouth and taste it, “eighteen tunable parameters”. Is there anything that you couldn’t hindcast given 18 different tunable parameters?

Anyhow, if I were handing out awards, I’d certainly give him the first award for having eighteen arbitrary parameters. But then, I’d have to give him another award for his mystery ingredient.

Because, of all things, the mystery ingredient in Scafetta’s equation is the average hindcast (and forecast) modeled temperature of the CMIP5 climate models. Plus the mystery ingredient comes with its own amplitude parameter (0.45), along with a hidden parameter for the zero point of the average model temperatures before being multiplied by the amplitude parameter. So that makes twenty different adjustable parameters.

Now, I don’t even know what to say about this method. I’m dumbfounded. He’s starting with the average of the CMIP5 climate models, adjusted by an amplitude parameter and a zeroing parameter. Then he’s figuring the deviations from that adjusted average model result based on his separate 6-cycle, 18-parameter model. The sum of the two is his prediction. I truly lack words to describe that; it’s such an awesome logical jump that I can only shake my head in awe at the daring trapeze leaps of faith …
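Just to make the bookkeeping concrete, here is a minimal sketch, in Python, of what that kind of fit looks like. To be clear, this is my own toy, not Scafetta’s code: the temperature series and the “CMIP5 mean” below are synthetic stand-ins, and the starting values are simply the six periods named in the paper. The only point is to count the knobs.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-ins: an annual time axis and a smooth "CMIP5 multi-model
# mean" ramp. The real HadCRUT4 and CMIP5 series would be loaded here instead.
years = np.arange(1850.0, 2014.0)
cmip5_mean = 0.005 * (years - 1850.0)

def harmonic_plus_models(t, *p):
    """Six sinusoids (period, amplitude, phase for each) plus the CMIP5 mean
    rescaled by an amplitude and re-zeroed by an offset: 6*3 + 2 = 20 knobs."""
    scale, offset = p[18], p[19]
    out = scale * (np.interp(t, years, cmip5_mean) - offset)
    for i in range(6):
        period, amp, phase = p[3 * i:3 * i + 3]
        out = out + amp * np.sin(2.0 * np.pi * (t - phase) / period)
    return out

# Starting guesses: the six periods named in the paper, arbitrary small
# amplitudes, zero phases, then the 0.45 amplitude and a zero offset.
p0 = [9.1, 0.05, 0.0, 10.2, 0.05, 0.0, 21.0, 0.05, 0.0,
      61.0, 0.05, 0.0, 115.0, 0.05, 0.0, 983.0, 0.05, 0.0, 0.45, 0.0]

# Fake "observations" built from the model itself plus noise, just to show
# that the fit runs and that there really are twenty knobs to turn.
rng = np.random.default_rng(1)
obs = harmonic_plus_models(years, *p0) + rng.normal(0.0, 0.05, years.size)

popt, _ = curve_fit(harmonic_plus_models, years, obs, p0=p0, maxfev=20000)
print(len(popt), "fitted parameters")   # twenty of them
```

Twenty knobs, exactly as advertised.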

I suppose at this point I need to quote once again the story of Freeman Dyson, Enrico Fermi, “Johnny” von Neumann, and the elephant. Here is Freeman Dyson, telling the tale of tragedy:

By the spring of 1953, after heroic efforts, we had plotted theoretical graphs of meson–proton scattering. We joyfully observed that our calculated numbers agreed pretty well with Fermi’s measured numbers. So I made an appointment to meet with Fermi and show him our results. Proudly, I rode the Greyhound bus from Ithaca to Chicago with a package of our theoretical graphs to show to Fermi.

When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old. Then he delivered his verdict in a quiet, even voice.

“There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”

I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism. He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.

Given that lesson from Dyson, and bearing in mind that Scafetta is using a total of 20 arbitrary parameters … are we supposed to be surprised that Nicola can make an elephant wiggle his trunk? Heck, with that many parameters, he should be able to make that sucker tap dance and spit pickle juice …

Now, you can expect that if Nicola Scafetta shows up, he will argue that somehow the 20 different parameters are not arbitrary, oh, no, they are fixed by the celestial processes. He will likely put forward the same kind of half-ast-ronomical explanations he’s used before: that this one represents (2X+Y)/4, where X and Y are lunar precession cycles, or that another one’s 60 year cycle is kind of near three times the synodic period of Jupiter and Saturn (59.5766 years) and close is good enough, that kind of thing. Or perhaps he’ll make the argument that Fourier analysis shows peaks that are sort of near to his chosen numbers, and that’s all that’s needed.

The reality is, if you give me a period in years, I can soon come up with several astronomical cycles that can be added, subtracted, and divided to give you something very near the period you’ve given me … which proves nothing.
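Don’t believe me? Here’s a throwaway sketch of the game, using nothing more than the familiar sidereal orbital periods of the planets (the values below are the usual approximate figures). Hand it any target period you like and it will cheerfully serve up “celestial” recipes for it.

```python
import itertools

# Approximate sidereal orbital periods of the planets, in years.
periods = {"Mercury": 0.241, "Venus": 0.615, "Earth": 1.0, "Mars": 1.881,
           "Jupiter": 11.862, "Saturn": 29.457, "Uranus": 84.01, "Neptune": 164.8}

def nearby_combinations(target, tol=0.02):
    """Brute-force small integer combinations (a*P1 + b*P2)/n of two planetary
    periods and keep any that land within a fraction `tol` of the target."""
    hits = []
    for (name1, p1), (name2, p2) in itertools.combinations(periods.items(), 2):
        for a, b, n in itertools.product(range(-4, 5), range(-4, 5), range(1, 5)):
            if a == 0 or b == 0:
                continue
            value = (a * p1 + b * p2) / n
            if value > 0 and abs(value - target) / target < tol:
                hits.append((value, f"({a}*{name1} + {b}*{name2}) / {n}"))
    return sorted(hits, key=lambda h: abs(h[0] - target))

# Pick any period you like; a "celestial" recipe for it is never far away.
for value, recipe in nearby_combinations(60.0)[:5]:
    print(f"{value:6.2f} yr  =  {recipe}")
```

Run it for 60 years, or 9.1, or 115: it finds something every time, which is exactly why finding something proves nothing.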

Scafetta has free choice of how many cycles to include, and free choice as to the length, amplitude, and phase of each of those cycles. And even if he can show that the length of one of his cycles is EXACTLY equal to some astronomical constant, not just kind of near it, he still has totally free choice of phase and amplitude for that cycle. So to date, he’s the leading contender for the 2013 Johnny von Neumann award, which is given for the most tunable parameters in any scientific study.

The other award I’d give this paper would be for Scafetta’s magical Figure 11, which I reproduce below in all its original glory.

kepler trigon II

Figure 2. Scafetta’s Figure 11 (click to enlarge) ORIGINAL CAPTION: (Left) Schematic representation of the rise and fall of several civilizations since Neolithic times that well correlates with the 14C radionucleotide records used for estimating solar activity (adapted from Eddy’s figures in Refs. [90, 91]). Correlated solar-climate multisecular and millennial patterns are recently confirmed [43, 44, 47]. (Right) Kepler’s Trigon diagram of the great Jupiter and Saturn conjunctions between 1583 to 1763 [89], highlighting 20 year and 60 year astronomical cycles, and a slow millennial rotation.

First off, does that graphic, Figure 11 in Scafetta’s opus, make you feel better or worse about Dr. Scafetta’s claims? Does it give you that warm fuzzy feeling about his science? And why are Kepler’s features smooshed out sideways and his fingers so long? At least let me give the poor fellow back his original physiognomy.

kepler painting

There, that’s better. Next, you need to consider the stepwise changes he shows in “carbon 14”, and the square-wave nature of the advance and retreat of alpine glaciers at the lower left. That in itself was good; I hadn’t realized that the glaciers advanced and retreated in that regular a fashion, or that carbon 14 was unchanged for years before and after each shift in concentration. And I did appreciate that there were no units for any of the four separate graphs on the page; that counted heavily in his favor. But what I awarded him full style points for was the seamless segue from alpine glaciers to the “winter severity index” in the year 1000 … that was a breathtaking leap.

And as you might expect from a man citing Kepler, Scafetta treats scientific information like fine wine; he doesn’t want anything of recent vintage. Apparently on his planet you have to let science mellow for some decades before you bring it out to breathe … and in that regard, I direct your attention to the citation in the bottom center of his Figure 11, “Source: Geophysical Data, J. B. Eddy (USA) 1978”. (Thanks to Nicola for the correction; the print was too small for me to read.)

Where he stepped up to the big leagues, though, is in the top line in the chart. Click on the chart to enlarge it if you haven’t done so yet, so you can see all the amazing details. The “Sumeric Maximum”, the collapse of Machu Picchu, the “Greek Minimum”, the end of the Maya civilization, the “Pyramid Maximum” … talk about being “Homeric in scope”, he’s even got the “Homeric Minimum”.

Finally, he highlights the “20 year and 60 year astronomical cycles” in Kepler’s chart at the right. In fact, what he calls the “20 year” cycles shown in Kepler’s dates at the right vary from 10 to 30 years according to Kepler’s own figures shown inside the circle, and what he calls the “60 year astronomical cycles” include cycles from 50 to 70 years …

In any case, I’m posting all of this because I just thought folks might like to know of Nicola Scafetta’s latest stunning success. Using a mere six cycles and only twenty tunable parameters plus the average of a bunch of climate models, he has emulated the historical record with pretty darn good accuracy.

And now that he has explained just exactly how to predict the climate into the future, I guess the only mystery left is what he’ll do for an encore performance. Because this most recent paper of his will be very hard to top.

In all seriousness, however, let me make my position clear.

Are there cycles in the climate? Yes, there are cycles. However, they are not regular, clockwork cycles like those of Jupiter and Saturn. Instead, one cycle will appear, hang around for a while, and then disappear, to be replaced by some longer or shorter cycle. It is maddening and frustrating, but that’s the chaotic nature of the beast. The Pacific Decadal Oscillation doesn’t beat like a clock, nor does El Niño or the Madden-Julian oscillation or any other climate phenomenon.

What is the longest cycle that can be detected in a hundred year dataset? My rule of thumb is that even if I have two full cycles, my results are too uncertain to lean on. I want three cycles so I can at least get a sense about the variation. So for a hundred year dataset, any cycle over fifty years in length is a non-starter, and thirty-three years and shorter is what I will start to trust.
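Here’s a quick sketch of one reason why. Feed a hundred years of cycle-free red noise (a simple AR(1) process, nothing but persistence) to a periodogram, and the “strongest cycles” it reports tend to be the longest ones the record can barely hold.

```python
import numpy as np
from scipy.signal import periodogram

# A hundred years of pure AR(1) "red" noise: persistence, but no cycles at all.
rng = np.random.default_rng(42)
n, phi = 100, 0.7
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

# Raw periodogram of this single 100-year realization, one sample per year.
freq, power = periodogram(x, fs=1.0)
strongest = np.argsort(power[1:])[::-1][:3] + 1   # skip the zero frequency
for i in strongest:
    print(f"apparent 'cycle' of about {1.0 / freq[i]:.0f} years")
```

There are no cycles in that data at all, yet a naive spectral analysis will usually hand you a few long “periodicities” that will never repeat out of sample.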

Can you successfully hindcast temperatures using other cycles than the ones Scafetta uses? Certainly. He has demonstrated that himself, as this is the fourth combination of arbitrarily chosen cycles that he has used. Note that in each case he has claimed the model was successful. This by no means exhausts the possible cycle combinations that can successfully emulate the historical temperature.

Does Scafetta’s accomplishment mean anything? Sure. It means that with six cycles and no less than twenty tunable parameters, you can do just about anything. Other than that, no. It is meaningless.

Could he actually test his findings? Sure, and I’ve suggested it to him. What you need to do is run the analysis again, but this time using the data from, say, 1910 to 1959 only. Derive your 20 fitted variables using this data alone.

Then test your 20 fitted variables against the data from 1960 to 2009, and see how the variables pan out.

Then do it the other way around. Train the model on the later data, and see how well it does on the early data. It’s not hard to do. He knows how to do it. But if he has ever done it, I have not seen anywhere that he has reported the results.
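For what it’s worth, here is a bare-bones sketch of that test in Python. It isn’t Scafetta’s code, obviously; you would have to drop in the real HadCRUT4 annual data and whatever model is being tested (the twenty-knob toy from the earlier sketch will do for practice). The mechanics really are this simple.

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_validate(model, years, temps, p0, split_year=1960):
    """Fit `model` on the years before `split_year`, score the frozen
    parameters on the years after it, then do the same the other way round.
    Returns the out-of-sample RMS error for each direction."""
    early = years < split_year
    scores = {}
    for label, train, test in [("trained early, tested late", early, ~early),
                               ("trained late, tested early", ~early, early)]:
        popt, _ = curve_fit(model, years[train], temps[train], p0=p0, maxfev=20000)
        resid = temps[test] - model(years[test], *popt)
        scores[label] = float(np.sqrt(np.mean(resid ** 2)))
    return scores
```

If the out-of-sample error balloons compared to the in-sample fit, you’re looking at curve fitting, not physics.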

How do I know all this? Folks, I can’t tell you how many late nights I’ve spent trying to fit any number and combination of cycles to the historical climate data. I’ve used Fourier analysis and periodicity analysis and machine-learning algorithms and wavelets and stuff I’ve invented myself. Whenever I’ve thought I had something, as soon as it leaves the training data and starts on the out-of-sample data, it starts to diverge from reality. And of course, the divergence increases over time.

But that’s simply the same truth we all know about computer weather forecasting programs—out-of-sample, they don’t do all that well, and quickly become little better than a coin flip.

Finally, even if the cycles fit the data and we ignore the ridiculous number of arbitrary parameters, where is the physical mechanism connecting some (2X+Y)/4 combination of two astronomical cycles and the climate? As Enrico Fermi pointed out, you need to have either “a clear physical picture of the process that you are calculating” or “a precise and self-consistent mathematical formalism”.

w.

PS—Please don’t write in to say that although Nicola is wrong, you have the proper combination of cycles, based on your special calculations. Also, please don’t try to explain how a cycle of 21 years is really, really similar to the Jupiter-Saturn synodic cycle of 19+ years. I’m not buying cycles of any kind, motorcycles, epicycles, solar cycles, bicycles, circadian cycles, nothing. Sorry. Save them for some other post, they won’t go bad, but please don’t post them here.

461 Comments
milodonharlani
July 24, 2013 11:01 am

Salvatore Del Prete says:
July 24, 2013 at 10:48 am
Science cannot now predict precisely when the next glaciation in the cycles of Pleistocene ice sheet formation will occur, but nevertheless another one will happen within 10,000 years & probably much sooner than that.
The Holocene so far has been a cooler than average interglacial, & for at least the past 3000 years has been getting colder, despite its climb out of the LIA for the past 150 or so years.
I don’t know if an exhaustive list of confirmed or plausible causes of the observed cycles has been compiled, but they fall into two categories: terrestrial & extra-terrestrial.
Among the former are plate tectonics, oceanic circulation, water, GHGs besides water vapor, organisms & other known & unknown factors. Proposals among the latter are the solar outputs, planets, moon, solar system dust, cosmic rays, position of the sun in its orbit of the galactic center, etc.
Earth’s climate is a complex system. No surprise that GIGO computer models of it suck & lack predictive skill.

July 24, 2013 11:08 am

I am confident that the present prolonged solar minimum period is going to once again result in lower temperatures going forward, with a definite downward trend starting in year 2014, once the weak maximum of solar cycle 24 passes by.
The question is not if the temperatures are going to decline but rather how they decline and by how much.
Again that will depend on thresholds which are out there being met or approached, but what does it take to meet a particular threshold that could set up a positive climatic feedback which would overwhelm the inherent negative climatic feedbacks in the earth climatic system?
I think by decade end much will be resolved, from proving the AGW theory is BS to showing a definite solar and geomagnetic climate relationship. If we’re lucky, possible thresholds will be revealed as well.
Time will tell.

Mark Bofill
July 24, 2013 11:50 am

I’ve started looking at this paper in my spare minutes here and there. So far:
Started by reading Scafetta_EE_2013.pdf through to get a general idea of paper.
1. Intro tells us:
   1. GST has increased 0.8-0.85 C (0.5-0.55 C since the ’70s), CO2 has been increasing in the atmosphere; this is AGWT.
   2. AGWT and GCMs crap out against the record, so new theory by Scafetta.
2. Sources of uncertainties in GCMs. Ok, don’t care right now.
3. AGWT agrees only with outdated hockey stick reconstructions. Ok, don’t care right now.
4. DECADAL AND MULTIDECADAL CLIMATIC OSCILLATIONS ARE SYNCHRONOUS TO MAJOR ASTRONOMICAL CYCLES — kerblamo, found something good.
   1. Geophysical systems are characterized by oscillations at multiple time scales, from a few hours to hundreds of thousands and millions of years [65]. ok good.
   2. Oscillations found in global and regional temp recs, in ocean oscillations, & solar proxy records. fine.
Figure 5A shows HADCRUT4 GST after detrending (subtracting 0.0000297*(t-1850)^2 - 0.384), from 1850 to 2100. Looks nice and sinusoidal. ****This shouldn’t be hard to reproduce**** (a rough sketch of the detrending step is at the end of this comment) but for details I must see Scafetta [8].
Hmm. Scafetta [8] appears to be ‘Scafetta, N., 2010. Empirical evidence for a celestial origin of the climate oscillations and its implications. Journal of Atmospheric and Solar-Terrestrial Physics 72, 951-970.’. Good, I can get this here (http://people.duke.edu/~ns2002/pdf/ATP3162.pdf).
I skim through and see fig. 3. AhHa! I think to myself. This looks good, power spectrum estimates of global temperature, showing 60, 20, and 9 year cycles. (I’ve become sidetracked, I know) This must be an important source of what Nicola is talking about. I presume we get this from discrete Fourier transform of the global temperature record, but the reference is to Ghil et al., 2002. The caption tells me that the maximum entropy method and the multi taper methods were used. OK, time to look at Ghil et. al. 2002, title ‘ADVANCED SPECTRAL METHODS FOR CLIMATIC TIME SERIES’. I google this and find a link (www.atmos.umd.edu/~ide/data/research/…/ssa_revgeophys02.pdf‎)
in ssa_revgeophys02.pdf,
I see section 3.4 Multitaper Method. The discussion under 3.2 Classic Spectral Estimates mentions discrete Fourier transforms and the power leakage / systematic distortion that results, so I begin to think, uh oh, multitaper and maximum entropy must be ways to improve on well known discrete Fourier transforms.
Ok, at this point I pause to report and put in my two cents.
If we were talking about simply performing the classic discrete Fourier transform on HADCRUT data to look at the power spectrum, I’d be more inclined to agree that nobody needs to be walked through this. But now it seems we’re using more advanced methods. I don’t do this sort of analysis, so for all I know these are well known, widely used techniques. Still, this causes me to lean towards the ‘show your work’ end of the argument. Right or wrong, I think I can find 10 guys who know what a discrete Fourier transform is for every guy you can find me who knows about multitaper and/or maximum entropy, although I could be wrong.
Further, I’m three papers deep trying to follow what’s going on at this point and I haven’t gotten past the first figure in the original paper. A walkthrough of the core calculations would save considerable pain.
Just my two cents so far.
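In case anyone else wants to poke at Figure 5A, here is a bare-bones sketch of the detrending step, with two caveats: the paper’s spectra come from the multitaper and maximum-entropy methods of Ghil et al. 2002 (the SSA-MTM Toolkit), so the plain periodogram below is only a rough stand-in, and the temperature series here is a synthetic placeholder rather than the real HadCRUT4 data.

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic stand-in for annual HadCRUT4 global means, 1850-2012; the real
# series would be loaded here instead (this is just a warming ramp plus noise).
years = np.arange(1850, 2013, dtype=float)
rng = np.random.default_rng(7)
gst = 0.0000297 * (years - 1850) ** 2 - 0.384 + rng.normal(0.0, 0.1, years.size)

# The quadratic detrend quoted for Scafetta's Figure 5A.
detrended = gst - (0.0000297 * (years - 1850) ** 2 - 0.384)

# Crude stand-in for the multitaper / maximum-entropy spectra in Scafetta [8]:
# an ordinary periodogram, one sample per year.
freq, power = periodogram(detrended, fs=1.0)
for i in np.argsort(power[1:])[::-1][:5] + 1:
    print(f"spectral peak near {1.0 / freq[i]:5.1f} yr")
```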

July 24, 2013 12:14 pm

Time will tell. Again I think it is the magnetic field strength of the sun and all the secondary effects that are associated with it.
Again the beginning state of the climate has much to do with the end results.
What drives the earth climatic system? Answer: the sun. That does not mean any change on the sun will affect the climate in a significant way, or have the same effect on the climate each time it happens, but what it does mean is that a change with the sun which is strong enough in magnitude and duration should have an effect on the climatic system. It must, since the sun is the DRIVER of the climatic system.
This present prolonged solar minimum is the first time since the Dalton Minimum where it looks like the changes on the sun will be large enough in magnitude/duration to have a climatic impact.
Past history clearly shows that each time the sun has gone into a prolonged minimum state the temperatures have gone down, while each time the sun has been in a very active state the temperatures have gone up. How much clearer could it be?
If you read the first post I posted you will see I mention many solar parameters that have to be met and the many secondary effects I think will result. I will restate the parameters.
Until they are met no one can say if what I am saying is correct or not correct.
LUMINOSITY OF THE SUN VERSUS TEMPERATURE
That is not going to hold up because it is also related to the initial state of the earth in regards to the climate situation (geomagnetic field strength, atmospheric composition, etc.) and the secondary effects from a change in the sun.
This decade I feel is going to clear up once and for all how much of an effect solar magnetic field strength will have on the climate.
PARAMETERS NEEDED
SOLAR FLUX SUB 90
SOLAR WIND 350 KM/SEC OR LESS
AP INDEX 5.0 OR LESS
SOLAR IRRADIANCE OFF BY .2% OR MORE
UV EXTREME SHORT WAVELENGTH OFF UPWARDS OF 50%
Those above conditions following a period of sub-solar activity in general of at least 5 to 10 years, and lasting in duration over a significant period of time (2 plus years). The present prolonged solar minimum started in year 2005, as a reference.
These conditions have not been met since the Dalton Solar MINIMUM, so at best it is very premature to say solar changes have no effect on climate, since the solar conditions needed have yet to occur since the Dalton Solar Minimum.
In my opinion the factors keeping temperatures from falling more since 2005 were the weak maximum of solar cycle 24, the ocean heat content which rose last century in response to strong solar activity, and the lack of accumulation of years of sub-solar activity. This is all fading away as we head deeper into this decade. Time will tell.

July 24, 2013 12:20 pm

Clive E. Birkland says:
July 24, 2013 at 7:50 am
What is clear is that none of the procrastinators know what Landscheidt used to predict grand minima. If you do not know this basic piece of information, you know nothing.
But we, and you, do know this. Because Theo told us:
Dear Theo makes the observation that a big hand is 180 years and that half a big hand is 90 years. With that in hand, so to speak, he says “Forecasting is easy. The next minimum is expected about 2007, and the next peak about 2021. Consequently, the observation of golden-section points within cycles seems to be essential.” Furthermore, “The vertical dotted line marks the starting phase (1933) of a big hand. This dynamically fundamental period coincided with establishment of Stalin’s and Hitler’s dictatorship,” which seems to be important as well. Going forward, the next minimum in the 90-year sunspot cycle (half of a “big hand” cycle of 180 years) is expected about the year 2026. So an essential feature is, as he says, that ‘Forecasting is easy’: just count forward half a big hand from the rise of Hitler.

July 24, 2013 12:26 pm

Thanks, Mark.
I feel your pain; up to my chin in supposedly supporting papers but still struggling to get past the beginning of the paper at hand.

July 24, 2013 12:31 pm

I do predict that many of the questions and different opinions that have been presented on this board will be answered, one way or the other, before this decade ends.
We will know who is correct and who is not correct.

July 24, 2013 12:46 pm

William, contrary to your opinion the news is very good in regards to lower temperatures and the Dalton Solar Minimum.
If you try to use an increase in volcanic activity as a scapegoat, it will not fly, since an increase in geological activity is one of the many secondary effects associated with a prolonged solar minimum period.
Another very significant secondary effect of a prolonged solar minimum period is a more -AO/NAO.
Evidence for this type of atmospheric circulation is very strong for both the Maunder Minimum and the Dalton Minimum.
One of the problems of those that want to dismiss the sun is they do not know, or do not want to concede, the secondary effects. An increase in cosmic rays means more clouds, yet another example.
It is not just solar irradiance changes although that is part of it.

July 24, 2013 12:47 pm

Willis
Re your comment
Not to worry, I find discord of opinions stimulating, so back to the cycling.
– As far as I can ascertain, it all depends on the length of the solar magnetic cycle; the AMO length may vary anywhere between 55 and 70 years, so it is an irregular oscillation, which leaves a deep imprint on the N. Hemisphere’s and, by implication, on the global temperature data. For the last hundred years, based on the solar periodicity (as agreed with Dr. S) the AMO would average around 64.5 years.
My reconstruction for the AMO, based on the solar cycles starting at 1700, gives the following approximate periods: 66, 54, 58 & 64 years (peak to peak). Surprisingly this does make an average of about 60 years for 4 AMO cycles (this is the first time I actually looked at it); I would expect that an astronomical factor should be more regular.
The AMO troughs are relatively uniform at -0.25C, while the peaks vary from +0.1C at the time of Dalton to about 0.3C for the current one.
In my calculations the AMO shows a lag of about 15 years; I am also aware that the CO2 people have considered a similar number, but associated with totally different factors.
Thus no long-term prediction is possible using the geomagnetics.
– I have come to think that the geomagnetic change may be just a proxy for the tectonics of the N. Atlantic; however, having recently seen some comments about the HAARP experiment, it may be possible, although far-fetched, that the combination of solar and geomagnetic field interactions over the Arctic, by moving the altitude of the magnetosphere, may affect the Arctic’s atmospheric pressure oscillations.
Both of the above are regularly dismissed as nonsense by our good doc Svalgaard, which indeed they may be, but that doesn’t bother me too greatly; I just record what is in there, make a few observations, and from time to time contradict my previous findings.
The only thing I believe is that ‘nature is ruled by cause and consequence’, and I do not subscribe to the chaos theory.

John West
July 24, 2013 1:12 pm

Willis Eschenbach says:
July 24, 2013 at 11:20 am
WOW!
If I were the referee you’d just be declared the winner.
I hope everybody reads that. I may have to read it daily myself; my brain so badly wants to see a cycle, or at least an amalgam of cycles, in the temperature record, but alas it’s probably a wild goose chase.

milodonharlani
July 24, 2013 1:22 pm

John West says:
July 24, 2013 at 1:12 pm
Does this proxy temperature record look like cycles to you, or merely chaotic, random noise?
http://en.wikipedia.org/wiki/File:Ice_Age_Temperature.png
I’m seeing cycles, but maybe that’s just my eye & brain playing tricks on me.

F. Ross
July 24, 2013 2:30 pm

LdB says:
July 24, 2013 at 10:37 am
@Tim Folkerts
Tim if the sun was transported into a wormhole and disappeared right now you would not know about it for 8 min and 20 sec. You would still have light and the earth would still act as if the sun was there.
The gravity earth experiences right now, right here, from the sun was relative to its position 8 minutes ago. In the 8 minutes everything moved; the effect is massive; you can’t say Newtonian gravity is okay, it isn’t, when trying to do this.

@LdB
It may be heresy to point out this study, but please see the following partial Abstract (and the associated link to the complete study) with regard to gravitation’s propagation speed. I do not know which is right, or even if I understand the finer distinctions, but the study does seem to speak with some authority.
If it is just BS, sorry my bad.

http://ldolphin.org/vanFlandern/gravityspeed.html
“The Speed of Gravity – What the Experiments Say
Tom Van Flandern [as published in Physics Letters A 250:1-11 (1998)]
Meta Research
tomvf@metaresearch.org
Abstract.
Standard experimental techniques exist to determine the propagation speed of forces. When we apply these techniques to gravity, they all yield propagation speeds too great to measure, substantially faster than lightspeed. This is because gravity, in contrast to light, has no detectable aberration or propagation delay for its action, even for cases (such as binary pulsars) where sources of gravity accelerate significantly during the light time from source to target. By contrast, the finite propagation speed of light causes radiation pressure forces to have a non-radial component causing orbits to decay (the “Poynting-Robertson effect”); but gravity has no counterpart force proportional to first order. General relativity (GR) explains these features by suggesting that gravitation (unlike electromagnetic forces) is a pure geometric effect of curved space-time, not a force of nature that propagates. Gravitational radiation, which surely does propagate at lightspeed but is a fifth order effect in v/c, is too small to play a role in explaining this difference in behavior between gravity and ordinary forces of nature. …
The most amazing thing I was taught as a graduate student of celestial mechanics at Yale in the 1960s was that all gravitational interactions between bodies in all dynamical systems had to be taken as instantaneous.
…”

Mark Bofill
July 24, 2013 2:39 pm

F. Ross says:
July 24, 2013 at 2:30 pm
————-
I don’t know if that’s correct or not. Sort of cool if it is. I can send faster-than-light messages; all I need to do is transform really massive amounts of matter into energy and back again to implement the gravity telegraph!
🙂

July 24, 2013 2:58 pm

Anthony,
which data are you talking about? Which code?
why don’t you try to read my paper with an open mind and point clearly to the “mysterious” data you want?
Give some specific “name” to your requests, Anthony.
As I said, all data, equations and math analysis are available and clearly stated in the paper.
See Anthony, if you want not to be fooled, the only way is to study and learn by yourself instead of just believing in Willis, or in me or in somebody else.
You are too easily forgetting that my papers are peer-reviewed by at least two specialists in the field plus the editors, who are experienced scientists. And I have published many papers in different scientific journals.
If you do not want to be fooled and at the same time you do not want to study and learn by yourself, instead of blindly trusting an individual such as Willis who has no scientific credentials, you should trust the scientific process itself.
If you think that Willis is at the level or above the scientific process you need to invite him to write a professional comment at the journal so that his opinions may be properly addressed by the scientific process itself.
If you agree that Willis is below the scientific process, you cannot trust him when a professional scientist tells you that Willis is making up things because he does not have a sufficient understanding of science and does not have the humility of learning.
If you do not want to trust anybody, then in science you need to trust “nature” itself.
I did a forecast two years ago. Did you forget it?
When I proposed it, Leif immediately stated that the data contradicted it already.
Well it was not true. My forecast is here
http://people.duke.edu/~ns2002/#astronomical_model_1
against the updated temperature. As you see the things are going well up to now.
If you believe in Willis more than in Nature, you are acting outside the scientific realm.

Gary Hladik
July 24, 2013 3:22 pm

“Because, of all things, the mystery ingredient in Scafetta’s equation is…”
Or, maybe there is no secret ingredient?

July 24, 2013 3:47 pm

F. Ross, thanks for the link to Tom Van Flandern’s article; interesting proposal for adoption of Lorentzian Relativity to replace Einstein’s Special Relativity.
Gravity would then propagate at a speed not less than 2×10^10 c.
That sounds fast!

Mark Bofill
July 24, 2013 3:53 pm

Dr. Scafetta,
I’ve been reading your paper, although I have not had nearly as much time to devote to it as I would like. In following the papers your paper referred to, I came across ‘Scafetta, N., 2010. Empirical evidence for a celestial origin of the climate oscillations and its implications. Journal of Atmospheric and Solar-Terrestrial Physics 72, 951-970.’ with a very interesting graph (figure 3). I’ve gathered that the methods used to produce the graph (multitaper and maximum entropy) are explained in Ghil et. al. 2002, ‘ADVANCED SPECTRAL METHODS FOR CLIMATIC TIME SERIES’.
I was wondering what implementation of the algorithms described in Ghil et. al. 2002 you used. For example, is there a programming library or a set of routines from some other source that you used? Did you perhaps instead code your own implementation? It seems unlikely that you computed these values by hand. Could you point me to this code, utility, or program?
Thanks,
Mark Bofill

Mark Bofill
July 24, 2013 4:10 pm

Dr. Scafetta,
I most humbly beg your pardon, I see that you have already given me my answer, here:

You may even use popular tools of analysis such as the Singular Spectrum Analysis – MultiTaper Method (SSA-MTM) Toolkit.

I apologize for missing that.
Regards,
Mark Bofill

LdB
July 24, 2013 4:34 pm

@Willis and others
=> Relativity is in the “true but not relevant” pile for the subject under discussion. The relative speeds of the objects are far, far below the range where relativistic effects rise above the level of noise
You are completely wrong; the induced error is completely chaotic, it adds up every second, and it becomes massive very quickly.
You want to see the effect at play? Scientifically, program a model car to follow a sine wave and then introduce a small, tiny error via a pseudo-random number generator.
Now your challenge: try and write a program to track the motion of the model car ….. good luck, you are going to need more than Scafetta’s 20 parameters.
Do you see the problem? The error is not constant, and that’s what SR does to the problem: the error is small, but the longer you run the timespan, the more the problem compounds for every second.
That is the problem in space: you can actually get totally lost, because there are systematic and compounding chaotic errors …. ask NASA.
I am sorry Willis, but you are wrong about that statement, and you can see why if you think about it.

LdB
July 24, 2013 4:47 pm

@Willis
I should say there is a way to sort of try and reduce down the search of where the car is likely to be in the above example. You draw the programmed sine wave, then you add a line at plus and minus the random error you add, multiplied by time.
So what you end up with is a triangle of ever-increasing size, and the car will be somewhere in that range, between the two limits bounded by the triangle, at each time along the path.
The obvious problem is that the uncertainty in the car’s position grows more and more the longer you run the simulation.
Scafetta’s problem is worse than that example because the planets are doing ellipses, so your search area comes back on itself.

July 24, 2013 5:14 pm

Anthony, just an addition to make sure that you are getting the point.
Willis stated that he is asking for data and code, by repeating your own request etc.
********
Willis Eschenbach says: July 24, 2013 at 9:51 am
Nicola, you’ve been offered the opportunity to use the magnificent pulpit of WUWT to spread your theories, but only if you reveal your secret data and your secret computer code.
*******
Ok, your request (Anthony’s one) is based on your unwillingness to read my papers to figure out the data, procedure etc., which are clearly written there.
However, the same request from Willis is surprising. Please note that this implies that Willis has criticized my work without knowing the data that I use, the mathematical procedure that I use and the analysis that I use.
Are you able to understand that Willis’s criticism is invalid?
He essentially stated that he has not read my papers, but he criticized them, nevertheless.
Do you really trust this guy? Are you so sure that he is not fooling you in some way?
He is essentially cheating and very badly.
He is behaving like a student that makes up a story without having done his homework before.
Look now at Willis article and compare it with the comment by
Mark Bofill July 24, 2013 at 11:50 am
Do you note the substantial difference? Mark is trying to read my papers with the purpose to understand what I did, a fact that Willis did not do.
Do you understand the substantial difference?
Next time ask Mark Bofill to write something instead of asking Willis.
Or ask me, which would be even better.
REPLY: Dr. Scafetta, this is a simple request, one I make often of others. Where is your SI (supplemental info) for the paper that contains your data and worksheets? Things like Excel spreadsheets and databases and formulae? Surely such things exist. If you can’t or won’t produce those things to allow independent replication, then what you have done is not science, but simply opinion. – Anthony

Clive E. Birkland
July 24, 2013 5:25 pm

Willis Eschenbach says:
July 24, 2013 at 8:40 am
Regarding the extent of my knowledge (as opposed to how hard I’ve looked), Clive, if you want to get into a contest about the length of your johnson, you’ll have to do it with some other guy.
Willis, this is not a question of johnsons, but rather of your level of expertise in reviewing literature in the scientific domain. I will repeat the question that so far Leif and Anthony have also failed to answer. This is basic stuff that anyone criticizing planetary theory should be totally aware of. Your chance to put up or shut up.
Give me some detail on your knowledge, what major concept does Theo use to predict grand minima??

tjfolkerts
July 24, 2013 5:43 pm

LdB says: “You are completely wrong the induced error is completely chaotic it adds up every second it becomes massive very quickly.”
If that were true, then predictions of planetary motion would be impossible. Quite to the contrary, the motions are quite predictable. Even small variations from Newtonian predictions were enough to send people scrambling to find Neptune. Milankovitch cycles are calculated for the earth over periods of 100,000’s of years — again impossible if chaos quickly ruled the motions. The “serious error” in Mercury’s precession was a fraction of a degree per century! People routinely calculate where near-earth asteroids are 30-100 years in advance to see if they might hit the earth.
What specific aspect of the planetary motions do you think is massively chaotic? How far do you think a planet wanders from simple newtonian predictions in a century?

July 24, 2013 5:59 pm

Clive E. Birkland says:
“Give me some detail on your knowledge…”
Do you demand the CV of everyone? Because in case you are not aware of it, Willis is a published, peer reviewed author.
That’s enough for me. If you can make points using your own knowledge, good for you. But this isn’t a contest, this is an attempt to get to the heart of an interesting scientific question.
I’m not taking sides here. But the important thing is to separate the wheat from the chaff: verifiable science from supposition.
