Sunspots and Sea Level

Guest Post by Willis Eschenbach

I came across a curious graph and claim today in a peer-reviewed scientific paper. Here’s the graph relating sunspots and the change in sea level:

sea level change and sunspots

And here is the claim about the graph:

Sea level change and solar activity

A stronger effect related to solar cycles is seen in Fig. 2, where the yearly averaged sunspot numbers are plotted together with the yearly change in coastal sea level (Holgate, 2007). The sea level rates are calculated from nine distributed tidal gauges with long records, which were compared with a larger set of data from 177 stations available in the last part of the century. In most of the century the sea level varied in phase with the solar activity, with the Sun leading the ocean, but in the beginning of the century they were in opposite phases, and during SC17 and 19 the sea level increased before the solar activity.

Let me see if I have this straight. At the start of the record, sunspots and sea level moved in opposite directions. Then for most of the time they were in phase. In both those cases, sunspots were leading sea level, suggesting the possibility that sunspots might affect sea level … except in opposite directions at different times. And in addition, in about 20% of the data, the sea level moved first, followed by the sunspots, suggesting the possibility that at times, the sea level might affect the number of sunspots …

Now, when I see a claim like that, after I get done laughing, I look around for some numerical measure of how similar the two series actually are. This is usually the “R2” (R squared) value, which varies from zero (no relationship) to 1 (they always move proportionately). Accompanying this R2 measure there is usually a “p-value”. The p-value measures how likely it is that we’re just seeing random variation: it is the probability of getting an apparent relationship at least this strong purely by chance, when in fact there is no underlying relationship. A p-value of 0.05, for example, means there is one chance in twenty of seeing such a result in purely random data.
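As a sketch of what that calculation looks like (with synthetic stand-in series, since the paper’s actual data are not archived), here is how an R2 and p-value come out of an ordinary linear regression:

```python
# Sketch only: synthetic stand-in data, NOT the paper's sunspot/sea-level series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sunspots = rng.normal(size=100)                    # stand-in "yearly sunspot numbers"
sea_rate = 0.3 * sunspots + rng.normal(size=100)   # stand-in "yearly sea level change"

res = stats.linregress(sunspots, sea_rate)
r_squared = res.rvalue ** 2    # 0 = no relationship, 1 = perfectly proportional
print(f"R^2 = {r_squared:.2f}, p = {res.pvalue:.4f}")
```

With a real signal built in, as here, the p-value comes out small; shuffle either series and it does not.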

So … what did the author of the paper put forward as the R2 and p-value for this relationship?

Sad to relate, that part of the analysis seems to have slipped his mind. He doesn’t give us any guess as to how correlated the two series are, or whether we’re just looking at a random relationship.

So I thought, well, I’ll just get his data and measure the relationship myself. However, despite the journal’s policy requiring public archiving of the data necessary for replication, as is too common these days there was no public data, no code, and not even a Supplementary Online Information.

However, years of messing around with recalcitrant climate scientists has shown me that digitizing data is both fast and easy, so I simply digitized the graph of the data so I could analyze it. It’s quite accurate when done carefully.

And what did I find? Well, the R2 between sunspots and sea level is a mere 0.13, very little relationship. And even worse, the p-value of the relationship is 0.08 … sorry, no cigar. There is no statistically significant relationship between the two. In part this is because both datasets are so highly auto-correlated (~0.8 for both), and in part it’s because … well, it’s because as near as we can tell, sunspots [or whatever sunspots are a proxy for] don’t affect the sea level.
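For the curious, the way autocorrelation eats into significance can be sketched with the standard first-order (Bretherton et al.) correction for the effective number of independent samples when both series behave roughly like AR(1) processes. This is an illustration of the general point, not the paper’s (nonexistent) calculation:

```python
# Effective sample size for correlating two autocorrelated (AR(1)-like) series,
# using the standard first-order correction: n_eff = n * (1 - r1*r2) / (1 + r1*r2),
# where r1, r2 are the lag-1 autocorrelations of the two series.
def effective_n(n, r1, r2):
    return n * (1 - r1 * r2) / (1 + r1 * r2)

# ~100 yearly values, both series with lag-1 autocorrelation near 0.8,
# leaves only about 22 effectively independent points.
print(effective_n(100, 0.8, 0.8))
```

In other words, a century of yearly data this sticky carries far less evidential weight than the raw count of points suggests.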

My conclusions from this, in no particular order, are:

• If this is the author’s “stronger effect related to solar cycles”, I’m not gonna worry about his weaker effect.

• This is not science in any sense of the word. There is no data. There is no code. There is no mathematical analysis of any kind, just bald assertions of a “stronger” relationship.

• Seems to me the idea that sunspots rule sea level would be pretty much scuttled by sunspot cycles 17 and 19 where the sea level moves first and sunspots follow … as well as by the phase reversal in the early data. At a minimum, you’d have to explain those large anomalies to make the case for a relationship. However, the author makes no effort to do so.

• The reviewers, as is far too often the case these days, were asleep at the switch. This study needs serious revision and buttressing to meet even the most minimal scientific standards.

• The editor bears responsibility as well, because the study is not replicable without the data as used, and the editor has not required the author to archive the data.

So … why am I bothering with a case of pseudo-science that is so easy to refute?

Because it is one of the papers in the Special Issue of the Copernicus journal, Pattern Recognition in Physics … and by no means the worst of the lot. There has been much disturbance in the farce lately regarding the journal being shut down, with many people saying that it was closed for political reasons. And perhaps that is the case.

However, if I ran Copernicus, I would have shut the journal down myself, but not for political reasons. I’d have closed it as soon as possible, for both scientific and business reasons.

I’d have shut it for scientific reasons because as we see in this example, peer-review was absent, the editorial actions were laughable, the authors reviewed each other’s papers, and the result was lots of handwaving and very little science.

And I’d have shut it for business reasons because Copernicus, as a publisher of scientific journals, cannot afford to become known as a place where reviewers don’t review and editors don’t edit. It would make them the laughing stock of the journal world, and being the butt of that kind of joke is something that no journal publisher can survive.

To me, it’s a huge tragedy, for two reasons. One is that I and other skeptical researchers get tarred with the same brush. The media commentary never says “a bunch of fringe pseudo-scientists” brought the journal down. No, it’s “climate skeptics” who get the blame, with no distinctions made despite the fact that we’ve falsified some of the claims of the Special Issue authors here on WUWT.

The other reason it’s a tragedy is that they were offered an unparalleled opportunity, the control of a special issue of a reputable journal. I would give much to have the chance that they had. And they simply threw it away with nepotistic reviewing, inept editorship, wildly overblown claims, and a wholesale lack of science.

It’s a tragedy because you can be sure that if I, or many other skeptical researchers, got the chance to shape such a special issue, we wouldn’t give the publisher any reason to be unhappy with the quality of the peer-review, the strength of the editorship, or the scientific quality of the papers. The Copernicus folks might not like the conclusions, but they would be well researched, cited, and supported, with all data and code made public.

Ah, well … sic transit gloria monday, it’s already tuesday, and the struggle continues …

w.

PS—Based on … well, I’m not exactly sure what he’s basing it on, but the author says in the abstract:

The recent global warming may be interpreted as a rising branch of a millennium cycle, identified in ice cores and sediments and also recorded in history. This cycle peaks in the second half of this century, and then a 500 yr cooling trend will start.

Glad that’s settled. I was concerned about the next half millennium … you see what I mean about the absence of science in the Special Issue.

PPS—The usual request. I can defend my own words. I can’t defend your interpretation of my words. If you disagree with something I or anyone has written, please quote the exact words that you object to, and then tell us your objections. It prevents a host of misunderstandings, and it makes it clear just what you think is wrong, and why.

Jan Stunnenberg
January 24, 2014 1:45 pm

TonyG says to Paul Westerhagen:
‘According to the math, there is NO correlation.’
Wrong, it’s only Willis’ math that says so.
Paul Westerhagen seems clever enough to me to be able to ‘feel’ uncomfortable with that kind of ‘math’.

January 24, 2014 1:49 pm

Usoskin & Korte:
We conclude that changes of the regional tropospheric ionization at midlatitudes are defined by both geomagnetic changes and solar activity, and none of the two processes can be neglected.

Paul Westhaver
January 24, 2014 2:00 pm

The color purple doesn’t exist. Yet we see it still.
You cannot find the color purple anywhere on the visible light spectrum.
Our brains adequately decipher red to yellow to blue very well and yield us a good approximation of the spectrum even though we only have blue, red, and green receptors.
Purple is a color invented by our brains to resolve the difference in intensity by non-adjacent spectrum receptors in the absence of an intermediate.
So, even though purple is not real, the ratio of the two colors that make purple is real and quantifiable. So purple is real…? but as a difference calculator.
Maybe the data sets are like that. Willis never checked for something like that.

Jan Stunnenberg
January 24, 2014 2:01 pm

Sorry Paul Westerhagen.
I’ve just submitted my comment as you submitted yours.
However, yours tells it all.

January 24, 2014 2:12 pm

vukcevic says:
January 24, 2014 at 1:49 pm
Usoskin & Korte:
We conclude that changes of the regional tropospheric ionization at midlatitudes are defined by both geomagnetic changes and solar activity, and none of the two processes can be neglected.

As you have a tendency to misunderstand things, you should provide a link so we can see what you misunderstood. Now, I can guess where your confusion comes from: The main sources of electric fields and currents in the Global Electric Circuit are thunderstorms in the troposphere and the dynamo situated in the ionosphere and magnetosphere produced by tides generated in there and tides propagating upward from the lower atmosphere. Again, learn from what I say, rather than digging your holes ever deeper.

Paul Westhaver
January 24, 2014 2:32 pm

vukcevic,
I speculate that what happens on the earth may be the result of solar activity, Radiation, solar wind, solar farts, etc… all that stuff, seen and unseen and yet-to-be-discovered.
Is there a good study of cloud cover (anywhere/everywhere) vs sunspot number (or a proxy)?
I suspect that there is a transfer function of some kind between solar activity and terrestrial variables, like sea level change etc. Maybe the TF is an aggregate of all of these. Why not?
PW

Greg
January 24, 2014 2:42 pm

Willis : “Here are the Jevrejeva annual sea level changes plotted against sunspots …”
Where did you get that data from, Willis? Did you digitise fig 3 from the paper by any chance? If you did you should read the caption: it is the SSA 30-year window analysis, which they say is similar to a 30-year running average. That may explain the lack of any decadal detail!
Data should be available from PSMSL but it’s not responding at my end right now.
http://www.psmsl.org/products/reconstructions/gslGRL2008.txt‎
Suggest KNMI
http://climexp.knmi.nl/getindices.cgi?WMO=PSMSLData/gsl_ann&STATION=global_sea_level&TYPE=i&id=someone@somewhere&NPERYEAR=1

rgbatduke
January 24, 2014 2:47 pm

A high autocorrelation means two adjacent data points are almost the same [and so are not independent], so that when you think you have 1000 data points you may only have [say] 50 points. This affects the p-value, making you believe that the correlation is more significant than it really is. A good example is the sunspot number: in a solar cycle there are 4000 days and thus 4000 daily values of the sunspot number, but since a high sunspot number on a given day is always followed by a high number the next day, the number of independent data points for a whole cycle is only about 25, not 4000.
Excellent summary. I wrote a whole Phys. Rev. paper on this once upon a time, elaborating (among other things) how one can actually determine the autocorrelation time (or number of samples per “independent” sample) by looking at the scaling of the variance of the data. It is easy to make egregious claims for the precision of an entirely spurious experimental result if one has thousands of samples but the scaling of the variance indicates that you really only have a hundred.
I was actually doing it the other way around — I was doing numerical simulations using (and comparing) several Markov Chain processes — Metropolis, Heat Bath, a cluster/metropolis method and a mixed cluster/heat bath method. Metropolis was cheap and generated enormous numbers of samples, but the samples had an enormous autocorrelation as one might expect. Heat bath did better — it isn’t accept/reject and so every spin in the model was changed in every step. Cluster methods alone did break up spatial correlations quickly but — surprise! They were accept-reject AND preserved large blocks of spin bonds in unchanged states per move, and actually slowed down the decay of energy autocorrelation (accept/reject methods more or less guarantee that SOME bonds in a lattice are not thermalized for many, many sweeps). Best of all was a mix of cluster steps to break up egregious spatial correlations at a variety of length scales and heat bath sweeps to facilitate the ergodic progression of the bond energies. All revealed by comparing how the standard deviation of the sample compared to what one would expect from a knowledge of the system variance and the number of samples in question.
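A minimal sketch of that binning/variance-scaling idea, using a synthetic AR(1) series (phi = 0.9, integrated autocorrelation time near (1+phi)/(1-phi) = 19 samples) as a stand-in for real simulation data:

```python
# Block-averaging ("binning") analysis: the naive standard error of the mean
# keeps growing with block size until blocks exceed the autocorrelation time,
# then stabilizes at the honest value. Synthetic AR(1) data, phi = 0.9.
import numpy as np

rng = np.random.default_rng(0)
n, phi = 100_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

ses = {}
for block in (1, 10, 100, 1000):
    means = x[: n // block * block].reshape(-1, block).mean(axis=1)
    # naive standard error of the overall mean, treating block means as independent
    ses[block] = means.std(ddof=1) / np.sqrt(len(means))
    print(f"block={block:5d}  naive SE of mean = {ses[block]:.4f}")
```

The block=1 value is the spuriously tight error bar you get by pretending all 100,000 samples are independent; the plateau at large blocks is the real uncertainty.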
This is one reason I’m very skeptical about climate science conclusions in general. There isn’t one autocorrelation time or important time scale in climate science, there are dozens (at least) — there is probably a continuum of them to where Laplace transforms are more appropriate to speak of than an single exponential decay process. And it isn’t clear that these times themselves are stationary; it may not even be that the usual concept of an “autocorrelation time” HOLDS for a climate system, even in an approximate sense.
That’s why I read papers on Hurst-Kolmogorov statistics with great interest. Many climate variables appear to exhibit an H-K pattern of autocorrelation — stretches of order decades of approximately uniform variation followed by a discrete jump to a new stretch of order decades. “The pause” is just such a stretch in the GASTA, where the 1997/1998 Super-ENSO is coincident with the preceding jump, and where quite a few of these intervals are visible e.g. here:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1940/to:2014
or even more clearly here:
http://www.woodfortrees.org/plot/uah/from:1977/to:2014
or here:
http://www.woodfortrees.org/plot/rss/from:1977/to:2014
In the satellite era in the last two of these curves, note well that one does quite well imagining that the climate was locally stable from 1980 to 1998 give or take a year or two — call it twenty years. Then it jumped by around 0.3 C over a very short period — two or three years, with large oscillations attending the jump — and has been stable since for anywhere from 13 to 17 years depending on just where you want to put the jump and whether you want to discretize it or extend it over a 2-4 year period that includes the initial very sharp rise, the overcorrection, and the smaller scale bounces that could just as easily belong to the new equilibrium.
Similar jumps are clearly visible in SST graphs generated by Bob Tisdale. Similar jumps are clearly visible in rainfall patterns (one of the places where they were first observed and analyzed by hydrologists like Koutsoyiannis).
Guestimating (looking at longer timeseries and allowing for both poorer data and various thumbs on the scales) 15 to 25 years is one highly non-exponential autocorrelation time — the Earth seems to like to spend 1-2 decades going only slowly up or slowly down interspersed with intervals where it goes up or down comparatively rapidly. Much of even this cannot really be trusted — IMO global climate data itself has a clear trend of decreasing reliability as one goes back into the past (that no one ever gets to see as no one prints the curves with even guestimated error bars, but the estimated errors in things like HADCRUT4 now are pretty large, on the close order of the 30 year supposed warming “signal” of a few tenths of a degree).
If one reduced the last 36-odd years of satellite data — the most reliable data source out there for an unbiased global temperature anomaly — to the two independent “climate” samples it probably represents, and decorated each of those points with an error estimate of 0.1 to 0.2 C (based solely on the variance around the sample means of the observational data taken in two 18 year chunks), nobody would dare to say anything about a trend in the climate. One point at an anomaly of 0 ± 0.1 C centered on 1986, one point at an anomaly of 0.2 ± 0.1 C centered on 2005. The probable error (accounting for the scaled autocorrelation, and hence sample independence, on a smaller scale WITHIN these intervals) pretty much completely overlaps, and suggests a mean warming of perhaps 0.2 C over 18 years, or roughly 1.1 C/century — if it made the slightest bit of sense to extrapolate two data points in any graph ever built!
The problem here is that we really do not know the various climate autocorrelation times (plural) and so we do not know how much “significance” to assign to any apparent linear trend in the highly correlated data with its own internal dynamics that is essentially noisy but stable for apparent decade-plus intervals.
To conclude, I hate to see Bayesian reasoning bad-mouthed, so I’ll just point out that the correct application of Bayesian reasoning in problems like this is along the lines of the stuff Willis was discussing, to automagically correct one’s initial prior estimates of probability until they are in asymptotic agreement with the observational data.
For example, one might examine as he did flipping a two-sided coin. You might begin with a strong bias, an experience-based belief that two-sided coins are likely to have a probability 0.5 for heads, 1 – 0.5 for tails, and 0 for landing perfectly on edge, making them perfect Bernoulli Trial objects with a nice binomial distribution of outcomes.
Bayesian analysis in principle gives one a way to systematically and nearly smoothly correct this belief on the basis of data as one starts to actually flip the coin. Initially one’s prior for heads might be 0.5, but after observing 78 flips in 100 flips, one would/should have a posterior probability that is much higher than 0.5. This is the flip side of ordinary p-values. The Binomial distribution might allow one to compute the probability of getting 78 heads out of 100 flips of an unbiased coin — the usual definition of the p-value — but it does not tell you what your best estimate for the probability of heads is given the data.
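As a toy illustration of that update (assuming a uniform Beta(1, 1) prior, which the discussion above does not specify), the posterior after 78 heads in 100 flips works out in one line:

```python
# Beta-Binomial conjugate update: a uniform Beta(1, 1) prior on p(heads),
# updated with 78 heads and 22 tails, gives a Beta(79, 23) posterior.
heads, tails = 78, 22
a, b = 1 + heads, 1 + tails       # posterior Beta parameters
posterior_mean = a / (a + b)      # (1 + 78) / (2 + 100)
print(f"posterior mean for p(heads) = {posterior_mean:.3f}")
```

The posterior mean lands near 0.77 — far from the initial 0.5, which is exactly the correction the data demands.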
The problem with Bayesian reasoning is that it is often misapplied or used as a means of legitimizing an improbable estimate. By weighting your prior beliefs highly, you can essentially demand a lot of evidence — probably far too much evidence — before you start bending your posterior estimate much. There are right and wrong ways to do it, but in the hands of a clever person Bayesian analysis can conceal a kind of statistical lie that resolves to “the result of my analysis shows that the probability of the event is p, very near where I expected/wanted/needed it to be”. Taleb reviews a very similar case in The Black Swan where a Scientist refuses to alter his prior estimate for the probability of a tail in the face of the observational evidence of 100 straight heads because he knows that 2^100 events can happen (one in 10^30 tries, give or take a bit) where Joe the Cab Driver immediately recognizes this as “a mugs game” — the coin is very, very probably fixed so that p(heads) is nearly unity, at least in the hands of the clever person flipping the coin.
This is very similar to the problems with concluding either correlation or causality from the data. Suppose I flipped a truly unbiased coin several thousand times. Over several thousand flips, it is very likely to see sequences of heads and tails at least 8 or 9 long. If somebody wishing to demonstrate that the coin is biased were allowed to pick samples out of this data and concentrate their analysis on some subset, all they would have to do is point to one of these long sequences (which have a low probability of happening in any given short sequence of coin flips even though they will occur quite reliably if one honestly samples the unbiased coin) and say — LOOK — the coin produced ten highly correlated flips in a row! The coin must be biased! Our brains make us especially susceptible to this — all of the mundane flip sequences are boring; even though the probability of getting HTTHTHHTHH is exactly equal to the probability of getting HHHHHHHHHH, the latter is exciting and we notice it while forgetting all of the equally unlikely but less strikingly patterned sequences. Picking particular sequences of data that favor some desired conclusion is called “cherrypicking” and is a cardinal sin of science. The tendency to discover favorable sequences and cherrypick them is called “confirmation bias” and is an even worse sin, as cherrypicking can happen by accident but confirmation bias involves deliberate action — e.g. presenting the average of 9 tidal stations that we know appear to have some sort of correlation while deliberately concealing, often even from ourselves, the fact that 9 stations picked at random from the full set would exhibit no such visible correlation, or worse, that the unbiased average of all of the data exhibits no such correlation.
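A quick simulation makes the long-runs point concrete (a sketch, not anyone’s published analysis): in a few thousand fair coin flips, a run of 9 or more identical outcomes is practically guaranteed.

```python
# How often does a perfectly fair coin produce a "suspicious" run of 9+
# identical flips somewhere in a 5000-flip sequence?
import random

random.seed(1)

def longest_run(n_flips):
    flips = [random.random() < 0.5 for _ in range(n_flips)]
    best = run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

trials = [longest_run(5000) for _ in range(200)]
frac = sum(r >= 9 for r in trials) / len(trials)
print(f"fraction of fair 5000-flip sequences with a run of 9+: {frac:.2f}")
```

Cherrypicking one of those runs as “evidence” of bias is exactly the error described above.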
Climate science (on both sides), like research in much of the humanities and soft sciences, is rife with confirmation bias, cherrypicking, and overreaching conclusions drawn from inadequate or incorrect statistical analysis. A lovely example of the latter is the infamous hockey stick. An example of the former is D’Arrigo’s infamous testimony: “if you want to make cherry pies, you have to be willing to pick cherries”. Whether the misuse of statistics is deliberate or accidental, much of published climate science is statistically incompetent. A properly cautious approach would simply refuse to draw conclusions about trends, causes or effects until far, far more reliable data is collected and analyzed by somebody other than a collection of world-saving zealots — on either side. Let me be clear — we do not have sufficient evidence to reject the hypothesis of catastrophic warming by the end of the century (human caused or otherwise) any more than we have sufficient evidence to accept it. At the moment, the evidence in favor of the hypothesis is weak, but because of the uncertainties in things like autocorrelation, the range of so-called “natural” variation, and the effect of a variety of processes that we know have timescales on the order of several decades (e.g. the multidecadal oscillations, the observed multidecadal variability of the sun, multidecadal oceanic turnover processes), it is far from sufficient to positively assert that CO_2-linked global warming will not, in fact, eventually cause real catastrophe, e.g. rapid sea level rise. There just isn’t any evidence that it is doing so yet.
For fans of proper Bayesian analysis, there is a delightful article here:
http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html
that analyzes a surprising but plausible claim: Most published research findings are false! This is probably not true in all fields (I’d like to think it isn’t true in physics, for example, because we police the field at roughly the level where any further stringency would eliminate the fringe from which new revolutions rarely but significantly spring) but in fields like medicine it is probably (and tragically) true!
It is interesting to see where climate science falls in the schema given at the bottom of this article.
1) I’d have to say climate science is very high in what he calls background noise, the number of hypotheses tested. Examples of this are frequent on WUWT — when every biologist has to add the magic words “to look for evidence of the impact of global warming” to a grant proposal to study pinkfooted bungie-jumpers, when every population study, every medical study, every study of coral reefs or the migratory patterns of beetles invokes the phrase on the presumption that such evidence exists (evidence that has to be presented sufficiently compellingly to help influence a grant officer into funding work that otherwise has nothing particularly special to look for), sheer Bayesian analysis dictates that a significant fraction of such studies will find something to report at the level of marginal statistical significance.
2) Many results supporting a supposed coming catastrophe are reported for tiny sample populations, or highly constrained and limited environments. Last week we heard about the supposed extinction of a subpopulation of a butterfly in a part of a single meadow subject to uncontrolled and confounding environmental alterations in addition to any supposedly discriminable change in climate. Again, many of these studies of marginal subpopulations or specific locations can easily turn out to be at best statistical accidents — happening to test green jelly beans first and never bothering to report that jelly beans in general have a null result, and even green jelly beans have a null result unless they were tested on one particular Sunday.
3) Small effects are to be distrusted. Wow. Talk about defining an entire science! The entire predicted catastrophe is less than a one percent effect on the absolute temperature scale. All observed warming to date in the sixty-odd years where it could be attributed to human-produced CO_2 is on the order of two-tenths of one percent of the absolute temperature (some fraction of which is almost certainly natural) and there has been no warming at all for the last quarter of that era precisely where one would expect it to be the strongest.
4) Multiple types of evidence are desirable, but are not forthcoming. The GCMs that are the sole basis for predictions of warming fail to predict global warming, they fail to predict tropospheric warming, they fail to predict rainfall, they predict storm intensification that has not happened, they do not predict the correct patterns of what warming there has been, and they utterly failed to predict what has happened with e.g. Antarctic sea ice, the Greenland ice pack, sea level rise, sea surface temperatures, El Nino, the PDO, and well, pretty much anything. It isn’t clear that there is anything that GCM’s have gotten right!
The problem there is that when multiple biological studies find supposed evidence for negative sequelae attributable to GW that supposedly took place over the last 15 years, that is not evidence for CAGW when in fact no statistically discernible warming whatsoever took place over the last fifteen years. So a huge body of what people consider to be “evidence” is in fact evidence — against the hypothesis, and for the presence of an enormous degree of confirmation bias in the literature, given that people are finding effects for a cause that in fact has not budged since many of the researchers were in middle school.
5) This also is a problem here. Much of the literature can be discounted when they find evidence of global warming in e.g. the biosphere when no discernible global warming occurred or is occurring since the 1997-1998 ENSO, and only 0.2 C of warming occurred then. Changing “global warming” to “climate change” is intended to hide this since “climate change” cannot be falsified given that the climate is always changing, but does not repair the science. Then there is the deification of Mann by the IPCC and the consequent corruption of the mainstream science. Post-Mann climate science has been all about a literature controlled by the authors of a few individual papers, some of which have been shown to be terrible examples of statistics. Briffa, Jones and others published many papers clearly showing the MWP before Mann “erased” it. Climategate clearly revealed the naked backstabbing, gatekeeping, and behind the scenes pressure to suppress the competing voices that ordinarily keep science honest. It has taken fifteen years plus where the sky stubbornly refused to fall to gradually free the editors and referees to consider papers that don’t conform to a narrowly politicized conclusion as evidenced by the presence of one or more catch phrases about global warming even where it ends up being irrelevant (the mirror image of the grantseeking process alluded to above). One shudders to think about how long it will take to de-politicize the granting agencies themselves.
6) There are damn few papers published that test other people’s theories in climate science, at least as far as I’ve seen. If there were, would not the GCMs largely have been rejected at this point? WUWT has indeed covered at least one paper that tested GCMs directly against one another for a toy problem, where they failed miserably, but only a handful of researchers dare to question the party line, often at the expense of being called names by their peers and being subjected to a brutal and withering peer review process in journals where the editors are routinely subjected to pressure from a small cadre of climate scientists. Indeed, a properly skeptical test of the fundamental theoretical basis for the prediction of CAGW would be most welcome, as those predictions are busy not coming true and everybody knows it but nobody will say so in the literature!
7) Don’t reject papers that fail to reject the null hypothesis. Boy, Climate Science in a nutshell, at least if the null hypothesis is that human CO_2 has had a negligible effect on the climate, one impossible to disentangle so far from natural climate variation and impossible to directly observe even with sophisticated instrumentation and hence discernible at best by dubious analyses that attempt to discriminate recent warming as being due to CO_2 from previous but identical warming that was supposedly natural, e.g. comparing the first and second half of the 20th century.
If a non-expert in the field cannot tell the difference between GASTA in the first half of the 20th century (when most climate scientists agree that CO_2 levels had not yet begun to change at a significant rate from pre-industrial levels) and the second half of the 20th century (where all or almost all of the warming is openly attributed to CO_2 in IPCC ARs) then one has little basis for rejecting the null hypothesis.
rgb

Greg Goodman
January 24, 2014 4:03 pm

I posted this a couple of days back but it seems to have disappeared 😕
Anyway, I’ve cleaned the graphs up and posted them with a description of the derivation.
Cross-correlation:
http://climategrog.wordpress.com/?attachment_id=760
Power spectrum:
http://climategrog.wordpress.com/?attachment_id=759
There is a smallish but significant correlation between the Jevrejeva annual MSL and sunspot area. This shows clear evidence of a modulation due to an interference pattern with a second periodicity. The other frequency causing the modulation can be estimated by:
1/(1/79.1 + 1/10.49) = 9.23 years.
This is suggestive of a lunar cycle. Periods of 9.1 +/- 0.1 years have been reported by Nicolas Scafetta and the Berkeley Earth project.
I have been trying to point out for some time that you cannot detect or refute a particular driver (like solar) by trivial single-variable regression or correlation.
Willis challenged me to show it and I think that does.
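Greg’s beat-period estimate can be checked arithmetically. Assuming the intended expression sums the two frequencies (1/79.1 and 1/10.49 per year) — the garbled formula in his comment leaves some doubt — it evaluates to about 9.26 years, close to the 9.23 he quotes:

```python
# Beat/combination period of two periodicities, on the sum-frequency side:
# 1 / (f1 + f2), with periods of ~79.1 yr and ~10.49 yr (the solar cycle).
p1, p2 = 79.1, 10.49
period = 1 / (1 / p1 + 1 / p2)
print(f"combination period ≈ {period:.2f} years")
```

The difference-frequency side, 1/(1/10.49 − 1/79.1), would instead give roughly 12.1 years, so the sum-frequency reading is the one consistent with the quoted ~9.2-year result.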

Bart
January 24, 2014 4:14 pm

Greg Goodman says:
January 24, 2014 at 4:03 pm
“I have been trying to point out for some time that you cannot detect or refute a particular driver (like solar) by trivial single-variable regression or correlation.”
Throw out all the other comments on this thread. That is the only one needed.

richardscourtney
January 24, 2014 4:43 pm

Jan Stunnenberg:
In my post at January 24, 2014 at 7:34 am I wrote

Willis’ analysis certainly indicates no direct causal effect although there could be an indirect effect. Such indirect effect would result from the solar cycle determining the periodicity of ‘something else’ which – in turn – affects the periodicity of SLR. But nobody has suggested what such a ‘something else’ could be.
So, at present, Willis’ analysis indicates that there is no relationship between the solar cycle and SLR variation. Hence, the hypothesis of a direct causal relationship between the solar cycle and SLR variation is falsified.
This is a useful finding because it frees people to investigate whatever is the true cause of the SLR variation.

At January 24, 2014 at 9:20 am you have replied saying in total

‘richardscourtney’ says:

‘J E Solheim published a paper which suggested that the SLR variation is related to the solar cycle. In other words, the solar cycle causes the SLR variation. And, as you say, their similar periodicities do imply that the SLR is solar driven.’

NO! Must read: [..] In other words, the solar cycle AND the SLR variation MIGHT have A COMMON CAUSE. full stop
Whatever that cause might be – tidal forces of the planetary system and/or even electromagnetism … etc. – the effect is verifiable in the record. At earth: as SLR. At the sun as SSN.
The COMMON CAUSE has yet to be discovered.

The “must read” applies to you and not me.
Firstly, my words “at present” are true and have a meaning which your reply ignores.
Secondly, I said the possibility of a common cause exists but the study under discussion falsifies a direct cause.
Thirdly, you assert a common cause but provide no evidence for it.
So what? I see no reason to accept an unjustified assertion that something DOES exist when – as I pointed out – it may exist but there is no evidence for it; e.g. the reason for fairies at the bottom of your garden has yet to be discovered.
Richard

richardscourtney
January 24, 2014 4:58 pm

Paul Westhaver:
Your post at January 24, 2014 at 12:29 pm begins saying

Richard,
You had not contributed anything to the rational explanation as to why a perceived pattern is not real. Only ad hominem to me.
Your lack of an attempt at any possible reasonable hypothesis as to the basis of why a pattern is commonly perceived yet is absent in W.E’s analysis or yours is the reason I am unconvinced by his and your assertions.

No! That is absolutely untrue except possibly that you may be “unconvinced”.
I provided no ad homs.
The only ad homs. were from you at me; e.g. untrue assertion that I used “snark”, my explanation is less cogent than the perception of an 8 year old child, I am “quite a tyrant want-to-be”, etc..
Both I and Willis explained to you that the human brain often generates the perception of patterns which do not exist so mathematics are needed to determine if perceived patterns are real. If you want to continue your pestering then try it on Willis because his tongue will be more lashing than mine and I cannot be bothered to reply to more of your insulting and offensive twaddle.
Richard

richardscourtney
January 24, 2014 5:21 pm

rgbatduke:
Your long but excellent post at January 24, 2014 at 2:47 pm contains much ‘good stuff’. One sentence jumped out at me because only yesterday I replied to a series of questions put to me on WUWT and I provided an answer which effectively said the same; i.e.

Let me be clear — we do not have sufficient evidence to reject the hypothesis of catastrophic warming by the end of the century (human caused or otherwise) any more than we have sufficient evidence to accept it.

Indeed, I said that to my questioner, I explained why it is true, and I suggested further study that would assist the questioner’s understanding of it. But the questioner then responded with a post which tried to pretend I had said that there would be no global warming! I suspect his intention was to induce me to say that, and when he failed he tried to pretend I had.
Reality is important. What we want reality to be is not important. But as this thread clearly demonstrates, some people want to pretend reality is as they ‘see’ it and not as available evidence indicates.
Richard

Greg Goodman
January 24, 2014 5:35 pm

Richard: “Reality is important. What we want reality to be is not important. But as this thread clearly demonstrates, some people want to pretend reality is as they ‘see’ it and not as available evidence indicates.”
Indeed, and some other people want to pretend reality is as they ‘see’ it and do not understand what the available evidence indicates.
http://climategrog.wordpress.com/?attachment_id=760

richardscourtney
January 24, 2014 6:37 pm

Greg Goodman:
At January 24, 2014 at 5:35 pm you say to me

Indeed, and some other people want to pretend reality is as they ‘see’ it and do not understand what the available evidence indicates.
http://climategrog.wordpress.com/?attachment_id=760

I agree.
And I observe that your link provides some debatable data which its presenter says only “suggests” something and rightly does not claim it “indicates” anything.
Richard

Paul Westhaver
January 24, 2014 7:01 pm

More ad hominems.
I see what I see…

Manfred
January 24, 2014 8:06 pm

richardscourtney says:
January 24, 2014 at 7:34 am
But the analysis by Willis strongly suggests that the similarity of the SLR oscillation is purely by chance.
——————————————————————-
Richard,
if curve fitting is not regarded as science, computation of a single p-value is hardly an “analysis” which “strongly” suggests something without taking a “look” at more specifics and details.
That language does not match the evidence brought forward in support.
And readers notice that, and may rightfully get annoyed, because they are not used to being manipulated like that, having enjoyed much more sophisticated and constructive discussions, many of them thanks to Willis.
And curiously, exactly that overdoing of an argument may rightfully be criticized in a few PRP papers as well, and it is therefore neither a serious-minded nor a clever option to criticize their work.
You say maths is superior to visual inspection. Maybe, and maybe often, but not in this case; the opposite is true.
Firstly, what you call maths is just a single value. And by visual inspection you may not only find that the p-value may be low, but also why.
It is due to two phase shifts, one at the start of the data set and one around 1990. Without these phase shifts, the datasets would beat nearly synchronously all along the record.
The p-value may still be low due to different amplitude patterns, but the beat is obvious.
The first thing of interest would be to investigate whether the phase shift around 1990 is real. Comparison with Shaviv’s data set suggests that it may not be, as in his data set the spread thereafter is smaller. Perhaps part of it is due to Mt Pinatubo?
The second thing of interest would be whether the beat continues with recent data.
The third thing of interest would be whether the beat is also visible in the satellite sea-level data.
Then you may be closer to concluding whether the relationship is thermal/mechanical, due to a third common driver, or, perhaps more likely, “purely by chance”.

Ian Schumacher
January 24, 2014 10:54 pm

Willis,
Regarding R^2 equalling correlation:
R^2 isn’t very good at helping you find components of a signal. The correlation I (and others) are referring to is of the signal-processing variety (the sum of the products, not the sum of the squares of the differences). This allows you to be wrong in phase and in scale and still ‘detect’ a correlation. In addition there is another type of correlation known as synchronicity. R^2 is only valid for linear models. Consider the ‘correlation’ of Milankovitch cycles with ice ages. If you were to compute an R^2 you’d probably get a very low value, because the Earth is a complex non-linear feedback-loop system and correlation assumes a linear model. Whatever Milankovitch cycles do and however they do it, it is probably more of a ‘trigger’ than a direct driver. Blindly using R^2 you’d probably get a value so low that you’d assume Milankovitch cycles are irrelevant. However, the synchronicity is amazing: no rational person can look at the synchronicity of Milankovitch cycles and ice ages and conclude “it’s just a coincidence”.
Regarding dismissing the graph:
You also linked to some other data that was far less flattering for sunspots (in both synchronicity and statistical correlation). As I’ve said, I’m not a ‘sunspot guy’. I objected to your use of R^2, which is meant for linear models, and your dismissal of the graph above when to the eye there appears an obvious synchronicity. Does the synchronicity hold for longer periods? Who knows? I thought you were too quick to dismiss it based on linear-model metrics. However, if other data shows that this paper is essentially ‘cherry picking’, then that is a different story.
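The distinction Ian draws, between a zero-lag R^2 and a lag-scanning sum-of-products cross-correlation, can be sketched as follows. These are synthetic sinusoids, not the paper’s series, and the 3-sample lag is an arbitrary choice of roughly a quarter of the ~11-sample cycle:

```python
import math

# Toy illustration: y is x phase-shifted by 3 samples (~ a quarter cycle).
# At zero lag the correlation (and hence R^2) is weak, but scanning lags
# with a sum-of-products cross-correlation recovers the relationship.
n = 200
period = 11.0
x = [math.sin(2 * math.pi * t / period) for t in range(n)]
y = [math.sin(2 * math.pi * (t + 3) / period) for t in range(n)]

def xcorr(a, b, lag):
    """Normalized sum-of-products correlation of a (shifted by `lag`) against b."""
    pairs = [(a[t + lag], b[t]) for t in range(len(b) - lag)]
    num = sum(u * v for u, v in pairs)
    norm = math.sqrt(sum(u * u for u, _ in pairs) * sum(v * v for _, v in pairs))
    return num / norm

r0 = xcorr(x, y, 0)                                 # weak at zero lag (R^2 ≈ 0.02)
best = max(range(12), key=lambda k: xcorr(x, y, k)) # scan one full cycle of lags
print(r0, best, xcorr(x, y, best))                  # lag 3 recovers r ≈ 1
```

Whether the sunspot/sea-level pair survives this kind of lag scan is of course a separate, empirical question.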

Paul Westhaver
January 24, 2014 11:07 pm

Has this paper been accepted for publication?
Solar Forcing of the Streamflow of a Continental Scale South American River
P. J. D Mauas, E. Flamenco, A. P. Buccino
http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/Mauas.pdf
http://arxiv.org/abs/0810.3882
From the Abstract:
Solar forcing on climate has been reported in several studies although the evidence so far remains inconclusive. Here, we analyze the stream flow of one of the largest rivers in the world, the Parana in southeastern South America. For the last century, we find a strong correlation with the sunspot number, in multidecadal time scales, and with larger solar activity corresponding to larger stream flow. The correlation coefficient is r=0.78, significant to a 99% level. In shorter time scales we find a strong correlation with El Nino. These results are a step toward flood prediction, which might have great social and economic impacts.
River stream flow may impact sea level change?

Manfred
January 24, 2014 11:40 pm

Hi Willis,
Re the failed replication of Shaviv 2008 (p=0.54):
Shaviv used Lean 2000. Is your data identical?
Shaviv removed secular trends.
Does your zscore use column D rather than the detrended values in E?
=’C:\Applications\Microsoft Office 2011\Office\Add-Ins\myfunctions2.xla’!zscore(F305:F407,D305:D407)
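For what it’s worth, the secular detrending Manfred is suggesting is only a few lines. This is a generic least-squares sketch, not the `zscore` Excel add-in function actually used above, whose internals we can’t see:

```python
# Illustrative linear detrend of the kind suggested above (a sketch, not the
# Excel add-in's method): fit and subtract a least-squares line before
# correlating, so a shared secular trend can't inflate the statistic.
def detrend(series):
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
             / sum((t - t_mean) ** 2 for t in range(n)))
    return [y - (y_mean + slope * (t - t_mean)) for t, y in enumerate(series)]

# Example: a pure linear trend detrends to (numerically) zero residuals.
residuals = detrend([2.0 + 0.5 * t for t in range(10)])
print(max(abs(r) for r in residuals))  # ~0
```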
