Guest Post by Willis Eschenbach
In a comment on a recent post, I was pointed to a study making the following surprising claim:
Here, we analyze the stream flow of one of the largest rivers in the world, the Paraná in southeastern South America. For the last century, we find a strong correlation with the sunspot number, in multidecadal time scales, and with larger solar activity corresponding to larger stream flow. The correlation coefficient is r = 0.78, significant to a 99% level.
I’ve seen the Parana River … where I was, it was too thick to drink and too thin to plow. So this was interesting to me. Particularly interesting because in climate science a correlation of 0.78 combined with a 99% significance level (p-value of 0.01) would be a very strong result … in fact, to me that seemed like a very suspiciously strong result. After all, here is their raw data used for the comparison:
Figure 1. First figure in the Parana paper, showing the streamflow in the top panel, and sunspot number (SN) and total solar irradiance (TSI) in the lower two panels.
They are claiming a 0.78 correlation between the data in panel (a) and the data in panel (b) … I looked at Figure 1 and went “Say what?”. Call me crazy, but do you see any kind of strong 11-year cycle in the top panel? Because I sure don’t. In addition, when the long-term average of sunspots rises, I don’t see the streamflow rising. If there is a correlation between sunspots and streamflow, why doesn’t a several-decade period of increased sunspots lead to increased streamflow?
So how did they get the apparent correlation? Well, therein lies a tale … because Figure 2 shows what they ended up analyzing.
And wow, that sure looks like a very, very strong correlation … so how did they get there from such an unpromising start?
Well, first they took the actual data. Then, from the actual data they subtracted the “secular trends” (see the dark smooth lines in Figure 1). The effect of this first processing step is curious.
Look back at Figure 1. IF streamflow and sunspots were correlated, we’d expect them to move in parallel in the long term as well as the short term. But inconveniently for their theory … they don’t move in parallel. How to resolve it? Well, since the long-term secular trend data doesn’t support their hypothesis, their solution was to simply subtract that bad-mannered part out from the data.
I’m sure you can see the problems with that procedure. But we’ll let that go, the damage is fairly minor, and look at the next step, where the real destruction is done.
They say in Figure 2 that the sunspot data was “smoothed by an 11-yr running mean to smooth out the solar cycle”. However, it is apparent that the authors didn’t realize the effect of what they were doing. Calling what they did “smoothing” is a huge stretch. Figure 3 shows the residual sunspot anomaly (in blue) after removing the secular trend (as the authors did in the paper), along with the 11-year moving average of that exact same data (in red). Again as the authors did, I’ve normalized the two to allow for direct comparison:
Figure 3. Sunspot anomaly data (blue line), compared to the eleven-year centered moving average of the sunspot anomaly data (red line). Both datasets have been normalized to a mean of zero and a standard deviation of one.
Talk about a smoothing horror show, that has to be the poster child for bad smoothing. For starters, look at what the “smoothing” does to the sunspot data from 1975 to 2000 … instead of having two peaks at the tops of the two sunspot cycles (blue line, 1980 and 1991), the “smoothed” red line shows one large central peak, and two side lobes. Not only that, but the central low spot around 1986 has now been magically converted into a peak.
Now look at what the smoothing has done to the 1958 peak in sunspot numbers … it’s now twice as wide, and it has two peaks instead of one. Not only that, but the larger of the two peaks occurs where the sunspots actually bottomed out around 1954 … YIKES!
Finally, I knew this was going to be ugly, but I didn’t realize how ugly. The most surprising part to me is that their “smoothed” version of the data is actually negatively correlated to the data itself … astounding.
Part of the problem is the use of a running mean to smooth the data … a Very Bad Idea™ in itself. However, in this case it is exacerbated by the choice of the length of the average, 11 years. Sunspot cycles range from something like nine to thirteen years or so. As a result, cycles longer and shorter than the 11 year filter get averaged very differently. The net result is that we end up with some of the frequency data aliased into the average as amplitude data … resulting in the very different results from about 1945-60 versus the results 1975-2000.
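The point is easy to verify with synthetic data. Here is a small sketch (pure sine waves, nothing from the paper): run a 9-year cycle and a 13-year cycle through the same 11-point running mean and compare each to its own “smoothed” version. The shorter cycle comes out flipped upside down, the longer one comes out upright but heavily attenuated, which is exactly the kind of aliasing described above.

```python
import numpy as np

# A sketch (not the paper's data) of why an 11-point running mean mangles
# cycles of nearby but different lengths: a pure 9-year cycle comes out
# INVERTED, while a pure 13-year cycle comes out upright but attenuated.
t = np.arange(200)                       # 200 years of annual data
cycle9  = np.sin(2 * np.pi * t / 9)      # period shorter than the filter
cycle13 = np.sin(2 * np.pi * t / 13)     # period longer than the filter

kernel = np.ones(11) / 11                # 11-year centered running mean

def smooth_and_correlate(x):
    sm = np.convolve(x, kernel, mode="valid")
    return np.corrcoef(x[5:-5], sm)[0, 1]   # align data with the valid region

r9, r13 = smooth_and_correlate(cycle9), smooth_and_correlate(cycle13)
print(f"r (9-yr cycle vs its running mean):  {r9:+.2f}")   # strongly negative
print(f"r (13-yr cycle vs its running mean): {r13:+.2f}")  # positive
```

A real sunspot series mixes cycles on both sides of eleven years, so the filter flips some of them and keeps others, and the “smoothed” result can end up negatively correlated with the data, just as in Figure 3.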
Overall? I don’t care what they end up comparing to the red line … they are not comparing it to sunspots, not in any way, shape, or form. The blue line shows sunspots. The red line shows a mathematician’s nightmare.
How about the fact that they performed the same procedure on the Parana streamflow data? Does that make a difference? Figure 4 shows that result:
Figure 4. Parana streamflow anomaly data (blue line), compared to the eleven-year centered moving average of the streamflow anomaly data (red line). Both datasets have been normalized to a mean of zero and a standard deviation of 1.
As you can see, the damage done by the running mean is nowhere near as severe in this streamflow dataset as it was for the sunspots. Although there are still a lot of reversals, and peaks turned into valleys, at least the correlation is still positive. This is because the streamflow data does NOT contain the roughly eleven-year cycles present in the sunspot data.
Conclusions? Well, my first conclusion is that as a result of doing what the authors did, comparing the red line in Figure 3 with the red line in Figure 4 says absolutely nothing about whether the Parana river streamflow is related to sunspots or not. The two red lines have very little to do with anything.
My second conclusion is, NEVER RUN STATISTICAL ANALYSES ON SMOOTHED DATA. I don’t care if you use gaussian smoothing or Fourier smoothing or boxcar smoothing or loess smoothing, if you want to do statistical analyses, you need to compare the datasets themselves, full stop. Statistically analyzing a smoothed dataset is a mug’s game. The problem is that as in this case, the smoothing can actually introduce totally false, spurious correlations. There’s an old post of mine on spurious correlation and Gaussian smoothing here for those interested in an example.
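To see how smoothing manufactures correlation out of nothing, here is a small sketch using purely synthetic noise (nothing from the paper): smooth pairs of completely independent white-noise series with the same 11-point running mean and measure how large the apparent correlation between them typically becomes.

```python
import numpy as np

rng = np.random.default_rng(42)
kernel = np.ones(11) / 11          # the same 11-point running mean

def r_spread(smooth, n_trials=1000, n=100):
    """Typical size of the correlation between two INDEPENDENT noise series."""
    rs = []
    for _ in range(n_trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        if smooth:
            x = np.convolve(x, kernel, mode="valid")
            y = np.convolve(y, kernel, mode="valid")
        rs.append(np.corrcoef(x, y)[0, 1])
    return np.std(rs)

sd_raw, sd_smooth = r_spread(False), r_spread(True)
print(f"typical |r| between unrelated series, raw:      {sd_raw:.2f}")
print(f"typical |r| between unrelated series, smoothed: {sd_smooth:.2f}")
```

The smoothed pairs routinely show correlations several times larger than the raw pairs, despite there being no relationship at all, because smoothing slashes the effective number of independent data points while the naive significance test still assumes the full N.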
Please be clear that I’m not accusing the authors of any bad intent in this matter. To me, the problem is simply that they didn’t understand and were unaware of the effect of their “smoothing” on the data.
Finally, consider how many rivers there are in the world. You can be assured that people have looked at many of them for a connection with sunspots. If this is the best evidence, it’s no evidence at all. And with that many rivers examined, a p-value of 0.05 is far too generous a threshold. The more places you look, the greater the chance of finding a spurious correlation. This means that the more rivers you examine, the stronger your results must be to be statistically significant … and we don’t yet have even passable results from the Parana data. So as to rivers and sunspots, the jury is still out.
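Here is the arithmetic behind that, with a hypothetical count of fifty rivers examined (the true number is unknown): the chance of at least one spurious “hit” at p < 0.05, and the Bonferroni-corrected threshold each individual river would have to clear.

```python
# A back-of-the-envelope sketch of the multiple-comparisons point: if you
# test many rivers independently at p < 0.05, the chance of at least one
# spurious "hit" grows fast, and the corrected threshold shrinks fast.
m = 50                                  # hypothetical number of rivers examined
alpha = 0.05
p_at_least_one = 1 - (1 - alpha) ** m   # P(at least one false positive)
bonferroni = alpha / m                  # per-river threshold after correction
print(f"P(>=1 false positive over {m} rivers): {p_at_least_one:.2f}")  # ~0.92
print(f"per-river threshold after Bonferroni:  {bonferroni:.4f}")      # 0.0010
```

With fifty rivers in play you are more than 90% certain to find at least one “99% significant” correlation by pure chance, which is why a lone positive result from one river tells you almost nothing.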
How about for sea level and sunspots? Are they related? I can’t do better than to direct you to the 1985 study by Woodworth et al. entitled A world-wide search for the 11-yr solar cycle in mean sea-level records, whose abstract says:
Tide gauge records from throughout the world have been examined for evidence of the 11-yr solar cycle in mean sea-level (MSL). In Europe an amplitude of 10-15 mm is observed with a phase relative to the sunspot cycle similar to that expected as a response to forcing from previously reported solar cycles in sea-level air pressure and winds. At the highest European latitudes the MSL solar cycle is in antiphase to the sunspot cycle while at mid-latitudes it changes to being approximately in phase. Elsewhere in the world there is no convincing evidence for an 11-yr component in MSL records.
So … of the 28 geographical locations examined, only four show a statistically significant signal. Some places it’s acting the way that we’d expect … other places it’s not. Nowhere is it strong.
I haven’t bothered to go through their math, except for their significance calculations. They appear to be correct, including the adjustment of the required significance threshold given that they looked in 28 places. Good on them, those 1980s scientists did the numbers right back then.
However, and it is a very big however, as is common with such analyses from the 1980s, I see no sign that the results have been adjusted for autocorrelation. Given that both the sunspot data and the sea level data are highly autocorrelated, this can only move the results in the direction of less statistical significance … meaning, of course, that the four results that were significant are likely not to remain so once the results are adjusted for autocorrelation.
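The standard adjustment works by shrinking the effective sample size. A common sketch of it (using the lag-1 autocorrelations of the two series; the AR(1) data below is made up, not sea-level data) looks like this:

```python
import numpy as np

# A sketch of the usual lag-1 effective-sample-size adjustment:
#   N_eff = N * (1 - r1x * r1y) / (1 + r1x * r1y)
# Highly autocorrelated series can lose most of their nominal sample size,
# which weakens any significance computed with the raw N.
def lag1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def effective_n(x, y):
    r1x, r1y = lag1(x), lag1(y)
    return len(x) * (1 - r1x * r1y) / (1 + r1x * r1y)

rng = np.random.default_rng(1)

def ar1(phi, n=500):
    """Synthetic AR(1) series as a stand-in for autocorrelated climate data."""
    out = np.zeros(n)
    for i in range(1, n):
        out[i] = phi * out[i - 1] + rng.standard_normal()
    return out

x, y = ar1(0.9), ar1(0.9)
n_eff = effective_n(x, y)
print(f"nominal N = {len(x)}, effective N ~= {n_eff:.0f}")
```

With strongly autocorrelated series the effective N can drop to a tenth of the nominal count or less, so significance thresholds computed with the full N are far too easy to clear.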
Is there a sunspot effect on the climate? Maybe so, maybe no … but given the number of hours people have spent looking for it, including myself and many, many others, if it is there, it’s likely very weak.
My best regards to all,
w.
NOTA BENE! If you disagree with something I said, please quote my exact words, and then tell me why you think I’m wrong. Telling me things like that my science sucks or baldly stating that I don’t understand the math doesn’t help me in the slightest. If I’m wrong I want to know it, but I have no use for claims like “Willis, you are so off-base in this case that you’re not even wrong.” Perhaps I am, but we’ll never know unless you specify exactly what I said that was wrong, and what was wrong with it.
So if you want me to treat you and your comments with respect, quote what you object to, and specify your objection. It’s the only way I can know what the heck you are talking about, and I’ve had it up to here with vague unsupported accusations of wrongdoing.
DATA: Digitized Parana streamflow data from the paper plus SIDC Sunspot data and all analyses for this post are on an Excel spreadsheet here. You’ll have to break the links, they are to my formula for Gaussian smoothing.
PS—Thanks to my undersea contacts for coming up with a copy of the thirty-year-old Woodworth study, and a hat tip to Dr. Holgate and Steve McIntyre at Climate Audit for the lead to the study. Dr. Holgate is well-known in sea level circles, here’s his comment on the sunspot question:
Many people have tried to link climate variations to sunspot cycles. My own feeling is that they both happen to exhibit variability on the same timescales without being causal. No one has yet shown a mechanism, you understand. There is also no trend in the sunspot cycle, so that can’t explain the overall rise in sea levels even if it could explain the variability. If someone can come up with a mechanism then I’d be open to that possibility, but at present it doesn’t look likely to me.
If you’re interested in solar cycles and sea level, you might look at a paper written by my boss a few years back: Woodworth, P.L. “A world-wide search for the 11-yr solar cycle in mean sea-level records.” Geophysical Journal of the Royal Astronomical Society. 80(3) pp743-755
You’ll appreciate that this is a well-trodden path. My own feeling is that it’s not the determining factor in sea level rise, or even accounts for the trend, but there may be something in the variability. I’m just surprised that if there is, it hasn’t been clearly shown yet.
I can only agree …
Paul Westhaver says:
January 26, 2014 at 11:39 am
I’ve shown clearly that the “smoothing” method that they used totally destroys the original data, changes low points into peaks and vice versa, and astonishingly, ends up with a negative correlation to the data.
On what planet does that NOT refute their paper? You might re-establish their underlying claim in some other manner, but their paper is toast.
w.
Paul Westhaver says:
January 26, 2014 at 11:39 am
Thanks, Paul. As mentioned above, their paper is refuted by their abysmal smoothing method. But what about their underlying claim?

Here’s the cross-correlation:
Note that in contradiction to their claim of perfect temporal alignment, the biggest signal is at a lag of two years. However, even that signal is still a long ways from significance, with R^2 = 0.05 and the p-value = 0.10.
So unless you know some other way to measure it, I’d say that not only is the paper refuted, but the underlying claim is refuted as well.
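For anyone who wants to reproduce this kind of check, here is a sketch of how a lagged cross-correlation can be computed. The series below are made-up stand-ins, not the digitized streamflow data, with the lag-2 relationship built in deliberately to show the method picking it out.

```python
import numpy as np

def lagged_r(x, y, max_lag=10):
    """Pearson r between x[t] and y[t + lag]; positive lag means y trails x."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            out[lag] = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        elif lag < 0:
            out[lag] = np.corrcoef(x[-lag:], y[:lag])[0, 1]
        else:
            out[lag] = np.corrcoef(x, y)[0, 1]
    return out

# Stand-in series: y is a noisy copy of x delayed by two steps.
rng = np.random.default_rng(7)
x = rng.standard_normal(100)
y = np.roll(x, 2) + 0.5 * rng.standard_normal(100)

rs = lagged_r(x, y)
best = max(rs, key=lambda k: abs(rs[k]))
print(f"largest |r| at lag {best}: {rs[best]:.2f}")
```

Note that even when the peak lag is found, the significance of that peak still has to be discounted for having searched over twenty-one lags, another multiple-comparisons trap.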
w.
george e. smith: “Well I know what you mean Willis; but I tend to believe that the original real measured sampled data values, are the most information you can ever have. And if you did your sampling correctly according to the requirements of the Nyquist sampling theorem, then those samples will indeed be enough to recover the complete original continuous signal;”
While it’s best to work with the least-processed data if possible, there are cases where you need to filter. As I said above, if you want to look for correlation on an inter-annual to decadal scale that is an order of magnitude or two smaller than the annual cycle, you need to remove it. Also, if you want to do spectral analysis on such a signal, the annual cycle will saturate the dynamic range and severely reduce the reliability of the rest of the spectrum. There will also be artefacts around the annual signal which will render most of the 0.5 to 1.5 year band useless.
You may also need to remove autoregression from the data before trying to estimate correlation coeffs or do spectral analysis. That also implies the need to use processed data.
That’s why I think Briggs’ “you should never…” type comments are ill-informed and unhelpful. Sadly they seem to be getting repeated and linked rather too often.
Paul Westhaver says:
January 26, 2014 at 11:39 am
OK, I’ll compare them. Scafetta refused to share his data and code, so we don’t know what he did or how he did it.
As a result … I can’t compare this paper to the Scafetta paper.
Sorry, that’s as far as I can get. I could disassemble and replicate this Parana study exactly. I cannot do that with Scafetta’s work because, unlike my high-school chemistry teacher, who would give us an “F” if we didn’t show our work, Scafetta gives himself an A+ and doesn’t show anything … sorry, Paul, but that’s not science.
You go on to state that Scafetta attracts attention here, viz:
Yes, there is a backstory. He published a couple of his polemics here, and they got … well, a cool reception, accompanied by calls for his code and data. He got all huffy, refused to share either one, abused us all roundly, and limped off to Tallblokes to lick his wounds. Periodically he comes back, abuses us once again, says if we had any brains we’d recognize his genius, tells me I’m too uneducated to understand his math, things like that, then goes back to Tallblokes and whines about how badly I treat him.
So when someone steps in and starts prating about Scafetta’s data, well, that’s a sore point around here because he refuses to share, reveal, or archive either his code or his data … sorry you got caught up in it.
w.
RC Saumarez says:
January 26, 2014 at 3:51 pm
Great plan, RC! Ask them about the effect of the 11-year running mean while you’re at it, see what they say. You should have some interesting news to report back to us, I await their comments.
w.
PS: Since I was able to replicate their Figure 2 completely from their data and their paper, I kinda suspect I didn’t misunderstand them …
afjacobs says:
January 26, 2014 at 4:21 pm
Unexplained REAL correlations are where scientific inquiry starts. The trick is to tell the real correlations from the spurious. We have an entire branch of math dedicated to just that … math which you seem content to ignore.
There is a clear correlation, for example, between the CO2 levels and the price of US stamps … should we rush to investigate?
Correlations are everywhere, and time is short. If you want to chase every correlation, significant or not, be my guest—I don’t have the time for that.
w.
Mike Jonas says:
January 26, 2014 at 6:24 pm
That’s true, Mike, but it was just a study by NASA authors, not an official NASA study. Official studies are done and published by NASA itself; this one was done by individual authors and published in a journal.
You do understand that their first statement means the Nile results are NOT significant, don’t you? The rest just seems like handwaving to re-establish that they are significant … but they aren’t.
Finally, if this current Parana study has taught you anything, it should have taught you to be very, very suspicious of this kind of analysis. There are many, many pitfalls, and even older papers like this one can contain egregious errors like their 11-year running mean “smoothing”.
Next, yes, I’m sure they can find the occasional river or lake that has some correlation with sunspots over some period … but by the time they’ve looked at four rivers to find the one that is “significant”, they’ve forgotten that they need to adjust significance levels to allow for repeated trials.
So as I said at the top, there may be a relationship … but given how long people have looked, and how weak and how little they’ve found, any reasonable person would have to admit that IF such an effect exists, it’s not a very big effect …
My best to you,
w.
Greg Goodman says:
January 26, 2014 at 7:09 pm
My own feeling is that I’d rather people took Briggs’ advice than not; the resulting problems would be fewer. Yes, you are right that there are times when filtering is not an option, it is a necessity. However, in climate science those times are not as common as in, say, electrical signal analysis.
And certainly, when you see egregious examples such as the choice of the 11-year filter in the Parana paper by people who are established scientists, you’ve got to agree that casual filter use is an issue in the field …
One recurring problem that gets too little attention is the common practice of removing the “climatology”, that is to say, subtracting the month-by-month average values of the variable in question. Those “reduced” values are then taken as accurate datapoints when doing things like calculating trends.
But the problem is, once you remove the monthly values, the resulting points all inherit the standard error of the mean in the climatology … and how often is that taken into account when calculating the trend in say the satellite tropospheric temperatures? We just use the regular methods for the error in trend, without including the additional error inherent in the climatology.
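The point about inherited error can be sketched numerically. Everything below is made-up illustration data, not any real climatology: the monthly climatology is itself an estimate, so every anomaly built from it carries the standard error of that estimated mean.

```python
import numpy as np

# A sketch: the monthly climatology is itself an ESTIMATE, so every anomaly
# built by subtracting it inherits the standard error of that mean.
rng = np.random.default_rng(3)
n_years = 30
temps = rng.standard_normal((n_years, 12)) + 10.0      # fake monthly series

climatology = temps.mean(axis=0)                       # 12 estimated monthly means
sem = temps.std(axis=0, ddof=1) / np.sqrt(n_years)     # standard error of each mean

anomalies = temps - climatology                        # what trend fits usually use
print(f"typical SEM carried into every anomaly: {sem.mean():.2f}")
```

With thirty years of data, each anomaly carries an extra uncertainty of roughly a fifth of the interannual scatter, and that extra error almost never appears in the quoted trend uncertainty.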
Me, I use filters a lot to help me understand what’s going on, by overlaying a gaussian or a loess average over the data. But I do my best to avoid running statistical tests on filtered data. Yes, you can kind of adjust for the increased autocorrelation caused by the smoothing by reducing the effective “N”, the number of data points. And in fact, often you have to adjust for autocorrelation even when there is no smoothing.
But in general the methods are ad-hoc, and often either over- or under-estimate the actual significance of the results. So I try to keep it as simple as possible.
And having had to deal with the kind of garbage smoothing we see in this study far too often, I say, like Briggs, DON’T RUN STATISTICAL TESTS ON SMOOTHED DATA … knowing full well that people like yourself who know what they are doing and know what kind of filters to use and why they are using that particular filter, well, you’ll filter anyhow, as well you should.
Regards,
w.
The generation of random numbers is too important to be left to chance.
Ah. Someone who understands crypto.
Re my January 26, 2014 at 8:26 pm
Unless you are making one time pads.
Willis: “However, in climate science those times are not as common as in, say, electrical signal analysis.”
I agree with your comment in general, it’s best to err on the side of caution. However, the need to remove the annual cycle in climate science is as omnipresent as the need to remove mains ‘hum’ in audio electronics.
The only way you can avoid filtering it is by doing something silly like subtracting a “climatology” which of course also affects the degrees of freedom.
When I find that this kind of detail is available from satellite data, and yet they are splashing around “climatologies” of monthly averages and running means, it makes me want to weep:
http://climategrog.wordpress.com/?attachment_id=756
Hardly surprising there’s been so little progress in the last 20 years.
Erwin
Thank you very much, that makes a lot more sense.
very nice job
@Ox AO: thank you.
It would be interesting to repeat the analysis using the Parana River data with a better time-resolution (monthly values). I will have a look if I can find these data …
One of the problems in the paper is the reliance on the so-called ‘stream flow’, which is a very rough proxy for rainfall and can be highly variable even if the rainfall is consistent.
I have proposed that solar effects cause latitudinal shifting of climate zones so the effect on rainfall is hard enough to correlate with solar activity let alone the consequent stream flow because a given area can have a complex relationship with the rain bearing winds above as they move to and fro latitudinally.
Even the shifting of climate zones is ‘noisy’ around the globe as one can see from the variable locations of the ‘dips’ in the jet stream tracks from year to year.
So, if there is indeed a solar / stream flow relationship it is bound to be very poorly correlated but that does not mean that there is no relationship.
It more likely means that to discern the relationship one needs to observe over longer periods of time than a single solar cycle.
The sort of indicator I would see as useful would be as to whether the stream flow changes on average across decades in line with the long slow and very irregular changes in solar activity from say the MWP to LIA or LIA to date.
On those timescales I think there is a relationship as witness the rise and fall of civilisations as the climate changed around them over multi-centennial periods. Many such developed due to abundant supplies of natural resources but declined as the environment altered around them.
So my take on this thread is that Nicola is being too ambitious in trying to discern (let alone prove) significance at the level of just a few solar cycles and Willis may be right in calling him out on that but there is still room for a real relationship between sun, global air circulation and stream flows around the globe over longer periods of time.
Hi Willis
Thanks for finding the time and effort; my posts demonstrate most vividly how not to do science, which is the reason I do not object to the stuff being dismissed as pseudoscience etc. It is more of a ‘what if?’ than an ‘it is’.
You made a number of very valid points, so I shall make an attempt to explain, though it is not necessarily correct.
– On SSN number flipping. Scan down the annual data, the lowest score around a minimum will change sign. In your example:
1963 +27.9
1964 -10.2
1965 -15.1
– Arbitrary choice (cherry picking) is deplorable, but in this case, where I attempt to demonstrate a principle rather than project accuracy (neighbouring cycles in practice do overlap by a number of years anyway), it doesn’t make a great deal of difference.
– NASA’s statement: I have added a link on the graph, but here it is:
http://science1.nasa.gov/science-news/science-at-nasa/2008/16dec_giantbreach/
On this point Dr. S has made a number of points at your ‘SSN & sea level’ thread.
It is not disputed that both the SSN and Ap index data show a difference between the even and odd cycles. A young solar scientist wrote about it 45 years ago (see Fig. 23 in http://www.leif.org/research/suipr699.pdf), and is now going to revisit the issue.
It is a matter of how these differences are treated. If a median is calculated and normalised to zero, then the odd cycles’ remnants will be positive and the even ones negative (with one or two exceptions), i.e. it is not a robust rule. I simplified the process by not following that procedure, but used the easy (lazy man’s) option and inverted whole cycles (I did say at the start ‘how not to do science’).
– I also agree entirely with what Greg and you said at the top of your next comment, and if I may add, even if all calculations are shown in fine detail, without a physical mechanism it is no more than numerology.
– Temperature data is from NASA-GISS (de-trended, 3-year moving average); the Ap index was provided by Dr. S. These details are now on the graph link (see above).
Ap data from individual stations cover only recent decades; values back to the 1840s are reconstructed from SSN. (Dr. S is an authority on this one.)
Why the N. Hemisphere? It is very different to the S.H. as far as the distribution of geomagnetic field intensity, i.e. the existence of the field bifurcation. The physics I have in mind, I don’t think would work in the S.H., where the field is omni-centred.
– The 16-year lag is part of the mechanism I have in mind; there are a number of papers from NASA-JPL (on Google Scholar use ‘Earth differential rotation Dickey’; too many links to quote directly).
– The Gulf Stream is the critical factor; it moves heat energy from subequatorial to mid and high latitudes, and it is the principal contributor to the subpolar gyre (SPG) circulation, which is the engine of the heat transport across the North Atlantic Ocean (feedback circulation delays in the SPG as mentioned above – google ‘subpolar gyre feedback’: Treguier et al. 2004, Levermann et al. 2007, Born et al. 2011).
– The energy is not going anywhere; it is in the N. Atlantic ocean. It is just a matter of one of the two feedbacks, or both (Earth differential rotation & SPG circulation), determining when the energy is released.
– Then we have the oddities: the de-trended anomaly (NASA-GISS), as stated above.
Angular momentum in the ‘price of beer’ papers by Dickey et al., as above.
“I just skip your graphs” – Wise move, most of the others do it too.
“take this in the positive sense” – Will do, but only in the even-numbered solar cycles.
This must be the longest of my posts, but I thought it only fair to ‘answer’ all of your questions (hopefully not nonsense in its entirety; pseudo-science is unavoidable). I have learned my lesson, I shall avoid addressing you again.
Thanks, I had a lot of fun reading and replying to your observations, all the best.
regards. vuk.
Blair says:
January 26, 2014 at 7:14 am
They simply don’t know when they get something wrong because they rely entirely on software. It has happened so frequently that we laughingly gave it a name: the sorcerer’s apprentice syndrome. Apparently there is also a button which says, “Design The HVAC systems”.
They do however, show no lack of confidence in their results.
The late Bob Pease used to rail about this frequently in electronic design. He said – you design with a “slide rule” and check with a circuit simulator – if you use a simulator at all. And don’t depend on the simulator for numerical results. Use it for finding trends – if I increase this resistor it changes the operation this way. The problem is particularly acute in the tuning of PID loops which are common in chemical plants and HVAC systems.
I have designed autotune functions for PID loops. If the data your plant produces is very noisy they can produce terrible results. (Ah – back to the theme of this post) But you can’t sell controllers without it these days so….
However, the need to remove the annual cycle in climate science is as omnipresent as the need to remove mains ‘hum’ in audio electronics.
The mains frequency is going to be 60 Hz (50 Hz) to better than 0.1%. Easily filtered. You know quite well when to expect events (peaks, zero crossings). And if your filter is narrow, it hardly affects how you hear the music. Now apply an annual filter to noisy data. Can you always expect snow on 1 Jan? Does the summer temperature always peak on 10 August? (plus or minus 3.65 days to get us in the 0.1% band). And BTW, the mains are recentered to reduce absolute phase drift to well under 1 second (60 cycles) indefinitely, so your clocks which time off the mains don’t go drifting away from real time too much. Your VCR, DVR, microwave, etc. wouldn’t like that. Not to mention synchronous motor clocks.
The recentering is to NIST time, which is good to 1E-12 or better over long periods. You start with a low-drift oscillator and then discipline it. That is how you get local short-term accuracy. The filter times get very long compared to the oscillator frequency, because the NIST transfer sources (satellites) are noisy (±1E-7 jitter). You start with short filters to get close, and then make them longer.
Well, I slipped a decimal point (another common error in the trade): (plus or minus .365 days to get us in the .1% band)
And just to clarify, “And then make them longer” refers to averaging times. Every 100X increase in averaging time gets you a 10X increase in accuracy (it is a square-root function). What you rely on is the one-second tick from your GPS receiver.
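That square-root rule is easy to check numerically with white noise (a toy sketch only; real oscillator noise is not white, which is exactly why the averaging schedule matters in practice):

```python
import numpy as np

# Check the square-root rule: averaging 100x longer cuts the scatter of the
# mean by roughly 10x, for uncorrelated (white) noise.
rng = np.random.default_rng(5)
noise = rng.standard_normal(1_000_000)   # stand-in white noise samples

def sd_of_means(block):
    """Scatter of block averages: uncertainty of the mean after 'block' samples."""
    usable = len(noise) // block * block
    return noise[:usable].reshape(-1, block).mean(axis=1).std()

ratio = sd_of_means(100) / sd_of_means(10_000)
print(f"gain in accuracy from 100x more averaging: {ratio:.1f}x")  # ~10x
```

For flicker or drift-dominated noise the gain flattens out or even reverses at long averaging times, which is why disciplined-oscillator designs lengthen the filter gradually rather than starting long.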
Willis Eschenbach says:
January 26, 2014 at 5:21 pm
Tasty.
scf says:
January 25, 2014 at 5:31 pm
I have two grandfather clocks. The timing of the chimes is very strongly correlated, to over a 99% level. Also, one of the two always chimes first, so that one must be causing the other one to chime.
When the first one of them stops chiming so does the other, so this confirms the causation (whenever a power failure occurs).
This comment is no sillier than the Parana river study.
Well that is interesting. It turns out that two grandfather clocks that are coupled in any way (placed on the same wooden floor for instance) will synchronize. They cease to be independent time keepers. You have to go to a LOT of trouble to isolate them from each other. Studies have been done on this.
https://www.google.com/#q=grandfather+clock+coupling
Injection locking is another good search term.
M Simon:
To help others I insert a link in your comment and add an observation.
And true.
Richard
Nice article on coupled clocks http://blogs.unimelb.edu.au/sciencecommunication/2012/10/28/coupled-oscillators-and-the-tale-of-huygens-clocks/
With this video. http://youtu.be/DD7YDyF6dUk
Obviously they went to a lot of trouble to make the coupling reasonably good. It is a short video, after all. Looks like the side off an old computer (the table) mounted on some handy rollers (mailing tubes). Clocks with integer-related periods will synchronize given enough time, and the period relation need not be an exact multiple: numbers like 3:5 will work given enough time, although numbers like 7:31 rarely work. The repeating period becomes too long and chaos takes over.
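For readers who want to play with this, here is a minimal phase-coupling sketch (a Kuramoto-style toy model, not a simulation of Huygens’ actual pendulum clocks): two oscillators with slightly different natural frequencies lock to a fixed phase difference once the coupling exceeds half the detuning.

```python
import numpy as np

def simulate(coupling, steps=20_000, dt=0.01):
    """Two phase oscillators, each nudged toward the other's phase."""
    w1, w2 = 1.00, 1.05          # natural frequencies, detuned by 0.05 rad/s
    th1, th2 = 0.0, 1.0          # arbitrary starting phases
    for _ in range(steps):
        d1 = w1 + coupling * np.sin(th2 - th1)
        d2 = w2 + coupling * np.sin(th1 - th2)
        th1, th2 = th1 + dt * d1, th2 + dt * d2
    return (th2 - th1) % (2 * np.pi)

# Coupling stronger than half the detuning (0.1 > 0.05/2) -> phase lock:
# the phase difference settles near arcsin(detuning / (2 * coupling)).
locked = simulate(coupling=0.1)
print(f"steady phase difference when locked: {locked:.2f} rad")
```

With the coupling turned down below the lock threshold the phase difference never settles, which is the toy-model analogue of clocks too weakly coupled through the wall to synchronize.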
richardscourtney says:
January 27, 2014 at 5:12 am
I usually search (‘find’ function) on a time. Such as “5:21” which usually produces one result on a page.
Or I will use a phrase mentioned to see all comments about it. Your method certainly produces much more exact results. But it is a lot more work.
M Simon:
At January 27, 2014 at 5:37 am you say to me
Yes, and ctrl-f does the same but is quicker.
One needs knowledge of such methods to use them. Hence, providing a link helps people.
Richard