Sea Level and Effective N

Guest Post by Willis Eschenbach

Over in the Tweeterverse, I said I wasn’t a denier, and I challenged folks to point out what they think I deny. Brandon R. Gates took up the challenge by claiming that I denied that sea level rise is accelerating. I replied:

Brandon, IF such acceleration exists it is meaninglessly tiny. I can’t find any statistically significant evidence that it is real. HOWEVER, I don’t “deny” a damn thing. I just disagree about the statistics.

Brandon replied:

> IF such acceleration exists

It’s there, Willis. And you’ve been shown.

> it is meaninglessly tiny

When you have a better model for how climate works, then you can talk to me about relative magnitudes of effects.

[As a digression, I’m one of the few folks with a better model, supported by a number of observations, for how the climate works. I say that the long-term global temperature is regulated by emergent phenomena to within a very narrow range (e.g. ± 0.3°C over the entire 20th Century). And I have no clue what that has to do with whether or not I can talk to him about “relative magnitude of effects”.

But as I said … I digress … ]

Brandon accompanied his unsupported claims with the following graph of air temperature, not sure why … my guess is that he grabbed it in haste and mistook it for sea level. Easy enough to do, I’ve done worse.

Now, I’ve written recently about sea level rise in a post called “Inside The Acceleration Factory”. However, there is a deeper problem with the claims about sea levels: the sea level data is very highly autocorrelated.

“Autocorrelated”, with respect to a time series like say sea level or temperature, means the present is correlated with the past. In other words, autocorrelation means that hot days are more likely to be followed by hot days, and cold days to be followed by cold days, than hot days following cold or vice versa. And the same is true of hot and cold months, or hot and cold years. When such autocorrelation extends over long time periods, years or decades, it is often called “long term persistence”, or “LTP”.
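For readers who like to see this concretely, here's a minimal Python sketch (my own illustration, not from any of the papers discussed) comparing the lag-1 autocorrelation of plain white noise with that of a persistent AR(1) series:

```python
import random
import statistics

def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation of a series."""
    m = statistics.fmean(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

random.seed(42)
# White noise: each value is independent of the last.
white = [random.gauss(0, 1) for _ in range(5000)]
# AR(1) "persistent" series: each value remembers 90% of the last.
persistent = [0.0]
for _ in range(4999):
    persistent.append(0.9 * persistent[-1] + random.gauss(0, 1))

print(round(lag1_autocorrelation(white), 2))       # near 0
print(round(lag1_autocorrelation(persistent), 2))  # near 0.9
```

The white noise comes out near zero; the persistent series comes out near its memory parameter of 0.9. LTP is the stronger condition where this kind of correlation dies off only slowly across long lags.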

And trends are very common in datasets that exhibit LTP. Another way of putting this is captured in the title of a 2005 article by Cohn and Lins in Geophysical Research Letters called “Nature’s Style: Naturally trendy”. The title is quite accurate: natural datasets tend to contain trends of various lengths and strengths, due to the existence of long-term persistence.

And this long-term persistence raises serious problems when you are trying to determine whether the trend of a given climate time series is statistically significant. To elucidate this, let me discuss the Abstract of “Naturally Trendy”. I’ll put their abstract in bold italics. It starts as follows:

Hydroclimatological time series often exhibit trends.

True. Time series of river flow, rainfall, temperature, and the like have trends.

While trend magnitude can be determined with little ambiguity, the corresponding statistical significance, sometimes cited to bolster scientific and political argument, is less certain because significance depends critically on the null hypothesis which in turn reflects subjective notions about what one expects to see.

Let me break that down a bit. Over any given time interval, every weather-related time series, whether it is temperature, rainfall, or any other variable, is in one of two states.

Going up, or

Going down.

So the relevant question for a given weather dataset is never “is there a trend”. There is, and we can measure the size of the trend.

Here’s the relevant question: is a given trend an UNUSUAL trend, or is it just a natural fluctuation?

Now, we humanoids have invented an entire branch of math called “statistics” to answer this very question. We’re gamblers, and we want to know the odds.

It turns out, however, that the question of an unusual trend is slightly more complicated. The real question is, is the trend UNUSUAL compared to what?

Plain old bog-standard statistical mathematics answers the following question—is the trend UNUSUAL compared to totally random data? And that is a very useful question. It is also very accurate for truly random things like throwing dice. If I pick up a cup containing ten dice, and I turn it over and I get ten threes, I’ll bet big money that the dice are loaded.
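To put a number on that bet, the odds are a one-liner (just for illustration):

```python
# Chance that ten fair dice all show one particular chosen face
# (say, ten threes): each die is an independent 1-in-6 event.
p_ten_threes = (1 / 6) ** 10
print(f"{p_ten_threes:.2e}")  # about 1.7e-08, i.e. roughly 1 in 60 million
```

At roughly one in sixty million, "the dice are loaded" is much the better bet.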

HOWEVER, and it’s a big however, what about when the question is, is a given trend unusual, not compared to a random time series, but compared to random autocorrelated time series? And particularly, is a given trend unusual compared to a time series with long-term persistence (LTP)? Their Abstract continues:

We consider statistical trend tests of hydroclimatological data in the presence of long-term persistence (LTP).

They are taking a variety of trend tests to see how well they perform with random datasets which exhibit LTP.

Monte Carlo experiments employing FARIMA models indicate that trend tests which fail to consider LTP greatly overstate the statistical significance of observed trends when LTP is present.

In simplest terms, regular statistical tests that don’t consider LTP falsely indicate significant trends when the trends are in fact just natural variations. Or to quote from the body of the paper,

More important, as Mandelbrot and Wallis [1969b, pp. 230 –231] observed, ‘‘[a] perceptually striking characteristic of fractional noises is that their sample functions exhibit an astonishing wealth of ‘features’ of every kind, including trends and cyclic swings of various frequencies.’’ It is easy to imagine that LTP could be mistaken for trend.

This is a very important observation. “Fractional noise”, meaning noise with LTP, contains a variety of trends and cycles which are natural and inherent in the noise. But these trends and cycles don’t mean anything. They appear, persist for a while, and disappear. They are not fixed cycles or permanent trends. They are a result of the LTP, and are not externally driven. Nor are they diagnostic: the presence of what appears to be a twenty-year cycle cannot be assumed to be a constant feature of the data, nor can it be used as a means to predict the future. It may just be part of Mandelbrot’s “astonishing wealth of features”.

The most common way to deal with the issue of LTP is to use what is called an “effective N”. In statistics, “N” represents the number of data points. So if we have, say, ten years of monthly data, that’s 120 months, so N equals 120. In general, the more data points you have, the stronger the statistical conclusions … but when there is LTP, the tests “greatly overstate the statistical significance”. And by “greatly”, as the paper points out, regular statistical methods can easily overstate significance by some 25 orders of magnitude.

A common way to fix that problem is to calculate the significance as though there were actually a much smaller number of data points, a smaller “effective N”. That makes the regular statistical tests work again.

Now, I use the method of Koutsoyiannis to determine the “effective N”, for a few reasons.

First, it is mathematically derivable from known principles.

Next, it depends on the exact measured persistence characteristics, both long and short term, of the dataset being analyzed.

Next, as discussed in the link just above, I independently discovered and tested the method in my own research, only to find out that …

… the method actually was first described by Demetris Koutsoyiannis, a scientist for whom I’ve always had the greatest respect. He’s cited several times in the “Naturally Trendy” paper. So I was stoked when he commented on my post that he was the originator of the method, because that meant I actually did understand the subject.

With all of that as prologue, let me return to the question of sea level rise. There are a few reconstructions of sea level rise. The main ones are by Jevrejeva, and by Church and White, and also the satellite TOPEX/JASON data. Here’s a graph from the previous post mentioned above, showing the Church and White tide station data.

Now, I pointed out in my other post how it is … curious … that starting at exactly the same time as the satellite record started in 1992, the trend in the Church and White tide gauge data more than doubled.

And while that change in trend is worrisome in and of itself, there’s a deeper problem. The aforementioned “effective N” is a function of what is called the “Hurst Exponent”. The Hurst Exponent is a number between zero and plus one that indicates the amount of long-term persistence. A value of one-half means no long-term persistence. Hurst exponents from zero to one half show negative long-term persistence (hot followed by cold etc.), and values above one half indicate the existence of long-term persistence (hot followed by hot etc.). The nearer the Hurst Exponent is to one, the more LTP the dataset exhibits.
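There are several ways to estimate a Hurst exponent, and different estimators can disagree. As one illustrative sketch (mine, not the Koutsoyiannis method used later in this post), here is the simple "aggregated variance" approach, checked against white noise, which should come out near 0.5:

```python
import math
import random
import statistics

def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent from how the std of block means
    shrinks with block size: std at scale k ~ k ** (H - 1)."""
    logs_k, logs_s = [], []
    for k in scales:
        means = [statistics.fmean(x[i:i + k]) for i in range(0, len(x) - k + 1, k)]
        logs_k.append(math.log(k))
        logs_s.append(math.log(statistics.stdev(means)))
    # Least-squares slope of log(std) vs log(scale); H = slope + 1.
    kbar, sbar = statistics.fmean(logs_k), statistics.fmean(logs_s)
    slope = sum((a - kbar) * (b - sbar) for a, b in zip(logs_k, logs_s)) / \
            sum((a - kbar) ** 2 for a in logs_k)
    return slope + 1

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(20000)]
print(round(hurst_aggregated_variance(noise), 2))  # near 0.5: no persistence
```

For white noise, block means shrink like 1/sqrt(k), giving a log-log slope of -0.5 and hence H near 0.5; persistent data shrinks more slowly, pushing H toward 1.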

And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.

And what is the effective N, the effective number of data points, of the Church and White data? Let’s start with “N”, the actual number of data points (months in this case). In the C&W sea level data, N is 1608 months of data.

Next, effective N (usually indicated as “Neff”) is equal to:

N, number of datapoints, to the power of ( 2 * (1 – Hurst Exponent) )

And 2 * (1-Hurst Exponent) is 0.137. So:

Effective N “Neff” = N ^ (2 * (1 – Hurst Exponent))

= 1608 ^ 0.137

= 2.74

In other words, the Church and White data has so much long-term persistence that effectively, it acts like there are only three data points.
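In code, the calculation above is a one-liner. A minimal sketch (values as in the post; the small difference from 2.74 is just rounding of the Hurst exponent):

```python
def effective_n(n, hurst):
    """Koutsoyiannis-style effective sample size: Neff = N ** (2 * (1 - H)).
    With no long-term persistence (H = 0.5) this returns N unchanged."""
    return n ** (2 * (1 - hurst))

# Church & White monthly record: N = 1608 months, Hurst exponent ~0.93.
print(round(effective_n(1608, 0.93), 2))  # about 2.8 with H rounded to 0.93
```

Note the sanity check built into the formula: at H = 0.5 (no persistence) the exponent is 1 and Neff equals N, and the closer H gets to 1, the more brutally the effective sample size collapses.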

Now, are those three data points enough to establish the existence of a trend in the sea level data? Well, almost, but not quite. With an effective N of three, the p-value of the trend in the Church and White data is 0.07. This is just above what in the climate sciences is considered statistically significant, which is a p-value less than 0.05. And if the effective N were four instead of three, it would indeed be statistically significant at a p-value less than 0.05.

 

However, if you only have three data points, that’s not enough to even look to see if the results are improved by adding an acceleration term to the equation. The problem is that with an additional variable, that’s three tunable parameters for the least squares calculation and only three data points. That means there are zero degrees of freedom … no workee.
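The zero-degrees-of-freedom point can be seen directly: a quadratic (three tunable parameters) passes exactly through any three points, leaving no residuals at all with which to judge an acceleration term. A small sketch with hypothetical numbers, purely for illustration:

```python
def quadratic_through(p0, p1, p2):
    """Lagrange interpolation: the unique quadratic through three points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Any three points whatsoever are fit *exactly* by a quadratic, so the
# residuals carry zero information about whether "acceleration" is real.
pts = [(0.0, 1.3), (1.0, -0.7), (2.0, 2.2)]
f = quadratic_through(*pts)
print(all(abs(f(x) - y) < 1e-12 for x, y in pts))  # True
```

With three effective data points and three parameters, the fit is perfect no matter what the data say, which is exactly why no significance test is possible.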

So … do I “deny” that sea levels are accelerating in some significant manner?

Heck, no. I deny nothing. Instead, I say we don’t have the data we’d need to determine if sea level is accelerating.

Is there a solution to the problem of LTP in datasets? Well, yes and no. There are indeed solutions, but the climate science community seems determined to ignore them. I can only assume that this is because many claims of significant results would have to be retracted if the statistical significance were to be calculated correctly. Once again, from the paper “Nature’s Style: Naturally Trendy”:

In any case, powerful trend tests are available that can accommodate LTP.

It is therefore surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence.

Surprising indeed, particularly bearing in mind that the “Naturally Trendy” paper was published 14 years ago … and the situation has not gotten better since then. LTP is still rarely accounted for properly.

To return to the paper, the authors say:

These findings have implications for both science and public policy.

For example, with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes.

I’m sure that you can see the problems that such statistical honesty would cause for far too much of mainstream climate science …

Best regards to all, including to Brandon R. Gates, whose claim inspired this post,

w.

As Usual: I ask that when you comment please quote the exact words you are discussing, so that we can all understand both who and what you are referring to.

148 Comments
Greg Freemyer
February 20, 2019 8:59 pm

It amazes me to come to a post like this less than 6 months after Judith Curry published a free-to-the-world summary of the state of sea-level-rise research and not see her name, nor any reference to the findings she reported:

https://curryja.files.wordpress.com/2018/11/special-report-sea-level-rise3.pdf

The consensus scientific community has found no solid evidence of acceleration in the North Atlantic. Where they have found it is in the Indian Ocean basin.
And even then it isn’t an acceleration in the way normal people think about it.

The ocean floor of the Indian Ocean is receding, and the volume of water contained in the Indian Ocean basin is increasing at an accelerating rate.

Thus, analysis of tide gauge data won’t find it. You have to have a model that incorporates the ocean floor.

I’m not saying I agree or disagree, but if you want to say the consensus sea-level scientific analysis is wrong, you have to know what it is actually saying!


Greg Freemyer
Reply to  Willis Eschenbach
February 20, 2019 9:25 pm

Willis, I was talking more to the commenters, few of whom address the excellent content of your post. Your post was merely a launching pad.

Chaamjamal
February 20, 2019 9:46 pm

Acceleration does not prove human cause

https://tambonthongchai.com/2019/02/20/csiroslr/

Art
February 20, 2019 10:47 pm

Reply to  Art
February 21, 2019 9:45 am

The 2014 Church and White global sea level reconstruction, which resembles NASA/CSIRO much better than it resembles NASA/Hansen 1982 up to the beginning of the satellite contribution in or around 1993, was cited favorably for the pre-satellite period in https://wattsupwiththat.com/2018/12/17/inside-the-acceleration-factory/

mike haseler
February 21, 2019 1:23 am

I’ve been saying this in various forms since at least my Climategate submission.

To put it simply: you need a model of normal variation in order to know what is normal. And anyone with any gumption for this kind of issue uses a frequency-based model. Statistical tests are pretty meaningless for complex noise.

Steven Mosher
February 21, 2019 2:26 am

“And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.”

Hmm.

How did you calculate that? There are half a dozen different methods that all yield different results.

Also, self-similarity and LTP assume that the time series is second-order stationary, i.e. that the variance of the time series doesn’t change over time. Running standard tests on Church will yield values that are too high (even over 1).

Some folks have estimates more around .6

Johann Wundersamer
February 21, 2019 3:06 am

!!!

February 21, 2019 3:29 am

@Willis

When such autocorrelation extends over long time periods, years or decades, it is often called “long term persistence”, or “LTP”.

This statistical LTP concept that you are suggesting as a generic statistical term actually appears to be a very specialized term, used exclusively in hydroclimatology, as Koutsoyiannis discusses here:
https://www.researchgate.net/publication/237280296_Statistical_Analysis_of_Hydroclimatic_Time_Series_Uncertainty_and_Insights

The term itself suggests some kind of bias, i.e. what about “short-term” or “medium-term” persistence? Apparently LTP is important in hydroclimatology, not so much in other areas of study.

But, generically, what we are trying to do here is find a mathematical model which can explain and/or predict a numeric time-series. In that much broader context, the issues you describe are generally characterized as a tradeoff between ‘bias’ and ‘variance’:
https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff

tty
Reply to  johanus
February 21, 2019 7:20 am

There are few really long time series outside hydroclimatology which is why the “Hurst phenomenon” was first noticed and has been most studied there. But LTP is applicable in many other fields:

https://www.tandfonline.com/doi/full/10.1080/02626667.2015.1125998?scroll=top&needAccess=true

Steven Mosher
February 21, 2019 3:37 am

“And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.”

Churches data

http://www.cmar.csiro.au/sealevel/sl_data_cmar.html

using R fractal package and DFA for calculating hurst exponent.

.38 or .35

Steven Mosher
Reply to  Willis Eschenbach
February 21, 2019 7:09 pm

The method of Koutsoyiannis gives 0.93.

Sorry, did you detrend the data and check for stationarity?
All the standard peer-reviewed methods I am using don’t come close to .93.

Seems like there is considerable uncertainty in the calculation of this statistic.

Other published reports show values of .62

Rather than hide this it seems a most important analytic decision.

Dave Fair
Reply to  Steven Mosher
February 21, 2019 7:34 pm

Mr. Mosher, please reference those reports.

Frank
Reply to  Willis Eschenbach
February 22, 2019 3:15 am

Willis and Steven: This passage from Koutsoyiannis (2007) suggests that high Hurst exponents here don’t provide conclusive evidence for LTP in all but the longest time series:

In addition, because LTP is eventually an asymptotical property of the process (which should be detected on the tail, i.e., on the largest scales), even the detection of LTP is highly uncertain when dealing with time series with short length [Taqqu et al., 1995].

[27] This point has already been made in some studies. For example, Koutsoyiannis [2002] showed that the sum of three Markovian processes (whose behavior, rigorously speaking is STP) is virtually indistinguishable from a process with LTP for lags as high as of the order of 1000. To demonstrate this point further, we fitted to the E02 series [Esper’s 2002 millennium temperature reconstruction] an ARMA(1, 1) process. Testing the autocorrelation function of the residuals of this, we concluded that they are indistinguishable from white noise; this means that the series is compatible with the ARMA(1, 1) process, i.e., it exhibits STP with Hurst coefficient 0.50. Furthermore, we generated with the fitted ARMA(1, 1) a synthetic series with sample size 2000, and all estimation methods we tried gave incorrect values of H on the order 0.79–0.93. Continuing this experiment, we also found that we need a series with length of about 20 000 to correctly estimate H, namely, to find a value around 0.50. These examples clearly point out that even the distinction between the extreme cases H = 0.5 and H → 1 is not statistically decidable with typical sample sizes.

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2006WR005592#wrcr11123-fig-0002

Which leaves me confused as to how we can be sure that the Nilometer data (700 years long) must represent LTP rather than an ARMA process with high autocorrelation. Figure 5 of the Koutsoyiannis paper linked below shows logarithmic plots of standard deviation versus scale for the time series of (left) Boeoticos Kephisos annual runoff and (right) the Nilometer, with lines of slope -0.5 for classical statistics and H-1 for an LTP model (fractional Gaussian model?). A line is needed for auto-correlated data, where Neffective is also less than N.

https://www.itia.ntua.gr/en/getfile/673/1/documents/2005JoHNonStatVsScalingPP.pdf

Bair Polaire
February 21, 2019 3:42 am

Interesting post, as usual! Another strong argument in favor of adaptation instead of mitigation.

“Yeah, there is a trend, but have you accounted for LTP?” This should become a standard reply to warmist claims.

Typo: Kousoyiannis -> Koutsoyiannis

Reply to  Bair Polaire
February 21, 2019 5:32 am

@Bair
“Yeah, there is a trend, but have you accounted for LTP?”

You have completely misunderstood “LTP”. The warmists have indeed gladly accounted for, and adjusted, the ‘long-term persistence’ of warming in the 1930s and other similar past historical ‘trends’.

LTP is _not_ the same as ‘irreducible error’ in analysis. It can be interpreted as bias or variance, depending on your hypothesis. And it definitely has cause and effect.

tty
Reply to  Johanus
February 21, 2019 7:12 am

“And it definitely has cause and effect”

Depends on what you mean by cause and effect. LTP is possible in completely random processes. Ever heard of Hurst-Kolmogorov dynamics?

Bair Polaire
Reply to  Johanus
February 21, 2019 8:31 am

From the head post:

“Nature’s Style: Naturally Trendy”:

In any case, powerful trend tests are available that can accommodate LTP.

It is therefore surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence.

From your comment:

The warmists have indeed gladly accounted for, and adjusted, the ‘long-term persistence’ of warming in the 1930’s and other similar past historical ‘trends’.

I go with the authors of “Nature’s Style: Naturally Trendy”: the warmists have not accounted for LTP.

mike macray
February 21, 2019 5:34 am

Wow! three cups of coffee to get through this thread! Thank you Willis for this food ( coffee?) for thought.
Leo Smith added a few more variables (unquantified):
” if tectonic plates are not colliding, erosion will tend to reduce land height and decrease sea depth as the high bits are washed away into the ocean floors.
a greening planet will show falling sea levels as more water is locked up in hydrocarbons…
are these significant?
At the risk of tossing another porkpie into the mosque.. oops correction.. adding another variable to the mix: Is the ocean floor area increasing, decreasing or constant..V/A=d–>MSL?
Cheers
Mike

tty
February 21, 2019 6:46 am

I’ll point once more to my favorite sea-level gauge: Kungsholmsfort in southern Sweden. Sweden initiated an ambitious program of sea-level measurements in 1886 and many of the sites have been continuously monitored ever since. Kungsholmsfort is a coastal fortress built on Archaean bedrock of the Baltic Shield. There are probably very few places anywhere in the world that are more stable or tectonically quiescent.
The only confounding factor is the isostatic adjustment still going on since the inland ice melted c. 15,000 years ago. This has been studied since the days of Linnaeus in the eighteenth century and is known to be very nearly linear over century-length intervals.

Completely by accident Kungsholmsfort happened to lie on the line where the isostatic rise (c. 1.5 mm/year) was equal to the absolute sea-level rise in the late nineteenth century (c. 1.5 mm/yr), hence the relative sea-level rise was zero.

Guess what? The relative sea-level rise is still zero:

[image: Kungsholmsfort relative sea-level record]

This can only be due to one of two things:
1. There is no acceleration in absolute sea-level rise.
2. The absolute sea-level rise and the isostatic rise accelerate in unison.

Jeff in Calgary
Reply to  tty
February 21, 2019 10:36 am

Very interesting. I think any rational person can discount option 2.

Can you point me to further reading on this?

Phil
Reply to  tty
February 21, 2019 4:05 pm

From Viking warrior women? Reassessing Birka chamber grave Bj.581

The Viking Age settlement of Birka on the island of Björkö in Lake Mälaren, Uppland, is well known as the first urban centre in what is now Sweden.

It is about 10 miles west of Stockholm. If you look at Figure 1 on page 3 of the pdf, it shows the Viking Age “(c. AD 750–1050)” shoreline in relation to today’s shoreline. Using Google Earth, I eyeballed the Viking Age shoreline at about 16 feet above today’s shoreline. I just thought this might be interesting, given the Chicken Little fear mongering about sea level rise.

P.S. Here is the Supplementary Information for the paper.

February 21, 2019 8:34 am

It is hard enough to measure sea level rise with tide gauges immersed in the sea. How is it possible to measure sea level from orbit? I contend that it is not.

Same with ice volume from orbit, just how are these instruments calibrated? NASA and NOAA have a lot of radical activists who have gained positions of “authority.”

Makes for lots of headlines, but not much if any good science.

tty
Reply to  Michael Moon
February 21, 2019 9:41 am

Ice volume by radar measurements is probably marginally possible. It is after all much easier than sea-level. Icecaps stay still (more or less), don’t have waves, don’t have tides, aren’t affected by winds or air-pressure and have very slow “currents”. On the other hand they are affected by GIA and variations in snow density (but then oceans are affected by GIA too).

Measuring ice volume gravitationally (GRACE) is much more dubious since the GIA uncertainty in that case is of the same order as the effect you are trying to measure.

February 21, 2019 9:35 am

As Willis has pointed out many times, statistical analyses of data can be a powerful tool in evaluating the meaning of the data. But powerful as they are, statistics cannot tell you anything about the scientific validity of the data. If you apply statistical analysis to bad data you will still have bad data. A case in point is the validity of satellite sea level measurements. The data has been so ‘adjusted’ that we have to ask whether we can trust the results. Comparison of satellite data with tide gauge data shows a discrepancy; which should we trust? An apparent acceleration of sea level when you switch from tide gauge records to satellite records sounds highly suspicious.

Frank
Reply to  Don J. Easterbrook
February 21, 2019 11:41 am

Don: You might want to express your criticisms in terms of systematic vs random error. Satellite altimetry (IMO) has high potential for systematic error and the “adjustments” you and others complain about were corrections of systematic errors in the process of calculating the altitude of a satellite’s orbit and converting the time for a radar signal bounce off the ocean and return into a distance to the surface of the ocean. There are large correction factors (up to meters) for humidity and waves/wind speed (which are calculated from re-analysis, a process that changes over the years) and for the state of the ionosphere. Orbital drift in the vertical direction from normal satellite tracking is at least 1 cm/year, so satellite altitude is being calibrated by measuring the distance to the ocean at special sites (with tide gauges and GPS monitoring of VLM).

Because satellites are measuring SLR “continuously”, the large number of data points produces a small confidence interval for SLR. However, each of the adjustments to correct for systematic error provided a new answer outside the confidence interval for the previous answer.

Jeff in Calgary
February 21, 2019 10:32 am

Now, I pointed out in my other post how it is … curious … that starting at exactly the same time as the satellite record started in 1992, the trend in the Church and White tide gauge data more than doubled.

I would love to see a blog post expounding on this topic. It truly seems to be the crux of the matter.

Frank
February 21, 2019 2:01 pm

Willis: Thanks for educating readers about how the Hurst coefficient can be used to calculate an Neffective for data with fractional Gaussian noise. I learned a lot from this and earlier posts.

You haven’t, however, addressed the question of whether fractional Gaussian noise is an appropriate statistical model for the noise in SLR data. Nor how accurately one can determine the slope of the line that gives the Hurst exponent from limited data.

A few years ago, Doug Keenan and others made a big deal with the Met Office about the fact that a random walk statistical model without a trend fit the historical global temperature record better than an AR1 statistical model with a trend. However, a random walk statistical model that included a “step size” of 1 degC/century would likely drift 10 degC over the 100 centuries of the Holocene, and is inconsistent with the expectation that the planet’s climate feedback parameter is negative (which will serve to return temperature to a steady-state mean under constant forcing). The choice of an appropriate statistical model for analyzing the noise in any data set is a difficult and tricky question. Postulating a model without discussing the alternatives isn’t appropriate, but it is all too common.

The Church record of SLR poses enormous challenges, because it is a compilation of different data sources over different periods of time. For most of the period, it is a record of coastal sea level, which gradually expanded from covering a few tide gauges to many. With the addition of satellite altimetry, it became a record of global sea level. The discontinuity at the beginning of the satellite era is a huge red flag. And the uncertainty in this record is much greater in the first half-century than the rest of the record. The Hurst coefficient works well for data with artificial fractional Gaussian noise, but is unlikely to be meaningful here. Time-dependent data inhomogeneity can produce the appearance of LTP, whether it exists or not.

For the same reason, no other statistical model is likely to be appropriate for detecting acceleration in the Church (2011) data set. The authors provide the results for a quadratic fit to their data, but given the obvious surges and pauses in the rate of SLR (Figure 8), the noise is obviously auto-correlated. The best evidence for acceleration comes from the homogeneous satellite altimetry record alone. In both cases, the calculated acceleration is consistent with the IPCC’s central estimates for sea level rise (about 0.5 m by the end of the century), not greater than 1 m. One meter by the end of the century requires an acceleration of 1 inch/decade/decade, or 0.25 mm/yr/yr. Clearly that hasn’t been happening!

Winds can blow water towards or away from various tide gauges in the coastal record. There is nowhere winds can move ocean water where it can “hide” from satellite altimetry, except perhaps polar regions with sea ice that aren’t part of the satellite record. Huge changes in local sea level (1 foot?) occur in the Equatorial Pacific during El Ninos as trade winds weaken and shift direction. The influence of the AMO can be seen in some Atlantic tide gauge records. From a mechanistic perspective, the noise in local SLR is likely to be due to changing winds (do they show LTP?), with a trend arising from ocean heat uptake, melting of ice caps, and VLM. VLM is likely to be linear with time in some places, but linear is an oversimplification for these other factors.

1sky1
Reply to  Frank
February 21, 2019 2:32 pm

You haven’t, however, addressed the question of whether fractional Gaussian noise is an appropriate statistical model for the noise in SLR data. Nor how accurately one can determine the slope of the line that gives the Hurst exponent from limited data.

When estimates of the p-value can range over 25 orders of magnitude, there’s strong prima facie indication that the entire model for SLR data, not just the noise, is way off base.

February 21, 2019 11:31 pm

Willis,
Long ago I was disenchanted by some customary mathematical ways to look at effects like autocorrelation and Long Term Persistence, so I started looking at other ways including the math of geostatistics rather than (say) conventional correlation coefficients. A few others were also looking, one references being
http://climate.indiana.edu/RobesonPubs/janis_robeson_physgeog_2004.pdf
which is a recommended paper.
Early days for me yet, but I’d love to see some of the brain power here pick up the threads and run with it.
Geoff.

Hivemind
February 22, 2019 4:00 am

Thank you for an excellent article. As always, very well thought out.

Terry D. Welander
February 22, 2019 10:40 am

Earth is 4.4 billion years old, and using standard statistical methods (there are no standard statistical methods here, and none in the article, based on my training), the minimum time increment for 4.4 billion years is around 100,000 years. In other words, anything less than 100,000 years is just weather. There are some ice cores over 100,000 years old, but not enough. The only way to find climate change for Earth is to date undisturbed rocks from specific locations and measure the amount of gases in those rocks (nitrogen, oxygen, carbon dioxide, and miscellaneous gases) for comparison over the 4.4-billion-year life of Earth. This is a geologist’s inquiry; everyone else is wasting their time talking about weather only, because talking about anything less than 100,000 years is not useful with climate change.

Reply to  Terry D. Welander
February 22, 2019 1:08 pm

PLUS TEN!!
It amazes me that if you look at a graph of the global temperature from the Carboniferous period to today the average temperature is around 18 to 20 degrees C. and today and for the last several thousand years we are around 13 to 15 degrees C. Thus, we are presently in an exceptionally COLD period of weather. Further, for about 200 million years of that 600 million years the CO2 level was 10 to 20 times greater than today and the global temperature leveled off at 25 degrees C. That is a rather comfortable temperature which should be great for providing an excess of plant foods for the animal/human population of the globe. Yet we are worried about disastrous irreversible climate change. That just does not compute and is illogical. What prevented the temperature from exceeding 25 degrees C for 600 million years? WHAT?