Sea Level and Effective N

Guest Post by Willis Eschenbach

Over in the Tweeterverse, I said I wasn’t a denier, and I challenged folks to point out what they think I deny. Brandon R. Gates took up the challenge by claiming that I denied that sea level rise is accelerating. I replied:

Brandon, IF such acceleration exists it is meaninglessly tiny. I can’t find any statistically significant evidence that it is real. HOWEVER, I don’t “deny” a damn thing. I just disagree about the statistics.

Brandon replied:

> IF such acceleration exists

It’s there, Willis. And you’ve been shown.

> it is meaninglessly tiny

When you have a better model for how climate works, then you can talk to me about relative magnitudes of effects.

[As a digression, I’m one of the few folks with a better model, supported by a number of observations, for how the climate works. I say that the long-term global temperature is regulated by emergent phenomena to within a very narrow range (e.g. ± 0.3°C over the entire 20th Century). And I have no clue what that has to do with whether or not I can talk to him about “relative magnitude of effects”.

But as I said … I digress … ]

Brandon accompanied his unsupported claims with the following graph of air temperature, not sure why … my guess is that he grabbed it in haste and mistook it for sea level. Easy enough to do, I’ve done worse.

Now, I’ve written recently about sea level rise in a post called “Inside The Acceleration Factory”. However, there is a deeper problem with the claims about sea levels. This is that the sea level data is very highly autocorrelated.

“Autocorrelated” in respect to a time series, like say sea level or temperature, means the present is correlated with the past. In other words, autocorrelation means that hot days are more likely to be followed by hot days, and cold days to be followed by cold days, than hot days following cold or vice versa. And the same is true of hot and cold months, or hot and cold years. When such autocorrelation extends over long time periods, years or decades, it is often called “long term persistence”, or “LTP”.
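For readers who like to see the idea in code rather than words, lag-1 autocorrelation is simply the correlation between a series and a copy of itself shifted by one time step. Here is a minimal sketch in Python; the two example series are made up purely for illustration and have nothing to do with the sea level data discussed below.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Correlation between a series and itself shifted by one time step."""
    x = np.asarray(x, dtype=float)
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

# Illustrative only: a persistent series (each value close to the previous one)
# shows lag-1 autocorrelation near 1, while white noise sits near 0.
rng = np.random.default_rng(0)
persistent = np.cumsum(rng.standard_normal(240))   # random walk, highly persistent
white_noise = rng.standard_normal(240)
print(lag1_autocorrelation(persistent))    # close to 1
print(lag1_autocorrelation(white_noise))   # close to 0
```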

And trends are very common among datasets that exhibit LTP. Another way of putting this is expressed in the title of a 2005 article in Geophysical Research Letters called “Nature’s Style: Naturally Trendy”. The title is quite accurate. Natural datasets tend to contain trends of various lengths and strengths, due to the existence of long term persistence (LTP).

And this long-term persistence (LTP) brings up large problems when you are trying to determine whether the trend of a given climate time series is statistically significant or not. To elucidate this, let me discuss the Abstract of “Naturally Trendy”. I’ll put their abstract in bold italics. It starts as follows:

Hydroclimatological time series often exhibit trends.

True. Time series of river flow, rainfall, temperature, and the like have trends.

While trend magnitude can be determined with little ambiguity, the corresponding statistical significance, sometimes cited to bolster scientific and political argument, is less certain because significance depends critically on the null hypothesis which in turn reflects subjective notions about what one expects to see.

Let me break that down a bit. Over any given time interval, every weather-related time series, whether it is temperature, rainfall, or any other variable, is in one of two states.

Going up, or

Going down.

So the relevant question for a given weather dataset is never “is there a trend”. There is, and we can measure the size of the trend.

Here’s the relevant question: is a given trend an UNUSUAL trend, or is it just a natural fluctuation?

Now, we humanoids have invented an entire branch of math called “statistics” to answer this very question. We’re gamblers, and we want to know the odds.

It turns out, however, that the question of an unusual trend is slightly more complicated. The real question is, is the trend UNUSUAL compared to what?

Plain old bog-standard statistical mathematics answers the following question—is the trend UNUSUAL compared to totally random data? And that is a very useful question. It is also very accurate for truly random things like throwing dice. If I pick up a cup containing ten dice, and I turn it over and I get ten threes, I’ll bet big money that the dice are loaded.

HOWEVER, and it’s a big however, what about when the question is, is a given trend unusual, not compared to a random time series, but compared to random autocorrelated time series? And particularly, is a given trend unusual compared to a time series with long-term persistence (LTP)? Their Abstract continues:

We consider statistical trend tests of hydroclimatological data in the presence of long-term persistence (LTP).

They apply a variety of trend tests to random datasets which exhibit LTP, to see how well the tests perform.

Monte Carlo experiments employing FARIMA models indicate that trend tests which fail to consider LTP greatly overstate the statistical significance of observed trends when LTP is present.

In simplest terms, regular statistical tests that don’t consider LTP falsely indicate significant trends when the trends are in fact just natural variations. Or to quote from the body of the paper,

More important, as Mandelbrot and Wallis [1969b, pp. 230 –231] observed, ‘‘[a] perceptually striking characteristic of fractional noises is that their sample functions exhibit an astonishing wealth of ‘features’ of every kind, including trends and cyclic swings of various frequencies.’’ It is easy to imagine that LTP could be mistaken for trend.

This is a very important observation. “Fractional noise”, meaning noise with LTP, contains a variety of trends and cycles which are natural and inherent in the noise. But these trends and cycles don’t mean anything. They appear, have a duration, and disappear. They are not fixed cycles or permanent trends. They are a result of the LTP, and are not externally driven. Nor are they diagnostic—the presence of what appears to be a twenty-year cycle cannot be assumed to be a constant feature of the data, nor can it be used as a means to predict the future. It may just be part of Mandelbrot’s “astonishing wealth of features”.
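Before moving on, here is a rough numerical sketch of the kind of Monte Carlo experiment the paper describes. It uses a strongly persistent AR(1) process as a simple stand-in for LTP (the paper uses FARIMA models, which represent long-term persistence more faithfully), fits an ordinary least-squares trend to each trendless simulated series, and counts how often the naive test declares the trend “significant”. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_series = 1000   # number of simulated trendless series
n_points = 600    # length of each series, e.g. months
phi = 0.95        # strong short-term persistence (AR(1) stand-in for LTP)

t = np.arange(n_points)
false_positives = 0

for _ in range(n_series):
    # Build a trendless AR(1) series: x[i] = phi * x[i-1] + noise
    noise = rng.standard_normal(n_points)
    x = np.zeros(n_points)
    for i in range(1, n_points):
        x[i] = phi * x[i - 1] + noise[i]

    # Naive OLS trend test that ignores the persistence entirely
    if stats.linregress(t, x).pvalue < 0.05:
        false_positives += 1

# With independent data we would expect about 5% false positives;
# with strong persistence the naive test flags far more than that.
print(f"{100 * false_positives / n_series:.0f}% of trendless series called 'significant'")
```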

The most common way to deal with the issue of LTP is to use what is called an “effective N”. In statistics, “N” represents the number of data points. So if we have say ten years of monthly data, that’s 120 months, so N equals 120. In general, the more data points you have, the stronger the statistical conclusions … but when there is LTP the tests “greatly overstate the statistical significance”. And by “greatly”, as the paper points out, using regular statistical methods can easily overstate significance by some 25 orders of magnitude.

A common way to fix that problem is to calculate the significance as though there were actually a much smaller number of data points, a smaller “effective N”. That makes the regular statistical tests work again.

Now, I use the method of Koutsoyiannis to determine the “effective N”, for a few reasons.

First, it is mathematically derivable from known principles.

Next, it depends on the exact measured persistence characteristics, both long and short term, of the dataset being analyzed.

Next, as discussed in the link just above, I independently discovered and tested the method in my own research, only to find out that …

… the method actually was first described by Demetris Koutsoyiannis, a scientist for whom I’ve always had the greatest respect. He’s cited several times in the “Naturally Trendy” paper. So I was stoked when he commented on my post that he was the originator of the method, because that meant I actually did understand the subject.

With all of that as prologue, let me return to the question of sea level rise. There are a few reconstructions of sea level rise. The main ones are by Jevrejeva, and by Church and White, and also the satellite TOPEX/JASON data. Here’s a graph from the previous post mentioned above, showing the Church and White tide station data.

Now, I pointed out in my other post how it is … curious … that starting at exactly the same time as the satellite record started in 1992, the trend in the Church and White tide gauge data more than doubled.

And while that change in trend is worrisome in and of itself, there’s a deeper problem. The aforementioned “effective N” is a function of what is called the “Hurst Exponent”. The Hurst Exponent is a number between zero and one that indicates the amount of long-term persistence. A value of one-half means no long-term persistence. Hurst Exponents between zero and one half indicate anti-persistence (hot more likely followed by cold, etc.), and values above one half indicate the existence of long-term persistence (hot followed by hot, etc.). The nearer the Hurst Exponent is to one, the more LTP the dataset exhibits.

And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.
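The head post uses the method of Koutsoyiannis to estimate the Hurst Exponent, and I make no attempt to reproduce that calculation here. As a rough illustration of what such an estimate involves, below is a minimal sketch of the classic rescaled-range (R/S) estimator. It is a different and simpler estimator, so it should not be expected to return exactly 0.93 for the Church and White data; values near 0.5 indicate no persistence, and values near 1 indicate strong persistence.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Rough rescaled-range (R/S) estimate of the Hurst exponent of a series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # A spread of window sizes between min_window and half the series length
    windows = np.unique(np.logspace(np.log10(min_window), np.log10(n // 2), 20).astype(int))
    log_w, log_rs = [], []
    for w in windows:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            z = np.cumsum(seg - seg.mean())   # cumulative deviations from the segment mean
            r = z.max() - z.min()             # range of the cumulative deviations
            s = seg.std(ddof=1)               # segment standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    # The Hurst exponent is the slope of log(R/S) against log(window size)
    return float(np.polyfit(log_w, log_rs, 1)[0])
```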

And what is the effective N, the effective number of data points, of the Church and White data? Let’s start with “N”, which is the actual number of data points (months in this case). In the C&W sea level data the number of datapoints N is 1608 months of data.

Next, effective N (usually indicated as “Neff”) is equal to:

N, number of datapoints, to the power of ( 2 * (1 – Hurst Exponent) )

And 2 * (1-Hurst Exponent) is 0.137. So:

Effective N “Neff” = N ^ (2 * (1 – Hurst Exponent))

= 1608 ^ 0.137

= 2.74

In other words, the Church and White data has so much long-term persistence that effectively, it acts like there are only three data points.
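In code, the arithmetic is a one-liner. The sketch below simply restates the formula above, using the exponent of 0.137 quoted in the post:

```python
# Effective sample size for an autocorrelated series:
#   Neff = N ** (2 * (1 - H)), where H is the Hurst Exponent.
N = 1608            # months of Church and White data
exponent = 0.137    # 2 * (1 - H), as given above
n_eff = N ** exponent
print(round(n_eff, 2))   # roughly 2.7, i.e. about three effective data points
```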

Now, are those three data points enough to establish the existence of a trend in the sea level data? Well, almost, but not quite. With an effective N of three, the p-value of the trend in the Church and White data is 0.07. This just misses the threshold that the climate sciences usually treat as statistically significant, namely a p-value less than 0.05. If the effective N were four instead of three, the trend would indeed be statistically significant at a p-value less than 0.05.
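For readers who want to see how an effective N feeds into a significance test, here is one common style of adjustment, sketched under my own assumptions rather than as the exact calculation behind the p-value of 0.07 quoted above: keep the ordinary least-squares slope, but inflate its standard error and reduce the degrees of freedom to reflect the smaller effective sample size.

```python
import numpy as np
from scipy import stats

def trend_pvalue_with_neff(y, n_eff):
    """Two-sided p-value for a linear trend in y, using an effective sample size.

    A generic adjustment for autocorrelated data: the OLS slope is unchanged,
    but its standard error is scaled up and the t-test uses n_eff - 2 degrees
    of freedom instead of N - 2. Requires n_eff > 2.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    t_axis = np.arange(n, dtype=float)

    slope, intercept = np.polyfit(t_axis, y, 1)
    resid = y - (slope * t_axis + intercept)

    # Ordinary least-squares standard error of the slope ...
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((t_axis - t_axis.mean()) ** 2))
    # ... inflated to reflect the reduced effective sample size
    se_adj = se * np.sqrt((n - 2) / (n_eff - 2))

    t_stat = slope / se_adj
    return 2 * stats.t.sf(abs(t_stat), df=n_eff - 2)
```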

 

However, if you only have three data points, that’s not enough to even look to see if the results are improved by adding an acceleration term to the equation. The problem is that with an additional variable, that’s three tunable parameters for the least squares calculation and only three data points. That means there are zero degrees of freedom … no workee.

So … do I “deny” that sea levels are accelerating in some significant manner?

Heck, no. I deny nothing. Instead, I say we don’t have the data we’d need to determine if sea level is accelerating.

Is there a solution to the problem of LTP in datasets? Well, yes and no. There are indeed solutions, but the climate science community seems determined to ignore them. I can only assume that this is because many claims of significant results would have to be retracted if the statistical significance were to be calculated correctly. Once again, from the paper “Nature’s Style: Naturally Trendy”:

In any case, powerful trend tests are available that can accommodate LTP.

It is therefore surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence.

Surprising indeed, particularly bearing in mind that the “Naturally Trendy” paper was published 14 years ago … and the situation has not gotten better since then. LTP is still rarely accounted for properly.

To return to the paper, the authors say:

These findings have implications for both science and public policy.

For example, with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes.

I’m sure that you can see the problems that such statistical honesty would cause for far too much of mainstream climate science …

Best regards to all, including to Brandon R. Gates, whose claim inspired this post,

w.

As Usual: I ask that when you comment please quote the exact words you are discussing, so that we can all understand both who and what you are referring to.


148 thoughts on “Sea Level and Effective N”

    • Thanks, Nicholas. The problem is that all we can measure from the shore is relative sea level, which depends on which way the land is moving. In some places relative sea level is dropping rapidly, because the land is still springing back upward, ever so slowly, after the mile-thick ice that sat on it during the ice ages melted. Here are some typical records:

      As a result, we can’t say much about the globe from a single record, whether it is in Sydney or San Francisco.

      Regards,

      w.

      • The relative sea level is all that counts – unless you are interested in the length of day, which might increase very little if there is more water in oceans. The issue is that it is highly location-dependent, because the land may be locally rising or sinking.

        • I second that, and I will add that the phenomena that cause relative sea level change also affect eustatic sea level. There is much more to global sea level than the cryosphere.

      • The Sydney sandstone is geologically (relatively) stable.

        “The stability of the Australian continent, with limited volcanic activity for many millions of years, and the relatively small amount of seismic activity is the result of Australia being situated in the centre of its tectonic plate, well away from the active regions along its margins, particularly in New Guinea ”
        https://austhrutime.com/australias_stability.htm

        It could be rising or falling, there is certainly no isostatic rebound, but the Sydney land basin seems static and to me seems to be an excellent place to study sea level.

          • Yes, moving north at the rate fingernails grow, they say 5 to 7 cm per year. Any mountain range uplift (Great Dividing Range) occurred around 40 Ma. Earthquakes are rare, usually deep with no fissuring.

            If satellites show movement, I would like to see this.

          • Usurbrain – February 20, 2019 at 3:41 pm

            However, Australia is moving northward, that plate tectonics thing. How does that affect sea level?

            And hundreds of volcanoes have been depositing solid particulate into the ocean basin each and every year for the past 2K years. How has that affected sea levels?

            And tens of thousands of rivers have been depositing solid “erosion” particulate into the ocean basin each and every year for the past 2K years. How has that affected sea levels?

            And tens of thousands of once-deep river channels have been filling up with solid particulate, causing a horrendous decrease in the “water impoundment capacity” of the rivers, with the overflow now being collected by the ocean basin each and every year for the past 2K years. How has that affected sea levels?

        • There is hydrostatic rebound of a bit over a meter at Sydney which ended about 4k years ago.
          Blog science usually ignores this as well as the use of the word eustatic.

        • Hmmmm. I should ask whether you would call the magnitude 5.6 earthquake in Newcastle in 1989 (which killed people), or Meckering 1968 (6.5), or Ellalong 1994 (5.4), or Tennant Creek (6.6), or Kalgoorlie-Boulder 2010 (5.0), or Cadoux 1979 (6.1), commensurate with the term “small amounts of seismic activity”? The North Sea is relatively becalmed by comparison, yet it is looked on as seismically active. There are massive earthquakes going off in Archean terrains in Oz. Take a look at the Australian stress map and the stress field is radial. The north coast of Oz is exhibiting a higher rate of sea level change compared to the rest of Oz. My point? There are things going on which do not even come close to being answered by the imperfect concept of plate tectonics, let alone the idea of AGW, which requires religious belief.

      • Willis says:
        “As a result, we can’t say much about the globe from a single record, whether it is in Sydney or San Francisco.”

        I disagree completely. The post is framed as an issue with the acceleration of SLR as shown.
        From the post:
        “Brandon R. Gates took up the challenge by claiming that I denied that sea level rise is accelerating.”

        The issue is whether the SLR trend is accelerating, not what any given trend is at any given location. If there *is* acceleration in overall SLR, you would have to see it in all the tide gauges. If you want to get fussy, perhaps the acceleration might not be the same at all places at all times, but it would *have* to be there. For any and all locations, the trend is just a constant rate, which becomes a fixed background. The acceleration would be seen as an increase in the slope of that background. This must be true due to mass balance considerations. If the water volume of the ocean basins is increasing at an accelerating rate, due to glacier and ice cap melt, the water level must accelerate up. Maybe it will not accelerate up uniformly at all places at all times, but it cannot go down.
        I know Willis and many others have done tons of tide gauges and the results are always the same. Linear, linear, linear.
        {My favorites are Portland Me, Boston, and the Battery NYC.}

        In short, any one gauge should show acceleration if it was happening, as would they all.
        {Everybody please spare us any comments about gauges affected by water or resource extraction. The issue is tough enough without throwing out the red herring of poorly sited gauges.}

        What I do not understand is why tide gauges get such short shrift in the acceleration debate, especially given the obvious manipulations the Sat. data has been put through.

        • Not necessarily. Local tide gauges are also affected by land subsidence and uplift which is why they differ so much all over the world. If local subsidence or uplift accelerated it would mask the “global” acceleration (if there were any). See this NOAA interactive map of global tide gauges and trends:

          https://tidesandcurrents.noaa.gov/sltrends/

          • Once again, this time with feeling!
            Geologic processes up or down, are *constant* over any time frame we can observe. That is hundreds of years, if not thousands of years.

            You said:
            “If local subsidence or uplift accelerated”
            That is a really big IF. There is no evidence that is happening anywhere. Indeed, the huge number of gauges that are linear, linear, linear, over a century or more shows this effect is not happening at all.
            Any SLR due to CAGW will be seen as an increase over and above that background. And this is certainly the argument advanced by the alarmists.

            As far as the rest of it goes:
            Trinidad and Tobago have massive subsidence in some areas due to commercial oil production. Affected land owners are all up in arms as their properties flood.
            Let’s put a tide gauge there!!! It will show catastrophic Sea Level Rise due to CAGW, *FOR SURE*!

          • He’s saying that there should be a curl on every tide gauge. Even if the trend is down, there should be an uptick to some degree, on every gauge.

            The upward or downward movement of the crust will be uniform over epoch scales. If there has been any change in sea level in the last 100 years, it should be noticeable on every gauge.

          • Um, question:

            If the land somewhere is subsiding, wouldn’t some of the local sea floor be subsiding as well? Assuming so, couldn’t that be providing additional room for the water, effectively eliminating any so-called apparent local “rise”?

            In other words, wouldn’t enough “sinking” land counter balance somewhat some increase in the sea level rise?

            Just wondering.

          • “Geologic processes up or down, are *constant* over any time frame we can observe. That is hundreds of years, if not thousands of years.”

            100% correctomondo……

            Funny that the tide gauge in Key West…..that the Air Force, Navy, Army, COE, Derm…on and on…say is absolutely not moving up down or sideways…..shows zero acceleration..and perfectly consistent SLR since 1913 of 0.8 ft a century…flat as a flitter and no acceleration

            ..and the tide gauge in Pensacola…that the AF, Navy, Army, COE, DERM, etc…that sits on land that is sinking…the entire Gulf Coast…..shows exactly the same SLR since 1923….0.8 ft a century…and no acceleration

            When someone points out that “land is moving”…show them this…it may be moving….but it’s moving so slowly it doesn’t matter

            …and any acceleration has to be artifact of satellites….cause gauges don’t show it

          • Geologic processes up-and-down may be “constant”…subsidence due to groundwater withdrawal is not.

          • Tony is 100% correct. There is a science that intensely studies sea levels, it’s called sequence stratigraphy.

            Geologically, when all coast lines show either an advancement or retreat of the oceans, this is called global sea level change. When coastlines show a mix of advance/retreat as well as no change at all, that is a standstill in sea level. That is where we are now and have been for the past 8,000 years.

            This is relevant because at standstill, it’s the local factors that dictate the coastline, not the global change. And like Tony said, there has been no global change in any rates in the tide gauges, so not only is there no global sea level rise, there is no indication that fact is changing.

            If there is global sea level change, you will know it.

          • “Geologic processes up or down, are *constant* over any time frame we can observe. That is hundreds of years, if not thousands of years.”

            Ummm…once again, not necessarily. Subsidence can indeed accelerate, sometimes dramatically. As Ernest Hemingway observed about the 2 ways of going bankrupt, “gradually and then suddenly”, so it is with subsidence. Sinkholes are an extreme case of sudden subsidence. There are many places around the world with tide gauges where subsidence is accelerating, typically around river deltas.

            Compare the measurements at Grand Isle, Louisiana:

            https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?id=8761724

            and Montauk, New York (which have almost the same record length so you can eyeball it better):

            https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?id=8510560

            There are very significant differences not only in overall trend but short-term trends as well. Look at 1949 to 1960. Or 1990 to 2000.

            QED

          • There’s something else which has not been fully answered or considered so far: how many and what exactly are the factors that affect apparent sea level, either locally or globally?

        • Correct. Taking the derivative of sea level movement with respect to time removes any constant rise from the final result,

          allowing short-term changes to be isolated from steady long-term ones.

      • Check the NOAA data from all available coastal gauges in place, in some cases since the late 1800s or early 1900s, around the US and in the Pacific Basin (Hawaii, Guam, Marshalls, Kwajalein), and the result is the same…No SLR Acceleration. An average variation of around 1.5 – 3.5 mm a year, and steady pretty much in each area where the data are collected. The most remarkable, consistent difference I’ve seen is 1.5 mm per year in Honolulu vs. 3.5 mm per year in Hilo, located not that far apart. Sorry, but the coastal gauges in tectonically stable areas where the land is neither rising nor sinking (due to pumping or building on ‘dredge and fill’ in coastal wetland areas, always a dumb idea) are the most accurate measurements, and they do not show accelerating SLR.

        • Fred, could that slr difference be due to the difference in age of Oahu and Hawaii?

          Ordinarily, inactive volcanic islands begin to erode and sink back in the ocean over time IIRC.

      • Willis,
        We can say that the Australian continent has escaped ice coverage that saw sheets more than a km thick over large parts of northern America and northern Europe. Geological mapping is easily good enough to be sure of this. Most if not all of the sites where you showed diverse sea level trends were affected by ice retreat. There are various rates of rebound. They do not apply to Sydney, which therefore must be regarded as a more representative case, accurate because of lack of rebound. (It is a little more complicated than these broad assertions, but they are OK for the purpose.)
        Geoff.

      • It is worth referring people to John Daly’s web site ‘Still waiting for greenhouse”:
        https://www.john-daly.com/

        The home page shows an image of a tide benchmark in Tasmania struck into the rocky shore in 1841. It shows no evidence of sea level change in a tectonically stable place. It is captioned thusly:

        “The 1841 sea level benchmark (centre) on the `Isle of the Dead’, Tasmania. According to Antarctic explorer, Capt. Sir James Clark Ross, it marked mean sea level in 1841. Photo taken at low tide 20 Jan 2004. Mark is 50 cm across; tidal range is less than a metre. © John L. Daly.”

      • Willis, you know your Church and White data end at a huge El Nino, right? 2013…and sea level has fallen since then

      • One thing we can say from a single record is whether or not there is a noticeable inflexion point from 1992 onwards. This inflexion point shown by the Church and White study is mysteriously missing from the tide gauges.

        White Church is just another case of Wong Religion

      • Flying from London, England to Auckland, New Zealand takes (plus or minus) 24 hours flying time, and on a direct flight you do not pass over land; the water is in places 7-8 km deep!
        Greens, yep, it means the brain is moldy! Weather (storms), earthquakes, volcanoes, water temperatures, tide movement, gravity do not count, as our green mill-pond oceans reflect idiotology.
        The North Sea Brent oilfield platforms were designed with an “AIR GAP” for the 100 year wave. In the first year Shell Brent Delta had the 100 year wave 9 times.

      • Hi Willis, this frustrates the bejesus out of me … a case in point yesterday, with the PC BBC grandstanding about the climate-change extinction of a rat on a tiny island just offshore of the north of Australia. Claiming, well, I do not know what they are claiming, because totally contentious claims are made about AGW as if it has been proven a thousand times over, down to the last 25 decimal places. The whole of the northern coast of Australia exhibits sea level rise out of kilter with the rest of Australia, YET this is not mentioned, and following on from it the glaring plate tectonics going on to the north is not mentioned. Removing any “local” tectonic or man-made (ground water extraction) effect before even considering summing sea levels year on year is a must. Just look at the isostatic readjustment going on in Northern Europe after the ice was removed. Basic science is being willfully ignored or abused. The fact this screening is not being done before wild claims are made suggests what the agenda really is … it is certainly not good science. Indeed, if the method is rubbish then the outcome of your work is preordained.

    • NWT
      Among other things, the force of gravity is not uniform over the globe. That means water tends to pile up where gravity is stronger, and it takes water away from where gravity is weaker. The extent to which it piles up in one place compared to another is proportional to the relative strength of the gravity field. So, you can’t expect the measurements in one place to be representative of oceans over the entire globe.

      • And, “For the last few billion years the Moon’s gravity has been raising tides in Earth’s oceans which the fast spinning Earth attempts to drag ahead of the sluggishly orbiting Moon. The result is that the Moon is being pushed away from Earth by 1.6 inches (4 centimeters) per year and our planet’s rotation is slowing.” How does that affect sea level?

        • It means the gravitational pull of the moon is decreasing as 1/distance squared, reducing tidal fluctuations.

      • Clyde, it’s a twofer

        Gravity increases greatly over seafloor volcanoes and mountains..that sets the sea level on top of them……it also exerts a pull on satellites….doesn’t have to be much because they are trying to measure something very small…but gravity keeps them in orbit….and any change in gravity has an effect….increase it and they dip

        ..that constant positive anomaly in Indonesia

      • Another thing: if tectonic plates are not colliding, erosion will tend to reduce land height and decrease sea depth as the high bits are washed away onto the ocean floors.

        A greening planet will show falling sea levels as more water is locked up in hydrocarbons…

        Are these significant?

      • Since the earth is not spherical, the relative level of the ocean compared to the center of the earth is not constant over the globe. Gravitational effects are at the 0.1% level in sea level relative to other areas, based on a list of local g for 30 major cities on or near the coastline. As the local g is quite constant over our time scale, local sea level height is much more closely tied to subsidence or rebound. Computing the total volume of the oceans is another question entirely, with many unquantified variables.

      • RickWill,
        Some places, like Fremantle, the sea port for Perth, have been studied in detail, and in this case the discrepancy is plausibly explained by fresh water extraction from the sandy depths below Perth and Fremantle. In detail, each site has a complex history that makes it hard to generalise.
        But, irrespective of the trend that is measured today, any acceleration of sea level change would show as a non-linear effect. Time after time, tide gauge measurements show up as straight lines for many decades. No acceleration.
        This is all by-the-way. The important part of the Willis analysis above is the suitability of conventional statistical methods for understanding tide gauge responses. I’d reckon that more emphasis on the consequences of autocorrelation would lead to revision of the conclusions of a number of climate papers in the next few years. (If relevant climate authors have heard of ‘revision’.) Geoff.

    • I recall reading about a mean sea level mark etched in stone on the island of Tasmania more than 100 years ago that is now about 30 centimeters above mean sea level. Can’t remember the name of the fellow (now deceased) who made it a feature of his website.

  1. Willis, let me get this straight. Over the 135-year period that Church and White plot, the sea level variation was about 220 mm, or 22 cm, or less than one (1) foot. We geologists are used to looking at sea level changes, and their resultant record displayed in the geologic record, of (I’m generalizing) 50 meters higher and 150 meters lower sea level as being normal. It looks like the sea level went up around 100 meters following the end of the last glacial phase of the Ice Age we are living in. One (1) foot of change? Does not register on any reality basis. You want acceleration? Get a Corvette!

      • And sea level rise has been estimated to have been as high as 60 mm/ year during the melt water pulses. Low resolution could mean that individual years could have had much higher sea level rise than that. 3 mm/year? That’s what geologists call still stand, as sedimentation and isostasy overwhelm that rate of eustatic change and global coastlines subsequently transgress or regress based on local factors alone.

    • Supporters of the meme ALWAYS interject in discussions about records that show, say, nearly all the warming we have experienced up to 2000 took place before 1940: “Oh, that was because of station moves”. When I compared the Cape Town, S. Af. raw temperature record to that of the USA (and Canada, and Greenland, and Russia, and Paraguay, and Bolivia…; check out Paul Homewood’s excellent blog, “Not a Lot of People Know That”…), I was told the “station moves” chestnut.

      Having become a Missourian in my thinking in this amoral age (https://idioms.thefreedictionary.com/I%27m+from+Missouri)
      and hearing this too often, I think it possible that ‘moving stations’ is mainly just another tool in the Adjustocene tool box. Station stubbornly refusing to warm? Move it out to the Jumbojet inferno.

  2. You also have to look at where these readings are being taken. Land subsides and rises regularly depending on where it is. The only way that you could take ACCURATE data about sea level change would be from above the surface, in a way that is not affected by the above factors.

  3. Would the use of the original method of sea level calculation not have been a better way to determine this notional acceleration when satellites were adopted?

    The sats stuff looks impressive, but actually what they did was mischievous, or am I missing something?

  4. NASA helpfully shows two graphs of sea level rise; one from satellite data, the other from tide gauge data.

    https://climate.nasa.gov/vital-signs/sea-level/

    The upper one (satellite from 1993 to present) is labeled prominently with a trend of 3.2 mm/yr.

    The lower one (tide gauge from 1870 to 2013) is suspiciously not labeled with a trend, but it’s not too hard to estimate; roughly 225 mm over 130 years, trend of approximately 1.7 mm/yr.

    Here’s the important part. They both measure sea level rise between 1993 and 2013 and nowhere in the 20 years the data overlaps is there a sudden jump in sea level rise. But during that 20 years the trend measured by satellite is DOUBLE the rate measured by tide gauges over the same time period. Which means that the accuracy of the measurements of one (likely both) is off. By quite a bit.

    That also means that the so-called “acceleration” in the last couple decades when splicing satellite data on the last 25 years of the tide gauge record—data torture similar to Michael Mann’s statistical malfeasance on his infamous “hockey stick” temperature graph—is simply an artifact of the measurement method and the splicing of different data together.

    • Exactly. There is no detectable acceleration in 25+ years of satellite data. Therefore the acceleration, if it exists, is very, very small.

      I am on record over a year ago saying that a deceleration in sea level rise should be expected for the next couple of decades. So far so good, the sea level increase since October 2015 has been below average.
      https://i.imgur.com/dewkwJl.png

    • The lower one (tide gauge from 1870 to 2013…

      They are posting current tide gauge data…now why would they stop at 2013?…El Nino

      ..if you continue the graph 6 more years you lose that slight up curve

    • It is my understanding that tide gauge measurements have continued, and they fail to show the higher rate of sea rise that is reflected in the satellite measurements. Until that mystery is explained, I don’t see how any conclusions can be drawn from either dataset.

      • There’s no sign of acceleration on tide gauges WORLDWIDE. Here’s thumbnails of 375 of them, linking to NOAA’s data.

        http://www.sealevel.info/MSL_global_thumbnails5.html

        The tide gauges didn’t suddenly ALL simultaneously start under-reporting in 1992, so what conclusion does that leave? It’s obvious to anyone who cares to look that the SATELLITE data is WRONG.

  5. “Autocorrelated” in respect to a time series, like say sea level or temperature, means the present is correlated with the past. In other words, autocorrelation means that hot days are more likely to be followed by hot days, and cold days to be followed by cold days, than hot days following cold or vice versa. And the same is true of hot and cold months, or hot and cold years. When such autocorrelation extends over long time periods, years or decades, it is often called “long term persistence”, or “LTP”.

    No.

    Autocorrelation just means there is a repeated pattern.
    cold-hot-hot-cold-hot-cold-hot-hot-cold-hot-cold-hot-hot-cold-hot …
    So, there’s a repeated pattern. We could also write it as:
    -1, 1, 1,-1, 1,-1, 1, 1,-1, 1,-1, 1, 1,-1, 1, …
    To test for autocorrelation, we multiply the series by a time shifted version of itself.
    -1, 1, 1,-1, 1,-1, 1, 1,-1, 1,-1, 1, 1,-1, 1, … original
    -1, 1, 1,-1, 1,-1, 1, 1,-1, 1,-1, 1, 1,-1, 1, … time shifted
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 product

    If the products for all of the terms are positive, we have an autocorrelation.
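    In code, the check described above amounts to looking at the average product of a mean-removed series with a lagged copy of itself; a positive value at some lag indicates positive autocorrelation at that lag. A minimal illustrative sketch (mine, not the commenter’s code):

```python
import numpy as np

def lagged_product_mean(x, lag):
    """Average product of a mean-removed series with a copy of itself shifted by `lag`."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.mean(x[:-lag] * x[lag:]))

# The repeating pattern from the example above: -1, 1, 1, -1, 1 repeated three times
pattern = [-1, 1, 1, -1, 1] * 3
print(lagged_product_mean(pattern, lag=5))   # positive: the pattern repeats every 5 steps
print(lagged_product_mean(pattern, lag=1))   # negative at a non-matching lag
```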

      • From page two of the link you provided:

        Autocorrelation is very useful in activities where there are repeated signals. Autocorrelation is used to determine the periodic signals that get obscured beneath the noise, or it is used to realize the fundamental frequency of a signal that does not have that frequency as a component but applies it there along with many harmonic frequencies. page 2

        My example is the simplest, most concrete one I could think of. It’s not wrong though. My reading of Willis’ statement is that he’s flat out wrong or he expressed himself poorly.

    • Thanks, commieBob. In general, if there is long-term persistence there is also high lag-1 autocorrelation. This is what I was describing.

      Next, it’s always a judgment call as to how to explain math to a lay audience. There are entire tomes written on autocorrelation and LTP. I picked an explanation that I thought people could understand. You might pick another.

      My general rule is that each numeral in my post loses me one reader. So I work to keep them to a minimum. I’d never use your explanation, despite the fact that it is correct. Too many numbers; people’s eyes would glaze over and they’d stop reading.

      Best regards,

      w.

          • George: “Of two correct explanations available”…..

            How on earth is it possible for Willis to choose one that is not correct?

            Can’t wait to see how you explain this.

          • But does it matter if you are incorrect and nobody is listening?

            Bottom line is that in Eschenbach’s world, correct/incorrect is not important; what is important is whether someone/anyone is listening.

        • A more understandable correct explanation is superior to a less understandable correct explanation: respect for the audience you are addressing. Now who would struggle to find merit in such thinking?

        • Sketch: 1) Why would you want to communicate something to people in a way that would turn them away from reading it?
          2) Effectively communicating something complex to lay people is a skill not widely possessed, but you do have to leave out much of the math.
          3) Why do you think alarmists have hired Madison Avenue advertising agencies to teach them how to BS people? Because they think they have a communication problem, when really the stuff they are trying to communicate is puff science by minor talents who misuse statistics. C’mon, you were shocked about the revelations from Climategate yourself.

        • Everyone makes mistakes. Me, you, Anthony, Willis …

          1 – The beauty of this forum is that mistakes can be addressed quickly.
          2 – The perfect is the enemy of the good. link
          3 – Even work that contains major errors (which is not the case here as far as I can tell) may still be highly valuable because of the information and insights it provides, even if you disagree with its ultimate conclusion.
          4 – If you want perfection, you have to die and hope you get into heaven.
          5 – Willis’ contributions here are huge and thought provoking.

      • Willis, if you want to hold reader’s attention, I suggest you put pictures of naked women in your posts.

      • Keith Sketchley February 20, 2019 at 5:26 pm

        Eschenbach admits that holding readers attention is more important than being correct.

        Say what? This is why I insist that people quote the exact words they are discussing. I did not say that, nor did I say anything even remotely resembling that.

        What I said was:

        Thanks, commieBob. In general, if there is long-term persistence there is also high lag-1 autocorrelation. This is what I was describing.

        Next, it’s always a judgment call as to how to explain math to a lay audience. There are entire tomes written on autocorrelation and LTP. I picked an explanation that I thought people could understand. You might pick another.

        My explanation was correct, but not detailed. So was yours. Generally, no verbal explanation of math for a lay audience is 100% detailed … that’s what the mathematical symbols and the math classes are for.

        Next, if you can’t hold your readers’ attention you might as well say nothing. However, I did NOT say that holding their attention is more important than being correct. That’s all you.

        w.

        • Willis wants quotes, here ya go buddy (my emphasis):
          ….
          “I’d never use your explanation, despite the fact that it is correct. Too many numbers; people’s eyes would glaze over and they’d stop reading.”
          ….
          You would dispense with a correct explanation for one that people would not stop reading.

          • Keith, I did NOT say that my explanation was incorrect. I said that yours was correct. Mine was correct as well. I chose the simpler one.

            So NO, I would NOT “dispense with a correct explanation for one that people would not stop reading”. What I said was that I used a simpler but still correct explanation.

            Here’s another correct definition, from the web:

            What is Autocorrelation
            Autocorrelation is a mathematical representation of the degree of similarity between a given time series and a lagged version of itself over successive time intervals. It is the same as calculating the correlation between two different time series, except that the same time series is actually used twice: once in its original form and once lagged one or more time periods.

            I wouldn’t use that correct definition either, because most lay people wouldn’t understand it.

            And in fact, neither your definition of autocorrelation nor mine are completely correct. Here’s the 100% correct definition of autocorrelation:

            And no, I wouldn’t have used that 100% correct definition either. I used my correct definition because it was one I thought people could understand.

            w.

            The problem with what you said, Willis, centers on the word never. You said, “I’d never use your explanation.” Why would you never use it? Because it would (in your words) cause eyes to glaze over. So, your primary reason is explicit. Additionally, you stated, despite correctness. So given a correct explanation that causes eyes to glaze over, you would never use it.
            ..
            From your quote, it is plainly evident that you value readability over correctness.

          • Keith Sketchley February 21, 2019 at 11:58 am

            The problem with what you said, Willis, centers on the word never. You said, “I’d never use your explanation.” Why would you never use it? Because it would (in your words) cause eyes to glaze over. So, your primary reason is explicit. Additionally, you stated, despite correctness. So given a correct explanation that causes eyes to glaze over, you would never use it.
            ..
            From your quote, it is plainly evident that you value readability over correctness.

            Well, yes and no. If you lose your reader, you have NOT given them the correct answer because they haven’t read your answer. So it seems to me you value being “right” in some theoretical sense over actually giving someone a correct answer.

            Other than the mathematical equation I posted up above, ALL explanations of autocorrelation will be incorrect in some sense or other. I’m trying to give non-mathematicians an understanding of what autocorrelation is. So I gave an explanation mostly relating to lag-1 autocorrelation. Why? In part because high lag-1 correlation is ubiquitous in natural datasets and in part because people can understand it.

            You seem to think it is “either-or”, where either I use your preferred explanation or I’m a bad person … but in fact there are dozens of different explanations of autocorrelation, all of which (including yours and mine) are correct in some sense and incorrect in others.

            So I picked one which like yours is both correct in part and incorrect in part, as all verbal descriptions of math are, and I used it.

            You don’t like it? So sue me. But please, don’t claim I value readability over correctness. All I’ve done is used a correct description which is also readable. Is it the most correct description? Nope. Neither is yours. Here’s the only 100% correct description, which I also wouldn’t use, for obvious reasons:

            Best regards,

            w.

  6. Ah, Willis.
    You’ve probably just earned a thread over in sou-eee land, mentioning one of her unmentionables ( however kindly.)

  7. From the prologue.

    They are a result of the LTP, and are not externally driven.

    I would say, “They are a result of the LTP, and are not externally driven by determinable causes“.

    That does not say there are no causes, just that they cannot be determined.
    It’s chaos. But it’s not necessarily meaningless.

    • M Courtney February 20, 2019 at 3:22 pm

      From the prologue.

      They are a result of the LTP, and are not externally driven.

      I would say, “They are a result of the LTP, and are not externally driven by determinable causes“.
       
      That does not say there are no causes, just that they cannot be determined.
      It’s chaos. But it’s not necessarily meaningless.

      You miss my point, likely my lack of clarity. These trends and cycles are NOT externally driven by anything, determined or not. The cycles and trends appear in random fractional noise with no external drivers of any kind.

      w.

  8. As I understand it, something called a Durbin-Watson test is often used to test for the presence of autocorrelation in regressions involving time series data. Is that relevant to the work you are doing, Willis, and what would the result of that test tell us about sea level data analysis?
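    For what it’s worth, the Durbin-Watson statistic itself is easy to compute from the residuals of a regression: values near 2 suggest little lag-1 autocorrelation, values well below 2 suggest positive autocorrelation. A minimal sketch (added for illustration, not part of the original comment):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive residual differences
    divided by the sum of squared residuals."""
    e = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

# Example usage on the residuals of an ordinary least-squares trend fit:
# t = np.arange(len(y))
# slope, intercept = np.polyfit(t, y, 1)
# print(durbin_watson(y - (slope * t + intercept)))
```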

  9. Willis,
    Just making this known, but allow it to show or not. Your call.

    “Brandon R. Gates”
    I don’t wish to derail this discussion, But …
    This appears to be the R.40%gates of some years ago?
    There is some back&forth on the judithcurry site,
    Here: /2014/06/01/global-warming-versus-climate-change/
    I did not find a post at WUWT with one of his comments.

  10. Now, I pointed out in my other post how it is … curious … that starting at exactly the same time as the satellite record started in 1992, the trend in the Church and White tide gauge data more than doubled.

    Ol’ Axe Moerner has claimed that the sats are zeroed in on a limited number of observation points and that those points are cherrypicked to favor points of natural subsidence. IIRC, the sats outstrip the tidal gauges by roughly a factor of two.

  11. Going up, or

    Going down.

    So the relevant question for a given weather dataset is never “is there a trend”. There is, and we can measure the size of the trend.

    Here’s the relevant question; is a given trend an UNUSUAL trend, or is it just a natural fluctuation?

    Now if only you could get people in the MSM to even begin asking themselves questions with that in mind.
    Then they might realize that some of these so-called climate scientists are not what they claim to be.

    Their second problem is that of “records” when this year’s “record” temperature is fractionally higher than last year’s “record” temperature, and they seem to think that that is unusual, unexpected, or surprising.

  12. WE, a terrific post.
    Econometricians have been wrestling with Autocorrelation for decades.
    Ross McKitrick has done a good climate explanation several times—but his scholarly math explanations are probably impenetrable for non-econometricians/statisticians.
    Your simple explanation for laymen is brilliant, and SLR is a very good example. Kudos.

  13. As a caveat, it should be noted that Hurst’s treatment of “persistence” is a severe simplification of what has become recognized in modern signal analysis as the “effective auto-correlation length.” Unless the underlying signal has a monotonically declining power density (e.g., red noise), it cannot be relied upon for any refined detection or analysis of trend changes. Over decadal time-scales, such changes can be readily produced by much longer-period oscillations that give rise to spectral peaks of various bandwidths. Much more incisive spectral analysis is required to resolve these complicated matters.

  14. Church & White (2011) make a number of “corrections” for land motion, atmospheric water vapour, terrestrial water storage etc. which essentially gives them a steeper trend after ~1950:
    https://media.springernature.com/original/springer-static/image/art:10.1007/s10712-011-9119-1/MediaObjects/10712_2011_9119_Fig7_HTML.gif
    I’m not sure they have “corrected” for human populations that have increased since 1950 given 60% of the human body is water, not to mention animals and plants — anything to increase the ‘hidden’ trend.
    Their analysis of linear trends shows not much net change from 1930 except recently coinciding with the altimetric data since 1993 and, naturally, IPCC upper-end ‘projections’:
    https://media.springernature.com/original/springer-static/image/art:10.1007/s10712-011-9119-1/MediaObjects/10712_2011_9119_Fig8_HTML.gif
    It puzzles me why the inflection points from linear -> exponential on IPCC temperature and sea-level graphs seem to be around 1995, why not 1880 when CO2 supposedly began to increase or 1950 when human emissions supposedly ousted Mother Nature?

  15. Maybe someone can post error bars? I have a dock on a creek off the southern Chesapeake Bay, and the water levels vary tremendously over the year, yet I’m told someone is out there measuring the tide levels to millimeters.

    A simple wind shift out on the Bay can cause a 6” water level rise or drop. Seriously.

  16. I remember that awhile back there were also adjustments made for the depression of the sea floor due to the added weight of the water. If that is used to “correct” the sea level rise then might that be where the discrepancy comes from between the tide gauges and the sat?

    • Bear February 20, 2019 at 5:21 pm
      I remember that awhile back there were also adjustments made for the depression of the sea floor due to the added weight of the water. If that is used to “correct” the sea level rise then might that be where the discrepancy comes from between the tide gauges and the sat?

      With regards to Colorado University’s Sea Level Research Group, (http://sealevel.colorado.edu/). The GIA correction is 0.3 mm/yr, and was first implemented starting in 2011. In 2016 the graphic style was changed and the GIA Corrected note was dropped, however it is still being applied.

      What galls me is that folks on my side of the issue never said “Boo” about dropping the note. If you’re not careful, the “GIA Correction” looks like acceleration when it’s not.

  17. Willis, as you probably know, Brandon R. Gates erred in his initial Twitter exchange by suggesting that giving an alternative model/hypothesis is a requirement for finding fault with a model/hypothesis. As I understand it, the Scientific Method has no such requirement. If it’s wrong, it’s wrong. Full stop.

  18. Willis, there is a more interesting thing with Church and White as a historic work, in that they show the sea level rise pre- and post-satellite. That was all fine with Jason 1 & Jason 2, but have you looked at the Jason 3 data?

    Jason 3 treats wave height very differently to Jason 1 & 2, and what has been recorded is that wave height was increasing in the period covered by Jason 1 & 2. It would appear Jason 3 isn’t seeing the increase 1 & 2 did, based solely on the wave height processing.

    In fact, when you bring the Jason 3 data in, it drops the average at the moment by about 0.3 mm, to 2.9 mm. This is one of the few presentations I have seen to show it (note the purple line at the end being Jason 3):
    https://www.star.nesdis.noaa.gov/sod/lsa/SeaLevelRise/LSA_SLR_timeseries.php

  19. In “The rate of sea-level rise” by Cazenave 2014, she reported that sea level change based on satellite altimetry had accelerated to about 3.4 mm/yr over the period 1994-2002. The 5 groups that analyze satellite data were in rough agreement. However, between 2003 and 2011 sea level rise decelerated to about 2.4 mm/yr. Again the 5 groups that analyze satellite data were in rough agreement. But because that conflicted with CO2 warming theory, that data has been adjusted in different ways by various researchers. Welcome to the Adjustocene.

    FYI Brandon Gates’ interactions with me have been far from honest.

    • FYI Brandon Gates’ interactions with me have been far from honest.

      At least he doesn’t pollute this forum anymore.

  20. Regarding: “And by “greatly”, as the paper points out, using regular statistical methods can easily overstate significance by some 25 orders of magnitude …” :

    This is a very tall claim, and I would like a clearly stated, easy-to-follow link where such a claim can readily be found and supported. 25 orders of magnitude has a usual meaning of a factor of 10^25, or 1 followed by 25 zeros.

    I did check into https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2005GL024476
    I looked for and found the link for Mandelbrot and Wallis [1969b, pp. 230–231] and found that.
    That returned me to https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2005GL024476

    I even tried Googling for Mandelbrot and Wallis 1969 25 orders of magnitude and didn’t quickly see anything that looked good for this.

    • As I said, Donald, it’s from the paper. I pulled the paper up and did a quick search for “order”. Here’s the quote:

      [23] Table 1 contains estimates of the trends and corresponding p‐values for the tests discussed in section 3 applied to the N = 149 temperature observations. All of the tests report nearly the same estimated trend magnitude (β̂), which ranges from 0.0045 to 0.0053 °C/year. As far as the magnitude is concerned, it makes little difference which test is used. Choice of trend test, however, does matter when computing trend significance. The simplest test, Tβ,{0,0,0} (which assumes no LTP), finds strong evidence of trend, a p‐value of 1.8 × 10^-27. Tβ,{ϕ,0,0} (which allows for short‐term persistence) yields a p‐value of 5.2 × 10^-11, 16 orders of magnitude larger and still highly significant. The p‐value corresponding to either Tβ,{0,d,θ} or Tβ,{ϕ,d,0}, an unadjusted LRT trend test that considers both short‐term and long‐term persistence, is about 7%, which is not significant under the null hypothesis. In changing from one test to another, 25 orders of magnitude of significance vanished. This result is somewhat troubling given the uncertainty about the stochastic process and consequently about which test to rely on.

      See Table 1 for the details.

      w.

  21. Regarding the picture https://4k4oijnpiu3l4c3h-zippykid.netdna-ssl.com/wp-content/uploads/2019/02/church-white-sea-level-w-trends.png
    and also regarding “Inside The Acceleration Factory” linking to https://wattsupwiththat.com/2018/12/17/inside-the-acceleration-factory/ :

    The picture I cited is in the article that I cited. What I want to say is that the article I cited disputes a specific high rate of sea level rise acceleration (sea level rise rate increasing by 2.1 mm/year from one 21-year period ending around 1993 to the following 21-year period). This article does not completely refute acceleration of sea level rise; it finds a smaller number for acceleration of sea level rise (rise rate increasing by 0.76 instead of 2.1 mm/year from a 21-year period ending around 1993 to the following 21-year period).

  22. It amazes me to come to a post like this less than 6 months after Judith Curry published a freely available summary of the state of sea-level rise research and to see neither her name nor any reference to the findings she reported:

    https://curryja.files.wordpress.com/2018/11/special-report-sea-level-rise3.pdf

    The consensus scientific community has found no solid evidence of acceleration in the North Atlantic. Where they have found it is in the Indian Ocean basin.
    And even then it isn’t an acceleration in the way normal people think about it.

    The ocean floor of the Indian Ocean is receding, and the volume of water contained in the Indian Ocean basin is increasing at an accelerating rate.

    Thus, analysis of tide gauge data won’t find it. You have to have a model that incorporates the ocean floor.

    I’m not saying I agree or disagree, but if you want to say the consensus sea-level scientific analysis is wrong, you have to know what it is actually saying!

    https://curryja.files.wordpress.com/2018/11/special-report-sea-level-rise3.pdf

  23. Greg, my point has very little to do with either sea level rise or sea level research, and everything to do with statistics. As such, Dr. Judith’s most excellent paper is not particularly relevant.

    I will note that she, like many others, makes no mention of LTP or adjusting for LTP in her analysis …

    w.

    • Willis, I was talking more to the commenters, few of whom address the excellent content of your post. Your post was merely a launching pad.

  24. I’ve been saying this in various forms since at least my Climategate submission.

    To put it simply: you need a model of normal variation in order to know what is normal. And anyone with any gumption for this kind of issue uses a frequency-based model. Statistical tests are pretty meaningless for complex noise.

  25. “And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.”

    Hmm.

    How did you calculate that? There are half a dozen different methods that all yield different results.

    Also, self-similarity and LTP assume that the time series is second-order stationary, i.e. that the variance of the time series doesn’t change over time. Running standard tests on Church will yield values that are too high (even over 1).

    Some folks have estimates more around 0.6.

    • Steven Mosher February 21, 2019 at 2:26 am

      “And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.”

      Hmm.

      How did you calculate that? There are half a dozen different methods that all yield different results.

      Method of Koutsoyiannis. As described in the head post AND in the link in the head post.

      Also, self-similarity and LTP assume that the time series is second-order stationary, i.e. that the variance of the time series doesn’t change over time. Running standard tests on Church will yield values that are too high (even over 1).

      Some folks have estimates more around 0.6.

      That’s why I use the method of Koutsoyiannis … it never gives answers over 1. For the C&W dataset it gives 0.93. This is quite close to that of Hurst’s original R/S method which gives 0.88.

      w.

      • Steve, let me add to my comment.

        First, the method of Koutsoyiannis directly measures the rate of decline of the standard error of the mean with increasing sample size.

        This is exactly the statistic of interest. And no, it is never greater than unity.

        Next, the method of Koutsoyiannis uses all possible contiguous subsamples of length K to estimate the standard error of the mean for sample size K. This means that it is not greatly affected by changes in variance over the length of the series. It all gets included at each sample size.

        Finally, I use the method of Koutsoyiannis because I can derive mathematically the effective N from a Hurst Exponent calculated in that manner. I do not know if that is true for the other methods for calculating the Hurst Exponent.

        Best regards,

        w.
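
        For readers who want to try this, here is a minimal R sketch of the procedure as described above (my reading of the description, not Willis’s actual code): estimate the standard error of the mean from all contiguous sub-samples of length K, then read the Hurst exponent off the slope of its decline.

        ```r
        # Minimal sketch of the procedure described above (not Willis's code):
        # the s.e.m. at sample size k is estimated from ALL contiguous
        # sub-samples of length k, and H comes from how fast it falls with k.
        hurst_sem <- function(x) {
          x <- resid(lm(x ~ seq_along(x)))        # detrend first, as described
          scales <- unique(round(exp(seq(log(2), log(length(x) / 4), length.out = 20))))
          sem <- sapply(scales, function(k) {
            starts <- 1:(length(x) - k + 1)       # every overlapping window of length k
            sd(sapply(starts, function(i) mean(x[i:(i + k - 1)])))
          })
          fit <- lm(log(sem) ~ log(scales))       # s.e.m. ~ k^(H - 1)
          1 + unname(coef(fit)[2])                # so H = 1 + slope
        }

        set.seed(42)
        hurst_sem(rnorm(2000))                    # white noise: should be near 0.5
        ```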

  26. @Willis

    When such autocorrelation extends over long time periods, years or decades, it is often called “long term persistence”, or “LTP”.

    This statistical LTP concept that you are presenting as a generic statistical term actually appears to be a very specialized one, used exclusively in hydroclimatology, as Koutsoyiannis discusses here:
    https://www.researchgate.net/publication/237280296_Statistical_Analysis_of_Hydroclimatic_Time_Series_Uncertainty_and_Insights

    The term itself suggests some kind of bias; i.e., what about “short-term” or “medium-term” persistence? Apparently LTP is important in hydroclimatology, but not so much in other areas of study.

    But, generically, what we are trying to do here is find a mathematical model which can explain and/or predict a numeric time-series. In that much broader context, the issues you describe are generally characterized as a tradeoff between ‘bias’ and ‘variance’:
    https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff

  27. “And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.”

    Church’s data:

    http://www.cmar.csiro.au/sealevel/sl_data_cmar.html

    Using the R fractal package and DFA to calculate the Hurst exponent:

    0.38 or 0.35

    • Steve, the DFA algorithm from the R fractal package returns a modified Hurst exponent, which is Hurst minus 0.5. You can test this by running the DFA algorithm on rnorm(10000) normal random data. It returns a modified Hurst exponent ≈ 0, which translates to a normal Hurst exponent of about 0.5 as we’d expect.

      This means it gives a value of 0.88 for the actual Hurst exponent. The method of Koutsoyiannis gives 0.93.

      Finally, DFA uses only a portion of the data. It partitions the data into blocks of size K, and there is no overlap. However, this throws away data, the data you get if you use all possible contiguous blocks of size K. Yes, these different ways to divide it overlap … but each division is equally probable. So I prefer to utilize every single possible string of size K when calculating the Hurst exponent.

      w.
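
      To see why a value near zero on white noise implies the 0.5 offset, here is a bare-bones DFA-1 in base R (my own sketch, not the fractal package’s DFA). On the usual convention the exponent is about 0.5 for white noise, so a routine that returns about 0 on the same input is reporting that exponent minus 0.5, as described above.

      ```r
      # Bare-bones DFA-1 sketch (not the fractal package's implementation).
      # On the standard convention, alpha ~ 0.5 for white noise and alpha = H
      # for fractional Gaussian noise.
      dfa_alpha <- function(x, scales = 2^(3:9)) {
        y <- cumsum(x - mean(x))                  # integrate the centered series
        Fk <- sapply(scales, function(k) {
          nblocks <- floor(length(y) / k)
          res2 <- sapply(seq_len(nblocks), function(b) {
            seg <- y[((b - 1) * k + 1):(b * k)]
            t <- seq_len(k)
            mean(resid(lm(seg ~ t))^2)            # linear detrend within each block
          })
          sqrt(mean(res2))                        # RMS fluctuation at scale k
        })
        unname(coef(lm(log(Fk) ~ log(scales)))[2])
      }

      set.seed(1)
      dfa_alpha(rnorm(10000))   # ~0.5; a routine returning ~0 here reports alpha - 0.5
      ```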

      • The method of Koutsoyiannis gives 0.93.

        Sorry, did you detrend the data and check for stationarity?
        None of the standard peer-reviewed methods I am using come close to 0.93.

        It seems there is considerable uncertainty in the calculation of this statistic.

        Other published reports show values of 0.62.

        Rather than being hidden, this seems a most important analytic decision.

      • Willis and Steven: This passage from Koutsoyiannis (2007) suggests that high Hurst exponents here don’t provide conclusive evidence for LTP in all but the longest time series:

        In addition, because LTP is eventually an asymptotical property of the process (which should be detected on the tail, i.e., on the largest scales), even the detection of LTP is highly uncertain when dealing with time series with short length [Taqqu et al., 1995].

        [27] This point has already been made in some studies. For example, Koutsoyiannis [2002] showed that the sum of three Markovian processes (whose behavior, rigorously speaking is STP) is virtually indistinguishable from a process with LTP for lags as high as of the order of 1000. To demonstrate this point further, we fitted to the E02 series [Esper’s 2002 millennium temperature reconstruction] an ARMA(1, 1) process. Testing the autocorrelation function of the residuals of this, we concluded that they are indistinguishable from white noise; this means that the series is compatible with the ARMA(1, 1) process, i.e., it exhibits STP with Hurst coefficient 0.50. Furthermore, we generated with the fitted ARMA(1, 1) a synthetic series with sample size 2000, and all estimation methods we tried gave incorrect values of H on the order 0.79–0.93. Continuing this experiment, we also found that we need a series with length of about 20 000 to correctly estimate H, namely, to find a value around 0.50. These examples clearly point out that even the distinction between the extreme cases H = 0.5 and H → 1 is not statistically decidable with typical sample sizes.

        https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2006WR005592#wrcr11123-fig-0002

        Which leaves me confused as to how we can be sure that the Nilometer data (700 years long) must represent LTP rather than an ARMA process with high autocorrelation. Figure 5 of the Koutsoyiannis paper linked below shows logarithmic plots of standard deviation versus scale for the time series of (left) Boeoticos Kephisos annual runoff and (right) the Nilometer, with lines of slope -0.5 for classical statistics and slope H-1 for an LTP model (fractional Gaussian model?). A line would also be needed for auto-correlated (STP) data, where Neffective is likewise less than N.

        https://www.itia.ntua.gr/en/getfile/673/1/documents/2005JoHNonStatVsScalingPP.pdf
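
        As a rough illustration of the point in that quote (my own sketch, with made-up ARMA parameters rather than the ones Koutsoyiannis fitted to E02): a short-memory ARMA(1,1) series of length 2000 can easily produce a naive aggregated-variance Hurst estimate well above 0.5.

        ```r
        # Illustration only: hypothetical ARMA(1,1) parameters, not the fitted
        # E02 values. A short-memory process of length 2000 can look strongly
        # "persistent" to a naive aggregated-variance Hurst estimate.
        set.seed(7)
        x <- as.numeric(arima.sim(list(ar = 0.95, ma = -0.6), n = 2000))

        scales <- 2^(1:7)                         # block sizes 2 to 128
        sem <- sapply(scales, function(k) {
          m <- colMeans(matrix(x[1:(k * floor(length(x) / k))], nrow = k))
          sd(m)                                   # spread of the block means
        })
        1 + unname(coef(lm(log(sem) ~ log(scales)))[2])   # apparent H, well above 0.5
        ```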

        • Thanks for the interesting comments, Frank. For my purposes, it doesn’t matter what you might call the phenomenon. What I’ve done is to measure how fast the standard error of the mean decreases with increasing sub-sample size. I’ve then used that value to calculate the effective N.

          As a result, whether you say the decrease in the size of N to give us “effective N” is the result of LTP, or of autocorrelation, or of man in the moon marigolds, is immaterial. We are measuring, not estimating but measuring, a particular property (decrease in s.e.m.) for a particular dataset, and the name we might attach to the purported cause for the decrease doesn’t change the measurement.

          Finally, Steve Mosher asked if I had detrended the dataset and checked for stationarity. I did detrend the dataset, my algorithm does that unless directed otherwise. However, I don’t check for stationarity for a couple of reasons.

          First, I have still not seen a coherent explanation of why a lack of stationarity would affect the method of Koutsoyiannis.

          Next, the Nilometer dataset used by Hurst when he discovered LTP is NOT stationary.

          Next, my own experiments don’t show a problem using the method of Koutsoyiannis on non-stationary data.

          Next, what I’m actually measuring is the standard error of the mean, which includes variance (in the form of standard deviation) in the calculation. So changes in variance shouldn’t affect the final result.

          Finally, fractional gaussian noise is in general not stationary. But as I showed in my link above, the method of Koutsoyiannis does a good job estimating effective N for FGN.

          Best regards,

          w.
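
          For what it’s worth, the scaling described here implies a simple effective-N formula. This is my inference from the description above, not a quote from Willis’s code:

          ```r
          # If the s.e.m. falls as sigma / n^(1 - H), then n_eff = n^(2 - 2H)
          # (my inference from the scaling described above).
          n_eff <- function(n, H) n^(2 - 2 * H)

          n_eff(149, 0.50)   # white noise: all 149 points count
          n_eff(149, 0.93)   # H = 0.93: roughly 2 effective data points
          ```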

  28. Interesting post, as usual! Another strong argument in favor of adaptation instead of mitigation.

    “Yeah, there is a trend, but have you accounted for LTP?” This should become a standard reply to warmist claims.

    Typo: Kousoyiannis -> Koutsoyiannis

    • @Blaire
      “Yeah, there is a trend, but have you accounted for LTP?”

      You have completely misunderstood “LTP”. The warmists have indeed gladly accounted for, and adjusted, the ‘long-term persistence’ of warming in the 1930’s and other similar past historical ‘trends’.

      LTP is _not_ the same as ‘irreducible error’ in analysis. It can be interpreted as bias or variance, depending on your hypothesis. And it definitely has cause and effect.

      • “And it definitely has cause and effect”

        Depends on what you mean by cause and effect. LTP is possible in completely random processes. Ever heard of Hurst-Kolmogorov dynamics?

      • @Johanus

        From the head post:

        “Nature’s Style: Naturally Trendy”:

        In any case, powerful trend tests are available that can accommodate LTP.

        It is therefore surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence.

        From your comment:

        The warmists have indeed gladly accounted for, and adjusted, the ‘long-term persistence’ of warming in the 1930’s and other similar past historical ‘trends’.

        I go with the authors of “Nature’s Style: Naturally Trendy”: the warmists have not accounted for LTP.

  29. Wow! Three cups of coffee to get through this thread! Thank you, Willis, for this food (coffee?) for thought.
    Leo Smith added a few more variables (unquantified):
    “if tectonic plates are not colliding, erosion will tend to reduce land height and decrease sea depth as the high bits are washed away into the ocean floors.
    a greening planet will show falling sea levels as more water is locked up in hydrocarbons…”
    Are these significant?
    At the risk of tossing another porkpie into the mosque … oops, correction … adding another variable to the mix: is the ocean floor area increasing, decreasing or constant? (V/A = d → MSL?)
    Cheers
    Mike

  30. I’ll point once more to my favorite sea-level gauge: Kungsholmsfort in southern Sweden. Sweden initiated an ambitious program of sea-level measurements in 1886, and many of the sites have been continuously monitored ever since. Kungsholmsfort is a coastal fortress built on Archaean bedrock of the Baltic Shield. There are probably very few places anywhere in the world that are more stable or tectonically quiescent.
    The only confounding factor is the isostatic adjustment still going on since the inland ice melted c. 15,000 years ago. This has been studied since the days of Linnaeus in the eighteenth century and is known to be very nearly linear over century-length intervals.

    Completely by accident Kungsholmsfort happened to lie on the line where the isostatic rise (c. 1.5 mm/year) was equal to the absolute sea-level rise in the late nineteenth century (c. 1.5 mm/yr), hence the relative sea-level rise was zero.

    Guess what? The relative sea-level rise is still zero:

    https://www.psmsl.org/data/obtaining/rlr.monthly.plots/70_high.png

    This can only be due to one of two things:
    1. There is no acceleration in absolute sea-level rise.
    2. The absolute sea-level rise and the isostatic rise accelerate in unison.

    • From Viking warrior women? Reassessing Birka chamber grave Bj.581

      The Viking Age settlement of Birka on the island of Björkö in Lake Mälaren, Uppland, is well known as the first urban centre in what is now Sweden.

      It is about 10 miles west of Stockholm. If you look at Figure 1 on page 3 of the pdf, it shows the Viking Age “(c. AD 750–1050)” shoreline in relation to today’s shoreline. Using Google Earth, I eyeballed the Viking Age shoreline at about 16 feet above today’s shoreline. I just thought this might be interesting, given the Chicken Little fear mongering about sea level rise.

      P.S. Here is the Supplementary Information for the paper.

  31. It is hard enough to measure sea level rise with tide gauges immersed in the sea. How is it possible to measure sea level from orbit? I contend that it is not.

    Same with ice volume from orbit, just how are these instruments calibrated? NASA and NOAA have a lot of radical activists who have gained positions of “authority.”

    Makes for lots of headlines, but not much if any good science.

    • Ice volume by radar measurements is probably marginally possible. It is after all much easier than sea-level. Icecaps stay still (more or less), don’t have waves, don’t have tides, aren’t affected by winds or air-pressure and have very slow “currents”. On the other hand they are affected by GIA and variations in snow density (but then oceans are affected by GIA too).

      Measuring ice volume gravitationally (GRACE) is much more dubious since the GIA uncertainty in that case is of the same order as the effect you are trying to measure.

  32. As Willis has pointed out many times, statistical analyses can be a powerful tool in evaluating the meaning of data. But powerful as they are, statistics cannot tell you anything about the scientific validity of the data. If you apply statistical analysis to bad data, you will still have bad data. A case in point is the validity of satellite sea-level measurements. The data has been so ‘adjusted’, can we trust the results? Comparison of satellite data with tide gauge data shows a discrepancy; which should we trust? An apparent acceleration of sea level when you switch from tide gauge records to satellite records sounds highly suspicious.

    • Don: You might want to express your criticisms in terms of systematic vs random error. Satellite altimetry (IMO) has high potential for systematic error. The “adjustments” you and others complain about were corrections of systematic errors in calculating the altitude of a satellite’s orbit and in converting the time for a radar signal to bounce off the ocean and return into a distance to the ocean surface. There are large correction factors (up to meters) for humidity and waves/wind speed (which are calculated from re-analysis, a process that changes over the years) and for the state of the ionosphere. Orbital drift in the vertical direction from normal satellite tracking is at least 1 cm/year, so satellite altitude is being calibrated by measuring the distance to the ocean at special sites (with tide gauges and GPS monitoring of VLM).

      Because satellites are measuring SLR “continuously”, the large number of data points produces a small confidence interval for SLR. However, each of the adjustments to correct for systematic error provided a new answer outside the confidence interval for the previous answer.

  33. Now, I pointed out in my other post how it is … curious … that starting at exactly the same time as the satellite record started in 1992, the trend in the Church and White tide gauge data more than doubled.

    I would love to see a blog post expounding on this topic. It truly seems to be the crux of the matter.

  34. Willis: Thanks for educating readers about how the Hurst coefficient can be used to calculate an Neffective for data with fractional Gaussian noise. I learned a lot from this and earlier posts.

    You haven’t, however, addressed the question of whether fractional Gaussian noise is an appropriate statistical model for the noise in SLR data. Nor how accurately one can determine the slope of the line that gives the Hurst exponent from limited data.

    A few years ago, Doug Keenan and others made a big deal with the Met Office about the fact that a random-walk statistical model without a trend fit the historical global temperature record better than an AR1 statistical model with a trend. However, a random-walk statistical model with a “step size” of 1 degC/century would likely drift 10 degC over the 100 centuries of the Holocene, and it is inconsistent with the expectation that the planet’s climate feedback parameter is negative (which serves to return temperature to a steady-state mean under constant forcing). The choice of an appropriate statistical model for analyzing the noise in any data set is a difficult and tricky question. Postulating a model without discussing alternatives isn’t appropriate, but is all too common.

    The Church record of SLR poses enormous challenges, because it is a compilation of different data sources over different periods of time. For most of the period, it is a record of coastal sea level, which gradually expanded from covering a few tide gauges to many. With the addition of satellite altimetry, it became a record of global sea level. The discontinuity at the beginning of the satellite era is a huge red flag. And the uncertainty in this record is much greater in the first half-century than the rest of the record. The Hurst coefficient works well for data with artificial fractional Gaussian noise, but is unlikely to be meaningful here. Time-dependent data inhomogeneity can produce the appearance of LTP, whether it exists or not.

    For the same reason, no other statistical model is likely to be appropriate for detecting acceleration in the Church (2011) data set. The authors provide the results for a quadratic fit to their data, but given the obvious surges and pauses in the rate of SLR (Figure 8), the noise is obviously auto-correlated. The best evidence for acceleration comes from the homogeneous satellite altimetry record alone. In both cases, the calculated acceleration is consistent with the IPCC’s central estimates for sea level rise (about 0.5 m by the end of the century), not with more than 1 m. One meter by the end of the century requires an acceleration of 1 inch/decade/decade, or 0.25 mm/yr/yr. Clearly that hasn’t been happening!

    Winds can blow water towards or away from various tide gauges in the coastal record. There is nowhere winds can move ocean water where it can “hide” from satellite altimetry, except perhaps polar regions with sea ice that aren’t part of the satellite record. Huge changes in local sea level (1 foot?) occur in the Equatorial Pacific during El Ninos as trade winds weaken and shift direction. The influence of the AMO can be seen in some Atlantic tide gauge records. From a mechanistic perspective, the noise in local SLR is likely to be due to changing winds (do they show LTP?), with a trend arising from ocean heat uptake, melting of ice caps, and VLM. VLM is likely to be linear with time in some places, but linear is an oversimplification for these other factors.
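
    A back-of-envelope check of those last figures, assuming a present rate of roughly 3 mm/yr and about 80 years to 2100 (both are my round numbers, not stated above):

    ```r
    # Back-of-envelope check (round numbers: ~3 mm/yr today, ~80 years to 2100)
    rate0 <- 3        # current rate, mm/yr
    years <- 80
    accel <- 0.25     # mm/yr/yr, i.e. about 1 inch/decade/decade (25.4 mm / 100 yr^2)
    rise  <- rate0 * years + 0.5 * accel * years^2
    rise / 1000       # ~1.04 m by 2100 at that constant acceleration
    ```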

    • You haven’t, however, addressed the question of whether fractional Gaussian noise is an appropriate statistical model for the noise in SLR data. Nor how accurately one can determine the slope of the line that gives the Hurst exponent from limited data.

      When estimates of the p-value can range over 25 orders of magnitude, there’s strong prima facie indication that the entire model for SLR data, not just the noise, is way off base.

  35. Willis,
    Long ago I was disenchanted with some customary mathematical ways of looking at effects like autocorrelation and long-term persistence, so I started looking at other approaches, including the math of geostatistics rather than (say) conventional correlation coefficients. A few others were also looking, one reference being
    http://climate.indiana.edu/RobesonPubs/janis_robeson_physgeog_2004.pdf
    which is a recommended paper.
    Early days for me yet, but I’d love to see some of the brain power here pick up the threads and run with it.
    Geoff.

  36. Earth is 4.4 billion years old. Using standard statistical methods (and I see no standard statistical methods here, nor in the article, based on my training), the minimum time increment for a 4.4-billion-year record is around 100,000 years; anything less than 100,000 years is just weather. There are some ice cores over 100,000 years old, but not enough. The only way to find climate change for Earth is to date undisturbed rocks from specific locations and measure the amount of gases in those rocks (nitrogen, oxygen, carbon dioxide, and miscellaneous gases) for a comparison over the 4.4-billion-year life of Earth. This is a geologist’s inquiry, and everyone else is wasting their time talking only about weather; talking about anything less than 100,000 years is not useful for climate change.

    • PLUS TEN!!
      It amazes me that, if you look at a graph of global temperature from the Carboniferous period to today, the average temperature is around 18 to 20 degrees C, while today, and for the last several thousand years, we have been around 13 to 15 degrees C. Thus we are presently in an exceptionally COLD period. Further, for about 200 million of those 600 million years the CO2 level was 10 to 20 times greater than today, and the global temperature leveled off at 25 degrees C. That is a rather comfortable temperature which should be great for providing an excess of plant food for the animal/human population of the globe. Yet we are worried about disastrous, irreversible climate change. That just does not compute and is illogical. What prevented the temperature from exceeding 25 degrees C for 600 million years? WHAT?
