Sea level oscillations in Japan and China since the start of the 20th century and consequences for coastal management – Part 1: Japan

Albert Parker

https://doi.org/10.1016/j.ocecoaman.2018.12.031

Highlights

  • Japan shows strong quasi-20- and quasi-60-year low-frequency sea level fluctuations.
  • These periodicities translate into specific record-length requirements for tide gauges.
  • From 1894/1906 to the present, there is no sea level acceleration at the 5 long-term stations.
  • Those less affected by crustal movement (4 of 5) do not even show a rising trend.
  • Proper consideration of the natural oscillations should inform coastal planning.

Abstract

In Japan, tide gauges are abundant and have been recording sea levels since the end of the 19th century. Here I analyze the long-term tide gauges of Japan: the tide gauges of Oshoro, Wajima, Hosojima and Tonoura, which are affected to a lesser extent by crustal movement, and of Aburatsubo, which is more affected by crustal movement. Hosojima has an acceleration from 1894 to 2018 of +0.0016 mm/yr2. Wajima has an acceleration from 1894 to 2018 of +0.0046 mm/yr2. Oshoro has an acceleration from 1906 to 2018 of −0.0058 mm/yr2. Tonoura has an acceleration from 1894 to 1984 of −0.0446 mm/yr2. Aburatsubo has an acceleration from 1894 to 2018 of −0.0066 mm/yr2. There is no sign of any sea level acceleration around Japan since the start of the 20th century. The different tide gauges show low-frequency (>10 years) oscillations with quasi-20- and quasi-60-year periodicities. The latter periodicity is the strongest in four cases out of five. As the sea levels have been oscillating, but not accelerating, at the long-term tide gauges of Japan since the start of the 20th century, the same as at all the other long-term tide gauges of the world, it is increasingly unacceptable to base coastal management on alarmist predictions that are not supported by measurements.

And the Conclusion:

In three of the four long-term tide gauges of Japan, Oshoro, Wajima and Hosojima, there is no sea level rise and there is no sea level acceleration. Hosojima has an acceleration from 1894 to 2018 of +0.0016 mm/yr2. Wajima has an acceleration from 1894 to 2018 of +0.0046 mm/yr2. Oshoro has an acceleration from 1906 to 2018 of −0.0058 mm/yr2.

In the fourth long-term tide gauge of Japan, an apparent sea level rise and acceleration is only the result of a composite record obtained by coupling the long-term tide gauge record of Tonoura, showing no acceleration and no sea level rise, with the short-term tide gauge record of Hamada II, which is sinking and therefore recording a much faster rate of rise. Tonoura has an acceleration from 1894 to 1984 of −0.0446 mm/yr2.

The other long-term tide gauge of Japan, Aburatsubo, which is significantly affected by crustal movement, has an acceleration 1894 to 2018 of −0.0066 mm/yr2.

There is therefore no sign of any sea level acceleration around Japan since the start of the 20th century.

All the long-term tide gauges considered show a clear multidecadal oscillation of quasi-60-year periodicity. This translates into the need for tide gauge records long enough to compute rates of rise (>60 years) and accelerations (>100 years).

Ocean and coastal management should be based on reliable data for sea level rise and acceleration, not on alarmist speculation.

Read the full paper here.
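As a minimal numerical illustration of the record-length point in the conclusion, the sketch below fits a linear trend to a pure 60-year oscillation with no underlying rise. The amplitude and phase are hypothetical, chosen only for illustration:

import numpy as np

# A pure quasi-60-year oscillation with zero true long-term trend.
# Amplitude and phase are hypothetical, for illustration only.
period, amplitude = 60.0, 20.0          # years, mm
t_full = np.arange(0, 120, 1 / 12)      # 120 years of monthly samples

for window in (10, 20, 60, 120):        # record length in years
    t = t_full[t_full < window]
    level = amplitude * np.sin(2 * np.pi * t / period)
    slope = np.polyfit(t, level, 1)[0]  # least-squares trend, mm/yr
    print(f"{window:>3}-yr record: apparent trend {slope:+.2f} mm/yr")

The shortest windows report spurious trends of 1 to 2 mm/yr in this setup, shrinking toward zero only as the record grows well beyond the oscillation period, even though the true long-term trend is exactly zero throughout.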

Comments
Donald Kasper
January 3, 2019 10:24 pm

It does not matter what the sea level changes are. What matters is the accuracy of the gauges. I would presume that even 1 mm measurement accuracy is preposterous, and that the real accuracy is probably tens of millimeters. As such, the sea level change in Japan is zero. More significant digits than the instrumentation can record is called imaginary precision; it is not real and has no meaning. What this fool just claimed is that late-1800s sea level was measured to one ten-thousandth of a millimeter. This is beyond idiocy and this paper is garbage. If the accuracy is 10 mm then taking two measurements produces an error of 20 mm, not a precision of 0.0001 mm. This guy is bonkers and needs to learn about scientific precision.

Lasse
Reply to  Donald Kasper
January 4, 2019 12:33 am

Science is measurement. It has been done for a long time at gauges all over the world.
Do not get rude if you do not understand the difference between measurements and calculations.
Look at this: https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?plot=50yr&id=140-012

Geoff Withnell
Reply to  Lasse
January 4, 2019 4:36 am

Calculations are all well and good. And all digits should be retained while performing the calculations. HOWEVER, the end result of the calculations is only meaningful to the number of significant digits in the original measurements the calculations were based on. If the measurements were to millimeters, there is no way to calculate a meaningful result in tenths of millimeters, let alone the silly levels of accuracy claimed above. Even if the tide gauges are accurate to 1 mm (unlikely), when the averages and trends are calculated and then expressed to the correct number of significant digits, the answer will be 0.

Ed Reid
Reply to  Geoff Withnell
January 4, 2019 4:51 am

What if the measurements were “adjusted”? 😉

Geoff Withnell
Reply to  Ed Reid
January 4, 2019 5:00 am

All calculations and “adjustments” in the world cannot add accuracy not present in the original measurements.

Tom Halla
Reply to  Geoff Withnell
January 4, 2019 6:41 am

A regular poster on this site (David Middleton?) noted that the resolution of tide gauges is in centimeters, not millimeters. I agree that any claim of 0.001 mm resolution is imaginary.

Albert
Reply to  Donald Kasper
January 4, 2019 12:54 am

It seems that the ability to switch on the brain before starting to write is not common any more. The data for sea level rise and acceleration, apart from Japan, are from sealevel.info. Rates of rise, as in the NOAA estimates, are given to 0.01 mm/yr accuracy, as is customary. Accelerations need more digits, as they are in mm/yr2. Probably not everybody completed primary school, but accelerations of 0.05 mm/yr2 already translate into considerable sea level rises. What is good about peer review is that stupid comments usually get filtered out by people who are able, sometimes, somewhere, to publish something in journals.

Ian W
Reply to  Albert
January 4, 2019 1:41 am

I think that Donald Kasper was pointing out something that needs to be explained to sceptics. Perhaps one of the wunderkind would explain in detail (not just a NOAA URL):

* How a tide gauge works to provide a reading with accuracy to a millimeter or better.
* How a tide gauge determines the level that accurately, given the effects of waves, tides, onshore winds and storm surges (sometimes from distant storms)
* The error bars of tide gauges now and in the past.
* How much and what mathematical ‘adjustments’ are made.

Indeed, while the wunderkind is at it, the same information would be interesting for satellite measurement of ‘sea level’. Is that against the WGS-84 ellipsoid or the EGM96 geoid or some other invented ‘sea level’?

Then, having explained how the accuracy is achieved and the size of the error ‘bars’, explain why a finer level of precision than the accuracy is used to report sea levels. Just because it can be done mathematically does not mean it makes metrological sense.

Ron Long
Reply to  Ian W
January 4, 2019 2:41 am

Very clever, Ian, you are supporting Donald by attacking via the side door. Accumulated error in measurements is a common part of science and engineering study, and Donald is correct. It also looks like he is pissed off. By the way, the area studied is part of the “Ring of Fire”, and how there isn’t thermal/volcanic inflation and differential land-mass movement there is beyond me.

Greg
Reply to  Ron Long
January 4, 2019 4:43 am

I think the whole point of this paper is being misread by people wanting a rant about AGW.

“Ocean and coastal management should be based on reliable data for sea level rise and acceleration, not on alarmist speculation.”

What matters for coastal management is the REAL level of sea water relative to the surrounding land. If somewhere is rising due to tectonic activity, all the better: whatever global change is going on is wiped out and there is NO problem.

This is a lot of the problem with alarmist claims about sea level. It is supposed to be a life-threatening problem for coastal communities, but the gatekeepers of the satellite data keep tweaking it to account for ocean basins getting deeper and other “corrections”, so that their sea level is now floating, phantom-like, above the waves.

Greg
Reply to  Ron Long
January 4, 2019 4:49 am

Similarly, if you are in Bangladesh, the land is sinking at about ten times the rate the global oceans are rising, so even if the whole world went “carbon neutral” next year, you are still going to lose your land, because you are living in a massive river delta, pumping water and mismanaging the flow down the Ganges, which traditionally built up silt and maintained land levels.

So there is no point in warmists using Bangladesh as the poster child of coastal flooding due to AGW-driven sea level rise, since that is not the cause. But again, what matters for coastal management is REAL sea level, not GAIA-adjusted BS from satellites.

Ian W
Reply to  Ron Long
January 4, 2019 12:54 pm

Greg – my understanding was that it was the use of the river for irrigation, levees accelerating river flows so silt is carried further out to sea (as with the Mississippi at the Gulf), or, alternatively, dams reducing the silt reaching the delta. All of these have more effect on the level of the deltas than the current sea level rise.
See https://www.sciencedirect.com/science/article/pii/S0921818105001827

commieBob
Reply to  Donald Kasper
January 4, 2019 1:50 am

This problem is more of a bear than it looks at first glance.

I used to do a numerical experiment with my students where they buried a small sine wave signal in random noise that was a hundred times as powerful. Given enough data, they were able to recover the signal using the output of a comparator and simple math. The comparator tells you whether a voltage exceeds a certain level or not. Its output is either one or zero. It’s actually the noise that lets you resolve the signal’s waveform.
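For readers who want to try it, here is a minimal Python sketch of that kind of experiment, assuming Gaussian noise and a precisely known frequency; every parameter is illustrative:

import numpy as np

# Recover a sine buried in Gaussian noise 100x its power from 1-bit data.
fs, f0, A = 1000.0, 5.0, 1.0             # sample rate (Hz), frequency (Hz), amplitude
noise_sigma = np.sqrt(100 * A**2 / 2)    # noise power = 100x the sine's power

n_periods = 20_000
t = np.arange(int(n_periods * fs / f0)) / fs
noisy = A * np.sin(2 * np.pi * f0 * t) + np.random.normal(0.0, noise_sigma, t.size)

comparator = (noisy > 0).astype(float)   # the comparator output: 1 or 0

# Fold the 1-bit stream on the known period and average each phase bin.
spp = int(fs / f0)                       # samples per period
folded = comparator[: (comparator.size // spp) * spp].reshape(-1, spp)
waveform = folded.mean(axis=0)

# For small s/sigma, E[comparator] ~ 0.5 + s/(sigma*sqrt(2*pi)),
# so rescale the averaged 1-bit data back to signal units.
estimate = (waveform - 0.5) * noise_sigma * np.sqrt(2 * np.pi)
print("recovered amplitude ~", estimate.max())   # close to A = 1.0

The comparator alone cannot resolve anything below its one-bit step; it is the noise, plus averaging over many periods, that makes the waveform recoverable, exactly as described above.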

Basically, using the power of averaging and with enough data, you can dramatically increase the precision and accuracy of measurements. This is very well known. In fact it’s so well known that people assume that it’s always true. It isn’t. However, if you know what you’re doing, it is a very powerful tool. Consider the Voyager spacecraft.

Due to their respective distances, tens of billions of kilometres from home, the signal strength from both spacecraft is very weak, only one-tenth of a billion-trillionth of a watt. link

Here’s another tidbit:

So even at 100AU, with “only” 735 seconds acquisition and “only” a 20m dish you can detect Voyager

But sooner or later, the acquisition time and the antenna size become impractical… link

Look at the conditions of the above experiment:
* The frequency of the signal was precisely known, and the frequency and amplitude were unchanging (i.e. zero bandwidth).
* The noise was precisely random.
* It was a numerical experiment, so the comparator was perfect and didn’t suffer from the quirks of a real circuit.

In the case of the Voyager spacecraft, the conditions are tame enough that we get useful, and rather impressive, results.

In the case of tide gauges or the global temperature network, I am not convinced of the validity of the claimed precision and accuracy.

One issue is the kind of noise we’re faced with. In nature, the noise is not truly random. It is usually red noise. In that kind of noise, low frequencies predominate and it resembles a slow, but unpredictable, drift. It’s well nigh impossible to remove by simple signal processing.

IMHO, most researchers assume they’re looking at truly random noise and calculate their precision and accuracy based on that. It’s a bad mistake.
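The red-noise point is easy to demonstrate numerically. A minimal sketch, using an AR(1) process as a stand-in for red noise (the coefficient is illustrative):

import numpy as np

# White vs. red noise: averaging tames the first, not the second.
rng = np.random.default_rng(0)
n, phi = 100_000, 0.999                # phi near 1 gives a slow, unpredictable drift

white = rng.normal(0.0, 1.0, n)
shocks = rng.normal(0.0, 1.0, n)
red = np.zeros(n)                      # AR(1) "red" noise: red[i] = phi*red[i-1] + shock
for i in range(1, n):
    red[i] = phi * red[i - 1] + shocks[i]

for label, x in (("white", white), ("red", red)):
    for m in (100, 10_000, 100_000):
        print(f"{label:>5} noise, first {m:>7,} samples: mean = {x[:m].mean():+8.3f}")

The white-noise means shrink roughly as 1/sqrt(n); the red-noise means wander and stay far from zero, which is why precision claims built on the assumption of truly random noise can fail badly.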

Steve O
Reply to  commieBob
January 4, 2019 4:55 am

“Basically, using the power of averaging and with enough data, you can dramatically increase the precision and accuracy of measurements.”

There’s a group here who simply won’t accept that. You can explain it with math. You can show the formulas. You can demonstrate it with experiments. You can show examples from the real world. It won’t matter.

DocSiders
Reply to  Steve O
January 4, 2019 6:32 am

Yes, the resolution from lots of data will (usually) exceed the resolution of the individual instruments/readers taking the data (by a surprising amount sometimes), but one must know the accuracy and distribution of the errors of the measurements… as well as “n”… to do so.

Missing from this article is any error assessment of the data. There is no such thing as a PERFECT measurement. Error assessment is not always easy. The distribution and amplitude of measurement errors may or may not be random, for instance. If the distribution is “normal” (random), then accuracy increases with the number of samples. If skewed, it still can if the distribution is known (but the skew could change over time… affecting acceleration assessments).

I’m sure that tidal measurement errors over this relatively long measuring period have been well studied. That’s what grad students do a lot of (analyze data and defend analyses).

Detecting sea level rise accelerations on the order of 0.02 +/- 0.05 mm/yr/yr seems feasible to me, since the data span a >200 year period, the measurement error distributions were probably fairly random (or the skew was fairly constant), and 3 sigma is probably less than +/- 2 mm.

DocSiders
Reply to  DocSiders
January 4, 2019 7:02 am

Missed some pesky zeros:
0.002 +/- 0.005 mm /yr/yr.

Steve O
Reply to  DocSiders
January 4, 2019 8:00 am

Those are good points and are worth raising. Skewness in error distribution can cause problems. If the skewness is constant, you’ll have a systematic measurement error but you may still be able to detect trends over time. If the skewness changes over time then you can’t be sure of anything.

But there are commenters here who deny the ability of the Central Limit Theorem to eliminate random errors. There are others who accept the Central Limit Theorem but claim it doesn’t apply. Views are all over the board on this one.

Clyde Spencer
Reply to  DocSiders
January 4, 2019 11:51 am

Steve O

Basically, the Central Limit Theorem says, “… given a sufficiently large sample size from a population with a finite level of variance, the mean of all samples from the same population will be approximately equal to the mean of the population.”
https://towardsdatascience.com/understanding-the-central-limit-theorem-642473c63ad8

It says nothing about the precision of the estimate of the population mean. It is really addressing the issue of accuracy.

Some have also tried using the Law of Large Numbers as a rationalization for increasing precision. However, it speaks to discrete probabilistic events like throwing dice or tossing a fair coin. It basically says that the phenomenon of getting a run of, say, heads, will become less important as more tosses are performed. That is, with a large number of tosses, the theoretical percentage will be approached with greater accuracy.

One of the problems with using the Standard Error of the Mean with a data set that doesn’t meet the criteria, namely data from a variable and skewed population, is that while the Standard Error decreases, the Standard Deviation does not. That is, for the global annual temperature of Earth, the Standard Deviation is related to the Range, and taking more samples might even increase the Range as improbable tail values are sampled. Two Standard Deviations are commonly quoted as the uncertainty of the Mean of a number of samples. It seems that only in climatology is it common to find researchers rationalizing high precision of the Mean of a variable by quoting the Standard Error.

https://wattsupwiththat.com/2017/04/12/are-claimed-global-record-temperatures-valid/
https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/
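The SEM-versus-SD distinction is easy to see numerically. A minimal sketch, sampling a deliberately skewed, variable population (the distribution and its parameters are illustrative):

import numpy as np

# Standard error of the mean vs. standard deviation of the data itself.
rng = np.random.default_rng(1)

for n in (10, 100, 10_000):
    sample = rng.gamma(shape=2.0, scale=5.0, size=n)   # skewed, "variable" population
    sd = sample.std(ddof=1)
    sem = sd / np.sqrt(n)                              # shrinks as 1/sqrt(n)
    print(f"n={n:>5}: SD = {sd:5.2f}  SEM = {sem:6.3f}  range = {np.ptp(sample):6.2f}")

The SEM shrinks as 1/sqrt(n), but the SD does not, and the range tends to grow as the tails get sampled; quoting the SEM as the uncertainty of a variable quantity hides the spread actually present in the data.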

MarkW
Reply to  Steve O
January 4, 2019 8:14 am

SteveO, you missed the conditions that cB put on his statement.
You need to know that your noise is perfectly random. When it comes to climate data, that is not the case. In most cases, the noise is known to be not random.
A point that cB didn’t bring up, because it wasn’t relevant to his example, is that taking 100 measurements with 100 different instruments at 100 different places and times can be averaged, but the accuracy does not increase because of this averaging.

Geoff Withnell
Reply to  MarkW
January 4, 2019 10:16 am

The Central Limit Theorem is very powerful. But it requires a number of assumptions. I am just about certain that the errors are not random in that the distribution of the errors was wider in the earlier samples than in the later. And we have very little way of being confident that the errors have been well centered around the mean, or that the lack of centering has been consistent. You can’t just wave the CLT at a bunch of data. Just like any other statistical tool, you must show that you have met the assumptions necessary for its use. So far as I can see, not done here.

Clyde Spencer
Reply to  MarkW
January 4, 2019 12:03 pm

MarkW
Yes, if there is an undetected malfunctioning instrument in the collection, it can corrupt the entire data set, albeit with only one out of a hundred the effect will be small. However, the point is that the number of instruments alone does not guarantee increased accuracy or precision. The key to successful improvement of precision is that the same thing must be measured multiple times by what is at least functionally an instrument of the same accuracy and precision, if not the same stable instrument. If one is measuring a variable over a period of time, a mean and standard deviation can be calculated, but the measurements would not be the same as those obtained from something with a constant value that coincidentally had the same value as the mean of the variable. In particular, the standard deviations will be significantly different!

skorrent1
Reply to  Steve O
January 4, 2019 8:32 am

Gotta love the EEs and their signal recognition feats. If you know what you’re looking for, you can usually squeeze the data until it gives you what you want. Mikey Mann did that quite successfully. As a plain old Mech Engr, if I want the (actual, unknown) length of a part to precisely 0.001 in, I’m going to use a mike; I won’t measure with a ruler 100 times and take the average.

Robert Stewart
Reply to  skorrent1
January 4, 2019 10:59 am

+10

Clyde Spencer
Reply to  skorrent1
January 4, 2019 12:05 pm

skorrent1
+sqrt(100)

Reply to  skorrent1
January 4, 2019 1:08 pm

+10 more

commieBob
Reply to  skorrent1
January 4, 2019 6:40 pm

Gotta love the EE’s and their signal recognition feats.

Don’t lump the EEs in with the likes of Dr. Mann. Fully understanding the signal processing that goes on in your cell phone requires about six years of engineering education, i.e. an MSc or MEng. Properly applied, the techniques are very powerful indeed. Improperly applied by scientists who don’t really understand them, they just result in GIGO.

Duane
Reply to  Donald Kasper
January 4, 2019 7:26 am

The author discussed acceleration in sea level rise, defining it to thousandths of a mm/year per year. That is not a measurement of sea level rise, but a statistical calculation based upon whole-mm direct measurements as recorded at these gauges in Japan.

Measuring sea level rise to whole millimeters is not preposterous at all if the tide gauges are calibrated in millimeters.

Neil Jordan
January 3, 2019 10:35 pm

Approx. 20-year periodicity could represent the ~19-year lunar-solar Metonic Cycle. American Council on Surveying and Mapping identified this cyclic rise and fall of sea level for US tide gages. PDF can be downloaded at NOAA:
https://tidesandcurrents.noaa.gov/publications/Understanding_Sea_Level_Change.pdf

Global Cooling
January 3, 2019 11:13 pm

Conclusion: no thermal expansion in the Pacific Ocean (see, of course, Dr Curry’s post here last week). This is consistent with Argo measurements that show practically zero temperature change. Air temperatures just above the sea surface can’t be far from the water temperatures, and they correlate well with the temperatures on the continents.

What remains is the Arctic. Do we have good historical evidence about cycles in Arctic ice?

kj
January 4, 2019 12:12 am

Though I have nothing but anecdote for it, I was advised by a tour guide in the city of Fukuoka, Japan, that the central business district we were then bussing through had been a sea-port location a few centuries earlier.

That location was 2-3 km from the current (2015) harbour-side areas of Fukuoka.

Sea level has not been rising there, it seems!

Reply to  kj
January 4, 2019 1:12 am

kj,
I think you may find that a few centuries ago the port was at the mouth of a river or on an inlet and served small sailing vessels.
Now vessels are much bigger and need deeper water, and I guess that the area between the new and old port areas has just been filled in, or “reclaimed” as the Netherlanders call it. Compare with Rotterdam over the last 60 years.

spangled drongo
January 4, 2019 1:32 am

No long-term trend of sea level rise and certainly no acceleration.

This agrees with my area of the Pacific in eastern Australia, where similar oscillations make current king tides lower than those of the 1940s/50s.

Lasse
Reply to  spangled drongo
January 4, 2019 3:56 am

You have a vast amount of evidence of periodic change in sea levels.
Always look at the 50-year trend here: https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?plot=50yr&id=680-140

None of them has a non-periodic signal, as far as I have noticed.

toorightmate
January 4, 2019 4:21 am

No one seems bothered by the fact that the lateral movement of tectonic plates greatly exceeds the rises and/or falls in sea levels over time.
Similarly, everyone appears to ignore the plasticity of the earth’s crust.

Clyde Spencer
January 4, 2019 11:16 am

“Japan shows strong quasi-20- and quasi-60-year low-frequency sea level fluctuations.”

There are well-known lunar influences with periods of 18.61, 18.03, and 19.00 years that are used in the prediction of tides. But anyone who has used tide tables knows that they can be wrong because of things like winds and pressure differences. So it shouldn’t really be surprising that a “strong quasi-20” year sea level fluctuation was found.

Frank
January 4, 2019 10:34 pm

Albert: There may appear to be a requirement for long tide gauge records, but we live on a planet where SLR might have begun to accelerate sometime after a significant warming trend began around 1975 (though it couldn’t be robustly detected for at least another decade). We can’t afford to wait a century for an answer. Can’t you add a quadratic term to a model with a linear component and 20- and 60-year sinusoidal components?
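Frank’s suggestion is straightforward to set up as an ordinary least-squares fit, since every term is linear in its coefficient. A minimal sketch on synthetic data (all amplitudes, phases and noise levels here are invented for illustration):

import numpy as np

# Fit level(t) = a + b*t + c*t^2 + 20- and 60-year sinusoids, then read
# the acceleration off the quadratic term (acceleration = 2c).
rng = np.random.default_rng(2)
t = np.arange(0, 120, 1 / 12)                      # years since record start, monthly

truth = (1.5 * t + 0.005 * t**2                    # trend + acceleration (mm)
         + 15 * np.sin(2 * np.pi * t / 60 + 0.4)   # quasi-60-year oscillation
         + 5 * np.sin(2 * np.pi * t / 20 + 1.1))   # quasi-20-year oscillation
level = truth + rng.normal(0, 20, t.size)          # add 20 mm monthly noise

# Design matrix: constant, t, t^2, and sin/cos pairs at both periods.
cols = [np.ones_like(t), t, t**2]
for period in (20.0, 60.0):
    cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, level, rcond=None)
print(f"recovered acceleration ~ {2 * coef[2]:+.4f} mm/yr^2 (true value +0.0100)")

The sin/cos pairs absorb the unknown phases, so the whole model stays linear. The catch, as the thread above notes, is that with only about a century of data the 60-year terms and the quadratic are partly degenerate, so the uncertainty on the recovered acceleration remains substantial.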