SEA LEVEL: Rise and Fall- Part 2 – Tide Gauges

Guest Essay by Kip Hansen

Why do we even talk about sea level and sea level rise?

tide-gauge_board

There are two important points which readers must be aware of from the first mention of Sea Level Rise (SLR):

  1. SLR is a real concern to coastal cities, low-lying islands, and coastal and near-coastal densely populated areas. It can be a real problem. See Part 1 of this series.
  2. SLR is not a threat to much else — not now, not in a hundred years — probably not in a thousand years — maybe, not ever. While it is a valid concern for some coastal cities and low-lying coastal areas, in a global sense, it is a fake problem. 

In order to talk about Sea Level Rise, we must first nail down Sea Level itself.

What is Sea Level?

In this essay, when I say sea level, I am talking about local, relative sea level — this is the level of the sea where it touches the land at any given point.  If we talk of sea level in New York City, we mean the level of the sea where it touches the land mass of Manhattan Island or Long Island, the shores of Brooklyn or Queens.  This is the only sea level of any concern to any locality.

There is a second concept also called sea level, which is a global standard from which elevations are measured.  This is a conceptual idea — a standardized geodetic reference point — and has nothing whatever to do with the actual level of the water in any of the Earth’s seas.  (Do not bother with the Wiki page for Sea Level — it is a mishmash of misunderstandings.  There is a 90 minute movie that explains the complexity of determining heights from modern GPS data — information from which will be used in the next part of this essay. Yes, I have watched the entire presentation, twice.)

And there is a third concept called absolute, or global, sea level, which is a generalized idea of the average height of the sea surface from the center of the Earth — you could think of it as the water level in a swimming pool which is in active use, visualizing that while there are lots of splashes and ripples and cannon-ball waves washing back and forth, adding more and more water (with the drains stopped up) would increase the absolute level of the water in the pool.   I will discuss this type of Global Sea Level in another essay in this series.

Since the level of the sea is changing every moment because of the tides, waves and wind, there is not, in reality, a single experiential water level we can call local Sea Level.  To describe the actuality, we have names for the differing tidal and water height states such as Low Tide, High Tide, and in the middle, Mean Sea Level.  There are other terms for the state of the sea surface, including wave heights and frequency and the Beaufort Wind Scale, which describes both the wind speed and the accompanying sea surface conditions.

This is what tides look like:

three_tide_patterns

Diurnal tide cycle (left). An area has a diurnal tidal cycle if it experiences one high and one low tide every lunar day (24 hours and 50 minutes). Many areas in the Gulf of Mexico experience these types of tides.

Semidiurnal tide cycle (middle). An area has a semidiurnal tidal cycle if it experiences two high and two low tides of approximately equal size every lunar day. Many areas on the eastern coast of North America experience these tidal cycles.

Mixed Semidiurnal tide cycle (right). An area has a mixed semidiurnal tidal cycle if it experiences two high and two low tides of different size every lunar day. Many areas on the western coast of North America experience these tidal cycles.
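For readers who like to see the bookkeeping behind these three categories: oceanographers conventionally classify a location's tide type from the amplitudes of its main harmonic constituents, using a "form factor" F = (K1 + O1) / (M2 + S2). A minimal sketch in Python — the threshold scheme is the conventional one, but the amplitude values in the example are invented for illustration:

```python
# Classify a location's tide type from its harmonic constituents using the
# standard "form factor" F = (K1 + O1) / (M2 + S2), where each symbol is the
# amplitude of that tidal constituent (K1, O1 diurnal; M2, S2 semidiurnal).

def tide_form_factor(k1: float, o1: float, m2: float, s2: float) -> float:
    """Ratio of the main diurnal to the main semidiurnal amplitudes."""
    return (k1 + o1) / (m2 + s2)

def classify_tide(f: float) -> str:
    if f < 0.25:
        return "semidiurnal"
    elif f < 1.5:
        return "mixed, mainly semidiurnal"
    elif f < 3.0:
        return "mixed, mainly diurnal"
    else:
        return "diurnal"

# Illustrative (invented) constituent amplitudes, in metres:
f = tide_form_factor(k1=0.15, o1=0.10, m2=0.80, s2=0.20)
print(classify_tide(f))  # F = 0.25, at the lower edge of "mixed, mainly semidiurnal"
```

A Gulf of Mexico station would typically land in the diurnal band, an eastern North American station in the semidiurnal band, and a western North American station in one of the mixed bands.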

This image shows where the differing types of tides are experienced:

Tide_types_world_map

Tides are caused by the gravitational pull of the Moon and the Sun on the waters of the Earth’s oceans. There are several very good tutorials online explaining the whys and hows of tides:   A short explanation is given at EarthSky here.  A longer tutorial, with several animations, is available from NOAA here (.pdf).

There are quite a number of officially established tidal states (which are just average numerical local relative water levels for each state) — they are called tidal datums and they are set in relation to a set point on the land, usually marked by a brass marker embedded in rock or concrete, a “bench mark” — all tidal datums for a particular tide station are measured in feet above or below this point.  An image of the benchmark for the Battery, NY follows, with an example of the tidal datums for Mayport, FL (the tidal station associated with Jacksonville, FL, which was recently flooded by Hurricane Irma):

bench_mark_KV0587

Mayport_station_datum

The Australians have slightly different names, as this chart shows  (I have added the U.S. abbreviations):

Australian_Datums

Grammar Note:  They are collectively correctly referred to as “tidal datums” and not “tidal data”.  Data is the plural form and datum is the singular form, as in “Computer Definition. The singular form of data; for example, one datum. It is rarely used, and data, its plural form, is commonly used for both singular and plural.”  However, in the nomenclature of surveying (and tides), we say “A tidal datum is a standard elevation defined by a certain phase of the tide.“  and call the collective set of these elevations at a  particular place “tidal datums”.

The main points of interest to most people are the major datums, from the top down:

MHHW – Mean Higher High Water – the mean of the higher of the day’s two high tides.   In most places, this is not much different from Mean High Water. In the Mayport example, the difference is 0.28 feet [8.5 cm or 3.3 inches].  In some cases, where Mixed Semidiurnal Tides are experienced, the two can be quite different.

MSL – Mean Sea Level – the mean of the tides, high and low.  If there were no tides at all, this would simply be local sea level.

MLLW – Mean Lower Low Water – the mean of the lower of the two daily low tides. In most places, this is not much different from Mean Low Water.  In the Mayport example, the difference is 0.05 feet [1.5 cm or 0.6 inches].  Again, it can be very different where mixed tides are experienced.
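These datum definitions are easy to sketch in code. The following is a simplified illustration only — official NOAA datums are computed over a full 19-year National Tidal Datum Epoch, and true MSL is the mean of hourly heights rather than of the tide extremes; the few days of heights below are made up:

```python
from statistics import mean

# Sketch of how the major tidal datums fall out of a record of daily highs
# and lows. Heights are in feet above the station benchmark, purely invented.
# Each day: (higher high, lower high, higher low, lower low)
days = [
    (5.1, 4.8, 0.6, 0.2),
    (5.3, 4.9, 0.7, 0.3),
    (5.0, 4.7, 0.5, 0.1),
]

mhhw = mean(d[0] for d in days)              # mean of each day's higher high water
mhw  = mean(h for d in days for h in d[:2])  # mean of all high waters
mlw  = mean(l for d in days for l in d[2:])  # mean of all low waters
mllw = mean(d[3] for d in days)              # mean of each day's lower low water
msl  = mean(h for d in days for h in d)      # crude stand-in for mean sea level

print(f"MHHW={mhhw:.2f}  MHW={mhw:.2f}  MSL={msl:.2f}  MLW={mlw:.2f}  MLLW={mllw:.2f}")
```

Note how MHHW sits a little above MHW, and MLLW a little below MLW, exactly as in the Mayport figures above; with strongly mixed tides those gaps widen.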

Here’s what this looks like on a beach:

Beach_Tides

On a beach, Mean Sea Level would be the vertical midpoint between MHW and MLW.

The High Water Mark is clearly visible on these pier pilings where the growth of mussels and barnacles stops.

high_water_mark_on_pilings

And Sea Level?  At the moment, local relative sea level is obvious — it is the level of the sea.  There is nothing more complicated than that, at any time one can see and touch the sea.   If one can note the high water mark and observe the water at its lowest point during the 12-hour-and-25-minute tide cycle, Mean Sea Level is the midpoint between the two.  Simple!

[Unfortunately, in all other senses, sea level, particularly global sea level, as a concept,  is astonishingly complicated and complex.]

For the moment, we will stay with local Relative Mean Sea Level (the level of the sea where it touches the land).

How is Mean Sea Level measured, or determined, for each location?

The answer is:

Tide Gauges

tide-gauge_board

Tide Gauges used to be pretty simple — a board looking very much like a ruler sticking up out of the water, the water level hitting the board at various heights as the tides came and went, giving passing vessels an idea of how much water they could expect in the bay or harbor.  This would tell them whether or not their ship would pass over the sand bars or become grounded and possibly wrecked.  One name for this type of device is a “tide staff”.

Since that time, tide gauges have advanced and become more sophisticated.

tide_guages

The image above gives a generalized idea of the older style float and stilling well tide gauges and the newer acoustic-sensor gauges with satellite reporting systems and a back-up pressure sensor gauge.  Modern ships and boats retrieve tide data (really, predictions) on their GPS or chart-plotting device which tells them both magnitude and timing of tides for any day and location.  Details on the specs of various types of tide gauges currently in use in the U.S. are available in a NOAA .pdf file, “Sensor Specifications and Measurement Algorithms”.

The newest Acoustic sensor — the “Aquatrak® (Air Acoustic sensor in protective well)” — has a rated accuracy of “Relative to Datum ± 0.02 m  (Individual measurement) ± 0.005 m (monthly means)”.  For the decimal-fraction impaired, that is a rating of plus/minus 2 centimeters for individual measurements and plus/minus 5 millimeters for monthly means.

Being as gentle as possible with my language, let me point out that the rated accuracy of the monthly mean is a mathematical fantasy.  If each measurement is only accurate to ± 2 cm,  then the monthly mean cannot be MORE accurate than that — it must carry the same range of error/uncertainty as the original measurements from which it is made.   Averaging does not increase accuracy or precision.

[There is an exception — if they were averaging 1,000 measurements of the water level taken at the same place and at the same time, then the average would increase in accuracy for that moment at that place, as it would reduce random errors between measurements; but it would not reduce any systematic errors.]
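The distinction between random and systematic error in that exception can be shown with a toy simulation (all figures invented): averaging many simultaneous readings of the same water level beats down the random scatter, but the average converges on the true level plus the instrument's bias, never on the true level itself.

```python
import random

# Toy demonstration: averaging reduces random error but not systematic bias.
random.seed(42)

TRUE_LEVEL = 1.000   # metres: the "real" water level at one instant (invented)
RANDOM_SD  = 0.020   # ±2 cm random scatter per reading (invented)
BIAS       = 0.015   # a fixed 1.5 cm systematic offset in the sensor (invented)

def one_reading() -> float:
    """A single instantaneous reading: truth, plus bias, plus random noise."""
    return TRUE_LEVEL + BIAS + random.gauss(0.0, RANDOM_SD)

readings = [one_reading() for _ in range(1000)]
avg = sum(readings) / len(readings)

# The average lands close to TRUE_LEVEL + BIAS, not close to TRUE_LEVEL:
print(f"average of 1000 readings: {avg:.4f} m  (true + bias = {TRUE_LEVEL + BIAS:.4f} m)")
```

No amount of further averaging would recover the 1.000 m truth; only identifying and removing the bias would.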

Thus, as a practical matter, Local Mean Sea Levels, with the latest Tide Gauges, give us a measurement accurate to within ± 2 centimeters, or about ¾ of an inch.  This is far more accuracy than is needed for the originally intended purpose of Tide Gauges — which is to determine water levels at various tide states to enable safe movement of ships, barges and boats in harbors and in tidal rivers.   The extra accuracy does contribute to the scientific effort to understand tides and their movements, timing, magnitude and so forth.

But let me repeat this for emphasis, as it will become important later when we consider the use of this data in attempts to determine Global Mean Sea Level from Tide Gauge data: although Local Monthly Mean Sea Level figures are claimed to be accurate to ± 5 millimeters, they are in reality limited to the ± 2 centimeter accuracy of their original measurements.

 

What constitutes Local Relative Sea Level Change?

Changing Local Relative Mean Sea Level determined by the tide station at the Battery, NY (or any other place) could be a result of the movement of the land and not the rising of the sea.  In reality, at the Battery,  it is both; the sea rises a bit, and the land sinks (or subsides) a bit, the two motions adding up to a perceived rise in local mean sea level.  I use the Battery, NY as an example as I have written about it several times here at WUWT. (see the important corrigendum at the beginning of the essay there – kh)  In summary, the land mass at the Battery is sinking at about 1.3 mm/year, about 2.6 inches over the last 50 years.  The sea has actually risen, during that same time, at that location, about 3.34 inches — the two figures adding up to the 6 inches of apparent Local Mean Sea Level Rise experienced at the Battery between 1963 and 2013 reported in the New York State Sea Level Rise Task Force Report to the Legislature — Dec 31, 2010.

This is true of every tide gauge in the world that is attached directly to a land mass (not ARGO floats, for instance) — the apparent change in local relative MSL is the arithmetic combination of change in the actual level of the sea plus the change resulting from the vertical movement of the land mass. Sinking/subsiding land mass increases apparent SLR, rising land mass reduces apparent SLR.
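The arithmetic combination described above is trivial but worth making explicit. A sketch using the Battery figures from this essay, with the sign convention that vertical land motion is positive upward (so subsidence is negative):

```python
# Apparent (relative) sea level change at a tide gauge = actual change in the
# sea surface minus the vertical motion of the land the gauge is attached to.

def apparent_slr(sea_rise: float, land_motion: float) -> float:
    """Both arguments in inches over the same period; land_motion < 0 = subsidence."""
    return sea_rise - land_motion  # sinking land ADDS to the apparent rise

# Battery, NY, 1963-2013: ~3.34 in of actual sea rise, ~2.6 in of subsidence:
print(f"{apparent_slr(sea_rise=3.34, land_motion=-2.6):.2f} inches")  # 5.94, i.e. the ~6 in observed
```

Run with a rising land mass instead (land_motion positive), and the apparent rise shrinks or even turns into an apparent fall, as on parts of the Alaskan coast discussed below.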

We know from NOAA’s careful work that the sea is not rising equally everywhere:

uneven_SLR

[Note: image shows satellite derived rates of sea level change]

nor are the seas flat:

lumpy_sea

This image shows a maximum difference of about 2 meters (over 66 inches) in sea surface heights — very high near Japan and very low near Antarctica, with quite a bit of lumpiness in the Atlantic.

The NGS CORS project is a network of Continuously Operating Reference Stations (CORS), all on land, that provide Global Navigation Satellite System (GNSS) data in support of three dimensional positioning.  It represents the gold standard for geodetic positioning, including the vertical movement of land masses at each station.

In order for tide gauge data to be useful in determining absolute SLR (not relative local SLR) — actual rising of the surface of the sea in reference to the center of the Earth — tide gauge data must be coupled to reliable data on vertical land movement at the same site.

As we have seen in the example of the Battery, in New York City, which is associated with a coupled CORS station, the vertical land movement is of the same magnitude as the actual change in sea surface height —  2.6 inches of downward land movement and 3.34 inches of rising sea surface.  In some locations of serious land subsidence, such as the Chesapeake Bay region of the United States, downward vertical land movement exceeds rising water. (See The Chesapeake Bay Bolide Impact: A New View of Coastal Plain Evolution and Land Subsidence and Relative Sea-Level Rise in the Southern Chesapeake Bay Region )  In some parts of the Alaskan coast, sea level appears to be falling due to the uplifting of the land resulting from 6,000 years of glacial melt.

falling_sea_levels_Alaska

Who tracks Global Sea Level with Tide Gauges?

The Permanent Service for Mean Sea Level (PSMSL) has been responsible for the collection, publication, analysis and interpretation of sea level data from the global network of tide gauges since 1933. In 1985, they established the Global Sea Level Observing System (GLOSS), a well-designed, high-quality in situ sea level observing network to support a broad research and operational user base. Nearly every study published about Global Sea Level from tide gauge data uses PSMSL databases.   Note that this is pre-satellite-era technology — the measurements in the PSMSL database are in situ measurements — measurements made in place at the location — they are not derived from satellite altimetry products.

This feature of the PSMSL data has positive and negative implications.  On the upside, as it is directly measured, it is not prone to satellite drift, instrument drift and error due to aging, and a host of other issues that we face with satellite-derived surface temperature, for instance.  It gives very reliable and accurate (to ± 2 cm) data on Relative Sea Levels — the only sea level data of real concern for localities.

On the other hand, those tide gauges attached to land masses are known to move up and down (as well as north, south, east and west) with the land mass itself, which is in constant, if slow, motion.  The causes of this movement include glacial isostatic adjustment, settling of land-filled areas, subsidence due to the pumping of water out of aquifers, gas and oil pumping, and the natural processes of settling and compacting of soils in delta areas.  Upward movement of land masses results from isostatic rebound and other general movements of the Earth’s tectonic plates.

For PSMSL data to be useful at all for determining absolute (as opposed to relative) SLR, it obviously must first be corrected for vertical land movement.  However, search as I might, I was unable to determine from the PSMSL site that this was the case.  The question in my mind: is it possible that the world’s premier gold-standard sea level data repository contains data not corrected for the most common confounder of that data?  I emailed the PSMSL directly and asked this simple question:  Are PSMSL records explicitly corrected for vertical land movement?

The answer:

“The PSMSL data is supplied/downloaded from many data suppliers so the short answer to your question is no. However, where possible we do request that the authorities supply the PSMSL with relevant levelling information so we can monitor the stability of the tide gauge.”

Note: “Leveling” does not relate to vertical land movement but to the attempt to ensure that the tide gauge remains vertically constant in regards to its associated geodetic benchmark.

If PSMSL data were corrected for at-site vertical land movement, then we could determine changes in actual or absolute local sea surface levels, which could then be used to determine something that might be considered a scientific rendering of Global Sea Level change.  Such a process would be complicated by the reality of geographically uneven sea surface heights, geographic areas with opposite signs of change, and uneven rates-of-change. Unfortunately, PSMSL data is currently uncorrected, and very few (a relative handful) of sites are associated with continuously operating GPS stations.
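The correction itself would be simple arithmetic if a collocated, CORS-style GPS vertical velocity were available for each gauge; it is the availability and quality of those velocities that is the problem. A sketch of the calculation, with rates invented for illustration (up is positive):

```python
# Recover an estimate of the absolute sea-surface trend from a tide gauge's
# relative trend plus a collocated GPS estimate of vertical land motion:
#   absolute trend = relative trend + land velocity   (all mm/yr, up positive)

def absolute_trend(relative_trend_mm_yr: float, land_velocity_mm_yr: float) -> float:
    """Subsiding land (negative velocity) inflates the relative trend, so adding
    the (negative) velocity removes that inflation."""
    return relative_trend_mm_yr + land_velocity_mm_yr

# A gauge showing 3.0 mm/yr of relative rise on land subsiding at 1.3 mm/yr
# would imply only 1.7 mm/yr of actual sea-surface rise:
print(f"{absolute_trend(3.0, -1.3):.1f} mm/yr")
```

The hard part, again, is not this line of code but obtaining a trustworthy land velocity for each of the world's tide gauge sites.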

 

What this all means

The points made in this essay add up to a couple of simple facts:

  1. Tide Gauge data is invaluable for localities in determining tide states, sea surface levels relative to the land, and the rate of change of those levels — the only Sea Level data of concern for local governments and populations. However, Tide Gauge data, even the best station data from the GLOSS network, is only accurate to ±2 centimeters. All derived averages/means of tide gauge data including daily, weekly, monthly and annual means are also only accurate to ±2 centimeters.  Claims of millimetric accuracy of means are unscientific and insupportable.
  2. Tide gauge data is worthless for determining Global Sea Level and/or its change unless it has been explicitly corrected by on-site CORS-like GPS reference station data capable of correcting for vertical land movement. Since the current standard for Tide Gauge data, the PSMSL GLOSS, is not corrected for vertical land movement, all studies based on this uncorrected PSMSL data producing Global Sea Level Rise findings of any kind — magnitude or rate-of-change — are based on data not suited for the purpose, are not scientifically sound and do not, cannot, inform us reliably about Global Sea Levels or Global Sea Level Change.

 

# # # # #

Author’s Comment Policy:

I am always eager to read your comments and to try and answer your on-topic questions.

Try not to jump ahead of the series in comments — this essay covers only the issues of Tide Gauges, the accuracy of their data and the implications of these details.

I will cover, in future parts of the series: How is sea level measured by satellites?  How accurate are satellite sea level measurements anyway?  Do we know that sea level is really rising? If so, how fast is it rising?  Is it accelerating? How can we know?  Should I sell my sea front property?

Please remember, Sea Level Rise is an ongoing Scientific Controversy.  This means that great care must be taken in reading and interpreting new studies and especially media coverage of the topic — bias and advocacy are rampant, opposing forces are firing repeated salvos at one another in the journals and in the press.  In the end, the current consensus — both the alarmist consensus and the skeptical consensus — may well simply be an accurate measure of the prevailing bias in the field from each perspective.  (h/t John Ioannidis)

# # # # #

 


Here’s a 2016 paper in Nature Climate Change, by Aimée Slangen, John Church, and four other authors, which told us when it was that anthropogenic forcings had kicked in and begun driving sea-level rise:
http://www.nature.com/nclimate/journal/v6/n7/full/nclimate2991.html
Slangen & Church were both at CSIRO (in Australia), so I annotated a NOAA graph of sea-level at Australia’s longest tide gauge, to illustrate the findings of that paper:
http://www.sealevel.info/680-140_Sydney_2016-04_anthro_vs_natural.png
Now, why do you suppose they didn’t include a graph like that in their paper?

Tom13

“Anthropogenic forcing dominates global mean sea-level rise since 1970”
The title of Church’s article in Nature – hat tip daveburton

Hugs

This is awesomely funny! Yeah, why not, that would finally finish out the unnecessary debate.
Of course, the people touting CAGW-SLR talk about open water, satellite-measured SLR with sea bottom adjustment, and mix that with local relative mean sea level in informal communication. Add some cherry picking and you get +100% acceleration, add some exponential fit and we’ll all drown, DROWN I tell you.
Kip will return to this later…

rocketscientist

um…if it’s a linear trend (doubtful) are the authors implying that somewhere between 1950 and 1970 natural forcings began shutting down?
If so, it’s a damn good thing mankind had stepped up to take over so that old mother nature could put her feet up for a bit. What astounds me is that mankind was able to pick up the slack in the natural decline so perfectly as not to allow a dip in the linear trend. [sarc]

richard verney

Whilst that plot clearly demonstrates that there is no correlation between rising levels of CO2 and the rate of sea level rise (no acceleration in the rate of change), one of the important points to emerge from that plot is that tide level rise was virtually flat for some 20 years, between ~1960 and 1980, and there was only a very slight rise during the 30-year period between 1960 and 1990.

richard verney

To avoid confusion, my comment is referring to the plot posted by Don B

Duane

Scarlet’s statement: “The land movement at any place is essentially constant* in the hundred year time frame.”
It is quite well documented that localized phenomena, such as large scale pumping of hydrocarbons or groundwater, can have a very rapid effect on land elevations relative to the center of the earth. Other phenomena, including the construction of dams that create large freshwater impoundments and the erection of tall buildings, can affect local land elevations. The Wilmington oil field near Long Beach is an extreme example, having experienced over 30 feet of subsidence since production of oil began in the mid 1920s.
As for the other processes that operate on geologic timescales, such as rebounding from the former ice sheet in northern North America, or tectonic subduction at plate boundaries, yes, those are going to be relatively constant over a 100 year timeframe.

Tom Halla

Yet another case, like temperature, of claiming more accuracy in a compilation than existed in the original measurement. How repeated measurements of different things somehow become more accurate is beyond me.

Clyde Spencer

Tom,
I think that you mean precision rather than accuracy.

donb

Precision, in principle, may be increased with number of measurements. Accuracy may be biased due to unknown factors in data taking, in which case increased number of data may not increase accuracy.
If one measures the length of a rope QUICKLY using a ruler, increasing the number of measurements can increase precision of the rope’s length.
But if one thought the ruler was graduated in inches, but it really was graduated in centimeters, the accuracy of the measurements would be poor, no matter what the precision.

donb, using anomalies fixes the problem of bias, when you are interested in measuring the change that occurs over time.

MarkW

Only if the bias is constant.

donb

Yes. An example being if the ruler is made of metal and the temperature changes during the measurements.

rocketscientist

For a quick visualization of accuracy v. precision:
Accuracy is having 3 shots land within the center target.
Precision is having a tight 3 shot grouping regardless of location on the target.

>>
rocketscientist
October 8, 2017 at 9:37 am
<<
That explains the difference between accuracy and precision; however, in real life, there is no target present. Precision is the easy part. Accuracy can only be inferred from repeated measurements–preferably using different techniques and hoping that systematic and random errors are cancelled out.
Jim

Mark Fife

My understanding of precision is the level of refinement in a measurement. Meaning a micrometer which reads down to the nearest .0001″ is more precise than one which is graduated down to the nearest .001″. Whereas accuracy is a measure of the difference between measured and actual. Averaging repeated measurements of a characteristic of the same object will not yield greater precision, as precision is determined by the instrument used. You cannot produce a higher resolution than that provided by the individual measurements.

Hugs

Repeating a measurement lowers random errors. That’s why I measure twice to use the circular saw only once.

old white guy

when the water is over your head make sure you can swim or have a boat.

Hugs

If the water is up to your ears, recycling it is an option. Old toilet adage.

Paul Penrose

This is true, but you have to be measuring the same thing, which means it can’t be changing between measurements. Usually we accomplish that by measuring it quickly multiple times. For example, for a tide gauge say we are recording a single datum once each hour. For that hourly datum we might actually make a thousand measurements during a one minute period and average them together. That would reduce the single measurement error due to noise in the system. However averaging the 24 readings for one day to a single value won’t reduce the error any further. This is the common mistake, and an elementary one at that.

AndyHce

But when a thousand people measure a thousand different pieces of wood using a thousand different tools, how can there be any increase in either accuracy or precision in the sum and average of all those measurements?

Paul Penrose wrote, “…for a tide gauge say we are recording a single datum once each hour. For that hourly datum we might actually make a thousand measurements during a one minute period and average them together.”
Tide gauges do that short-term averaging mechanically.
You start with what’s called a “stilling well” — just a vertical pipe, fastened securely to something solid, with the low end submerged, sealed at the bottom except for a small hole. Inside the pipe you have a nice, quiet “sea level” which rises and falls with the tides, but not with the waves. That’s why they call it a “tide gauge.”
The stilling well averages out the waves, but not the tides. There are no waves, chop, swell or foam in a stilling well.
You use surveying techniques to precisely measure the height of the tide gauge relative to nearby geodetic “survey markers” (permanent geographical benchmarks), so that if a big storm washes away your tide gauge you can replace it without introducing a step-change in the data.
Then, in the simplest case, you just dip a measuring stick (a “tide pole” or “tide staff”) into the pipe, and read the water level periodically.
If you read it on a rigorous schedule, based on the timing of the tides, you can get a good quality sea-level record with nothing more than a stilling well and a measuring stick. Some such measurement records go back more than 200 years!
As long as you follow well-established best practices (don’t let the pipe fill up with mud, don’t let the hole near the bottom get plugged, etc.), tide gauges are simple, elegant, precise, and reliable.
Note that even in the 19th century they had strong incentives to not botch or fudge their readings, because the measurement sites were usually near channels and harbors, and if they didn’t know the correct water levels and accurately predict the tides, ships might run aground! I trust 19th century tide gauge measurements, done by hand with a tide stick, more than I trust 21st century satellite altimetry, for sea-level measurement.
An improvement is to put a float in the stilling well, and connect the float to a pen on a strip-chart recorder, for continuous readings, as shown in this diagram:
http://www.sealevel.info/tide_gauge_diagram_01.jpg
Here’s a photograph of one such tide gauge, on display in a Swedish museum.
Of course, modern tide gauges use somewhat fancier methods. But it really doesn’t matter very much whether you have a human being reading a tide stick on a schedule synchronized with the tides, or a float attached to a strip-chart recorder, or an acoustical sounder phoning home its readings 10× per hour. You’ll get pretty much the same numbers for MSL, HWL, LWL, etc.
Note: when upgrading your tide-gauge to use improved technology, it is very easy to ensure that the new system doesn’t bias the data. Just keep an old-fashioned tide stick in the stilling well, and check it against your strip-chart recorder or acoustic sounder readings, for consistency.
 
The contrasts with temperature measurements and satellite altimetry are pretty obvious:
With temperatures you never know when the minimum and maximum will be reached, so even if you used a min-max thermometer your time-of-observation (“TOBS“) could introduce a bias (“correction” of which is an opportunity for introducing other biases). That’s not a problem for sea-level measurement with tide gauges.
With temperatures, the surroundings can greatly influence the readings. That’s generally not a problem for sea-level measurement with tide gauges (though channel silting and dredging can sometimes have an effect on some locations, especially on tidal range).
With temperature measurements, changes in instrumentation, or even in the paint used on the Stevenson Screen, can change your readings. Analogous issues affect satellite altimeters, too, as is obvious by the differences between the measurements from different satellites. But it’s not a significant problem for sea-level measurement with tide gauges.
Also, unlike tide gauges, which are referenced to stable benchmarks, there’s no trustworthy reference frame in space, to determine the locations of the satellites with precision. NASA is aware of this problem. In 2011 NASA proposed (and re-proposed in 2014 / 2015) a new mission called the Geodetic Reference Antenna in SPace (GRASP). The proposal is discussed here, and its implications for measuring sea-level are discussed here. But, so far, the mission has not flown.
Satellite measurements are affected/distorted by mid-ocean sea surface temperature changes, and consequent local steric changes, which don’t affect the coasts.
The longest tide-gauge measurement records are about 200 years long (with a few gaps)! The longest satellite measurement records are about ten years, and the combined record from all satellites is less than 25 years, and the measurements are often inconsistent from one satellite to another:
http://sealevel.info/MSL_Serie_ALL_Global_IB_RWT_NoGIA_Adjust_2016-05-24.png
With temperatures, researchers often go back and “homogenize” (revise) the old data, to “correct” biases that they believe might have distorted the readings. The same thing happens with satellite altimetry data. But it doesn’t happen with sea-level measurement at a particular location by a single tide gauge.
Unlike tide-gauge measurements (but very much like temperature indices), satellite altimetry measurements are subject to sometimes-drastic error and revision, in the post-processing of their data (h/t Steve Case):
http://www.sealevel.info/U_CO_SLR_rel2_vs_rel1p2_SteveCase.png
http://www.sealevel.info/2061wtl.jpg
Those are graphs of the same satellite altimetry data, processed differently. Do you see how much the changes in processing changed the reported trend? In the case of Envisat (the last graph), revisions/corrections which were made up to a decade later tripled the reported trend.

John Bell

I wonder what place on the oceans has the least tide?

Thinking that the poles would be less directly affected by sun/moon gravity. Perhaps I should say differently, as the pull would be less of an effect at poles. But then I am not a scientist.

Don K

Tides are extremely complex. The Wikipedia article on tides https://en.wikipedia.org/wiki/Tide will tell you more than you want to know about the subject. There are a few regions — the Mediterranean and the Gulf of Mexico — that have minimal tides. I think I read a few years ago that there are a few spots (six, I think) in the open oceans where tides would be close to zero were there any land there from which to observe them. I have no idea where I read that or whether it is correct.

RAH

A study of the hydrographic surveys done in preparation for D-day Normandy brings home just how much tidal conditions can vary across a stretch of just 50 miles of coastline. I imagine most people don’t know that the landings at the British beaches started a full hour after those on the US beaches. Part of that difference was due to the presence of sandbars off some of the British beaches, which sloped even more gradually than the US beaches, and part was due to a later high tide.

David A

It would vary with 18 year lunar cycles….

indefatigablefrog

This is all far too simple.
If we allow the public to understand that sea level is measured at a number of relevant locations on the coast, and over a relevant period of time before and after industrialization, then they may spot that nothing all that remarkable or concerning has occurred.
What needs to be done is to show the tide gauge methodology until 1993, and then jump to another methodology generated via a flawed interpretation of satellite altimetry data.
Then chuck in some dodgy calibrations and adjustments.
And – BINGO!! – a hockey stick graph.
Everybody likes a hockey stick. Don’t they?
http://www.columbia.edu/~mhs119/SeaLevel/SL.1900-2016.png

Joe - the non climate scientist

Two points
1) Prior to the satellite adjustment, the tide gauges ran at roughly 1.5 mm/yr and the satellites at roughly 3.x mm/yr, both with about the same acceleration (basically a doubling of the rate over 150 years), i.e., the rate of sea level rise would reach about 6 mm/yr after 150 years.
2) They adjusted/“recalibrated” the satellite rate of sea level rise so that it matched the tide gauges in 1993, notwithstanding that the satellite record does not match the tide gauges today.

Tom13

Good point – wish I had remembered that adjustment a few months ago –
Skeptical Science ran their typical article on the acceleration in the rate of SLR, along with the likelihood of a doubling of the rate in just 20 or so years, and their frequent commentary on a 3–6 foot rise by the end of the century.
The posters did not seem to grasp that almost the entire increase in the rate of acceleration was due to the change in the method of measurement, not to any empirical change in the rate of sea level rise.

indefatigablefrog

Well they really do seem to behave like a throng of uncritically starry-eyed true believers.
Nobody contributing to SkS seems to have the capacity to question even the most grotesquely blatant distortions and misdirections. The fact that they call themselves skeptical is really quite shocking.
Perhaps they do really unskeptically believe that they are skeptical.
Even when they discovered that Al Jazeera wanted to promote their website, nobody there was capable of noticing that a propaganda outlet funded by Qatar might have skewed motives:
http://www.populartechnology.net/2012/09/skeptical-science-from-al-gore-to-al.html

David A

” Nobody contributing to SkS seems to have the capacity to question even the most grotesquely blatant distortions”
Indeed! Real skeptics are censored at SKS.

Indefatigablefrog, I think that graph is Hansen’s, right?
Tony Heller (a/k/a Steven Goddard) memorably called that the “IPCC Sea Level Nature Trick,” to make the point that such spliced graphs molest the sea-level data much like Mann molested temperature data with his “Nature Trick.” Both conflate two very different kinds of data to create a misleading apparent trend.
(In fairness to Hansen, though, at least his bogus sea-level graph draws the two different sorts of measurements in different colors. Mann didn’t do that.)

Andy & MarkW…..adjustments are adjustments are adjustments are adjustments. Doesn’t matter if they are temperature or sea level adjustments, they are all ADJUSTMENTS.

AndyG55

Still haven’t figured out the difference between validated technical engineering adjustments…
… and agenda driven fantasy adjustments, have you Mark’s johnson.

indefatigablefrog

I think that that particular example may be “Hansen on steroids”.
But similar examples can be found by googling “sea level rise columbia”.
I found it originally in Columbia University educational material.
And yes, Hansen’s name is associated with a very similar presentation.
It’s shocking to think that university students are being presented with this guff, and then expected to uncritically believe what the graph appears to show.
Quite clearly there has NOT been a critical step change in SLR rate occurring in 1993.
If an apparent step change is produced by the switch between methodologies, then surely we should suspect that the switch is the only cause. Obviously.
The fact that Hansen was happy to attempt to pass this off, is only more evidence of his progressive derangement, as his earlier predictions fail to manifest within his lifetime.

Sorry, when I wrote “Indefatigablefrog, I think that graph is Hansen’s, right?” I meant James Hansen, not Kip, and that’s a link to the web page where he and Makiko Sato have a very-frequently-updated version of the hockey stick sea-level graph which Indefatigablefrog posted:
http://www.columbia.edu/~mhs119/SeaLevel/
The graph is the 2nd figure on that page.
Also, some of the older versions can be retrieved from the Wayback Machine:
http://web.archive.org/web/*/http://www.columbia.edu/~mhs119/SeaLevel/

Mark: adjustments are necessary to correct for known biases. How would you prefer to correct for these biases?

AndyG55

Early satellite “adjustments” started around 2002/3, just when they needed some actual SLR.

Current satellite temperature data is also “adjusted.” For example, UAH 5.6 versus UAH 6.0

MarkW

The reason and method for the satellite adjustments are published and are very defensible.
Neither is true for the ground based network.

AndyG55

You really are a brain-washed AGW sycophant/cultist, aren’t you, Mark’s johnson.
So funny watching your ignorant inane remarks.
Adjustments:
UAH ..known technical engineering issues, validated
Satellite SLR..: agenda whim, non-validated.
Note that early TOPEX matched tide gauges well… then the AGW scam got started.
Everything above about 2mm/year in the satellite SL is from “adjustments™”

AndyG55: ” You really are a brain-washed AGW sychophant/cultist, aren’t you”

LOL name calling?
..
https://www.realskeptic.com/2013/12/23/anthony-watts-resort-name-calling-youve-lost-argument/

AndyG55

Do you DENY you are brain-washed?
Do you DENY you are an AGW cultist.
Not name-calling at all.
Just facts.
Learn the difference.
(Andy,drop this useless chatter,debate instead) MOD

Mark: UAH data are adjusted each and every month. It’s not difficult to understand why, if you read their papers.
(Crackers: Warning — I will not tolerate off-topic trolling on this essay. This essay is about Tide Gauges and Sea level. Stick with that please. — kh)

EW3

Having worked as a GPS engineer for (too) many years, I can tell you that no satellite can produce millimeter accuracy of sea level. Orbits are just not that stable. GPS birds are not accurate to mm/year, even with daily adjustments to their ephemeris data.
And if someone says the satellites used for altimetry rely on GPS data, they should appreciate GPS is not very accurate in the vertical dimension.

Thank you for that, EW3. Dr. Willie Soon agrees with you. He explains the problems starting at 17:37 in this very informative hour-long lecture:

NASA agrees with you too, I think. At least it seems like they agree with you, when they argue for the proposed GRASP (Geodetic Reference Antenna in SPace) mission.
BTW, if your identity is not a secret, I’d be grateful for an email. My address is here:
http://sealevel.info/contact.html

Bill Illis

GPS is not accurate on a day-to-day basis but once a GPS station is operating for 5 years or so, a definitive signal emerges which is accurate to the tenth of a millimetre.
Sonel.org maintains a database of GPS stations which are co-located with tide gauges, and there are more than 200 co-located stations operating past the 5-year mark now.
This is the local land uplift around the world (there is a newer version of this now, but the graphic available is not very good).
http://www.sonel.org/IMG/png/ulr5_vvf.png
The data can be obtained here:
http://www.sonel.org/-Sea-level-trends-.html?lang=en
1960-1992, GPS adjusted tide gauges – 1.82 mms/year.
1992 to 2013, GPS adjusted tide gauges – 2.12 mms/year.
In 2013, GPS adjusted tide gauges -0.345 mms.
In 2012, GPS adjusted tide gauges +4.25 mms.
In 2011, GPS adjusted tide gauges +2.79 mms.
Since sea level changes with the ENSO, we should expect a large rise in 2015 and then a decline in 2016 and 2017.

Bill Illis wrote, “Since sea level changes with the ENSO…”
It depends on where you are. In San Diego, and in the satellite-altimetry graphs, sea-level changes with ENSO. But in the western tropical Pacific sea-level changes opposite to ENSO.
Here’s the J.Hansen / M.Sato graph showing the strong positive correlation between ENSO and satellite altimetry measurements of sea-level:
http://www.columbia.edu/~mhs119/SeaLevel/SL+Nino34.png
But look how San Diego and Kwajalein are mirror-opposites of each other:
http://sealevel.info/1820000_Kwajalein_San_Diego_2016-04_vs_ENSO.png
With proper weightings, it should be possible to build a “global sea-level” index/average from coastal tide-gauges which mostly eliminates the ENSO influence.
Because in the western tropical Pacific sea-level changes opposite to ENSO, I posit that you should be able to also construct a good ENSO proxy by calculating the ratio of news stories about “record high temperatures” to news stories about “drowning island paradises.”

Rah

Bill Illis, thank you. I will keep that Sonel link. Now I have a question.
How can there be GPS data available for the 1960s? Though the Navy had a limited system up in the 70s, the 24-satellite NAVSTAR system we know as GPS today did not become fully operational until 1993.

u.k.(us)

@ Bill Illis,
I always try to read your carefully constructed comments.
So it surprises me that you say ” a definitive signal emerges which is accurate to the tenth of a millimetre.”
Our little orb is being stretched by forces, call it gravity or what you will, and you really think there is some kind of center point from which all these forces can be measured?

Bill Illis

The idea is that a local land uplift or subsidence rate is a geologic phenomenon. The rate will be stable for decades if not thousands of years.
Most of the local GPS uplift/subsidence rates will be defined by the Earth rebounding/adjusting from the last ice age glacial loads. These rates have probably changed some through time, but for the last several thousand years they would have been very stable.
The other two impacts will be from:
– tectonic movement (again a million-year type time-frame, although a recent local earthquake can occasionally influence the GPS signal; these events are treated as break-points when they happen); and then,
– underground water depletion or resupply (which is stable enough in terms of a decade or more).
Thus, the GPS rate of the last 5 years is probably the rate that has existed for at least a few decades if not thousands of years.

Clyde Spencer

Bill Illis.
You said, “Thus, the GPS rate of the last 5 years is probably the rate that has existed for at least a few decades if not thousands of years.” Major faults such as the San Andreas have an average lateral motion of about 2 cm per year, but sections can become locked and move much less — until they release! These dominantly strike-slip faults also have a vertical component as well. The only way that the average vertical motion over thousands of years can be calculated is to calculate the average of the episodic events, not through monitoring a short, quiet interval in-between events.

EW3 – that’s why the GRACE mission (2 sats) was so important.

Bill Illis

GPS can measure vertical movement as well as East-West and North-South movement. This has revolutionized the study of continental drift (we actually know the movements now).
This is the data from the GPS station on the western side of the San Andreas fault at San Francisco (Tiburon Peninsula). The data is actually quite stable other than the earthquakes. Almost all GPS stations are like this with fairly stable trends. Wait five years and that is enough to be reasonably sure.
– west at 19.0 mms/year;
– north at 25.0 mms/year; and,
– up 1.0 mms/year vertically (although a Magnitude 5 earthquake in 1999 shifted the station up by 120 mms)

Bill Illis

Kip Hansen October 8, 2017 at 3:07 pm
The data comes from the USGS although Sonel.org also uses this station (Sonel’s charts can’t be linked to since I imagine they are right in the middle of the great global warming debate so they need to be careful and obscure at times).
https://earthquake.usgs.gov/monitoring/gps/SFBayArea/tibb
http://www.sonel.org/spip.php?page=gps&idStation=3024

Clyde Spencer

Bill Illis,
The two blue diagonal lines (EW & NS) represent the nominal 2 cm/yr ‘creep’ that takes place along unlocked sections of the fault line. It is generally thought that the creep does not relieve all the stress and therefore abrupt movements (earthquakes) can be anticipated at multi-centennial intervals to release the stored strain. The blue lines do not represent the long-term behavior of the faults.

Kip wrote, “For the complexity of the calculations that must be performed to arrive at the long-term trend or vertical movement, you might watch the 1.5 hour presentation on how this needs to be done to be accurate.”
When I started to play the video, it reported that the total length is 3 hours! Yikes!
The web version uses FlashPlayer, so the speed is not adjustable. But there’s a link to an .mp3 version. I guess the thing to do is download the .mp3 version and play it in VLC or similar, so that it can be sped up to save some time.
Thanks for the link!

Hugs

This grafting exercise has a certain taste of Excel in it. Who was the author, by the way?

Hugs

Hansen I’m told, but not sure?

Are you talking about the 2nd graph on this J.Hansen / M.Sato page?
http://www.columbia.edu/~mhs119/SeaLevel/

Hugs

Sato and Hansen. Yes, I’m surprised. OTOH, Hansen was ready to do the congress scam, so why not.

I think I might see a way to average over a month and improve the accuracy somewhat, although I’m not sure how to calculate the improvement. If we assume (as would be reasonable) that actual sea level doesn’t rise more than .17mm per month (which would be the average gain if the actual rise was 2mm/year), then for monthly purposes, we could assume that the daily measurements would likely center around this very small variation, and the kind of accuracy improvement that the author refers to (multiple measurements at a single point in a single day) could be applied, at least within the theoretical variance over the course of a month. Now that’s a lot of assumptions (including the notion that sea level rise is truly constant vs. fits and starts), but if you made those assumptions, you could (theoretically) improve the precision of the monthly measurement.
Am I off base here? Please chime in if so.
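The monthly-averaging idea above can be sketched numerically. This is only an illustration under stated assumptions: zero-mean Gaussian reading noise of about ±2 cm and a constant true rise of 2 mm/yr, both hypothetical. It compares the scatter of a 60-reading monthly mean with that of a single reading:

```python
import random
import statistics

random.seed(42)

TRUE_RATE_MM_PER_DAY = 2.0 / 365.25  # assumed constant 2 mm/yr true rise
NOISE_SD_MM = 20.0                   # ~±2 cm zero-mean reading scatter
READINGS_PER_MONTH = 60              # two MSL determinations per day

def monthly_mean(month_index):
    """Average 60 noisy readings taken at half-day spacing over 30 days."""
    start_day = month_index * 30
    readings = [
        TRUE_RATE_MM_PER_DAY * (start_day + i / 2)  # slowly rising true level
        + random.gauss(0, NOISE_SD_MM)              # single-reading noise
        for i in range(READINGS_PER_MONTH)
    ]
    return statistics.mean(readings)

# Compare each monthly mean with the true level at mid-month
errors = [
    monthly_mean(m) - TRUE_RATE_MM_PER_DAY * (m * 30 + 14.75)
    for m in range(500)
]
print(f"single-reading scatter: {NOISE_SD_MM:.1f} mm")
print(f"monthly-mean scatter:   {statistics.pstdev(errors):.2f} mm")
```

Under these assumptions the monthly-mean scatter comes out near 20/√60 ≈ 2.6 mm, because the true level moves only about 0.16 mm within the month, far less than the assumed noise. Whether real gauge errors are zero-mean and independent is exactly the point debated further down the thread.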

TonyL

Let us be clear about our topic.

In this essay, when I say sea level, I am talking about local, relative sea level

OK, good. We are talking about local and relative, as pertains to the people living there.

On a beach, Mean Sea Level would be the vertical midpoint between MHW and MLW.

OK, good. On a beach, and the same at a tide gauge.
The gauge constantly measures the tide coming in, and going out. This gives me 2 opportunities a day to measure high water and low water. True, the tides get larger and smaller according to the phase of the moon, but the change is symmetrical (or close enough). So I calculate MSL twice a day or 60 per month.
It seems to me that as we average all the individual MSL readings, we do, in fact, gain precision.
Standard statistics requirements:
1) No systematic error in the instrument calibration. (a topic unto itself)
2) Measurement errors are what is said to be random, and evenly distributed about the mean.
3) With conditions 1 and 2 satisfied, precision increases proportional to the square root of N.
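TonyL's point 3 (precision growing with the square root of N, given conditions 1 and 2) is easy to check numerically. A minimal sketch, assuming Gaussian zero-mean reading errors per condition 2 and an arbitrary illustrative noise level:

```python
import random
import statistics

random.seed(0)
TRUE_LEVEL = 0.0   # the fixed quantity being measured (mm)
NOISE_SD = 20.0    # ~±2 cm zero-mean random reading error (condition 2)

def mean_error_sd(n, trials=4000):
    """Empirical scatter of the mean of n readings about the true level."""
    means = [
        statistics.mean(random.gauss(TRUE_LEVEL, NOISE_SD) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

# Each 4x increase in N halves the scatter of the mean, as sqrt(N) predicts
for n in (1, 4, 16, 64):
    print(f"N={n:3d}: SD of mean = {mean_error_sd(n):5.2f} mm"
          f"  (theory: {NOISE_SD / n**0.5:5.2f} mm)")
```

Note that this demonstrates only condition 3: if condition 1 fails (a systematic calibration offset), that offset survives the averaging untouched.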

TonyL

it will still only be properly represented by adding back on the +/- 2 cm.

The only way this makes sense to me is if you are talking about calibration error, not a measurement error.
I appreciate your comments, but I stand my ground.
(I would concede the point if it could be shown that individual determinations of MSL can *not* be averaged together.)

james whelan

Kip is absolutely correct. It doesn’t matter how many times you take a measurement, the accuracy of the instrument determines the error band. There is no reduction in the probability of the actual event being anywhere in the band +-2cm.
This basic misunderstanding of how errors should be dealt with is endemic throughout ‘climatology’. It is the underlying reason behind the results of the ‘random walk’ analysis paper recently published on this site. The error bands that should surround all the data points used in the ‘climate change’ debate completely swamp any perceived ‘trends’.
Basically it’s all numerology, with no foundation in science at all.

The correlation between emissions and sea level rise
https://ssrn.com/abstract=3023248

Not a peer-reviewed journal paper, just an amateur.
(Crackers: I repeat my warning one last time — no trolling. If you have something constructive to say, and it is on topic, you may do so. Sniping not allowed here. — kh)

Please repair the link to Part 1.

Steve Case

Do a daily search on “Sea Level” in the news. The usual story is a meter or more by 2100 and what are we going to do about it.
Here’s a story from Marinij.com this past Thursday:

“Marin thinkers join effort to tackle sea-level rise
San Francisco Bay Conservation and Development Commission maps show a 3-foot rise over the next 100 years”

California has a very low rate of sea level rise. The San Francisco tide gauge record, going back to 1856, has an overall rate of 1.5 mm/yr, and for the last thirty years the rate has been 1.9 mm/yr. Over much of the 20th century that 30-year rate was between 2 and 3 mm/yr.
Source
http://www.psmsl.org/data/obtaining/rlr.annual.data/10.rlrdata
Three feet over the next 100 years comes to an average rate of over 9 mm/yr. The question to ask the folks who write these articles is: when is the acceleration to these higher rates going to begin? I sometimes doubt that these people even know that there’s a tide gauge in their area.
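Steve Case's back-of-envelope arithmetic is easy to check; a quick sketch (the further step, estimating the end-of-century rate, assumes a simple linear acceleration starting from the quoted 1.9 mm/yr San Francisco trend, which is my illustrative assumption, not the article's):

```python
# "Three feet over the next 100 years" expressed as an average rate:
FT_TO_MM = 304.8
projected_rate = 3 * FT_TO_MM / 100       # mm/yr
print(f"projected average rate: {projected_rate:.1f} mm/yr")

# If the rise accelerated linearly from today's 1.9 mm/yr, the average of
# a linear ramp is (start + end) / 2, so the end-of-century rate must be:
current_rate = 1.9                        # mm/yr, recent 30-yr SF trend
end_rate = 2 * projected_rate - current_rate
print(f"required end-of-century rate: {end_rate:.1f} mm/yr")
```

Under that linear-ramp assumption, the rate at 2100 would have to be roughly eight times the current San Francisco trend.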

Steve Lohr

Thanks for the article, very interesting, and I look forward to more. It coincides with a recent experience I had with water levels. I just came back from muskie fishing in Minnesota and had a conversation with my fishing buddy about “tides” on one of the large lakes we fished. I hadn’t thought about it much until I observed what was obviously an “intertidal zone” along the shore. Not wanting to leave it at that, I checked it out with some research when I came home. While there apparently are tides of a few centimeters on large lakes, there is a more significant change in lake levels caused by seiches, a phenomenon in which wind and barometric changes set up standing waves that come and go at very low frequencies; it is related to the same physics as storm surges. Interesting stuff. Thanks, again.

Mark Luhman

Years ago, camping on one of the many islands, I watched the level of Lake of the Woods shift a foot with a wind change. Some fishermen, unknown to them, had been running over a reef in the middle of a channel for two days in a row; they didn’t make it over the third day. The aluminum boat returned after the grounding; the fiberglass boat did not. Yes, in a large lake wind does make a difference. Of course it all could have been avoided if those fishermen had bought a lake map.

LewSkannen

“Averaging does not increase accuracy or precision.”
That is a point that has been totally lost in climate ‘science’. They even think that taking averages of uncorrelated model results somehow adds information. We have a hell of a battle to overcome this one.

MarkW

They think that if they take a few thousand measurements from a few thousand locations using a different instrument at each location and a few dozen different types of instruments, on approximately the same day, they can improve their precision by averaging all those readings.

Actually MarkW they can. For example, if you wish to measure the global temperature, going out to your back yard, and reading the value off of your thermometer is not a good measure. If you take ” a few thousand measurements from a few thousand locations using a different instrument at each location,” you are going to do much better than what is happening in your back yard.

AndyG55

” you are going to do much better than what is happening in your back yard.”
WRONG !!!!!

1sky1

A number of statements and claims here need to be corrected. I have time only for the following:

If each measurement is only accurate to ± 2 cm, then the monthly mean cannot be MORE accurate than that — it must carry the same range of error/uncertainty as the original measurements from which it is made.

Tide gauge accuracy is limited primarily by the residual effects of wind waves. Since they introduce short period, zero-mean fluctuations, averaging a large number of independent readings decidedly reduces the inaccuracy of sea-level estimation from this high-frequency effect.

If one can note the high water mark and observe the water at its lowest point during the 12 hour and 25 minutes tide cycle, Mean Sea Level is the midpoint between the two. Simple!

The most rudimentary reflection should alert us that this cannot remotely work in the case of mixed tides. Even a full diurnal cycle is not enough. Experience indicates that a rough estimate of MSL can be obtained from hourly tide-gauge readings over a lunar month. For close oceanographic work, a period covering a full cycle of the precession of the lunar nodes (18.6 years) is the preferred standard.
All in all there’s a lack of analytic comprehension of the issues involved. A professional summary of vertical datums can be found here: https://www.ngs.noaa.gov/datums/vertical/
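1sky1's point about mixed tides can be illustrated with a toy two-constituent tide. This is only a sketch: the M2 and K1 periods are standard values, but the amplitudes are hypothetical, chosen for illustration. The high/low midpoint from a single day misses the true mean, while an hourly average over a lunar month comes much closer:

```python
import math

# Toy "mixed" tide: semidiurnal M2 plus diurnal K1 (standard periods,
# hypothetical amplitudes chosen only for illustration)
M2_AMP, M2_PERIOD_H = 1.0, 12.4206
K1_AMP, K1_PERIOD_H = 0.6, 23.9345

def height(t_h):
    """Tide height (m) at time t_h hours; true mean sea level is 0."""
    return (M2_AMP * math.sin(2 * math.pi * t_h / M2_PERIOD_H)
            + K1_AMP * math.sin(2 * math.pi * t_h / K1_PERIOD_H))

# High/low midpoint from a single day (readings every 6 minutes)
day = [height(t / 10) for t in range(240)]
midpoint = (max(day) + min(day)) / 2

# Hourly mean over one lunar month (~709 hours)
month_mean = sum(height(t) for t in range(709)) / 709

print(f"day-1 high/low midpoint: {midpoint:+.3f} m")
print(f"lunar-month hourly mean: {month_mean:+.3f} m (true MSL = 0.000)")
```

With these amplitudes, the one-day midpoint is off by several centimeters, while the lunar-month hourly mean is within a few millimeters of the true zero.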

james whelan

It is very clear from ‘warmists’ responses to this article as well as the one on ‘random walk’ earlier, that they completely misunderstand or misuse how statistical techniques can be used.
The +/-2 cm error band applies to each and every reading, however many you take with the same equipment. That means there is an equal probability of the actual measurement being anywhere in this band. All you do with the thousands of readings is produce a probability distribution and calculate a mean. However, that just says that it is more likely that the center of the band is in one place; it does absolutely nothing to reduce the equal probability that the actual value lies anywhere within +/-2 cm of that mean. Kip will no doubt follow with the equations that demonstrate this mathematically. But it is very simple logic if you use your noggin rather than rely on ‘computers’.

stevefitzpatrick

You don’t understand what you are talking about. Neither does this post’s author. Uncertainty in the estimate of the true mean of a population falls as 1/(n-1)^0.5, where n is the number of independent measurements from that population. It is perfectly reasonable to consider the true mean sea level at a location as the average of a ‘population’ of different measured levels over time. The suggestion that the accuracy of the mean sea level at a location is not improved by taking many readings over an extended period is risible, and betrays a fundamental lack of understanding of physical science.

Clyde Spencer

stevefitzpatrick,
What you claim is only true for something with a fixed value where the uncertainty comes from either estimating a Vernier or trying to guess the correct last value for a digital display that is fluctuating.
Consider this: There have been hundreds of thousands, if not millions, of individuals who have taken IQ tests. The mean is nominally reported as 100 (not 100.000). Individual scores are typically reported to, at most, a single digit. It is usually acknowledged that scores for an individual may fluctuate several points when re-tested. Thus, it is not particularly informative to even report an IQ to more than probably about +/- 5 points. Re-testing may provide bounds, but averaging to try to improve the precision is pointless.

james whelan

stevefitzpatrick, I feel like I am trying to educate someone about the difference between velocity and acceleration! Yes, of course your equation is correct, and it is exactly what I said. More readings will produce a distribution around a mean; that mean is the center of a 4 cm wide band. It doesn’t matter how many times you take the reading, the 4 cm band remains, because there is an equal probability of the ‘actual’ being within that band for every reading taken. Think of it this way: every time you take a reading, you just move the 4 cm band up or down a bit; you never reduce it.

1sky1

It is very clear from ‘warmists’ responses to this article as well as the one on ‘random walk’ earlier, that they completely misunderstand or misuse how statistical techniques can be used.

What’s truly clear is the abject lack of any recognition that sea-level estimation is not an ordinary statistical problem but a signal detection and estimation problem in geophysics. Sadly that issue is obscured by those who have never actually done the science, but assume that whatever challenges their Wiki-expertise must be wrong.
More on this tomorrow.

stevefitzpatrick

Kip Hansen,
Not from my education, but from 45 years work in science and engineering. The uncertainty in an estimate of a population mean most certainly becomes smaller with more measurements of the population. The uncertainty in the mean sea level at a tide gauge location becomes smaller when many readings are collected over an extended period. The uncertainty in a single reading of a tide gauge is due to external influences like wind driven waves, and not due to limited accuracy of the measuring hardware. Variation due to external influences will average over time; a long term secular trend (eg long term sea level rise… or even a long term fall where glacial rebound is large) will not average out. Any suggestion that it is impossible to measure average sea level at a location with an irreducible uncertainty equal to the uncertainty in a single measurement is simply wrong.

stevefitzpatrick

James Whelan,
And I feel like I am trying to educate someone who is disconnected from actual science (or engineering) experience. The uncertainty in a population mean can be (and routinely is) reduced below that of a single measurement by taking multiple measurements of the population. To suggest that the mean sea level at a location is uncertain to +/- 2 cm because a single measurement has that uncertainty (due to things like waves) is risible.

stevefitzpatrick

Kip Hansen,
If you want to send something then I will read it… and tell you where it is wrong.

Clyde Spencer

stevefitzpatrick,
You should keep in mind that many of us commenting here have similar education and work experience as yourself, and stating yours isn’t going to cut it because we generally don’t give much credence to anonymous authority. Is it possible that after 45 years you misremember some of the details of what you ‘learned?’
Have you read the material at the links I provided above? I address the issue of improvement of precision, with multiple measurements, with some detail in my article, with references.
I have stated that the accuracy of the estimate of the mean will improve with multiple measurements, but the precision will not.

LdB

I am not entering the debate, as both sides are sort of right. What you are arguing over is the measurement background. Clyde Spencer, in his discussion of IQ, showed a moving background, because IQ measurement is somewhat subjective to test conditions, a fact he noted.
If the background is static, you can use statistics to improve the accuracy in a simple way; if the background is moving, it gets a lot trickier. This crops up in physics in all sorts of places, like trying to take measurements in a spacetime that is expanding, rotating and moving.
Any physical measurement, even with a ruler, has the problem that the place you are measuring on Earth is rotating, and the ruler is slightly longer or shorter depending on where you measure it, regardless of how many times you average. The averaging tells you more about where you measured than about any accuracy.
The question you are all trying to sort out is whether the measurement background is stable enough to allow statistics. I don’t know the subject well enough to have a view, but both groups are failing to describe what they are arguing over, so hopefully this sorts that out.
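LdB's static-vs-moving-background distinction can be sketched in a few lines; the noise level and drift rate here are arbitrary illustrative values, not tide-gauge specifications:

```python
import random
import statistics

random.seed(1)
NOISE_SD = 20.0  # mm of zero-mean reading noise (arbitrary illustration)
N = 10_000

# Static background: the mean of many readings homes in on the true 0 mm
static = [random.gauss(0.0, NOISE_SD) for _ in range(N)]

# Moving background: the true level drifts up 0.005 mm per reading
drifting = [random.gauss(0.005 * i, NOISE_SD) for i in range(N)]

print(f"static mean:   {statistics.mean(static):+.2f} mm (true level 0)")
print(f"drifting mean: {statistics.mean(drifting):+.2f} mm "
      "(describes the drift midpoint, not any single true level)")
```

The static mean converges toward the true level; the drifting mean converges too, but toward the midpoint of the drift, which is LdB's point about what averaging actually tells you.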

LdB

As a suggestion, both groups could state what they believe the background tide gauge movement is. For example, in the ruler case I gave above, I would accept a measurement in millimeters with a couple of decimal places. If you try to give me a number in millimeters with 20 decimal places, I am going to laugh myself silly at you.

SteveF is right. Having more samples does improve the estimate of the mean in the way he describes. That’s why, in so many circumstances, people go to much cost and trouble to get a large sample.

Clyde Spencer

NS,
You said, “That’s why, in so many circumstances, people go to much cost and trouble to get a large sample.” Perhaps they “go to much cost and trouble” because they haven’t done a formal cost-benefit analysis and just assume that more is better.

Steve Fitzpatrick

Clyde Spencer,
Yes, I read the essays… utter rubbish, betraying the same gross misunderstanding of the subject as your comments on this thread. Really, you have not a clue what you are talking about.

Clyde Spencer

stevefitzpatrick;
I want to thank you for taking the time to respond in detail regarding how and why my essays were wrong, and not just ranting about how things you disagree with are rubbish, as happens all too often here with people who are convinced that they are right and everyone else is stupid. I’m glad that you aren’t the kind of person who has a closed mind and thinks he knows everything and doesn’t feel any need to provide references for his claims. Readers here will see you for the kind of person that you are, and you should consider that your reward for your efforts.

Latitude

Kip….great piece….very informative and a good read….can’t wait for the second installment!

Stevan Reddish

I admit it has been many years since I sailed the Strait of Juan de Fuca. It was long enough ago that I did not have an electronic chart plotter; I had only depth charts and tide tables. As I remember, the soundings printed on the depth charts were actual depth at mean low tide, in order to show typical depth at minimum water. Printed tide tables were correlated with mean low tide, also. Low neap tides often had positive numbers. If the tide table listed the low tide as +2.3, I knew to expect at least 2.3 feet more depth than the chart soundings indicated. This system was developed in the days before accurate timepieces. If a captain knew the minimum depth for a given day, he didn’t need to know what it was at every moment of the day.
While I never checked, I always assumed the zero point on a tide staff marked mean low tide, so that tide tables would align with the tide staff.
SR

1sky1

The datum level for NOAA charts of West Coast waters is mean LOWER low water (MLLW), appropriate for the semi-diurnal tides experienced there.

1sky1

MLLW is the mean, not the minimum, of the lower of the two diurnal lows.
BTW, what happened to my comment from half an hour ago?

Bob boder

Just saying, if I were a sailor 100 years ago measuring low water marks, I think I would purposely say the low was a little shallower than it really is, just for an extra margin of safety.

Which is why the British and others use LAT as a chart datum to give a little extra safety margin.

Macha

I live on the west coast of Australia and have seen no evidence of the sea level rise shown in the NOAA satellite altimeter chart (i.e. red = +20 cm). Surely the Australian land mass would be one of the most stable on the planet, so that probably leaves shrinkage from bore extraction and lower rainfall?

Don K

Kip
It’s an excellent article. No typos at all that I spotted. (But I’m a lousy proof reader). A few points:
You didn’t mention the need to filter out wave motion. That’s typically done with settling wells of one sort or another although it can be done digitally if your measurement interval is sufficiently short.
You left out barometric pressure as a variable. It is important enough to affect sea level measurements: roughly 1 cm of water level change per hPa (millibar) of air pressure change — the “inverse barometer” effect.
You should not be surprised that PSMSL measurements are not corrected for tectonic changes in gauge elevation. Measuring how fast sites are rising and sinking is extremely difficult. It’s possible to detect a tide gauge that is actually sinking into the muck using surveying, but regional isostasy from glacial rebound, or above a subduction zone, requires something like GPS — which is just barely able to do the job, and takes years of observations to do it. You can’t just plunk your GPS down next to the gauge, go off and get a beer, and come back to get your reading. Not surprisingly, not all, or even most, stations have good tectonics information.
One special case of local sea level. Wherever tidal gauges are situated along a coastline that is parallel to a chain of active volcanoes — e.g. the coasts of Oregon and Washington — it is likely that the tidal gauges are being pushed upward by material being piled up under the coastline by the underlying subduction phenomenon. The problem is that at some point that subduction zone will likely come “unstuck” with a magnitude 9 earthquake and the tidal gauge will abruptly drop a meter or two. We don’t currently know how to correct apparent SLR for that.
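Don K’s barometric point can be sketched as a simple correction. This is only a rough illustration using the standard ~1 cm-per-hPa inverse-barometer rule of thumb; the reference pressure and coefficient here are nominal values, not anything taken from the essay:

```python
# Inverse-barometer sketch: sea level stands ~1 cm higher for every 1 hPa
# the local air pressure falls below the reference (a rule-of-thumb figure;
# the true response varies with basin and timescale).

REFERENCE_HPA = 1013.25    # standard atmosphere, hPa
CM_PER_HPA = 1.0           # approximate inverse-barometer coefficient

def ib_correction_cm(pressure_hpa):
    """Correction to ADD to a tide-gauge reading taken at this pressure."""
    return (pressure_hpa - REFERENCE_HPA) * CM_PER_HPA

# Under a deep low the water stands high, so the correction is negative:
print(ib_correction_cm(990.0))     # -> -23.25 (cm)
print(ib_correction_cm(1013.25))   # -> 0.0
```

In other words, a gauge reading taken under a deep low must be adjusted downward by tens of centimetres before it can be compared with readings taken in fair weather.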

NW sage

I haven’t seen here any discussion of the change in the shape of the earth itself (not just the seas) as a result of the pull of the sun and moon. The earth is an oblate spheroid, made that way because of its spin, and in addition its nearest and most massive neighbors continuously change that shape with their gravitational pull. This obviously creates differences in the distance to the theoretical center of mass and will correspondingly influence, at least theoretically, any tidal/sea surface height information.
Is this effect orders of magnitude too small to be of any use here?

Leo Smith

Averaging does not increase accuracy or precision.

I have seen this myth quoted time and again here. It is wrong. Given a normal probability distribution of errors, measuring the same thing many times on many instruments gives a vastly improved accuracy and range of error.
This is such a fundamental principle of science – that the more times you do an experiment to establish a particular numerical result the more accurate is the averaged result, provided that the reasons for inaccuracy are random – that I fail to see why it is not understood.

+100 Leo

As I have posted elsewhere, you can measure the average height of an American male, with a stick having marks at 1 foot intervals, if you take enough samples.

TimTheToolMan

Mark writes (again)

As I have posted elsewhere, you can measure the average height of an American male, with a stick having marks at 1 foot intervals, if you take enough samples.

Except this is not generally true.
Specifically, say your stick has graduations every 3 feet instead of every 1 foot, and let’s assume that the true average height is 6 foot 1 inch. Now when you measure, you’ll find many measurements of 6 feet, essentially none of 9 feet, but still quite a few of 3 feet, which will mean an overall average of something less than 6 feet.
So you measured “6 feet” many times but your average was less accurate.

Dale S

This doesn’t make intuitive sense, but is it true? According to tall.life, the average American male height is 5’9.3″, with an sd of 2.94″. They give the 13.1 percentile at 5’6″ exactly, and 99.846 at 6’6″ exactly. So measuring all the American males with a 1-foot-marker stick would give 13.1% at 5 feet, 86.746% at 6 feet, and 0.154% at 7 feet. (They give 0 percentile for 4’6″ and below, but it’s actually 1 in a bit over 10 million.) Calculating the average from that gives 5.87 feet, or 5’10.4″ — it overestimates the average height by over an inch! The problem is that because there are far more men being rounded up to six feet than being rounded down to six feet, the granularity of the measuring stick is introducing a systematic upward bias.
This is measuring an (assumed) normal distribution, so at least the final result, while way off, is still closer than the six-inch error margin. What if the distribution isn’t normal? What if we’re using these one-foot-marker sticks on manufacturing output that is normally 1’7″, but 10% of the time comes out at 7″? The true average length is 1’5.8″; the measuring stick method pegs the length at 1’10.8″ — a whopping 5 inches off and nearly outside the 6″ error margin — and larger than *any* of the gadgets actually produced.
Now suppose we take the same one-foot measuring stick and measure the *same thing* a million times — a single man of height 5’7″. The original reading tells us the height is 6′ +/- 6″. And after a million readings of this man with the same measuring stick, the results should tell us that his height is — 6′ +/- 6″. It should not tell us that his height is 6.0000′ thanks to the law of large numbers.
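Dale S’s arithmetic is easy to check by simulation. A sketch in Python, using the tall.life mean and sd quoted above converted to feet (the population size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" heights in feet: mean 5.775 ft (5'9.3"), sd 0.245 ft (2.94")
true_heights = rng.normal(5.775, 0.245, 1_000_000)

# Measure each man with a stick marked only at 1-foot intervals,
# i.e. round every reading to the nearest whole foot
measured = np.round(true_heights)

print(true_heights.mean())   # ~5.775 ft, the real average
print(measured.mean())       # ~5.87 ft: biased high by roughly an inch
```

The rounded average never converges to the true average, no matter how many men are measured — the quantization bias survives the averaging.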

Dale, measuring the height of an individual a million times is not measuring the average height of a population. Apples and oranges. It’s the same reason you don’t look at a thermometer in your back yard to measure global temperatures.

Oh, by the way Dale, there are tests to determine if things such as “height” are normally distributed: https://en.wikipedia.org/wiki/Normality_test

Dale S

Mark, your response isn’t responsive to the fact that I showed your claim was wrong. You claimed a one-foot increment would be sufficient to estimate the average height of the American adult male. But if the numbers represent a normal distribution with the mean and sd provided at tall.life, your claim was wrong — even if you sample the entire population.
Linking to a “normality test” is also unresponsive — if you tested every American adult male with your one-foot-increment ruler, you’d find most values at 6, a small fraction at 5 and a tiny number at 7. You can’t prove that the distribution is normal from those results, even if it is.
The example of measuring the same man a million times was simply to show that the number of measurements *by itself* does not magically confer accuracy on the results of many crude measurements.
The population here is not the population of *men*, it is the population of *measurements*. Taking many measurements will drive the mean towards the true value, but it’s the true *measured value*, not the actual value, and in this case there’s five inches difference between the two.
If that bothers you, consider a thought experiment where a million *completely independent* samples are taken from a population of men whose heights follow a normal distribution centered on 5’7″ with an sd of 1/10″; the law of large numbers will *still* get nowhere near 5’7.000″.

Tom Halla

Where you and I and several other commenters disagree is that it is not multiple measurements of the same thing, but multiple measurements of change in a quantity. By analogy, measuring the average height of a population with an eight-foot ruler marked only every foot would do a rather bad job of measuring the growth of adolescents.

Tom, measuring the “average” and measuring the change in the average are not the same. There is the element of repetition and the time interval, which is applicable to measuring the average itself. I believe you are concerned with “anomalies” and not averages themselves. Note that “anomalies” and “averages” are two distinct measurements that, although related, are not identical.

Tom, if you used the stick to measure the average in, oh, say 2010, then repeated the same measure in 2016 with the same number of samples, you would be able to detect a change if one occurred.

Tom Halla

In the example of the eight-foot ruler and an adolescent’s growth, the change is typically less than the measurement granularity of one foot. So, to use myself as an example, I grew from 5’2″ to 5’11″ in eighteen months, and the ruler would show no change, as it only measures to the whole foot.

Tom, apples and oranges. The example of the stick I used is for measuring the population mean not an individual’s height. You don’t look at the thermometer you have outside your home to find out what the global temperature is.
….
Do you understand the difference between a “population mean” and an “individual measurement?”

Tom Halla

Yes, and the mythical stick would have Samoans and Thais having the same average height.

Tom, maybe, maybe not. You would actually have to measure each population. After you do the measures, you’d see if there was a difference.

Tom Halla

To get back on topic, how one purportedly finds a change of .001 with an instrument that reads to the nearest whole number. . .

By taking enough measurements. The standard error is equal to the standard deviation of the measuring instrument divided by the square root of the number of samples.
..
https://en.wikipedia.org/wiki/Standard_error
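The formula in the comment above can be demonstrated by simulation. This sketch assumes purely random, unbiased instrument noise — which is exactly the assumption contested elsewhere in this thread; the true value and sigma are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 12.345   # the fixed quantity being measured (arbitrary units)
sigma = 1.0           # standard deviation of a single reading

for n in (100, 10_000, 1_000_000):
    readings = true_value + rng.normal(0.0, sigma, n)
    predicted_sem = sigma / np.sqrt(n)   # standard error of the mean
    # the sample mean closes in on the true value at the 1/sqrt(n) rate
    print(n, round(readings.mean(), 4), round(predicted_sem, 4))
```

With a million readings the predicted standard error is 0.001 — but only because every error here is independent and centered on zero; a biased instrument would converge just as confidently on the wrong answer.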

Tom Halla

And as far as climate goes, the average temperature does not matter all that much when one considers such things as what one can grow where. What has changed since the LIA is the gradient by latitude more than the overall temperature, with the high latitudes seeing the most significant warming.
As an example of how average temperature does not matter much, I could grow citrus in Concord, CA (eastern Bay Area) while I cannot in Cottonwood Shores TX, which has a rather warmer average temperature. It has snowed twice here in nine years, which did not happen in CA.

AndyG55

“By taking enough measurements. ”
ROFLMAO
You have a lot to learn about maths, don’t you little johnson.
You need to go and actually learn about when that rule can be used.. and when it can NOT.

Tom, what you are saying here would be self-evident to a reasonably thoughtful child. That your interlocutor appears to be incapable of grasping such a simple and blindingly obvious truth is revealing indeed.

AndyG55

“measuring the same thing many times ”
But you are NOT measuring the same thing.
assumptions of normal distribution are a load of suppositories.

tty

“Given normal probability distribution of errors, measuring the same thing many times on many instruments gives a vastly improved accuracy and range of error.”
Only partially true; it should be:
“Given a normal probability distribution of errors, and independent and identically distributed variables, measuring the same thing many times on many instruments gives a vastly improved accuracy and range of error.”

tty

And incidentally hydrological time-series data are usually not normally distributed. More commonly they are Hurst-Kolmogorov distributed.

Don K

“This is such a fundamental principle of science – that the more times you do an experiment to establish a particular numerical result the more accurate is the averaged result, provided that the reasons for inaccuracy are random – that I fail to see why it is not understood.”
Well .. yes … Basically, I’m on your side. HOWEVER, there are obviously some constraints. For example, if you are sampling a periodic phenomenon like tides, you need to avoid sampling at the period of the event or multiples thereof, because if you sample at an integer multiple of the period, you’ll always get the same value. I think that’s basically Nyquist: you need to sample more than twice per cycle. Likewise, if your measuring stick is calibrated in km, all the measurements will be 0. And even if your measurement units are somewhat smaller than the tidal excursions, I’d worry about quantization and aliasing problems.
I’m going to go off and think about this in hopes that I’ll learn something.
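Don K’s aliasing caveat can be illustrated in a few lines. A toy example: the M2 period is the real principal lunar semidiurnal period, but the amplitude and sample counts are arbitrary:

```python
import math

period_h = 12.42     # principal lunar semidiurnal (M2) tide period, hours
amplitude_m = 1.0    # illustrative amplitude

def tide(t_hours):
    """Idealized single-constituent tide height in metres."""
    return amplitude_m * math.sin(2 * math.pi * t_hours / period_h)

# Sample exactly once per M2 period: every reading is identical
# (the tidal signal is aliased away entirely)
bad = [tide(k * period_h) for k in range(5)]

# Sample well above the Nyquist rate, e.g. hourly: the cycle is resolved
good = [tide(float(k)) for k in range(13)]

print(bad)                   # all ~0.0
print(min(good), max(good))  # spans roughly -1 to +1
```

The gauge sampled once per tidal period would report a perfectly flat sea; hourly sampling recovers the full two-metre swing.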

tty

If you had been doing serious statistical analysis of field data you would know that data are very often not normally distributed. Always do a normality test first of all.
Then you know if “standard” statistical methods can be used or not.
And by the way, there are plenty of pitfalls even for normally distributed data. For example, did you know that standard linear regression cannot be used for time-series data where there is uncertainty in the sampling times? (This does not apply to tide gauges, but it almost always applies to proxy data.)

Don K

tty – AFAICS virtually nothing except some aggregate measurements on unthinkably large numbers of objects in physics, chemistry, and astronomy actually distributes normally. But a lot of stuff comes close enough for Gaussian approaches to be useful. And the Central Limit Theorem really does seem to work most of the time — even for cases like multipeaked distributions.
That said, the stuff we were taught in Statistics 101 really does need to be viewed with a lot more skepticism than is typically applied. In fact, statistics and probability are HARD, and most of us — me included — don’t seem to have much aptitude for them.

Averaging many measurements reduces random error. It does not reduce systematic error.
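That one-liner is worth a quick demonstration. A sketch in which the bias value is invented purely for illustration (e.g. a mis-set staff zero):

```python
import numpy as np

rng = np.random.default_rng(2)

true_level = 100.0   # cm, the quantity we actually want
random_sd = 2.0      # cm, random read-off error on each measurement
bias = 0.5           # cm, systematic error (e.g. a mis-set staff zero)

readings = true_level + bias + rng.normal(0.0, random_sd, 1_000_000)

# The random +/- 2 cm scatter averages away; the 0.5 cm bias does not.
print(readings.mean())   # ~100.5, not 100.0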

james whelan

Mr Smith, but in this case they are not ‘random’; they are physically the same every time, i.e. the instrument has an equal probability of its reading representing an actual event somewhere in the band ±2 cm. This cannot be ‘averaged out’ as if it were a random event. The only thing that can be ‘averaged out’ is the position of the center of the band.
This fundamental error of logic applies throughout ‘climate change’ modelling.

Clyde Spencer

LS,
You said, “… the more times you do an experiment to establish a particular numerical result the more accurate is the averaged result,…” Implicit in your statement is that there is a single answer, such as Avogadro’s Number, and the only variation is from random measurement error.
What we are focusing on here is something that does not have a unique, intrinsic value, but has a range of values over time, and we are trying to characterize it with an average value and the observed variance. Assuming a well-calibrated measuring device, the accuracy of the estimated mean varies directly with the number of measurements, approaching the correct value asymptotically, as per the Law of Large Numbers. However, the precision does not increase with increasing measurements. In fact, the apparent precision of the mean may decrease as more measurements take in extreme values on the tails of the probability distribution. Fundamentally, the precision of the estimate cannot be greater than the precision of the measuring device, and may well be lower because of a large variance in the variable. Also, there is no guarantee that the variable has a normal distribution.
Constants are a whole ‘nother ball game!

Clyde Spencer

LS,
You said, “…measuring the same thing many times ON MANY INSTRUMENTS gives a vastly improved accuracy and range of error.” You would have us believe that conflating a distance measured by chaining with the results of a laser range finder will provide increased accuracy and precision over that provided by the laser range finder alone? The chaining might provide a ‘sanity check,’ but it is hardly to be considered of equal reliability and precision to modern technology. The incorporation of the chaining measurement(s) would lead to reduced precision, as would be evident from a rigorous analysis of error. The rigorous analysis of error is what seems to be blatantly missing from much of the work in climatology.

JBom

Yet another wonderful confirmation that the Earth and Moon co-orbit the barycenter, i.e. the gravitational center of attraction, at a location about 1,700 kilometers deep in Earth’s mantle, below the lithosphere. And the joys of plate tectonics do live.

Maria

What about rising sea levels from polar ice caps melting? Did I miss that part?

Don K

Maria – As Kip mentions, sea ice that isn’t “grounded” by contact with the land’s surface is floating and displaces precisely enough water to compensate for any “rise” when it melts. That’s called Archimedes Principle. If it isn’t clear, try Googling it. You should come up with lots of illustrations that will hopefully clarify what’s going on.
BTW, the IPCC sea level folks seem nowhere near as obtuse as the climate and economic folks. They have been working throughout the history of the Assessment reports on a “budget” for sea level rise that matches known causes to the observed rise. The budget is still not quite right (their opinion). But it’s getting closer. As of AR5 they ascribe about half the observed SLR to thermal expansion of the oceans, a quarter to glacial melt and the remaining quarter to some combination of polar ice (mostly Greenland) melt and depletion of ground storage.
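Don K’s Archimedes point, in code form. A toy calculation: using fresh water on both sides keeps the bookkeeping exact (in the real ocean, fresh meltwater entering salt water leaves a small residual of a few percent), and the berg mass is arbitrary:

```python
RHO_WATER = 1000.0   # kg/m^3, fresh water
RHO_ICE = 917.0      # kg/m^3 (ice floats because this is lower)

berg_mass = 1.0e9    # kg, an illustrative floating berg

# Archimedes: a floating body displaces its own mass of water.
displaced_volume = berg_mass / RHO_WATER   # m^3 pushed aside while afloat

# The ice itself is bulkier; the excess sits above the waterline.
ice_volume = berg_mass / RHO_ICE

# After melting, the berg is simply that same mass of water again:
meltwater_volume = berg_mass / RHO_WATER   # m^3 of water added to the sea

print(displaced_volume == meltwater_volume)   # -> True: no net level change
```

The meltwater exactly fills the “hole” the floating berg was already occupying, which is why only land-grounded ice contributes to sea level rise.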

Fred

The point I would make is: do we need to measure it at all? Sea level rise is so slow that if we saved all the money wasted on this kind of research, we would have a whole lot of spare money and time to fix the problems as they arise. Better planning would also mitigate a whole lot of these scare sciences.

John of Cloverdale, WA, Australia

“It’s so much easier to suggest solutions when you don’t know too much about the problem.”
― Malcolm Forbes

Don K

Fred — The problem is that we don’t sensibly restrict sea front development to stuff that has to be there like docks and things that can be quickly moved inland like “tents” and things that can be inundated without harm like parking lots. Instead, we build all manner of stuff as close as 60cm (two feet) above the highest high tides. And, of course, when a big storm comes along, billions of dollars worth of infrastructure get drowned/busted up.
Therefore SLR becomes a significant economic issue. If we get a foot of SLR in the 21st century, half the already inadequate buffer between people and ocean will be gone.

RoHa

“we must first nail down Sea Level itself.”
I think if you try to stop the sea level rise that way, you will find some technical difficulties.

tty

When it comes to measure the acceleration (or not) of sea-level rise uncorrected tide gauge data are perfectly good. My favorite example is the Kungsholmsfort gauge in Sweden.
All of Sweden is rising due to the isostatic effect of the last glaciation. This has been known (and studied) for almost 300 years and the amount of rise is therefore known to a high degree of precision (and has been verified by GPS measurements at a large number of sites in recent years (SWEPOS system, 350 sites))
http://www.klimatupplysningen.se/wp-content/uploads/2014/04/Absolut.jpg
In southernmost Sweden this rise is less than the sea-level rise and the relative sea level is very slightly rising. Kungsholmsfort (an old coastal fort sited on solid Archaean bedrock) happened to be situated almost exactly on the line where sea-level rise and land rise coincided when the tidal gauge was built in 1886.
And – surprise, surprise – it still is:
http://www.psmsl.org/data/obtaining/rlr.monthly.plots/70_high.png
Mean annual sea level for the first full year of measurement (1887) was 7037 mm above datum and for the last full year of measurement 2016 was 7036 mm above datum.
Since the land rise is very nearly linear – it has been going on for 15,000 years, and will probably go on for quite a while more – the only possible explanation is that there is no “acceleration”. The absolute sea level at Kungsholmsfort was rising by slightly less than two millimeters per year 130 years ago, and it still is.
There is one small complication, because of self-gravity effects melting of the Greenland icecap has very little effect on sea-levels in northern Europe (about 10% of global average at Kungsholmsfort), so if the presumed acceleration is exclusively due to increased melting in Greenland it would not be noticeable at Kungsholmsfort.

As you say, tty, the effect of accelerated Greenland ice melt wouldn’t be very noticeable at Kungsholmsfort, because of gravity field changes, as explained here:

But it would be noticeable at Honolulu, if it were substantial. Do you see any acceleration there?
http://www.sealevel.info/MSL_graph.php?id=Honolulu
http://sealevel.info/1612340_Honolulu_vs_CO2_annot1.png

tty

True, and as a matter of fact the Central Pacific is the area where glacial melt in Antarctica and Greenland adds up maximally, which is probably also the reason that most islands there show clear evidence of significantly higher sea-levels in the mid-Holocene.
So “acceleration” would be expected to show up clearly at Honolulu.

james whelan

tty, you see a similar situation in Britain. The pivot point is approximately Teesside; north of that point is rising (i.e. Scotland) and south is sinking (England). There is also a lesser west/east effect as well.

prjindigo

Tide gauges are the nautical equivalent of thermometers at airports: They are not scientific, they only show the local conditions that pilots need to know about.

tty

Often wrong. Many tide gauges were installed for scientific reasons. For example all tide gauges in Sweden (and other Baltic countries), since there are no tides in the Baltic.

AndyG55

That is just fabricated nonsense.
Tide gauges are installed so there is as little interference from local conditions as possible, eg waves, boat wash etc
Thermometers at airports …… no such consideration.
And of course tides are very constant in period and amplitude, so much so that high tide and its level can be predicted with very good precision.
Entirely different from airport thermometers.

Coeur de Lion

What about earth’s rotation rate? Is Nils Axel Morner in on this?

tty

Earth’s rotation rate is affected by transfer of mass from the Arctic to lower latitudes and vice versa, i.e. melting/expansion of polar icecaps, but not by sea-level rise per se.

LdB

I thought they were putting in tide gauges with GPS to try and sort out the mess with sea level rise?

tty

Yes, but it takes several years to get good enough data to sort things out. And there is no law that says that ground subsidence (or rise) has to be linear. If it is natural isostatic or tectonic effects it is probably close to linear on century timescales, but if it is due to human action (compaction, loading by buildings or fill material, groundwater extraction/drainage) it might be strongly non-linear.

u.k.(us)

Until someone can explain the difference between accuracy and precision, in one simple sentence, I’ll keep thinking it’s all just a matter of semantics.
So there.

NZ Willy

Precision is how many significant digits you present, accuracy is the gap between your value and the true value.

Clyde Spencer

NZ Willy,
Very good!

Southern Leading

Interesting article, but why do you not talk about air pressure. The tide will be at a different height depending on whether there is a high pressure system or low pressure system overhead. These differences can be significant. There appears to be no compensation for air pressure which makes readings suspect.

NZ Willy

Uplift is usually a sudden earthquake event, while subsidence is gradual. So subsidence is built into models but uplift is not. The resulting absence of the uplift in the models produces a sea level rise all by itself.

Clyde Spencer

NZ Willy,
While uplift is often associated with an earthquake (especially in NZ), in the northern hemisphere there is an adjustment to the loss of ice load after the glaciers melted. It is evidenced as a rather continuous uplift in high latitudes.

tty

There is isostatic uplift in Antarctica as well, though it is only measured by GPS at a few sites. As a matter of fact the uncertainty about this is so large that in practice it makes the GRACE measurements of changes in Antarctic ice volume completely meaningless – the result is completely dominated by the (guessed) isostatic adjustment.

tty

NZWilly
Look at the map in the 1.41 am post. It shows the continuous isostatic uplift in Northern Europe.
And continuous uplift is actually quite common in other areas as well. For example the longest series of uplifted beaches anywhere in the World is on the very aseismic coast of South Australia (150 (!) uplifted shorelines dating back to the Miocene).

dp

You can probably learn all that can be known by exploring anchialine pools around the world. One of the newest is the Sailor’s Hat pond on Kaho`olawe island in Hawaii.

Johan M

Accuracy of GPS?
If we want to separate sea level rise from the local vertical movement of a tidal gauge, we could of course add a GPS receiver to it and wait for some time to get a very good measurement of the local movement — problem solved?
The GPS position that we get could of course be very precise, but it is in relation to the GPS reference system. This reference system is maintained by keeping the satellites in position, and this is of course done using ground stations. These ground stations do, however, move, and instead of having an observatory in London as the only reference point, we adjust the system based on some twenty reference points. How accurately can we track the movements of these stations, and to what degree is it done? Do we have absolute gravimeters at these locations to determine their movement in vertical position? With what accuracy can we do this?
I’m not claiming that it is not done, nor that it can’t be done, but I wonder to what degree you could trust a GPS reading, over ten years, that tells you that your position has been elevated by 0.5 cm (let’s skip movement in x and y).

“let me point out that the rated accuracy of the monthly mean is a mathematical fantasy. If each measurement is only accurate to ± 2 cm, then the monthly mean cannot be MORE accurate than that”
Kip,
It can, because ± 2 cm is a random error. You can plot the magnitude of error on the x-axis and the number of occurrences on the y-axis and you’ll have a frequency distribution of errors. You can examine the curve to see whether it looks like a normal curve or like a rectangle, which would mean almost equal probability for big and small errors. Since the positive and negative errors are equally probable, they can cancel each other and reduce the average error over many measurements.
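Dr. Strangelove’s “rectangle” case can be simulated directly. A sketch that assumes the ± 2 cm errors are independent from reading to reading — the very assumption other commenters dispute; the mean level and reading counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

true_monthly_mean_cm = 150.0
n_readings = 720     # roughly hourly readings over one month
n_trials = 5_000     # repeat the whole month many times to see the spread

# Flat ("rectangular") error of up to +/- 2 cm on every individual reading
errors = rng.uniform(-2.0, 2.0, (n_trials, n_readings))
monthly_means = true_monthly_mean_cm + errors.mean(axis=1)

# Spread of the monthly mean: (2/sqrt(3)) / sqrt(720) ~ 0.04 cm, not 2 cm
print(round(monthly_means.std(), 3))
```

This only says the random part shrinks; a systematic offset on the gauge would pass through the monthly averaging untouched.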

Excellent point Dr. Strangelove. Further, the tide gauges, with their local SLR and land movement, are quite randomized. If you cannot control a variable, randomize it. This is a standard analysis tool. If you are using the same tide gauges, for the vast majority, changes in sea level over time will be valid. It doesn’t really matter if the means of land movement average to zero, which they most probably do. The only ones who should be concerned about subsidence are the ones in that local area. And you are measuring one thing, global SLR, at several points around the planet with multiple measurements at each location and 1,000’s of stations. The error bar of the answer is a complex issue but solvable. It is clearly better than +/- 2 cm.

“If PSMSL data were corrected for at-site vertical land movement, then we could determine changes in actual or absolute local sea surface level changes which could be then be used to determine something that might be considered a scientific rendering of Global Sea Level change.”
For practical purposes, the relative local sea level changes are more important. This is the metric that determines if a coast area will flood and how deep the flooding. Absolute local sea level is hypothetical. It is based on the premise – what if the land does not move vertically? But the land does move!

Richard Greene

One big, enormous, colossal, humongous, monumental, huuuuuuuuggggggggggggeeeeeee improvement to this article, would be to move the “What This All Means” two paragraph summary section away from the last two paragraphs, and start the article with those two paragraphs.
After reading the many tedious comments where readers tried to prove the author was wrong about precision and accuracy (I think he was right), I noticed everyone missed one very important point about data accuracy, that especially applies to government bureaucrat “climate science”:
(1) Are the people collecting the data competent and trustworthy,
or are they biased when collecting, “adjusting” and reporting data,
… perhaps because of how they were selected for “goobermint science” work
in the first place (only CO2 is Evil believers will be hired?)
… or caused by their own confirmation bias,
because they expected to see accelerating sea level rise?

Southern Leading

Thanks to Matt for his comments on air pressure. If we understand that tides can move sea levels by metres, air pressure by say 30 – 50 cm, wind and weather conditions by something, and rising/falling land by a few millimetres, then the job of measuring “sea level” is complex. Long data series will help, but there are many variables to try and determine a permanent rise of a few millimetres per year.

kip hansen wrote, “let me point out that the rated accuracy of the monthly mean is a mathematical fantasy. If each measurement is only accurate to ± 2 cm, then the monthly mean cannot be MORE accurate than that”
this is wrong, as anyone who’s studied basic measurement theory understands. dr strangelove is completely right: assuming the errors are randomly distributed, the uncertainty of the average is much less than the uncertainty of the individual measurements. if each of n measurements has an uncertainty of e, the uncertainty of the average of these measurements will be e/squareroot(n)

Tom Halla

Crackers345, one is measuring different things multiple times, not one thing multiple times. The law of large numbers does not apply.