Sea Level Rise Acceleration – An Alternative Hypothesis

Guest Essay by Alan Welch – facilitated by Kip Hansen – 14 May 2022

Nerem et al Paper, 2018, 4 Years on

by Dr Alan Welch FBIS FRAS, Ledbury, UK — May 2022

(Note: One new image has been added at the end of the essay.)

Abstract   Having analysed the NASA sea level readings over the last 4 years, I have concluded that the accelerations derived by Nerem et al. are a consequence of the methodology used and are not inherent in the data.  The analyses further predict that the perceived accelerations will drop to near-zero levels in 10 to 20 years.

                                  ———————————————————————-

It is now 4 years since the paper by Nerem et al. (2018) [1] was released.  It spawned many disaster pictures, such as the Statue of Liberty with the sea lapping around her waist, and a proliferation in the use of Climate Crisis or Climate Catastrophe in place of Climate Change by the likes of the BBC and the Guardian.

It also kick-started my interest in Climate Change, not by what it presented, but by the unacceptable methodology used in determining an “acceleration”.  I have put acceleration in quotes because care must be taken in interpreting the physical meaning of the coefficients derived in fitting a quadratic equation.  In the paper by Nerem et al. there were 3 stages:

Mathematical – Coefficients are calculated for a quadratic equation that fits the data set.

Physical – Attaching a label – ”acceleration” – to 2 times the quadratic coefficient.

Unbelievable – Extrapolating to the year 2100.

The first is straightforward and acceptable.  The second is very dependent on the quality of the data and the length of the period involved.  The third is fraught with danger, as the quadratic term dominates the process when used outside the range of the data.  The last point is illustrated in Figure 1, which appeared in https://edition.cnn.com/2018/02/12/world/sea-level-rise-accelerating/index.html with the caption “Nerem provided this chart showing sea level projections to 2100 using the newly calculated acceleration rate”.

Figure 1

As a retired Civil Engineer with 40 years’ experience in engineering analysis I appreciate that curve fitting can, with care, be used to help understand a set of data.  But to extrapolate 25 years’ worth of data more than 80 years into the future is, to my mind, totally unacceptable.  Yet it is this “acceleration” that generated the alarmist press following publication.
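To make the three stages concrete, here is a minimal sketch in Python of how such an “acceleration” is produced and then extrapolated.  The monthly series is synthetic (a steady trend plus noise standing in for the NASA altimetry record), so the numbers are purely illustrative; only the procedure mirrors the three stages described above.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1993, 2018, 1 / 12)                      # decimal years, Jan 1993 - Dec 2017
gmsl = 3.0 * (t - 1993) + rng.normal(0, 4, t.size)     # ~3 mm/yr trend plus noise (illustrative only)

# Stage 1 (mathematical): least-squares quadratic fit  y = a*x^2 + b*x + c
x = t - t[0]
a, b, c = np.polyfit(x, gmsl, 2)

# Stage 2 (physical label): the "acceleration" is defined as twice the quadratic coefficient
acceleration = 2.0 * a                                 # mm/yr^2

# Stage 3 (extrapolation): evaluate the fitted quadratic far outside the data range,
# where the a*x^2 term dominates everything else
rise_by_2100 = np.polyval([a, b, c], 2100 - t[0]) - np.polyval([a, b, c], 0.0)
print(f"'acceleration' = {acceleration:.4f} mm/yr^2, extrapolated rise 1993-2100 = {rise_by_2100:.0f} mm")
```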

I will now discuss several aspects concerning the sea level data, including what could have been done differently in 2018,  what the current situation is and what can be learnt from studying the last 10 years’ worth of data.  Prior to 2012 the data, and any related analysis, were more erratic, but with time a steadier picture is emerging.

Situation in 2018.

The Feb 2018 data were the first data analysed.  The data used were derived from the NASA site https://climate.nasa.gov/vital-signs/sea-level/.  These data do not include any of the adjustments introduced by Nerem et al., but the calculated values for slope and “acceleration” are not too dissimilar.  In discussing “acceleration” the process can be made simpler by subtracting the straight-line fit, i.e. the slope, from the actual readings and working with what are called the “residuals”.  Using the residuals or the full data gives the same “acceleration”, but it is easier to see the trends using the residuals.
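As a minimal sketch of the residuals step (again on a synthetic series standing in for the NASA file), removing the best-fit straight line leaves the x² coefficient, and hence the “acceleration”, unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 25.0, 1 / 12)                         # years since Jan 1993
gmsl = 3.0 * t + 0.04 * t**2 + rng.normal(0, 4, t.size)  # illustrative series only

slope, intercept = np.polyfit(t, gmsl, 1)
residuals = gmsl - (slope * t + intercept)               # the quantity plotted in the figures below

a_full = np.polyfit(t, gmsl, 2)[0]                       # x^2 coefficient from the full data
a_resid = np.polyfit(t, residuals, 2)[0]                 # x^2 coefficient from the residuals
print(2 * a_full, 2 * a_resid)                           # identical "accelerations"
```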

Figure 2 below shows quadratic and sinusoidal fits starting in Jan 1993 and running up to Feb 2018, using the latest values of sea levels.  (See Note 1 below)

Figure 2

Situation in 2021.

A later set of results refers to the period from Jan 1993 up to Aug 2021.  The above graph has been updated in Figure 3 to show that the x² coefficient is now 0.0441.

Figure 3

This update shows that the sinusoidal curve is still a reasonable alternative to the quadratic curve, although the period could be extended to 24 or 25 years and the amplitude increased slightly.  The 22-year period and the amplitude have been retained for the sake of continuity, although the quadratic curve is reassessed at each update, which has the effect of slightly modifying the slope and residuals.

Study of the last 10 years of data.

The NASA data were analysed over the last 10 years on a quarterly basis using the full Aug 2021 set of data.  The “acceleration” was calculated for each time step using the data from 1993 up to that date.  In tandem with this, a second set of “accelerations” was derived by assuming the data followed the pure sinusoidal curve shown in the figures above.  In the long term these “accelerations” will approach zero, but when the period being analysed is similar to the wavelength of the sinusoid, unrepresentative “accelerations” will be derived.  Note 2 gives more detail for the sinusoidal curve, to explain the process and results and to illustrate the curve-fitting procedure.
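A minimal sketch of this expanding-window calculation, assuming a synthetic series (trend plus a 22-year, 3.5 mm sinusoid plus noise) in place of the NASA record, so only the shape of the result is meaningful:

```python
import numpy as np

def expanding_acceleration(t, y, first_end, step=0.25):
    """2x the quadratic coefficient from fits over [t[0], end] for quarterly end dates."""
    ends, accs = [], []
    end = first_end
    while end <= t[-1]:
        m = t <= end
        accs.append(2.0 * np.polyfit(t[m] - t[0], y[m], 2)[0])
        ends.append(end)
        end += step
    return np.array(ends), np.array(accs)

rng = np.random.default_rng(2)
t = np.arange(1993, 2021.75, 1 / 12)
sinus = 3.5 * np.sin(2 * np.pi * (t - 1993) / 22.0)            # the assumed pure sinusoid
data = 3.0 * (t - 1993) + sinus + rng.normal(0, 4, t.size)     # stand-in for the NASA series

ends, acc_data = expanding_acceleration(t, data, first_end=2011.75)  # end dates over the last 10 years
_, acc_sine = expanding_acceleration(t, sinus, first_end=2011.75)    # comparison curve, as in Figure 4
```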

The results of these 2 sets of analyses are plotted in Figure 4 as “accelerations” against the date the NASA data set was released and analysed.  For example, the two “accelerations” for 2018 are those derived using a quadratic fit over the period Jan 1993 to Jan 2018, for the NASA data and for the pure sinusoidal curve respectively.  The graph on the left shows the “acceleration” for the NASA data and for the sinusoidal curve.  Their shapes are very similar but offset by about 3 years.  Shifting the sinusoidal curve by 3 years shows how closely the 2 curves follow each other.  This close fit is of interest.  The NASA “acceleration” peaked in about Jan 2020 and has been reducing since then, dropping by about 8% over 2 years.  Working backwards from the peak, the “accelerations” keep reducing until, at about Oct 2012, they were negative, i.e. a deceleration.  The close fit with the shifted sinusoidal curve may be coincidental, but there seems to be a clear message there: the high “acceleration” quoted by Nerem et al. is more an outcome of the method used than something inherent in the basic data.

Figure 4

The next few years will be telling as to whether the sinusoidal approach is more representative of the actual behaviour and whether the NASA data continue to produce a reducing “acceleration”.  If the actual “acceleration” curve follows the trend of the sinusoidal curve, the perceived “acceleration” will have halved in about 6 years and reached near-zero values in about 15 years.

1.      Nerem, R. S., Beckley, B. D., Fasullo, J. T., Hamlington, B. D., Masters, D., & Mitchum, G. T. (2018). Climate-change-driven accelerated sea-level rise detected in the altimeter era (full text .pdf). Proceedings of the National Academy of Sciences of the United States of America, 115(9). First published February 12, 2018.

Note 1.  The NASA data change from month to month.  Usually this is confined to the last month or two of data, due to the method used in smoothing the readings.  In July 2020 there was a major change to all the data, by up to 2.5 mm, which had little effect on the slope, but the “acceleration” was reduced by about 0.005 mm/yr².  I have been unable to ascertain the reason behind these adjustments, but they have little effect on the overall findings.

Note 2.  The sinusoidal curve shown in Figure 5 will be analysed.

Figure 5

The “accelerations” derived from analysing this sinusoidal curve over a range of periods from 2.5 years to 70 years are shown in Figure 6.

Figure 6

The next 5 figures illustrate the curve fitting process for various time periods.

Figure 7 uses a short 5-year period; the fitted quadratic curve is very close to the actual sinusoidal curve and in this instance gives an “acceleration” of -0.2716 mm/yr², very close in magnitude to the curve’s maximum acceleration of 0.285 mm/yr² obtained by differentiating the equation twice.

Figure 7

Figure 8 uses a 15-year period and the “acceleration” drops to -0.0566 mm/yr².

Figure 8

Figure 9 is very close to the period used by Nerem et al. in that it uses 25 years.  The resulting “acceleration” is 0.0904 mm/yr², similar to that paraded by Nerem.

Figure 9

Figure 10 covers 35 years and results in a rapid reduction in “acceleration” to 0.0118 mm/yr². (typo corrected with thanks to Steve Case)

Figure 10

Finally, extending the period to 65 years, which is nearly 3 full periods of 22 years, results in a near-zero “acceleration”, as shown in Figure 11.

Figure 11
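The Note 2 exercise can be reproduced with a short script.  This is a sketch only: the phase of the sinusoid relative to the fitting window is an assumption here, so individual values will not match Figures 7 to 11 exactly, but the trend with window length, and the analytic maximum acceleration of about 0.285 mm/yr² for a 3.5 mm amplitude and 22-year period, correspond to the numbers quoted above.

```python
import numpy as np

A, T = 3.5, 22.0                                   # amplitude (mm) and period (years) of the sinusoid
analytic_max = A * (2 * np.pi / T) ** 2            # max |d2y/dt2| = A*(2*pi/T)^2, ~0.285 mm/yr^2

for window in (2.5, 5, 15, 25, 35, 65):            # window lengths echoing Figures 7-11
    t = np.arange(0.0, window, 1 / 12)             # monthly points over the window
    y = A * np.sin(2 * np.pi * t / T)              # assumed phase: sinusoid starting at zero
    acc = 2.0 * np.polyfit(t, y, 2)[0]             # "acceleration" from the quadratic fit
    print(f"{window:5.1f}-yr window: 'acceleration' = {acc:+.4f} mm/yr^2")

print(f"analytic maximum acceleration = {analytic_max:.3f} mm/yr^2")
```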

The following image has been added at the request of Dr. Welch (15 May 2022, 9:00 am EST):

Fig 2 from Nerem 2022 with prediction by Welch

# # # # #

About Dr. Alan Welch:

Dr. Welch received a B.Sc. (Hons 2A) in Civil Engineering from the University of Birmingham and his PhD from the University of Southampton.  He is a Chartered Civil Engineer (U.K.), a member of the Institution of Civil Engineers (U.K.) (retired), a fellow of the British Interplanetary Society, and a fellow of the Royal Astronomical Society.

Currently retired, he has over thirty years’ professional experience across many fields of engineering analysis.  Complete CV here.

# # # # #

Comment from Kip Hansen:

Dr. Welch has been working on this analysis for years and has put his findings together, at my suggestion, as an essay here.  The above is the result of many edited versions and is offered here as an alternative hypothesis to Nerem (2018) (.pdf) and Nerem (2022).  In a practical sense, Nerem (2022) did not change anything substantial from the 2018 paper discussed by Welch.

On a personal note:  This is not my hypothesis.  I do not support curve fitting in general and an alternate curve fitting would not be my approach to sea level rise. I stand by my most recent opinions expressed in  “Sea Level: Rise and Fall – Slowing Down to Speed Up”. Overall, my views have been more than adequately aired in my previous essay on sea levels here at WUWT.

I do feel that Dr. Welch’s analysis deserves to be seen and discussed.

Dr. Welch lives in the U.K. and his responses to comments on this essay will be occurring on British Summer Time : UTC +1.

Praise for his work in comments should be generous and criticism gentle.

# # # # #

Tom Halla
May 14, 2022 6:09 am

Agreed, what curve the data fits cannot be determined by much less than one possible cycle.

Alan Welch
Reply to  Tom Halla
May 14, 2022 6:14 am

Thanks Tom for your comment. In reality 2 or even 3 cycles would be required in order to be confident with the analysis. The data are nowhere near that.

Gerald the Mole
Reply to  Alan Welch
May 16, 2022 3:02 am

I seem to remember being told that for extrapolation of a time series you needed ten historic points for every one point extrapolated. Is my memory at fault or was I fed duff gen? (Duff gen: RAF WW2 slang for rubbish.)

Alan Welch
Reply to  Gerald the Mole
May 16, 2022 5:46 am

A lot depends on the quality and consistency of the data, the understanding of all the physics involved and how much chaotic behaviour is present.
Sea level measurements do not score highly in these areas.
To continue the RAF terminology perhaps Nerem is a half pint hero who might soon have to become an umbrella man with his ideas gone for six.

Gerald the Mole
Reply to  Kip Hansen
May 17, 2022 3:34 am

Kip and Alan, many thanks for your helpful replies.

Best wishes, Gerald

Latitude
Reply to  Tom Halla
May 14, 2022 12:52 pm

they cherry picked 2010

Scissor
May 14, 2022 6:20 am

Nerem will be riding into the sunset before his methodology is definitively shown to be the shit that it is.

Alan Welch
Reply to  Scissor
May 14, 2022 6:28 am

Hello Scissor. Agreed. Interestingly, in Nerem’s 2018 paper he extrapolates 80-plus years. In his 2020 paper he extrapolates 30 years. At that rate it will not be too long before he gives up extrapolation – or am I being too optimistic!

Alan Welch
Reply to  Alan Welch
May 14, 2022 6:30 am

Should have said his 2022 paper, not 2020. Still got my fingers crossed that he will see the light.

Scissor
Reply to  Alan Welch
May 14, 2022 7:15 am

He likes noisy time series data sets, coupled with some kind of fear factor, climate change and perhaps some other hook to blame humanity.

https://insidecires.colorado.edu/rendezvous/uploads/Rendezvous_2022_8463_1651104526.pdf

bigoilbob
May 14, 2022 6:34 am

Acceleration from post 1980 sea level data – about the most relevant statistical/physical period – is indeed ominous. And in spite of alt.analyses based on irrelevant data, cumulative and accumulating GHG emissions could easily result in forcings large enough to give us the “impossible” year 2100 “instant slopes” that are part of those extrapolations.

But what responsible “warmists” advocate for action based on only quadratic extrapolation? Not rhetorical.

Countermeasures should be based on properly ranged (Heaven Forbid!) modeling, coupled with stochastic, incremental economic analyses, to catalyze actions to reduce “all in” losses. IMO, sea level rise is not the worst aspect of man made warming. We can adapt to it much easier than to the drought, fungilence, pestilence, floods, temperature extremes that nearly every unbiased scientific and Ag organization world wide tells us will come without those responsible countermeasures…..

Bob boder
Reply to  bigoilbob
May 14, 2022 7:05 am

And you are a fool; doing what you advocate would wreak endless ruin on the economies of the world. Models have shown zero predictive skill to date.

fretslider
Reply to  bigoilbob
May 14, 2022 7:05 am

“man made warming”

And your evidence of that is?

Rich Davis
Reply to  bigoilbob
May 14, 2022 7:22 am

The thing about big oily boob is that you can be 100% certain that if in 11 years or so, the sinusoidal curve proves to have been correct, he will be off chasing some other specious claims with these specious claims securely memory-holed.

Barry James
Reply to  bigoilbob
May 14, 2022 8:07 am

The one thing that never ceases to amaze me is the zeal with which followers of the Climate Mafia use opinions, especially those from ignorant journalists, to dismiss factual data and observations presented by real “frontline” scientists. Here we see a prime example of this from “bigoilbob”.

Scissor
Reply to  Barry James
May 14, 2022 8:48 am

That observation, which I fully realize myself, is what gives me a pit in my stomach about the numerous possibilities of what’s next.

I am surrounded by climate zealots and I’ve come to realize that most are just harmless parrots, except a few can inflict irreparable economic damage to oneself. Even to this day, many are still paralyzed by fear of the “COVID” and wear useful cloth masks with gaping holes about the eyes, nose and mouth.

My hope is that the internet, and places like this website, provide a counterbalance to propaganda.

jorgekafkazar
Reply to  bigoilbob
May 14, 2022 8:22 am

Your imagination is to be saluted, BigOilBob! Particularly the “responsible warmists” and “unbiased scientific” parts. Well done!

LdB
Reply to  bigoilbob
May 14, 2022 10:08 am

Or we could grab some popcorn and enjoy the show.

Jim Gorman
Reply to  bigoilbob
May 14, 2022 10:51 am

From the essay.

“The third is fraught with danger as the quadratic term dominates the process when used outside the range of data.”

Did you not understand this? Anyone familiar with business of any kind will know that you can use powers (exponents) to create a curve matching your data. However, to rely on it for forecasting even 5 years in the future is not financially sound. Talk about chaotic systems being very dependent on starting points, this is no different. The exponent will continue to amplify any and all errors until the cows come home to roost.

bigoilbob
Reply to  Jim Gorman
May 14, 2022 11:04 am

“However, to rely on it for forecasting even 5 years in the future is not financially sound.”

From my post:

“But what responsible “warmists” advocate for action based on only quadratic extrapolation? Not rhetorical.”.

b.nice
Reply to  bigoilbob
May 14, 2022 1:23 pm

Get sober, blob !

DonM
Reply to  bigoilbob
May 14, 2022 5:38 pm

“responsible warmists”

you have framed the question in a manner that cannot yield a positive response.

(those Berkeley guys advocate based on only quadratic extrapolation all the time; although he is not ‘responsible’; ¿ <‘((())}} )

PCman999
Reply to  bigoilbob
May 14, 2022 12:57 pm

Responsible warmunists don’t exist.

Responsible people who believe temperatures will continue to rise steadily would push for ocean fertilization if they really thought CO2 had something to do with it.

If they were really, really responsible they would do due diligence on all the data and claims and see:
1 – people and the biosphere do better in warmer weather, even warmer than now.
2 – temperatures have been warming since 1700, not just 1850 or some other arbitrarily chosen date picked to frame human industrial development.
3 – said development has been responsible for pulling most of humanity out of abject poverty, and most sane people would be cautious about calls for immediately stopping that.
4 – it seems the most effective scary climate stories have to do with rising sea levels and forest fires, both of which are better handled directly by the localities involved than by billing the whole world for the particular choices of the few who choose to live near the coasts on land that is probably sinking for other reasons.

You know what, I am getting bored of enumerating the obvious, but how about this: responsible people don’t hand over blank cheques to every con artist who comes to the door with a scary story.

b.nice
Reply to  bigoilbob
May 14, 2022 1:22 pm

blob’s post is one of the most bizarre posts I think I have ever read.

Its as though he has taken several different hallucinogenics at the same time.

Ron Long
May 14, 2022 6:36 am

Mosquito farts in a hurricane! If natural climate cycles produce 50 meters higher and 140 meters lower sea levels (compared to current sea level), trying to finesse a millimeter acceleration from 25 years of sea level data is tilting at windmills. This is not a negative comment about Alan Welch (PhD), who is only trying to show the Nerem, et al, attempt is flawed.

Alan Welch
Reply to  Ron Long
May 14, 2022 6:45 am

Thanks Ron. Cannot comment on mosquitos as we don’t have them in the UK! The sad thing about Nerem and his work is that it is all so unbelievable, yet his findings (predictions) are taken as gospel by the powers that be.

save energy
Reply to  Alan Welch
May 14, 2022 7:32 am
Alan Welch
Reply to  save energy
May 14, 2022 9:16 am

sorry – didn’t do biology at school – I’ll stick to Engineering!!

Jeff Alberts
Reply to  Alan Welch
May 14, 2022 9:51 am

Probably just don’t have them where you live. I’m in the same boat, thankfully. In my little corner of the Pacific Northwest, I haven’t seen a mosquito around my house the entire 20 years I’ve lived here. But just 10 miles away, they will eat you alive.

Geoff Sherrington
Reply to  Alan Welch
May 14, 2022 5:53 pm

Alan,
Nice analysis, thank you.
Even though you did not study biology, can you tell the difference between male and female?
Or should you plan to study law, to better understand the USA Supreme Court?
Geoff S

Phil.
Reply to  Geoff Sherrington
May 15, 2022 8:41 am

Can you? Here’s a picture of a Polish sprinter who had her Olympic medals taken away because the test procedure classified her as a man.
[photo: Poland’s Ewa Klobukowska wins the women’s 100 metres in 1967]

A year later she gave birth to a son!
Apparently not that easy.

fretslider
May 14, 2022 7:03 am

“But it is this “acceleration” that generated the alarmist press following publication.”

All they need is a suitably alarming headline. No corrections ever follow.

TimTheToolMan
May 14, 2022 7:13 am

“The third is fraught with danger as the quadratic term dominates the process when used outside the range of data.”

Suggesting a curve fit using a quadratic exposes a phenomenally bad understanding of sea level rise.

Sea level rise is proportional to the energy absorbed by the ocean causing expansion and proportional to the energy absorbed by ice to melt it. Basically to double sea level rise we can expect to double the amount of energy absorbed by the earth.

So that means that to expect a quadratic increase in the amount of sea level rise, the top of the atmosphere energy imbalance must also experience that quadratic increase of imbalance. And that could mean a number of things for the cause, but the most obvious reason is that the anthropogenic greenhouse effect has increased by the same quadratic function.

Is that predicted? No.


Alan Welch
Reply to  TimTheToolMan
May 14, 2022 7:21 am

It would be nice to think Nerem read this site and the comments but he is too busy burying his head in the sand. A few years ago I sent an earlier and shorter version of this essay to PNAS and all was going well until it reached the referee stage and I believe Nerem may have been involved in this so out the window I went.

Izaak Walton
Reply to  Alan Welch
May 14, 2022 5:03 pm

Claiming that it “was going well until it reached the referee stage” suggests that you don’t understand the whole publication process. What other stages are there worth worrying about?

TimTheToolMan
Reply to  Izaak Walton
May 14, 2022 6:44 pm

It sounds like you know how the process works. Does every response to a paper get sent to the reviewers? Or does the journal editor arbitrarily throw out responses that don’t appear to have any merit?

TimTheToolMan
Reply to  Kip Hansen
May 14, 2022 7:07 pm

That may well be true. But Izaak asked the question “What other stages are there worth worrying about?” and I think getting the comments into review isn’t the first step.

Izaak Walton
Reply to  TimTheToolMan
May 14, 2022 9:20 pm

Hi Tim,
From the PNAS’s website:
“PNAS will consider manuscripts for review as long as all components listed above are included in the submission. “
So all manuscripts that are properly formatted get sent out for review.

TimTheToolMan
Reply to  Izaak Walton
May 14, 2022 10:11 pm

“PNAS will consider manuscripts for review”

Isn’t the same as “PNAS will review manuscripts as long as…” Consider for review means there is a step of consideration before the step of review.

meab
Reply to  TimTheToolMan
May 14, 2022 11:22 am

Very insightful comment. In general, curve fitting only works if you know the physical process driving the change to let you know what function to use. We don’t understand many of the processes.

However, we do know that a quadratic is completely out of the question. GHG radiation imbalance is logarithmic with CO2 concentration, and disequilibrium models of atmospheric CO2 concentration are sublinear: the greater the disequilibrium between ocean and atmospheric CO2 concentration, the greater the proportion of emitted CO2 that will be removed by the ocean (about half of all CO2 emitted gets removed by the ocean now and that proportion will likely grow). Therefore, the added heat from radiation imbalance will grow slower than logarithmically. That’s an entirely different function than quadratic.

You can tell that Nerem is either an amateur or a shyster by the fact that he didn’t even try to justify using a quadratic and then, having chosen a completely wrong function, proceeded to misuse the fit.

meab
Reply to  Kip Hansen
May 14, 2022 5:26 pm

I know who Nerem is. He might be a competent astrodynamicist, but he’s an extremely poor excuse for a mathematician/statistician. You do not fit data with any old function and then extrapolate the future with that function.

If the sea level rise is accelerating, one thing we know for absolute sure is that it is not accelerating quadratically.

Jim Gorman
Reply to  Kip Hansen
May 15, 2022 4:45 am

It does indicate that he is unfit for physical science analysis. Just read the comments here, including yours, to understand that there are people who understand what should be done.

He is probably unfit due to the teaching today. Imagine going to school now and being told there is no way to experiment on climate because we don’t have a second earth. Consequently, you must BELIEVE in the hypothesis that GHGs, and especially CO2, are the reason for climate change. Everything you do is directed toward proving that erroneous hypothesis rather than trying to falsify it. In addition, your advisors tell you to not bother with advanced training in statistics since there are math majors who can do that work. (My nephew got that advice but thankfully ignored it on his way to a doctorate in microbiology.) Most of those math majors have NO training in physical science and wouldn’t know a periodic function if it bit them on the arse.

I expect the reviewers of this paper have had the same training and wouldn’t recognize a basic failure in assuming how physical phenomena actually work.

Phil.
Reply to  meab
May 15, 2022 8:50 am

Actually if sea level rise is accelerating we know ‘for absolutely sure’ that sea level is not following a linear function.

Graham
Reply to  Kip Hansen
May 15, 2022 2:30 am

I am quite sure that you are right.
We had a big announcement 2 weeks ago here in New Zealand from a couple of boffins from Victoria University in Wellington.
Naish fronted it but James Renwick was in the background pushing the scary (news) that our sea level rise was or is about to accelerate.
New Zealand coastal sea levels have been very constant at 1.5 mm per year.
That is what tide gauges are showing.
These two muppets announced that their work with models shows that in the next 40 years sea level will rise by 40 centimeters, compared with the measured rise of 6 centimeters over the last 40 years.
They also predicted that a lot of our coast is gradually sinking and that will accelerate into the future.
I have it on good authority that the Auckland harbour is actually rising and that it is becoming slightly shallower, 1 or 2 millimeters per year.
The general public who are buying very expensive houses in our beach resorts do not seem worried as values have doubled in the last 5 years for waterfront properties.

Phil.
Reply to  Graham
May 15, 2022 9:04 am

“I have it on good authority that the Auckland harbour is actually rising”

The published GPS data shows that it is sinking by about 0.5mm/yr.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019JB018055

Phil.
Reply to  Kip Hansen
May 15, 2022 12:47 pm

Well I was answering “the Auckland harbour is actually rising”.

Phil.
Reply to  Kip Hansen
May 16, 2022 8:09 am

He was quite clear, he said: “They also predicted that a lot of our coast is gradually sinking and that will accelerate into the future.
I have it on good authority that the Auckland harbour is actually rising “
He was referring to the ‘coast’, in other words ‘the land mass’, which is the point I addressed. I had no difficulty with his language.

TonyL
May 14, 2022 7:26 am

???????????????

Standard practice would be to calculate the 95% confidence interval lines + and -. Plot up the confidence interval lines together with the fitted line and the data.
See how the confidence interval, low value to high value, grows rapidly as you extrapolate past the end of the data.
Marvel at how the confidence interval goes “Floor to Ceiling” at 80 years.
This puts a stop to:
1) overfitted data
2) unjustified extrapolations that go too far

Yet nobody is doing it. WUWT???
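A minimal sketch of the confidence-interval check TonyL describes, assuming a synthetic 25-year monthly series in place of the real record and using statsmodels for the quadratic fit; the point is simply how quickly the band on the fitted mean widens once the quadratic is evaluated beyond the data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = np.arange(0.0, 25.0, 1 / 12)                      # ~25 years of monthly values
y = 3.0 * t + 0.04 * t**2 + rng.normal(0, 4, t.size)  # illustrative series only

X = sm.add_constant(np.column_stack([t, t**2]))       # design matrix [1, t, t^2]
fit = sm.OLS(y, X).fit()

t_new = np.array([25.0, 50.0, 107.0])                 # end of data, +25 yr, and roughly year 2100
X_new = sm.add_constant(np.column_stack([t_new, t_new**2]))
for tn, (lo, hi) in zip(t_new, fit.get_prediction(X_new).conf_int(alpha=0.05)):
    print(f"t = {tn:5.1f} yr: 95% CI on the fitted mean = [{lo:7.1f}, {hi:7.1f}] mm")
```

Prediction intervals for individual future values would be wider still than the band on the fitted mean shown here.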

TonyL
Reply to  Kip Hansen
May 14, 2022 7:44 am

Yes!
Exactly Correct.
A quadratic with confidence intervals, Perfect.
Bonus points: They open up “wider than a barn door” in right short order.

Nick Stokes
Reply to  TonyL
May 14, 2022 1:04 pm

“Yet nobody is doing it. WUWT???”

You could try looking at what Nerem et al actually did. Here is a copy of the abstract. The confidence intervals are calculated, 65±12 cm by 2100. Not wall to wall.


The abstract doesn’t give the CIs for rate of increase, but a later highlighted section does (3±0.4mm/yr). He put those together in a standard quadratic calculation.

b.nice
Reply to  Nick Stokes
May 14, 2022 1:41 pm

Any acceleration in the satellite sea level data is purely from adjustments at changes in satellites.

These adjustments are far larger than any acceleration component.

There is no acceleration at tide gauges.

The whole issue is moot and pointless.

Nick Stokes
Reply to  Kip Hansen
May 14, 2022 5:18 pm

Kip,
One can argue about whether the bounds are too spread to be useful. One can argue about whether they are correctly calculated. I was responding to a claim that they hadn’t been calculated at all. The whole paper is about error analysis.

I don’t think his calculation depends on RCPs. It is just time series based. He compares with RCP based modelling.

DonM
Reply to  Nick Stokes
May 14, 2022 6:16 pm

*

Last edited 12 days ago by DonM
Barry James
May 14, 2022 7:51 am

When used in the context of anything to do with global climate science, the use of the “NASA” brand should be immediately recognised as pertaining to the propaganda produced at Columbia University under the direction of the current leader of the Climate Mafia, Gavin Schmidt, the successor to the originator of the climate scam, James Hansen. President Carter, in 1981, appointed James Hansen as the director of GISS, which up until that time had been primarily concerned, as a division of NASA, with space studies (the “SS” in GISS).

Hansen immediately repurposed GISS into its current role as the principal source and expounder of the fraudulent climate propaganda being published by GISS in NASA’s name. A significant example of this is the false sea level data being posted by “NASA”. This is the product of the Jason series of experiments, which attempt to measure sea levels using satellite based radar altimetry, a system which is unfit for purpose, as acknowledged by the leading participant, NOAA, until recently. I recall that Kip Hansen made similar disparaging remarks about this last May in WUWT.

NOAA’s website “Tides and Currents” provides excellent data, based on the historical global records from tide gauges, maintained by CO-OPS (formerly PSMSL), which finds that, since records have been kept globally, sea levels have been rising at the steady rate of 1.7/1.8 mm per year with zero acceleration. The “NASA” data is pure bullshit.
https://tidesandcurrents.noaa.gov/sltrends/globalregionalcomparison.html?fbclid=IwAR3-pDl-npQ2o8gRnUIA43sJRJw_0bvxsLZkUTAm2kkYiSsNR-t_thkiTdk

Steve Case
Reply to  Barry James
May 15, 2022 5:05 am

Great link, I’ve got it bookmarked(-:

TonyL
May 14, 2022 7:57 am

?????????
A second issue, this one for the statistics people.
A while back, I fit quadratics to tide gauge station annual data (not monthly), and calculated the p-values for the lines. As you know, a p-value < 0.05 is significant; larger than that, not significant.

What happened?
for Y = aX^2 + bX + c

The X^2 term *is* significant (the “a” term).

The X term is not, and by a large margin. (the “b” term)

So how on Earth does the X^2 term get to be significant, while the more fundamental linear X term is *Not*?????
What is going on here?
Does the more fundamental term losing its significance mean that all higher orders are insignificant, regardless of p-value calculation?
This seems to be the only rational way to interpret the statistics.
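A minimal sketch of how those per-coefficient p-values are obtained (statsmodels OLS on a synthetic annual series), together with one likely explanation: over a range of positive year values t and t^2 are strongly collinear, and the p-value of the linear term depends on where the time origin is placed, whereas the quadratic coefficient and its p-value do not change when t is centred.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
t = np.arange(0.0, 100.0)                               # 100 annual values, years since start
y = 1.8 * t + 0.01 * t**2 + rng.normal(0, 20, t.size)   # illustrative tide-gauge-like series

def quad_pvalues(x, y):
    """p-values of the linear and quadratic coefficients in y = c + b*x + a*x^2."""
    res = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()
    return res.pvalues[1], res.pvalues[2]

print("raw t:     p(linear), p(quadratic) =", quad_pvalues(t, y))
print("centred t: p(linear), p(quadratic) =", quad_pvalues(t - t.mean(), y))
```

With the raw time axis, the linear coefficient is the fitted slope at year zero, which can be poorly determined even when the overall curve is well constrained, so its p-value alone does not make the quadratic term meaningless.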

Jeff Alberts
Reply to  TonyL
May 14, 2022 9:55 am

“What happened?
for Y = aX^2 + bX + c”

X/yz – P[dq]


TonyL
Reply to  TonyL
May 14, 2022 12:00 pm

WUWT has a huge readership which has an uncommon level of knowledge and skill in a variety of scientific and mathematical fields.
I came across this very interesting occurrence when poking around some data. I was hoping some statistics person could explain the significance of what I was looking at.
So far, not a single response.
Oh well……

Phil.
Reply to  TonyL
May 17, 2022 6:49 am

Without seeing the data I would suggest it’s related to the nature of the curve. If it’s nearly a pure quadratic with a very small linear term it wouldn’t surprise me, alternatively you could have a nearly pure linear and small quadratic and again the fit of each term wouldn’t be equally significant.

Rud Istvan
May 14, 2022 8:04 am

I have a more basic problem with Nerem. He used the NASA data. Follow the link to find it is the satellite era sea level data. In guest post ‘Jason 3–fit for purpose?’ I showed it was not and never could be. In the second guest post on the new Sentinel 6 I showed that it wasn’t either.
In addition to the inherent accuracy problem, the NASA satellite data does NOT close with ARGO estimated thermosteric rise plus ice sheet loss. The dGPS corrected long record tide gauge estimate of ~2.2 mm/year does close. See guest post ‘Sea Level Rise, Acceleration, and Closure’ for details.

Nerem’s is a goofy extrapolated analysis of a bad data set.

Rud Istvan
Reply to  Kip Hansen
May 14, 2022 4:23 pm

Kip, was not meaning to criticize you or Welch. Was simply trying to point out that this specific ground has been well trodden here.

Steve Case
Reply to  Kip Hansen
May 15, 2022 5:10 am

We do get a lot of new readers here each week who are not familiar with the subjects and the authors.
________________________________________________

BINGO! Posting something once isn’t productive.

jorgekafkazar
May 14, 2022 8:13 am

Thanks to Kip Hansen and to Dr. Welch for a very thorough analysis and for generating the essay. I don’t disagree, but I look at it from a more theoretical POV. We were told in school that differentiation, i.e., taking the first derivative, was an inherently inaccurate process (especially compared to, say, taking an integral.)

Imagine that you are estimating the slope of a “curve” like the raw data, above. You can see that the slope changes very rapidly at every inflection point, going all the way from positive to negative, or vice versa. The slope is, essentially, indeterminate at each such point. Taking the second derivative (“acceleration”) through many such points, no matter how many, involves considerable imagination.

And, indeed, Dr. Welch shows us above how the mere assumption of one type of curve over another changes the result radically, giving entirely different values for acceleration; even the sign is different. We have little reason to favor one type curve over another, especially over such a small interval, which would seem to be Dr. Welch’s point.
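A minimal sketch of the noise point, assuming a noisy sine series: the finite-difference derivative of the noisy series scatters far more than the true derivative, while the running integral stays close to the true integral.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]
noisy = np.sin(t) + rng.normal(0, 0.05, t.size)              # sin(t) plus measurement noise

deriv_err = np.std(np.gradient(noisy, dt) - np.cos(t))       # error of the numerical derivative
integ_err = np.std(np.cumsum(noisy) * dt - (1 - np.cos(t)))  # error of the numerical integral
print(f"noise std 0.05 -> derivative error ~{deriv_err:.2f}, integral error ~{integ_err:.3f}")
```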

Clyde Spencer
Reply to  jorgekafkazar
May 14, 2022 9:24 am

We were told in school that differentiation, i.e., taking the first derivative, was an inherently inaccurate process (especially compared to, say, taking an integral.)

Are you saying that if you take the derivative of a function and then integrate it, you won’t get the same function? Yes, you lose the value of the constant in the original function, but the basic shape of the function remains the same. It seems that you have your claim backwards.

jorgekafkazar
Reply to  Clyde Spencer
May 14, 2022 10:25 am

I’m not saying anything close to that. What I’m saying is that if the first derivative is inherently inaccurate, the second is total hoskaplop.

Tim Gorman
Reply to  jorgekafkazar
May 14, 2022 3:39 pm

“The slope is, essentially, indeterminate at each such point.”

A point at which the slope changes is not necessarily indeterminate. Meaning the first derivative is not inherently inaccurate.

The slope of y = sin(x) changes from positive to negative at pi/2 radians. Yet this is not an inflection point. The derivative of sin(x) at pi/2 does exist. Same for y = x^2.

I think your definition of an “inflection point” needs to be stated more precisely.

Having a pole at a certain point in a function does not rule out piecewise analysis either. But it does cause problems with statistical analysis of such a function.

Izaak Walton
Reply to  jorgekafkazar
May 14, 2022 5:11 pm

Jorge,
you appear to be confusing numerical differentiation with analytically differentiating a curve. Numerical differentiation magnifies any noise while numerical integration smooths it out, but analytically at least it is 100% accurate.

Rxc
May 14, 2022 8:24 am

The very first lesson I learned from my first lab course was that you cannot just fit any curve you want to the data. I did an experiment that produced 7 data points. I had special access to a mainframe computer in 1969 which had several curve fitting functions. With 7 data points I chose a 7th order polynomial to fit the data, and got a curve that went exactly thru all 7 points.

The prof had a good talk with me about using a 7th order polynomial fit for a function that should have been pretty linear, and I learned that you have to have a theoretical basis for the curve you want to fit before you do the fit. Without a basis, you just present the data and the uncertainty bars. Without a basis, a fitted curve is misleading.

Alan Welch
Reply to  Rxc
May 14, 2022 9:04 am

Your point shows the danger of fitting a polynomial. The highest power term dominates once outside the range of data. One small point: with 7 data points you only need to go up to the 6th order, but that is not critical.

Clyde Spencer
Reply to  Rxc
May 14, 2022 9:28 am

Yes, what you did is called “over-fitting.” It is a common error for those with access to computing power beyond their understanding of the processes they are modeling.

stinkerp
May 14, 2022 8:45 am

Cherrypicking. Look at the 4-year acceleration from 1999 to 2002. After the fall in sea levels in 1998 due to a super El Niño. And the 5-year acceleration from 2011 to 2015. After the fall in sea levels in 2010. Notice a pattern here? #junkscience


stinkerp
Reply to  stinkerp
May 14, 2022 8:55 am

The CU satellite-measured sea level graph appears to show acceleration over its brief 29-year period. Notice the lack of acceleration over the entire 160-year period of the Battery Park tide gauge record. It’s all about perspective and where you start and end your measurements.


stinkerp
Reply to  stinkerp
May 14, 2022 8:59 am

And for more perspective on sea levels, let’s look at a million year timeline.

https://www.researchgate.net/profile/Leonid-Sorokin/publication/299446187/figure/fig1/AS:669401518968870@1536609167977/Changes-in-Global-Mean-Sea-Level-during-the-last-18-million-years-based-on-the-content.ppm

Humans’ effect on sea levels, if even measurable, is insignificant.

SocietalNorm
May 14, 2022 8:48 am

The only reason you would apply a second-order (or linear, or 3rd order, or sinusoidal, etc.) curve fit would be if you believed that there was a physical real-world reason to apply it. You might test several hypotheses with different curve fits to see if you can eliminate some hypotheses.
Also, you would need to have data from a long enough time period to be able to determine what order an equation would have to be to get a decent curve fit at the boundaries of the data and not have spurious behaviour at the ends that drives things to ridiculous extremes.
Second-order, third-order, fourth-order or even sinusoidal curves can all match each other very closely over a limited slice. They end up at very different points, though.

Alan Welch
Reply to  SocietalNorm
May 14, 2022 9:10 am

Point accepted. One question to ask is why a 22-year (25-year?) curve. Are there any physical reasons for this? One could be that the satellite data only cover 95% of the sea. The missing portions are mainly the Arctic and Antarctic oceans. Should there be a decadal oscillation across the + or – 66 degree latitudes, this could induce a sinusoidal change. We are only looking for +/- 3.5 mm.

David A
May 14, 2022 8:52 am

While it is very good to examine the satellite SL methodology, I think any such project should include error bars, and even more importantly, note that tide gauges, which are far more accurate and located where folk actually live, show that SL rise is not only much lower annually, it is NOT accelerating at all. (Which should also increase satellite error bars.)

Alan Welch
Reply to  David A
May 14, 2022 9:14 am

Thanks David. There may be a perceived low acceleration of tidal gauge readings of the order of 0.01 mm/yr², but even this could be caused by very long (millennial) variations for which 200 years (max) of readings are not adequate for analysis.

Steve Case
Reply to  Alan Welch
May 15, 2022 5:47 am

There may be a perceived low acceleration of tidal gauge readings of the order of 0.01 mm/yr2,
_________________________________________________________

Thanks for posting that, tide gauges with ~100 years of data do show a narrow distribution that centers around 0.01 mm/yr². Of 63 such tide gauges a quadratic fit shows the following distribution:

mm/yr² — number of gauges
-0.02 — 1
-0.01 — 2
 0.00 — 14
 0.01 — 23
 0.02 — 13
 0.03 — 8
 0.04 — 2

Steve Case
May 14, 2022 9:04 am

The close fit with the shifted sinusoidal curve may be coincidental but there seems to be a clear message there, that is the high “acceleration” quoted by Nerem et al. is more an outcome of the method used and not inherent in the basic data.

Figure 10 covers 35 years and results in a rapid reduction in “acceleration” to 0.118 mm/yr2.
_____________________________________________________________

Part of the method used was to alter the data from 1992-1998, which produced a 2nd order polynomial (quadratic) that curved upwards.

Figure 10 shows 0.118 mm/yr²?? Should be 0.0118 mm/yr² as shown on the chart

Kip, thanks for dogging this issue. The spectre of rising sea levels inundating coastal cities around the world is one of the biggest scares Climate Science generates. A google news search on “sea level rise” produces endless stories and images of flooded ocean front properties. You don’t have to follow very many links to find predictions of a meter or more of sea level rise by 2100. First one out of the box on my search a few minutes ago: Daily Mail

Alan Welch
Reply to  Steve Case
May 14, 2022 9:34 am

Thanks Steve for pointing out my Typo.

In the intro by Kip he mentioned the Nerem 2022 paper. In that paper Nerem produces a plot (fig 2) similar to the RHS of my fig 4. He only goes up to end 2020 and says the acceleration is leveling off. I added my “accelerations” up to end 2021 to his fig 2 and they show a definite downturn from the peak. It will be interesting to see whether he continues this curve into the future. He says “level” whilst I’ve shown an 8%(ish) drop over 2 years and am predicting (dangerous) an extra 12%(ish) over the next 2 years. Before 2012 the “accelerations” were affected by the shorter time periods involved and a bigger influence from El Nino and La Nina events.

whiten
May 14, 2022 9:04 am

There is this thing, the sea’s boundaries.
The body of sea water in between two different seas.

Does anybody know of any research, scientific or otherwise, ever undertaken to study such a thing?

As regards that important indicator of climate, sea level variation, the sea boundaries could offer a much clearer indication of sea level rise, as they are naturally bound to respond to it, like a dedicated slave to a master.

I myself do not know of any research whatsoever on such a matter!

Anyone who could help with some more enlightenment on this particular issue?

Perhaps, maybe, Rud could!

cheers

whiten
Reply to  Kip Hansen
May 14, 2022 11:34 am

Thanks Kip.

Let’s see how we can deal with the terminology and its deformation, and try to keep things as rational as possible, outside of some blasting brilliance of wiki-like mess.

Ok, let’s say that I, whiten, am considering fictional things… that according to the terminology cannot be or have any meaning.

Like, for example;
that if one flies around the world observing the coastal bodies of water, one will clearly see not only different large bodies of water touching along considerably long “lines” (geographically), but also where these ‘lines’ touch the coast.
In this fictional scenario, one can simply record, using nothing more than a photo camera, where such ‘lines’ actually touch the coasts, all around the world, wherever such ‘lines’ or boundaries actually exist.

So, leaving aside for a moment all the might of complex terminology.

What would you say about this fictional proposition of mine… does it carry any kind of meaning at all in reality?
According to you, does such a claimed natural condition exist or not?

Mind you, this proposition, fictional or not, addresses a (maybe supposed) natural condition that predates humanity.

I am open to correcting my superficial understanding in this regard, but it will help a lot if the argument is not based solely on the terminology and its assumed authoritative correctness.

I assure you that I am only trying to contribute, first and foremost towards my own understanding… of this particular issue.

cheers

whiten
Reply to  Kip Hansen
May 14, 2022 2:18 pm

Good, sorry for wasting your time.

But if I may say, and still further persist.

Look at the map, global or otherwise, and there are many seas there, put on the map with their own names, next to each other, with indisputably different physical parameters and different characteristics, very clearly distinguishable from each other… in reality, via simple observation.
They consist of large bodies of water with different, easily observable characteristics, and therefore with different thermal responses.

Well, I had hoped that we could get a little further, but that seems difficult now, with your flatly dismissive response.
Your ball, your choice.

cheers

Old Man Winter
Reply to  whiten
May 14, 2022 2:06 pm

Methinks you should’ve added the “/s” tag to your original comment OR whatever
you’ve taken/done is affecting you more than you realize & a personal rule of not
commenting after using/doing it should’ve been made.

whiten
Reply to  Old Man Winter
May 14, 2022 2:40 pm

Old Man Winter

Thanks for your reply.

Not being judgemental, but your reply is too obscure in relation to my comment.

It would have been simpler if you had helped by telling me that all my understanding, and the point I tried to make in my comment, is basically just based on fiction or fantasy.
That there does not happen to be anything like what I claimed in reality.

Can you at least confirm such a simple thing to me… so I clearly get the message?

To me, believe it or not, this is a serious enough issue,
so I have got to do my best not to misunderstand.

Thanks in advance.

cheers

ATheoK
Reply to  whiten
May 14, 2022 7:30 pm

Not entirely following your question, whiten.

“The body of sea water, in between two different seas.”

Are you referring to something like the Panama Canal, where the Pacific Ocean is 20 cm (7.9 inches) higher than the Atlantic side of the canal?

PSMSL

“Sea level is about 20 cm higher on the Pacific side than the Atlantic due to the water being less dense on the Pacific side, on average, and due to the prevailing weather and ocean conditions. Such sea level differences are common across many short sections of land dividing ocean basins.

The 20 cm difference is determined by geodetic levelling from one side to the other. This levelling follows a ‘level’ surface which will be parallel to the geoid (see FAQ #1). The 20 cm difference at Panama is not unique. There are similar ‘jumps’ elsewhere e.g. Skagerrak, Indonesian straits.

If the canal was open sea and did not contain locks, i.e. if somehow a deep open cutting had been made rather than the canal system over the mountains, then there would be a current flowing from the Pacific to the Atlantic.

An analogy, though imperfect because there are many other factors, is a comparison between Panama and the Drake Passage off the south tip of Chile, which has a west-east flow. (The flow in the Drake Passage is primarily wind-driven, but Pacific-Atlantic density must play some role.)”

Or are you referring to the abundance of imagery documenting water heights along the world’s coasts?
Much like early 20th century pictures of New York’s battery tide station?

One should not dismiss the abundant “fall-lines” worldwide where fresh water waterways meet tidal waters.

I’ve lived near fall-lines most of my errant 6 plus decades and I have yet to see a fall-line registering significantly higher tidal waters.
Placing them in the same level of danger from sea level rise as New York’s battery.

whiten
Reply to  ATheoK
May 15, 2022 2:05 am

ATheoK

Thank you for asking.

Let’s just put it this way.

What I am talking about addresses a (maybe supposed) natural condition that exists between different bodies of water.
That holds true in concept, and otherwise, for both kinds, the large deep oceanic bodies of water and the smaller shallow coastal bodies of water, but in the case of the smaller, shallower ones it is easier to observe and distinguish.

I am talking about actual seas contiguous with each other, with different parameters, like different salinity, different clarity, different sea floors, different depths, different coastlines in front of them, and other different physical characteristics, separated by a body of water which is like a mix of both, where the respective sea floors are separated by a “sea floor boundary” that is likewise a mix of both.

Yes, for large and deep oceanic bodies of water this is difficult, very complex and hard to see, but for the shallower, smaller coastal bodies of water it is easier and more realistic.

Why I think this is important:

Because the separating body of water in between contiguous seas is bound to shift one way or the other over time, as the result of sea level variation… therefore carrying the sea level variation signal.

Different contiguous bodies of water have different thermal responses.
The condition of maintaining thermal equilibrium between them, and between each of them and the atmosphere above, under sea level variation, will result in the position of the “boundary” shifting one direction or the other over time.

There is no way to expect any benefit by way of argument on any given issue if the validity of the main core subject is not clearly established first.
So the first point to consider or establish here is
whether this is a real, observable natural condition, or simply just a figment of my imagination.

Thank you, and please feel free to engage further, if you would like to.

cheers

ATheoK
Reply to  whiten
May 15, 2022 6:09 am

It seems to me that you are expecting a replicable neutral sea level measurement environment where water bodies meet.

Sort of like the mechanically isolated tide measurement stations worldwide.

The places where two large bodies of water meet are often tumultuous. Whirlpools and maelstroms are located globally between two bodies of water;

“The Saltstraumen strait, located near the Arctic Circle in Bodø, in the Norwegian county of Nordland, has the strongest maelstrom in the world. About 400 million cubic meters of water funnels through the narrow strait each day, resulting in highly turbulent waters and a giant maelstrom. Ships are allowed to pass through this strait only in specific periods of the day when the currents are less dangerous in nature.”

On a smaller basis, my little Hydra Sport 17 foot center console is tossed around like a cork when transiting some smaller junctions between bodies of water; e.g., Cape May Canal which joins Cape May Harbor to the Delaware Bay. Small tide changes cause dangerous water conditions.

The world is full of separate but adjacent bodies of water. I’m sure many of their intersections have tidal stations measuring sea level height.
Isolating them and then comparing sea level height to neighboring bodies of water may satisfy your curiosity.

whiten
Reply to  ATheoK
May 15, 2022 7:33 am

ATheoK

Thanks again.

First, if you read my first comment here, it is pretty clearly stated that I know of no research or study on the main core subject I tried to bring up,
so I am not expecting or asking for any kind of replication…

The main point raised by me, especially in the first comment, is this:

If a condition like real, observable physical barrier buffers does exist between the contiguous large bodies of water that we call seas, which we also have represented on maps as seas, even with their own specific names,
then;
is there actually any research or study related to or about such a condition?

You see, in my reply to you I think I made it very clear that, when it comes to large deep oceanic bodies of water,
such an expectation of any substantial research could be far too much to expect, and most probably not achievable.

Also I think that attempting to research and study smaller and smaller compartmentalized portions of contiguous bodies of water, of the size of bays and harbors and capes, would be completely useless.

Again, for what it may be worth:

“Why I think this is important.

Because the separating body of water in between contiguous seas is bound to shift one way or the other over time, as the result of sea level variation… therefore carrying the sea level variation signal.”

I am not claiming the possibility of another method of direct, actual sea level measurement.

If the barrier buffers, the ‘boundaries’ between contiguous seas, carry the sea level variation signal,
then if there is any adverse or out-of-the-ordinary acceleration of sea level rise, it will be detectable.
But, well, only if such condition(s) is/are real in nature, and then researched and studied.

cheers

alastair gray
May 14, 2022 9:06 am

All the tide gauges in the world are contained herein
https://psmsl.org/data/obtaining/map.html#plotTab
all of these charts tend to show an overall linear rise or fall in relative local sea level with quite a large amount of random noise.
Not a single tidal gauge anywhere in the world shows a tendency to acceleration. Therefore whatever supposed acceleration can be wrung out of satellite data sets, GPS-corrected land sites or whatever is a piece of totally worthless data manipulation by pseudo-scientists who should find more worthwhile targets for their data manipulation skills.
Looking for psychokinesis would be a more rewarding and intellectually more honest enterprise.

Phil.
Reply to  alastair gray
May 15, 2022 9:34 am

From that dataset:

Phil.
Reply to  Kip Hansen
May 15, 2022 1:21 pm

Yes, monthly vs annual both show the acceleration. Alastair Gray chose the PSMSL dataset so I took an example from it.

Walter Sobchak
May 14, 2022 9:07 am

Proving once again, as if more proof were needed, that figures do not lie, but liars figure.

observa
May 14, 2022 9:10 am

In hype they trust-
SIDS study shows the risks of science hype (msn.com)
Just perfume the stench of BS with some sciencey computer output and let noble cause do the rest. What could be more noble than saving a few bubbies every year? Now let me think……?

markl
May 14, 2022 9:46 am

The “narrative” has long since replaced science.

Bob
May 14, 2022 1:08 pm

Kip, I think you did the right thing encouraging Dr. Welch to complete and show us his work. Your personal note is well taken. Most of this is above my head, but the important point is that if a party is going to use curve fitting to make their point, they should use the proper curve. This seems so obvious after reading it that I am left wondering why this topic hasn’t been discussed here at WUWT before.

Alan Welch
May 14, 2022 1:14 pm

In the UK it’s coming to the end of a hectic day trying to keep up with all the comments.

I would like to take this opportunity to thank Kip for our discussions over the last 2 years or so, his support in helping me put together this essay and for his spiritual support following the death of my daughter from cancer 3 months ago. I’m a non-believer but his words helped me considerably.

Thanks also to all (or most) of those posting comments. This was my first taste of posting an essay on WUWT and the most difficult part was keeping track of new comments and responses to comments coming in.

Sorry for the break in responding but I needed to get my climbing bean frame erected! Ledbury, UK is at 80 metres elevation and using Nerem’s last “acceleration” I worked out the sea would be hampering my bean growing on 23rd Feb, 3319 so I had better get a move on!!

Funnily, we must also thank Nerem for his contributions otherwise we would have nothing to write about.

b.nice
May 14, 2022 1:18 pm

As the satellite data acceleration is based totally on adjustments and changes in satellites, the whole thing is a load of cow droppings anyway.

Jim Gorman
May 14, 2022 1:35 pm

Dr. Welch –> Thank you for a short and concise description of one of the problems with doing curve fitting without a mathematical function to describe the variables involved and what each of them does to the output.

Curve fitting some data is only done to show an enhanced understanding of the function you are describing. In this case curve fitting is being used to show “my curve will show what will happen 80 years from now” by Nerem.

I’m sad to say this scientist knows little about science and math. I understand simple regression can give you a function OF ONE VARIABLE. However, climate is not determined by one variable (i.e., CO2). You will end up needing something like “ax^2 + by^2 + cz^2 + dx + ey + fz + some trig functions” to describe what is happening close enough to forecast out 80 – 100 years. This is where current GCM models fail.

Robert of Ottawa
May 14, 2022 3:35 pm

Well, may I suggest we continue to accumulate, and not “correct”, the data for 1000 years and then make a decision. Curve fitting over such a small period, even less than one cycle, is bad practice.

Izaak Walton
May 14, 2022 4:58 pm

So when exactly did the sea level stop rising in a linear fashion (the standard dogma around here is that that is a natural consequence of the end of the Little Ice Age) and start varying sinusoidally as suggested by Dr. Welch? If you take his curve fitting and extrapolate back a hundred years or more you will see just how wrong it is.

Geoff Sherrington
Reply to  Izaak Walton
May 14, 2022 6:20 pm

IW
Alan is not suggesting that evidence exists for a natural or anthro sinusoidal variation, so much as showing that two different choices for curve fitting lead to a didactic conclusion. Geoff S

Izaak Walton
Reply to  Geoff Sherrington
May 14, 2022 9:26 pm

Geoff,
using the same analysis I could fit the data with any curve whose Taylor series expansion had a positive coefficient for the x^2 term. Which doesn’t mean anything except that everything looks parabolic locally.

Furthermore Dr. Welch’s analysis also seems flawed since he uses one equation for his fit to the data and then a second different one to fit the acceleration. When he says that the sine wave is shifted by 3 years when discussing Fig. 4 he is saying that he needs two curves to fit the data, one for the values themselves and then a completely different one for the acceleration. If a sinusoid fit was valid then the same sinusoid would fit both the data and the acceleration derived from it.

Alan Welch
Reply to  Izaak Walton
May 15, 2022 1:19 am

The 3 year shift comes about due to the fact that I stuck to the 22 year cycle first derived in 2018. Later I said the curve could be improved with a slightly longer period, but I didn’t want to keep tuning all the while. It is the close fit of the shapes that is of interest. In his 2022 paper Nerem plots a similar graph (fig 2) but stops a year earlier, saying it has levelled off.
I have added my sinusoidal based accelerations to this figure and sent a copy to Kip. I am unable to submit figures so I’ll contact Kip to see if he can upload it in a comment.

Alan Welch
Reply to  Izaak Walton
May 15, 2022 1:10 am

Sorry for the late reply due to time differences.
I am not implying SLR is rising sinusoidally, but the readings may have a sinusoidal variation due to the method of measurement. I have already commented that the less-than-100% coverage may have a bearing due to possible decadal ocean oscillations. We are only talking of a 3.5mm amplitude, but it shows that applying the methodology of Nerem et al. leads to similarly exaggerated “accelerations”. I have then predicted the trend (again dangerous) for the next few years – so watch this space.

Geoff Sherrington
May 14, 2022 6:33 pm

Thank you Kip and Alan for perseverance with this analysis.
And Kip, for continuing to request proper error analysis and reporting that is a fault with many climate papers.
Which raises the point that the measurement accuracy of satellite distance measurements seems to be a larger error than that quoted in conclusions from papers. Reliance seems to be placed on old saws like the central limit theorem and the law of large numbers. Has this apparent discrepancy been resolved? Or do we have two scientific communities who use and accept different ways to calculate accumulated error? Both cannot be right.
Also, there needs to be more clarity in explanations why discrete tide gauge data shows no acceleration while satellite data are sometimes claimed to. Explanations for fundamental discrepancies are better than ignoring them. Geoff S

spangled drongo
Reply to  Kip Hansen
May 14, 2022 9:30 pm

Kip, when the latest mean sea level [Feb 2022] is 99mm LOWER than the first MSL [May 1914] at probably the best Pacific Ocean tide gauge, there is not only no acceleration, there is possibly no SLR [as Morner always said].
This is supported by the increase in Pacific atoll areas, too.

http://www.bom.gov.au/ntc/IDO70000/IDO70000_60370_SLD.shtml

Alan Welch
Reply to  Kip Hansen
May 15, 2022 1:28 am

Geoff, may I throw in my penny’s worth. The difference between satellite and tide gauges is that they measure different things, as their coverage is different and they are affected by decadal ocean oscillations differently. There may be a low background (0.01 mm/yr2) acceleration in all readings but this could easily be due to long term (millennium) variations that may exist in sea levels and temperatures.

Steve Case
Reply to  Alan Welch
May 15, 2022 6:06 am

See my post above LINK about the 0.01mm/yr² acceleration.

Climbing beans? Glad to know some people have a real life with real goals. I’ve tried Kentucky pole beans, and the local deer population just north of Milwaukee, WI loves them.

Moby
Reply to  Kip Hansen
May 15, 2022 5:26 am

 
Kip, you say tide gauge acceleration is not showing up in any tide gauges at all, in particular since the start of the satellite record 30 yrs ago (if I have understood you correctly).
Looking at Sea Level Info (sidebar of this website) there are some tide gauges with records longer than 100 years. Of those practically all show acceleration in the last 30 years (except Scandinavia, which is affected by Post Glacial Rebound).
For example: Fremantle long term 1.75 mm/yr, 1992-2022 4.85 mm/yr; Fort Denison 0.78, 3.34; The Battery 2.89, 4.19; Honolulu 1.55, 2.58; Trieste 1.32, 3.3; Newlyn (UK) 1.91, 4.00; Brest 1.03, 3.02; Key West 2.53, 4.77; San Diego 2.21, 3.3.

Steve Case
Reply to  Moby
May 15, 2022 7:47 am

 there are some tide gauges with records longer than 100 years. Of those practically all show acceleration in the last 30 years
______________________________________________________

comment image

That spaghetti chart is a few years old now, but an update would probably show the same thing. Namely, the rate of sea level rise over three decades or so tends to undulate, porpoise, or whatever you would like to call the 30-some-odd-year woggle.

Your post illustrates the point that extrapolating a mere 30 years or so of sea level rise out to 2100 is likely to produce large errors.

Phil.
Reply to  Kip Hansen
May 15, 2022 1:39 pm

What do you term the “proper uncertainty range”?

Phil.
Reply to  Kip Hansen
May 16, 2022 8:00 am

That’s the resolution of the instrument. They’re measuring a quantity with a daily fluctuation of about 2m; multiple measurements certainly can yield a mean with better uncertainty than that.

Phil.
Reply to  Kip Hansen
May 16, 2022 9:52 am

You’re calculating a mean value and the uncertainty of the mean does depend on the number of points:

“The average value becomes more and more precise as the number of measurements N increases. Although the uncertainty of any single measurement is always Δx, the uncertainty in the mean Δx_avg becomes smaller (by a factor of 1/√N) as more measurements are made.”

https://www.physics.upenn.edu/sites/default/files/Managing%20Errors%20and%20Uncertainty.pdf
I suggest you read it.
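For readers who want to try the 1/√N claim numerically, here is a minimal Python sketch using only the standard library. The fixed “true level” and the ±2 cm Gaussian error are illustrative assumptions, not tide-gauge specifications, and it only covers the purely random error case that the quoted handout describes:

```python
import math
import random
import statistics

TRUE_LEVEL = 700.0   # hypothetical fixed water level, cm (assumed for illustration)
SIGMA = 2.0          # assumed purely random measurement error, cm

def mean_of_n(n):
    """Average n noisy readings of the same fixed level."""
    return statistics.mean(TRUE_LEVEL + random.gauss(0, SIGMA) for _ in range(n))

for n in (1, 10, 100, 1000):
    # Spread of the mean across many repeated experiments of size n
    means = [mean_of_n(n) for _ in range(2000)]
    print(f"N={n:5d}  spread of the mean ~ {statistics.stdev(means):.3f} cm"
          f"   (sigma/sqrt(N) = {SIGMA / math.sqrt(n):.3f} cm)")
```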

Tim Gorman
Reply to  Phil.
May 16, 2022 3:13 pm

I think *YOU* are the only one that needs to read your link.

kip: “measurements taken at different times of a changing object (property).”

Uncertainty is a measure of accuracy, not precision. More measurements only define the precision with which you can calculate the mean; they do not help with accuracy.

If you have *any* systematic error your precisely calculated mean from many measurements will still be inaccurate.

Look at “C” in your link. High precision, low accuracy. That’s what you get from measurements of different things by taking lots of measurements.

Look at “D”. High precision and high accuracy. You simply cannot get this from measuring different things.

Kip is correct here. Consider two piles of boards with an infinite number of boards in each pile. One pile has boards of length 4′ and the other 8′. You can take an infinite number of measurements, average them, and get a very precise mean, 6′.

But that mean will be very inaccurate because it won’t describe *any* of the boards in the distribution. Situation “C” – high precision but low accuracy.

(ps: this doesn’t even consider the uncertainty of the length of each individual board which will also affect the accuracy of that very precise mean you calculate)

Phil.
Reply to  Tim Gorman
May 16, 2022 3:49 pm

That’s not what is being done here, you have a quantity which is varying continuously over time which is being measured at regular intervals (6 mins) and the average is calculated monthly. During that period the measurements are varying by about 2m to a resolution of 2cm.

Tim Gorman
Reply to  Phil.
May 16, 2022 4:24 pm

How is that average calculated? (max-min)/2? You realize that is *not* an average of a sinusoid, right? Nor is the uncertainty of a set of measured values, each with an individual uncertainty, diminished by taking an average.

An average requires performing a sum. The uncertainty of a sum is additive, either directly or by root-sum-square. The uncertainty goes UP, not down. When you divide that sum by a constant the constant has no uncertainty so the uncertainty of the average remains the uncertainty of the sum.

The *only* way you can reduce the uncertainty is to ensure that *all* uncertainty is random and has a Gaussian distribution, i.e., a +error for every -error. Then you can assume a complete cancellation of uncertainty. That is *very* difficult to justify for field measurements where the station can suffer from all kinds of problems such as hysteresis (i.e. the device reads differently going up than coming down), non-linearity between max measurements vs min measurements, drift in calibration, etc.

When you are talking about trying to find an acceleration of mm/time^2 where the measurements have an uncertainty in cm, your signal gets lost in the uncertainty!

Phil.
Reply to  Kip Hansen
May 17, 2022 5:47 am

“+/- 2 cm is the specification for the accuracy of each individual measurement by a modern tide gauge. Nothing to do with the variation of the tides.  That accuracy defines the uncertainty range of the individual measurements, the 6 minute averages, the daily averages, the monthly averages, the annual averages…..”

Not true: a measurement is made every 6 mins, so the daily average is the average of 240 such measurements; the standard error of that mean is the standard deviation of those measurements divided by √240 (~1/15), and the monthly value would be divided by √7200 (~1/80). The accuracy of the mean is not limited to the resolution of the instrument. By the way, I understand that the accuracy specified by GLOSS for modern tide gauges has been changed to +/-1cm.
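As a rough numerical illustration of the √240 and √7200 factors mentioned here, a sketch with an assumed 2 m semidiurnal tide and ±2 cm random noise, not real gauge data. Note that the standard deviation it divides is dominated by the tidal variation itself, which is exactly the point disputed further down the thread:

```python
import math
import random
import statistics

AMPLITUDE_CM = 100.0       # assumed tidal amplitude (about a 2 m range)
NOISE_CM = 2.0             # assumed random instrument error
PERIOD_MIN = 12.42 * 60    # semidiurnal tidal period in minutes

def reading(t_min):
    """One 6-minute reading of an idealised tide plus random error."""
    true_level = AMPLITUDE_CM * math.sin(2 * math.pi * t_min / PERIOD_MIN)
    return true_level + random.gauss(0, NOISE_CM)

daily = [reading(6 * i) for i in range(240)]       # one day of 6-minute readings
monthly = [reading(6 * i) for i in range(7200)]    # thirty days of readings

for label, data in (("daily", daily), ("monthly", monthly)):
    sem = statistics.stdev(data) / math.sqrt(len(data))
    print(f"{label}: n={len(data)}, mean={statistics.mean(data):+.2f} cm, "
          f"stdev/sqrt(n)={sem:.2f} cm")
```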

Phil.
Reply to  Kip Hansen
May 17, 2022 8:12 am

“You are still stuck on the “accuracy of the mean” which is a simplistic mathematical idea and not a real world physical property.”

It certainly is a ‘real world physical property’, it tells you how close you are to the real mean of the population you are measuring.

“The dumbest modern computer can turn any series of numbers into an average (mean) with near infinite “accuracy””.

Certainly can not without additional data.

“It is impossible to turn “many inaccurate measurements of different conditions at different times” into a real-world accurate mean. The mean retains the uncertainty of the original measurement error.”

It does not.

“You confuse statistical ideas with the real world — and make the same mistake that many others make. No matter how many decimal points you feel are justified on the end of your calculated mean, you still must tack on the real world uncertainty from the original measurements.”

No you do not, you quote it to the first significant figure of the standard error of the mean.

“For tide gauge data, this is +/- 2 cm from the technical specs of the actual measuring instrument.”

No!

Phil.
Reply to  Kip Hansen
May 17, 2022 12:31 pm

As one who is well trained in statistics I can say that:
Many measurements of a changing object (the surface level of the sea at Point A) at different times under different conditions can be used to reduce the uncertainty range of the mean value.

“I have had a lot of practice arguing this point over the last decade.”
I’m sure you have, doesn’t make you right.
There can also be issues with the measurement such as bias, drift etc. but those have to be dealt with separately.

E.G.
full-jtech-d-18-0235.1-f4.jpg

https://journals.ametsoc.org/view/journals/atot/36/10/jtech-d-18-0235.1.xml

Tim Gorman
Reply to  Phil.
May 17, 2022 2:29 pm

You are describing what statisticians call standard error of the mean. That is the basic point of misunderstanding as to what it is.

It should be named “standard deviation of the sample means”.

The other misunderstanding that is common among statisticians is that the standard deviation of the sample means can substitute for the final uncertainty of the mean caused by uncertainty in the individual elements making up the data set.

I’ve purchased quite a number of statistics books and perused even more. They *all* ignore the uncertainty of the individual elements when calculating the standard deviation of the sample means – EVERY SINGLE ONE! There are two possible explanations – 1. They assume that all error contributed by the individual elements of the data set are only random and perfectly fit a Gaussian distribution so that all of the uncertainty in the individual elements cancel, or 2. they just ignore the uncertainty in the individual elements out of ignorance or because they just don’t care.

If you have a million data elements and each one is 50 +5 (not +/-, just +). So each measurement can be anywhere from 50 to 55. Now you select 100 samples of 100 elements each and calculate the mean of the stated values in each sample – and you get a mean of 50 for every sample. So what is the standard deviation of the sample means? ZERO! A seemingly accurate mean calculated from the means of the individual samples IF you use the standard deviation of the sample means as your measure of uncertainty.

Yet you know that can’t be right! All of the actual measurements can’t be 50 since they have identical uncertainty of +5 – a systematic error perhaps induced by a faulty calibration process.

So what you get from all the samples is a very PRECISE number for the mean but that mean calculated from just the stated value while ignoring the uncertainty associated with each stated value is *not* accurate.

At a minimum your mean will have an uncertainty of +5. But since uncertainties that are not totally random and Gaussian add, either directly or by root-sum-square, your total uncertainty will be far higher. Uncertainties that are not purely random and Gaussian stack. The more uncertain measurements you take the higher the uncertainty gets.

This should be obvious to a statistician but somehow it never is. If you think about adding two random variables what happens to the variance? The total variance goes UP. How is it calculated?

V_t = V_1 + V_2

or as standard deviations

(σ_t)^2 = (σ_1)^2 + (σ_2)^2

So the combined σ = sqrt[ (σ_1)^2 + (σ_2)^2 ]

Combining uncertainties of measurements of different things is done exactly the same way – root-sum-square. Root-sum-square assumes that *some* of the uncertainties will cancel but not all.

Think about building a beam using multiple 2″x4″ boards to span a foundation. Each board will have an uncertainty. That uncertainty may not be the same if you have different carpenters using different rulers. Let’s say you use three boards end-to-end to build the beam. What is the total uncertainty of the length of the beam?

You simply can’t assume that all the uncertainty is totally random and Gaussian. Your beam may come up short. If it’s too long you can always cut it but that’s not very efficient, is it? And it still doesn’t cancel the uncertainty you started with!

In this case assume all the uncertainties are equal and are +/- 1 inch. Then the uncertainties directly add and your beam will be X +/- 3 in, where X is 3 times the stated value of the boards.

Again, this should all be easily understood by a statistician. But it just seems to elude each and every one! It can only be that they are trained to ignore uncertainty by the textbooks. And that *does* seem to be the case based on the statistics textbooks I’ve read. The only ones that seem to learn this are physical scientists and engineers who live and die by proper evaluation of uncertainty. It’s especially true for engineers where ignoring uncertainty has personal liability (i.e. money!) consequences!
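For the beam example above, a short sketch contrasting the two propagation rules the comment names (direct addition versus root-sum-square); the three ±1 inch figures are the ones assumed in the comment:

```python
import math

# Assumed uncertainties of the three boards, in inches (from the example above)
uncertainties = [1.0, 1.0, 1.0]

# Worst-case (direct) addition: the uncertainties simply stack
direct = sum(uncertainties)

# Root-sum-square: assumes partial, but not total, cancellation
rss = math.sqrt(sum(u ** 2 for u in uncertainties))

print(f"beam length uncertainty, direct addition : +/- {direct:.2f} in")   # +/- 3.00 in
print(f"beam length uncertainty, root-sum-square : +/- {rss:.2f} in")      # +/- 1.73 in
```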

Phil.
Reply to  Kip Hansen
May 18, 2022 9:46 am

Well as long as you keep getting it wrong you will need to be rebutted.
We’re talking about making measurements to estimate the mean of a quantity and we get a rambling example of using three boards to build a beam! To make it worse the example uses a weird constraint on the uncertainty so that the beam can only be X+3, +1, -1 or -3, how is that relevant?
The other example: “If you have a million data elements and each one is 50 +5 (not +/-, just +). So each measurement can be anywhere from 50 to 55.”
So that would be properly described as 52.5±2.5.
I set that up on Excel as a random number between 50 and 55 and ran some simulations of 100 samples
mean: 52.347 52.435 52.554 52.551 52.688
sd: 1.38 1.47 1.35 1.40 1.52
sem: 0.138 0.147 0.135 0.140 0.152
All sample means comfortably within the range of ±2sem as expected.

Out of curiosity I repeated it for samples of 400 and got the significantly narrower range of values
mean: 52.50 52.56 52.56 52.48 52.52
sd: 1.48 1.49 1.46 1.50 1.46
sem: 0.074 0.074 0.073 0.075 0.073
again comfortably within the range of ±2sem
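Phil’s Excel experiment is easy to reproduce; here is a minimal Python equivalent using a continuous uniform draw between 50 and 55 (an analogue of the RANDBETWEEN setup described, not a Gaussian):

```python
import math
import random
import statistics

def run(sample_size, trials=5):
    """Draw uniform values in [50, 55] and report mean, sd and sd/sqrt(n) for each trial."""
    for _ in range(trials):
        sample = [random.uniform(50.0, 55.0) for _ in range(sample_size)]
        mean = statistics.mean(sample)
        sd = statistics.stdev(sample)
        sem = sd / math.sqrt(sample_size)
        print(f"n={sample_size}: mean={mean:.3f}  sd={sd:.2f}  sem={sem:.3f}")

run(100)   # means scatter around 52.5 with sem of roughly 0.14
run(400)   # means scatter more tightly, sem of roughly 0.07
```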

Tim Gorman
Reply to  Phil.
May 19, 2022 4:08 pm

“Well as long as you keep getting it wrong you will need to be rebutted.”

He’s not getting it wrong.

“The other example: “If you have a million data elements and each one is 50 +5 (not +/-, just +). So each measurement can be anywhere from 50 to 55.”
So that would be properly described as 52.5±2.5.”

Nope, you just changed the stated value, i.e. the value you read off the tape measure from 50 to 52.5. Did you change tape measures?

“I set that up on Excel as a random number”

In other words you are confirming the consequent. A logical fallacy. You set your experiment up to prove what you wanted to prove.

You changed the example to work out exactly how you needed it to in order to assume random, Gaussian error. Random errors provided by Excel are assured to be Gaussian!

The point of the uncertainty being only positive was to highlight the fact that there would be no cancellation of random error. You changed it so you could assume random, Gaussian error without having to justify the assumption.

When you are measuring different things at different times you simply cannot just assume a random, Gaussian error distribution. That is what statisticians do but it is wrong! It’s why statistics textbooks never show any uncertainty values for any data elements in any distribution set – only stated values!

(p.s. if you think you can’t have only positive or only negative error then try thinking about using a gauge block marked 10mm while actually being 9mm (or 11mm) because of an error in manufacturing. Nothing random about that error, nothing Gaussian about it)

Phil.
Reply to  Tim Gorman
May 20, 2022 8:06 am

“The other example: “If you have a million data elements and each one is 50 +5 (not +/-, just +). So each measurement can be anywhere from 50 to 55.”
So that would be properly described as 52.5±2.5.”
Nope, you just changed the stated value, i.e. the value you read off the tape measure from 50 to 52.5. Did you change tape measures?
“I set that up on Excel as a random number”
In other words you are confirming the consequent. A logical fallacy. You set your experiment up to prove what you wanted to prove, ”

No I set it up exactly as described, you said “each measurement can be anywhere from 50 to 55”, so I generated a series of numbers that met that description.

“You changed the example to work out exactly how you needed to in order to assume random, Gaussian error. Ransom errors provided by Excel are assured to be Gaussian!”

As said several times I did not use a Gaussian error, I used the function RANDBETWEEN which generates a series of numbers between the max and min values. The ones I used approximated a uniform distribution, e.g. one series was as follows:
50-51.25 25 values
51.25-52.5 24 values
52.5-53.75 27 values
53.75-55 24 values

“The point of the uncertainty being only positive was to highlight the fact that there would be no cancellation of random error. You changed it so you could assume random, Gaussian error without having to justify the assumption.”

No, as shown above I set it up exactly as described by you. You asserted that the mean would be 50, whereas I showed that it would converge on 52.5. I can only assume that your description of the example wasn’t what you intended.

“When you are measuring different things at different times you simply cannot just assume a random, Gaussian error distribution.”

Which I did not do, as pointed out multiple times

(p.s. if you think you can’t have only positive or only negative error then try thinking about using a gauge block marked 10mm while actually being 9mm (or 11mm) because of an error in manufacturing. Nothing random about that error, nothing Gaussian about it)”

No and it’s what a real scientist/engineer would eliminate by calibration.

Jim Gorman
Reply to  Phil.
May 18, 2022 8:11 am

I read through the study you linked. The authors appear to have mixed “accuracy” and “uncertainty” all together. Most of what they did was to detect inaccuracies compared to a reference, or bias due to time.

Uncertainty appears in each and every measurement, even those made with reference devices. It is partly due to the resolution whereby there is no way to know what the value beyond the resolution actually is. It can also be due to systematic things that reduce the ability to match conditions when the measuring device was calibrated along with any drift.

Clyde Spencer
Reply to  Phil.
May 18, 2022 10:05 am

I think the problem that you don’t perceive is that the “population” you are sampling is not a fixed population. It is a composite of many additive/subtractive astronomical sinusoids with periods of up to at least 19 years, (including 209 centuries for the solar perigee) and some random weather-related parameters. Depending on the interval of time over which you take several samples, one will get very different numbers that lead to a large variance when you calculate the standard deviation. The mean will drift (actually oscillate) depending on when the samples were taken.

Phil.
Reply to  Clyde Spencer
May 18, 2022 10:31 am

I perceive that and understand it very well, I first had to deal with the misleading comments here made with reference to a ‘fixed’ population. So the idea that the mean does not get closer to the true mean as the number of samples is increased has been successfully rebutted, hopefully we won’t hear that nonsense again.
Regarding the sinusoidal variation of tides, the major period is twice a day, that is sampled every six minutes so 240 times a day and the average is usually reported monthly (so 7200 measurements).

Tim Gorman
Reply to  Phil.
May 19, 2022 4:30 pm

“So the idea that the mean does not get closer to the true mean as the number of samples is increased has been successfully rebutted, hopefully we won’t hear that nonsense again.”

It hasn’t been rebutted. You keep assuming all error cancels (random and Gaussian) when that just isn’t true in the real world.

You set your examples up so the error cancels. You failed to address where the error does *NOT* cancel. So you didn’t actually rebut anything.

You even admitted that systematic (i.e. NON RANDOM) error exists in the measurements but apparently you think that will cancel as well! Sorry, but it won’t!

You didn’t even rebut the fact that uncertainty in the measurements which form a sine wave winds up with the uncertainty showing up in the average! You even assume the average of a sine wave is zero – i.e. it doesn’t oscillate around a set value as opposed to zero. What does zero mean in sea level?

Nick Stokes
Reply to  Kip Hansen
May 18, 2022 11:58 am

“You are still stuck on the “accuracy of the mean” which is a simplistic mathematical idea and not a real world physical property.”
The mean is not a real world physical property. There is no instrument that will measure a mean. You have to calculate it.

The persistence of this dumb idea that more samples does not improve the accuracy of the mean is bizarre. Anyone actually trying to find something out, as in drug testing, say, pays a lot of money to get many measurements. They know what they are doing.

Say you have a coin, and want to test whether it is biased (and by what amount). One toss won’t tell you. You need many.

If you score 1 for heads, 0 for tails, what you do is take the mean of tosses. The mean of 2 or 3 won’t help much either. But if you take the mean of 100, the standard error if unbiased is 0.05. Then it is very likely that the mean will lie between 0.4 and 0.6. If it doesn’t, there is a good chance of bias. More tosses will make that more certain, and give you a better idea of how much.

This is all such elementary statistics.
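The coin example is easy to check numerically; a quick sketch of the spread of the mean of 100 fair tosses:

```python
import random
import statistics

def mean_of_tosses(n):
    """Score heads as 1 and tails as 0, then average n tosses of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Repeat the 100-toss experiment many times and look at the spread of the mean
means = [mean_of_tosses(100) for _ in range(10_000)]
print(f"spread of the mean of 100 tosses: {statistics.stdev(means):.3f}")
# For an unbiased coin the theoretical value is sqrt(0.25/100) = 0.05,
# so the mean of 100 tosses almost always falls between 0.4 and 0.6.
```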

Jim Gorman
Reply to  Kip Hansen
May 19, 2022 4:38 pm

A coin toss has two certain values, a die has 6 certain values, polls have certain answers. They have infinitely accurate and precise discrete values. Probabilities rule the day.

None of these are continuous physical phenomena that have uncertainties when they are measured. Measurements have a limit of their resolution. Nothing is exactly accurate and precise.

Mathematicians that have no engineering background or even mechanical expertise do not understand what this means. They would build a Formula 1 engine by assuming that each and every measurement was exact and ignore the tolerances involved. I’ll bet none of them understand why valve lapping is done in an engine.

Tim Gorman
Reply to  Jim Gorman
May 21, 2022 2:30 pm

Good point. I suspect you are correct. Not one understands valve lapping or what plastigauge is used for, all journal bearings are exact with no uncertainty.

Tim Gorman
Reply to  Nick Stokes
May 19, 2022 4:50 pm

“The persistence of this dumb idea that more samples does not improve the accuracy of the mean is bizarre.”

What is bizarre is that you and the rest of the statisticians on here think that all error is random and cancels!

You apparently can’t even conceive that someone might be doing measurements with a gauge block that is machined incorrectly. Every measurement made with it will have a fixed, systematic error that is either all positive or all negative – i.e. no cancellation possible.

It simply doesn’t matter how many measurements you take using that gauge block or how many samples you take from those measurements. The means you calculate *will* carry that systematic error with it. When you have multiple sample means that are all carrying that systematic error then the population mean you calculate using the sample means will also carry that systematic error. The mean you calculate will *NOT* be accurate. The more samples you take the more precisely you can calculate a mean but that is PRECISION of calculation, it is *NOT* accuracy of the mean! The standard deviation of the sample means only determines precision, not accuracy.

See the attached picture. The middle target shows high precision with low accuracy. *THAT* is what you get from systematic error. Systematic error doesn’t cancel. And *all* field measurements will have systematic error. And if you are measuring different things each time the error will most likely not be random either.

The problem with most statisticians is they have no skin in the game like engineers do. I assure you that if you have personal liability for the design of something that physically impacts either a client or the public you *will* learn quickly about uncertainty. If you don’t you will be in the poor house in a flash! And perhaps in jail for criminal negligence if someone dies.

“Say you have a coin, and want to test whether it is biased (and by what amount). One toss won’t tell you. You need many.”

This is probability, not uncertainty. You can’t even properly formulate an analogy.

Difference-Between-Accuracy-And-Precision-.png
Phil.
Reply to  Tim Gorman
May 20, 2022 7:22 am

““The persistence of this dumb idea that more samples does not improve the accuracy of the mean is bizarre. “
What is bizarre is that you and the rest of the statisticians on here think that all error is random and cancels!
You apparently can’t even conceive that someone might be doing measurements with a gauge block that is machined incorrectly. Every measurement made with it will have a fixed, systematic error that is either all positive or all negative – i.e. no cancellation possible.”

As I have pointed out multiple times that’s not the problem being discussed. Real scientists and engineers such as myself deal with that problem by calibration.

Tim Gorman
Reply to  Phil.
May 21, 2022 2:32 pm

Calibration drifts over time. Does each measuring device have its own dedicated calibration lab that is used before each measurement?

Tim Gorman
Reply to  Tim Gorman
May 23, 2022 5:49 am

No reply. I can only guess that you didn’t consider drift in the measuring device. And you call yourself an engineer?

Jim Gorman
Reply to  Phil.
May 17, 2022 11:27 am

What you are discussing as Standard Error is often taught incorrectly and used incorrectly. Read this site:

https://byjus.com/maths/standard-error/

Let me summarize where some of the misunderstandings originate.

1) You must declare whether your data is the entire population or if it consists of a sample. This is important.

2) If it is a sample, then the mean of those samples is a “sample mean”. This is an estimate of the population mean.

3) The standard deviation of the sample(s) IS THE STANDARD ERROR. More importantly you DO NOT subdivide this standard deviation of the sample(s) by a number called N in order to reduce it.

4) Another name for Standard Error is Standard Error of the sample Mean, i.e. SEM.

5) The SEM/SE provides an INTERVAL within which the population mean may lie. The Sample Mean is only an estimate of the population mean, not the true mean.

6) Now here is the key mathematically.

SEM/SE = σ / √N where

SEM/SE –> standard deviation of the sample distribution
σ –> Standard Deviation of the population
N –> is the sample size

7) Now that we have said that we have a sample, we can calculate the population statistical parameters.

Population Mean = Sample mean
Population σ = SEM/SE * √N

Note: the Central Limit Theorem says that with sufficient sample size and number of samples, you will get a sample means distribution that is normal regardless of the distribution of the population data.

8) “N” is never the number of data points when dealing with a sample unless you have only one sample. In that case to find the population standard deviation you would multiply by the square root of the number of data points in your sample, say √4000.

9) If you declare your data to be the entire population then there is no reason to perform sampling at all. IOW, you don’t need to have a sample size. You just calculate the population mean and standard deviation as usual. The number of data points only enters into the standard deviation.

Here is another site to read:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1255808/ 

Play with this site to find out what happens with sampling.

Sampling Distributions (onlinestatbook.com) 
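For readers who want to experiment without the linked applet, here is a minimal sketch of the sampling-distribution relationships in points 6 and 7 above (using an arbitrary skewed population, purely for illustration, per the Central Limit Theorem note):

```python
import math
import random
import statistics

# Hypothetical skewed population, purely for illustration (not sea level data)
population = [random.expovariate(1 / 10.0) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

n = 100  # sample size
sample_means = [statistics.mean(random.sample(population, n)) for _ in range(2000)]

print(f"population sd              = {pop_sd:.2f}")
print(f"sd of the sample means     = {statistics.stdev(sample_means):.2f}")
print(f"population sd / sqrt(n)    = {pop_sd / math.sqrt(n):.2f}")
# The distribution of sample means is roughly normal even though the population is skewed.
```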

Phil.
Reply to  Jim Gorman
May 17, 2022 5:20 pm

“What you are discussing as Standard Error is often taught incorrectly and used incorrectly.”
Maybe so, however I was using it correctly, I consistently referred to the standard error of the mean.

Tim Gorman
Reply to  Phil.
May 17, 2022 8:50 pm

“Maybe so, however I was using it correctly, I consistently referred to the standard error of the mean.”

Except the standard error of the mean is not the uncertainty of the mean. And it is truly better described as the standard deviation of the sample means in order to emphasize that it is not the uncertainty of the sample means.

Phil.
Reply to  Tim Gorman
May 18, 2022 9:56 am

No it is best described as a measure of how closely the sample mean will be to the actual mean, see the calculated example above.

Tim Gorman
Reply to  Phil.
May 18, 2022 5:14 pm

“No it is best described as a measure of how closely the sample mean will be to the actual mean, see the calculated example above.”

If you don’t include the uncertainty of the individual elements in your calculation of each sample mean then you have no idea of how accurate that sample mean actually is. You must carry that uncertainty into the calculation of the sample means.

If your data is “stated-value” +/- uncertainty then your sample mean should be of the same form – “stated-value” +/- uncertainty.

Not doing this means you have made an assumption that all error is random and Gaussian. It may be an unstated assumption but it is still an assumption. It means you assume that the mean you calculate from the sample means is 100% accurate.

When you are measuring different things you have no guarantee that all error is random and Gaussian so you can’t just assume that the calculated mean of a sample is 100% accurate. Therefore you *must* carry that uncertainty through all calculations using that uncertainty.

I can only point you back to my example of an infinite number of measurements, each of which comes back as 50 + 5. If you pull 100 samples each with 100 elements and calculate the means of each sample using only the stated value of 50, then the standard deviation of the sample means will be zero. But there is no way that mean can be accurate.

You keep conflating precision with accuracy. They are not the same.

Phil.
Reply to  Tim Gorman
May 18, 2022 7:42 pm

I showed the result of the calculation above, it included the uncertainty and showed the expected reduction in the standard error of the mean. Also that didn’t use a Gaussian distribution, it was ~uniform.

Tim Gorman
Reply to  Phil.
May 19, 2022 4:17 pm

“I showed the result of the calculation above, it included the uncertainty and showed the expected reduction in the standard error of the mean. Also that didn’t use a Gaussian distribution, it was ~uniform.”

You changed the error into a random, Gaussian distribution using Excel. In other words you set your experiment up to give the answer you wanted.

You set it up so the error cancelled and you didn’t have to propagate it into the calculation of the sample mean. You assumed each sample mean to be 100% accurate.

That simply doesn’t work in the real world, only in the statistician world.

Phil.
Reply to  Tim Gorman
May 20, 2022 7:15 am

“You changed the error into a random, Gaussian distribution using Excel. In other words you set your experiment up to give the answer you wanted.” 

You’ve shown yourself to have comprehension difficulties so I’ll repeat again it was not a Gaussian distribution, just a random value between the stated uncertainty limits.

“You set it up so the error cancelled and you didn’t have to propagate it into the calculation of the sample mean. You assumed each sample mean to be 100% accurate.”

No I used the error in the calculation of the sample mean, no such assumption was made.
Kip stated that the measured value could be any value between 50 and 55 so I generated a random series of numbers between 50 and 55, exactly as he specified.
I then calculated the mean and standard deviation of that series of 100 numbers.
mean: 52.347 52.435 52.554 52.551 52.688
sd: 1.38 1.47 1.35 1.40 1.52
sem: 0.138 0.147 0.135 0.140 0.152
All sample means comfortably within the range of ±2sem as expected.

Out of curiosity I repeated it for samples of 400 numbers and got the significantly narrower range of values
mean: 52.50 52.56 52.56 52.48 52.52
sd: 1.48 1.49 1.46 1.50 1.46
sem: 0.074 0.074 0.073 0.075 0.073

Phil.
Reply to  Phil.
May 20, 2022 8:34 am

“Kip stated that the measured value could be any value between 50 and 55”

Sorry Kip, it should have been ‘Tim’

Phil.
Reply to  Kip Hansen
May 20, 2022 10:30 am

“Phil, Jim, Tim, Nick ==> This little discussion has gone far afield. The mathematics of finding a mean are not in questions.”

Actually they have been and I’ve been trying to keep the discussion to that question and avoid issues of calibration and building beams.

“The real issue is that when Tide Gauges take a measurement, the number recorded is a representation of a range, 4 cm wide (2 cm higher than the number recorded and 2 cm lower than the number recorded). That is notated as X +/- 2 cm. 
Thus, tide gauge records can be subjected to “find the mean” — but it has to be a mean of the ranges, not the mid-points.”

Which is exactly what I did, and as shown the uncertainty of the mean decreases as the number of samples taken increases.

The resolution of an instrument says that any value in the range X+/- 2 will be recorded as X. As long as the range of variation in the quantity being measured is greater than that resolution then the mean can be determined to an accuracy which depends on the number of readings taken. In the case of the Battery tide gauge the tidal range is about 2m which is measured every 6 minutes.

Phil.
Reply to  Kip Hansen
May 21, 2022 1:47 pm

So if you measure a tide with range of 2m using a measure which is marked every 4cm and reported as the midpoint of each 4cm interval it would be impossible to determine the mean more accurately than ±2cm?

Phil.
Reply to  Kip Hansen
May 21, 2022 5:35 pm

OK i’ll use a ruler to measure sin(x)+1 with that resolution, equivalent to a tide of 2m from 0 to 2.
1.00 1.05 1.10 1.16 1.21 1.26 1.31 1.36 ……..
which will be measured as:
1.00 1.04 1.08 1.16 1.20 1.24 1.32 1.36 ……

Phil.
Reply to  Phil.
May 22, 2022 5:19 am

So when I measure sin(x)+1 from 0-2π in 120 intervals with that resolution I get a mean of 0.999 instead of 1.000.
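A sketch reproducing this calculation: sin(x)+1 sampled at 120 points over one cycle, each reading recorded at the nearest 4 cm graduation (values in metres, so a 0.04 m grid):

```python
import math
import statistics

N = 120        # samples over one full cycle, as in the comment
GRID = 0.04    # 4 cm graduations on the measuring pole, in metres

true_values = [math.sin(2 * math.pi * i / N) + 1.0 for i in range(N)]
# Each reading is recorded as the nearest 4 cm mark (i.e. +/- 2 cm resolution)
measured = [round(v / GRID) * GRID for v in true_values]

print(f"true mean     = {statistics.mean(true_values):.4f}")
print(f"measured mean = {statistics.mean(measured):.4f}")
# The quantised readings recover the mean of 1.000 to within a few millimetres.
```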

Tim Gorman
Reply to  Phil.
May 23, 2022 5:48 am

“So when I measure sin(x)+1 from 0-2π in 120 intervals with that resolution I get a mean of 0.999 instead of 1.000.”

You didn’t include an uncertainty interval for each measurement nor did you properly propagate the uncertainty to the mean.

Phil.
Reply to  Tim Gorman
May 23, 2022 7:39 am

I did it exactly the way one would do it if using the pole method of tide measurement with 4cm intervals as I defined it above. Recovers the mean of the sine wave to within 1mm.

inline-jtech-d-18-0235.1-f1.jpg

Tim Gorman
Reply to  Phil.
May 23, 2022 4:02 pm

“I did it exactly the way one would do it if using the pole method of tide measurement with 4cm intervals as I defined it above. Recovers the mean of the sine wave to within 1mm.”

So what? You didn’t properly state the measurements using a stated value +/- uncertainty nor did you properly propagate the uncertainty into the mean.

Tim Gorman
Reply to  Phil.
May 23, 2022 5:46 am

“1.00 1.05 1.10 1.16 1.21 1.26 1.31 1.36 ……..
which will be measured as:
1.00 1.04 1.08 1.16 1.20 1.24 1.32 1.36 ……”

Where is your uncertainty?

Revised:

1.00 +/- 2cm, 1.05 +/- 2cm, 1.10 +/- 2cm, 1.16 +/- 2cm, 1.21 +/- 2cm, 1.26 +/- 2cm, 1.31 +/- 2cm, 1.36 +/- 2cm, …..

When you determine the mean, the mean will come out as, for example, 1.18 +/- sqrt[ 8 * 2cm] = 1.18 +/- 4cm

Phil.
Reply to  Tim Gorman
May 23, 2022 8:37 am

Reading problems again?
The first row was the ideal sine series, no uncertainty:
1.00 1.05 1.10 1.16 1.21 1.26 1.31 1.36 ……..

The second row is the result that would be obtained if measured using a ruler with ±2cm resolution:
1.00 1.04 1.08 1.16 1.20 1.24 1.32 1.36 ……

The resulting mean of the measurements was 0.999 as opposed to the actual 1.000.

Tim Gorman
Reply to  Phil.
May 23, 2022 4:23 pm

“Reading problems again?
The first row was the ideal sine series, no uncertainty:
1.00 1.05 1.10 1.16 1.21 1.26 1.31 1.36 ……..
The second row is the result that would be obtained if measured using a ruler with ±2cm resolution:
1.00 1.04 1.08 1.16 1.20 1.24 1.32 1.36 ……
The resulting mean of the measurements was 0.999 as opposed to the actual 1.000.”

You STILL didn’t do it right! Uncertainty doesn’t directly add or subtract from the stated value. The uncertainty is uncertainty and is handled separately from the stated value!

So if your ideal sine wave values were 1.00, 1.05, 1.10, 1.16, 1.21, 1.26, 1.31, and 1.36 then the *actual* measurements should be stated as I laid out – stated value +/- uncertainty!

1.00 +/- 2cm, 1.05 +/- 2cm, 1.10 +/- 2cm, 1.16 +/- 2cm, 1.21 +/- 2cm, 1.26 +/- 2cm, 1.31 +/- 2cm, 1.36 +/- 2cm, …..

And you get the result I gave: 1.18 +/- 4cm.

Remember, uncertainty is an INTERVAL. You don’t know the exact value within that uncertainty interval so you can’t add/subtract it from the stated value, it just remains an interval throughout.

You created a random, Gaussian error distribution and used that as the absolute error value when that is impossible in the real world where systematic error exists along side random error. That is why uncertainty is an interval and not a value.

Tim Gorman
Reply to  Phil.
May 23, 2022 5:39 am

“Which is exactly what I did, and as shown the uncertainty of the mean decreases as the number of samples taken increases.”

That is probably *NOT* what you did. If you used the RAND function you generated a Gaussian distribution of random error. In other words you set up a scenario in which all the error cancels! Exactly what doesn’t happen with a field measurement device that has both random and systematic error.

“The resolution of an instrument says that any value in the range X +/- 2 will be recorded as X.”

Resolution is *NOT* the same as uncertainty. The gauge can have a resolution of 0.1mm and still have an uncertainty of +/- 2cm. It’s stated value +/- uncertainty.

“As long as the range of variation in the quantity being measured is greater than that resolution then the mean can be determined to an accuracy which depends on the number of readings taken.”

You *still* have to follow the rules of significant digits. You are still confusing precision with accuracy.

Phil.
Reply to  Tim Gorman
May 23, 2022 8:24 am

““Which is exactly what I did, and as shown the uncertainty of the mean decreases as the number of samples taken increases.”

That is probably *NOT* what you did. If you used the RAND function you generated a Gaussian distribution of random error.” 

As stated multiple times, that is exactly what I did; I did not use the RAND function, learn to read!

Tim Gorman
Reply to  Phil.
May 23, 2022 5:18 am

“You’ve shown yourself to have comprehension difficulties so I’ll repeat again it was not a Gaussian distribution, just a random value between the stated uncertainty limits.”

What kind of distribution do you suppose Excel gives a set of random values?

“No I used the error in the calculation of the sample mean, no such assumption was made.”

You can’t do that! Uncertainty is an interval, not a value. How do you add uncertainty into a mean? You calculate the mean of the stated value and then propagate the uncertainty onto the mean. E.g. the mean winds up being X_sv +/- u_p where X_sv is the mean of the stated values and u_p is the propagated uncertainty!

“Kip stated that the measured value could be any value between 50 and 55 so I generated a random series of numbers between 50 and 55, exactly as he specified.”

Which tries to identify a specific value of error within the uncertainty interval. The problem is that you simply do not know the value of the error! That’s why uncertainty is given as an interval and not a value! When you generate a random variable within a boundary in Excel you *are* generating a Gaussian distribution which is guaranteed to cancel. If you think you didn’t create a normal distribution then tell us what Excel function you used because RAND gives a normal distribution.



Phil.
Reply to  Tim Gorman
May 23, 2022 8:13 am

““You’ve shown yourself to have comprehension difficulties so I’ll repeat again it was not a Gaussian distribution, just a random value between the stated uncertainty limits.”
What kind of distribution do you suppose Excel gives a set of random values?”

You really do have comprehension difficulties, don’t you? It’s the same one I’ve told you I was using at least twice!
Randbetween, I even presented the distribution of a sample I used.

“When you generate a random variable within a boundary in Excel you *are* generating a Gaussian distribution which is guaranteed to cancel. If you think you didn’t create a normal distribution then tell us what Excel function you used because RAND gives a normal distribution”

I know I didn’t generate a Gaussian as stated above. I did say which Excel function I used and even quoted a sample, try reading.

https://wattsupwiththat.com/2022/05/14/sea-level-rise-acceleration-an-alternative-hypothesis/#comment-3519801

Tim Gorman
Reply to  Phil.
May 24, 2022 4:51 am

“I know I didn’t generate a Gaussian as stated above. I did say which Excel function I used and even quoted a sample, try reading.”

Actually I believe I misspoke earlier, both RAND and RANDBETWEEN appear to generate a uniform distribution, RAND between 0 and 1 and RANDBETWEEN between a bottom number and a top number and can generate negative numbers.

A RANDBETWEEN generated list, with a uniform distribution around zero, will do the exact same thing a Gaussian distributed list will do – the errors will cancel.

So, once again, you have proven that a symmetric set of error values will cancel.

And, once again, uncertainty is *NOT* a value, it is an interval. You *must* propagate uncertainty as uncertainty. You simply don’t know that all the errors will cancel, especially when you are measuring different things. It’s exactly like adding variances when combining two independent, random variables – the variances add.

Clyde Spencer
Reply to  Jim Gorman
May 18, 2022 10:24 am

Jim,
I think that an important point to be made is that you are basically talking about a constant, or some objects that have a nominal value. That is, 1″ ball bearings cluster around a fixed specification with a normal distribution.

However, variables such as temperatures or sea level surfaces change with time and a sub-sampled time-series will give a mean that varies depending on the time of sampling, and will change if the length of sampling time is changed. One can calculate a mean for a variable, but one must ask just what does the mean mean? Fundamentally, it becomes a smoothing operation if a moving window of time is used. One can’t improve precision if the target is moving.

Clyde Spencer
Reply to  Kip Hansen
May 18, 2022 9:53 am

Unless the time series encompasses the longest period affecting the tide (~20 years) there will be a drift in the calculated mean. That is, averaging over short periods may produce less accuracy than an instantaneous reading.

Jim Gorman
Reply to  Clyde Spencer
May 18, 2022 11:01 am

Does Nyquist ring a bell?

Clyde Spencer
Reply to  Phil.
May 18, 2022 9:30 am

If a fixed object or parameter is measured many times with the same instrument, by the same observer, with all environmental variables held constant, then the precision can be improved by cancelling random measuring error.

However, if everything is in flux, the best you can do is an instantaneous measurement, which is limited by the resolution of the measuring instrument. You can average several measurements, but if the parameter is varying with time (as with tides) the mean will drift, not converge.

Phil.
Reply to  Clyde Spencer
May 18, 2022 2:39 pm

Try it and see, take a sinewave, period of 12 hrs, amplitude 2m, uncertainty ±2cm, sampling every 6 mins, you’ll find it’s fairly stable after 2 days.
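That experiment is easy to run; a minimal sketch with the stated parameters (12-hour period, 2 m amplitude, ±2 cm error, one reading every 6 minutes). Treating the ±2 cm as purely random is, of course, the contested assumption; a systematic offset would not average away:

```python
import math
import random
import statistics

PERIOD_MIN = 12 * 60   # 12-hour period, in minutes
AMPLITUDE = 2.0        # metres
NOISE = 0.02           # +/- 2 cm, treated here as purely random error (the assumption in question)
STEP_MIN = 6           # one reading every 6 minutes

readings = []
for day in range(1, 4):
    for t in range(0, 24 * 60, STEP_MIN):
        minutes = (day - 1) * 24 * 60 + t
        level = AMPLITUDE * math.sin(2 * math.pi * minutes / PERIOD_MIN)
        readings.append(level + random.uniform(-NOISE, NOISE))
    print(f"after day {day}: running mean = {statistics.mean(readings):+.4f} m")
# With purely random error the running mean settles within a couple of millimetres
# of zero after a day or two.
```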

Tim Gorman
Reply to  Phil.
May 19, 2022 4:34 am

It may be stable but it may not be accurate. If that +/- 2cm includes any systematic error, such as sensor drift over time, you may get a stable reading but its accuracy can’t be guaranteed.

If your curve is a true sine wave then each measurement will have a value of:

sin(x) ± u

Integrate that function to find the area under the curve from 0 to π/2

∫sin(x)dx ± ∫u dx

This gives:

-cos(x) ± ux evaluated from 0 to π/2

At the upper limit: -cos(π/2) = 0 and ux = u(π/2)

At the lower limit: -cos(0) = -1 and ux = 0

Lower limit subtracted from upper limit:

(0 ± u(π/2)) – (-1) = 1 ± u(π/2)

Multiply by 4 to get the integral of the entire curve and you get

4 ± 2πu

divide by 2π to get the average and you wind up with

2/π ± u

You can’t just ignore the uncertainty in each measurement. It just keeps showing up, even in the average. In order to cancel error you would have to have several measurements at the same point in time and have the generated error values be random and Gaussian distributed. Since that doesn’t happen the error can’t be assumed to cancel.
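A quick numerical check of the arithmetic above (not of the propagation rule itself, which is what is disputed below): the average of the rectified sine over a full cycle is 2/π ≈ 0.637, the figure quoted later in the thread, with the ±u band simply carried through unchanged under the comment’s assumption:

```python
import math
import statistics

N = 100_000
u = 0.02  # the per-reading uncertainty band carried through unchanged, as assumed above

xs = [2 * math.pi * i / N for i in range(N)]
rectified = [abs(math.sin(x)) for x in xs]

mean_abs = statistics.mean(rectified)
print(f"mean of |sin(x)| over a full cycle = {mean_abs:.4f}   (2/pi = {2 / math.pi:.4f})")
print(f"carrying the band through unchanged gives {mean_abs:.4f} +/- {u}")
```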

Phil.
Reply to  Tim Gorman
May 19, 2022 7:32 am

“It may be stable but it may not be accurate. If that +/- 2cm includes any systematic error, such as sensor drift over time, you may get a stable reading but its accuracy can’t be guaranteed.”

We’re not discussing systematic errors, it has already been stated that error in calibration, drift and bias are different issues that have to be addressed in any measurement system. Stick to the subject, which is what is the error of the mean when measuring a quantity over time when there is an instrument uncertainty in the measurement.

Multiply by 4 to get the integral of the entire curve and you get
4 ± 2πu”

Get the basic maths right, the integral of sin(x) from 0 to 2π is 0!
Not 4 times the integral of sin(x) from 0 to π/2.

If you did the maths correctly you get:

0+∑u/N where N is the number of measurements.
Since the uncertainty is both positive and negative then they tend to cancel out resulting in a value lower than the instrument uncertainty. If random and a symmetrical distribution (either Gaussian or uniform) then for sufficiently large N the error reduces to zero.

Tim Gorman
Reply to  Phil.
May 19, 2022 8:19 am

“We’re not discussing systematic errors,”

When discussing uncertainty how do you separate random error from systematic error? If you don’t know how to separate the two then you *are* discussing systematic error.

“Stick to the subject, which is what is the error of the mean when measuring a quantity over time when there is an instrument uncertainty in the measurement.”

The standard deviation of the sample means (what you call the error of the mean) is meaningless if you don’t know the uncertainty of the means you use to calculate the standard deviation of the sample means.

“Get the basic maths right, the integral of sin(x) from 0 to 2π is 0!

Not 4 times the integral of sin(x) from 0 to π/2.”

And just how does that make the integral of the uncertainty (u) equal to zero? “u” is positive through both the negative and positive part of the cycle. Or it is negative throughout the entire sine wave.

Even at the zero point you will still have 0 +/- u.

BTW, the average of a sine wave during the positive part of the cycle *is* .637 * Peak_+ value. During the negative part of the cycle it is -.637 * Peak_-. What is the average rise and fall of the sea level? Is it zero? If it is zero then how do you know what the acceleration in the rise might be?

“0+∑u/N where N is the number of measurements.”

This is a common mistake that statisticians make.

If each measured value is x +/- δx and y = (∑x)/N,

then the uncertainty of the average combines the uncertainty of the sum (∑u) with the uncertainty of N. Since the uncertainty of N is 0, the uncertainty of the mean is just ∑u.

“Since the uncertainty is both positive and negative then they tend to cancel out resulting in a value lower than the instrument uncertainty.”

Again, uncertainty only cancels *IF* the errors are random and Gaussian. You’ve already admitted that the errors include systematic error such as drift, etc. Therefore the measurement errors cannot cancel. The reading at t-100 won’t have the same error as the reading at t+100 because of drift, hysteresis, etc. The error distribution won’t be Gaussian, it will be skewed.

The u might be done using root-sum-square if you think there will be some cancellation of error but it can’t be zero. The error will still grow with each measurement. Primarily because you are measuring different things with each measurement. Errors only cancel when you are measuring the same thing with the same thing multiple times. And even then only if there is no systematic error.

“If random and a symmetrical distribution (either Gaussian or uniform) then for sufficiently large N the error reduces to zero.” (bolding mine, tpg)

If you are measuring different things, especially over time, error can only be Gaussian by coincidence.

Nor is uncertainty considered to be uniform. That’s another mistake statisticians make. In an uncertainty interval there is ONE AND ONLY ONE value that can be the true value. That value has a probability of 1. All the other values can’t be the true value because you can’t have more than one “true value”. Thus all the other values have a probability of zero. The issue is that you don’t know which value has the probability of 1. Thus the uncertainty interval.

If you are going to truly claim that sea level measurements only have random and Gaussian error then you probably are going to have to justify it somehow and retract your agreement that the measurement devices do have systematic error.

Phil.
Reply to  Tim Gorman
May 19, 2022 9:28 am

““We’re not discussing systematic errors,”

When discussing uncertainty how do you separate random error from systematic error? If you don’t know how to separate the two then you *are* discussing systematic error.”

It’s called calibration, I’ve already referenced a paper evaluating that for sea level measuring systems.
Kip’s false assertion was that the uncertainty of the mean evaluated by making multiple measurements of a quantity could not be reduced below the instrumental uncertainty of the instrument. That’s what’s being discussed so stop trying to change the subject.

““0+∑u/N where N is the number of measurements.”

This is a common mistake that statisticians make.”

Rubbish! ∑u is the sum of all the measurement errors and is divided by N to find the average.

“And just how does that make the integral of the uncertainty (u) equal to zero? “u” is positive through both the negative and positive part of the cycle. Or it is negative throughout the entire sine wave.”

The measurement was stated to have an uncertainty between -u and +u so for each measurement it can be either positive or negative. Also we’re talking about the mean of a sinewave not the mean of ∣sin(x)∣ which you were doing in your faulty maths.

“Errors only cancel when you are measuring the same thing with the same thing multiple times.”

That’s exactly what we are doing, measuring the sea level with the same instrument which has the same range of uncertainty. Unless all the errors are positive or negative they must cancel to a certain extent. The more measurements one makes the closer to zero it gets.

Tim Gorman
Reply to  Phil.
May 21, 2022 6:39 am

“It’s called calibration, I’ve already referenced a paper evaluating that for sea level measuring systems.”

Calibration of field equipment is fine but the field equipment *never* stays in calibration. Even in the Argo floats, with calibrated sensors, the uncertainty of the float itself is +/- 0.6C because of device variations, drift, etc. You can’t get away from aging.

“Kip’s false assertion was that the uncertainty of the mean evaluated by making multiple measurements of a quantity could not be reduced below the instrumental uncertainty of the instrument. That’s what’s being discussed so stop trying to change the subject.”

Kip is correct. You simply can’t obtain infinite resolution in any physical measurement device. The numbers past the resolution of the instrument are forever unknowable. That resolution remains the *minimum* uncertainty. You simply cannot calculate to a resolution finer than that; it is a violation of the rules for significant figures. No amount of averaging can help.

You are still stuck in statistician world where a repeating decimal has infinite resolution!

“The measurement was stated to have an uncertainty between -u and +u so for each measurement it can be either positive or negative.”

You *forced* that to be the case by restating the example I provided. There is simply no guarantee in the real world that error is random and Gaussian. You must *prove* that is the case in order to assume cancellation. You failed to even speak to the use of a gauge block that is machined incorrectly – how does that error become + and – error?

“Also we’re talking about the mean of a sinewave not the mean of ∣sin(x)∣ which you were doing in your faulty maths.”

And *YOUR* faulty math assumes the sine wave oscillates around zero. With a systematic bias the sine wave does *NOT* oscillate around zero.

“That’s exactly what we are doing, measuring the sea level with the same instrument which has the same range of uncertainty.”

Just like temperature, sea level changes over time. Thus you are *NOT* measuring the same thing each time. It is *exactly* like collecting a pile of boards randomly picked up out of the ditch or a landfill. You wind up measuring different things with each measurement. There is absolutely no guarantee that the average of the length of those boards will even physically exist. The average of those different things will give you *NO* expectation of what the length of the next randomly collected board will be. That is exactly like adding two random variables together – the variance of the combined data set increases and thus the standard deviation does as well, just as it does with uncertainty.

σ_total = sqrt[ (σ_1)^2 + (σ_2)^2 ]

Why would you think something different will happen with uncertainty associated with different things like sea level or temperature?
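A quick Monte Carlo check of that variance addition, a minimal Python sketch with made-up spreads:

import random, statistics

random.seed(1)
N = 10_000
s1, s2 = 0.5, 1.2   # made-up standard deviations of the two variables

x = [random.gauss(0, s1) for _ in range(N)]
y = [random.gauss(0, s2) for _ in range(N)]
combined = [a + b for a, b in zip(x, y)]

# The spread of the sum matches sqrt(s1^2 + s2^2), i.e. it grows when the variables are combined
print(statistics.stdev(combined))     # roughly 1.3
print((s1**2 + s2**2) ** 0.5)         # exactly 1.3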

“Unless all the errors are positive or all negative, they must cancel to a certain extent. The more measurements one makes the closer to zero it gets.” (bolding mine, tpg)

“Certain extent” is *NOT* complete. If you don’t get complete cancellation then you simply cannot approach a true value using more measurements. The more measurements you have the more the uncertainty grows. If your error is +/- 0.5 and you cancel all but +/- 0.1 of it then the total uncertainty will be +/- 1 for ten measurements and +/- 10 for 100 measurements.

Phil.
Reply to  Tim Gorman
May 21, 2022 12:31 pm

“Calibration of field equipment is fine but the field equipment *never* stays in calibration.”
That’s why you have maintenance and recalibration.

“You simply can’t obtain infinite resolution in any physical measurement device. The numbers past the resolution of the instrument are forever unknowable. That resolution ability remains the *minimum* uncertainty.”

But you can determine the ‘mean’ of a sample of readings of that device to a smaller uncertainty than the device resolution. Which appears to be the point you fail to understand, and why you bring in unrelated issues such as drift and bias.
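A minimal Python sketch of what I mean, assuming (for the illustration) that the random noise on the readings spans several resolution steps; all of the numbers are invented:

import random, statistics

random.seed(0)
true_level = 10.037   # invented true value
resolution = 0.1      # invented instrument resolution
noise_sd = 0.15       # invented random noise, larger than one resolution step

def reading():
    # A noisy reading, then quantised to the instrument resolution
    raw = true_level + random.gauss(0, noise_sd)
    return round(raw / resolution) * resolution

samples = [reading() for _ in range(10_000)]
# The mean of the quantised readings lands well inside one resolution step of the true value
print(round(statistics.mean(samples), 3))

With no noise at all every reading would simply repeat 10.0 and averaging would gain nothing, which is why the stated assumption matters.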

““The measurement was stated to have an uncertainty between -u and +u so for each measurement it can be either positive or negative.”
You *forced* that to be the case by restating the example I provided.”

No, you said: “If you have a million data elements and each one is 50 +5 (not +/-, just +). So each measurement can be anywhere from 50 to 55.”, which is exactly what I calculated.

“You failed to even speak to the use of a gauge block that is machined incorrectly – how does that error become + and – error?”

I certainly did address it: I pointed out that such an operator error would be eliminated by calibration, and it’s irrelevant to the issue under discussion.

“Also we’re talking about the mean of a sinewave not the mean of ∣sin(x)∣ which you were doing in your faulty maths.”
And *YOUR* faulty math assumes the sine wave oscillates around zero. With a systematic bias the sine wave does *NOT* oscillate around zero.”

You stated that the integral of sin(x) from 0 to π/2 was 1 (which is correct) but then made the elementary error of claiming that the integral from 0 to 2π was four times that, i.e. 4 which is nonsense, it is 0 as I stated. The bias/uncertainty term was a separate term in your equation; the sine wave certainly does oscillate around zero.
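For anyone who wants to check, here is a minimal Python sketch of the integrals in question:

import math

def integrate(f, a, b, n=10_000):
    # Simple midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(integrate(math.sin, 0, math.pi / 2))                     # about 1.0
print(integrate(math.sin, 0, 2 * math.pi))                     # about 0.0
print(integrate(lambda x: abs(math.sin(x)), 0, 2 * math.pi))   # about 4.0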

“Just like temperature, sea level changes over time. Thus you are *NOT* measuring the same thing each time. It is *exactly* like collecting a pile of boards randomly picked up out of the ditch or a landfill. You wind up measuring different things with each measurement. There is absolutely no guarantee that the average of the length of those boards will even physically exist.”

Which is a terribly wrong analogy, measuring a continuously varying quantity in a time series is nothing like sampling from a collection of unrelated items. In the continuously varying quantity the mean certainly does exist.

““Certain extent” is *NOT* complete. If you don’t get complete cancellation then you simply cannot approach a true value using more measurements. The more measurements you have the more the uncertainty grows. If your error is +/- 0.5 and you cancel all but +/- 0.1 of it then the total uncertainty will be +/- 1 for ten measurements and +/- 10 for 100 measurements.”

But the error of the ‘mean’, which is what we are discussing, will tend towards +/- 0.1 (i.e. 10/100), which is less than the originally quoted uncertainty of +/- 0.5, so you appear to have proved my point.
I just ran a simulation of a series of measurements of a value of 10 with an error of +0.5/-0.4: after 10 measurements the mean was 9.95, after 30 it was 10.03, and after 60 it was 10.05, with the standard deviation stabilized at 0.06. So the asymmetric error cancellation does lead to a small bias in the mean, but one smaller than the uncertainty of the instrument.
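A rough Python version of that kind of simulation, assuming (my assumption for the sketch) that the +0.5/-0.4 error is uniformly distributed; a fresh random draw won’t reproduce my exact single-run numbers:

import random, statistics

random.seed(2)
true_value = 10.0   # value being measured in the sketch

def measurement():
    # Asymmetric error assumed uniform between -0.4 and +0.5
    return true_value + random.uniform(-0.4, 0.5)

for n in (10, 30, 60):
    # Average of n measurements, repeated 1000 times to show the typical bias and spread
    means = [statistics.mean(measurement() for _ in range(n)) for _ in range(1000)]
    print(n, round(statistics.mean(means), 3), round(statistics.stdev(means), 3))

The mean settles near 10.05 (the +0.05 bias from the asymmetry) and its spread shrinks as n grows, which is the behaviour described above.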

Tim Gorman
Reply to  Phil.
May 21, 2022 3:22 pm

“That’s why you have maintenance and recalibration.”

And just how often is the measuring device taken out of service and sent to a certified calibration lab for recalibration?

What happens in between recalibrations? Increased systematic uncertainty?

“But you can determine the ‘mean’ of a sample of readings of that device to a smaller uncertainty than the device resolution. Which appears to be the point you fail to understand, and why you bring in unrelated issues such as drift and bias.”

No, you can’t, not if you use significant figure rules. The average can only be stated to the resolution of the values used to determine the average. Anything else is assuming precision you can’t justify!
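To make the rule concrete as I’m stating it, a minimal Python sketch with invented readings and resolution:

import statistics

resolution = 0.1                               # invented instrument resolution
readings = [10.1, 10.2, 10.1, 10.3, 10.2]      # invented readings, each recorded to 0.1

raw_mean = statistics.mean(readings)           # 10.18 as a bare number
reported = round(raw_mean, 1)                  # 10.2 - stated only to the resolution of the readings, per the significant-figure rule
print(raw_mean, reported)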

“So each measurement can be anywhere from 50 to 55.”, which is exactly what I calculated.”

No, you changed it to 52.5 +/- 2 in order to get a Gaussian distribution. Did you forget what you posted or just try to slip it by?

“I certainly did address it: I pointed out that such an operator error would be eliminated by calibration, and it’s irrelevant to the issue under discussion.”

What happens to anything the gauge is used for before the next calibration? It is totally relevant and you are just using the fallacy Argument by Dismissal to avoid having to address it.

“4 which is nonsense, it is 0 as I stated”

Like I said, if you have systematic error the sine wave will not oscillate around zero. You keep making assumptions like a statistician and not a physical scientist or engineer working in the real world!

“Which is a terribly wrong analogy, measuring a continuously varying quantity in a time series is nothing like sampling from a collection of unrelated items. In the continuously varying quantity the mean certainly does exist.”

Another use of the fallacy of Argument by Dismissal. Since each measurement is independent you have the same situation – a collection of unrelated terms – which can have varying systematic errors based on measuring device parameters like hysteresis.
I’ve seen barges on the Mississippi make waves that lap at the shore for literally minutes after they pass. I’m sure the same thing happens to coastal sea level measuring devices. That alone will cause deviations from the sine wave in your measurements. So will storm fronts with winds that cause choppy water. The exact same things that keep temperatures from exactly matching the sine wave the sun follows in its path across the sky!

Phil.
Reply to  Tim Gorman
May 22, 2022 4:30 am

“And just how often is the measuring device taken out of service and sent to a certified calibration lab for recalibration?
What happens in between recalibrations? Increased systematic uncertainty?”

That would depend on the design of the equipment and its location, but it has nothing to do with the question of the propagation of errors when determining the mean.

““So each measurement can be anywhere from 50 to 55.”, which is exactly what I calculated.”
No, you changed it to 52.5 +/- 2 in order to get a Gaussian distribution. Did you forget what you posted or just try to slip it by?”

No, that was a comment on how a scientist would describe it.
As I stated before, I set it up exactly as described: you said “each measurement can be anywhere from 50 to 55”, so I generated a series of numbers that met that description.
I used the function RANDBETWEEN, which generates a series of numbers between the max and min values. The ones I used approximated a uniform distribution, e.g. one series was as follows:
50.00–51.25: 25 values
51.25–52.50: 24 values
52.50–53.75: 27 values
53.75–55.00: 24 values

Clearly not Gaussian!
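For anyone without a spreadsheet handy, a rough Python equivalent of that exercise; the exact bin counts will vary with the random seed:

import random

random.seed(3)
# 100 draws spread uniformly between 50 and 55, as in the series above
values = [random.uniform(50, 55) for _ in range(100)]

# Count how many draws fall in each of the four 1.25-wide bins
bins = [0, 0, 0, 0]
for v in values:
    bins[min(int((v - 50) / 1.25), 3)] += 1

print(bins)                        # roughly 25 per bin, i.e. flat rather than bell-shaped
print(sum(values) / len(values))   # close to the midpoint, 52.5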

“Like I said, if you have systematic error the sine wave will not oscillate around zero. You keep making assumptions like a statistician and not a physical scientist or engineer working in the real world!”

But no physical scientist or engineer would multiply the integral of sin(x) from 0 to π/2 by 4 to get the integral from 0 to 2π; they wouldn’t be that stupid. That’s what you did to get the value of 4 – nothing to do with systematic error, just operator error!

““Which is a terribly wrong analogy, measuring a continuously varying quantity in a time series is nothing like sampling from a collection of unrelated items. In the continuously varying quantity the mean certainly does exist.”
Another use of the fallacy of Argument by Dismissal. Since each measurement is independent you have the same situation – a collection of unrelated terms” 

They are a series of measurements made of a continuously varying quantity, so they are related.

“I’ve seen barges on the Mississippi make waves that lap at the shore for literally minutes after they pass. I’m sure the same thing happens to coastal sea level measuring devices. That alone will cause deviations from the sine wave in your measurements.”

The design of the apparatus and its siting would be chosen to minimize such events; of course such a wave would introduce an oscillating error. The data are presented as a monthly average of readings taken every 6 minutes. We’re discussing the effect of the resolution of the measuring instrument on the error of that mean; your introducing irrelevant issues shows the weakness of your case.

Tim Gorman
Reply to  Phil.
May 21, 2022 3:28 pm

“But the error of the ‘mean’, which is what we are discussing, will tend towards +/- 0.1 (i.e. 10/100), which is less than the originally quoted uncertainty of +/- 0.5, so you appear to have proved my point.”

The issue is that the final error *will* grow if you don’t have complete cancellation. By the time you have five measurements you will be back to the original +/- 0.5! Add in more measurements and your mean becomes more and more uncertain! And how do you separate out the random and systematic error represented by the uncertainty interval? Don’t use the cop-out of recalibration, because you won’t know what the systematic error was during the interval between calibrations. That is why you use root-sum-square to add the uncertainties instead of direct addition – the assumption being that *some* cancellation will occur.
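To spell out the arithmetic of the two rules side by side, a minimal Python sketch with a made-up per-measurement uncertainty; it only tabulates the formulas, it doesn’t decide which rule applies:

import math

u = 0.5   # made-up per-measurement uncertainty
for N in (5, 10, 100):
    direct_sum = N * u            # direct addition: no cancellation assumed
    rss_sum = u * math.sqrt(N)    # root-sum-square: partial cancellation assumed
    print(N, direct_sum, round(rss_sum, 2))   # both grow with N, the RSS total more slowly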

Phil.
Reply to  Tim Gorman
May 22, 2022 3:39 am

“The issue is that the final error *will* grow if you don’t have complete cancellation. By the time you have five measurements you will be back to the original +/- 0.5! Add in more measurements and your mean becomes more and more uncertain!”

Nonsense: we’re calculating the ‘mean’, so we divide by N, and the error of the ‘mean’ will be +/- 0.1.
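As simple arithmetic, a minimal Python sketch using the +/- 0.1 residual error per measurement from your own example (the uniform draw is my assumption for the sketch):

import random

random.seed(4)
N = 100
# Residual error per measurement, assumed bounded by +/- 0.1 and drawn uniformly for the sketch
errors = [random.uniform(-0.1, 0.1) for _ in range(N)]

sum_of_errors = sum(errors)
error_of_mean = sum_of_errors / N   # can never exceed +/- 0.1, and is smaller when the errors partly cancel

print(round(sum_of_errors, 3), round(error_of_mean, 4))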

Phil.