Big Claims About Tiny Numbers

Guest Post by Willis Eschenbach

[UPDATE: An alert commenter, Izaak Walton, notes below that I’d used 10e21 instead of 1e21. This means all my results were too large by a factor of 10 … I’ve updated all the numbers to fix my error. Mea maxima culpa. This is why I love writing for the web … my errors don’t last long.]

Our marvelous host, Anthony Watts, alerted me to a new paper yclept “New Record Ocean Temperatures and Related Climate Indicators in 2023”.

Of course, since I’m “the very model of a modern major general”, my first thought was “Is there gender balance among the authors as required by DEI?”. I mean, according to the seminal paper “Ocean sciences must incorporate DEI, scholars argue”, that’s a new requirement. Not balance by sex. Balance by gender.

However, it turns out that there are thirty-five authors of the new paper. I downloaded the citation. It says “Cheng, L., Abraham, J., Trenberth, K., Boyer, T., Mann, M., Zhu, J., Wang, F., Yu, F., Locarnini, R., Fasullo, J., Zheng, F., Li, Y., Zhang, B., Wan, L., Chen, X., Wang, D., Feng, L., Song, X., Liu, Y., Reseghetti, F., Simoncelli, S., Gouretski, V., Chen, G., Mishonov, A., Reagan, J., Von Schuckmann, K., Pan, Y., Tan, Z., Zhu, Y., Wei, W., Li, G., Ren, Q., Cao, L., Lu, Y.”

Ooogh … gonna be hard to determine their genders. Can’t just check their names, that would be transphobic. Have to contact each one and ask them about their sexual proclivities … that’ll go over well …

In addition, there’s a numerical problem with genders.

Here, from the San Francisco “GIFT” program, which will give $1,200/month in taxpayer money preferentially to illegal alien ex-con transgender prostitutes with AIDS who can’t speak English, is their checkbox list of genders. (And no, I’m not kidding—that is their preferred recipient, the person that goes to the head of the line for “free” taxpayer money. But I digress…)

So buckle up and keep your hands in the vehicle at all times, let’s take a ride through their official list of genders.

GENDER IDENTITY (Check all that apply)

Cis-gender woman
Woman
Transgender Woman
Woman of Trans experience
Woman with a history of gender transition
Trans feminine
Feminine-of-center
MTF (male-to-female)
Demigirl
T-girl
Transgirl
Sistergirl
Cis-gender man
Man
Transgender man
Man of Trans experience
Man with a history of gender transition
Trans masculine
Masculine-of-center
FTM (female-to-male)
Demiboy
T-boy
Transguy
Brotherboy
Trans
Transgender
Transsexual
Non-binary
Genderqueer
Agender
Xenogender
Fem
Femme
Butch
Boi
Stud
Aggressive (AG)
Androgyne
Tomboy
Gender outlaw
Gender non-conforming
Gender variant
Gender fluid
Genderfuck
Bi-gender
Multi-gender
Pangender
Gender creative
Gender expansive
Third gender
Neutrois
Omnigender
Polygender
Graygender
Intergender
Maverique
Novigender
Two-spirit
Hijra
Kathoey
Muxe
Khanith/Xanith
X-gender
MTX
FTX
Bakla
Mahu
Fa’afafine
Waria
Palao’ana
Ashtime
Mashoga
Mangaiko
Chibados
Tida wena
Bixa’ah
Alyha
Hwame
Lhamana
Nadleehi
Dilbaa
Winkte
Ninauposkitzipxpe
Machi-embra
Quariwarmi
Chuckchi
Whakawahine
Fakaleiti
Calabai
Calalai
Bissu
Acault
Travesti
Questioning
I don’t use labels
Declined
Not Listed: _________________

Heck, there are only about a hundred “genders” there. That means there shouldn’t be any problem determining which author in this paper is a “Calabai” and which is a “Calalai” …

In addition, the number of authors brings up what I modestly call “Willis’s First Rule Of Authorship”, which states:

Paper Quality ≈ 1 / (Number Of Authors)²

But enough digression … moving on to the paper, there’s a fascinating claim in the abstract, viz:

In 2023, the sea surface temperature (SST) and upper 2000 m ocean heat content (OHC) reached record highs. The 0–2000 m OHC in 2023 exceeded that of 2022 by 15 ± 10 ZJ (1 Zetta Joules = 10²¹ Joules) (updated IAP/CAS data); 9 ± 5 ZJ (NCEI/NOAA data).

So … what is the relationship between ZJ and the temperature of the top 2000 meters? Let me use the NCEI/NOAA data. Here are the calculations, skip them if you wish, the answer’s at the end. Items marked as [1] are the computer results of the calculation. Everything after a # is a comment.

> (seavolume=volbydepth(2000)) #cubic kilometers

[1] 647,988,372

> (seamass = seavolume * 1e9 * 1e3 * 1.025) # kg

[1] 6.641881e+20

> (specificheat=3850) # joules/kg/°C

[1] 3850

> (zjoulesperdeg=specificheat * seamass / 1e21) #zettajoules/°C, to raise seamass by 1°C

[1] 2557.124

> (zettajoules2023 = 9) # from the paper

[1] 9

> (tempchange2023 =zettajoules2023 / zjoulesperdeg) # °C

[1] 0.0035
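For anyone who wants to reproduce this without the volbydepth() function (which isn't shown here), here's a minimal self-contained version of the same arithmetic in R; the 6.48e8 km³ volume is simply the rounded result from above, and everything else follows the same steps.

seavolume <- 6.48e8                            # cubic kilometers above 2000 m, rounded from the result above
seamass <- seavolume * 1e9 * 1e3 * 1.025       # m^3 per km^3, kg per m^3, seawater density ~1.025
specificheat <- 3850                           # joules per kg per degree C
zjoulesperdeg <- specificheat * seamass / 1e21 # zettajoules to warm that mass by 1°C, ~2557
9 / zjoulesperdeg                              # the NCEI/NOAA 9 ZJ ==> about 0.0035°C
5 / zjoulesperdeg                              # their ±5 ZJ ==> about ±0.002°C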

So all the angst is about a temperature change of three and a half thousandths of one degree. EVERYONE PANIC!!

But that wasn’t the interesting part. The interesting part is their uncertainty, which per NCEI/NOAA is ± 5 ZJ. Let me note to start that the results of the two groups, IAP/CAS and NCEI/NOAA, differ by 6 ZJ …

Using the above calculations, 5 ZJ is ± 0.0019°C … they are seriously claiming that we can measure the temperature of the top 2,000 meters of the ocean to within ±0.0019°C.

And how are they doing that?

They say “The main subsurface observing system since 2005 is the profiling floats from the Argo program”. These are amazing floats that sleep a thousand meters down deep in the ocean, then periodically wake up, sink further down to two thousand meters, and then rise slowly to the surface, measuring temperature and salinity along the way. When they reach the surface, they phone home like ET, report the measurements, and sink down a thousand meters to go to sleep again. They’re a fascinating piece of technology. Here’s a map of the float locations from a few years back.

There are about 4,000 floats, each of which measures the temperature as it rises from 2000 meters up to the surface every 10 days. Note that they tend to concentrate in some areas, like the intertropical convergence zone by the Equator and the US East Coast, while other areas are undersampled.

So to start with, ignoring the uneven sampling, each float is theoretically representative of an area of about 92,000 square kilometers, down to two kilometers depth. That’s a bit more area than Austria, Portugal, or the state of South Carolina.
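As a rough cross-check on that per-float figure (assuming a global ocean surface area of about 3.6e8 square kilometers, which is my round number, not theirs):

ocean_area_km2 <- 3.6e8        # approximate global ocean surface area, assumed
ocean_area_km2 / 4000          # about 90,000 km^2 per float, in line with the ~92,000 above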

Now consider their claim for a moment. We put one single thermometer in Austria, take one measurement every 10 days for a year … and claim we’ve measured Austria’s annual average temperature with an uncertainty of ±0.0019°C???

Yeah … that’s totally legit …

But wait, as they say on TV, there’s more. That’s just measuring the surface temperature, but the Argo floats are measuring a 3D volume, not the surface. So their claimed uncertainty is even less likely.

Here’s another way to look at it. We’re talking about the uncertainty of the average of a number of measurements. As we get more measurements, our uncertainty decreases … but it doesn’t decrease directly proportionally to the number of measurements.

Instead, it decreases proportionally to the square root of the number of measurements. This means if we want to decrease the uncertainty by one decimal point, that is to say we want to have one-tenth of the uncertainty, we need one hundred times as many measurements.

And of course, this works in reverse as well. If we have one-hundredth of the number of measurements, we lose one decimal point in the uncertainty.
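A two-line R illustration of that square-root rule, assuming (purely for illustration) independent readings each good to ±0.5°C:

per_reading <- 0.5                     # assumed uncertainty of a single reading, °C
per_reading / sqrt(c(100, 10000))      # 0.05°C with 100 readings, 0.005°C with 10,000; 100x the data buys one decimal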

So let’s apply that to the ARGO floats.

Claimed uncertainty with 4,000 floats = ± 0.0019°C

Therefore, uncertainty with 40 floats = ± 0.019°C

And uncertainty with 4 floats = ±0.019°C times the square root of 10 ≈ ±0.06°C …

Their claimed uncertainty says that four ARGO floats could measure the temperature of the entire global ocean to an uncertainty of less than one tenth of one degree … yeah, right.
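Here's that reverse scaling done explicitly in R, starting from the claimed ±0.0019°C for 4,000 floats and applying the square-root rule in reverse:

claimed <- 0.0019                          # °C, the claimed uncertainty with 4000 floats
floats <- c(4000, 40, 4)
round(claimed * sqrt(4000 / floats), 3)    # 0.002, 0.019, 0.060 °C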

Sadly, I fear that’s as far as I got in their paper … I was laughing too hard to continue. I’m sure it’s all sciency and everything, but they lost me by hyperventilating over an ocean warming of three and a half thousandths of a degree and put me over the edge by claiming an impossibly small uncertainty.

Here, a sunny morning in the redwood forest after a day of strong rain, with football playoffs (not the round ball kind) starting in a little while—what’s not to like?

My very best to all,

w.

[ADDENDUM] To close the circle, let me do a sensitivity analysis. The paper mentions that there are some other data sources for the analysis like XBTs (expendable bathythermographs) and other ship-deployed instruments.

So let’s assume that there were a further 4,000 scientific research vessels, each of which made a voyage taking thirty-six XBT measurements. That would double the total number of measurements taken during the year. Never mind that there aren’t 4,000 scientific research vessels, this is a sensitivity analysis.

That would change the calculations as follows:

Claimed uncertainty with 8,000 floats + measurements = ± 0.0019°C

Therefore, uncertainty with 80 floats + measurements = ± 0.019°C

And uncertainty with 8 floats + measurements = ±0.019°C times the square root of 10 ≈ ±0.06°C …

We come to the same problem. There’s no way that 8 thermometers taking temperatures every 10 days can give us the average temperature of the top two kilometers of the entire global ocean with an uncertainty of less than 0.1°C.

MY USUAL: When you comment please quote the exact words you are discussing. It avoids endless misunderstandings.

281 Comments
January 15, 2024 10:15 am

ROFL 😀

Scissor
Reply to  Krishna Gans
January 15, 2024 10:23 am

Too many Wangs in that circle jerk for equity.

Gums
Reply to  Scissor
January 15, 2024 2:54 pm

Ditto

January 15, 2024 10:31 am

It’s blatantly dishonest to quote the figures in zettajoules, knowing that will confuse the reader into thinking it’s an impressive amount, and especially knowing that the amount is well inside the error bars and so means nothing.

Reply to  PCman999
January 15, 2024 10:58 am

That is for sure, PCman.
It is the same kind of dishonesty as talking about Manhattans of ice, or Hiroshimas of heat.

Reply to  PCman999
January 15, 2024 12:01 pm

The Guardian sometimes go with “Hiroshimas” when recording global changes over a year.

Reply to  MCourtney
January 16, 2024 3:59 am

Didn’t Al Gore use that one? He said something like “every second the Earth is experiencing (I don’t recall the number he used) Hiroshimas of added energy due to carbon pollution” or something like that.

Reply to  PCman999
January 15, 2024 12:32 pm

You’re making the cardinal error in thinking this dreck is to be read by numerate scientists and engineers. Actually, the intended audience is science “communicators”, Guardian journalists, and funding organizations.

Reply to  Graemethecat
January 15, 2024 12:52 pm

No I’m not – numerate types will quickly see the temperature-energy shell game, I know that 100% – it gets me angry that the team of scientists would write this crap knowing that it is intended to defraud people who rely on the ‘experts’ to be truthful and not take advantage of them!!!

Reply to  PCman999
January 15, 2024 12:47 pm

And I forgot to add that it’s also dishonest to intentionally not quote the figures in degrees since that is the original thing actually measured – the mini-robo-subs measure temperature with their probes, not energy!!!

rhs
Reply to  PCman999
January 15, 2024 1:14 pm

The only useful zettajoule measure is that one tenth of a ZJ is roughly the amount of electricity produced worldwide per year.
The rest is just stupidly large and scary, and without point – https://phys.org/news/2024-01-global-ocean-temperatures.amp

Michael S. Kelly
Reply to  rhs
January 15, 2024 4:13 pm

Be thankful that they didn’t express the temperature in electron-volts (eV). One zettajoule equals 6.24E39 eV. Of course, the unit prefix would be more amusing than the “zetta,” which corresponds to 1E21. There isn’t a prefix for 1E39 that I can find, but there is for 1E30: “quetta”, denoted by the symbol Q. To get 1E39, then, you’d have to have a giga (symbol G) quetta. Thus we could talk about ocean energy content in terms of GQ eV.

How hip would that sound?
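For the record, the arithmetic is easy to check (using 1 eV ≈ 1.602e-19 J):

1e21 / 1.602176634e-19       # one zettajoule is about 6.24e39 eV, i.e. 6.24 GQ eV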

rhs
Reply to  Michael S. Kelly
January 15, 2024 5:11 pm

I know the numbers are accurate and yet totally hilarious in the same sentence.
Well played!

Reply to  Michael S. Kelly
January 15, 2024 5:36 pm

Almost sounds “woke.”

mleskovarsocalrrcom
January 15, 2024 10:32 am

It’s not so much the numbers being presented, it’s the hype around the numbers. Argo is the best measurement method to date of ocean temperatures that I’m aware of. Any chance for the AGW crowd to interpret data to their advantage will be taken advantage of.

Reply to  mleskovarsocalrrcom
January 15, 2024 12:12 pm

Except that according to theory, Anthropogenic Global Warming takes place in the Troposphere, not the oceans.

Reply to  doonman
January 15, 2024 12:55 pm

But mlesk… is right – they took the flat-line temperature data from the nifty cool Argos and turned it into doomsday porn. AGW theory basically says “shut up, give us your money”.

Reply to  PCman999
January 15, 2024 12:55 pm

Ugh, where’s that delete button!!!

Reply to  PCman999
January 15, 2024 12:56 pm

Ugh! Where’s that ‘edit’ button!!!

Reply to  doonman
January 16, 2024 1:42 am

But Trenberth is still looking there for his “missing heat”.

January 15, 2024 10:36 am

So … a “Rose” by any other name is now a Buoy?

Scissor
Reply to  Gunga Din
January 15, 2024 10:45 am

Good one. It’s no longer true that buoys will be buoys.

Reply to  Scissor
January 15, 2024 10:50 am

Nope, nowadays Buoys will be gulls…

Reply to  Phil R
January 15, 2024 5:39 pm

The facility I used to learn to SCUBA dive had restrooms marked “Gulls” and “Buoys.”

Federico Bar
Reply to  Clyde Spencer
January 16, 2024 7:04 am

…We watched the buoys and gulls playing in the water…

J Boles
January 15, 2024 10:41 am

OT but a funny poem –

I am the very model of a modern major gardener,

with every pest my enemy and every bloom my partn-er.

I scrutinize the listings in the newest nursery manuals

and thoroughly have trained myself in handling of perennials.

I’m practiced in the use and care of half a hundred garden tools,

I know the mixing ratios for all the two-stroke motor fuels,

I highly value safety and I follow the most stringent rules –

And I never fill my mower until well after the engine cools!
He never fills his mower until well after the engine cools

he never fills his mower until well after the engine cools

he never fills his mower until well after the engine, engine cools!
I know my taxonomy from Abutilon to Zinnia;

I know a sickly yucca from a juvenile dracinia;

With every pest my enemy and every bloom my partn-er,

I am the very model of a modern major gardener!
I seize all opportunities for hunting beetles Japanese,

and if I please, on bended knees, I greet the eager honeybees.

Should aphids dare to venture there,

I’ll spare no care toward their despair,

but share my garden fair with any mantis gallivanting there!

Of grafting, double-digging and deadheading I know quite a bit.

Whatever you would care to name I’ll wager I have planted it.

To purchase fertilizer, I don’t buy just any brand of … manure…

and I’m always careful with my speech and you can count on that for sure!
He is always careful with his speech and you can count on that for sure,

he’s always careful with his speech and you can count on that for sure,

always careful with his speech

and you can count on that for, that for sure!
Then I can re-create the hanging gardens of old Babylon,

or grade a level lawn for you to put a picnic table on.

With every pest my enemy and every bloom my partn-er

I am the very model of a modern major gardener!
In fact when I know what is meant by “chloroplast” and “cambium;”

When I can tell at sight mite infestation on geranium;

When black spot, mildews, smuts, rust, scorch and dodder I’m more wary at,

and when I know precisely what is in Lasso and Lariat,

then I can drape a table with the harvest’s flavor subtleties,

Or spread a bed of flowers making color for your eye to see.

In short, when I can please the various senses with such things as these –

You’ll say a better Major Garden-er has never stained his knees!
For I’ve applied my genius to the wond’rous field of Botany

where I find fascination where most others find monotony;

with every pest my enemy and every bloom my partn-er,

I am the very model of a modern major gardener!

Neil Jordan
Reply to  J Boles
January 15, 2024 12:01 pm

Very nice for gardening. Here’s one for harvesting climate grants:

I am the very model of a modern Climate-Scientist,
I’ve information computationist and climaticist,
I know the I P C C, and I quote the temps historical
From glacial to manniacal, in order categorical;
Although I’m really not acquainted well with matters statistical,
My theory is the basis for the equations marcottical,
About the causes and effects I’m teeming with a lot o’ news, (bothered for a rhyme)
With many cheerful facts about the data that I choose to choose.

I’m very good at integral and differential calculus;
But I have no clue about least squares – I think that’s miraculous:
In short, in matters computationist and climaticist,
I am the very model of a modern Climate-Scientist.

I know our mythic history, Arrhenius’ and the Goracle’s;
I answer hard inquiries as long as they’re pal reviewable,
I quote in elegiacs all the schemes and all the climate tricks,
In forecasts I can floor peculiarities hiatus-ics;
I can’t tell undoubted measurements from temperatures Ouija-ous,
I know the croaking chorus from errors of models numerous!
Then I can hum a fugue of which I’ve heard the music’s din of late, (bothered for a rhyme)
And delete all the emails from that infernal nonsense Climategate.

Then I can write a global temp from inverted Tiljander,
Using public funds and grants for which I always pander:
In short, in matters computationist and climaticist,
I am the very model of a modern Climate-Scientist.

In fact, when I know what is meant by “HadCRUT” and “Nature trick”,
When I can tell at sight a regression line from a hockey stick,
When such affairs as lawsuits and surprises I’m more wary at,
And when I know precisely where the ocean heat is hidden at,
When I have learnt what progress has been made in modern climatery,
When I know less of ethics than a novice in a nunnery –
In short, when I’ve a smattering of the science of the clime – (bothered for a rhyme)
You’ll say a better Climate-Scientist has hidden the decline.

For my climate science knowledge, though I’m plucky and adventury,
Has only fit the curve down to the beginning of this century;
But still, in matters computationist and climaticist,
I am the very model of a modern Climate-Scientist.

J Boles
Reply to  Neil Jordan
January 15, 2024 12:15 pm

YES! Just the type of poem I would have written, were I so gifted.

Writing Observer
Reply to  Neil Jordan
January 16, 2024 1:15 am

Clipped! (With attribution, of course – I’m not a Harvard academician.)

Rud Istvan
January 15, 2024 10:42 am

Nice post, WE. 25 authors to write the climate equivalent of the Bard’s comedy, Much Ado About Nothing.

DD More
Reply to  Rud Istvan
January 16, 2024 1:44 pm

Another way to look at it would be Lake Superior 
Volume: 3 quadrillion gallons, or 2,900 cubic miles (12,100 cubic kilometres)

Argo Floats – representative of an area of about 92,000 square kilometers and down to two kilometers depth.
Or a covered volume of 184,000 Km^3

184,000 Km^3 / 12,000 Km^3 = 15.33 times as much. So it only takes 1 Argo to get the correct temperature of 15+ Lake Superiors to ±0.0019°C?

January 15, 2024 10:55 am

I have a problem with the entire concept of using statistical means to improve the value of ocean temp measurements.
When we use multiple measurements to improve accuracy and/or precision, it is only valid when we are making repeated measurements of the same thing using the same instrument.
We cannot use the same statistical principles to say anything about a whole bunch of measurements of a different thing using a different instrument on a different day in a different place.
There is an inherent fallacy in talking about the temperature of “the ocean”, and how it is changing over time, as if the whole ocean is all one thing, and those ARGO floats are all one instrument.

I have a lot more to say about this, but this will do for now.

Rud Istvan
Reply to  Nicholas McGinley
January 15, 2024 11:17 am

As explained 5 years ago in the guest post “ARGO fit for purpose?”, all the floats actually now use the same temperature sensor, the SBE, calibrated to 0.001C with a drift of 0.001C per six months. And in a meaningful sense they all measure the same thing, the temperature profile from 2000 meters to the surface. And the accuracy can be verified since the ocean temperature at 2000 meters is remarkably uniform all over the world (which is why the depth of 2000 meters was chosen). Just not the same place every time.

Reply to  Rud Istvan
January 15, 2024 12:09 pm

You are quoting the calibration of the sensor itself. The measurement uncertainty of the float itself is much greater; the last info I found was about +/- 0.5C, typical of many temperature measuring devices. The sensors used in ASOS measuring stations are similar to those in the Argo floats, yet the measuring station has an uncertainty of +/- 1.8F, which is about +/- 1.0C.

They are not measuring the same thing in a meaningful sense since they are not measuring the same thing.

The stability of the ocean temp at 2000meters isn’t really determinative since the profile is measured all the way to the surface and the depth of the initial thermocline varies widely throughout the ocean. They are talking about the heat content of the “upper 2000 m” which can also vary widely.

Reply to  Tim Gorman
January 15, 2024 6:14 pm

Again, to measure something multiple times to improve the precision, the data must have the property of ‘stationarity.’ That is, the mean and standard deviation must not change over time. Therefore, it is justifiable for a surveyor to make multiple measurements of an angle turned, and divide by the square-root of the number of measurements. The exact same thing (measurand) is measured with the same instrument, with no reasonable expectation of the measurand or transit changing during the time of the measurements. Therefore, all differences can be assumed to be random error.

In the case of the Argo buoys, there are nearly 4,000 of them, not one. While they may have all come from a factory with calibrations that were within tight tolerances, electronics are known to drift over time. That means some will be reading slightly high after a period of use, while others will be reading slightly low, and all by different amounts. Some can be expected to fail completely before being retrieved, and some will be providing unreasonable results that are not fit for use, but it may not always be obvious. Because there are both vertical and lateral currents in the oceans, one cannot assume that even a particular buoy is measuring the same thing multiple times; it isn’t intended to. For the ‘same’ point in the ocean, it will require multiple drifting buoys, none of which can be assumed to be in exactly the same position, and certainly not at the same time. Thus, the unstated assumption is that “close enough” is an unspecified location error.

Once again, these researchers need a competent metrologist on hand, which they don’t seem to have.

Reply to  Clyde Spencer
January 15, 2024 8:18 pm

The GUM defines the requirements for repeatability.

B.2.15 repeatability (of results of measurements)

closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement

NOTE 1 These conditions are called repeatability conditions.

NOTE 2 Repeatability conditions include:

— the same measurement procedure

— the same observer

— the same measuring instrument, used under the same conditions

— the same location

— repetition over a short period of time.

NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.

[VIM:1993, definition 3.6]

This whole process of taking multiple devices and measurements at different locations and subsequently combining them into a “measurement” by averaging is a joke.

I am including an image from Dr. Taylor’s book that shows the derivation of using the divide by √n. Read closely, it is simple sampling theory. Basically, “N” measurements of the same thing done in multiple experiments. IOW, multiple samples of size “N” that create a normal distribution around “X”.

One must note, one large sample DOES NOT give a normal distribution around “X”; that takes multiple experiments (samples) of measuring the same thing. In essence, you can’t use 4000 measurements that create one single measurand (by averaging them) and then use those same 4000 measurements again to create a normal distribution of what statisticians call a “sample means distribution”.

bdgwx
Reply to  Clyde Spencer
January 16, 2024 5:51 am

Clyde, It might be beneficial to review NIST TN 1900 E2 and see if you can frame your position on stationarity to be consistent with the example provided by NIST in which they take individual temperature measurements on different days and then combine them via a measurement model to find the monthly average temperature and its uncertainty.

Reply to  bdgwx
January 16, 2024 7:14 am

A paper which you still cannot understand.

List the assumptions…

Reply to  karlomonte
January 16, 2024 2:23 pm

He won’t list them. He doesn’t even understand what they are!

Reply to  bdgwx
January 16, 2024 7:59 am

Read it again for understanding. What you don’t seem to understand is how measurements are done.

1) This is an example with many assumptions.

2) The measurand is defined as the monthly average Tmax. A unique and simple property.

3) No systematic uncertainty.

4) No measurement uncertainty.

5) The example discussed some details:

“””””Adoption of this model still does not imply that τ should be estimated by the average of the observations — some additional criterion is needed. In this case, several well-known and widely used criteria do lead to the average as “optimal” choice in one sense or another: these include maximum likelihood, some forms of Bayesian estimation, and minimum mean squared error. The associated uncertainty depends on the sources of uncertainty that are recognized, and on how their individual contributions are evaluated. One potential source of uncertainty is model selection: in fact, and as already mentioned, a model that allows for temporal correlations between the observations may very well afford a more faithful representation of the variability in the data than the model above. However, with as few observations as are available in this case, it would be difficult to justify adopting such a model. “””””

6) “””””The {Ei} capture three sources of uncertainty: natural variability of temperature from day to day, variability attributable to differences in the time of day when the thermometer was read, and the components of uncertainty associated with the calibration of the thermometer and with reading the scale inscribed on the thermometer.”””””

7) The repeatable condition of “repetition over a short period of time” as defined in the GUM is obviously pushed to the limit, but is also recognized by defining part of the uncertainty as the variance of the data “from day to day”. That is, the variance in the data is part of the uncertainty.

8) The same device is used. This is important. TN 1900 does not discuss averaging with other measuring devices nor over long periods of time.

Show some references about calculating measurement uncertainty when using different devices to measure different things.

I’ll repost Dr. Taylor’s derivation of when to divide by the √n. Have you shown the μ’s and σ’s are all the same between all the stations as his derivation shows is necessary? Do you have multiple measurements of the same measurand?

cuddywhiffer
Reply to  Rud Istvan
January 15, 2024 12:16 pm

Rud, These argo buoys drift with the ocean currents. How do they deal with littoral areas where the depths are much less than 2000m?

Rud Istvan
Reply to  cuddywhiffer
January 15, 2024 12:24 pm

Quick answer, they don’t. If they cannot reach 2000 meters they don’t measure, and drift at 1000 meters for another ten days. If they cannot reach 1000 meters, they go shallower until they can drift, and then wait ten days.
ARGO floats are really amazing. Two years of conceptual system design before ever starting the float engineering. The details are in my old post on Argo from 5 years ago, including links to the authoritative documentation.

Reply to  Rud Istvan
January 15, 2024 8:35 pm

Speaking of old posts and articles on the Argo floats:

Correcting Ocean Cooling 2008

Reply to  cuddywhiffer
January 15, 2024 12:43 pm

They don’t need to go 2000m deep to perform their primary goal, which is to report thermocline depths for undersea navigation, and magnetic fields of passing submarines.

Reply to  Rud Istvan
January 15, 2024 12:31 pm

Dissimilar metals coming in contact with each other form a temperature dependent junction that is the basis for the thermoelectric effect. No one can manufacture 4000 or more test instruments that have identical thermoelectric junctions or that are identical to each other in any fabrication technique. So while laboratory testing of a sensor may yield a particular accuracy statement, using that accuracy as the accuracy of a manufactured item doesn’t work in practice. If it did, then all calibration labs would be out of business because they would be unnecessary.

Reply to  doonman
January 15, 2024 12:49 pm

There are electronic elements that are needed to use the sensor. They also have a calibration drift and a manufacturing drift. Even a small change in the water flow channel due to temperature change as it moves to the surface can introduce measurement uncertainty that has nothing to do with the sensor itself.

As I remember, the float also measures salinity of the water in order to compensate the temperature measurement. That measurement can introduce measurement uncertainty as well.

It’s the FLOAT measurement uncertainty that is the controlling factor, not the sensor measurement uncertainty.

Rud Istvan
Reply to  Tim Gorman
January 15, 2024 1:12 pm

The salinity is calibrated at 750 meters, because it is fairly constant there and below. The primary use is to estimate ‘ocean fresh water store’ (the language in the conceptual design documentation), translation: ocean rainfall. The secondary use is density, to calibrate depth above 750 meters. It is in the design documentation online.

Reply to  Rud Istvan
January 15, 2024 2:08 pm

If it’s calibrated at 750m and below because the salinity is constant then what happens to the measurement uncertainty above 750m?

If ocean rainfall changes the salinity then how can calibration be maintained for depth above 750m? Various amounts of precipitation over an area will cause differences in the calibration above 750m over that same area.

It all contributes to measurement uncertainty for the “float”, regardless of the sensitivity and/or calibration drift of the sensor. Probably needs to be a Type B measurement uncertainty because it can’t be measured using observation.

As I understand it the Argo float actually measures conductivity, not salinity. The conductivity is then manipulated with “adjustments” for an assumed “normal” ocean salinity and for an assumed “sensor calibration drift” in order to get a salinity “value”. A whole bunch of “uncertainty” for any specific reading as well as over time as the float drifts.

Any time I see individual field measurements of different things given a measurement uncertainty in the thousandths digit I truly take it with a grain of salt. That just sounds like the typical climate science meme of “all measurement uncertainty is random, Gaussian, and cancels” so that the precision of calculation of the average can be assumed to be the measurement uncertainty.

Michael S. Kelly
Reply to  doonman
January 15, 2024 4:38 pm

The SBE temperature measuring device is a thermistor, which utilizes a metal wire whose resistance as a function of temperature is known. In the ARGO floats, the sensor is a platinum resistance thermometer (PRT). What you described is a thermocouple, which depends on the Seebeck effect in the junction of two dissimilar metals to generate an electric potential that varies with temperature. The PRT is a very repeatable sensor, sensor to sensor, though that doesn’t matter very much. The SBE is more than just the PRT. It includes all of the circuitry and housing, with a plug-out interface. Each unit is individually calibrated at the factory, and they are just stupidly accurate. Moreover, units recovered after years of operation have been found to still be within the factory calibration bounds.

Reply to  Michael S. Kelly
January 16, 2024 7:08 am

“calibrated at the factory”

The operative phrase in all of of the hype.

From: “New method of temperature and conductivity sensor calibration with improved efficiency for screening SBE41 CTD on Argo floats” (2019)
Once a float is deployed, direct confirmation of sensor accuracies is impossible, in contrast to a shipboard CTD observation, which can be compared by post-calibration in a laboratory. Therefore, when the Argo program began, it was decided by the Argo data management team that a data flow and quality control method would be established in order to maintain high-quality uniform data. This method involves real-time quality control (rQC), which is analyzed by the Argo data assembly centers (DAC) within 24 h after measurement, as well as a delayed-mode quality control (dQC), which is carried out for research purposes by the principal investigator (PI) within 1 year of the rQC. (bolding mine, tpg)

In other words, the data is ADJUSTED to fit what someone thinks the actual reading should be, as opposed to just developing a Type B measurement uncertainty interval to be propagated with the raw data.

Why is climate science so stubborn about not using standard metrology protocols? Why must all measurements be just stated values with no uncertainty intervals? Why does climate science assume that all measurement uncertainty is random, Gaussian, and cancels? Is it just to make things *easier*?

Reply to  Tim Gorman
January 16, 2024 10:23 am

Once you lose calibration traceability against known standards using predetermined recall periods and published results to assure instrument calibration stability and reliability, all you have left is armwaving. There is no other way to describe it.

sherro01
Reply to  Willis Eschenbach
January 15, 2024 2:48 pm

Thank you, Willis,
That graph of 87 float responses is a very good example of the proper use of data versus assertions. It is hard to argue against this picture that tells 1,000 words.
Sadly, the average reader is challenged to comprehend it, simple though it is. It leads from the simple demo that ocean temps are not “remarkably uniform all over the world” to more material about uncertainty shown for example by that sky blue jagged trace.
In the fullness of time, history will not be kind to “scientists” who made (knowingly?) stupid claims about uncertainty, a major, vital concept for proper science.
Geoff S

Dave Andrews
Reply to  sherro01
January 16, 2024 7:41 am

When Fridtjof Nansen explored the Arctic in the Fram from 1893 – 96 he took many temperature measurements of the ocean and found

“These temperatures of the water are in many respects remarkable. In the first place the temperature falls…..from the surface downward to a depth of 80 metres after which it rises to 280 metres, falls again at 300 metres then rises again at 326 metres then falls to rise again at 450 metres, then falls steadily down to 2000 metres, to rise once more slowly at the bottom. Similar risings and fallings were to be found in almost all the series of temperatures taken”

This series of temperatures were taken in August, September and October 1894

‘Farthest North’ Fridtjof Nansen Volume 1 p 263-264

This was the Arctic but presumably all oceans have variable temperature profiles.

Michael S. Kelly
Reply to  Willis Eschenbach
January 15, 2024 5:15 pm

WE, how are the float temperature profiles averaged to give the “final” result? I have a vague memory of the problem of changing float locations and consequent coverage changes being dealt with by the following method: first, use the profile data to calculate, by interpolation, the temperature profile at a standard grid location, and then average all of the standard grid location profiles together to get one grand temperature.

If that is how they really do it, then they can’t claim to have used temperature measurements to arrive at an overall average temperature; they’re using calculated temperatures, or, as Mosh (accurately, as it transpires) dubbed them, predictions. I have a real problem with that, since I’ve had experience with weather station data from six stations in a two by two mile square area providing six different temperatures at exactly the same time. As far as I’m concerned, averaging them throws away information, and averaging the average with other averages of measurements furthers the loss of information. But what do I know?

bdgwx
Reply to  Michael S. Kelly
January 15, 2024 5:32 pm

They use ensemble optimal interpolation to fill the grid. The process is similar to how reanalysis datasets work. That is they use a physically realistic model of the space to make a first guess and then adjust the fields (again…in a physically realistic way) to match the observations. It’s not unlike how 3D-VAR works. They then just do a trivial area weighted average of the grid mesh.

Reply to  bdgwx
January 15, 2024 8:56 pm

Fake Data.

Reply to  bdgwx
January 16, 2024 1:36 am

So, what you are saying is they MAKE UP most of the data.

Thanks. 🙂

Reply to  bdgwx
January 16, 2024 4:49 am

A mathematician’s answer to everything: just make up a “physically realistic model” to create data and then claim the results as “measured”. No uncertainty, no recognition of varying possibilities seen in the real world. This is “pseudoscience” at its best.

Reply to  Jim Gorman
January 16, 2024 10:31 am

With no possible traceabilty to maintained accepted standards.

bdgwx
Reply to  Willis Eschenbach
January 15, 2024 5:34 pm

Is this the range or standard deviation of the profile or something else?

Reply to  bdgwx
January 16, 2024 7:17 am

Look at the labels, duh.

Erik Magnuson
Reply to  Willis Eschenbach
January 15, 2024 9:25 pm

The lines look remarkably parallel to my eye, albeit with a few outliers.

Reply to  Nicholas McGinley
January 15, 2024 2:31 pm

+100. I wonder what the definition of their measurand is.

Here is an image of the steps the GUM recommends.

For all the scientists that think they have made a definition that is unassailable by saying the measurand is defined as an average of all the floats, they have missed the fact that definition also means there is only one measurement. That means “n” = 1. The uncertainty then equals the variance and standard deviation of the data.

Robert B
Reply to  Nicholas McGinley
January 16, 2024 2:26 am

When you measure an unchanging value, you assume that the errors are perfectly random. I was taught, as a rule of thumb, that you can’t safely make that assumption well enough to quote an error of only a quarter of the precision of the measurement or less. E.g. if you measure the length of a piece of string to the nearest mm and the SD of 100 measurements is 0.5 mm, you report an uncertainty of twice this – 1 mm. If it comes out 0.05 mm, you report an uncertainty of 0.3 mm, not 0.1 (round up, don’t report it as two significant figures), because you do not know that the errors are perfectly random.

Even more important if sampling.

bdgwx
Reply to  Robert B
January 16, 2024 7:00 am

0.5 mm ==> 1 mm

0.05 mm ==> 0.3 mm

??

Robert B
Reply to  bdgwx
January 16, 2024 12:07 pm

A quarter of 1 mm, rounded up. Each measurement was made with a precision of 1 mm, not 0.1 mm.

Do you ever understand the points being made?

bdgwx
Reply to  Robert B
January 16, 2024 2:32 pm

No. I don’t always understand the points being made. That’s why I often follow up with questions.

What is being reported for the length of the string? The average of the 100 measurements or something else?

Under what circumstances might an instrument with 1 mm resolution sometimes result in an SD of 0.5 mm and sometimes 0.05 mm when doing 100 measurements of an unchanging value?

Reply to  bdgwx
January 16, 2024 2:45 pm

Resolution is only related to accuracy; it is not the same as accuracy. As I’ve pointed out before, I have a frequency counter that can resolve down to 1 Hz – but its measurement uncertainty is in the 10 Hz digit. I can state that unit digit but I’m only fooling myself in doing so.

Robert B
Reply to  Tim Gorman
January 16, 2024 4:26 pm

I should have used “resolution” rather than “precision”, as in using a ruler with the smallest increments being 1 mm.

Robert B
Reply to  bdgwx
January 16, 2024 4:43 pm

Do you ever understand, was what I was questioning.

If you did 10000 measurements and the SD popped up as 0.05.

So something like 9000 readings of 100 mm and a thousand of 101 mm. The true value might be 100.4 mm so rounded down most of the time. You get an average of 100.1 mm with an SD of 0.05 and you report 100.1 +/- 0.1 mm, which would be underestimating by 0.2 mm, at least. Not a big deal unless you conclude the world needs to go vegetarian because of it.
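A short R sketch of that scenario, under the stated assumptions (true length 100.4 mm, a 1 mm ruler) plus a tiny assumed random reading error of 0.05 mm:

set.seed(1)
true_len <- 100.4
readings <- round(true_len + rnorm(10000, 0, 0.05))   # read to the nearest mm
mean(readings)                 # about 100.02: nearly the full ~0.4 mm rounding bias remains
sd(readings) / sqrt(10000)     # standard error of the mean ~0.0015 mm, wildly overstating the real accuracy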

bdgwx
Reply to  Robert B
January 16, 2024 5:18 pm

9000 100’s and 1000 101’s is σ = 0.3 mm.

Anyway, what would you do if σ = 0.005 mm?

Robert B
Reply to  bdgwx
January 16, 2024 8:12 pm

I was wondering if you would actually bother. I didn’t. Just had a rough guess.

Throw in 9900 and 100.

You can calculate the SD but miss a simple point about the practical use? The recalcitrance is strong in this one.

The point is, one last time, the statistics work if you have perfectly random errors. A perfectly symmetrical distribution. So if you round to a certain increment, you cannot rely on that being the case. You can’t expect tens of thousands of measurements to give you a more precise value than half the resolution (you would measure to the nearest 0.5 mm with a ruler so a quarter).

With Argo, the actual thermistor might put out a PD with increments corresponding to +/- 0.005°C, so you can pretend that sampling the whole ocean is like measuring a single sample thousands of times simultaneously, and calculate an error that is less than half that. But like rounding to the nearest increment, it’s highly unlikely that measurements are a perfectly normal distribution around the true value. The actual experiment is putting the thermistor in an instrument that will sink 2000 metres and surface. It will not follow the exact same path in 10 days time, and the water in that path will not be the same water that was there ten days before.

An analogy that I used for the surface temperatures is it’s like measuring the average height of humans by measuring the shortest and tallest person walking past the same spot each day. You are relying on the Grade 1 class excursion or college basketball team going past in the middle of your survey period to not get a meaningless result.

Reply to  Robert B
January 17, 2024 7:07 am

It really doesn’t matter one iota what the resolution of the sensor is if the measurement uncertainty of the entire float is higher than the resolution of the sensor.

bdgwx is a blackboard statistician with no knowledge of actual field measurement. He is just using this argument as a red herring so he doesn’t have to address the measurement uncertainty of the float itself.

bdgwx
Reply to  Robert B
January 17, 2024 7:31 am

Relax. I’m just trying to figure out your rule. If you don’t want to explain it then just say so and I’ll move on.

rhs
January 15, 2024 10:58 am

If all I have to do to get free Other People’s Money, aka to get some of my own old white guy (yes, 50 is old to them) money back, is to change a label without a change of lifestyle, I’d game that system with no hesitation.
Now, willing to live in that system? Oh hell no! My proximity to Boulder, CO is nearly too close for sanity.

2hotel9
January 15, 2024 11:03 am

Ok, so, water is wet and the temperature of said wet water changes constantly and humans have zero to do with it. Alrightee then, glad that got sorted out.

January 15, 2024 11:17 am

And because they cluster in certain regions, in part due to currents and winds presumably, what is the effect of an El Niño pushing warm water to the eastern Pacific, or a La Niña doing the reverse, or a tropical cyclone, etc.? Surely that can’t cause an artifact in the measurements that exceeds 1/50th of a degree C … or maybe it can, severalfold.

The proportion of individuals immersed in academentia who have disqualified themselves for life from ever earning the respect of sentient beings is astounding.

strativarius
January 15, 2024 11:25 am

Speaking of DEI…

Cop29
Azerbaijan appoints no women to 28-member Cop29 climate committee
https://www.theguardian.com/environment/2024/jan/15/cop29-climate-summit-committee-appointed-with-28-men-and-no-women-azerbaijan

strativarius
Reply to  rhs
January 15, 2024 1:15 pm

Blimey, the idiocracy is here.

michael hart
Reply to  rhs
January 15, 2024 2:49 pm

I’m fairly sanguine about that.
There’s only a certain number of stupid people to share around.

If climate science already employs most of the idiots then the aviation industry may find it hard to recruit many people of such calibre.

1saveenergy
Reply to  michael hart
January 15, 2024 3:11 pm

X 100

Reply to  michael hart
January 15, 2024 6:23 pm

Boeing seems to be doing a good job.

January 15, 2024 11:33 am

Claimed uncertainty with 8,000 floats + measurements = ± 0.019°C

As per usual in climate science, the magic of averaging allows them to ignore the instrumental measurement uncertainty of the ARGO floats.

And even with this claim, 9 ± 5 ZJ is not much of a measurement. One significant digit, and the relative uncertainty is 9 / 5 * 100 = 56%!

Reply to  karlomonte
January 15, 2024 12:11 pm

It is not the sensor that determines the measurement uncertainty. It is the overall device. Any obstruction in the water flow channel changes the temp reading and therefore the measurement uncertainty of the station itself.

Reply to  Tim Gorman
January 15, 2024 12:28 pm

±0.019°C * sqrt(4000) = ±1.2°C!

Errata: Relative uncertainty should have been 5 / 9, not 9 / 5…

January 15, 2024 11:38 am

Thanks again Willis for an enlightening and entertaining review of what constitutes climate “science”.

January 15, 2024 11:46 am

I once asked John Christy how many readings a satellite takes in a day of the oceans. I recall the answer was tens of thousands, because it is scanning pervasively from a polar orbit with sideways-looking instruments, 20 times a day, while the Earth rotates beneath the orbit. That is so much more dense and consistent, coming from a single common instrument for all readings, than buoys that reach the surface for 20 minutes every now and again. The buoys, buckets, engine water intakes, etc., filled in mainly with guesswork interpolation across most of the ocean area, are not even close to the satellite data quality for the main control of global climate, the surface temperature that controls evaporation.

bdgwx
Reply to  Brian Catt
January 15, 2024 4:47 pm

Here is how the MSUs sample the Earth. The coverage is so sparse UAH had to do spatial infilling from measurements from up to 4175 km away spatially and 2 days away temporally. [1][2]


Reply to  bdgwx
January 15, 2024 11:41 pm

UAH uses more than 2 days, so why not show the 30 day coverage.

Of course, the surface data has coverage of a tiny fraction of the surface, unless you consider urban placement counts for all the surrounding area.

Neil Jordan
January 15, 2024 11:47 am

Willis, regarding your transcription of the abstract, the SI prefix was spelled “zetta”. I have read in other references and one comment here that “zeta” is used. Checked with NIST and Wiki:
https://www.nist.gov/pml/owm/metric-si-prefixes
https://en.wikipedia.org/wiki/Unit_prefix
It’s zetta.
BUT (always a but), online math tutor shows “zeta”.
https://tutoroctavian.blogspot.com/2013/12/the-twenty-si-prefixes.html
BUT a few more dives brought up more zettas than zetas. A cautionary note if you should do an etymological investigation, the final largest and smallest prefixes will be the fourth through eighth Marx Brothers.

Footnote – one of the authors is K Trenberth. A quick dive brought up Kevin Trenberth American climatologist. I dove into WUWT to find Trenberth of the missing heat:
https://wattsupwiththat.com/2018/11/02/friday-funny-at-long-last-kevin-trenberths-missing-heat-may-have-been-found-repeat-may-have-been/
You might look into whether this latest paper confirms finding the missing heat down there along with Bathybius haeckelii.

Editor
Reply to  Neil Jordan
January 15, 2024 1:26 pm

My favorite Trenberth reference comes from Chris Landsea, whose career started under Bill Gray, the creator of the seasonal hurricane forecasts. Landsea joined NOAA and spearheaded efforts to coax more accurate information from ship records of tropical storms mostly lost to history.

https://courses.seas.harvard.edu/climate/eli/Courses/global-change-debates/Sources/Hurricanes/more/Landsea-letter-resigning-from-IPCC.pdf says, in very small part:

After some prolonged deliberation, I have decided to withdraw from participating in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). I am withdrawing because I have come to view the part of the IPCC to which my expertise is relevant as having become politicized.

It is beyond me why my colleagues would utilize the media to push an unsupported agenda that recent hurricane activity has been due to global warming. Given Dr. Trenberth’s role as the IPCC’s Lead Author responsible for preparing the text on hurricanes, his public statements so far outside of current scientific understanding led me to concern that it would be very difficult for the IPCC process to proceed objectively with regards to the assessment on hurricane activity.

I personally cannot in good faith continue to contribute to a process that I view as both being motivated by pre-conceived agendas and being scientifically unsound. As the IPCC leadership has seen no wrong in Dr. Trenberth’s actions and have retained him as a Lead Author for the AR4, I have decided to no longer participate in the IPCC AR4. 

January 15, 2024 12:02 pm

I have not read the paper BUT I suggest most of any heat change is seen in the top 200 surface metres and the thermocline down to

Reply to  Brian Catt
January 17, 2024 1:32 am

I suggest that your suggestion is a wild ass guess with no evidence to suggest if or how it may be the case, or not.

January 15, 2024 12:02 pm

Using the above calculations, 5 ZJ is ± 0.019°C … they are seriously claiming that we can measure the temperature of the top 2,000 meters of the ocean to within ±0.019°C.”

this is a foolish misunderstanding of uncertainty.

the +- ESTIMATES the ERROR not the uncertainty.

this has been explained many times. Ill give you an analogy but you wont answer my question

you have a thermometer it measures temperture in full degrees.

the shallow end of your pool is 77 F the deep end is 76F.

now PREDICT or estimate the tempeture you would record with a perfect instrument tossed randomly in the pool 1000 times
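One way to actually run that thought experiment, as a rough R sketch (assuming the pool grades linearly from 76°F at the deep end to 77°F at the shallow end, the tosses land uniformly at random, and “record” means reading the whole-degree thermometer):

set.seed(42)
true_temps <- runif(1000, 76, 77)   # perfect knowledge of the water temperature at each toss
recorded <- round(true_temps)       # what the full-degree thermometer reports
table(recorded)                     # roughly half 76s and half 77s
mean(recorded)                      # near 76.5 under these assumptions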

Reply to  Steven Mosher
January 15, 2024 12:53 pm

The error of what? How precisely you can locate the average? Or the accuracy of the average. Precision is not accuracy.

old cocky
Reply to  Willis Eschenbach
January 15, 2024 2:22 pm

You missed

  • shape of the pool
  • what fluid is filling the pool.
  • location of additional heat sources
  • location of additional heat sinks
  • thickness of walls
  • thermal conductivity of walls
  • thickness of top covering
  • thermal conductivity of top covering
old cocky
Reply to  old cocky
January 15, 2024 2:26 pm

Oh

  • the presence of internal separators
  • thickness of internal separators
  • thermal conductivity of internal separators
  • location of internal separators
old cocky
Reply to  old cocky
January 15, 2024 2:40 pm

and

  • what scale does the “perfect” instrument use?
  • what is its resolution?
  • what is its response time?

One would assume that perfect implies any scale to infinite resolution with instantaneous response. That then raises the question of transitional readings between entering the pool and the sampling location.

Rich Davis
Reply to  Willis Eschenbach
January 16, 2024 4:03 am

yesbut mosh ddnt ask about temperature he asked about tempeture so your rong

(Written in mosh style, free of punctuation, with no confusing big letterz and a smattering of typos and grammatical errors, as might be expected from ‘a english mager’!)

bdgwx
Reply to  Willis Eschenbach
January 16, 2024 8:19 am

I believe the point was in regards to a fundamental truth about the relationship between individual measurement uncertainty and how that uncertainty propagates into the average of a spatial domain when the domain is randomly sampled. We don’t need to concern ourselves with the minutia of details here. In fact, the minutia of details often makes it more difficult for some to understand the fundamental truth. So in that spirit, create a simple scenario, randomly sample it, and see what happens to the dispersion of the errors of the sample means relative to the population mean.

Reply to  bdgwx
January 16, 2024 8:45 am

Basically a word salad.

Have you not bothered to read WE’s experience in scuba diving? Or the post about temperature variation found in the Arctic ocean by an explorer around the turn of the last century?

These are experience talking, not some spatial model you make up in your mind. Do you have ANY references with physical experimental findings that support your use of a contrived spatial model?

You are also ignorant of the difference between measurement uncertainty and experimental data uncertainty. These are two different things and both add to the combined uncertainty of measurements.

Tell you what, give us a model for the attached spatial distribution of temperatures. Then I will give you another snapshot for another day and we’ll see if the model fits. I won’t hold my breath.

Reply to  bdgwx
January 17, 2024 1:50 am

“We don’t need to concern ourselves with the minutia of details here.”
Waves away the entire subject of uncertainty and error, for no particular reason at all except that it is hard to understand!
Replaces with a simplistic thought experiment that not only mis-states the issue but is also guaranteed to have nothing to do with the actual value of the parameter to be ascertained.

bdgwx
Reply to  Nicholas McGinley
January 17, 2024 7:25 am

NM: Waves away the entire subject of uncertainty and error, for no particular reason at all except that it is hard to understand!

Not at all. You can prove this out for yourself using a Monte Carlo simulation. Construct two scenarios: one complicated and one simple. You will see the fundamental truth that 1) sampling a spatial domain is adequate to estimate the average of that domain and 2) the uncertainty of the average of the sample is lower than the uncertainty of the individual measurements even when you only sample a fraction of the domain.
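For concreteness, here is a minimal R sketch of the kind of Monte Carlo being described, under simple stated assumptions (a 1,000-point field with a 3°C spread, samples of 10 points, each reading rounded to the whole degree to mimic a ±0.5°C read-out); the numbers are illustrative only:

set.seed(7)
domain <- rnorm(1000, mean = 15, sd = 3)     # assumed 1000-point spatial field, °C
errors <- replicate(10000,
  mean(round(sample(domain, 10))) - mean(domain))   # sample-mean error with rounded readings
sd(errors)     # about 0.95°C with n = 10; shrinks roughly as 1/sqrt(n) for larger samples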

Reply to  bdgwx
January 17, 2024 8:14 am

What Monte Carlo simulation?

Monte Carlo simulation:

  • “A Monte Carlo simulation is a model used to predict the probability of a variety of outcomes when the potential for random variables is present.
  • Monte Carlo simulations help to explain the impact of risk and uncertainty in prediction and forecasting models.
  • A Monte Carlo simulation requires assigning multiple values to an uncertain variable to achieve multiple results and then averaging the results to obtain an estimate.
  • Monte Carlo simulations assume perfectly efficient markets.

You are generating two random data sets, not creating a model. A model requires a functional relationship made up of uncertain random variables which can be assigned multiple values.

We used to do this in long range planning for a major telephone company to evaluate capital investment projects. Our variables included things like labor costs, interest rates, tax rates, inflation, etc. All kinds of different combinations of the factors would be run for each project and the rate of return evaluated — which allowed allocating capital to the most profitable projects.
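
For illustration, a toy version of that kind of capital-budgeting Monte Carlo might look like the sketch below. It uses a deliberately simplistic NPV model; the cost, rate, and cash-flow figures are invented and are not the telephone-company model described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 100_000                                  # number of input combinations

# Uncertain inputs, each assigned many possible values.
labor_cost = rng.normal(1.0e6, 1.0e5, n_runs)     # annual labor cost, $
disc_rate  = rng.uniform(0.04, 0.09, n_runs)      # discount rate
growth     = rng.normal(0.03, 0.01, n_runs)       # annual revenue growth
revenue0   = 1.5e6                                # year-1 revenue, $
capital    = 2.0e6                                # up-front investment, $
years      = np.arange(1, 11)                     # 10-year horizon

# Functional relationship: project NPV for every combination of inputs.
cash_flow = revenue0 * (1.0 + growth[:, None]) ** (years - 1) - labor_cost[:, None]
npv = (cash_flow / (1.0 + disc_rate[:, None]) ** years).sum(axis=1) - capital

print(f"mean NPV = ${npv.mean():,.0f}, chance of losing money = {(npv < 0).mean():.1%}")
```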

You are just creating two data sets of random numbers. We don’t even know whether you are creating a Gaussian set for each or whether each has the same distribution.

“the uncertainty of the average of the sample is lower than the uncertainty of the individual measurements even when you only sample a fraction of the domain.”

You are probably calculating the standard deviation of the sample means. That is *NOT* the accuracy of the mean and can’t be substituted for measurement uncertainty. The standard deviation of the sample means tells you how precisely you have located the average value, but it tells you nothing about how accurate that average is. The SDOM is just a measure of the sampling error; it is *NOT* a metric for accuracy.

You have had this pointed out to you MULTIPLE times yet you continue with the delusion that the standard deviation of the mean can tell you about the accuracy of the mean. It only does so if certain restrictions are met: 1) independent random data, 2) a Gaussian distribution of measurement uncertainty, 3) the same measurand, and 4) the same measuring device.

Nothing you do EVER meets any of the restrictions let alone all of them. Therefore the SDOM can *NOT* be substituted for measurement uncertainty.

Reply to  Willis Eschenbach
January 17, 2024 12:03 pm

How many were outside an expanded interval when using a coverage factor of 2.262 (9 degrees of freedom, 95%)?
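
For reference, 2.262 is the two-sided 95% Student-t value for 9 degrees of freedom (i.e., 10 measurements). A quick check, with made-up readings, might look like this:

```python
import numpy as np
from scipy import stats

k = stats.t.ppf(0.975, df=9)                     # two-sided 95%, 9 dof -> ~2.262
print(f"coverage factor k = {k:.3f}")

# Hypothetical example: 10 readings, expanded uncertainty of the mean = k*s/sqrt(n)
x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9])
u_mean = x.std(ddof=1) / np.sqrt(x.size)
print(f"mean = {x.mean():.2f}, expanded 95% interval = ±{k * u_mean:.2f}")
```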

bdgwx
Reply to  Willis Eschenbach
January 17, 2024 5:19 pm

1) Negative autocorrelation in a temperature field would raise serious questions regarding the 2LOT.

2) The salient truth isn’t regarding the distribution of the error of the sample means. It is that the sample means a) approximate the actual mean of the spatial domain and b) can do so with an uncertainty lower than that of the individual measurements even when the sample size is a fraction of the spatial domain.

3) Regardless, I did try replicating your experiment. I too created a population of 1000 values with negative autocorrelation exhibiting various Hurst exponent values. I then sampled the population randomly with replacement 1000 times with a sample size of 10, computed the mean, and compared it to the mean of the population. The distribution of the errors tended toward normal, with the expected 30ish% exceeding the SD. I reran the experiment several times with different distributions of the 1000 values. I could not replicate your result. I believe your result, so there must be something I’m missing.

4) Side note… estimating the Hurst exponent in Excel is time-consuming and laborious. I used the rescaled range method.
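
For anyone wanting to avoid the Excel drudgery, a rough Python sketch of the rescaled-range (R/S) estimate is below. It is a textbook version, not necessarily the exact procedure used above, and the test series are invented:

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x by the rescaled-range method."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())           # cumulative deviations
            r = dev.max() - dev.min()               # range of the deviations
            s = w.std(ddof=1)
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]          # slope ~ Hurst exponent

rng = np.random.default_rng(1)
white = rng.normal(size=1000)                       # expect H ~ 0.5
anti = np.diff(rng.normal(size=1001))               # negatively autocorrelated, H < 0.5
print(f"H (white noise)     ~ {hurst_rs(white):.2f}")
print(f"H (anti-persistent) ~ {hurst_rs(anti):.2f}")
```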

bdgwx
Reply to  Willis Eschenbach
January 17, 2024 7:26 pm

Agreed. There is definitely a limit to how far the 1/sqrt(N) relationship holds. Never mind that the 1/sqrt(N) relationship is only for uncorrelated measurements in the first place. In reality measurements are going to exhibit some correlation, effectively causing us to lose degrees of freedom. And in the case of ARGO the minuscule sampling of the ocean means that the uncertainty of the global average (±0.004 C) is actually higher than that of the individual measurements (±0.002 C).

BTW… I did round all of my mock measurements to the nearest integer to simulate ±0.5 C rectangular uncertainty as well. I don’t remember the exact figures, but I believe n=10 samples (1/100th of the population) resulted in an RMSD of 5x the individual ±0.5 C uncertainty. But by the time I got to n=100 (1/10th of the population) it started to cross below that ±0.5 C uncertainty threshold. I’d have to rerun the simulation to be sure, and it’s late, so I’m going to bow out for the night.
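
A small sketch of the "lost degrees of freedom" point above: with positively correlated errors (AR(1) with rho = 0.6 is assumed here purely for illustration), the spread of the mean is wider than the naive 1/sqrt(N) rule suggests:

```python
import numpy as np

rng = np.random.default_rng(7)
N, rho, sigma, trials = 100, 0.6, 1.0, 20_000

# Build AR(1)-correlated errors: each error partly inherits the previous one.
innov = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2), size=(trials, N))
e = np.empty((trials, N))
e[:, 0] = rng.normal(0.0, sigma, trials)
for i in range(1, N):
    e[:, i] = rho * e[:, i - 1] + innov[:, i]

naive = sigma / np.sqrt(N)                          # the 1/sqrt(N) rule
print(f"naive sigma/sqrt(N)   = {naive:.3f}")
print(f"actual SD of the mean = {e.mean(axis=1).std():.3f}")   # roughly 2x larger here
```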

Reply to  bdgwx
January 18, 2024 5:45 am

As Willis tried to tell you (and it went right over your head), you are determining how precisely you are calculating the average. That tells you *nothing* about how accurate the average is. It is the ACCURACY of that average that is most important, not how many digits you can use in determining the average value.

You *still* haven’t internalized the concept that uncertainty is not error and that you can’t know the true value no matter how many measurements you make. Therefore you can’t determine error either.

In order to make Willis’s example match global temperature you would have to measure the diameter of the pencil lead in MULTIPLE different pencils using different rulers under different environmental conditions.

In that case the average doesn’t even physically exist and therefore can’t be a measurand.

Reply to  Willis Eschenbach
January 18, 2024 8:50 am

Perfect example of resolution limitations!

It illustrates very well why the standard deviation of the data is a better indicator of the dispersion of measurements attributable to the measurand. The standard deviation of the mean only shows the interval where the population mean may lie.

From these two, as you say, the range of measurements may be very wide, yet the SEM can approach zero. I like the nuclear power plant example. Would you rather see released radiation over 24 hours stated as 10.00 ± 0.02 rads or 10 ± 7 rads?
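
A toy numeric version of that point, with invented "radiation" readings, shows how the SEM can shrink toward zero while the SD of the readings stays wide:

```python
import numpy as np

rng = np.random.default_rng(3)
readings = rng.normal(10.0, 7.0, 10_000)      # invented readings: mean 10, SD 7

sd = readings.std(ddof=1)                     # dispersion of individual readings
sem = sd / np.sqrt(readings.size)             # precision of the computed mean

print(f"mean = {readings.mean():.2f}")
print(f"SD   = {sd:.2f}    (the spread a dose report needs to convey)")
print(f"SEM  = {sem:.3f}   (only how well the mean itself is pinned down)")
```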

Reply to  bdgwx
January 17, 2024 7:13 pm

1) Negative autocorrelation in a temperature field would raise serious questions regarding the 2LOT.

You’ve never studied thermodynamics, have you? Why do you think a temperature gradient can’t be addressed in either direction? The only difference is the sign. You don’t know what you’re talking about.

2) The salient truth isn’t regarding the distribution of the error of the sample means. 

Let’s change that to read more accurately. The distribution of the sample means has a variance that arises due to sampling error. If the population were normal, and all samples were also normal with the same μ and σ, there would be no sampling error. The mean of the sample-means distribution would be the same as that of each sample and of the population. That never occurs.

You just can’t get past the problem that single measurements of different things are not samples with a mean, can you? Heck, they are not even samples of the same thing.

This truly doesn’t even follow TN 1900. At least TN 1900 uses measurements of the same thing, i.e., Tmax in the same screen.

a) approximate the actual mean of the spatial domain and b) can do so with an uncertainty lower than that of the individual measurements even when the sample size is a fraction of the spatial domain.

The base data, that is, the temperatures, ALL HAVE MEASUREMENT UNCERTAINTY. That uncertainty propagates throughout each and every calculation based upon them.

UNCERTAINTIES ADD. ALWAYS. That means if you have sampling error, IT ADDS, to the propagated measurement uncertainty. As does any systematic uncertainty.

I then sampled the population randomly with replacement 1000 times with a size of 10 and computed the mean and compared it to the mean of the population.

This just illustrates your inability to recognize that in a real physical environment, each of the numbers you used would have an uncertainty associated with them.

Lastly, why did you use replacement? Do you really think this is appropriate when you are really taking the average of single measurements to find a mean? I’ll say it again: with single measurements, there is no sample-means distribution because there are no means of samples. There is only the single temperature data, which ARE NOT SAMPLES.

Reply to  bdgwx
January 18, 2024 8:25 am

Read WE’s reply post to you very carefully. It is a perfect example of resolution limitations!

I don’t know how to get through to you that measurements are being read, recorded, and analyzed and not just numbers on a number line.

From Willis;

Doesn’t matter if the SEM is a ten-thousandth of a mm. The limit is not in the sample size. The limit is in the accuracy of the instrument itself. We still can’t tell the difference between the two pencil leads using a ruler.

Read this, especially for the measurement scenario. Please note as usual, this is only for measuring the same thing with a Gaussian distribution.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4654598/

Reply to  bdgwx
January 17, 2024 12:19 pm

We keep trying to explain to you why scientists, engineers, and machinists use the standard deviation rather than the standard deviation of the mean when making measurements.

Here is one more reference to help explain it.

Is the confidence interval the same as the standard deviation?

No, they’re different. The standard deviation shows how much individual measurements in a group vary from the average. Think of it like how much students’ grades differ from the class average.

A confidence interval, on the other hand, is a range that we’re pretty sure (like 95% sure) contains the true average grade for all classes, based on our class. It’s about our certainty in estimating a true average, not about individual differences.

https://www.simplypsychology.org/confidence-interval.html

People continually show you references yet you almost never show any. You rely solely on people believing you are THE expert and everyone who contradicts you is wrong.

Reply to  Nicholas McGinley
January 17, 2024 8:03 am

It’s the standard climate science meme of “all measurement uncertainty is random, Gaussian, and cancels.”

Reply to  Steven Mosher
January 15, 2024 2:54 pm

“you have a thermometer it measures temperture in full degrees.

the shallow end of your pool is 77 F the deep end is 76F.”

If it only measures to the integer value, then temps could be anywhere within the interval of 76±0.5 and 77±0.5 or any XX±0.5.

Let me add that predicting what a measurement value might be is akin to fortune telling. It is why measuring the temperature of a water bath is extremely difficult.

How about I predict 78F? That is in the realm of possibility!

old cocky
Reply to  Jim Gorman
January 15, 2024 5:27 pm

If it only measures to the integer value, then temps could be anywhere within the interval of 76±0.5 and 77±0.5 or any XX±0.5.

if it’s still reading within spec.

Reply to  Jim Gorman
January 15, 2024 9:07 pm

The centre of a pool is often a bit colder or warmer than the edges because the edges pick up warming from the surrounds. Depends on sunshine angle, weather, etc etc etc…

Mosh’s question is absolute NONSENSE from someone totally unaware of how anything in the real world actually works.

old cocky
Reply to  Steven Mosher
January 15, 2024 3:00 pm

you have a thermometer it measures temperture in full degrees.

the shallow end of your pool is 77 F the deep end is 76F.

now PREDICT or estimate the tempeture you would record with a perfect instrument tossed randomly in the pool 1000 times

What is the question? You would record 1000 temperature readings, which would give a better temperature profile than 2 measurements taken with a low-resolution instrument which may or may not have been calibrated and may or may not have been given sufficient time to stabilise for the 2 readings.

Reply to  old cocky
January 15, 2024 9:13 pm

And of course ignoring the fact that taking 1000 measurements takes time, and the water temperature would change to some degree over that time.

That means you CANNOT use the mathematical law of large numbers.

bdgwx
Reply to  Steven Mosher
January 15, 2024 4:24 pm

I thought this would be a fun exercise. I did a Monte Carlo simulation of a volume with a 1000-cell grid mesh and a linear temperature gradient from 70 F to 80 F. I then randomly sampled only 100 of the 1000 cells with ±0.5 F of rectangular uncertainty on each probe of a grid cell. The RMSD between the mean of the samples and the mean of the population was 0.17 F. Increasing the sample size to 500 decreased the RMSD to ±0.10 F.
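
A hedged sketch of that simulation is below. Details not given in the comment (number of trials, sampling with or without replacement) are guessed here, so the exact RMSD values may differ somewhat:

```python
import numpy as np

rng = np.random.default_rng(11)
population = np.linspace(70.0, 80.0, 1000)    # linear gradient across 1000 cells
true_mean = population.mean()

def rmsd_of_sample_mean(n_samples, n_trials=10_000):
    errors = np.empty(n_trials)
    for k in range(n_trials):
        cells = rng.choice(population, size=n_samples, replace=False)
        readings = cells + rng.uniform(-0.5, 0.5, n_samples)   # rectangular error
        errors[k] = readings.mean() - true_mean
    return np.sqrt(np.mean(errors ** 2))

for n in (100, 500):
    print(f"n = {n:3d}: RMSD of sample mean vs population mean = "
          f"{rmsd_of_sample_mean(n):.2f} F")
```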

Reply to  bdgwx
January 15, 2024 5:06 pm

That you would think a Mosh rant is a “fun exercise” isn’t a surprise.

Reply to  bdgwx
January 15, 2024 9:15 pm

The fact that you changed a stupid anti-science exercise into a theoretical one bearing no link to reality, doesn’t surprise anyone, either.

Reply to  bdgwx
January 16, 2024 4:23 am

So you used a cube of 10x10x10. Did all the “cells” at one side measure 70F and all the “cells” at the opposite side measure 80F?

Why did you use a linear gradient?

In a water bath, warmer temperatures tend to diffuse, meaning a linear gradient just doesn’t cut it.

Why did you bother with both a linear gradient and a Monte Carlo simulation?

A linear gradient across the cells would suffice to calculate the temperature of each intervening cell. Regardless of the shape, a linear gradient means a linear change as you progress from cell to cell!

The fact that your samples had any difference from the mean simply demonstrates that sampling error exists, and a random walk doesn’t fix it.

Reply to  bdgwx
January 17, 2024 1:57 am

Golly, you mean to say if you impose a linear gradient, it becomes very easy to interpolate/extrapolate a value at any particular point in the volume by just measuring a few places?
What an amazing leap of intuition!
Now explain how exactly this correlates with or says anything at all about a real world situation where the gradients are anything but linear, we do not know what they are, we have known and unknown sources of error of an unknown magnitude, and our methodology has never been independently verified as being valid ahead of time?

bdgwx
Reply to  Nicholas McGinley
January 17, 2024 7:21 am

NM: Golly, you mean to say if you impose a linear gradient, it becomes very easy to interpolate/extrapolate a value at any particular point in the volume by just measuring a few places?

No. I didn’t say anything remotely close to that. I made no attempt to interpolate/extrapolate a value at any particular point.

NM: Now explain how exactly this correlates with or says anything at all about a real world situation where the gradients are anything but linear, we do not know what they are, we have known and unknown sources of error of an unknown magnitude, and our methodology has never been independently verified as being valid ahead of time?

It shows that randomly sampling a spatial domain with a non-homogeneous temperature distribution can adequately estimate the true average of that spatial domain. Furthermore, the difference between the average of the sample and the population is less than the uncertainty of the individual random measurements even when we only know the values of 1/10 of the grid cells. And this is true regardless of the spatial variation of temperatures in the spatial domain. I ran the Monte Carlo simulation with various distributions of temperature. It had no effect whatsoever on the result.

Reply to  bdgwx
January 17, 2024 8:13 am

Furthermore, the difference between the average of the sample and the population is less than the uncertainty of the individual random measurements even when we only know the values of 1/10 of the grid cells.

Uncertainties ADD! The “difference” you define is the variance. The individual measurement uncertainty must be ADDED to the “difference between the average of the sample and the population”.

You are lost in the woods and don’t even know what you don’t know!

Reply to  bdgwx
January 17, 2024 8:25 am

“It shows that randomly sampling a spatial domain with a non-homogeneous temperature distribution can adequately estimate the true average of that spatial domain.”

NO! In order for the average to represent a “true value” you must have multiple measurements of the same thing using the same device under identical environmental conditions.

YOU ARE SUBSTITUTING THE STANDARD DEVIATION OF THE SAMPLE MEANS FOR THE MEASUREMENT UNCERTAINTY.

The SDOM does *NOT* represent accuracy of the mean. It is the accuracy that determines the “true value”.

And no field measurements can ever give the “true value”. It’s why the GUM has transitioned to “estimated value of the measurand +/- measurement uncertainty”. You can *never* know the true value and therefore can never know the exact error of a measurement.

All you are proving is that small samples give about the same average value as large samples for the mean. Neither gives you a measure of accuracy for the mean. You are assuming that the stated values are 100% accurate and that you can ignore their measurement uncertainty.

Why do you never seem to learn the difference no matter how often it is explained to you?

Reply to  bdgwx
January 17, 2024 10:08 am

Furthermore, the difference between the average of the sample and the population is less than the uncertainty of the individual random measurements

Hand waving—you don’t know this to be true. You want it to be true, granted.

Reply to  karlomonte
January 17, 2024 1:03 pm

It *is* handwaving. It doesn’t matter how precisely you calculate the average, it doesn’t tell you the accuracy of the average.

The accuracy of the average *is* determined by the individual random measurement uncertainties.

Reply to  Nicholas McGinley
January 17, 2024 8:19 am

Measurements should be given as “stated value +/- uncertainty”.

In climate science, and in the statistics world as well, all uncertainty is random, Gaussian, and cancels, leaving the stated values 100% accurate.

Corollary: everything is always linear.

Reply to  Steven Mosher
January 17, 2024 1:11 am

How about when the bottom of the pool is near freezing, except where the heater from my rooftop solar collector is pumping in an unknown amount of water which has been further super-heated to several hundred degrees by my heat pump heater; the north side of the pool has someone tossing in ice by the truckloads, bags full, and blocks of, at random but frequent intervals, the Sun is shining hotly in some places on the pool, but in a few other places are hail storms, blizzards, areas of heavy rain and huge areas of clouds with drizzle, plus a whole bunch of rainspouts are dumping in water all over the pool edges which may be any temperature at all?
Then assume that the thermometer was “calibrated” by using someone’s notions about the Top of the Atmosphere radiation imbalance, and thus overriding the factory set calibration of this “perfect” instrument.

Now take into account the fact that only certain areas are sampled and some areas never are, such as the shallow parts where the temp varies the most, the parts under the large ice floes where the temp varies hugely over short spans of distance, and certain other places for all sorts of other reasons.

And then consider that we are in fact attempting to measure temp, but only reporting how some opaquely determined statistical mean of the temperature is changing over time, and not reporting what is actually measured.

But first, kindly explain what exactly a “perfect” thermometer is, and how it eliminates all questions regarding device resolution, random errors, systematic errors, and basically the entire library of human knowledge known as metrology, while ignoring for now inconvenient portions of the branch of mathematics known as statistical analysis?

bdgwx
Reply to  Nicholas McGinley
January 17, 2024 7:12 am

Those details of the factors that are modulating the temperature of your pool do not invalidate the fact that randomly sampling the pool and averaging the values will approximate the true average temperature of the pool, and that the more times you measure it, the smaller the error becomes between the average of your sample and the true average.

BTW…the SBE-41 is calibrated in a NIST laboratory with a bath controlled to within ±0.0005 C of the triple point of water and the gallium melt point. This procedure has nothing to do with the top of the atmosphere or radiation balance.

Reply to  bdgwx
January 17, 2024 8:04 am

In other words, you assume the temperatures in the pool follow a random, Gaussian distribution and that all the samples are IID with the same μ and σ. This is basically what TN 1900 does.

BTW, you still haven’t figured out that you are defining the measurand as Tavg, which is calculated by

f(x1, …, xN) = (x1 + … + xN)/N

That means all your measurements are involved in calculating ONE instance of the measurand Tavg. You cannot turn around and say you have multiple instances of measuring the measurand from which a distribution of sample means can be developed and used to find statistics.

If you treat the measurements as individual samples, then you must do at least what TN 1900 does: find the standard deviation of the data, divide by the square root of the number of measurements taken, and lastly expand the standard uncertainty of the mean. Then a combined uncertainty must be calculated, adding the measurement uncertainty of each measurement, Type B uncertainties, etc.

Remember, what TN 1900 does is inform you of what values the mean may have. The standard deviation of the data tells you what the dispersion of temperatures around the mean is. Per the GUM, when a ± value is given, it should be stated what the value represents.
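
A minimal sketch of that TN 1900-style procedure, using an invented month of daily Tmax readings from a single station, is below. Only the procedure (s/sqrt(n), then a Student-t coverage factor) follows the comment; the numbers do not come from any real record:

```python
import numpy as np
from scipy import stats

tmax = np.array([24.1, 25.3, 23.8, 26.0, 25.5, 24.9, 23.2, 24.4, 25.8, 26.3,
                 24.0, 23.5, 25.1, 24.7, 26.1, 25.0, 23.9, 24.6, 25.4, 24.8,
                 23.7, 25.2])                     # invented daily Tmax, deg C

n = tmax.size
s = tmax.std(ddof=1)                              # dispersion of the data
u_mean = s / np.sqrt(n)                           # standard uncertainty of the mean
k = stats.t.ppf(0.975, df=n - 1)                  # 95% coverage factor

print(f"mean Tmax                  = {tmax.mean():.2f} C")
print(f"SD of the data             = {s:.2f} C   (spread of the readings)")
print(f"expanded uncertainty (95%) = ±{k * u_mean:.2f} C   (interval for the mean)")
```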

Reply to  bdgwx
January 17, 2024 8:45 am

“values will approximate the true average temperature of the pool”

“True” value? How do you know that? You are measuring different measurands under different environmental conditions. That does *NOT* meet the requirements for assuming the average represents a “true value”.

“the more times you measure it, the smaller the error becomes between the average of your sample and the true average.”

Nope. It does *NOT* tell you anything other than that the sampling error goes down with more samples. It tells you NOTHING about the accuracy of the measurements — which is what you need to know to get anywhere near a “true value”.

I have already posted here the fact that it is recognized that the floats have calibration drift after being lab calibrated and deployed in the field. The calibration drift is GUESSED at for later times. That GUESS needs to be included as a Type B measurement uncertainty for the float – yet it never is. It is always assumed that the GUESS makes the data 100% accurate. It’s called the “delayed” value.

NOTHING deployed in the field ever maintains its lab calibration. Get away from your blackboard with its 100% accurate stated values and get some real-world physical experience. Try telling a manufacturing plant manager that how accurately you can calculate the mean of samples pulled from the production line is a measure of the quality of the product!

January 15, 2024 12:04 pm

According to these others the oceans have indeed become too warm for fish.

Reply to  general custer
January 15, 2024 2:47 pm

They forget the Earth is still in a 2+ million-year ice age with 20 percent of the land frozen. Fish have been around for much longer than that, since times when it was much warmer than today.

Reply to  scvblwxq
January 17, 2024 2:19 am

Turns out there is a reason why single individual fish often lay eggs by the tens of millions. And why often the vast majority of them never do survive to adulthood, but sometimes a whole bunch of them do survive to adulthood.
Why would a fish need to lay such prodigious numbers of eggs, when for a population to be stable, each individual within any population only needs to produce exactly one offspring that reaches reproductive maturity?
Maybe it has something to do with that most hallowed of all principles in all of biology: Shit happens.

Reply to  Nicholas McGinley
January 17, 2024 2:21 am

(A few random or maybe not so random) Examples:

  • Salmon, which lays over 20,000 eggs at once.
  • Tuna, which can lay up to 2 million eggs at one time.
  • The ocean-dwelling grey grouper, which can lay close to 340 million eggs in one year.
  • The mola fish, which can lay up to 300 million eggs in one spawning season.
  • The mouthbreeder, which can lay up to several thousand eggs at a time.

Reply to  Nicholas McGinley
January 17, 2024 2:23 am

BTW, a single female Mola Mola fish can live in excess of 100 years.
Math, it’s how stuff survives.

January 15, 2024 12:09 pm

When people bring up ocean heat content, I like to post this graph

[Attached image: OHC-in-perspective-2]
January 15, 2024 12:16 pm

This is all we need: Note that they tend to concentrate in some areas, like the intertropical convergence zone by the Equator and the US East Coast, while other areas are undersampled.

A/ How many times do I say – “Water heated by ‘man’ coming off heated cities and farmland is flushing into the Conveyor, the sun heating the mud it carries is heating it more, and blah blah blah” (that is what’s melting Arctica)
It is also why hurricanes follow the tracks that they usually do – they are following that (artificial heat)

B/ The ‘Convergence Zone’ is all we need to hear = where all the hottest water on this Earth exists

That the floats are converging on those places and sending back data which, surprise haha surprise, shows ‘warming’
We don’t even need to know the numbers – the buoys are patently recording an oceanic version of the Urban Heat Island Effect.

Surely Shirley that means that, with those places being relatively small parts of The Ocean, The Ocean for the most/larger part is cooling

Another feather in the cap for Katabatic Heating – which is a cooling effect or the release of heat energy. That very hot/dry descending air and overheated landscapes below it act like mirrors for incoming solar.
The extra descending air came from extra rising air occurring over the oceans, sucking heat out of them
IOW Continent sized Hadley cells

i.e a warming atmosphere = a cooling Earth
We are heading rapidly into an ice age – as how many recent stories assert?

Reply to  Peta of Newark
January 15, 2024 12:36 pm

Sometime this last year, a Sputnik noted that the soil temp somewhere in Spain was 60°C
Taking that as being = sand (emissivity of 0.8), that patch of ground was radiating (via Stefan) about 550 Watts/m²

On June 21st solstice at 40° latitude (about = Madrid), averaged over 24 hours, the solar input there would be 350 Watts/m²
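
A quick check of the Stefan-Boltzmann arithmetic above (emissivity 0.8, surface at 60 °C):

```python
SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
eps, T = 0.8, 60.0 + 273.15      # emissivity and surface temperature in kelvin
print(f"radiated flux = {eps * SIGMA * T**4:.0f} W/m^2")   # ~559 W/m^2, i.e. about 550
```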

Reply to  Peta of Newark
January 15, 2024 2:54 pm

The Earth is already in a 2+ million year ice age, named the Quaternary Glaciation, in an interglacial period that might be getting ready to end bringing on another very cold glacial period. The Grand Solar Minimum that has just started and the Beaufort Gyre dumping its fresh water into the North Atlantic may be the triggers. https://en.wikipedia.org/wiki/Quaternary_glaciation

Reply to  Peta of Newark
January 17, 2024 2:52 am

“The ‘Convergence Zone’ is all we need to hear = where all the hottest water on this Earth exists”

Actually, the hottest water is probably mostly in areas that the ITCZ never wanders to, such as the surface waters in the desert zones. The Persian Gulf is the hottest of the sea temps, with the Red Sea garnering honorable mention.
Then there are some hot springs, geysers and such, ocean vents spewing superheated water at any and all latitudes, etc.
For example, the Red Sea, which is a rift valley splitting apart a tectonic plate, has been measured at over 132 degrees Fahrenheit all the way down at 6500 feet deep. Pretty hot.
In fact the ITCZ has a reputation for being kind of cloudy and always mostly raining really hard and dropping huge amounts of water from as much as 86,000 feet up in the sky, where it is, hear tell, mostly very chilly.
The convergence zone has a notably cooler temp of water below it. You can spot it by the thin strip of cooler water between the far warmer waters of the trade wind zones.
Just sayin’, since you brought it up.
This is a science site after all.

I have looked at a lot of the images which are purporting to show the actual locations of the floats, and they move around over time, and do not always show the same pattern, at all.
They are also not dropped in any sort of spatially equal pattern either, instead being deployed in lines. This may in itself be problematic, if they start out in lines and drift to places where currents (and hence winds) tend to push them, and at times become stuck in circulatory cul-de-sacs of one sort or another…
Sometimes they seem to form lines, but not at certain other times, but this may be because of poor reportage of the actual positions, who knows?

But one thing seems more certain: They tend to be deployed at high latitudes and almost never at or near the equator, but then they drift into a more spread-out distribution that puts many and maybe most of them at very low latitudes, so, on average, each one starts in a colder place and tends to drift to a warmer place. See the pic below of deployment locations. Almost none are anyplace close to a low latitude, but many of them wind up at low latitudes.

[Attached image: ARGO-deployment]
Giving_Cat
January 15, 2024 12:16 pm

Anyone who has scuba dived to any depth can tell you that averaging ocean temperatures is like an anorexic and an obese person standing on a scale together and dividing by two.

bdgwx
January 15, 2024 12:25 pm

Their claimed uncertainty says that four ARGO floats could measure the temperature of the entire global ocean to an uncertainty of less than one degree … yeah, right.

They are not saying that 4 floats could measure the global average temperature to less than 1 C.

Your method of extrapolating ±0.6 C from ±0.019 C does not account for the increase in spatial sampling uncertainty.

The ARGO floats use a few different temperature instruments. The most common one is the Sea-Bird Scientific SBE 41/41-CP which has a stated accuracy of ±0.002 C and a stability of 0.0002 C/year.

What this means is that the spatial sampling uncertainty dominates over the individual measurement uncertainty by a significant margin. Notice that the uncertainty of the global average temperature is actually higher than the uncertainty of the individual measurements. This is because the ensemble optimal interpolation method for field construction is the source of almost all of the uncertainty.

Sadly, I fear that’s as far as I got in their paper … I was laughing too hard to continue.

Have you discussed the publication with the authors yet?

Reply to  bdgwx
January 15, 2024 12:43 pm

“The most common one is the Sea-Bird Scientific SBE 41/41-CP which has a stated accuracy of ±0.002 C and a stability of 0.0002 C/year.”

This is the SENSOR accuracy and stability, it is *NOT* the accuracy and stability of the float itself.

ASOS stations use similar sensors but have a station uncertainty of +/- 1.8F (about +/- 1C). The only data I have seen on the float itself indicates a float measurement uncertainty of +/- 0.5C.

Averaging multiple stations does not increase the accuracy of the data set. More measurements only allow a more precise calculation of the average; they say nothing about the measurement accuracy of the average.

Substituting the precision of calculation of the average for the measurement accuracy of the average is endemic in climate science. It is incorrect methodology and is done to make the research “look” more accurate than it actually is.

Reply to  Tim Gorman
January 15, 2024 2:46 pm

He refuses to understand reality.

Reply to  bdgwx
January 15, 2024 2:46 pm

Have you discussed the publication with the authors yet?

How many more times are you and bell going to trot this tired old horse out of the barn?

Get some new material, please.

bdgwx
Reply to  bdgwx
January 15, 2024 2:59 pm

I did a Monte Carlo simulation with a 2×2 degree grid (11700 cells between 65S and 65N) with a temperature variance of about 36 K^2. The RMSD between the mean of 4 randomly selected grid cells and the mean of the 11700-cell population was about 3 K. Doubling the sample size to 8 cells decreased the RMSD to about 2 K. Anyway, with a sample size of 4 we might expect the uncertainty to be on the order of ±6 K (2σ). In reality the ensemble optimal interpolation method used by IAP is far more complex than my simple Monte Carlo simulation model, so the ±6 K figure is intended only as a first approximation of the uncertainty when all but 4 floats are denied.
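
A hedged sketch of that grid experiment is below. The population here is simply 11,700 cell temperatures drawn with a variance of about 36 K², which is far cruder than the real IAP field construction, so treat the printed RMSDs as illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)
cells = rng.normal(loc=290.0, scale=6.0, size=11_700)   # SD 6 K -> variance ~36 K^2
true_mean = cells.mean()

for n in (4, 8):
    sample_means = rng.choice(cells, size=(50_000, n), replace=True).mean(axis=1)
    rmsd = np.sqrt(np.mean((sample_means - true_mean) ** 2))
    print(f"{n} cells sampled: RMSD vs population mean ~ {rmsd:.1f} K")
```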

Reply to  bdgwx
January 15, 2024 3:20 pm

Your RMSD is the difference between a predicted forecast from the random values generated by the Monte Carlo random generator and the *stated* observational values. That does not represent measurement uncertainty, since only the stated part of the observational values was used and the measurement uncertainty of the observed values was ignored.

This is just more of the climate science meme that all measurement uncertainty is random, Gaussian, and cancels, so the actual measurement uncertainty can be ignored in favor of substituting the precision of the calculation of the average value.

Reply to  bdgwx
January 15, 2024 5:08 pm

Uncertainty is not error.

Can you not understand this simple concept?

Reply to  bdgwx
January 15, 2024 9:21 pm

So you did a pointless theoretical exercise using pointless erroneous assumptions.

And ended up with a pointless and irrelevant result.

Good for you! 😉

Reply to  bdgwx
January 15, 2024 4:49 pm

Uncertainties ADD. Your description makes spatial sampling uncertainty a Type B that should be ADDED to the measurement uncertainty.

You have made no estimate of this uncertainty so it is difficult to accept your assertion that measurement uncertainty is negligible.

As mentioned by TG, you are quoting sensor uncertainty from the ARGO site. Show some references on what the temperature uncertainty is for the whole float as a system. USCRN uses very accurate sensors similar to ARGO, yet the NOAA quoted error is ±0.3C.

Have you learned nothing about measurements?

UK-Weather Lass
January 15, 2024 12:39 pm

Willis’s posts never fail to amuse as much as they enlighten. 
 
This post from Guy de la Bédoyère for The Daily Sceptic helps to outline the disease that has eaten away at academia and is now working its way through the lives of the innocent and making them guilty when they patently were not. The video embedded within the piece is well worth the watch.

 
https://dailysceptic.org/2024/01/15/the-cult-of-technology-seeks-absolute-control/
 

Computers are not miracle workers, since every single last thing they do has been created by a two-legged Homo sapiens who has hopefully mastered the art of logic. If they haven’t, then the UK’s Post Office scandal is just the tip of a very deep ocean indeed, and one that can only get worse without skilled craftspeople who know exactly what their trade requires and why.

taxed
Reply to  UK-Weather Lass
January 15, 2024 2:37 pm

I agree the old skills of weather watching have largely been replaced, with weather models and AWSs now doing the work for you.
I feel lucky I learned about the weather the “old school” way, as it allows me to see through the BS the climate lobby tries to feed us about it.

January 15, 2024 12:45 pm

I do these numbers when calculating the effect of submarine volcanism on oceans. I can confirm your figure as regards the delta T that 10 zettajoules represents. Well spotted. 0.035 deg is close enough for government science.

As the great philosopher Bruce Willis said, “You wanna scare me? Play me some rap music”.

I have not read the paper BUT I suggest most of any real and significant heat change from surface warming is actually contained in the top 200 surface metres and the thermocline down to 700 m, which is mostly dominated by solar insolation, because the atmosphere does not heat the ocean, for various good reasons of the 2nd Law and relative heat capacity for a start.

See graphic. At 700m the ocean temperature is down to a tad over 5 deg so it can’t get much colder, as max density occurs at around 4 degrees.

FYI I started with a quick check on your number for the mass of the first 2,000 metres of ocean, which is fine, as are the other numbers. How can they even publish such overtly pointless waffle? Noise within noise. And all too much for the poor reviewers to actually read or, even better, check before signing off on it. CEng, CPhys.

[Attached image: Ocean-Thermocline]
Reply to  Brian Catt
January 15, 2024 8:12 pm

“At 700m the ocean temperature is down to a tad over 5 deg so it can’t get much colder, as max density occurs at around 4 degrees.”
Not true in the ocean; that is fresh water. In the ocean, density increases as the temperature gets colder.

Reply to  Phil.
January 16, 2024 7:03 am

And what, then, is the temperature in the deep oceans according to your data, with published science references we can check?

I believe you are making it up. It appears that the Challenger Deep gets down to 3 deg, at 1,000 bar. But generally, at 4 km average depth it’s closer to 4 deg at 400 bar.

https://oceanexplorer.noaa.gov/facts/temp-vary.html#:~:text=Therefore%2C%20the%20deep%20ocean%20(below,coldness%20of%20the%20deep%20ocean.

Reply to  Brian Catt
January 16, 2024 11:29 am

The point I was making is that liquid water has a maximum density at ~4°C; seawater does not.
[Attached image: page1-1599px-T-S_diagram.pdf.jpg]

Reply to  Brian Catt
January 17, 2024 3:33 am

For sea water, the temperature of maximum density coincides with the freezing point.
We had a very detailed conversation on the subject of deep water, density, and related topics some years ago here, and coincidentally enough it was on a different Willis post.
It was in fact one of the first times I ever commented on WUWT.
I can see iffen I can find it, as it had a lot of great graphs that took me some time to dig up.
At the time I was interested because so many people had made statements about water density, while seemingly oblivious to the complex relationship between salinity, temp, freezing point, max density, etc.

Many otherwise intelligent and edumacated people seemed to have forgotten something that we all know and make use of whenever we spread salt on our walkway in a snowstorm… that salt (or any other impurity in solution) lowers the freezing point of water, while also obviously increasing its density.
Less obvious, or not at all so, and 100% non-intuitive, is that the addition of salt to water also alters the temperature of max density, such that by the time we have the salinity levels found in the ocean, which span a range from 30 to 50 salinity units, the water behaves like most other liquids, increasing in density all the way down to the melting point (in physical chemistry we always speak of melting points, not freezing points, since, you know, supercooling and all…).

In fact, if we think way back to general chemistry or basic physics classes, we will recall boiling point elevation and melting point depression, and also recall that it does not even matter what the solute is, just how much of it is present.
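
A back-of-the-envelope version of that colligative-property point, crudely treating seawater as 35 g of NaCl per kg of water (an idealized approximation; real seawater freezes nearer -1.9 °C because the dissolved ions are not ideal):

```python
KF_WATER = 1.86                  # cryoscopic constant of water, K kg/mol
M_NACL = 58.44                   # molar mass of NaCl, g/mol
salinity = 35.0                  # g of salt per kg of water (assumed)

molality = salinity / M_NACL     # mol of NaCl per kg of water
i = 2                            # each NaCl dissociates into ~2 ions
delta_tf = i * KF_WATER * molality
print(f"estimated freezing point ~ {-delta_tf:.1f} C")   # about -2.2 C
```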

As well, variations in pressure also alter many physical characteristics of substances, and under water, there is a lot of pressure and a huge pressure gradient. Pressure, in units of weight per unit area (ex: psi), is equal to the weight of all the water above a given point, plus all the air above that, to the TOA.

Now, you said you thought he was making something up, but then asked for a reference to a parameter he did not mention. Phil correctly pointed out that it is only pure water that has a max density of 4 degrees C, not saline water and certainly not sea water.
You are the only one who seems to think that there is a causal connection between the temp of max density of water and the lowest temp of the ocean.

There are all sort of factors at play when it comes to each of these separate parameters.
Are you asking for proof of the assertion that max density of salt water differs from that of fresh water?
Or only of what you state is the lowest temp found in the ocean?

Do you even know how and where deep water forms?
Ever seen the videos of brinicles as sea water freezes and squeezes out the salt at supercooled temps?
Seems maybe not.

Plenty of documentation exists of sea water below the normal melting point of fresh water. IOW, the coldest sea water at the sea bed is known to be at least a degree C below zero (-1.0°C or 30.2°F).

Here is a great video of brinicles, which may be helpful in imagining what is going on when tens of thousands of square miles of the Arctic Ocean freezes every year: https://youtu.be/BtQhb8sWJNw?si=UuP9zH7Pc-bj3MJW

I’ll see about locating that article I was mentioning.

Bob
January 15, 2024 12:57 pm

I can’t see the reason for using zettajoules.

Rud Istvan
Reply to  Bob
January 15, 2024 1:14 pm

Easy. Sounds sciency but isn’t.

Reply to  Rud Istvan
January 15, 2024 8:31 pm

Sounds scary – like so many “Hiroshimas” of energy – when in fact it works out to tiny fractions of a degree.

If the Argo measures temperature in degrees, which everyone understands, then why did they go to the trouble of calculating the total energy of that tiny fluctuation?

Yes, it’s a big number, but then so is the number of kilos of water that that energy is spread over. And since that temperature difference reading is so tiny, was there really a difference over time, or was it just variation due to the process, the depths, the change in location?

If we are taking the temperature of a beaker of water, it’s one thing to use the device’s accuracy in your calculations, but when measuring cubic kilometers of water it’s totally another.

To be fair, the Argos have the temperature of the deepest layers as a sort of check – not as good as a beaker full of melting ice cubes to give you a solid 0°C reference point – since the deepest water is expected to be a standard 4°C. I have no idea how precise or stable that 4°C is supposed to be, but at least it’s something.

It will definitely be interesting to see if the warming profile continues as the bed wetters expect, or if it follows the ENSO cycle, or it surprises us.

Dave Andrews
Reply to  Rud Istvan
January 16, 2024 8:04 am

Plus they are always represented by the colour red so they look more scary to the general public.
