Guest Post by Willis Eschenbach
[UPDATE: An alert commenter, Izaak Walton, notes below that I’d used 10e21 instead of 1e21. This means all my results were too large by a factor of 10 … I’ve updated all the numbers to fix my error. Mea maxima culpa. This is why I love writing for the web … my errors don’t last long.]
Our marvelous host, Anthony Watts, alerted me about a new paper yclept “New Record Ocean Temperatures and Related Climate Indicators in 2023“.
Of course, since I’m “the very model of a modern major general”, my first thought was “Is there gender balance among the authors as required by DEI?”. I mean, according to the seminal paper “Ocean sciences must incorporate DEI, scholars argue“, that’s a new requirement. Not balance by sex. Balance by gender.
However, it turns out that there are thirty-five authors of the new paper. I downloaded the citation. It says “Cheng, L., Abraham, J., Trenberth, K., Boyer, T., Mann, M., Zhu, J., Wang, F., Yu, F., Locarnini, R., Fasullo, J., Zheng, F., Li, Y., Zhang, B., Wan, L., Chen, X., Wang, D., Feng, L., Song, X., Liu, Y., Reseghetti, F., Simoncelli, S., Gouretski, V., Chen, G., Mishonov, A., Reagan, J., Von Schuckmann, K., Pan, Y., Tan, Z., Zhu, Y., Wei, W., Li, G., Ren, Q., Cao, L., Lu, Y.”
Ooogh … gonna be hard to determine their genders. Can’t just check their names, that would be transphobic. Have to contact each one and ask them about their sexual proclivities … that’ll go over well …
In addition, there’s a numerical problem with genders.
Here, from the San Francisco “GIFT” program, which will give $1,200/month in taxpayer money preferentially to illegal alien ex-con transgender prostitutes with AIDS who can’t speak English, is their checkbox list of genders. (And no, I’m not kidding—that is their preferred recipient, the person that goes to the head of the line for “free” taxpayer money. But I digress…)
So buckle up and keep your hands in the vehicle at all times, let’s take a ride through their official list of genders.
GENDER IDENTITY (Check all that apply)
Cis-gender woman
Woman
Transgender Woman
Woman of Trans experience
Woman with a history of gender transition
Trans feminine
Feminine-of-center
MTF (male-to-female)
Demigirl
T-girl
Transgirl
Sistergirl
Cis-gender man
Man
Transgender man
Man of Trans experience
Man with a history of gender transition
Trans masculine
Masculine-of-center
FTM (female-to-male)
Demiboy
T-boy
Transguy
Brotherboy
Trans
Transgender
Transsexual
Non-binary
Genderqueer
Agender
Xenogender
Fem
Femme
Butch
Boi
Stud
Aggressive (AG)
Androgyne
Tomboy
Gender outlaw
Gender non-conforming
Gender variant
Gender fluid
Genderfuck
Bi-gender
Multi-gender
Pangender
Gender creative
Gender expansive
Third gender
Neutrois
Omnigender
Polygender
Graygender
Intergender
Maverique
Novigender
Two-spirit
Hijra
Kathoey
Muxe
Khanith/Xanith
X-gender
MTX
FTX
Bakla
Mahu
Fa’afafine
Waria
Palao’ana
Ashtime
Mashoga
Mangaiko
Chibados
Tida wena
Bixa’ah
Alyha
Hwame
Lhamana
Nadleehi
Dilbaa
Winkte
Ninauposkitzipxpe
Machi-embra
Quariwarmi
Chuckchi
Whakawahine
Fakaleiti
Calabai
Calalai
Bissu
Acault
Travesti
Questioning
I don’t use labels
Declined
Not Listed: _________________
Heck, there are only about a hundred “genders” there. That means there shouldn’t be any problem determining which author in this paper is a “Calabai” and which is a “Calalai” …
In addition, the number of authors brings up what I modestly call “Willis’s First Rule Of Authorship”, which states:
Paper Quality ≈ 1 / (Number Of Authors)²
But enough digression … moving on to the paper, there’s a fascinating claim in the abstract, viz:
In 2023, the sea surface temperature (SST) and upper 2000 m ocean heat content (OHC) reached record highs. The 0–2000 m OHC in 2023 exceeded that of 2022 by 15 ± 10 ZJ (1 Zetta Joules = 10²¹ Joules) (updated IAP/CAS data); 9 ± 5 ZJ (NCEI/NOAA data).
So … what is the relationship between ZJ and the temperature of the top 2000 meters? Let me use the NCEI/NOAA data. Here are the calculations, skip them if you wish, the answer’s at the end. Items marked as [1] are the computer results of the calculation. Everything after a # is a comment.
> (seavolume=volbydepth(2000)) #cubic kilometers
[1] 647,988,372
> (seamass = seavolume * 1e9 * 1e3 * 1.025) # kg
[1] 6.641881e+20
> (specificheat=3850) # joules/kg/°C
[1] 3850
> (zjoulesperdeg=specificheat * seamass / 1e21) #zettajoules/°C, to raise seamass by 1°C
[1] 2557.124
> (zettajoules2023 = 9) # from the paper
[1] 9
> (tempchange2023 =zettajoules2023 / zjoulesperdeg) # °C
[1] 0.0035
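For anyone who wants to rerun this without my unshown volbydepth() function, here's a self-contained sketch. The 647,988,372 km³ volume of the top 2,000 meters is simply copied from the run above rather than recomputed, so treat it as an input assumption; the rest follows the same steps and also converts the paper's ±5 ZJ uncertainty.
> seavolume <- 647988372                    # km^3, top 2000 m, copied from the calculation above
> seamass <- seavolume * 1e9 * 1e3 * 1.025  # kg: m^3 per km^3, liters per m^3, kg per liter of seawater
> zjoulesperdeg <- 3850 * seamass / 1e21    # zettajoules needed to warm the layer by 1°C
> zjoulesperdeg
[1] 2557.124
> round(c(warming = 9, uncertainty = 5) / zjoulesperdeg, 5)  # NCEI/NOAA numbers converted to °C
warming uncertainty 
0.00352     0.00196 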
So all the angst is about a temperature change of three and a half thousandths of one degree. EVERYONE PANIC!!
But that wasn’t the interesting part. The interesting part is their uncertainty, which per NCEI/NOAA is ± 5 ZJ. Let me note to start that the results of the two groups, IAP/CAS and NCEI/NOAA, differ by 6 ZJ …
Using the above calculations, 5 ZJ is ± 0.0019°C … they are seriously claiming that we can measure the temperature of the top 2,000 meters of the ocean to within ±0.0019°C.
And how are they doing that?
They say “The main subsurface observing system since 2005 is the profiling floats from the Argo program”. These are amazing floats that sleep a thousand meters down deep in the ocean, then periodically wake up, sink further down to two thousand meters, and then rise slowly to the surface, measuring temperature and salinity along the way. When they reach the surface, they phone home like ET, report the measurements, and sink down a thousand meters to go to sleep again. They’re a fascinating piece of technology. Here’s a map of the float locations from a few years back.

There are about 4,000 floats, each of which measures the temperature as it rises from 2000 meters up to the surface every 10 days. Note that they tend to concentrate in some areas, like the intertropical convergence zone by the Equator and the US East Coast, while other areas are undersampled.
So to start with, ignoring the uneven sampling, each float is theoretically representative of an area of about 92,000 square kilometers and down to two kilometers depth. That’s a bit more area than Austria, Portugal, or the state of South Carolina.
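(As a rough cross-check, taking a round 361 million square kilometers of ocean surface; the ~92,000 km² figure presumably comes from a slightly different area estimate:)
> 361e6 / 4000    # square kilometers of ocean surface per Argo float
[1] 90250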
Now consider their claim for a moment. We put one single thermometer in Austria, take one measurement every 10 days for a year … and claim we’ve measured Austria’s annual average temperature with an uncertainty of ±0.0019°C???
Yeah … that’s totally legit …
But wait, as they say on TV, there’s more. That’s just measuring the surface temperature, but the Argo floats are measuring a 3D volume, not the surface. So their claimed uncertainty is even less likely.
Here’s another way to look at it. We’re talking about the uncertainty of the average of a number of measurements. As we get more measurements, our uncertainty decreases … but it doesn’t decrease directly proportionally to the number of measurements.
Instead, it decreases proportionally to the square root of the number of measurements. This means if we want to decrease the uncertainty by one decimal point, that is to say we want to have one-tenth of the uncertainty, we need one hundred times as many measurements.
And of course, this works in reverse as well. If we have one-hundredth of the number of measurements, we lose one decimal point in the uncertainty.
So let’s apply that to the ARGO floats.
Claimed uncertainty with 4,000 floats = ± 0.0019°C
Therefore, uncertainty with 40 floats = ± 0.019°C
And uncertainty with 4 floats = ±0.019 times the square root of 10 ≈ ±0.06°C …
Their claimed uncertainty says that four ARGO floats could measure the temperature of the entire global ocean to an uncertainty of less than one tenth of one degree … yeah, right.
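Here's that arithmetic as a one-liner, assuming, as the 1/√N rule itself does, that every error is independent and unbiased … which is precisely the assumption in question:
> u_4000 <- 0.0019                                 # °C, claimed uncertainty with 4,000 floats
> round(u_4000 * sqrt(4000 / c(4000, 40, 4)), 3)   # 1/sqrt(N) scaling for 4,000, 40, and 4 floats
[1] 0.002 0.019 0.060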
Sadly, I fear that’s as far as I got in their paper … I was laughing too hard to continue. I’m sure it’s all sciency and everything, but they lost me by hyperventilating over an ocean warming of three and a half thousandths of a degree and put me over the edge by claiming an impossibly small uncertainty.
Here, a sunny morning in the redwood forest after a day of strong rain, with football playoffs (not the round ball kind) starting in a little while—what’s not to like?
My very best to all,
w.
[ADDENDUM] To close the circle, let me do a sensitivity analysis. The paper mentions that there are some other data sources for the analysis like XBTs (expendable bathythermographs) and other ship-deployed instruments.
So let’s assume that there were a further 4,000 scientific research vessels, each of which made a voyage taking thirty-six XBT measurements. That would double the total number of measurements taken during the year. Never mind that there aren’t 4,000 scientific research vessels, this is a sensitivity analysis.
That would change the calculations as follows:
Claimed uncertainty with 8,000 floats + measurements = ± 0.0019°C
Therefore, uncertainty with 80 floats + measurements = ± 0.019°C
And uncertainty with 8 floats + measurements = ±0.019 times the square root of 10 ≈ ±0.06°C …
We come to the same problem. There’s no way that 8 thermometers taking temperatures every 10 days can give us the average temperature of the top two kilometers of the entire global ocean with an uncertainty of less than 0.1°C.
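The same one-liner for the sensitivity case, assuming the extra XBT profiles act like additional independent floats:
> round(0.0019 * sqrt(8000 / c(8000, 80, 8)), 3)   # °C, for 8,000, 80, and 8 instruments
[1] 0.002 0.019 0.060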
MY USUAL: When you comment please quote the exact words you are discussing. It avoids endless misunderstandings.
Willis, from your paper quality formula:
Paper Quality ≈ 1 / (Number Of Authors)²
Did you intend to say that the denominator is (Number of Authors) squared or was that 2 a missing footnote/citation? Trying to determine the paper quality within an order of magnitude. So is this PQ=~.0285714 or PQ=~.0008163?
recommended update
1/(2*(number authors) * (number of author genders))
Squared.
w.
A publication documenting the Higgs Boson mass had 5154 authors. Its PQ would be 0.00019 or 0.000000038 (depending on if it was squared or not). That means this publication would be 150x or 21500x (again…depending on if it was squared or not) higher quality than one documenting the mass of a fundamental particle.
Obviously this is an absurd (and probably mostly tongue-in-cheek) method for determining quality since authorship is based on who contributed and since larger efforts have a larger number of contributors.
It is absolutely tongue-in-cheek.
However, your publication included as authors everyone involved in very complex physical experiments conducted at dozens of institutions.
This one involved the analysis of a few datasets …
w.
It was just one institution…CERN.
Try looking up ATLAS Collaboration and CMS Collaboration.
Exactly. That’s how we know the experiment was done at just one institution…CERN.
Willis,
your calculations differ by about an order of magnitude from those in the original paper. From their abstract:
“Associated with the onset of a strong El Niño, the global SST reached its record high in 2023 with an annual mean of ~0.23°C higher than 2022 and an astounding > 0.3°C above 2022 values for the second half of 2023”.
So they have converted a 15 ZJ energy difference into a 0.23 degree temperature change while you give 0.035. Do you know who is correct or why there is such a big difference?
Izaak, one is the average sea surface temperature (SST), and the other is the average temperature of the layer from the surface down to 2,000 meters depth.
Regards,
w.
Hi Willis,
you are right. Thanks.
Hi Willis,
Can you check your figures? You use 1e9 to represent 10^9 but 10e21 to represent 10^21.
If you replace 10e21 in your calculation by 1e21 the temperature change goes down by a factor of 10 which I am sure you would be pleased by.
Oh, very well spotted. You’re right, I was 100% wrong.
I was wondering why the numbers seemed so large, as I’d done the exercise before and remembered smaller numbers. I’ll have to edit the head post.
This is why I love writing for the web … my mistakes don’t last long …
Thanks,
w.
Good catch Mr. Walton!
Nice catch. I was so blinded by the fact that the significant figures matched my results that I didn’t even notice they were missing a zero.
SST = Sea Surface Temperature. Willis’ figure is for the first 2000m. His figure is very close to that which I calculated independently.
So meaninglessly tiny !
It is worth noting that the Argo dataset itself is subject to ongoing QA, with subsequent adjustments and edits, and datasets which have passed QA filters and adjustments.
I always wonder whether that results in an inadvertent bias towards what people think should be happening….
https://argo.ucsd.edu/data/data-faq/#RorD
“Most of these errors are the result of sensor drifts. D files have passed expert QC inspection and have had sensor drifts removed.”
Really? They have ship-board calibration labs out checking the 4000 buoys? How do they find the darn things?
What they are saying is that what they can identify AT THE CALIBRATION LAB is measured, before dumping them into the sea? Afterwards? GUESSWORK!
Why don’t they just identify adjustments to the field data as TYPE B measurement uncertainty estimates and tell us how they arrived at that value?
Let’s remember that values reported for the Earth Energy Imbalance (EEI) such as in Loeb, et al 2021 ( https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2021GL093047 ) rely on the Argo float data and also the CERES EBAF TOA data which itself is “constrained to the ocean heat storage.”
To my mind therefore, any claim to have determined the EEI to the precision stated (two decimal points W/m^2) is unconvincing for the same reasons Willis gives in this post. If everything depends on ocean heat content trends, and we can’t know the value close enough for the calculation, then EEI cannot be determined accurately in any case.
“CERES_EBAF_Edition4.1 is the Clouds and the Earth’s Radiant Energy System (CERES) Energy Balanced and Filled (EBAF) Top-of-Atmosphere (TOA) and surface monthly means data in netCDF format Edition 4.1 data product. Data was collected using the CERES Scanner instruments on both the Terra and Aqua platforms. Data collection for this product is ongoing.
CERES_EBAF_Edition4.1 data are monthly and climatological averages of TOA clear-sky (spatially complete) fluxes and all-sky fluxes, where the TOA net flux is constrained to the ocean heat storage.” (Emphasis mine.)
Source https://asdc.larc.nasa.gov/project/CERES/CERES_EBAF_Edition4.1
I would welcome a correction if I am somehow misunderstanding how this works.
CERES estimates an uncertainty of ±0.48 W/m2 for the EEI in 2019 according to the publication you linked to.
It’s a nonsense number considering the uncertainties of the constituent irradiance measurements are 5 W/m2 or greater.
To your point, I note that the 2km resolution near-real-time GOES East images for Band 16 show how unreasonable it is to suppose that the CERES sensors can ever result in a 1 deg x 1 deg gridded product with an uncertainty better than your number.
I take it you are referring to the statement “The linear trend of CERES implies a net EEI of 0.42 ± 0.48 W m−2 in mid-2005 and 1.12 ± 0.48 W m−2 in mid-2019.”
Yes, the EEI in W/m^2 (and also the +/- uncertainty) are stated to a precision of two decimal places. That is what I am pointing out as unconvincing.
It is unconvincing that the uncertainty is ±0.48 W/m2?
Would it be convincing if it were stated as ±0.5 W/m2?
Still no.
If not 1 or 2 decimal places then how many would be convincing?
It’s an order of magnitude thing.
KM is correct. Watch from space with an open mind to be shown that it is at least an order of magnitude (base 10) beyond our capabilities to diagnose a tiny imbalance. Perhaps some will not be persuaded even by these observations from a single instrument from a fixed location in space.
https://youtu.be/Yarzo13_TSE
Source of these visualizations: https://cdn.star.nesdis.noaa.gov/GOES16/ABI/FD/16/
So it is not the number of decimal places you are unconvinced by but the magnitude of the value? No?
Are you thinking the EEI could be as high as 6 W/m2? No?
It is not the magnitude of the EEI itself, but the order(s)-of-magnitude variation of emitter outputs and solar inputs that make it unrealistic to think of an EEI computed from sensors as useful for diagnosis.
Part of the problem here is the use of “averages” – typical for climate science.
Since the radiation varies as T^4, the actual radiation total (i.e. the integral of the radiation curve) will be much higher than that given by an average measurement.
Consider the exponential decay of temperature at night. The radiation at sunset, for example, will be T0^4. The radiation from the average temperature of the curve will be nowhere near this high.
The use of averages should be anathema to science – averages lose far too much necessary information.
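A quick toy illustration of that point (the temperatures are invented, just a sinusoidal day and an exponential night decay, not real data): because T^4 is convex, the average of the instantaneous radiation always comes out larger than the radiation computed from the average temperature.
sigma   <- 5.67e-8                                       # Stefan-Boltzmann constant, W/m^2/K^4
t_day   <- 283 + 10 * sin(seq(0, pi, length.out = 720))  # invented daytime curve, 283 K up to 293 K and back
t_night <- 278 + 5 * exp(-seq(0, 5, length.out = 720))   # invented night-time decay toward 278 K
temps   <- c(t_day, t_night)

mean(sigma * temps^4)   # average of the actual emission over the "day"
sigma * mean(temps)^4   # emission from the average temperature: always the smaller of the two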
Good point about averages. In the GOES East images, the Atacama Desert area along the western coast of South America is a good example of the rapid rise and decay of longwave radiation.
It’s not the number of decimal places and it’s not the magnitude of EEI itself? Correct?
Is it the number of decimal places and order of magnitude of the two components of EEI, ASR and OLR? Correct?
Please go back and read my original comment.
I’m responding to “To my mind therefore, any claim to have determined the EEI to the precision stated (two decimal points W/m^2) is unconvincing for the same reasons Willis gives in this post.” Is there something else in your original comment you want me to focus on?
You are free to disagree and give the reason.
I don’t have a question about anything else in your post.
The question I have right now is… is it the number of decimal places and the order of magnitude of the two components of EEI, ASR and OLR? Correct?
The reason I’m asking the question is that I can allay your concern regarding ASR and OLR (assuming that is your concern) by informing you that the publication cited in Willis’ post does not calculate EEI using the ASR – OLR method. In fact, it doesn’t use ASR or OLR at all.
Willis’ post was about ocean heat content, and that is why I mentioned it in my comment. If you disagree with my original comment or any of my subsequent replies, then you are free to specify and support your alternative viewpoint. I am not asking you for anything.
My intent was to provide useful information. I got the feeling you felt Loeb et al. 2021 was saying they knew the EEI to within 0.01 W/m2. They don’t. They only know it to within 0.48 W/m2.
Still nonsense.
“I got the feeling you felt Loeb et al. 2021 was saying they knew the EEI to within 0.01 W/m2. They don’t.” Glad that’s cleared up. No, I did not take the value reported in Loeb et all 2021 that way.
“They only know it to within 0.48 W/m2.” I am unconvinced the bounds of the interval can be known to that precision.
To find an imbalance you need accurate measurements. The image shows CRN station error ranges for insolation and soil temps. These are supposed to be the best available measuring systems.
Think carefully about how one-hundredths decimal places are arrived at!
And Michael Mann is one of the co-authors of the report. He’s everywhere!
Anyway, thank you, Willis! Some comments from a layman:
NASA’s Vital Signs Sea Level chart does show a jump in sea level in 2023, which could be partly explained by some increase in ocean temperature among other possible factors. Reflects what the sea level gauge in Boston also showed – an atypical rise in 2023 following a drop in 2022 from 2019-2021 levels. Basically, the ocean has been rising around 3.5-4.0 mm/yr for 30 years and the recent 2023 global rise pretty much keeps it within that range.
Some quibbles. The report states on its first page:
“…the global SST reached its record high in 2023 with an annual mean of ~0.23degC and an astounding > 0.3degC above 2022 values for the second half of 2023…”
First quibble – Why are they calling a fast temperature increase in the second half of 2023 “astounding”? Why the extreme emotional wording in a scientific paper?
Second quibble -Their own Fig. 3 shows that there have been near-comparably dramatic one-year SST jumps in approximately 1957, 1977 and 1997. (In each instance, the SST leveled off or dropped the next year or soon after.) Why weren’t they called out as dramatic events in the paper?
Final quibble – Why focus on a short 6-month period in a very strong El Nino year that the report points out is following two years of an SST-suppressing La Nina, as shown in their Fig.1? Which could have explained the short-term jump as an understandable short term temperature rebound. Also, were there other dramatic 6-month changes (in either direction) in the historical record (as shown in Fig.3) that could have also earned the “astounding” rating? This would have helped to put recent developments in perspective.
Just askin’.
I’ll give you a hint. They are not “disinterested observers.” They are advocates for their preferred view of reality.
If Mann is a co-author, you can be absolutely sure it is one big non-scientific CON-job.
IMHO, seeing M. Mann as one of the authors leads me to wonder how much torturing the data had to endure before being shown to the public?
I thought, with a title including “tiny numbers” I might learn why I and most of the rest of Canada and the U. S. have had a week of tiny-number temperatures. Fortunately, I have less snow and less cold than many, so I empathize with my many comrades.
All those buoys going up and down and floating about also made me think of Bernie Taupin/Elton John’s “Tiny Dancers“.
I’ll get my coat. 🙂
Off subject, but has anybody noticed that the ass has fallen out of ENSO and there is a large cold water tongue from Chile making its way quickly across the Pacific?
The floats seem to be particularly sparse in the area where the University of Maine, using the Reanalyzer program to predict global temperatures around the 4th of July, created a stir with their claim of unprecedented warmth.
I noticed all the authors with Asian names are academics in the People’s Republic of China. Not that I am claiming there is a conspiracy by China to hype climate change to induce us to enact policies which would weaken us economically, but what would happen if these scientists staged public protests about China not doing enough to fight climate change? Has there EVER been such a protest in the PRC?
I don’t recall St Greta ever going there.
What is this in the first sentence, Willis?
“yclept”
typo? (like mine…)
Dictionary – Definitions from Oxford Languages
y·clept /iˈklept/
adjective, ARCHAIC: by the name of; called.
Thanks Willis. I’ve never seen it before and thought I’d ask.
The first letter ‘y’ pronounced long-‘e’ tells me it likely came from French.
No, it originated from the past participle of the Old English verb ‘clepen’, to call or name. In those olden days past participles were formed by adding the prefix ‘ge-‘ like in German or Dutch: geclept. In Middle English the prefix was changed to ‘y-‘ -> ‘yclept’, which has survived into modern English (purely for humorous or poetic effect), making it the only English verb still in use with this archaic participle prefix. https://en.wiktionary.org/wiki/yclept
Interesting, thanks.
Thanks, Johanus. I love writing for the web because I learn so much from the commenters.
Regards,
w.
As far as I know, Middle English was heavily influenced by French. So the idea that the “ee” sound was imported from French seems not too far-fetched. Consider modern French words like “éclair” or “étrange” or even “excuse”.
🙂
Yes, the Norman invasion (aka the Battle of Hastings) in 1066 had an enormous effect on the English language. In fact it marked the end of Old English, which had been spoken by everyone. The conqueror, King William, spoke French, which trickled down through the entire English society. Middle English became the language of the lower classes.
So, perhaps, the transition of ‘ge-‘ to ‘y-‘ was facilitated by francization.
https://english.stackexchange.com/questions/551683/palatization-of-y-from-ga
But the sound of é (accent aigu) in your examples is ‘aye’ not ‘ee’. That usually evolved from Latin words beginning with ‘ex’.
Does your browser not support a search engine?
I’m reasonably well read and have never ever seen anything like it before, so I didn’t go looking. Did you need to work at being a dick or did it come naturally?
Considering my gender, it comes naturally. The point is, you were asking Willis to explain something that was perfectly legitimate and you could have discovered that on your own in 7 seconds and avoided this ugly exchange.
Or you could just have not responded in the first place and avoided the exchange, but you are a dick, so there is that. I’m of the same gender, yet managed to not be a dick (at least not as a start point), go figure.
I didn’t start out with a crude insult! You seem to be spun up a little tight to take a gentle jab about your laziness and turn it into a crude personal attack. I’m sorry, but somehow a button got pushed when you demonstrated a lack of responsibility for educating yourself. If Willis had spelled the word wrong, or it was so arcane that it doesn’t show up in an online search, then it would have been appropriate to ask him about it. However, you went directly to him, not showing any initiative to explore it on your own. Might it be that your lashing out at me is a defense because you recognize you came off looking helpless?
Upon some reflection, I suspect my reaction comes from the days when I used to teach. All too often, I had students ask me questions whose answers were either in the assigned reading in the text, or in the instructions for the lab procedure. They apparently hadn’t read either, and were taking my time away from other students who had invested the time but were still having difficulty understanding it. My priority was always for those who had made the effort but needed additional help to get past some roadblock. However, there is also that I’m becoming a grumpy old man with less patience than when I was young.
Folks talking about the amazing precision of the temperature sensors should take a look at some of the datasets. Here’s a comment on one of them.
As you can see, there were no problems for 5 years, cycle after perfect cycle, then big problems after that.
In the ARGO Users Manual there is page after page after page regarding quality control and the elimination of bad data.
Of course, this brings up a huge problem—how do you identify the bad data? For one thing, they look at nearby floats, and if a float is very different from the others, it gets marked as bad.
The problem with this is that computers generally don’t do edges—if you have two points with two different temperatures, the default assumption is some kind of smooth transition between them.
But nature loves edges. If you have a point inside a cloud and a point ten miles away in clear air, there’s no slow transition. It’s like the poet said:
Glory be to God for dappled things –
For skies of couple-colour as a brinded cow;
For rose-moles all in stipple upon trout that swim;
Fresh-firecoal chestnut-falls; finches’ wings;
Landscape plotted and pieced – fold, fallow, and plough;
And áll trádes, their gear and tackle and trim.
However, I’ve spent a good chunk of my life on and under the ocean, and I can assure you that slow steady transitions are not always the case. For example, where I used to fish commercially off the coast of northern California where I live, it’s often foggy with a cold green-colored ocean along the coast.
But when you go offshore twenty or thirty miles or so, you’ll suddenly emerge from the fog … and when you look over the side, the water is blue and warm. And there is a very clear dividing line between the two.
I’ve seen the same far out at sea, a clear line of demarcation between one water body and the next. Commercial fishermen like myself look for these lines because the fish tend to congregate along them.
As another example, at night the top 100 meters or so of the ocean “overturns”. The top surface cools and becomes more dense, and at some point during the night, overturning starts.
However, the surface doesn’t just sink evenly. It quickly settles into Rayleigh-Benard convection, where a number of columns of sinking cooler water are interspersed among much larger areas of warmer ascending water.
If you scuba dive at night as much as I have, you may have experienced this. You’re swimming in warm water, then suddenly you’re in a cold-water column, and the temperature difference is both quick and large.
All of these quick transitions make the quality control of the ARGO data a far from simple conundrum.
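As a cartoon of the neighbor-comparison problem (numbers invented), here's a row of readings across one of those sharp fronts, and a crude "flag anything too different from its neighbor" check. The check dutifully flags the perfectly real edge as suspect data:
> profile <- c(rep(18, 5), rep(12, 5))   # °C, invented readings across a sharp front
> which(abs(diff(profile)) > 2)          # naive neighbor-difference QC flag
[1] 5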
Best to all,
w.
Every time I read one of your articles or comments my IQ goes up!
Thank you for being you!
Even freshwater fishermen know this if they spend much time with depth finders. Why would fish suspend in the middle of a lake, halfway to the bottom? Temperature is one variable.
Can I play “Devil’s advocate”?
Given the Argo system, with its 4000 units roughly randomly distributed in non-polar regions, if we take whatever number it comes up with now, say an average temperature, and compare it to a similar number calculated at a different time, can a valid comparison be made? Could we really, confidently say that it’s warmer or colder (or higher/lower total energy content) than another period?
The Argo system is basically taking random readings over most of the globe – isn’t that enough to filter out noise? How many Argos would it take to have some serious confidence in the data? There are millions of square kilometers of surface – as a ballpark estimate, I feel happy with 1 per kilometre – how do we know if that is enough or excessive?
Back in engineering class in the Pleistocene era, we were doing experiments with small things, say the classic thermodynamic experiment with a rod where one end is in a bucket of melting water and the other end heated by a flame. Because the rod was relatively thin, it didn’t matter where you took the temperature, left or right side, top or bottom, when taking measurements every few centimeters from one end to the other. But how to deal with a column of water 2 kilometers deep and spaced out hundreds of kilometers from the next one. It’s like we would need the Argos to move laterally on command, to come close together and move apart so that we can see how much the temperature can fluctuate side to side even at depths in the thermocline and in the very deep water.
Is the very deep water, at 2 km, very stable at 4°C, even compared pole to equator?
Thanks!
Editorial note : My “sense of humour” was surgically removed when I was very young.
For the heat content and 0-2000m temperature anomalies a good start is the NCEI / NOAA webpage.
URL 1 : https://www.ncei.noaa.gov/data/oceans/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/
The “./3month” sub-directory contains OHC data, for global (world, 0-2000m, quarterly) average anomalies you want the 4 “h22-w0-2000m*.dat” files.
The “./3month_mt” sub-directory contains temperature data, for global average anomalies you want the 4 “T-dC-w0-2000m*.dat” files.
The “./onemonth” subdirectory only contains OHC anomaly data, and only from January 2005. The 12 “month-h22-w0-2000m*.dat” files are therefore “limited” to the ARGO network time period.
_ _ _ _ _ _ _ _ _ _ _
The lead author of the paper cited in the ATL article, “Cheng, L”, has a very useful website (in English !) for OHC-related reanalysis datasets.
URL 2 : http://www.ocean.iap.ac.cn/pages/dataService/dataService.html
Clicking on the “Time series” tab (left-hand side, 4th option from the top) gives access to the 0-700m and 0-2000m files, both of which go from January 1940 to December 2023 (at the time of typing this).
Note that he has “recently” (?) added a “2000-6000m” option, but that only has “valid” (reanalysis …) numbers from January 1992.
Another set of “extremely interesting” (to me, at least) files can be found under the (penultimate) “Other Obs data” tab.
Scroll down to “3. Data in the paper of Cheng et al. 2019 Science” entry, which has the title “Global OHC0-2000m changes (Unit: Zeta Joules; Resolution: annual mean) from 1955 to 2100 from CMIP5 models and four latest observational datasets”.
There you will discover a set of “CMIP5 models” files, which provide OHC numbers for individual model “projections” going all the way to the year 2100 for RCPs 2.6, 4.5 and 8.5, from which “ensemble means” can be easily calculated for any desired “Reference Period”.
Re Rayleigh-Benard convection: does the heat from below mean the geophysical average of about 100 mW/m2? My BS meter awakens. I believe – and your graphs agree – that deeper ocean is colder.
Nah. That graphic shows R-B circulation with a heat source underneath.
In the night-time ocean, the same circulation exists, but in this case, it’s not because the bottom is warmed.
It’s because the top is cooled. Same outcome, top cooler than underneath, but a different cause.
w.
All three of the metrology experts I have books/papers on say that identifying systematic biases in measurements is next to impossible using statistical analysis. Since you don’t know the systematic bias in nearby units they can’t be used to calibrate another one – unless you increase the measurement uncertainty to account for the possible systematic bias in the nearby units, i.e. a Type B measurement uncertainty component.
As you point out, a sudden change in readings can be caused by several things, none of which can be considered a “quality control” issue.
As I’ve stated elsewhere, when I see someone stating they can “adjust” data to account for measurement uncertainty in a field measurement device I just automatically count them as either ignorant or charlatans.
Hoookay.
So let’s talk about a margin of error based on the barely 6°C of warming starting at 280 K ambient at Earth’s distance from the sun. … and its effects on pangendered red squirrel attack helicopters in Wales.
_______________________________________________________________
Maybe that does beat the IPCC’s AR4 Chapter 5 Opening Statement:
The oceans are warming. Over the period 1961 to 2003, global ocean temperature has risen by 0.10°C from the surface to a depth of 700 m.
Ya, the balls, or ovaries, to claim the temperature in the ocean upper layer is .10°C higher than in 1961 – not .11 or .09°C – and based on how many measurements going on in 1961? Wasn’t it just buckets and ropes at that time?
Are we really supposed to be astonished by .1 degree over 4 decades? And that is in the top layers of the ocean, which are most affected by weather and can range anywhere from near freezing to ~30°C – so less than 1% of the range, and certainly less than anyone or anything would notice or be affected by.
Errr… did you mean to write that?
1e21 is 1 x itself 21 times. So still = 1
Whereas 10e21 is 10 x itself 21 times. So a factor of 1,000,000,000,000,000,000,000 (I think I’ve counted the 0s right)
Doh!
Bloke, the symbol “e” in this context doesn’t mean “to the power of”. It means “followed by” with the number of zeros.
Here are two examples from my computer. What follows the > is the instruction to the computer. What follows the [1] is the answer.
> 1e3
[1] 1000
> 1e5
[1] 100000
Best to you,
w.
Doh – you are correct.
My “brain fart” 🙂
e with a number is 10 to the power of the number; at least it has always meant that in engineering. So 1 would be written as 1e0 – pointless, yet correct. 0.001 would be 1e-3, 0.0027 would be 2.7e-3, etc. It’s mostly to do with using significant figures correctly.
The satire is tedious, like a joke with a long set up and an unfunny punchline.
Stick to science please.
Where is an editor when one is needed?
I’m also offended that your list did not include albino dwarfs, often called “People of No Color”
Thanks, Richard. At this point, I’ve written on the order of 1,000 full posts for the web on every subject under the sun. I’m far and away the most popular and most-read guest author on WUWT.
One thing I learned early on is that no matter what I write, some people will love it, some will hate it, and some will be “meh”.
So I’m with Ricky Nelson on this one: you can’t please everyone, so you’ve got to please yourself.
Me, I don’t write until I can’t stand not writing … and when I do write, I write for the pure joy of telling a fun, interesting, eccentric, humorous, serious scientific tale.
In other words, I write to please myself.
Don’t like it?
Don’t care.
My best to you and yours,
w.
I’m sure the Chuckchi people of Siberia will be delighted that their ethnicity has now transitioned into a sexual orientation LOL
“An alert commenter, Izaak Walton, notes below….”
See how nice climate skeptics are- they tolerate and even thank critics. Climate alarmists don’t do the same- they’ll just accuse any critics of being science deniers. Or just ignore them.
Willis wrote, “An alert commenter, Izaak Walton, notes below that I’d used 10e21 instead of 1e21. This means all my results were too large by a factor of 10 “
1e21 is 1 times 1, 21 times which is … 1.
roving, someone upthread made the same comment. Here’s my answer.
The symbol “e” in this context doesn’t mean “to the power of”. It means “followed by” with the number of zeros.
Here are two examples from my computer. What follows the > is the instruction to the computer. What follows the [1] is the answer.
> 1e3
[1] 1000
> 1e5
[1] 100000
Best to you,
w.
Willis,
I particularly like that your “Willis’s First Rule of Authorship” squares the number of authors in the denominator.
This is what NOAA reckons in centigrade rather than in joules, since the 1950s:
A “zoomed in” view of the same data.
Note that “IAP” is the data from Lijing Cheng’s website (see my previous post), aligned to match the “most recent, therefore with more coverage and probably more accurate / precise” 1995-2020 NOAA data.
For reference, a copy of “Fig. 2” from the Cheng et al (2024) paper of the ATL article.
This shows the ocean heat content (OHC, in ZJ), allowing for the ratio between (lots of) Joules and (fractions of) degrees Celsius to be estimated.
Note also that the “cooling the past” result of their reanalysis is present in this figure, it’s just slightly less obvious than in my version …
In any case, a (roughly) 0.19°C temperature rise in 69 years — i.e. 2.75 hundredths of a degree per decade — hardly counts as “rapid” warming … in my books, at least.
The warming from 1995 looks AMO related, probably the associated decline in low cloud cover.
Willis, thanks for the post. After reading the gender list I’m not even sure what I am.
Anyway, I also find their uncertainty claim ludicrous. My guess is they used the old trick of dividing the MU of the individual data points used in the average by the square root of N. In this case I assume that one data point would be the average of the data collected in one float’s excursion from 2000 m to the surface. With 4000 floats collecting 36.5 data points per year, that’s N = 146,000. To get an MU of 0.0019 C, the MU of each reading would be 0.72 (0.0019 x SQRT(146,000)). Maybe someone knows what the actual MU of an Argo float traverse is. I did find that the Argo temperature sensor accuracy is claimed to be 0.002 K. That’s for a single measurement of a stable measurand. So the claim is that they have measured the average temperature of the upper 2000 m of the world’s oceans to an uncertainty better than that of the instrument spec. Absurd!
This site has had dozens of posts explaining why the “Law of Large Numbers” and the divide by the square root of N is not applicable to measurements of this type.
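Here's the back-of-the-envelope version of that calculation in R, assuming the ±0.0019°C really did come from the simple σ/√N recipe being criticized here:
> n_profiles <- 4000 * 36.5                # one profile per float roughly every 10 days
> u_claimed  <- 0.0019                     # °C, claimed uncertainty of the annual mean
> round(u_claimed * sqrt(n_profiles), 3)   # implied uncertainty of a single profile, °C
[1] 0.726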
0.0019 x sqrt(146000) is probably only the standard deviation of the average. If the instrumental uncertainty is 0.5C, then as a minimum the combined uncertainty should be the root-sum-sqr of the two or 0.9C. The 1 or 2 mK number is only the sensor calibration and doesn’t include other systematic effects.
The measurement uncertainty of the entire float is between +/- 0.3C and +/- 0.5C. Not much different than any other field temperature measurement device that is land based. Since none of those measurements are of the same thing using the same device under the same environmental conditions (i.e. repeatability requirements) the total measurement uncertainty grows as root-sum-square as km pointed out. The total measurement uncertainty is HUGE. As it should be!
Note carefully that you NEVER see a climate scientist talking about MEASUREMENT uncertainty, only how precisely they can calculate the average value. Precision is not accuracy.
It’s all based on the common climate science meme of “all measurement uncertainty is random, Gaussian, and cancels”.
u(y) = u(random) + u(systematic). Even if *some* of the random uncertainty cancels, it doesn’t *all* cancel. And the systematic uncertainty never cancels, it just adds.
Temperature distributions are *not* Gaussian. The daily temperature profile is typically a sinusoidal one during the day and is typically an exponential decay at night. Put them together and you get a multi-modal distribution – meaning the “average” is physically meaningless (it’s actually a median, not an average the way climate science does it). And then climate science uses this physically meaningless “average” to build an edifice of averages, totally ignoring the fact that even if the distributions were Gaussian you need the VARIANCE to fully describe the resulting distribution, not just the “average” value.
When you are measuring multiple things a single time using a different device under different environmental conditions, you simply cannot ASSUME that measurement uncertainty is random across all the data elements and thereby cancels. You must prove such an assumption is warranted – but climate science never does.
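A toy simulation of that point, with invented numbers: 10,000 readings of a true 15°C, each with ±0.5°C random noise plus a fixed +0.3°C systematic bias. Averaging beats down the random part, but the bias survives untouched no matter how large N gets:
set.seed(1)
true_temp <- 15
readings  <- true_temp + rnorm(10000, sd = 0.5) + 0.3   # random noise plus a constant bias

mean(readings) - true_temp   # stays near +0.3: the systematic part does not average away
sd(readings) / sqrt(10000)   # the divide-by-sqrt(N) term only shrinks the random part (~0.005)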
Tim, you say “The measurement uncertainty of the entire float is between +/- 0.3C and +/- 0.5C.”
Do you have a link to a source for that? I’d be very interested.
Thanks,
w.
I’m sorry Willis. It’s buried on my hard drive (or on one of several). If I find it I’ll post it. Don’t hold your breath.
The last time I looked the document was no longer available on the internet and I’m not well-versed enough to try and find it in historical archives.
This one gives an indication of the standard deviation of the temperature being around +/- 0.1C. https://www.mdpi.com/2077-1312/8/5/313
Even that amount of uncertainty is enough to prevent determining average temperature out to the hundredths digit.
In essence the document looked at the problems within the water flow channel, the water pumps, the length of time the water was stored, etc. Any kind of detritus in the water channel can affect the bias of the temperature measured by the sensor. As is typical today, climate science substitutes the SEM for the float measurement uncertainty instead of the actual measurement uncertainty. In other words, the more samples the floats make, the more precise the average value becomes. But that tells you nothing about the actual measurement uncertainty.
Thanks, Tim. I’ll take a look on the web and see what I can find.
w.
OK, here’s JoNova on the subject.
https://joannenova.com.au/2015/06/study-shows-argo-ocean-robots-uncertainty-was-up-to-100-times-larger-than-advertised/
All the best, and thanks for the push.
w.
Willis,
Thank you for finding these references.
The published study pretty much confirms your suspicions, along with others’, that the ARGO system has problems measuring actual ocean temperatures due to the small area actually being covered.
It also confirms the inanity of many on WUWT who are neither learned nor experienced in the science of making measurements. One cannot simply look at one component in a measuring device to determine its resolution or accuracy. Additionally, the term microclimate exists for a reason and must be considered in estimating any measurement uncertainty between weather stations.
Thanks Willis. This sounds like the study I saw. I’ll print some of this out and file it. Maybe I’ll be able to find it later!
Tim, further research also found this most fascinating study comparing 3 different global analyses of the Argo data, a couple using Argo plus further hydrograpic data, plus some reanalyses … good stuff.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021EF002532
w.
Thank you . I’ll try to find time to read it.
I have to tell you, however, whenever I see the word “reanalysis” it totally turns me off. Especially in climate science, there are simply too many unstated, implicit assumptions made about how to combine observational data from different sources, assumptions that just aren’t reasonable. Different variances in the data are typically ignored, so differences are not weighted. Same for measurement uncertainty: all stated values are assumed to be 100% accurate when forming trend lines, even if the measurement uncertainty is greater than the differences that are trying to be identified. If model outputs are used as part of the reanalysis, then any metrology protocols are typically ignored on the assumption that model outputs have no measurement uncertainty.
If they have truly identified uncertainty greater than the typical accepted accuracy of the sensor itself, then I’ll be shocked. If their uncertainty is based on SEM calculations then I’ll know they aren’t really identifying measurement uncertainty.
I’ll try to let you know what I find.
I may be misinterpreting this, but greenhouse gas emissions are unlikely to have caused the ocean heat content to increase. IR from GHGs only affects the topmost surface molecules, so it is unlikely that the OHC increase, especially at depth, is caused by GHGs.
I’ll admit I am not an expert on ocean heating so I may be mistaken.
Willis, one thing to look at is what NOAA says for CRN uncertainty. The sensors are pretty much the same, yet the actual uncertainty rises to ±0.3° C for the system.
I’m sure I don’t need to point out to you that measuring devices are a conglomeration of components, but other folks may need some understanding. That system, when made, must follow the adage that uncertainties add, ALWAYS. Each component in the system adds to the total uncertainty. Consequently, the total uncertainty is never, ever as low as the best component.
People who make calibrated measurements always have a feeling in their bones that when measurement uncertainty is attributed to one component when making field measurements, something is not right.
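As a toy illustration (the component values are invented, not NOAA's actual budget): combine a 0.002°C sensor spec with the rest of a measurement chain in quadrature and the sensor spec all but disappears.
components <- c(sensor = 0.002, electronics = 0.05, siting = 0.2, drift = 0.2)  # invented °C values
sqrt(sum(components^2))   # root-sum-square combination, about 0.29 °C; nothing like the sensor spec alone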
This is typical of the alarmists’ desire to scare people with large numbers: imagine 15 thousand million million million Joules of extra heat in a year! OMG!
But expressed as an average temperature rise of 0.0035 C per year, no one would be impressed, and people would rightly question (as Willis did) whether we can measure temperatures to within that precision in salt water at a 2,000-meter depth at a pressure of 19.6 MPa (about 2850 psi). Have the researchers considered the possibility of instrument drift? How are the temperature measurement devices recalibrated when the buoys surface? Methinks the signal/noise ratio is fairly small…
There is a whole lot of water in the oceans, and it has a huge heat capacity, so those 15 zettajoules amount to not much on a global scale. Oh, by the way, Joules aren’t very big. You can get about 120 million Joules by burning a gallon of gasoline.
Expressed another way, if the 15 ZJ per year of heat absorption is correct, dividing by 8760 hours/yr and 3600 seconds/hour results in an energy absorption rate of 476 terawatts (476,000 GW). Dividing this by the surface area of the oceans (361 million km^2 = 3.61*10^14 m2) results in an average heat absorption intensity of about 1.32 W/m2, or about 0.1% of the intensity of the sun at the zenith.
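The same arithmetic in R, for anyone who wants to poke at it:
> watts <- 15 * 1e21 / (8760 * 3600)    # 15 ZJ per year expressed as joules per second
> round(watts / 1e12)                   # terawatts
[1] 476
> ocean_area <- 361e6 * 1e6             # ocean surface area, m^2
> round(watts / ocean_area, 2)          # W per square meter of ocean
[1] 1.32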
The oceans are a huge heat sink, easily able to damp out anything happening in the atmosphere due to infrared absorption by CO2.
It is just amazing how the warmists cry foul (or should that really be “fowl” in this case) every time a statistician or econometrics analyst tells them they are doing it wrong. “They aren’t climate scientists!” No, it takes a climate “scientist” to think they can borrow precision. If they don’t understand the tools, why do they insist they are doing it right? When this whole thing started, it was McIntyre and McKitrick who stood up and said “cheater.”