[note: this was published behind the paywall last week to premium users, but it’s time for complete public release]
An Epistemological Problem at the Heart of Ocean Heat Content
The central empirical claim of modern climate science is that the Earth system is gaining energy, and that this gain is sufficiently well measured to justify strong conclusions about long-term warming.
This claim does not fail because of greenhouse physics, radiative transfer, or conservation laws. It fails—or at least becomes far less certain—because of a category error about measurement.
That error becomes obvious once one confronts the scale, dominance, and uncertainty of the ocean’s energy content.
The Ocean Dominates the Climate Energy System
More than 90% of the energy attributed to recent climate change is claimed to reside in the oceans. The atmosphere, land surface, and cryosphere together account for only a small residual fraction.
This is not controversial. It is foundational.
As a result, any claim about whether the Earth’s climate system is warming, cooling, or remaining stable is, in practice, a claim about ocean heat content.
If we do not know the ocean’s energy state with sufficient epistemic confidence, then we do not know the system’s energy state—no matter how well radiative forcing is understood.
Step 1: The Absolute Energy Scale of the Ocean
Let us begin with a physically honest question:
How much energy does the global ocean contain?
To answer this, we compute the sensible enthalpy of the ocean referenced to absolute zero, including the energy required to warm ice and melt it. This is not how oceanographers usually frame the problem—but it is how thermal energy is actually defined.
Ocean mass
The mass of Earth’s oceans is approximately:
roughly 1.4 × 10²¹ kg
Energy components
To bring the ocean from 0 K to its present mean temperature (~3.5 °C), three energy terms are required:
1. Warming ice from 0 K to 0 °C
Because the heat capacity of ice drops sharply toward zero at low temperature, this term cannot be calculated with a constant heat capacity. A physically reasonable range yields:
roughly (3–4) × 10²⁶ J
This term alone carries substantial uncertainty.
2. Melting ice at 0 °C (latent heat of fusion)
With a latent heat of fusion of about 334 kJ/kg, this requires
roughly 4.7 × 10²⁶ J
This term is comparatively well constrained.
3. Warming liquid seawater from 0 °C to ~3.5 °C
roughly 2 × 10²⁵ J (taking cp ≈ 4 kJ/kg·K over ~3.5 K)
This is small compared to the first two terms.
Total ocean sensible enthalpy
Summing these components:
roughly (0.8–1.2) × 10²⁷ J
A reasonable central estimate is:
E_total ≈ 10²⁷ J
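For readers who want to check the arithmetic, here is a minimal sketch of the estimate in Python. The constants are round-number assumptions (ocean mass, mean heat capacities, latent heat), not measured values, so treat the output as order-of-magnitude only.

```python
# Rough sketch of the ocean sensible-enthalpy estimate above.
# All constants are round-number assumptions, not measured values.

M_OCEAN = 1.4e21        # kg, approximate mass of the global ocean
C_ICE_MEAN = 1.0e3      # J/(kg K), crude mean heat capacity of ice over 0-273 K
C_WATER = 4.0e3         # J/(kg K), approximate heat capacity of liquid seawater
L_FUSION = 3.34e5       # J/kg, latent heat of fusion of ice
T_MELT = 273.15         # K, 0 C expressed on the absolute scale
DT_LIQUID = 3.5         # K, warming from 0 C to the ~3.5 C mean ocean temperature

E_warm_ice = M_OCEAN * C_ICE_MEAN * T_MELT      # ~3.8e26 J (the most uncertain term)
E_melt = M_OCEAN * L_FUSION                     # ~4.7e26 J
E_warm_liquid = M_OCEAN * C_WATER * DT_LIQUID   # ~2.0e25 J
E_total = E_warm_ice + E_melt + E_warm_liquid   # ~9e26 J, i.e. of order 1e27 J

print(f"warm ice to 0 C:   {E_warm_ice:.1e} J")
print(f"melt ice:          {E_melt:.1e} J")
print(f"warm liquid 3.5 K: {E_warm_liquid:.1e} J")
print(f"total:             {E_total:.1e} J")
```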
Step 2: The Uncertainty in That Quantity
The uncertainty in absolute ocean sensible enthalpy is dominated by:
- Low-temperature heat capacity of ice,
- Reference-state assumptions,
- Simplifications required to compute a planetary-scale integral.
A conservative estimate is:
on the order of ±2 × 10²⁶ J
This is not a statistical error bar—it is a structural uncertainty.
Step 3: The Claimed Signal
Now compare this with the quantity that underpins modern climate attribution:
Estimated ocean heat uptake over the last ~50 years:
roughly (3–5) × 10²³ J
This is the signal.
Step 4: Putting the Scales Side-by-Side
Here is the comparison that is almost never made explicitly:
| Quantity | Order of Magnitude (J) |
| --- | --- |
| Absolute ocean sensible enthalpy | ~10²⁷ |
| Uncertainty in absolute enthalpy | ~10²⁶ |
| Claimed 50-year ocean heat uptake | ~10²³ |
The ratio is unavoidable:
uncertainty / signal ≈ 10²⁶ J / 10²³ J ≈ 1,000
The uncertainty in the ocean’s sensible heat content exceeds the reported signal by roughly three orders of magnitude.
This is not a matter of better statistics. It is a matter of scale.
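The same comparison takes two lines, using the round numbers quoted above:

```python
# Order-of-magnitude comparison of structural uncertainty vs claimed signal.
uncertainty_absolute = 2e26   # J, uncertainty in absolute ocean sensible enthalpy
uptake_50yr = 4e23            # J, mid-range of the claimed ~(3-5)e23 J 50-year uptake
print(f"uncertainty / signal ~ {uncertainty_absolute / uptake_50yr:.0f}x")  # ~500x
```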
“But We Measure Changes, Not Absolutes”
At this point, the standard rebuttal appears:
We do not need to know the absolute heat content. We measure changes.
That sounds reasonable—until one asks what must be true for that to work.
The Wall-Mark Allegory (and Why It Fails)
Imagine measuring the growth of a child without a ruler.
Instead, you make marks on a wall as the child grows taller. Over time, the marks move upward, and you infer growth.
This works only if two conditions hold:
- You know where the floor is.
- The floor is not moving.
In climate science, the ocean is the wall—and the observing system is the floor.
Condition 1: Knowing where the floor is
Ocean temperature measurements over the last 50–70 years come from a sequence of fundamentally different systems:
- Ship-based mechanical instruments,
- Expendable probes with known, evolving biases,
- Sparse deep measurements,
- A modern Argo float network with different calibration regimes.
Each transition introduces offsets that must be corrected after the fact, using models.
That means the baseline is not observed. It is inferred.
Condition 2: The floor is not moving
The floor has been moving continuously:
- Instrument types changed,
- Sampling depth changed,
- Spatial coverage changed,
- Correction methods changed.
The reference frame itself has drifted.
You are no longer measuring marks on a fixed wall—you are measuring marks while the floor shifts and tilts, and then reconstructing where the floor must have been.
A Second Moving Floor: Heat Entering from Below
The epistemological problem deepens further once the bottom boundary of the ocean is acknowledged.
The ocean is not heated only from above.
Seafloor heat flux
Earth’s internal heat flow to the surface is commonly estimated at roughly:
roughly 44–47 TW, or about 0.09 W/m² averaged over the Earth's surface
Integrated over 50 years:
roughly 7 × 10²² J
This is not negligible relative to claimed multidecadal ocean heat uptake.
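The conversion from flux to joules is one line of arithmetic. A sketch, using the commonly quoted ~44 TW total heat flow through the crust (the ocean-floor share would be somewhat smaller):

```python
# Steady geothermal heat flow integrated over 50 years.
SECONDS_PER_YEAR = 3.156e7
geothermal_power = 44e12                 # W, commonly quoted total crustal heat flow
E_50yr = geothermal_power * 50 * SECONDS_PER_YEAR
print(f"{E_50yr:.1e} J over 50 years")   # ~6.9e22 J, the ~7e22 J order quoted above
```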
Why uncertainty matters more than the mean
The critical issue is not the global mean flux—it is where and how the heat enters the ocean:
- Mid-ocean ridges with intense hydrothermal circulation,
- Ridge flanks with poorly constrained low-temperature heat transfer,
- Seamounts and submarine volcanoes,
- Vast plate interiors filled in by models due to sparse measurements.
The spatial and temporal structure of this heat input is uncertain, heterogeneous, and partially modeled rather than observed.
Epistemologically, this means the “floor” is not merely unstable—it is actively injecting heat, unevenly, through pathways that are not well constrained at climate-trend resolution.
Tightening the Comparison One Last Time
Put all relevant energy terms together:
| Quantity | Energy over ~50 years |
| --- | --- |
| Claimed ocean heat uptake | ~(3–5) × 10²³ J |
| Seafloor heat input (order) | ~7 × 10²² J |
| Uncertainty in absolute ocean enthalpy | ~2 × 10²⁶ J |
This is the full context.
The Epistemological Error
Here is the core issue, stated plainly:
Climate science treats a reconstructed differential signal as if it were a directly observed quantity, even though the signal is orders of magnitude smaller than the uncertainty of the dominant energy reservoir and comparable to poorly constrained boundary fluxes.
This is a category error.
In experimental physics or metrology, such a situation would immediately trigger questions about traceability, reference stability, and error dominance. In climate science, it is largely bypassed by redefining the problem in anomaly space and assuming stability.
That assumption is not a law of nature. It is a methodological choice.
What This Does—and Does Not—Imply
This argument does not show that the Earth is not warming.
It does not show that greenhouse forcing is irrelevant.
It does not show that climate models are useless.
What it does show is this:
From a strict epistemological standpoint, we do not know—at high confidence—whether the total energy of the Earth’s climate system is increasing, decreasing, or remaining approximately constant.
The dominant reservoir cannot be measured with uncertainty smaller than the claimed change, and one of its boundaries injects heat in ways that are incompletely observed.
That matters.
Conclusion
You can measure growth without a ruler—by making marks on a wall—but only if you know where the floor is, and only if the floor is not moving.
In climate science, the floor has moved, the wall has changed, and heat is entering from below through pathways that are not well constrained.
Until those epistemological limits are acknowledged explicitly, claims about the Earth’s energy trajectory remain inferences, not measurements.
That distinction matters.
Thank You so much for the excellent articles that appear on WUWT ! Merry Christmas and a Happy New Year !
I second that! 🙂
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” –Sam Clemens
We know what we know.
We know what we don’t know.
The risk comes from what we don’t know we don’t know.
And from what (they) THINK they know that they don’t know.
That was the point of Mark’s post.
Well done! Finally some actual figures of the uncertainty propagation of the elements involved in the system!
I should have also noted the use of an EXTENSIVE property for the calculations: joules.
As opposed to climate science that adamantly, stubbornly insists that you can average an *intensive* property – temperature – and get a meaningful value in the real world.
W/m^2 is not energy. It is power density. Until time is added, it is not energy.
Why do they do this? So they do not have to properly account for thermal energy and so they can blame it all on IR and call IR heat.
We also do not know for certain that a trace gas, occupying just 1/2400 of the atmosphere has anything whatever to do with climate. Yet people are spending trillions of dollars to ‘mitigate’ that trace gas. None of which is actually possible. If humans emitted zero CO2, the climate would not notice.
Each of the 8 B people on the planet exhales about 2 lb. of CO2 daily.
Sounds like a lot.
The 30C sustainable limit of the oceans depends on atmospheric mass. Adding carbon to the atmosphere adds to the atmospheric mass although there is no detectable increase in atmospheric pressure. So assuming an increase in mass due to the extra carbon, there will be a slight increase in the sustainable limit. Ice formation on the ocean surface sets the lower limit of the water temperature. So a slight increase in the sustainable upper limit with a fixed lower limit will increase the temperature. Although the difference is unlikely to be detectable.
No it doesn’t. When the oceans first appeared, the minimum surface temperature was below the boiling point of water. The surface has now cooled to the point where the maximum ocean surface temperature is around 38 C.
Adding CO2 to air doesn’t make thermometers hotter.
Although the difference is unlikely to be detectable.
Most fundamentalest of all.
Near Earth space is hot, ie 400 K.
Surface cannot radiate as a black body.
400 K = 127 °C = 260 °F
NS says that near earth space is hotter than Phoenix Arizona this morning.
“Space near Earth has extreme temperature swings, averaging around 50°F (10°C) due to constant solar exposure but ranging from scorching 250°F (120°C) in sunlight to freezing -250°F (-150°C) in shadow”
Wow that’s not the way an engineer looks at temperature. Google and NS seem to be using it as a metric for how hot a thing would be if it were in space.
If heat is a representation of kinetic energy expressed by moving atoms,
and space is defined by the “space” between atoms, ie empty
Then space has no measurable temperature, hot or cold.
So when NS says “Near Earth space” he must mean places where there are enough particles in the void to absorb radiation – like the places where atmosphere is so thin we can’t fairly call it atmosphere anymore.
Went back to check myself:
“A perfect vacuum has no temperature, but space isn’t truly empty; it contains diffuse gas and dust that absorb and radiate energy.”
Ok, Google agrees that space itself has no temperature. The catch is that space is not really empty. So
IF a thermometer is positioned in a dense enough collection of particles on the sunny side of Earth
THEN NS is about right.
Also my first quote from google AI demonstrates how context removal converts true statements into a false impression: Who would ever look at something that is either +250F or -250F and say its average temperature is about 50F? What nonsense.
Depends on if average is (Tmax + Tmin)/2 or if average takes into account an orbit around a sphere.
I went through that, too. A nuance is that the temperature of an object in space sees those sunlight/shadow temperature swings.
No clue where the 400C came from.
It is true. No surface, no BB radiation.
Not quite stated correctly, but close enough for government work.
Temperature is by definition and application the KE of stuff.
Void of space has neither therefore temperature is undefined.
However as Happer pointed out there is considerable energy, EMR.
There is also stuff: Moon, ISS, Earth, etc.
When EMR encounters stuff it turns into KE.
How do we know?
That stuff gets hot. Car hood on sunny day.
How hot?
1,368 W/m^2 = 394 K, 250 F.
That’s the theory.
Reality?
UCLA Diviner, ISS, Nikolov, Kramm, JWST.
288 K (with) − 255 K (without) = 33 C; the "33 C colder, −18 C Earth" claim is nonsense.
No GHE & Earth becomes much like Moon.
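As a check on the 1,368 W/m² = 394 K figure: it follows from inverting Stefan-Boltzmann for a flat, black surface facing the Sun, ignoring albedo and geometry. A minimal sketch:

```python
# Invert Stefan-Boltzmann, T = (S / sigma)**0.25, for a black, flat absorber
# facing the Sun; albedo and spherical geometry are deliberately ignored here.
SIGMA = 5.670e-8     # W/(m^2 K^4), Stefan-Boltzmann constant
S = 1368.0           # W/m^2, solar constant at Earth's distance
T = (S / SIGMA) ** 0.25
print(f"{T:.0f} K")  # ~394 K, about 121 C / 250 F
```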
Mean is (high + low)/2
Average is (the sum of n values)/n
For a spherical surface average is much higher than mean.
Yes. Also, the mean is meaningless when calculated using a flat earth
Thanks.
UCLA.Diviner: “The Diviner Lunar Radiometer Experiment is one of seven instruments aboard NASA’s Lunar Reconnaissance Orbiter, which launched on June 18, 2009. It is the first instrument to create detailed day and night surface temperature maps of the Moon. Data from Diviner has helped identify potential ice deposits in the polar regions, map compositional variations on the surface and derive subsurface temperatures. Since July of 2009, Diviner has operated continuously, acquiring nearly one trillion radiometric measurements to create the most detailed and complete set of thermal measurements of any planet in the solar system.”
Also: “Spherical Mean: The average value of a function (e.g., temperature, density) distributed over the surface of a sphere. It’s calculated by integrating the function over the sphere and dividing by the sphere’s surface area, providing a central value for that function, not just a point’s direction. ”
NS answer checks out.
My own understanding of the spherical surface area puzzle is that a sphere has the smallest surface area of solids relative to its volume: “The sphere has the lowest surface area for a given volume, meaning it achieves the most efficient, lowest surface-area-to-volume ratio (SA:V) compared to any other 3D shape, a principle seen in nature with water droplets and bubbles. This mathematical fact comes from the isoperimetric inequality in three dimensions, where the sphere encloses maximum volume with minimum surface, or conversely, minimal surface for a fixed volume, notes this Physics Stack Exchange article and this Reddit thread.”
What does volume have to do with anything here? If you need to spread a fixed amount of energy (like the heat I’ve been thinking about) over a fixed amount of material’s surface (like Earth’s surface), then a sphere gives you the highest heat (measured as temperature) on its surface. To complete my idea you have to make definitions of average and mean that account for efficiency – I could not find a good source to quote.
“ then a sphere gives you the highest heat (measured as temperature)”
If the impinging source is a plane wave, then a sphere with its minimum area per volume presented to the plane wave will actually absorb the smallest amount of heat.
This is due to the fact that only that portion of the impinging source that is normal to the surface will generate a transfer of heat.
The function describing the heat transfer has a cos(θ) term in it. A flat surface of area equal to that of the sphere that is also normal to the impinging source will generate the largest transfer of heat since cos(θ) = 1 for every point on the flat surface. The sphere sees the normal component of the plane wave varying by cos(θ) as you travel from the equator of the sphere to the poles of the sphere.
If temperature of the surface is related to the heat transferred, then the sphere will show the lowest temperature because it will see the smallest heat transfer.
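A numerical check of that cos(θ) argument: integrating the normal component of a uniform plane wave over the sunlit hemisphere of a sphere gives exactly the disc cross-section πR², i.e. one quarter of the sphere's total surface area. A sketch, assuming unit flux:

```python
# Integrate cos(theta) over the sunlit hemisphere of a unit sphere (midpoint rule).
# Result: the intercepted area equals pi*R^2, a quarter of the sphere's surface.
import math

N = 100_000
R = 1.0
d_theta = (math.pi / 2) / N
intercepted = 0.0
for i in range(N):
    theta = (i + 0.5) * d_theta                      # angle from the subsolar point
    ring_area = 2.0 * math.pi * R**2 * math.sin(theta) * d_theta
    intercepted += math.cos(theta) * ring_area       # only the normal component counts

print(f"intercepted area:           {intercepted:.4f}")                         # ~3.1416 (= pi R^2)
print(f"fraction of sphere surface: {intercepted / (4 * math.pi * R**2):.3f}")  # ~0.250
```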
Angle of incidence changes the magnitudes of absorption and reflection.
My words assumed the heat already arrived. I agree both TG and SN4’s rephrasing of same are correct.
It affects the rate at which a body cools. A greater surface to volume ratio results in faster cooling.
Assuming the Moon was originally at the same temperature as the Earth when created, then it will have cooled faster, resulting in a glowing core much deeper below the surface. The Moon’s crust is around 50 km thick, compared to the Earth’s 5km under deep oceans.
Other evidence points to more rapid cooling than the larger Earth, as well.
In a deterministic chaotic system, there is no minimum change which may lead to completely unexpected outcomes, so even minor variations in the spatial distribution of the 44 TW or so currently leaving through the crust are meaningful.
Future states of such a chaotic system can only be guessed, not predicted. You or I can do as well as the most powerful computer in existence, operated by the best and brightest.
The theory ignores electromagnetic field and wave theory.
I will at some time do a write-up.
In a 2007 interview with the Wisconsin Energy Cooperative, Reid Bryson stated that the contribution of human-generated CO2 to global warming was insignificant. He specifically said, “You can go outside and spit and have the same effect as doubling the CO2,” and followed it with the “fart in a hurricane” comparison to illustrate the point.
Source: Google AI
How many of these ‘climate experts’ have had at least an introductory oceanography course, preferably also physical as I had? Or have these changed in this century? Not to my limited knowledge of textbooks but there are new words and terms in many disciplines. Still a great Christmas.
Feynman said that science is belief in the ignorance of experts. Very true for “climate experts” who are demonstrably ignorant and gullible.
They believe that adding CO2 to air makes thermometers hotter, for example. They believe that the temperature of a body receiving radiation can be calculated from the amount of radiation reaching it. They believe that deep ocean currents are due to winds. They believe the ocean depths are heated by the Sun. And so on, and so on, and so on.
A pack of fools – or possibly frauds, who know they are spouting nonsense, but keep the fraud going in order to avoid the difficulty of getting a real job.
It’s probably worse than we thought.
Science advances based on addressing mistakes.
Consensus is avoidance of acknowledging mistakes.
And, how do they measure the effects of storms on the ocean energy?
(sub vet) We went through (under) a super typhoon. Man battle stations, go deep (3/4 of test depth).
Had 20 deg. rolls to port, then 20 deg. rolls to starboard, plus pitch and yaw.
For hours.
That took a lot of energy to move that much water for that long.
😉
“The Mariana Trench’s deepest point, the Challenger Deep, plunges to nearly 7 miles (about 11 kilometers) below the Pacific Ocean’s surface, with precise measurements around 6.8 miles (10,984 meters), making it the deepest spot on Earth, deeper than Mount Everest is tall. ”
Posted because both the article and SOB’s (Unfortunate? initialism of Sweet Old Bob) story got me thinking about just how big the oceans are.
SOB ….on purpose …. humor …
😉
A lot of energy to move that much water for that long.
Yes.
And the moving water gained kinetic energy.
That means the water warmed in the process. How much? Unknown.
“And the moving water gained kinetic energy.”
Not necessarily! The oceans have enormous kinetic energy variations by depth and latitude! The Earth is rotating, after all!
Or it could be that the ocean turbulence is distributing the loss of energy to the atmospheric surface winds (evaporation, cooling).
The wind blows and forms waves. That is a transfer of kinetic energy.
The answer to a specific question of storms and waves was provided, not a full or even partial analysis of ocean thermodynamics and fluid dynamics.
The above article is generally good, but IMHO suffers from the following:
— no mention of the degree of uncertainty in determination of the total mass of Earth’s liquid oceans,
— no mention of the degree of uncertainty in determination of the total mass of ice on Earth, both on land and floating on oceans,
— no mention of the degree of uncertainty of the average temperature of Earth’s ocean’s depths below 2000 m . . . the maximum depth at which the world-wide array of Argo floats measure water temperatures, which should be compared to the average depth of the world’s oceans, about 3,700 meters. IOW, we know relatively little about the temperature (and thus heat content) of over 50% of the world’s oceans, and
— no mention of the difficulty in determining the rates of vertical heat transfer due to conduction and convection between the relatively active ocean above the typical depths of thermoclines and the relatively quiescent depths below the thermoclines, both complicated by more-or-less horizontal-moving currents in both regions.
Good comments, and I rushed the piece. There will be follow up work to make this stronger and even more dramatic in its conclusion.
You just keep on doing this good work, Charles.
Maybe the AI will learn something? Nah.
Excellent news!
I’ll just add that there is a semi-qualitative way to judge/confirm changes in the overall heat content of Earth’s liquid oceans rather than by accounting for energy derived from water mass and specific heat and delta-T: one can also use satellite-measured sea level rise (SLR) and the coefficient of thermal expansion confined to just the first 1000-2000 meters of depth below the surface (volumetric thermal expansion essentially occurring only in the radial direction) to derive/confirm a change in water temperature. In such case, the changing ocean water level acts analogous to a liquid-in-glass thermometer.
Of course, like anything else, lots of simplifying assumptions (e.g., discounting land ice melt volume and all ice melt enthalpy as being insignificant, and discounting any subsidence/uplift across ocean floors) but—having tested it and finding a good correlation between Argo float-measured ocean average temperature temporal variations and NOAA satellite-measured average sea level temporal variations—I can attest that it is credible.
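A minimal sketch of that liquid-in-glass-thermometer idea. The expansion coefficient, expanding-layer depth, and thermosteric sea-level rise below are illustrative assumptions, not fitted values:

```python
# Relate thermosteric sea-level rise to an average warming of an assumed layer:
# delta_SL ~ alpha * layer_depth * delta_T  =>  delta_T ~ delta_SL / (alpha * layer_depth)
ALPHA = 2.0e-4            # 1/K, typical upper-ocean volumetric expansion coefficient
LAYER_DEPTH = 2000.0      # m, depth assumed to do essentially all of the expanding
slr_thermosteric = 0.04   # m, assumed thermosteric rise over some multidecadal period

delta_T = slr_thermosteric / (ALPHA * LAYER_DEPTH)
print(f"implied layer-average warming: {delta_T:.2f} K")   # ~0.10 K for these inputs
```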
Complete nonsense. Oceans are warmed from below. Convection follows. Surface water radiates energy, cools, contracts, sinks, being replaced by warmer less dense water, and the cycle repeats.
The result is that bottom waters consist of water at maximum density, at between 1 and 4 C, depending on several factors. Convection causes chaotic flows, complicated by chaotic magma “hot spots” below the crust, and chaotic vertical and lateral crustal movements.
As well as mid-ocean ridges, there are completely unknown numbers of hydrothermal vents, injecting water at up to 400 C or so.
Measuring the temperature of thermometers immersed in sea water is about as pointless as measuring the temperature of thermometers above the geoid.
All a charade designed to convince the ignorant and gullible that adding CO2 to air makes thermometers hotter.
It doesn’t, but “climate scientists” can’t figure out how to acknowledge reality without looking extremely foolish. I have no sympathy for them at all.
Oceans are warmed from below.
Thermal energy flows from hot to cold. If the surface warms, it stays on top and energy flows to cooler water. How much? Depends on many things, some of which you identified.
It is complete nonsense to assert Earth’s oceans are warmed from below (to any significant degree).
There, I corrected the sentence for you, no charge.
From https://www.physics.unlv.edu/~jeffery/astro//earth/savedir/earth_energy_budget_1.html :
“The geothermal heat flow of the Earth is on average 0.087 W/m**2 (see Wikipedia: Geothermal gradient: Heat flow). Clearly this is virtually insignificant compared to the solar flux to the Earth’s surface of ∼ 170 W/m**2.
The energy that powers climate, weather, and the biosphere comes almost entirely from solar flux.”
Also, it is quite amusing to imagine—with even a rudimentary understanding of physics—how thermal energy from below (i.e. at the sea floor interface) can travel through relatively cold, essentially constant temperature water (in the range of 0-4 °C) to then heat near-surface ocean water to the average of about 20° C (68° F).
Apart from that which warms the 70% of the surface covered by ocean, of course.
You can’t accept reality because you are ignorant and gullible. Fluid warmed from below induces convection. A high school experiment shows that water can be boiled by heat from above, while ice anchored to the bottom remains frozen.
You are probably dim enough to believe that adding CO2 to air makes thermometers hotter!
Now that’s amusing!
Are you seriously implying that the Earth’s oceans don’t have anything to do with climate, weather and the biosphere?
Really? SERIOUSLY???
Now THAT’S an absolutely AMUSING comment to post anywhere, let alone here on WUWT.
Also this advice for you: the logical fallacy of ad hominem attacks (look it up) gets you nowhere with people seeking intelligent discussions.
Not at all. I’m not sure how you leapt to that conclusion. If you disagree with what I said, maybe you could indicate why.
Around 70% of the surface is covered by oceans, which are heated from the bottom, whether you feel like accepting reality or not.
“If you disagree with what I said, maybe you could indicate why.”
It is my ethical responsibility to inform you, not to educate you.
😄😆😅🤣😂
Quit it! You’re killing me!
Satellite measurements are another microcosm of the type of issues discussed in this post – an error range in centimeters trying to measure a change of millimeters.
Must account for lunar, solar, and other extra terrestrial gravitational effects.
Hmmmmm . . . I thought Earth’s orbit about the Solar System barycenter (which is essentially its orbit around the Sun) accounted for ALL gravitational effects it is subject too, including even those from outside the solar system and outside the Milky Way and those created by ETs.
But perhaps I was misinformed.
My bad: “. . . ALL gravitational effects it is subject to . . .”
That’s more or less true – and does nobody any good at all. The orbit, like the movement of all bodies, is chaotic. We assume that the solar system will behave in the future pretty much as it did in the past.
I certainly hope so.
While true in the far-term, there is the matter of the degree-of-relevance of that fact that you clearly overlook.
From https://en.wikipedia.org/wiki/Stability_of_the_Solar_System (my bold emphasis added) :
“Though the planets have historically been stable as observed, and will be in the ‘short’ term, their weak gravitational effects on one another can add up in ways that are not predictable by any simple means.
“For this reason (among others), the Solar System is chaotic in the technical sense defined by mathematical chaos theory, and that chaotic behavior degrades even the most precise long-term numerical or analytic models for the orbital motion in the Solar System, so they cannot be valid beyond more than a few tens of millions of years into the past or future – about 1% its present age.
“The Solar System is stable on the time-scale of the existence of humans, and far beyond, given that it is unlikely any of the planets will collide with each other or be ejected from the system in the next few billion years, and that Earth’s orbit will be relatively stable.”
Let’s see . . . beyond a few tens of millions of years into the future . . . please get back to me well before then, say a million years from now, to again remind me about how “chaotic” the Earth’s orbit about the SS barycenter/Sun has been.
Until then, keep on hoping.
ROTFL.
Then you try to imply it’s not. Typical ignorant and gullible response.
You are appealing to the authority of someone equally as ignorant and gullible as yourself, who believes the future can be predicted from minute examination of the past.
The future states of chaotic systems simply cannot be predicted. The Wikipedia author wrote ” . . . not predictable by any simple means.”, implying that the future is predictable by non-simple means.
No, if the system is chaotic, its outputs are unpredictable at any time scale. One of the characteristics of chaos is self similarity. What this means is that Los Angeles may be destroyed by an earthquake occurring in the next 5 minutes. The inhabitants assume that it won’t, and plan accordingly.
I assume that the Sun will rise tomorrow, as do you. If it doesn’t, bad luck for both of us.
All irrelevant. Adding CO2 to air doesn’t make thermometers hotter, does it? So much for “climate science”, and its ignorant and gullible followers.
By that statement you obviously do not understand my clear cut reference to “far term”, as opposed to “near term” which as I specifically mentioned is the time frame of the next million to ten million years.
Oh well, I tried.
But please do continue your ad hominem attacks as well as your repeated logical fallacy of strawman arguments (again, look it up) for the continuing humor that such provide.
Splendid.
The article itself presents a category error: uncertainty in the absolute enthalpy of the ocean is treated as if it limits our ability to detect change. Climate science does not estimate warming by differencing two absolute energy states referenced to 0 K; it tracks time-varying temperature anomalies. Large uncertainty in absolute energy is expected and irrelevant unless it can be shown to alias into the observed trend with the same sign and structure, which is merely asserted here, not demonstrated.
The “moving floor” analogy overstates observing-system issues without engaging how they’re handled. Instrument changes and coverage shifts are explicitly corrected using overlap periods and physical cross-checks (e.g., sea-level rise, TOA imbalance). To sustain the claim, one would need to show residual biases large and systematic enough to explain the coherent, multidecadal OHC increase across independent datasets.
Finally, geothermal heat is a distraction. Its magnitude and variability are constrained and too small to explain observed upper-ocean heat uptake. The real question isn’t whether absolute ocean energy is known to extreme precision, but whether known uncertainties plausibly overturn the sign and persistence of observed energy gain.
The real question is –
“what do we think we know that we really don’t know?”
And the answer is –
SHITLOADS.
No, I’m exposing the hubris in those who say they can definitely determine that the Ocean is gaining heat energy when that heat energy is uncertain by orders of magnitude more than the proposed signal. None of your reference points for anomalies are fixed or useful.
You estimate uncertainty in absolute ocean enthalpy, but you never demonstrate that this uncertainty maps onto the time-derivative of ocean heat content. The claim that anomaly-based measurements are “not fixed or useful” is asserted, not shown. To sustain it, you’d need to quantify how observing-system changes produce a systematic, multidecadal bias comparable to the observed signal, and that analysis isn’t in the post.
I think you’ll find it’s the anomaly based measurements that are using unsupported assertions. I simply point out their signal is insignificant compared to the magnitude in quantifying the energy of the Ocean.
Anomalies should use a globally common value of the best temperature for the earth. That is the only way to tell where the earth as a whole really stands.
But that’s exactly the unsupported step. Showing that the absolute energy of the ocean is huge and uncertain does not demonstrate that the uncertainty of the anomaly is comparable to the signal. You never quantify the uncertainty of the time-dependent change, only the magnitude of the reservoir. In physics, a small signal riding on a large background is not “insignificant” unless the background uncertainty couples into the difference in a systematic way, and that coupling is asserted here, not shown.
Moreover, the anomaly signal is not standing alone. Ocean heat uptake is consistent with independent indicators of net energy gain: TOA radiative imbalance, thermal expansion–driven sea-level rise, surface warming, and cryosphere mass loss. For your objection to hold, the reference frames for all of these would need to drift coherently in the same direction for decades. That’s a much stronger claim than “the ocean is large.”
“Showing that the absolute energy of the ocean is huge and uncertain does not demonstrate that the uncertainty of the anomaly is comparable to the signal.”
Why do you assert such garbage? If the components of the anomaly have uncertainties of +/-1 then when you subtract them the uncertainties add. The difference can range from -2 to +2. [(-1) - (+1) = -2, (+1) - (-1) = +2] If you are trying to find a difference less than +/-2 it’s impossible to know if you’ve found a difference or not! The uncertainty of the absolutes applies to the ability to discern the signal.
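A minimal sketch of that interval arithmetic, with the root-sum-square rule shown alongside for the case where the uncertainties are assumed independent and random (the values are arbitrary placeholders):

```python
# Two quantities, each known only to +/-1 (arbitrary units), and their difference.
a, u_a = 10.0, 1.0
b, u_b = 10.5, 1.0

difference = b - a
worst_case = u_a + u_b                  # +/-2: intervals add when differencing
rss = (u_a**2 + u_b**2) ** 0.5          # ~1.41: valid only if effects are independent/random

print(f"difference = {difference} +/- {worst_case} (worst case), +/- {rss:.2f} (RSS)")
# Either way, a 0.5 'signal' is smaller than the uncertainty bound and cannot be resolved.
```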
” In physics, a small signal riding on a large background is not “insignificant” unless the background uncertainty couples into the difference in a systematic way, and that coupling is asserted here, not shown.”
Those in physics that believe this have not had a single physics lab!
The uncertainty of a measurement has both random effects and systematic effects. According to the GUM both are treated the same when calculating the final uncertainty. They both contribute to the total measurement uncertainty and their specific values are UNKNOWN.
Your assertion is based on the climate science meme that all natural measurement uncertainty is random, Gaussian, and CANCELS, leaving the measurement uncertainty component as being from systematic effects.
That is quite simply a garbage assumption, especially the “Gaussian” assumption. The burden is on *YOU* to show that the measurement uncertainty of the components is Gaussian and cancels before you can dismiss the measurement uncertainty. RANDOM does *not* mean GAUSSIAN. Prove *your* assumptions.
Why do you think the uncertainty maps “onto” a time derivative to begin with? Do you think an uncertainty budget has a “time” component?
You need to research what epistemology has to do with measurement before going off on statistical inference.
Those of us who have been trained in physical measurement and measurement uncertainty laugh at your pseudoscientific pronouncements about inferring what is wrong with CR’s analysis.
I would love to see you infer the correct composite angles for crown molding where the walls don’t meet at 90° and the ceiling angles vary too.
“Sea level rise” is due to the “moving floor”, and historically has varied by at least 10,000 m. The floor moves a fair amount, obviously.
There is no “TOA” imbalance resulting from CO2 in the atmosphere. The Earth has cooled over the last four and a half billion years, and is continuing to do so, whether you accept it or not.
Adding CO2 to air doesn’t make thermometers hotter. That’s just a fantasy indulged in by the ignorant and gullible.
Sad but true.
If one studies the CERES specifications, one finds the “TOA imbalance” is smaller than the stated measurement error band.
Once again the academic community knows this, the scientific community knows this and even some in the government know this. Yet they make a conscious choice to ignore it to further their quest for power and control. It is despicable.
Mr. Rotter,
Good Post!
You say that there is no ruler to measure climate change. May I suggest one, or at least a reliable proxy.
-1. What kind of ruler:
As is said at the outset of the article: The Ocean Dominates the Climate Energy System.
As is also said in the article, it is difficult to measure climate change because it is difficult to measure energy accumulation in the oceans. To be useful, a proxy ruler should be highly sensitive to energy accumulation in the oceans.
For this we can look, not at energy input to the oceans, but rather energy transfer from the oceans to the atmosphere.
Evaporative cooling is the most important process of heat transfer from the earth’s surface to the atmosphere. Energy budgets published in Wikipedia show that roughly 35% of solar energy that reaches the earth’s surface is transferred to the atmosphere by evaporative cooling. However, for the oceans, it represents a much greater proportion of cooling, at least 50% overall and much higher in the tropical oceans.
Evaporative cooling is also very sensitive to tropical ocean water temperatures (i.e. ocean heat content). Figure 5, from Willis Eschenbach’s post Rainergy (WUWT 2024-05-22), ‘Total Cloud Cooling vs Sea Surface Temperature’, shows that for tropical ocean temperatures between 25 and 30C, total cloud cooling increases at a rate of roughly 40% per degree C.
Thus evaporative cooling from the oceans, which is the dominant heat transfer mechanism in the oceans, and is highly sensitive to tropical ocean temperatures, is in fact the marginal cooling mechanism that off-sets the marginal heating of the oceans caused by climate change.
A proxy ruler that tracks changes in the marginal rate of evaporative cooling from the oceans should thus be an appropriate proxy to track climate change.
-2. Accumulated Cyclone Energy as a proxy for the marginal rate of evaporative cooling from the tropical oceans.
The sensitivity of evaporative cooling to tropical water temperature, at temperatures above 25C, as noted above, of the order of 40% per degree C, is many times what would be expected from temperature alone, which would be 7% per degree C. Therefore this sensitivity must be driven by other factors that affect the mass transfer of water vapor at the ocean surface, i.e. wind and wave action, factors that are in turn driven by high rates of water evaporation. Wind and wave action are also the factors that drive tropical cyclone activity.
Accumulated Cyclone Energy (ACE), measured on an annual basis, can be thus taken as an indication of the marginal rate of water evaporation in the tropics, which in turn, as noted under point 1 above, can be used as a proxy to track climate change.
-3. What does ACE say about Climate Change?
A December 9, 2025 article on WUWT by Paul Dorian, titled ‘Northern Hemisphere tropical activity in 2025’, presented a 50-year graph of global ACE, using 24-month running sums. What this graph shows is that in general ACE increased from the 1970s to the 1990s, and has been declining ever since. It also shows activity peaks that roughly align with the solar cycles.
Using ACE as a proxy for Climate Change, one would thus conclude:
Perhaps it was the strongest solar activity in the last 1000 years, associated with the peak of the 100 year Gleissberg Solar Cycle, that occurred in the 1960s that is responsible for the global warming that we saw at the end of the last century.
Since tropical cyclone activity exhibits multidecadal variability, the period you are using appears too short to suggest it is a proxy for climate change. And ACE tends to decrease in a warmer climate, not decrease, as the temperature differentials decrease as the climate gets warmer.
A recent Atlantic Ocean sediment study revealed higher tropical cyclone activity during The Little Ice Age vs. today, confirming that it is a colder climate that should be expected to increase ACE, not a warmer one.
Correction: “ACE tends to decrease in a warmer climate, not increase”
Too late to edit. 😐
The cyclonic heat engine is powered by the thermal difference from troposphere to ocean surface. If the troposphere is constant and the ocean temperature rises, the delta decreases and the cyclone is less energetic. If the ocean temperature is constant, the increase or decrease in cyclone energy depends on the troposphere.
Just because the surface warms or cools does not mean the troposphere changes the same amount (ignore spherical geometries, just total energy).
So yes, it is quite likely the LIA had more tropical cyclone activity..
Anything you get from Wiki related to climate is badly tainted. Be aware.
I think that your use of the latent heat of fusion in your sums is a fault in your analysis.
Your assessment addresses the magnitude of the signal, being the human induced warming over a 50 year period.
50 years ago, the oceans were not frozen. If you wanted to see the percentage difference that the signal could have done to the total energy content of the ocean over a 50 year period, then you should have started with the energy value of the oceans 50 years ago and compared that to the energy value of the oceans today, the difference being the result of the signal.
Your analysis SHOULD be akin to a person pushing a car on a level road and asking if a persons ‘push’ energy can make a difference. What you have actually done is equated the total energy of the car as you would find it on a level road at the top of a very tall mountain. ie you’ve included the massive amount of potential energy of the elevation when all we really need to know is the push vs rolling resistance.
The 50 year human caused signal is not trying to melt ice, the water was already melted 50 years ago. Similarly, a person does not have to get the car to the top of the mountain, they are just trying to push it along a flat road.
This change makes a huge difference to your assessment and subsequent dismissal of the signal energy to total energy ratio.
It’s not TRYING to do anything. It’s a change in an energy reservoir, a change that is undetectable in the uncertainty of trying to estimate the total quantity.
On your analysis, a large dam is not affected by rain. Simply because you look at the whole storage, and find that 1Ml of rain is no where near to the dam’s capacity when it has many thousands of Ml of storage.
In reality, if the dam is already near the overflow point, then any rain may cause over-topping.
The measure of signal to noise should not be back to a dry dam but should be compared to the point JUST before it rained.
It was also noted in a comment above that you’ve equated the signal to the ocean’s heat energy starting from absolute zero. Why?
As a final pointer, when a driver is pulled over for speeding they only look at the signal relative to the local zero point (the road surface), not to a static point isolated from the Earth’s spin, and neither to some static point beside the solar system that the Sun is rapidly traveling through.
Your signal to noise argument, when taken back to frozen water, is easily shot down, this could provide argument for those who follow the climate religion. ie Another climate ‘denier’ argument shot down in flames, (I don’t believe that we should be giving them any ammunition to play with).
Your analogy postulates a known stable reference point and a way to measure against it. We don’t have that with Ocean heat content.
50 years ago, the oceans were not frozen. So why have you calculated the energy within the oceans back to a frozen state?
You and I both know that the energy required to melt ice to water is many times larger than the energy required to shift the water temperature up by even a single degree C.
You have thrown a very large number into the equation for no better reason than to dilute any signal. You wouldn’t accept someone from the climate cult doing this and nor should you.
I was alive 50 years ago, as you most probably were too, the oceans were not frozen then, so you should not be using that criteria as a starting point nor as a dilution factor.
To demonstrate my point to others who may be just scanning these notes: 50 years ago a litre of water at 4C had an energy of 4.1 × 4 × 1000 J (relative to liquid water at zero C). If that water was raised to 5C now, then the energy added would be one quarter of that number, or an extra 25%. You could say that the signal is 25% of the background energy in the liquid. Actual numbers being 4.1kJ added to 16.4kJ. 25% over 50 years is 0.5% per annum and would not be trivial, it could be measured.
BUT, if you went back to say that the water energy related to the ICE at zero C, then the numbers are 333.5kJ to melt the ice plus the 16.4kJ to get to 4C and compare that to the added 4.1kJ.
Actual numbers being 4.1kJ added to 349.9kJ, or about 1.2%. 1.2% over 50 years is easily lost in rounding errors.
But both calculations above are for the same 1C rise from 4C to 5C.
I still believe that you have chosen a starting point, (of ice which was not present at the start of the period), for the sole purpose of making a large denominator. If you stayed with liquid for the start and end, then the signal to noise ratio is closer to 25% and could not be ignored, compared to a calculated 1.2% which you could sweep under the carpet.
I should be grateful that you started with just ice, I could imagine that the extra 4.1kJ, (as used in the example above), would be beyond insignificance if you had used the E=mC2 value of the mass of the water as your starting point.
Energy = 9×10^16 compared to ( 9×10^16 + 4100). It’s the same 1C rise.
Now do you see the problem with using the ICE value as your basis for the energy in the system?
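For anyone skimming, the same litre-of-water example as a short sketch, using the round numbers from the comment above:

```python
# The same 1 C rise (4 C -> 5 C) for one litre of water, expressed against three baselines.
c_water = 4.1e3       # J/(kg K)
L_fusion = 333.5e3    # J/kg
m = 1.0               # kg (one litre)
rest_energy = 9.0e16  # J/kg, E = m c^2

added = c_water * m * 1.0                                # 4.1 kJ for the 1 C rise

baselines = {
    "liquid at 0 C": c_water * m * 4.0,                  # 16.4 kJ
    "ice at 0 C": c_water * m * 4.0 + L_fusion * m,      # 349.9 kJ
    "E = mc^2": rest_energy * m,                         # 9e16 J
}
for name, base in baselines.items():
    print(f"{name:>14}: signal = {100 * added / base:.3g}% of baseline")
# ~25%, ~1.2%, and ~5e-12% respectively: the same physical change, shrinking with the baseline.
```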
“It was also noted in a comment above that you’ve equated the signal to the ocean’s heat energy starting from absolute zero. Why?”
Because thermodynamics *always* uses Kelvin – where 0 *is* absolute “0”.
Well clearly you missed the point. If you wanted a precision measurement, you’d want your gauge to move a lot based on a small signal and still have the capability to see the full range of the data.
For example you could measure a 4 degree rise from 20 to 24 using a thermometer that measured very accurately between 20 and 30 C or you could use one that measured very accurately from 0K to 300K.
Both sound good but when it comes to accuracy, it can also be expressed as accuracy with respect to the scale, eg +/- 1% of the range. So for the same percentage accuracy, you could say that the 20-30C thermometer had an accuracy of 0.1C (1% of 10), or it was 3K from the 0K to 300K meter. Based on these gauges, the first measurement was a rise of 4C with an accuracy of +/- 0.1C, the second was a rise of 4K with +/- 3K.
So now do you see why you can lose a signal if you make the numbers excessively large because you relate to a zero that is a long way from the target range?
I spelled it out a little more by using the mass to energy conversion, showing that the signal would be well and truly lost when looking at the E=mC^2 values.
When you want accuracy, you don’t reference it to something big, you reference using a gauge that just exceeds the min and max values to be read.
Temperature measurement uncertainty is typically given as a +/- degree range and not a percentage value. E.g. the NWS ASOS measurement stations are shown with a manufacturing tolerance of +/- 1 deg C measurement uncertainty.
When doing thermodynamics the proper scale is Kelvin. You have to measure against an absolute 0 (zero) point or you run into problems around the arbitrary zero-degree scale points.
You miss the point entirely.
If you know °C, you can add 273.15 K. Pretty easy.
If you know °F, you can add 459.67 R. Again, pretty easy.
Subtract to reverse the process.
Have you never wondered why Stefan-Boltzmann requires Kelvin temperatures?
Have you never had a thermometer with both °C and °F scales. I’ll be honest, I’ve never had one with °C and K scales, but it wouldn’t be hard.
You miss the point. The analysis was not to determine a precise value of anything. It was to demonstrate relative magnitudes of signal and uncertainty.
AFAIK, the Earth’s oceans have never frozen solid in some 3.8 billion years of their existence. However, there is scientific evidence that the surface waters of all of Earth’s oceans may have frozen over during “snowball Earth” climate intervals, the last of which was more than 600 million years ago.
I agree with you that they may never have been frozen, most likely due to ice floating above warmer water and the ice being an insulator trapping any geothermal heat.
One more reason not to start with ice as the initial energy level of the equation.
You assume a “human caused signal,” incorporating another assumption, “atmospheric CO2 drives temperature,” for which there is zero empirical evidence in support and a good deal of empirical evidence to the contrary.
Thanks for exposing the multitude of holes in the Swiss cheese they laughingly call “climate science.”
The more people understand the litany of shit they gloss over while presenting a facade of certainty, the better.
“Until those epistemological limits are acknowledged explicitly, claims about the Earth’s energy trajectory remain inferences, not measurements.
That distinction matters”.
Indeed. It always comes down to the base level which is not A level at all. No zero point of departure.
And all assumptions are based on those inferences which is the basis of any hypothesis. And so on and so forth..
I find that comforting..
I am still waiting for a concise, experimentally measurable, definition of the optimum climate.
For all they (who claim to the contrary) know, we might be approaching the optimum.
‘No matter how well radiative forcing is understood’. And even that is not understood very well because it is overestimated for a trace gas like CO2 by a factor 2 to 3, if not by more.
All while ignoring the counter-acting feedbacks which cannot be denied given zero empirical evidence that atmospheric CO2 levels “drive” the Earth’s temperature (to the contrary, the empirical evidence indicates atmospheric CO2 does not “drive” the Earth’s temperature).
The Climate Hysterics use of forcing is driving me to drink more beer and release more CO2.
There are too many hijacked and redefined/repurposed scientific definitions used to confuse the masses.
Of course it does – fail, that is. As Fourier said, the Earth loses all the heat it receives from the Sun to outer space, plus a little internal heat – currently 44 TW or so.
The Earth no longer has a molten surface – it has cooled, and continues to do so.
There are no “greenhouse physics”, radiative transfer is irrelevant, and “conservation laws” are not understood by people who believe that adding CO2 to air makes thermometers hotter.
Mind you, it’s nice to see some acceptance that the oceans are heated from beneath. Good for you, Charles. As Feynman said “Nature cannot be fooled”.
Climate is complex and we try to fit our approximations into the data (and hope the data are consistent and accurate). I don’t have the magic 8-ball with all of the answers regarding how much climate change is occurring and its impact. But on the micro level, I look to the maple tree on the property we’ve owned for a little under 30 years.
I noticed that the leaves seemed to fall between November 1 and 6 so I thought I’d track them. Sure enough, with the exception of two years… one two days early and one one day late, the tree didn’t seem to notice any change in its environment. It just happily grew and dropped its leaves on schedule every year. Of course, the tree doesn’t have weather station sensors or computer feedback processes. It’s just a tree without a brain reacting to our doomsday climate change as if it wasn’t all that important.
Bravo.
I’ve long argued that El Niños are strongly influenced by heat from the ocean floor, not just from above. I claimed the Pacific Ring of Fire, with its 400 C water, transports heat by subsea plumes that accumulate in the general area of Tahiti before journeying to offshore Central America.
Copilot AI walked me through the data and convinced me that I was wrong. But now I read this in today’s WUWT and I’m not so convinced. The ocean is not heated only from above and we are clueless about the magnitude from the ocean floor because no one is attempting to quantify it. Nothing that may distract from the CO2-as-climate-control narrative gets funded.
I provided the WUWT link to Copilot and here’s its response:
Yes, the ocean is not heated only from above—geothermal heat from the seafloor exists—but its magnitude is tiny compared to solar input. Global geothermal flux averages about 0.09 W/m², while incoming solar radiation is roughly 240 W/m². That’s less than 0.05% of the energy budget.
Even localized hydrothermal vents or volcanic activity can warm small regions, but they cannot account for large-scale phenomena like El Niño, which are driven by wind patterns and redistribution of surface heat absorbed from the sun.
This article could help serious high school science students understand the importance of uncertainty in drawing conclusions.