Argo And Ocean Heat Content

Guest Post by Willis Eschenbach

Today I ran across an interesting presentation from 2013 regarding the Argo floats. These are a large number of independent floats spread all across the world's oceans. They spend most of their time sleeping at a depth of 1,000 metres (3,300 feet) below the surface. Then they drop down to 2,000 metres and make a slow ascent to the surface, measuring temperature and salinity along the way. Once on the surface they call home like ET, and then drop back down to the deeps and go to sleep again.

Figure 1. Argo float operation. There are about 3,500 floats in the ocean, and a total of ~10,000 floats have been used over the period of operation.

Now, there were several interesting things in the presentation. The first was a total surprise to me. We hear a lot about how the heat is “hiding” in the ocean. But what I didn’t know was that according to the Argo floats, every bit of the warming is happening in the southern extratropical ocean, while the oceans of both the tropics and the northern hemisphere are actually cooling … color me puzzled.

Figure 2. Change in ocean heat content by zone. Units are 10^22 joules. Graph from the presentation linked to above.

What does that indicate? I’m sure I don’t know … but I doubt very greatly if any of the climate models reproduce that curious combination of warming and cooling.

What I found most interesting, however, was a graph of the global change in ocean heat content over the period. Here is that graph:

Figure 3. Global change in ocean heat content (OHC) since the time of the full deployment of the Argo floats. Data from the surface down to 2000 decibars (dbar), which is approximately 2000 metres.

I was sad to see a couple of things. First, this is the data with the monthly averages (the “climatology”) removed. I prefer to see the raw data so I can look at seasonal patterns. Second, the presentation lacks error bars … but needs must when the devil drives, so I use the data I have. I digitized the data so I could analyze it myself.

The first thing that I wanted to do was to look at the data using more familiar units. I mean, nobody knows what 10^22 joules means in the top two kilometres of the ocean. So I converted the data from joules to degrees C. The conversion is that it takes 4 joules to heat a gram of seawater by 1°C (or 4 megajoules per tonne per degree). The other information needed is that there are 0.65 billion cubic kilometres of ocean above 2,000 metres of depth, and that seawater weighs about 1.033 tonnes per cubic metre.
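That conversion can be sketched in a few lines of Python as a cross-check (this is illustrative only, not the R code used for the post), using just the figures quoted above: 0.65 billion cubic kilometres of ocean above 2,000 metres, 1.033 tonnes per cubic metre, and 4 joules per gram per degree:

```python
# Convert ocean heat content change (joules) into a mean temperature
# change (deg C) for the 0-2000 m layer, using the figures in the text.
volume_m3 = 0.65e9 * 1e9        # 0.65 billion cubic km -> cubic metres
grams_per_m3 = 1.033 * 1e6      # 1.033 tonnes/m^3 -> grams per m^3
specific_heat = 4.0             # joules per gram per deg C (approximate)

heat_capacity = volume_m3 * grams_per_m3 * specific_heat   # J per deg C
deg_per_1e22_joules = 1e22 / heat_capacity

print(f"heat capacity of top 2000 m: {heat_capacity:.3g} J/degC")
print(f"warming per 10^22 joules:    {deg_per_1e22_joules:.4f} degC")
```

At roughly 2.7e24 joules per degree, a decadal gain of ~6 × 10^22 joules works out to a bit over 0.02°C per decade.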

Figure 4. Ocean volume by depth. Little of the ocean is deeper than about 5 km.

Using that information, I calculated what the change in heat content means in terms of temperature change. Here is that graph:

Figure 5. Change in ocean temperature from the surface down to 2,000 dbar (~2,000 metres).

A change of two hundredths of a degree per decade … be still, my beating heart. Unfortunately, I can’t give you any error estimate on the trend because there are no error bars on the data in the presentation.

Let me take a detour here whose purpose will be clear in a moment. I want to look at the CERES data, which is satellite-based data on the radiation budget of the earth. Here is the month-by-month change in the “Net TOA Radiation”. The net TOA radiation is the incoming radiation at the Top Of the Atmosphere (TOA) that hits the earth (sunlight) minus the outgoing radiation at the TOA leaving the earth (reflected sunlight plus thermal infrared [longwave] radiation). Figure 6 shows those changes:

Figure 6. Decomposition of the CERES net TOA radiation into a seasonal and a residual component. Units are watts per square metre (W/m2). The residual component (bottom panel) is the raw data (top panel) with the monthly averages (seasonal component or “climatology”, middle panel) removed.

Now, this is an interesting graph in its own right. In the net radiation you can see the ~20 watts per square metre (W/m2) effect of the annual swing of the earth towards and away from the sun. The earth is closest to the sun in January, so the earth gains energy around that time, and loses it in the other half of the year. In addition, you can see the amazing stability of the system. Once we remove the monthly averages (the “climatology”), the net TOA imbalance generally varies by only something on the order of ± half a watt per square metre over the thirteen years of the record, with no statistically significant trend at all … astounding.
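For anyone who wants to replicate the "remove the climatology" step, here is a minimal sketch on synthetic data (the series below is made up to stand in for the CERES data, which is linked at the end of the post):

```python
import numpy as np

# Synthetic 13-year monthly series standing in for the CERES net TOA data:
# a ~20 W/m^2 peak-to-trough seasonal cycle plus small residual noise.
rng = np.random.default_rng(0)
n_years = 13
months = np.arange(n_years * 12)
raw = 10.0 * np.cos(2 * np.pi * months / 12) + 0.3 * rng.normal(size=months.size)

# The "climatology" is the average of all Januaries, all Februaries, etc.
climatology = raw.reshape(n_years, 12).mean(axis=0)

# The residual (bottom panel of Figure 6) is raw minus its month's mean.
residual = raw - np.tile(climatology, n_years)
```

On the real CERES series, the residual is what stays within roughly ± half a watt per square metre over the record.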

But I digress. The reason I’m looking at this is that the excess energy that comes in to the Earth (positive values), peaking in January, is stored almost entirely in the ocean, and then it comes back out of the ocean, with a peak in outgoing radiation (negative values) in July. We know this because global temperatures don’t swing in step with the radiation imbalance, and there’s nowhere else large enough and responsive enough for that amount of energy to be stored and released.

In other words, the net TOA radiation is another way that we can measure the monthly change in the ocean heat content, and thus we can perform a cross-check on the OHC figures. It won’t be exact, because some of the energy is stored and released in both ice and land … but the main storage is in the ocean. So the CERES net TOA data will give us a maximum value for the changes in ocean storage, the value we get if we assume it’s all stored in the ocean.

So all we need to do is to compare the monthly change in the OHC minus the climatology, as shown in Figure 3, with the monthly change in downwelling radiation minus the climatology, as shown in the bottom panel of Figure 6 … except that they are in different units.

However, that just means that we have to convert the net TOA radiation data in watts per square metre into joules per month. The conversion is

1 watt-month/m2 (which is one watt per square metre applied for one average month)

= 1 J/(s·m2) × 5.11e+14 m2 (surface area of the earth) × (365.2425/12) × 24 × 3600 seconds/month

≈ 1.35e+21 joules
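The same conversion in code, as a sanity check on the arithmetic (the raw product is ~1.344e21 joules, which the text rounds to 1.35e21):

```python
# One watt per square metre, applied over the whole earth for one
# average month, expressed in joules (surface area figure from the text).
earth_area_m2 = 5.11e14
seconds_per_month = 365.2425 / 12 * 24 * 3600   # average calendar month

joules_per_watt_month = 1.0 * earth_area_m2 * seconds_per_month
print(f"{joules_per_watt_month:.4g} joules")    # ~1.344e21, i.e. ~1.35e21
```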

So I converted the net TOA radiation into joules per month, and I compared that to the Argo data for the same thing, the change in ocean heat content in joules/month. Figure 7 shows that comparison:

change in OHC argo and ceresFigure 7. A comparison of the monthly changes in ocean heat content (OHC) as measured by the CERES data and by the Argo floats.

Now, this is a most strange outcome. The Argo data says that there is a huge, stupendous amount of energy going into and out of the ocean … but on the other hand the CERES data says that there’s only a comparatively very small amount of energy going into and out of the ocean. Oh, even per CERES it’s still a whole lot of energy, but nothing like what the Argo data claims.

How are we to understand this perplexitude? The true answer to that question is … I don’t know. It’s possible I’ve got an arithmetical error, although I’ve been over and over the calculations listed above. I know that the CERES data is of the right size, because it shows the ~20 watt swing from the ellipticity of the earth’s orbit. And I know my Argo data is correct by comparing Figure 7 to Figure 2.

My best guess is that the error bars on the Argo data are much larger than is generally believed. I say this because the CERES data are not all that accurate … but they are very precise. I also say it because of my previous analysis of the claimed errors given by Levitus et al in my post “Decimals of Precision”.

In any case, it’s a most curious result. At a minimum, it raises serious questions about our ability to measure the heat content of the ocean to the precision claimed by the Argo folks. Remember they claim they can measure the monthly average temperature of 0.65 BILLION cubic kilometres of ocean water to a precision of one hundredth of a degree Celsius … which seems very doubtful to me. I suspect that the true error bars on their data would go from floor to ceiling.

But that’s just my thoughts. All suggestions gladly accepted.

Best of everything to all,

w.

My Standard Request: If you disagree with someone, please QUOTE THE EXACT WORDS YOU DISAGREE WITH. That way everyone can understand the exact nature of your objections.

Data and Code: The Argo data (as a .csv file) and R code are online in a small folder called Argo and CERES Folder. The CERES TOA data is here in R format, and the CERES surface data in R format is here. WARNING: The CERES TOA data is 220 MB, and the CERES surface data is 110 MB.

Further Data:

Main Argo Data Page

Main CERES Page

List of Argo “gray” floats 


275 Comments
Rob aka flatlander
December 4, 2014 7:03 pm

0.65 BILLION cubic kilometres of ocean with 3500 floats, do the math, no accuracy at all

Mark Luhman
Reply to  Rob aka flatlander
December 4, 2014 7:18 pm

Yes, exactly. To top it off, they move around; there is no reason to believe they are measuring much of anything that we could begin to comprehend. It's like taking random skin temperature readings of a human over the course of a year and then trying to ascertain the human's state of health.

Paul Mackey
Reply to  Mark Luhman
December 5, 2014 1:19 am

The lack of error bars is sub-prime. Also, with these grandiose averaging schemes, we never get any insight into how any systematic error is analysed, evaluated, and its effect taken into account.
Perhaps there is a measured offset for each float before launch, but is this linear with temperature? Does it perhaps change through time, or because the biasing/measuring circuitry is changing in temperature? Or does the systematic error vary with the amount of power available? Anyway, it could be a real nightmare given there are 3,500 of them roving freely around in the ocean's currents, and it should therefore be open to scrutiny.

DEEBEE
Reply to  Mark Luhman
December 5, 2014 1:59 am

That analogy got stretched a bit when you jumped to state of health.

Reply to  Mark Luhman
December 5, 2014 9:11 am

Paul, standard in the air- and marine-temperature measurement community is the assumption that each instance of measurement error has a constant mean and a random distribution. The workers then just estimate and subtract a mean error, invoke the central limit theorem and claim the distribution averages away, and then proceed as though the global average is free of measurement error.
It’s as unjustifiable an assumption as it is universal. In my experience, that assumption is defended fiercely. And for good reason: if they acknowledge the reality of systematic measurement error, the error bars are so large they are left with nothing to say.
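The distinction drawn above can be demonstrated with a toy simulation (the numbers below are hypothetical, not any instrument's actual error model): random errors shrink as 1/√N under averaging, while a bias shared by the instruments survives in full.

```python
import random

# Toy demonstration: random measurement errors average away with many
# floats, but a shared systematic bias does not (hypothetical numbers).
random.seed(42)
n_floats = 3500
true_temp = 10.0
shared_bias = 0.05    # deg C, identical for every "instrument"

readings = [true_temp + random.gauss(0.0, 0.5) + shared_bias
            for _ in range(n_floats)]
mean_error = sum(readings) / n_floats - true_temp

# Random part ~ 0.5/sqrt(3500) ~ 0.008 degC; the 0.05 degC bias remains.
print(f"error of the mean: {mean_error:+.3f} degC")
```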

Reply to  Mark Luhman
December 5, 2014 11:48 am

Yes, and they move around in the same blob of water during their horizontal drifts so if they sink into a warm patch in a current, they stay in that warm water as they are “moving” around the ocean during that part of their dive cycle.

Reply to  Mark Luhman
December 7, 2014 10:41 am

They move around in currents, so what is it that they are actually measuring? I would like to see where possible, anchored ARGO buoys.

Reply to  Rob aka flatlander
December 4, 2014 8:43 pm

Based upon facts introduced into evidence, the accuracy is indeterminate.

Leo G
Reply to  Rob aka flatlander
December 5, 2014 4:52 am

There have been serious problems with the accuracy of the ARGO CTD modules from the beginning of the program. Seabird issued a series of technical bulletins in 2009 that described some of the problems in detail.
The major issue was the high incidence of micro leaks of the float drunk pressure sensor.
The effect of the problem was that the buoys gradually developed an offset in the depth measurement, and a reduction in sensitivity, such that the buoys increasingly overestimated the depth. The offset could be readily detected – the buoys would report surface depths above sea level. The loss in sensitivity was more problematic.
The effect of this problem, and a related problem that affected sensitivity, was that each ARGO buoy reported temperatures at depth in excess of the true temperature (according to the temperature lapse rate) by an amount which increased over time.
I would like to know whether the problem has now been rectified.
Until I do know, I certainly won’t trust ARGO data.

Leo G
Reply to  Leo G
December 5, 2014 5:07 am

That should read “float druck pressure sensor micro leak problem”

Reply to  Leo G
December 5, 2014 1:38 pm

Yes, scientists using the data have adjusted it for this problem, but I'm unsure whether the Argo data that Willis has accessed has had this correction applied. See Barker (2011), "Pressure Sensor Drifts in Argo and Their Impacts," Journal of Atmospheric and Oceanic Technology, Volume 28, p. 1036.

Leo G
Reply to  Leo G
December 5, 2014 4:14 pm

“Scientists using the data have adjusted it for this problem”

The adjustment is reliant on detecting the positive depth offset error from surface data. But the sensitivity component of the error is more difficult to estimate, and is complicated by changes over time in the sensitivity of the pressure transducer due to other effects which tend to obscure the depth offset at the surface.
At one time it was suggested that most of the transducers from the major supplier were affected.
As I understand it, the buoys were designed to take 70 pressure and temperature readings over a pressure range of 0 dbar to 2000 dbar, which should correspond to readings at roughly 30 metre depth intervals. The allowance for pressure drift error for 150 profiles over 4 years was ±1.3 dbar (±13 m), which on average corresponds to a temperature error of ±0.065 Celsius.
But some buoys were found to have errors of up to -65 dbar at 2000 dbar (an error of about 65 metres at 2 kilometres depth).

MarkW
Reply to  Rob aka flatlander
December 6, 2014 11:40 am

Does anyone know what the accuracy of the thermometers on those units is? I'd be surprised if it's better than 0.1°C, especially after a decade without re-calibration.

Al Goretex
Reply to  MarkW
December 9, 2014 12:18 am

http://www.argo.ucsd.edu/FAQ.html: How accurate is the Argo data?
The temperatures in the Argo profiles are accurate to ± 0.002°C and pressures are accurate to ± 2.4dbar

December 4, 2014 7:09 pm

Reblogged this on Centinel2012 and commented:
No basic disagreement with your work; in fact, I was impressed!
I do have a comment, though: the 20 watt swing from the ellipticity of the earth's orbit would affect the northern and southern hemispheres differently, since the north is predominantly land and the south is predominantly water. The south is closer to the sun in the winter, so the water would absorb more heat, and there is more water. This would set up a flow of water based on the heat imbalance. This may be part of what you have found?

Phil B.
Reply to  Centinel2012
December 4, 2014 9:01 pm

The south is closer to the sun in the Northern winter. It’s summer down south in January. I’m not sure if that’s what you meant or not, but it seemed like you had missed that fact from your comment.

rgbatduke
Reply to  Centinel2012
December 4, 2014 9:07 pm

Yeah, about that. I once took the time to actually compute the TOA insolation variation from Mr. Sun, and ended up with a number more like 91 W/m^2. Let’s see now:
http://en.wikipedia.org/wiki/Solar_constant#Solar_irradiance
Yup, my calculation agrees well with this article. I get 6.7% variation, they assert 6.9% (probably disagreeing about the base, I’m using the insolation at 1 AU). This works out to be variation from 1412 W/m^2 in January to 1321 W/m^2 in July, and 1412 – 1321 = 91 W/m^2.
So I’m not sure what the 20 W/m^2 is referring to. Presumably some multiple of this after accounting for albedo, emissivity etc. But TOA insolation most definitely does not vary by 20 W/m^2 due to the Earth’s elliptical orbit, it varies by 91 W/m^2. Some other number based on this number may, but not this number.
I truly enjoy pointing out two things when I discuss this number, as they add to Willis’ perplexities. First, solar peak occurs in northern hemisphere winter, or southern hemisphere summer — right at the hottest/coldest parts, in fact. Second, the Earth’s average temperature peaks in northern hemisphere summer — when insolation is lowest — and is minimum in southern hemisphere summer — when insolation is the highest. Talk about counterintuitive! Yes, people have glib explanations for this, but we are talking about 90 W/m^2 difference here. That is an easy 30 to 40 times the entire change in forcing expected from CO_2, and it still isn’t sufficient to overcome the obvious nonlinearities in the Earth’s climate system that would produce the exact opposite result if one attempted to heat nearly any toy/model planet without those nonlinearities.
Of course, CERES shows a strong variation with the same general signal, but all one can really get out of it is that the system is remarkably stable. Oh, and that any modulation of energy flow by CO_2 is completely invisible, at least on the decadal timescale. That too is a bit surprising, although it may just be a matter of resolution.
rgb
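rgb's ~91 W/m^2 figure is easy to reproduce from the inverse-square law, taking the solar constant as ~1361 W/m^2 at 1 AU and perihelion/aphelion distances of roughly 0.9833 and 1.0167 AU (approximate round values, not the exact ephemeris):

```python
# TOA insolation at perihelion and aphelion via the inverse-square law.
S0 = 1361.0             # W/m^2, solar constant defined at 1 AU
r_perihelion = 0.9833   # AU, early January
r_aphelion = 1.0167     # AU, early July

s_jan = S0 / r_perihelion**2   # ~1408 W/m^2
s_jul = S0 / r_aphelion**2     # ~1317 W/m^2
swing = s_jan - s_jul          # ~91 W/m^2
print(round(s_jan), round(s_jul), round(swing))
```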

RACookPE1978
Editor
Reply to  rgbatduke
December 4, 2014 9:37 pm

RGB:
For every day-of-year (DOY), top-of-atmosphere (TOA) radiation follows a cosine curve fit to Leif's 10 years of SORCE measured radiation data (within +/- 0.30 watt/m^2) every day:
TOA =1362.36+46.142*(COS(0.0167299*(DOY)+0.03150896))
Above is in radians, using Excel’s “language” for DOY from 1 to 366.
For Latitude in Radians, again using Excel’s language,
when TAU = Day_Angle (based on Day-of-Year) TAU = 2*3.1415*(DOY-1)/365
Declination angle =0.006918-0.399912*COS(TAU)+0.070257*SIN(TAU)-0.006758*COS(2*(TAU))+0.000907*SIN(2*(TAU))-0.002697*COS(3*(TAU))+0.00148*SIN(3*(TAU))
We can keep going, because, based on each hour-of-day and day-of-year, you can get the actual radiation falling (in clear skies) on each sq meter at any latitude for any hour of the day.
The values in the graphs above represent daily total approximations, NOT the actual radiation falling on a flat surface at that latitude.
Thus, for the following day-of-year values, top-of-atmosphere radiation =

Date 	DofY	TOA_Rad.
5-Jan	5	1408
22-Jan	22	1405
22-Feb	53	1390
22-Mar	81	1371
22-Apr	112	1347
22-May	142	1328
22-Jun	173	1317
5-Jul	186	1316
22-Jul	203	1318
22-Aug	234	1330
22-Sep	265	1351
22-Oct	295	1374
22-Nov	326	1395
22-Dec	356	1406

Dates:
22 March, June, Sept and Dec represent the equinox and solstices (they vary a little with leap years and all).
5 Jan is solar maximum, 5 July is solar minimum.
The rest fill in the monthly pattern.
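The commenter's cosine fit transcribes directly to code and reproduces the table above to within a watt or so (this is a transcription of the Excel formula as given, not an independent fit):

```python
import math

def toa_radiation(doy):
    """Cosine fit to SORCE TOA irradiance, W/m^2 (DOY = 1..366)."""
    return 1362.36 + 46.142 * math.cos(0.0167299 * doy + 0.03150896)

# Spot-check a few rows of the table above (day-of-year, tabulated value).
for doy, tabulated in [(5, 1408), (173, 1317), (186, 1316), (356, 1406)]:
    print(doy, round(toa_radiation(doy)), tabulated)
```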

Reply to  rgbatduke
December 4, 2014 11:13 pm

“…we are talking about 90 W/m^2 difference here. That is an easy 30 to 40 times the entire change in forcing expected from CO_2,”
More like 200 times the forcing from CO2.
References: https://geoscienceenvironment.wordpress.com/2014/09/04/the-emperors-of-climate-alarmism-wear-no-clothes/

Reply to  rgbatduke
December 5, 2014 12:49 am

Willis writes “The change from a doubling of CO2 is estimated at 3.7 W/m2, averaged over the earth’s surface.”
I would say the change from a doubling of CO2 was actually supposed to be about 1.1W …and then you add (positively assumed) feedbacks to get to the 3.7W. There are no feedbacks applied to the 22W.

richard verney
Reply to  rgbatduke
December 5, 2014 4:33 am

I have not looked at it in detail, but I always presumed that the 20 W/m^2 variation was based upon the claimed average of 342 W/m^2 in the K & T energy budget, which is about 5.85%, but now I am beginning to question the correctness of that presumption.
I am not entirely sure how K & T get the claimed average 342 W/m^2 from the 1412 W/m^2 in January and 1321 W/m^2 in July TOA figures, but once again, I presumed that it was incidental to some weighting from the elliptical nature of the orbit.

John Finn
Reply to  rgbatduke
December 5, 2014 8:08 am

TimTheToolMan December 5, 2014 at 12:49 am
Willis writes “The change from a doubling of CO2 is estimated at 3.7 W/m2, averaged over the earth’s surface.”
I would say the change from a doubling of CO2 was actually supposed to be about 1.1W …and then you add (positively assumed) feedbacks to get to the 3.7W.

No – the forcing for 2xCO2 is ~3.7 W/m2. Myhre et al provide a quick and easy formula which gives a pretty close approximation, i.e. Forcing = 5.35 x ln(2) for the response to a doubling. If you want to estimate the forcing to date since ~1850, use 5.35 x ln(400/285), where the current level = 400 ppm and the 1850 level = 285 ppm.
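The Myhre et al. approximation in code (the 285 ppm and 400 ppm endpoints are the commenter's figures):

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Myhre et al. simplified expression for CO2 forcing, W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"doubling:       {co2_forcing(2, 1):.2f} W/m^2")      # ~3.71
print(f"285 -> 400 ppm: {co2_forcing(400, 285):.2f} W/m^2")  # ~1.81
```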

RACookPE1978
Editor
Reply to  rgbatduke
December 5, 2014 10:29 am

Willis Eschenbach
December 4, 2014 at 11:23 pm
(replying to) Frederick Colbourne December 4, 2014 at 11:13 pm
… The change from a doubling of CO2 is estimated at 3.7 W/m2, averaged over the earth’s surface. The 90 W/m2 is NOT averaged over the earths surface. Once we do that by dividing 90 W/m2 by 4, to give us an apples to apples comparison, it reduces to about 22 W/m2.

While I disagree strongly with this (widely-used) flat-earth approximation, the areas for a spherical globe of radius 6371 km for the latitude bands you've used above follow:
Notice, please, the very small relative area of the Arctic Ocean (only 14 Mkm^2 between latitude 70 north and the pole) compared to the huge equatorial band between 23.5 south and 23.5 north (203.4 Mkm^2), and the even larger mid-latitude band between 45 south and 45 north (360.7 Mkm^2).

radius = 6371 km
Area-Total Earth = 510.1 Mkm^2
Band            Area (Mkm^2)
60 N to Pole       34.2
20 N to 60 N      133.7
20 S to 20 N      174.5
20 S to 60 S      133.7
60 S to Pole       34.2
60 N to 60 S      441.7

Now, we need to address the percent of each latitude band that is land, and what area is ocean, but that comes after the decision of what latitude bands are chosen.
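The band areas in the table follow from the spherical-zone formula, area = 2πR²(sin φ2 − sin φ1); a quick check:

```python
import math

R_KM = 6371.0  # earth radius used in the comment

def band_area_mkm2(lat1_deg, lat2_deg):
    """Area between two latitudes on a sphere, in millions of km^2."""
    s1 = math.sin(math.radians(lat1_deg))
    s2 = math.sin(math.radians(lat2_deg))
    return 2 * math.pi * R_KM**2 * abs(s2 - s1) / 1e6

print(band_area_mkm2(-90, 90))  # total earth, ~510.1
print(band_area_mkm2(60, 90))   # ~34.2
print(band_area_mkm2(20, 60))   # ~133.7 (by symmetry, same for 20S-60S)
print(band_area_mkm2(-20, 20))  # ~174.5
```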

Crispin in Waterloo
Reply to  rgbatduke
December 5, 2014 10:45 am

In reply to Willis and RGB
“And dividing 22 by 3.7 gives us a value of about six times the estimated forcing from a doubling of CO2.”
Is it not correct that the figure of 3.7 includes 2.7 degrees of water vapour feedback? Isn't it agreed the CO2 itself is about 1 degree?
The feedbacks are not in evidence. Assuming there are no powerful negative feedbacks, it means the calculation is really 22/1 = 22.

Reply to  rgbatduke
December 5, 2014 2:07 pm

John, Yes, my bad, I said doubling but I should have simply said “change”.

Auto
Reply to  rgbatduke
December 5, 2014 2:30 pm

So, based on the above, there is at least a suggestion that the ‘science is – in fact – not entirely settled; definitions, understanding, magnitude of effects and all . . .’
Goodness me!
Have the Menn I/C global warming (“we can Menn-make it”) exceeded their briefs?
I would suggest that there may be a plausible case that they are, in fact, busy trying to feather their own nests, even as the whole meme is going down the porcelain telephone . . . .
Mildly unsurprised,
Auto

Frank
Reply to  rgbatduke
December 6, 2014 6:13 am

RGB: A change in solar flux of 90 W/m2 needs to be divided by 4 and then multiplied by 0.7 (1-albedo) to convert it to a global solar forcing of about 16 W/m2.
Willis’s CERES data is the TOA radiative imbalance: TSI – reflected SWR – OLR. It has been adjusted so that the average imbalance agrees with the ARGO data, which is why it runs exactly down the middle of Willis’s CERES/ARGO plot. When the earth is closest to the sun, the snow cover in the NH is at its greatest, and reflected SWR from clear skies is also at its greatest. This increases the 16 W/m2 forcing (radiative imbalance) from ellipticity by about another 5 W/m2.
http://www.pnas.org/content/110/19/7568.full
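Frank's first step, spelled out (the 0.3 albedo is the round value implied by his (1-albedo) factor of 0.7, not a number from the post):

```python
# Globally averaged solar forcing from the ~90 W/m^2 perihelion-aphelion
# swing: divide by 4 (sphere vs. intercepted disc), multiply by (1 - albedo).
swing_wm2 = 90.0
albedo = 0.3   # assumed round value, consistent with the comment's 0.7 factor

global_forcing = swing_wm2 / 4 * (1 - albedo)
print(f"{global_forcing:.2f} W/m^2")   # 15.75, i.e. "about 16"
```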

LewSkannen
December 4, 2014 7:21 pm

One ARGO for every 300,000 cubic km of water. I think that if they travel constantly at about 400km/h 24/7 they will be able to visit every cubic km once a month.
Yep, that sounds feasible…

RockyRoad
Reply to  LewSkannen
December 4, 2014 10:46 pm

And yet the buoys travel with the water, so they’re not traveling horizontally through the water between readings. Hence, they measure only a very small column of water as they ascend. Those messing with the data make assumptions about the rest of the 300,000 cubic km of water that can’t be statistically projected or substantiated.

Crispin in Waterloo
Reply to  RockyRoad
December 5, 2014 10:50 am

Rocky, that is a well-observed point: they are re-measuring the same water they float with, unless there are flows in different directions at different depths.
Suppose the flow was east at one depth and west at another, each occupying half the vertical depth, at the same speed. The float would cycle up and down, remaining in the same physical region all the time, measuring the passing current. Not bad. If the current was 99% in one direction, it would be re-measuring the same water each 10 days, following it with the current. That would tell us nothing about what is going on 20 or 200 km away.

Reply to  RockyRoad
December 5, 2014 11:59 am

Amen Rocky. I’ve been saying that for a while. The drift data should be treated as spot data where the float gets to depth and then ignored since it’s moving with the water… assuming they stay at the prescribed depth and accurately report that, which I understand has been problematic since the program began.

JR
Reply to  RockyRoad
December 6, 2014 1:11 pm

So does this mean that they will all end up at that floating garbage heap in the Pacific? This feels a lot like the games they used to play with weather stations before they got the satellites that measure surface temperature, which showed the nonsense of using the weather stations to depict global averages.

Will Nelson
Reply to  LewSkannen
December 5, 2014 3:38 pm

Luckily that’s only km/hr and not knots

Reply to  LewSkannen
December 5, 2014 8:12 pm

And averaging all those measurements from different parts of the ocean is, like averaging surface measurements, physically meaningless. Intensive properties, people.

martinbrumby
December 4, 2014 7:25 pm

Of course there must be some heat coming up from the bottom. How much do we understand that?

Reply to  martinbrumby
December 5, 2014 3:49 am

A bit less than 0.1 watts per m2

richard verney
Reply to  Fernando Leanme
December 5, 2014 4:56 am

And what is that based upon? E.g., energy from underwater volcanoes, fissures, subduction zones, the fact that the seabed is nearer the core and thereby receives more heat, the fact that the oceanic crust is far thinner than the continental crust, etc.?
At best it is a guesstimate, with significant errors, since we have never attempted the collection of scientific observational data, so we are not in a position to proffer a scientific figure.
This becomes important when one claims that there is some unaccounted-for 0.75 watts per m2 of energy. If your claimed figure of 0.1 is not 0.1 but is instead 0.3, it may account for almost 50% of the alleged missing energy.
This is a point I was making with the K & T energy budget. You see an input solar figure of between 340 and 342 W/m^2, whereas it would appear (even ignoring the variance of the 11 year solar cycle) that the figure should perhaps be 341.625 W/m^2. Now if every figure used in the K & T energy budget is +/- 0.375 W/m^2, how can you begin to claim that there is some missing heat/energy of the orders of magnitude being claimed by the warmists? Some of the errors may cancel each other out, but some may compound.
When you are looking for missing energy in the system of approximately 1 W/m^2, every 0.1 W/m^2 of measurement error, or energy source not accounted for, becomes material.

Billy Liar
Reply to  Fernando Leanme
December 5, 2014 12:42 pm

Richard Verney,
Try the paper on this page:
Hofmeister and Criss 2003, page 170 table 2.
63 mW/m² seems to be the going rate for mean oceanic flux.

stan stendera
December 4, 2014 7:26 pm

I completely understand why you are perplexed. So am I, but no reflection on your post. And the “science is settled????”

Warren in New Zealand
December 4, 2014 7:27 pm

Now, there were several interesting things in the presentation. The first was a total surprise to me. We hear a lot about how the heat is “hiding” in the ocean. But what I didn’t know was that according to the Argo floats, every bit of the warming is happening in the southern extratropical ocean, while the oceans of both the tropics and the northern hemisphere are actually cooling … color me puzzled.
Willis, I remember reading here a few years back about the oscillation between the Antarctic and Arctic poles – how each basically cools then warms in alternating cycles. Could this be what we are seeing here?
Alternatively, your Thermostat hypothesis in action, using the ocean currents to move excess heat to the Poles to cool, in conjunction with the convection currents in the atmosphere?

The Ghost Of Big Jim Cooley
Reply to  Warren in New Zealand
December 4, 2014 11:46 pm

Yes, the poles apparently warm and cool alternately. I believe I read a piece by a meteorologist a while back that said that. He said that when the Arctic starts to revert to ice-build, the Antarctic will go through a warming period – and the attention of the warmist liars will shift from the Arctic to the Antarctic.

Stephen Richards
Reply to  The Ghost Of Big Jim Cooley
December 5, 2014 2:02 am

Joe Bastardi weatherbell analytics.

Quinn the Eskimo
December 4, 2014 7:30 pm

Perhaps a naive question, but how do zonal trends in Fig. 2 of 5.7 x 10^22 J/decade, -0.1 x 10^22 J/decade and -0.4 x 10^22 J/decade average to a global trend of 6.3 x 10^22 J/decade in Fig. 7?

Quinn the Eskimo
Reply to  Willis Eschenbach
December 4, 2014 7:49 pm

Well, I thought about that, but it still don’t make no sense. Granting the southern zone is way bigger, it seems odd, even to an innumerate boob such as myself, that averaging in negative trend values in tropics and northern zone could elevate the global average trend above that of the Southern Zone. That would be some pretty darn fancy math, if you ask me.

Quinn the Eskimo
Reply to  Willis Eschenbach
December 4, 2014 7:54 pm

Let me put it a different way. I know I don’t get out much, but I have never seen an average of values that was outside the range of the values.

Reply to  Willis Eschenbach
December 5, 2014 6:03 am

experimenter expectation effect. Clearly it is impossible to average 5.7, -0.1 and -0.4 and get 6.3, regardless of the size of the 3 basins. The correct answer is probably 0.63 (when all results are scaled to the same order of magnitude).
however, in Climate Science the correctness of the answer depends not on the math, it depends on what the scientists expected to see. if the answer matches what they expect, they don’t bother to check the math. if it doesn’t match, they change the math until it does.
in this case they expected to see warming, so they didn’t bother to check the reasonableness of their results, once they delivered the expected warming.
what this result shows is the problem in doing science without the necessary experimental controls to prevent human bias.

garymount
Reply to  Willis Eschenbach
December 8, 2014 9:29 pm

Fig. 3 appears to have 9 additional months of the year 2013 added compared to Fig. 2.

Mozman
Reply to  Quinn the Eskimo
December 5, 2014 9:33 am

They don’t *average* to 6.3 x 10^22, they *sum* to it. The missing 1.1 x 10^22 is a combination of heat gain in the polar oceans (80 – 90 degrees) and rounding error.
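For the record, the zonal arithmetic (trend values read off Figure 2 and the global figure from Figure 3, all in units of 10^22 joules per decade):

```python
# Zonal OHC trends read off Figure 2, in units of 10^22 joules per decade.
southern, tropics, northern = 5.7, -0.1, -0.4
global_trend = 6.3   # the presentation's global figure

zonal_sum = round(southern + tropics + northern, 1)
gap = round(global_trend - zonal_sum, 1)
print(zonal_sum, gap)   # the zones sum to 5.2, leaving 1.1 unaccounted for
```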

Crispin in Waterloo
Reply to  Mozman
December 5, 2014 11:19 am

Not to toss in a small spanner or anything, but I was reading on another site this week a claim that the Antarctic ice movement from glaciers into the ocean is 91.5 cu km/year. It was so badly worded that I had to ask if that was the correct interpretation of their text, but I think that is what they meant.
I noted that if so, then heating 91.5 cu km of -20 C ice to +4 C would absorb 1.27 x 10^19 joules of ocean heat (per year, not per decade). This might in part account for the expanding sea ice around Antarctica (if that volume of glacier ice entering the sea was increasing). They deleted my comment, the only one at the time.
If all the glaciers entering the sea were considered (no idea what that total is) the number might start to influence the total heat gained.
Yes, ocean heat is actually a net gain, but it means heat is entering and being lost to melting ice, and it has to come from somewhere. As the transport mechanism of the heat liberated by creating replacement glacier snow above the land involves shedding heat higher and closer to the TOA, at least in part it bypasses the need to radiate from the surface.
The heat needed to account for melting all the NH seasonal snow and ice is considerable. It would take maybe 1×10^25 J to melt all the world’s permanent ice. That is enough to hold the oceans at a constant temp for 1600 years (if the 6.3×10^22 gain rate per decade is real).
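For what it's worth, this back-of-envelope figure can be checked with textbook constants (all values below are my nominal assumptions: ice density 917 kg/m^3, specific heats 2100 and 4186 J/kg/K, latent heat 334 kJ/kg). It lands on the same order of magnitude as the 1.27 x 10^19 J quoted above; the exact figure depends on the constants chosen:

```python
# Back-of-envelope check of the glacier-ice heat sink, with nominal constants.
VOLUME_KM3 = 91.5      # claimed annual glacier discharge into the ocean
RHO_ICE = 917.0        # kg/m^3 (assumed)
C_ICE = 2100.0         # J/(kg K), specific heat of ice (assumed)
C_WATER = 4186.0       # J/(kg K), specific heat of water (assumed)
L_FUSION = 334e3       # J/kg, latent heat of fusion (assumed)

mass = VOLUME_KM3 * 1e9 * RHO_ICE                      # kg of ice per year
energy = mass * (C_ICE * 20 + L_FUSION + C_WATER * 4)  # warm, melt, warm to +4 C
print(f"heat absorbed: {energy:.2e} J/year")           # ~3e19 J/year with these constants
```

Either way, ~10^19 J/year is well under one percent of the quoted 6.3 x 10^22 J per decade, supporting the "lunch money" view expressed elsewhere in the thread.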

Reply to  Crispin in Waterloo
December 5, 2014 11:57 am

So the processes of evaporation/condensation and freezing/thawing are the prime controllers of the Earth’s surface temperature. Also, increasing CO2 near the TOA should increase the rate of radiation to space more than to the surface. This effect is measurable at the poles, where there are inversions (air warmer than the surface).

DB
Reply to  Mozman
December 9, 2014 8:18 am

The global OHC slide in the presentation reads ‘Global, 60°S-60°N’ and shows a trend of 63 ZJ/decade, so the difference isn’t in the polar oceans.

hunter
December 4, 2014 7:36 pm

Interesting and creative analysis, and it seems to be well rooted in the data.
Isn’t it amazing that the error bars are always missing from so much of what is used to promote the climate crisis?

Hugh
Reply to  hunter
December 5, 2014 2:00 pm

The error bars are missing, since these people do not make errors?
Besides, it would be humiliating to admit that one does not have the faintest idea how good the results are. One can also do it by stating ‘medium confidence, expert evaluation’ meaning ‘this is what I think’. Not science, really.

December 4, 2014 7:42 pm

How are we to understand this perplexitude?
Maybe this has something to do with it:
Correcting Ocean Cooling

G. Karst
Reply to  Steve Case
December 5, 2014 8:14 am

My mind reels! GK

Billy Liar
Reply to  Steve Case
December 5, 2014 12:55 pm

Ah, the old cooling the past trick to make the present look warmer.

ossqss
December 4, 2014 7:44 pm

What percent of the global salt water is being measured by these again? How does that coverage look on a map at depth?
I am also checking on the proper use of the word perplexitude! 😉
Thanks for the continued effort to trust but verify, Willis!

u.k.(us)
Reply to  Willis Eschenbach
December 4, 2014 10:59 pm

Cleared that right up.
I’ll move on now that that has been conslained.

Reply to  Willis Eschenbach
December 5, 2014 12:08 pm

Are these terms copyrighted or public domain? I could swear I saw one of those in a Mann paper and I don’t wish to be sued for using it.

martinbrumby
December 4, 2014 7:44 pm

Supposed to be uniformly pretty cold in the depths of the oceans.
But the Earth’s mantle is quite toasty. Hey, I’m sure they increased their estimate of the temperature recently.
Then there are all the submerged volcanoes, sea vents and such like.
What happens if an Argo float drifts near a plume from them?
Will it vary temperature by a few hundredths of a degree, maybe?

Reply to  martinbrumby
December 5, 2014 12:24 am

Well, that is an interesting question. If an Argo float gets caught in the flows from a vent, and this is expected behavior, then it will read too high.

Paul Mackey
Reply to  M Simon
December 5, 2014 1:24 am

That would be a systematic error…..

Reply to  martinbrumby
December 5, 2014 12:28 am

The tendency will be for the Argo to center on the plume.
http://www.aeronautics.nasa.gov/pdf/bernoulli_principle_k-4.pdf

biff33
December 4, 2014 7:45 pm

Line 5: “measurements of time and salinity” should be “measurements of temp and salinity”, no?

garymount
Reply to  biff33
December 4, 2014 9:12 pm

Aw, you have spoiled my fun; I was going to mention that.
Well, all is not lost, I’m going to mention that I worked on (during the design and construction of) underwater Remotely Operated Vehicles (ROVs) that can reach depths of 6,000 meters. I mentioned only 3,000 recently elsewhere on this blog.
http://www.ise.bc.ca/rov.html

Bill Illis
December 4, 2014 7:57 pm

I think this is generally correct.
The amount of energy that the Earth is accumulating is just a small number.
1.61 X 10^22 joules per year = 1.0 Watt/m2/year across the whole Earth surface.
1.13 X 10^22 joules per year = 1.0 Watt/m2/year over the ocean surface only.
The Sun provides us with 386.4 x 10^22 joules per year. The Earth is emitting back to space anywhere from that same 386.4 x 10^22 joules per year (Ceres) down to something as low as 385.8 x 10^22 joules per year (Argo and other estimates).
Put those numbers to memory for any future commenting on this topic.
Ceres says there has been no change in the net balance of energy since it was put into orbit. The Argo floats have 0.6 Watts/m2/year accumulating. Sometimes the Ceres operators will say the combined Argo/Ceres data indicates 0.5 or 0.6 W/m2/year is accumulating. (Actually, I think Ceres is missing 5.0 W/m2/year in its numbers somewhere so they use the Argo float data as the base double-check).
Big deal. The oceans warm by 0.3C in 100 years. Nothing happens to the atmosphere temperatures, that is what the numbers say.
The earliest estimates of ocean heat content accumulation in the theory were about 1.4 W/m2/year. If we are getting just 0.5 or 0.6 W/m2/year showing up, then there is a problem.
In addition, where is the GHG forcing showing up, then? The IPCC has 2.3 W/m2/year of human-induced forcing, and there should be another 1.7 W/m2/year showing up from the feedbacks.
But these numbers are not showing up in the recorded temperatures (the actual ones, that is), not in the Argo measurements, and not in the Ceres measurements.
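A sketch of the unit conversions behind these figures (assuming nominal areas of 5.1 x 10^14 m^2 for the Earth and 3.6 x 10^14 m^2 for the ocean surface; small differences in assumed area move the last digit):

```python
# Convert a sustained average flux (W/m^2) into joules accumulated per year.
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14    # m^2, nominal
OCEAN_AREA = 3.6e14    # m^2, nominal

def wm2_to_joules_per_year(flux_wm2, area_m2):
    """Average flux times area times seconds in a year."""
    return flux_wm2 * area_m2 * SECONDS_PER_YEAR

print(f"{wm2_to_joules_per_year(1.0, EARTH_AREA):.2e} J/yr")    # ~1.61e22, whole Earth
print(f"{wm2_to_joules_per_year(1.0, OCEAN_AREA):.2e} J/yr")    # ~1.14e22, oceans only
print(f"{wm2_to_joules_per_year(240.0, EARTH_AREA):.2e} J/yr")  # ~3.86e24, i.e. ~386 x 10^22
```

The ~240 W/m^2 line is the absorbed-solar comparison: it reproduces the 386.4 x 10^22 J/year figure quoted above.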

Quinn the Eskimo
Reply to  Bill Illis
December 4, 2014 8:08 pm

Nor is there any hotspot. The AGW conjecture is comprehensively busted by observations along almost every conceivable metric.

Rud Istvan
Reply to  Quinn the Eskimo
December 5, 2014 3:59 pm

Quinn, your above on hemispheric averages was priceless.
And your equatorial troposphere hot spot comment is more than just very good. See Dr. Christy’s APU testimony earlier this year. Recounted in essay Humidity is still Wet, along with underlying physical explanations hypothesized by Lindzen and experimentally shown by Willis Eschenbach elsewhere.
You get out and about more than you aver. Must be that melting Arctic ice thingy… Regards, and hope you enjoy the rest of Blowing Smoke, including essay Northwest Passage.

Hugh
Reply to  Willis Eschenbach
December 5, 2014 2:13 pm

This is too common in scientific competition. People try to make their results fit an authority’s results in case their own are too imprecise and thus not sexy enough. The Argo people might be suffering from the same disease. It is sad, because good findings are findings that are unexpected, not what the guru expected.
Precise but not accurate: oh dear, my old mum can do better than that.

David A
Reply to  Bill Illis
December 5, 2014 6:12 am

“Big deal. The oceans warm by 0.3C in 100 years. Nothing happens to the atmosphere temperatures, that is what the numbers say. ”
================================================
I think this only includes about half the total ocean volume, so perhaps 0.15 C per century, with a whole bunch of assumptions.

John Finn
Reply to  Bill Illis
December 5, 2014 8:15 am

In addition, where is the GHG forcing showing up then. The IPCC has 2.3 W/m2/year of human-induced forcing and there should be another 1.7 W/m2/year showing up from the feedbacks.

No there shouldn’t. I think you’re getting confused.

Crispin in Waterloo
Reply to  Bill Illis
December 5, 2014 11:40 am

Hang on, is it n x 10^22 J/year or per decade?

Steve in SC
December 4, 2014 8:02 pm

I suspect there is some sort of flakiness with the joules part.
The fact that you can’t measure joules directly is an instant clue.
Now, since they have a temperature and a salinity profile versus depth, they know the density profile, so the joules number is calculable. I think what they have given you for data is some sort of average.
Having been in the precision temperature measurement business for 30 years, I automatically get very suspicious of any temperature accuracy claim better than +/- 0.5 deg C. Averages can be calculated to fantastic precision, but the accuracy is always the accuracy of your worst sensor. Been there, done that, have my FDA scars to prove it.

Paul Mackey
Reply to  Steve in SC
December 5, 2014 1:28 am

Well said, Sir!

DEEBEE
Reply to  Steve in SC
December 5, 2014 2:17 am

C’mon Steve get with the ensemble.

DD More
Reply to  Steve in SC
December 5, 2014 12:43 pm

So you also think the Argo boys and buoys might be blowing a little smoke in their FAQ here
http://www.argo.ucsd.edu/FAQ.html
* How accurate is the Argo data?
The temperatures in the Argo profiles are accurate to ± 0.002°C and pressures are accurate to ± 2.4 dbar. For salinity, there are two answers. The data delivered in real time are sometimes affected by sensor drift. For many floats this drift is small, and the uncorrected salinities are accurate to ± 0.01 psu.

RH
Reply to  Steve in SC
December 5, 2014 1:53 pm

From the Argo NKE manual:
6.3.4 Temperature Coding
Depending upon the value of the first bit, it is followed by either 10 or 15 data bits. If the difference between the current temperature measurement and the previous temperature measurement (Tn - Tn-1) is included in the closed interval [-0.512 C, +0.511 C], the difference (Tn - Tn-1) is coded into 10 bits two’s-complement.
Otherwise the measurement is absolutely coded in 15 bits with an offset of -2 °C. The temperature is reported in the range -2 C to +30.767 C, with a resolution of 0.001 C.

December 4, 2014 8:11 pm

The Argo floats’ accuracy is described as: “The temperatures in the Argo profiles are accurate to ± 0.002°C and pressures are accurate to ± 2.4dbar.” 2.4 dbar is about 2.5 meters. So in areas where the temperature gradient is more than 0.002°C/2.5m, or 0.8°C/1000m, the errors in depth swamp the errors in temperature. The tropical ocean has a difference between surface water and 1000 m water of about 20°C or more, which makes the temperature error due to depth error 25 times greater than the temperature error itself, or +/- 0.05°C.
Refs:
1) http://www.argo.ucsd.edu/Data_FAQ.html#accurate
2) http://upload.wikimedia.org/wikipedia/commons/e/e7/Temperaturunterschiede_Ozeane.png
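The error propagation described above amounts to a one-line rule; a minimal sketch (the function name and the worst-case `max` combination are my own simplification):

```python
# With a vertical temperature gradient, depth uncertainty dominates wherever
# gradient * depth_error exceeds the sensor's own accuracy. Thread's numbers.

def effective_temp_error(sensor_err_c, gradient_c_per_m, depth_err_m):
    """Worst-case combination of sensor error and gradient-induced error."""
    return max(sensor_err_c, gradient_c_per_m * depth_err_m)

# Tropical thermocline: ~20 C change over 1000 m, +/- 2.5 m depth error
print(effective_temp_error(0.002, 20.0 / 1000.0, 2.5))   # 0.05
# Near-isothermal abyss: the sensor accuracy dominates
print(effective_temp_error(0.002, 0.0005, 2.5))          # 0.002
```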

Spoon
Reply to  UnfrozenCavemanMD
December 4, 2014 9:05 pm

Actually, the temperature accuracy is 0.005 degC by design. The 0.002 degC accuracy is an observation of a few of the probes that needed to be recovered and were recalibrated.
http://www.terrapub.co.jp/journals/JO/pdf/6002/60020253.pdf
I’m not sure that your methodology is correct, but if it is your error would be 2.5 times bigger at +/- 0.125 degC.

Reply to  Spoon
December 5, 2014 12:47 pm

I believe the Apex floats use the Sea Bird temperature modules with a sensor system that has a tolerance of +/-0.01°C in the range above 1.6 °C or so. How can they get 0.005°C accuracy from that?

Reply to  Spoon
December 5, 2014 3:35 pm

Not so, Spoon. If the temperature error is determined by the depth error in the presence of a temperature gradient, then whether the temperature sensor is accurate to +/- 0.002 or +/- 0.005 ceases to matter because a temperature gradient of 20°C per 1000m (or 0.02°C/m), and a depth error of +/-2.5m yields a temperature-at-depth error of +/-0.05°C even for a perfect temperature sensor, or for any sensor with an accuracy significantly better than +/-0.05°C.

Spoon
Reply to  Spoon
December 6, 2014 3:25 am

UnfrozenCavemanMD, I agree that my huge error is not correct. On the other hand, without more information on the hydrostatic pressure gauge we need to be careful. I see your link that states the accuracy, and I agree with your conversion from pressure accuracy to depth accuracy. The +/- 2.5 meters is an absolute error, not a cumulative one. The gauge will report a depth within 2.5 meters of where it actually is, but it will be a constant offset: instead of 2000 m it will be, say, 2001.25 m +/- 1.25 m. I don’t actually see that this makes the ARGO data useless.

December 4, 2014 8:20 pm

The lower half of the ocean volume, which the Argo floats do not measure, must be doing something on decadal scales. The AMOC, anyone? 1/100th of a degree C is a lot of heat for that immense volume, but measured as a temperature delta it is like trying to measure a weak signal… and the Argo designers don’t even try.

Rud Istvan
Reply to  Joel O’Bryan
December 5, 2014 12:00 pm

All of the abyssal water is very close to 4 C, the temperature of maximum density of seawater, which is why that is what is found on ocean bottoms. But that is neither steady state nor uniform, due to the global thermohaline circulation. At temperatures around 4 C, salinity affects seawater density more than temperature does. The deep global thermohaline circulation is powered by seasonal polar ice formation. Salinity increases as brine is exuded from freshly formed sea ice, causing that polar water to sink, in turn drawing surface water poleward as ice forms. This seasonal process is a major poleward ‘surface’ heat transport mechanism. That process ‘stops’ as seasonal sea ice melts. There are few if any reliable measurements of all this.
One possibility for the ‘increasing’ Southern Ocean ARGO ‘heat’ Willis notes is simply the observed increase in seasonal Antarctic sea ice over the ARGO era. Similar but converse for ARGO Arctic ‘heat’: less ice, less sinking cold water, less warmer water drawn from lower latitudes… Of course that reasoning does not explain why the polar ice varied over the ARGO era…
All ‘settled science’.

Paul Linsay
December 4, 2014 8:21 pm

A friend of mine cleverly joined the Navy during Viet Nam and spent his tour of duty on submarine warfare. His job was to look for acoustic channels where the boomers could hide. These are undersea currents of temperature and salinity different from the surrounding ocean that reflect sonar, so that a sub traveling in the channel is hidden from an enemy sub. It turns out the oceans are full of these currents. It would be an interesting exercise to figure out what effect these channels have on the energy budget of the oceans.

Reply to  Paul Linsay
December 4, 2014 8:44 pm

The US Navy has a vast amount of temp and salinity data from the ocean’s top 1,500 feet or so, going back to the 50’s… but it is all classified, since the geoposition data would elucidate patrol and SLBM cruise areas as well as typical diving/operating depths.

Claude Harvey
December 4, 2014 8:23 pm

As I recall, calculations correlating the rate of sea level rise with total sea volume indicate that if temps at the depths Argo is sampling are rising as their measurements indicate, temperatures at depths beyond Argo’s reach must be cooling. It would seem to be either that, or the Argo results are in error.

Reply to  Claude Harvey
December 4, 2014 8:47 pm

It would seem to me that we should respect the Argo data as data, and look to the unmeasured lower half of the ocean volume for more answers to our perplexitude.

RACookPE1978
Editor
December 4, 2014 8:29 pm

Willis:
I do not know if I am going to throw something in the “perplexitude” that will illuminate the problem/issue/difference; or if it will only compound the problem.
You are looking at the difference in energy measured down south between 20 south and 60 south latitude, right? Over a period of years between 2002 and 2014, right?
OK. Ignore the north arctic sea ice right now – it varies between 70 north and 80 north each year over a very limited area locked in by land, and reflects (or allows to be absorbed) very little sunlight in the total scheme of things. And, it is NOT measured by the ARGO buoys available anyway, since they do not float up through the Bering Strait, through the Arctic ocean then out the Denmark Strait or outboard of Greenland where the Arctic sea ice is concentrated.
But.
The Antarctic sea ice does cycle from a yearly low in mid-February of about 2.0 Mkm^2 up to a yearly maximum of 13.4 to 14.0 Mkm^2 in mid-September. Unlike the Arctic sea ice, the Antarctic sea ice cycles AROUND the 14.0 Mkm^2 of Antarctica and its 1.5 Mkm^2 of fixed ice shelves. Thus, all of the Antarctic sea ice cycles between 67.5 south (at minimum) and 58 - 59 south latitude at maximum.
Relevant? Well, the Antarctic sea ice anomaly increased slowly but steadily from 1992 to 2011. Then in very early 2011 the Antarctic sea ice began expanding very rapidly, gaining in June this year an excess of over 2.06 million sq kilometers (Mkm^2).
Is this Antarctic sea ice gain important? Well, 2.06 Mkm^2 is just about the same area as the entire area of Greenland (2.16 Mkm^2).
The entire area of Greenland was “added” to the “regular” sea ice right at the 58-59-60 south latitude of the southern seas. Sure, that record-setting “new” ice area equal to the size of Greenland was present for only a few weeks this early summer, and we have gone back down to a more “normal” Antarctic sea ice anomaly of “only” 1.0 Mkm^2 (today was 0.726 Mkm^2 for example): but the anomaly has been steadily and near continuously GREATER than 1.0 Mkm^2 for three years now.
If Trenberth’s “ice reflects energy, open water absorbs energy” theory is correct (and it fails for most months in the Arctic at latitude 75-80 north!), then the heat is not here anymore.
Half of Greenland has now appeared down south between latitude 67 south and 58 south every day from Jan 2011 through Dec 2014; it shows no sign of abating or moderating, and is still increasing. The missing heat has been reflected out to space, if NASA-GISS/Hansen-Trenberth-Serreze are right, correct?
But what happened to temperature during this period? Is their precious “arctic amplification” theory proved even more dead wrong, since we have now “run an experiment” on the earth and the opposite of what theory predicted happened?
– Ice area increased dramatically near the southern ocean over the period 2011/01 – 2014/12.
– ARGO ocean temperatures in the southern ocean around that new ice area increased in the same years the ice area was increasing most rapidly. The other ARGO ocean temperatures went down a little in that same time.
– Average land temperatures across the entire Antarctic continent, between the new ice area and the southern ocean, have been very slowly but very steadily going down over the entire period of 1996 through 2014.

Leonard Weinstein
December 4, 2014 8:34 pm

Willis,
The Antarctic sea ice has averaged larger the last several years, and even though it mostly melts in their summer, the year-round average is larger than in previous years. The energy of fusion for the larger sea ice average may be at least part of the answer.

Reply to  Leonard Weinstein
December 4, 2014 10:24 pm

Ice is lunch money in the energy budget. The enthalpy of fusion is much lower than that of vaporization to begin with, and the volumes of seasonal ice compared with the total ocean are trivial. Even the total estimated ice loss from 1976 until before the recent resurgence is insignificant.

Dsystem
December 4, 2014 8:35 pm

OK a minor point, but it never ceases to amaze me how scientists can so easily mangle a simple system. Why not use 6 ZJ instead of 0.6 x 10^22 J? The SI system of units caters for very large and very small quantities. The zettajoule (ZJ) is 10^21 joules.
Thankfully, they’re using joules as the unit of energy. See Richard Feynman’s humorous comments on scientists and units: http://youtu.be/roX2NXDUTsM

December 4, 2014 8:38 pm

Willis. I stopped reading at your conversion from heat to temperature. At the trivially small changes we are discussing here, you need to factor pressure, salinity, time and any number of other critical bits. If the ARGO program is to have any credibility, then show us the numbers. The raw data. If that isn’t available, then the rest is just speculation. Mining promoters selling moose pasture can’t get away with this kind of crap.

Reply to  John Eggert
December 4, 2014 8:39 pm

Just to be clear. The ARGO data is crap, not Willis’ interpretation.

Reply to  Willis Eschenbach
December 5, 2014 8:53 pm

Willis: I appreciate your reply. The ARGO site claims a stupendous accuracy for temperature. Just for reference, the best you can be assured of from a thermocouple is 0.5 kelvin. The accuracy for temperature claimed by the supplier of the ARGO floats is far better than 0.5%; at about 250 kelvin, 0.5% is more than 1 kelvin. The ARGO supplier claims (might be off a bit here) 0.01 kelvin. At those levels of precision, when measuring temperature you need to consider salinity and pressure, among other things.

Crispin in Waterloo
Reply to  Willis Eschenbach
December 8, 2014 6:25 am

Eggert
I am a bit mystified by your numbers there. I use a common lab instrument for collecting temperatures, a multiplexer, the Agilent 34972A with a thermocouple module. It can read thermocouples to 6.5 digits, though getting that degree of precision depends on the type of thermocouple. There are 2 wire and 4 wire thermocouples, palladium, gallium, blah, blah. Getting 0.001 degrees precision is not a big issue. Getting accuracy (by calibration and linearization) is not a big issue. That is why those components were chosen.
The accuracy of the calculated ocean heat content is (as described above by others) dependent on getting a known depth because they have to integrate the temperature across a volume. The temperature profile is nothing like linear with depth – the big changes are compressed toward the surface. This leads to an important observation: if the vertical error margin is the same for all depths (no one said it was) then the heat content error is much greater than that proposed above if the temperature change with depth is mostly near the surface and not linear with depth.
The point is to calculate a volume of water integrated with its vertical temperature profile to get a total heat content. If the depth is off by 0.1 m or 1.0 m there is a smaller or larger error in the calculated total heat content, right? Well, if that measurement zone is a vertical distance and the depth error is fixed, then for any given delta T (top to bottom) the error is larger if the zone is thinner, because the probe has ‘less idea’ where it is vertically as a percentage of the total depth of the zone in which the temperature is changing. As a result of the zone being ‘thin’, there is a proportionally larger error in the calculated heat content, agreed?
So the temperature precision x accuracy data is pretty good, but the depth precision x accuracy x temperature precision x accuracy, combined, is not as good as calculated above. Half of the temperature change takes place over much less than half the vertical field being measured. The net effect of the depth error is therefore more than twice ‘what it appears to be’ when the temperature change is assumed constant with depth. We can’t tell what the actual resolution of the system is (and therefore of ocean heat content) without knowing the temperature profile first. In the deeps, where the change is tiny, the calculated result is probably pretty good. But the upper ocean is where the change is claimed to be, and that is where the total error is highest. Hmmm…
For this reason the error in the OHC calculation may be larger than the claimed gain or loss. Oops. We don’t know which, because we have been presented here with less information than is required to make the calculation. The sources of ARGO data are hiding the true error by giving us the precision and accuracy of the individual components of the system, not the system itself in the context of the temperature profile.
I would be happy if someone were to show that this last statement is incorrect. I would be much more comfortable with the claims of changes in OHC.
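One way to put a number on this concern is a toy integration (entirely my own construction: an assumed exponential profile, a nominal heat capacity, and a systematic depth bias equal to the quoted pressure accuracy):

```python
# Toy experiment: how much does a systematic depth error move the calculated
# column heat content when most of the temperature change sits near the surface?
import math

RHO_CP = 4.1e6          # J/(m^3 K), nominal volumetric heat capacity of seawater

def temp(z):
    """Assumed profile: ~25 C at the surface decaying to ~4 C abyssal water."""
    return 4.0 + 21.0 * math.exp(-z / 150.0)

def column_heat(depth_offset=0.0, n=4000, zmax=2000.0):
    """Midpoint-rule integral of rho*cp*T over the 0-2000 m column,
    with an optional systematic error added to every sample depth."""
    dz = zmax / n
    return sum(RHO_CP * temp((i + 0.5) * dz + depth_offset) * dz
               for i in range(n))

# A systematic +2.5 m depth bias (the quoted pressure accuracy) attributes
# colder water to the column:
bias = column_heat(2.5) - column_heat(0.0)
print(f"heat-content shift from a 2.5 m depth bias: {bias:.2e} J/m^2")
```

With these assumptions, a systematic 2.5 m depth bias shifts the apparent column heat content by roughly 2 x 10^8 J/m^2, comparable to a full decade of the claimed 0.6 W/m^2 gain; random, zero-mean depth errors would largely average out across profiles.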

SteveP
Reply to  Willis Eschenbach
December 8, 2014 8:03 am

Crispin in Waterloo
December 8, 2014 at 6:25 am
Eggert
I am bit mystified by your numbers there. I use a common lab instrument for collecting temperatures, a multiplexer – the Agilent 34972A with a thermocouple module. It can read thermocouples to 6.5 digits though it is dependent on the type of thermocouple to get that degree of precision. There are 2 wire and 4 wire thermocouples, palladium, gallium blah, blah. Getting 0.001 degrees precision is not a big issue. Getting accuracy (by calibration and linearization) is not a big issue. That is why those components were chosen.
2-wire and 4-wire thermocouples, Crispin? I think you are perhaps mixing up thermocouples and RTDs? If you are talking about thermocouples then the 34972A accuracy is 1 degC for type J (see http://cp.literature.agilent.com/litweb/pdf/5965-5290EN.pdf page 16). Page 15 then gives a typical TC accuracy calculation (Example 3) which includes TC probe error and a total accuracy of 2.1 degC. Certainly PRTDs are more accurate and stable than TCs. But AFAIK the ARGO units use thermistors. Finally, just because an instrument has 6.5 digits of resolution (or even 8.5) does not mean that it is accurate. Certainly, the higher resolution a DMM has, the better its accuracy spec tends to be. But they are not the same thing. In our lab we have an 8.5 digit DMM. If I measure 10 V with it the reading might read 10.000;000;0, i.e. down to 0.1 uV resolution. However, at 10 V its accuracy spec at 23 degC +/- 5 deg is 80 uV.

Crispin in Waterloo
Reply to  Willis Eschenbach
December 8, 2014 7:27 pm

SteveP
Thank you for taking the trouble to check out the Agilent. We use them in several labs – wonderful instrument.
Convenient Data Logging Features
• Direct measurement of thermocouples, RTDs, thermistors, DC voltage, AC voltage, resistance, DC current, AC current, frequency, and period. So that is the control unit.
The board inserted that actually reads the thermocouples is a 34901, and it can handle 2- and 4-wire sensors.
Resolution for a thermistor is 0.08 over 230 degrees = 1/2875 of full scale.
Resolution RTD is 0.06 degrees over 800 degrees = 1/13,333 or 0.000075 of full scale. See the footnotes – it requires an external reference to that precision and that is 6.5 digits.
http://www.farnell.com/datasheets/1851621.pdf page 16
E-type thermocouples have a sensitivity of 60 microvolts/degree C at room temperature. There is a lovely picture of the errors after calibration caused mostly by rounding to the nearest microvolt at
http://www.mosaic-industries.com/embedded-systems/microcontroller-projects/temperature-measurement/thermocouple/type-e-calibration-table showing an error of less than ±0.01 C.
>I think you are perhaps mixing up thermocouples and RTDs?
We call it the thermocouple unit as does Agilent. It reads both.
>But AFAIK the ARGO units use thermistors.
I believe they do, and it is possible to do so with 0.01. I am thankful that I don’t have to work at that level; I only need 0.1. It seems the ordinary units using platinum resistance sensors are 0.01 with an accuracy of 0.04 or 0.05, depending on who is reading it. The Agilent is better than the Omega IMV.
>If I measure 10V with it the reading might read 10.000;000;0, i.e down to 0.1uV resolution. However, at 10V it’s accuracy spec at 23degC +/-5deg is 80uV.
I did not find the exact equivalent in the manual for the 34972A/34902 in the time I had.

SteveP
Reply to  Willis Eschenbach
December 9, 2014 3:44 am

Crispin in Waterloo
Hi Crispin
Thanks for all the info re the Agilent meter.
That is an interesting link to the Type E Thermocouple Calibration, thanks for that. Please feel free to correct me on this but as far as I can see this mathematical method is used to reduce the error introduced when converting from voltage to temperature using the standard high-order polynomials? However, it doesn’t address how accurate the voltage measurement actually is, how accurately the cold-junction temperature is known and what errors are introduced by the actual thermocouple and cable. I would argue that if my measurement uncertainty is say 1degC then it makes little difference whether my V-T error is 0.1degC or 0.01degC. But maybe I am wrong about this?

rgbatduke
December 4, 2014 8:40 pm

Hi Willis,
ARGO bothers me, too. It is clearly a mistake to think that the 0.02 is at all uniformly distributed, as the water under 800 meters or so is all within a degree or so of 4 K independent of depth. Nearly all of it must arise from integrated/averaged variations in the top 100 meters, in fact, since that is where the ocean exchanges nearly all of its energy. Or rather, if Argo shows something else I’d be deeply, deeply suspicious. And there one is trying to resolve a tiny effect by subtracting/integrating a function that varies over a large range (but almost entirely cancels) which is an open invitation to rounding errors, truncation errors, mere computational errors, and worse. Even a tiny asymmetry in the electrical responses of your measurement apparatus can create entirely spurious, quite possibly biased, changes.
But my bigger issue is with the fact that the buoys are free floating and not randomly distributed. By their nature, they are caught up in the major currents and eddies that define the flow of the subsurface waters of the ocean. If they all hover at a fixed depth, especially the SAME fixed depth (1000 m), they are all transported in this layer, and then sample the waters both above and below wherever they end up around a week later.
Let us count the possible sources of systematic biases in such a sampling scheme. No, there are too many. Let’s sum up. It is highly implausible that there is no systematic bias in such a scheme. All it takes is for the bulk of the buoys that measure temperature to be driven at depth towards a region where the ocean is upwelling to create an entirely artificial bias. Ocean currents at any depth are not random. They are driven by heat/temperature and salinity variations.
Randomness is the sampler’s friend. Trusting that a heat-driven process has no thermal bias is, in contrast, silly. Presuming that 4000 buoys can provide temperatures to depth accurate enough to resolve 0.02 C of warming in a layer 2 km thick times 70% of the Earth’s area, when the many times that many thermometers contributing to HadCRUT4 cannot resolve temperatures on the surface itself to any better than 0.1 C, is dubious at best.
They would do far better if they a) selected 4000 locations in the oceans randomly; b) fixed the buoys to those specific spots; c) measured temperatures at those spots for (say) a decade; d) moved the buoys to new, randomly selected locations; e) iterated forever. In fact, they could iterate this forever and just move them round robin style to random locations within a uniformly distributed set of domains that cover the ocean, this year shifting all the buoys in some million square kilometer chunk of ocean, next year in another. At least that way they could make definite statements about what’s going on at each fixed location, if not over a year perhaps over a decade.
A further advantage of this is that it MIGHT — with enough sites — give one some insight into heat transport at many depths.
In general, I like ARGO. I think it is a clever, ambitious project, and that in time it will give us really useful state data concerning the oceans. But it is very difficult to see how to square its claims (so far) with plain old SST measurements. Does anyone assert that SSTs are known to within 0.02 C? HadCRUT4 claims errors around 0.1 C, and I have a hard time believing that. That ARGO can resolve trends of 0.002 C/year, or around 0.03 or 0.04 C total in the entire timespan that ARGO has existed, seems highly implausible. Your comparison of the apparently nonexistent radiation imbalance that is supposed to be producing the warming with the supposed warming resulting from this radiation only emphasizes this point. That the warming is almost entirely localized in a single band of the ocean is, at least, something that requires some pretty serious explanation.
rgb

milodonharlani
Reply to  rgbatduke
December 4, 2014 9:10 pm

Could not agree more.
ARGO floats need to be moored, either covering specific equal areas or distributed randomly, but fixed. As it stands, their sample is almost useless, worse than relying on random ships sampling the temperature of the water at their engine intakes.

Reply to  rgbatduke
December 4, 2014 10:33 pm

You are right, but it is not so easy to keep buoys stationary. Like weather balloons they go with the flow, and the alternatives, corrective thrusters or anchor cables, are neither of them practical.

Reply to  rgbatduke
December 4, 2014 11:25 pm

This comment makes a lot of sense. The total heat capacity of the world ocean is not relevant to the process of heat sequestration in the oceans.
This is because Newton’s law of cooling also applies to heating: the temperature gradient from the sea surface to deeper levels falls off as a negative exponential, both in theory and in observations. This claim is consistent with the figure Roy Spencer derived from his physical model, itself derived from an IPCC graphic with a different parameter value.
URL: http://www.drroyspencer.com/2011/06/
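The negative-exponential profile described above can be written down directly. The surface temperature, deep temperature, and e-folding depth used here are purely illustrative placeholders, not values taken from Spencer’s model or the IPCC graphic.

```python
import math

def ocean_temp_profile(z_m, t_surface_c=20.0, t_deep_c=4.0, scale_m=500.0):
    """Temperature at depth z, relaxing exponentially from the surface
    value toward the deep value (Newton-style gradient). All parameter
    values are assumed for illustration only."""
    return t_deep_c + (t_surface_c - t_deep_c) * math.exp(-z_m / scale_m)

t_surface = ocean_temp_profile(0.0)     # 20.0 C by construction
t_1000m   = ocean_temp_profile(1000.0)  # already close to the deep value
```

The point of the shape is that most of the thermal action is confined near the surface, which is why the total heat capacity of the whole ocean is not the relevant quantity.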

Reply to  rgbatduke
December 5, 2014 12:42 am

I said the same thing up thread before I got to this.
It is K-4 grade science according to NASA: http://www.aeronautics.nasa.gov/pdf/bernoulli_principle_k-4.pdf

Stephen Richards
Reply to  rgbatduke
December 5, 2014 2:05 am

“as the water under 800 meters or so is all within a degree or so of 4 K independent of depth”
OOPS. 4 K or 4 °C?

richard verney
Reply to  rgbatduke
December 5, 2014 5:25 am

“But my bigger issue is with the fact that the buoys are free floating and not randomly distributed. By their nature, they are caught up in the major currents and eddies that define the flow of the subsurface waters of the ocean. If they all hover at a fixed depth, especially the SAME fixed depth (1000 m), they are all transported in this layer, and then sample the waters both above and below wherever they end up around a week later”
///////////////////////
For years, on almost every ARGO post, I have commented on the problem that arises because the ARGO buoys are free-floating. They drift on currents, or get swept along by them, and those currents are themselves temperature-related.
There has been no attempt to assess what bias is caused by the fact that the buoys are free-floating. In my opinion, some bias is almost inevitable. Whether it will be a warming bias is moot, but it is something that needs to be assessed.

Non Nomen
Reply to  richard verney
December 5, 2014 6:24 am

If a weather station on land is moved just a few hundred yards (or even just gets a different coat of paint), that is quite rightly criticised because it makes the data inconsistent and incomparable. Here we have buoys that merrily float around the world, giving a reading every now and then, but the data thus produced never come from the same point twice, nor from the point where the Argo was first put to sea. So these data seem to me simply not worth a tinker’s cuss. Furthermore, each Argo covers more than 180,000 cubic kilometres of water, roughly an area of 425 by 425 kilometres, 1 km deep. Do we look out of the window in Norfolk, Va. to tell what the temperature in Boston, Mass. is? And, if I have it right, there are more Argos in the northern hemisphere than in the southern. All of this is a bit too dubious for me to take too seriously.
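The coverage arithmetic is easy to check. Using the ~361 million km² global ocean surface area and the ~3,500 active floats cited in the head post (Non Nomen’s larger figure corresponds to a smaller float count), each float stands in for a patch a bit over 300 km on a side:

```python
import math

OCEAN_AREA_KM2 = 361e6   # approximate global ocean surface area, km^2
N_FLOATS = 3500          # active floats, as cited in the head post

area_per_float = OCEAN_AREA_KM2 / N_FLOATS  # ~103,000 km^2 per float
side_km = math.sqrt(area_per_float)         # side of an equivalent square patch
```

Whether the square is 320 km or 425 km on a side, the basic complaint stands: one drifting instrument per patch of water the size of a small country.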

David A
Reply to  richard verney
December 5, 2014 6:27 am

Yes; I, and many others, have noted this often as well. But a warning: climate scientists love to crunch numbers to confuse. They could easily use this for another adjustment in their direction, so be careful what you wish for.
Logically, however, the mean latitude of the floats is of course relevant. Brandon answered some of these questions beginning here, including the one Willis raised about error bars.
http://wattsupwiththat.com/2014/11/30/the-tempering-effect-of-the-oceans-on-global-warming/#comment-1803131
About four or five comments in succession, with graphs of current float locations and surface ocean currents.

mellyrn
Reply to  rgbatduke
December 6, 2014 5:18 am

It’s not every one who can work a Princess Bride allusion into oceanographic data analysis! Ya made my evening, sir.

mpainter
December 4, 2014 8:46 pm

Note figure 2.
This shows that roughly three-quarters (per eyeball) of the warming in the 20S–60S ocean occurred during 2010. That is funny business.
It does not look natural.

December 4, 2014 9:04 pm

Recently several participants in the climatology blogs have suggested that the oceans are warming. How, they then ask, can we not be facing catastrophic anthropogenic global warming if the oceans are warming?
Willis’s work puts their argument in perspective. The oceans are warming at a rate which, if continued for one thousand years, would produce a warming of 2 Celsius. In one thousand years, though, the increment in atmospheric CO2 concentration that purportedly caused this warming would long since have dissolved into the waters of Earth’s oceans. The argument features an effect without a cause.
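The “2 Celsius in a thousand years” figure follows from a back-of-envelope conversion of the Argo-era heat uptake into a layer-mean warming rate. All of the inputs below (ocean area, effective layer depth, seawater density and specific heat, and the ~0.55 × 10²² J/yr uptake implied by Figure 3) are round approximate values, not measurements:

```python
# Back-of-envelope: annual heat uptake -> warming rate of the 0-2000 m layer.
OCEAN_AREA_M2 = 3.61e14   # global ocean surface area, m^2 (approx.)
LAYER_DEPTH_M = 1.9e3     # effective sampled depth; ocean is shallower in places
DENSITY       = 1025.0    # kg/m^3, seawater (approx.)
SPECIFIC_HEAT = 3990.0    # J/(kg K), seawater (approx.)
HEAT_UPTAKE_J = 0.55e22   # J/yr, roughly the Argo-era OHC trend in Figure 3

mass_kg = OCEAN_AREA_M2 * LAYER_DEPTH_M * DENSITY
dT_per_year = HEAT_UPTAKE_J / (mass_kg * SPECIFIC_HEAT)   # ~0.002 K/yr
dT_per_millennium = dT_per_year * 1000.0                  # ~2 K
```

The huge heat capacity in the denominator is the whole story: a heat flow that sounds enormous in joules per year translates to roughly two thousandths of a degree per year.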
