ARGO—Fit for Purpose?

By Rud Istvan

This is the second of two guest posts on whether 'big' climate science missions are fit for purpose, inspired by seaside lunch speculations with ctm.

The first post dealt with whether satellite altimetry, specifically NASA's newest Jason3 'bird', was fit for the sea level rise (SLR) 'acceleration' purpose. It found, using NASA's own Jason3 specs, that Jason3 (and so also its predecessors) likely was NOT fit, and never could have been, despite its SLR data being reported by NASA to 0.1mm/yr. We already knew that annual SLR is low single digit millimeters. The reasons satellite altimetry cannot provide that level of precision are very basic, and were known to NASA beforehand: Earth's requisite reference ellipsoid is lumpy, oceans have varying waves, and the atmosphere has varying humidity. So NASA never really had a chance of achieving what it aspired to: satalt missions measuring sea level rise to fractions of a millimeter per year, equivalent to tide gauges. NASA claims they can; their specifications say they cannot. The post proved lack of fitness via overlap discrepancies between Jason2 and Jason3, plus the failure of NASA SLR estimates to close.

This second related guest post asks the same question of ARGO.

Unlike Jason3, ARGO had no good pre-existing comparable mission equivalent to tide gauges. Its novel oceanographic purposes (below) tried to measure several things 'rigorously' for the very first time. 'Rigorously' did NOT mean precisely. One, ocean heat content (OHC), was previously very inadequately estimated. OHC is much more than just sea surface temperatures (SSTs). SSTs (roughly but not really surface) were formerly measured by trade route dependent buckets/thermometers, or by trade route and ship loading dependent engine intake cooling water temperatures. The deeper ocean was not measured at all until inherently depth-inaccurate XBT (expendable bathythermograph) sensors were developed for the Navy.

Whether ARGO is fit for purpose involves a complex unraveling of design intent plus many related facts. The short ARGO answer is probably yes, although OHC error bars are provably understated in ARGO-based scientific literature.

For those WUWT readers wishing a deeper examination of this guest post's summary conclusions, a treasure trove of ARGO history, implementation, and results is available at www.argo.ucsd.edu. Most of this post is directly derived therefrom, or from references found therein, or from Willis Eschenbach's previous WUWT ARGO posts (many findable by searching WUWT for ARGO), with the four most relevant directly linked below.

This guest post is divided into three parts:

1. What was the ARGO design intent? Unlike simple Jason3 SLR, ARGO has a complex set of overlapping oceanographic missions.

2. What were/are the ARGO design specs relative to its missions?

3. What do facts say about ARGO multiple mission fitness?

Part 1 ARGO Intent

ARGO was intended to explore a much more complicated set of oceanography questions than Jason's simple SLR acceleration. The ideas were developed by oceanographers at Scripps circa 1998-1999, based on a decade of previous regional ocean research, and were formulated into two intent/design documents agreed by the implementing international ARGO consortium circa 2000. ARGO had several intended objectives. The three most explicitly relevant to this summary post were:

1. Global ocean heat climatology (OHC, with intended accuracy explicitly defined in Part 2 below)

2. Ocean ‘fresh water storage’ (upper ocean rainfall salinity dilution)

3. Map of non-surface currents

All providing intended “global coverage of the upper ocean on broad spatial scales and time frames of several months or longer.”

Unlike with Jason3, no simple yes/no 'fit for purpose' answer is possible for ARGO's multiple missions. It depends on which mission over what time frame.

Part 2 ARGO Design

The international design has evolved. Initially, the design was ~3000 floats providing a roughly random 3 degree lat/lon ocean spacing, explicitly deemed sufficient spatial resolution for all ARGO intended oceanographic purposes.

There is an extensive discussion of the array's accuracy/cost tradeoffs in the original intent/design documentation. The ARGO design "is an ongoing exercise in balancing the array's requirements against the practical limitations imposed by technology and resources". Varying perspectives still provided (1998-99) "consistent estimates of what is needed." Based on previous profiling float experiments, "in approximate terms an array with spacing of a few hundred kilometers is sufficient to determine surface layer heat storage (OHC) with an accuracy of about 10W/m2 over areas ('pixels') about 1000km on a side." Note the abouts.
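A back-of-envelope check of that design arithmetic (a minimal sketch; the 3-degree spacing and 1000 km 'pixel' come from the design documents quoted above, while the ocean area is a standard round figure, not an ARGO number):

```python
# Rough check of the ARGO design arithmetic (illustrative numbers only).
FLOAT_SPACING_KM = 333      # ~3 degrees of latitude between floats
PIXEL_SIDE_KM = 1000        # the design 'pixel' edge
OCEAN_AREA_KM2 = 3.6e8      # ~70% of Earth's surface, a standard round figure

floats_per_pixel = (PIXEL_SIDE_KM / FLOAT_SPACING_KM) ** 2
implied_array = OCEAN_AREA_KM2 / FLOAT_SPACING_KM ** 2

print(f"floats per 'pixel': ~{floats_per_pixel:.0f}")        # ~9
print(f"implied global array: ~{implied_array:.0f} floats")  # ~3200
```

The implied array size of roughly 3200 floats is consistent with the original ~3000 float design.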

The actual working float number is now about 3800. Each float was to last 4-5 years on battery; the actual average is ~4.1 years. Each float was to survive at least 150 profiling cycles; this has been achieved (150 cycles * 10 days per cycle / 365 days per year equals 4.1 years). Each profile cycle was to be 10 days: drifting randomly at ~1000 meters 'parking depth' at neutral buoyancy for 9 days, then descending to 2000 meters to begin measuring temperature and salinity, followed by a ~6 hour rise to the surface with up to 200 additional measurement sets of pressure (giving depth), temperature, and salinity. This was originally followed by 6-12 hours on the surface transmitting data (now <2 hours using the Iridium satellite system) before sinking back to parking depth.
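The duty-cycle arithmetic in that paragraph, written out explicitly (a trivial sketch using only the numbers already quoted):

```python
# ARGO float duty-cycle arithmetic, using only the design numbers above.
CYCLE_DAYS = 10            # one park-profile-transmit cycle
PARK_DAYS = 9              # drifting at ~1000 m neutral buoyancy
MIN_CYCLES = 150           # design minimum profiling cycles
MAX_PROFILE_SETS = 200     # pressure/temperature/salinity sets per ascent

battery_life_years = MIN_CYCLES * CYCLE_DAYS / 365
print(f"design battery life: ~{battery_life_years:.1f} years")  # ~4.1
```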

The basic ARGO float design remains:

[Figure: ARGO float schematic]

And the basic ARGO profiling pattern remains:

[Figure: ARGO 10-day profiling cycle]

‘Fit for purpose’ concerning OHC (via the 2000 meter temperature profile) presents two relevant questions. (1) Is 2000 meters deep enough? (2) Are the sensors accurate enough to estimate the 10W/m2 per 1000km/side ‘pixel’?

With respect to depth, there are two differently sourced yet similar ‘yes’ answers for all mission intents.

For salinity, the ARGO profile suffices. Previous oceanographic studies showed (per the ARGO source docs) that salinity is remarkably unvarying below about 750 meters depth in all oceans. This fortunately provides a natural salinity ‘calibration’ for those empirically problematic sensors.

It also means seawater density is roughly constant over about 2/3 of the profile, so pressure is a sufficient proxy for depth (and pressure can in turn be calibrated using the measured salinity above 750 meters, translated to density).
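As an illustration of pressure standing in for depth, here is a minimal hydrostatic sketch. The real ARGO processing chain uses a full seawater equation of state (salinity- and latitude-dependent, e.g. TEOS-10); the density here is a nominal assumed value:

```python
RHO_SEAWATER = 1030.0   # kg/m^3, nominal deep-ocean density (an assumption)
G = 9.81                # m/s^2

def depth_from_pressure(p_dbar: float) -> float:
    """Approximate depth in meters from pressure in decibars.
    1 dbar = 1e4 Pa, and conveniently ~1 dbar per meter of seawater."""
    return p_dbar * 1.0e4 / (RHO_SEAWATER * G)

print(f"{depth_from_pressure(2000):.0f} m")  # ~1979 m for a 2000 dbar reading
```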

For temperature, as the following figure of typical thermocline profiles (in °F, not °C) shows, the ARGO temperature/depth profile does not depend very much on latitude: 2000 meters (~6,560 feet) reaches the approximately constant deep ocean equilibrium temperature at all latitudes, providing another natural ARGO 'calibration'. The 2000 meter ARGO profile was a wise intent/design choice.

[Figure: typical thermocline temperature profiles by latitude (°F)]

Part 3 Is ARGO fit for purpose?

Some further basics are needed as background to the ARGO objectives.

When an ARGO float surfaces to transmit its data, its position is ascertained via GPS to within about 100 meters. Given the vastness of the oceans, that is far more positional precision than the 'broad spatial scales' of deep current drift and 1,000,000 km2 OHC/salinity 'pixels' require.

Thanks to salinity stability below 750 meters, ARGO 'salinity corrected' instruments are accurate (after float-specific corrections) to ±0.01 PSU, giving reasonable estimates of 'fresh water storage'. A comparison of 350 retrieved 'dead battery' ARGO floats showed that 9% were still out of 'corrected' salinity calibration at end of life, unavoidably increasing salinity error a little.

The remaining big 'sufficient accuracy' question is OHC, and issues like Trenberth's infamous "missing heat", covered in the eponymous essay in the ebook Blowing Smoke. OHC is a very tricky sensor question, since the vast heat capacity of ocean water means a very large change in ocean heat storage translates into a very small change in absolute seawater temperature.
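A back-of-envelope conversion shows just how tricky. Using nominal textbook values for seawater (assumptions, not ARGO numbers), a sustained top-of-ocean flux translates into a tiny annual temperature change in a 0-2000 meter column:

```python
RHO = 1025.0              # kg/m^3, nominal seawater density
CP = 3990.0               # J/(kg K), nominal specific heat of seawater
DEPTH_M = 2000.0          # the ARGO profile depth
SECONDS_PER_YEAR = 3.156e7

column_capacity = RHO * CP * DEPTH_M   # J per m^2 per K, ~8.2e9

for flux_w_m2 in (0.5, 1.0, 10.0):
    dT = flux_w_m2 * SECONDS_PER_YEAR / column_capacity
    print(f"{flux_w_m2:5.1f} W/m^2 for one year -> {dT:.4f} K")
# 0.5 -> ~0.0019 K, 1.0 -> ~0.0039 K, 10.0 (the design target) -> ~0.0386 K
```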

How good are the ARGO temperature sensors? On the surface, the answer might seem to depend on the float model, since as an international consortium ARGO does not have one float design. There are presently five: Provor, Apex, Solo, S2A, and Navis.

However, those five only ever embodied two temperature sensors, FS1 and SBE. It turns out (even better for accuracy) that FS1 was retired late in 2006, when JPL's Josh Willis published the first ARGO OHC analysis after full (3000 float) deployment, finding (over too short a time frame, IMO) that OHC was decreasing (!). Oops! Further climate science analysis purportedly showed that the FS1 temperature profiles in a few hundred of the early ARGO floats were probably erroneous. Those floats were taken out of service, leaving just SBE sensors. All five ARGO float designs use the current model SBE38 from 2015.

Sea-Bird Scientific builds that sensor, and its specs can be found at www.seabird.com. The SeaBird SBE38 sensor spec is the following (sorry, but it doesn't copy well from their website, where all docs are in a funky form of pdf probably intended to prevent partial duplication like for this post).

Measurement Range: -5 to +35 °C
Initial Accuracy 1: ±0.001 °C (1 mK)
Typical Stability: 0.001 °C (1 mK) in six months, certified
Resolution: (value did not copy)
Response Time 2: 500 msec
Self-heating Error: < 200 μK

1 NIST-traceable calibration applying over the entire range.
2 Time to reach 63% of final value following a step change in temperature.

That is a surprisingly good seawater temperature sensor. Accurate to a NIST calibrated 0.001°C, with a certified temperature precision drift per 6 months (1/8 of a float lifetime) of ±0.001°C. UCSD says in its ARGO FAQs that the ARGO temperature data it provides is accurate to ±0.002°C. This suffices to estimate the 'about 10W/m2' OHC intent per 1,000,000 km2 ARGO 'pixel'.
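Inverting the column arithmetic from the earlier sketch suggests why it suffices: ±0.002°C over a 0-2000 meter column corresponds to only about ±0.5 W/m² sustained for a year, comfortably inside the ~10 W/m² design target (same nominal seawater values as before; a sketch, not the ARGO error budget):

```python
RHO, CP, DEPTH_M = 1025.0, 3990.0, 2000.0   # nominal values, as before
SECONDS_PER_YEAR = 3.156e7

accuracy_K = 0.002    # the UCSD-quoted ARGO temperature accuracy
flux = accuracy_K * RHO * CP * DEPTH_M / SECONDS_PER_YEAR
print(f"~{flux:.2f} W/m^2 sustained for one year")   # ~0.52 W/m^2
```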

BUT, there is still a major 'fit for purpose' problem despite all the ARGO strong positives. Climate papers based on ARGO habitually understate the actual resulting OHC uncertainty of about 10W/m2. (Judith Curry has called this one form of her 'uncertainty monster'.) Willis Eschenbach has posted extensively here at WUWT (over a dozen guest posts already) on ARGO and its findings. His four posts most relevant to the 'fit for purpose' uncertainty question in the scientific literature are from 2012-2015; the links, which WE kindly provided via email, need no explanation:


Decimals of Precision

An Ocean of Overconfidence

More Ocean-Sized Errors In Levitus Et Al.

Can We Tell If The Oceans Are Warming

And so we can conclude, concerning the ARGO 'fit for purpose' question: yes, it probably is, but only if ARGO-based science papers also correctly provide the associated ARGO intent uncertainty (error bars) for 'rigorous albeit broad spatial resolution'.

Comments
Sunsettommy
Reply to  steve case
January 16, 2019 10:57 am

Hard to read that bilge since it was NOT straightforward, credible science literature; it was self-serving and insulting to some people.

Greg
Reply to  Sunsettommy
January 16, 2019 11:28 am

Well of course it’s “their version” but it is interesting in that it reflects the ease with which data is “corrected” and the all round objective seems to be to coerce the data to fit the modelled expectations.

Latitude
Reply to  steve case
January 16, 2019 11:03 am

The ‘Little Ice Age’ hundreds of years ago is STILL cooling the bottom of Pacific, researchers find

https://www.dailymail.co.uk/sciencetech/article-6558285/Little-Ice-Age-hundreds-years-ago-cooling-bottom-Pacific-researchers-find.html

Greg Goodman
Reply to  steve case
January 16, 2019 11:22 am

Thanks for the link. It’s a while since I’d read that.

Since the revision, says Willis, the bumps in the graph have largely disappeared, which means the observations and the models are in much better agreement. “That makes everyone happier,” Willis says.

Note that the warming in the ERBE data used was in the post-Pinatubo period and got wiped out by the '98 El Nino. Since then it's been pretty flat. Kinda pause-like.

When I analysed the ERBE data a few years ago I found that a few months after the eruption there was an increased imbalance in the Earth’s energy budget. That persisted until the El Nino event. This was echoed in Josh Willis’ OHC data.

That suggests that it was changes to the stratosphere caused by Mt P which caused the 90s warming.
https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

Greg Goodman
Reply to  Greg Goodman
January 16, 2019 11:32 am

Figure 10 from that article shows the excess warming caused by Mt Pinatubo. See article for derivation.


BCBill
Reply to  steve case
January 16, 2019 12:07 pm

Does anybody know if there is a simple name for the type of bias that results from correcting the data when it doesn't meet your expectation, but not looking as hard for errors when it does? Expectation Error Bias, perhaps? EEB?

David A
Reply to  BCBill
January 16, 2019 12:34 pm

"confirmation bias"

BCBill
Reply to  David A
January 16, 2019 8:16 pm

It is sort of confirmation bias, but a special case. The data for all measures of warming are continuously being corrected to show more recent warming. If errors are random this can't be. If (a big if) climate scientists are honest, they might simply be more prone to finding the errors in their data that cause the results to diverge from their expectations - a quasi-honest mistake, given how little training scientists get on minimizing bias in experimental design and interpretation. Nine out of ten scientists don't even understand the concept of "control", sadly mistaking it for a nil treatment, but I digress. Given how pervasive unidirectional error detection is in climate science, we need a quick word to describe it. Expectation Driven Error Detection - EDED?

Martin Hovland
Reply to  David A
January 17, 2019 1:11 am

“Administrative adjustment”

Rocketscientist
Reply to  BCBill
January 16, 2019 12:58 pm

Scientific fraud.

Prjindigo
Reply to  BCBill
January 16, 2019 1:27 pm

“fraud” is the correct term for altering data to fit your conclusion

john
Reply to  BCBill
January 16, 2019 2:29 pm

“Classic Climate Science”!
Peer reviewed!

Alan Tomalty
Reply to  steve case
January 16, 2019 11:02 pm

The whole correction was based on bogus satellite altitude measurements, which Rud debunked as NOT FIT FOR PURPOSE in his first article, as he explained in the first paragraph of this article. This is laughable.

richard verney
Reply to  steve case
January 17, 2019 1:12 am

Whenever one talks about ARGO, one must bear the 'correction' in mind.

If there was reason to believe that the data being returned was suspect, then any competent scientist would take a random sample of the buoys showing the most cooling and a random sample of the buoys that showed the most warming and return these to the laboratory for testing to see whether there was any real hardware or software problem. THIS WAS NOT DONE.

The idea of simply removing from the system the buoys which showed the most cooling without first ascertaining whether they were faulty forever undermines the credibility of the ARGO measurements and tells you all that you need to know about quality control in this discipline.

OUTRAGEOUS

Greg
January 16, 2019 10:51 am

“when JPL’s Willis published …”

That would be Josh Willis? Useful to give his proper name for searching and attribution rather than "JPL's Willis".

Tom
January 16, 2019 10:51 am

eponious = eponymous?

Rud Istvan
Reply to  Tom
January 16, 2019 12:57 pm

Yup. My bad.

Greg
January 16, 2019 10:54 am

10 W/m^2 is several times larger than supposed CO2 warming, so this tells us nothing about Trenberth’s “missing heat”.

This article seems strangely open about that, surprisingly for Rud’s usually incisive and legally precise mind.

Rud Istvan
Reply to  Greg
January 16, 2019 1:19 pm

The figure is cumulative, not instantaneous, and was the design intent (accuracy). Anything less than 'about' 10 cannot be distinguished by design. Anything more can be, provided the pixel is about 1000km per side. What this in essence does is distinguish tropical from high latitude ocean, and, for example, the North Atlantic from the Southern Ocean. 'Broad spatial scales'. It is inherently low resolution and too short a time frame for AGW 'missing heat'.

Curious George
Reply to  Rud Istvan
January 16, 2019 5:11 pm

Rud, I too am confused with the 10W/m2 uncertainty. Could you please elaborate?

Gilbert K. Arnold
Reply to  Rud Istvan
January 16, 2019 8:49 pm

typo again? cumulative?

Gamecock
Reply to  Rud Istvan
January 17, 2019 2:45 pm

This suffices to estimate the 'about 10W/m2' OHC intent per 1,000,000 km2 ARGO 'pixel'.

I am not convinced that one measurement per million square kilometers is enough resolution to tell us anything with certainty.

‘Accurate to a NIST calibrated 0.001°C, with a certified temperature precision drift per 6 months (1/8 of a float lifetime) of ±0.001°C.’

Great! An extremely accurate reading. Representing 1,000,000 km2. Texas is 700,000 km2.

HD Hoese
January 16, 2019 11:09 am

Thanks for the analysis, such masses of information make it difficult to understand reliability given the current situation. I found this interesting. “Unlike for the atmosphere, pressure is not used directly, as obtaining sufficiently accurate measurements of pressure in the ocean is problematic (see Talley et al., 2011).” Salinity from space M. Srokosz, C. Banks
https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/wea.3161

I have also wondered about any tendency of the floats to cluster together over time.

January 16, 2019 11:09 am

Among all of the articles on ARGO and SeaBird and their accuracy, I have yet to see a proper discussion of the effects of changes in ambient temperature and supply voltage. I mention this as I calibrated test instruments to meet NIST standards. This was done in a temperature and humidity controlled room, with atmospheric pressure monitoring. Accuracies specified in the test equipment specifications are easily shown to be repeatable day after day after day while in that room. However, laboratory test equipment has specifications on measurement drift due to supply voltage change, and ambient temperature, pressure, and humidity changes. The ARGO floats are going to undergo changes in each of these as they do what they do: go up and down in the ocean. From my reading of the specification sheets several years ago, these changes are significant! Yet they are ignored.
Just because you calibrate a device at laboratory ambient conditions, put it to use for 4 years, bring it back, and determine that there was no change in the calibration does not mean that a reading taken at 1000 meters depth, 5 degrees C, and high humidity is going to be correct. [What are the conditions of the electronics in the measuring equipment in the buoy?] If there were no such effect, there would be no requirement to calibrate these devices at the specific ambient conditions required in calibration standards. Those that fly airplanes know that their equipment is tested to verify that it meets the conditions you fly in, and is tested by a certified lab that conforms with calibration standards.
Also, note that the drift/change in reading from ambient temperature change, power supply voltage change, etc. IS NOT LISTED in the specifications noted in the article. When I looked at this 8-10 years ago the ambient changes were in the specification - where did it go? Looked for it today - no more info than listed above. Why did it go away?

RetiredEE
Reply to  Usurbrain
January 16, 2019 12:51 pm

I've designed sensor systems using platinum sensors (PRTs). Platinum is the interpolation standard for temperature and is specified in ITS-90, which is the current temperature standard. Your concerns are well stated, and certainly the system specifications need to be fully understood. Although the specifications are a bit vague relative to the actual sensor system, I assume that these are at the output of the temperature sensor system, since an entire chain of calibration and linearization needs to be done to achieve the stated results. In achieving these specifications the measurement system needs to accommodate the variations in power supply voltages, ambient temperature, humidity, pressure, stress/strain on the circuitry, etc. These are all conditions which are addressed in precision measurement systems and are generally well understood. Also, knowing the environmental conditions under which the system is to operate, these conditions would certainly be part of the qualification testing, both during design and final acceptance. As for variations in the supply, precision measurements are usually made against internal references (precision resistances) with ratiometric measurements that eliminate supply/reference voltage variations. Components in the system need to be selected for both accuracy and stability as part of the system and detailed design.

As to final calibration that most certainly is done in a controlled environment per calibration specifications employed in such laboratory measurements for which there are several to ensure the accuracy of the calibration equipment as much as the unit under test. The final instrument then must perform per specification in the end environment. If such sensors fail to meet those specifications in the field over their life cycle the design, manufacturing, and calibration need to be questioned. However making precise and accurate temperature measurements in the lab and in the field are well understood and is, next to time, likely the most common measurement made.

Reply to  RetiredEE
January 16, 2019 1:30 pm

Thank you. Good description of how to calibrate instruments, and the many inherent issues. Also, good to point out that a laboratory calibration at SLSTD conditions, while valuable, is only one of the calibrations that needs to be done.

Rick C PE
Reply to  RetiredEE
January 16, 2019 8:39 pm

Having spent 35+ years in laboratories measuring and calibrating all sorts of instruments, I'm skeptical of any temperature measurement data reported to 0.001 C (or F) precision. Calibrating to such levels requires enormously expensive equipment and strict attention to detailed procedures just to provide stable and accurate references. But when you take your highly precise and accurately calibrated instrument and measure something in the real world, you rarely, if ever, find that what you're measuring is stable within even tenths of a degree.

In most cases measuring to an accuracy of +/- 1% or 0.1% of scale is good enough. Sometimes precision of +/- 0.01% or better is necessary and realistic for things like mass, length, voltage, current, and time. But temperature, in my experience, is not something that can be measured meaningfully in the field to 0.001 C. I did have thermistor and RTD instruments that displayed 3 decimal places, but no matter what was being measured the last digit or two almost never stabilized. We used to record a "visual average" of what we thought these digits showed. Since the digital revolution, it is now common for data loggers to record the output of analog-to-digital converters to 8 or more decimals. Of course all but maybe the first 2 or 3 are nonsense.

czechlist
Reply to  Usurbrain
January 16, 2019 4:29 pm

Usurbrain, RetiredEE
I managed testing laboratories, including Metrology, at a major defense company for 20 years. I oversaw too many disciplines to rate myself as expert in any of them; however, Metrology/calibration proved most challenging. Difficulties in hiring competent employees, meeting and maintaining ANSI/NCSL and ISO standards, acquiring measurement standards in ever advancing technologies, setting and justifying reliability/recall intervals, and getting end users to turn in their measuring and test equipment were a daily pain. I was disappointed that so many end users had no concept of what tedious rigors were required in many accuracy verifications. The process, technicians, and engineers were generally unappreciated and deemed merely necessary evils. Out-of-tolerance investigations to identify potentially defective product(s) were rare but made us very unpopular. I was also amazed at how many end users did not understand the limitations of their M&TE or even the correct method of making a measurement.
Anyways, whenever I see a global temperature measurement stated to one hundredth of a degree, I sigh.

Rick C PE
Reply to  czechlist
January 16, 2019 9:04 pm

Czechlist: I completely agree. I managed an ISO 17025 compliant independent lab. As of my recent retirement, even the outside ISO 17025 accredited cal labs we used failed to provide proper Measurement Uncertainty statements on their cal certificates. "Within stated tolerance" was the most common conclusion. Auditors rarely showed any understanding of the GUM. I frequently had to have reports reworked to eliminate excessive significant digits. But apparently life is a lot easier if you're a PhD researcher. It seems there are no detailed measurement standards or competent auditors looking over your work. Just peer reviewers who don't know and don't care about such details.

Reply to  czechlist
January 17, 2019 7:57 am

These are the problems I am trying to point out. The problems that are ignored, or that those designing/using these buoys are completely unaware of, explode the myth of 0.000x accuracy.
I have worked in the nuclear power industry for fifty years, ten in the US Navy. All of the specifications listed in the manuals for the reactor instrumentation listed the obvious accuracy specifications mentioned above and also listed the effect on the module/device of changes in power supply voltage, changes in ambient temperature, and even changes in ambient humidity. Not sure how many of the engineers above have knowledge of electrical or electronic devices, but ALL are affected by changes in their surroundings. Any competent engineer should know that the value of a resistor, capacitor, crystal (used for frequency stability), or wiring will change with changes in surrounding temperature, humidity, etc. If NOT, why do we have massive air conditioning systems for the electronic equipment? When rigging the ship/sub for reduced power, the cooling systems for the vital equipment were rarely shut down - and then only when it was absolutely necessary. As I said above, about ten-plus years ago I saw the specifications for the ARGO buoys, and they included a factor for the change in accuracy with temperature and voltage. These were in the same range as I saw for the equipment I worked on in the nuclear power plants. The buoy goes through a rather extreme change in temperature as it goes from the surface to its lowest depth. When the accuracy changes by 0.01 per degree and the change in temperature is 20 degrees, then the accuracy has changed by 0.2 degrees. [Example only, I no longer remember the factor.] These buoys run on batteries. That means the voltage and amperage available will change with time after charge and with temperature. Thus you have compounding accuracy problems. At commercial NPPs we usually calibrated this equipment "in situ," that is, in the cabinet and not on the bench. When bench calibrated in the instrument lab (which was maintained to ISA standards), we verified the equipment in situ afterward.
Just how do they believe that the electronic equipment will be within +/- 1 degree of the temperature, humidity, etc. of when it was initially calibrated? Will not happen: it will get colder as it sinks and is down there for days.
It is the complete lack of appreciation of these changes that makes me post these comments. There is just no possible way that you can claim that these instruments are accurate beyond 2 decimal points.
Humidity also affects calibration. One of my technicians spent days trying to calibrate one of the reactor power rate-of-change monitors. I asked, did you check the 200 megohm resistor? He said, yes, and I replaced it, it was out of spec. I then asked, did you wear cotton gloves when you touched the resistor and then clean it with 100% alcohol afterward? And he asked, why? I responded, because your skin oil/salt will change the value. After several cleanings with alcohol, the meter was working properly.

January 16, 2019 11:27 am

Great post Rud!

A C Osborn
January 16, 2019 11:52 am

The ARGO system may be up to the job, but we cannot trust the data coming out of NASA/NOAA.

Adrian
January 16, 2019 11:54 am

I have a question/point regarding the stability of the temperature sensors. Having built and used thermocouple devices, one of the greatest problems with stability is the reference temperature used in such devices. Most know that thermocouples use the difference between dissimilar metals, but what is often not understood is that there needs to be a second junction which is at a known reference temperature. In the case of hand-held devices, where the reference is inside the handle, heating the handle while holding the probe part in iced water will actually produce different readings.

My point here is how is a stable fixed reference achieved in the Argo buoys? Inside a NIST lab which would be temperature controlled the error would be tiny. However, inside a buoy which is experiencing a temperature gradient over days the “fixed” internal reference would be anything but stable. Over a period of months this would likely drift over several degrees. Once back in stable lab conditions the same gauge would be to all intents and purposes as stable as predicted.

AndyE
January 16, 2019 11:55 am

Isn’t the time frame the most important factor? Given enough time the “uncertainty monster” will solve itself. The Argo floats have been there only a very few years. After another 100 years results will appear. The old bucket-over-the-railing system, if consistent over enough time, would probably also prove accurate (for surface temperatures).

Rud Istvan
Reply to  AndyE
January 19, 2019 5:06 pm

Yes. A very late reply to put on record something ctm and I discussed at our Intracoastal-side lunch today, which I regret not having thought about until post comments prompted it.

The ARGO mission is ~10W/m2 per 'pixel'. The most recent literature for 1992-2015 from 2000 meters is 0.6W/m2. The presently shut down NOAA site says <1W/m2. So the ARGO design spec says it takes more than a decade for ARGO to measure OHC. Actually worse, since ARGO had no accurate baseline from which to measure. So the first 'decade' of ARGO gives a 'rigorous baseline', while the second decade plus gives the first 'rigorous' ARGO OHC measurement. Coming soon, between 2025 and 2030.

Now, this is just another way to restate Willis Eschenbach's referenced ARGO posts. But now on the WUWT record without an addendum.

DayHay
January 16, 2019 12:00 pm

Very ingenious devices even if the data was suspect or tortured after the fact.

Frank
January 16, 2019 12:04 pm

Rud: Thanks for compiling this information. As best I can tell, however, if you have 4000 ARGO floats with thermometers all equally likely to drift up OR down by less than 0.001 K in six months, then averaging the results is likely to deal effectively with this problem. If you have 4000 ARGO floats all of which drift upward by 0.001 K in six months, you will have a large systematic error in OHC. The way to address this problem is to test the instruments that have been removed from ARGO floats near or at the end of their lifetime and determine which scenario is occurring. Have such studies been done?

Ideally, the developers would have tested the stability of their instruments for a decade before deploying them, but no one was going to wait a decade before deployment. So they are presumably doing so upon retirement.
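A toy Monte Carlo makes Frank's random-versus-systematic distinction concrete (purely illustrative; the 0.001 K figure is the spec stability bound and the 4000-float count is from his comment):

```python
import numpy as np

rng = np.random.default_rng(0)
N_FLOATS, DRIFT_K = 4000, 0.001

# Case 1: drift equally likely up or down -> array-mean error shrinks ~1/sqrt(N).
random_drift = rng.uniform(-DRIFT_K, DRIFT_K, N_FLOATS)
print(f"random drift, mean error:     {random_drift.mean():+.6f} K")

# Case 2: every float drifts the same way -> averaging cannot remove it.
systematic_drift = np.full(N_FLOATS, DRIFT_K)
print(f"systematic drift, mean error: {systematic_drift.mean():+.6f} K")
```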

Rud Istvan
Reply to  Frank
January 16, 2019 1:04 pm

For salinity, yes, as noted. Done because the ARGO team knew the salinity sensors were problematic. In my research I found no equivalent for the SBE38 temperature sensors. Granted, I am not an ARGO or sensor expert, and spent days, not months, putting this together.

Steve O
Reply to  Frank
January 16, 2019 1:25 pm

“So they are presumably doing so upon retirement.”

It wouldn’t be hard for them to do that. They would just need to drop all the retirees in the same place as one of the new floats whose measurements they trust.

adrian
January 16, 2019 12:24 pm

Sorry, my last attempt disappeared after posting. Just checking if there is a posting issue.

Clyde Spencer
January 16, 2019 12:34 pm

Rud,
You quoted, "Accurate to a NIST calibrated 0.001°C, with a certified temperature precision drift per 6 months (1/8 of a float lifetime) of ±0.001°C." Can we presume that the precision of individual readings is then ±0.0005°C, barring malfunctions?

That then leaves us with a separate question of whether the sampling protocol and density are sufficient to match the accuracy and precision of the sensors (assumed to be functioning within tolerance). The answer to that, when measuring a variable, is to define the uncertainty of the accuracy as being ±2 standard deviations of the sample population, say the measurements taken at a particular depth. The reported accuracy should be of the same order of magnitude as the ±2 standard deviations. The standard error of the mean is only applicable for something with a fixed value, which is measured numerous times with the same instrument.
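A small synthetic example of the distinction Clyde is drawing, between the spread of the sampled field (±2 SD) and the standard error of the mean (the numbers are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Nine floats sampling a 'pixel' whose temperature truly varies by ~0.5 K.
readings = rng.normal(loc=10.0, scale=0.5, size=9)

sd = readings.std(ddof=1)            # spread of the field being sampled
sem = sd / np.sqrt(readings.size)    # standard error of the mean
print(f"+/-2 SD  (field variability):   {2 * sd:.3f} K")
print(f"+/-2 SEM (uncertainty of mean): {2 * sem:.3f} K")
# Either way, the 0.001 K sensor spec is not the limiting uncertainty.
```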

RetiredEE
Reply to  Clyde Spencer
January 16, 2019 1:14 pm

Some interesting comments. Generally the rule of thumb is that the resolution of the measurement should be 10 times finer than the accuracy specification. Depending on the particular property being measured, this may be a bit more or a bit less. Given today's analog/digital converters (ADCs), resolution of 18 to 24 bits is available. Assuming the range is -5 to +35 °C (a 40 K span), an 18-bit converter would give a resolution of 0.00015 K, which would probably be quite adequate. It would be interesting to know the actual resolution.

Beyond the accuracy of the temperature measurement, I would find it more important to know the sensitivity of the analysis (heat content, etc.) to the temperature. That would be the more relevant question in my mind as to whether the temperature measurement is adequate for the purpose. If their models predict the end of life on earth when the ARGO system shows a system drift of 0.001K, then maybe we need to ask some other questions.

Reply to  RetiredEE
January 16, 2019 3:58 pm

Given today's analog/digital converters (ADCs), resolution of 18 to 24 bits is available. Assuming the range is -5 to +35 °C (a 40 K span), an 18-bit converter would give a resolution of 0.00015 K, which would probably be quite adequate.

Since they have been making them since 1997, I would guess the ADC was a 12-bit SAR ADC and they did oversampling/decimation. While higher resolution ADCs were available, they were power hogs and not something one would desire in a probe that is trying to measure temperature with high accuracy.
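The resolution arithmetic in this sub-thread is easy to check (a sketch; the bit depths and the -5 to +35 °C span come from the comments above, and the one-extra-bit-per-4x-oversampling rule assumes white noise):

```python
SPAN_K = 40.0   # -5 to +35 C measurement range

def lsb(bits: int) -> float:
    """Temperature step per ADC count over the full span."""
    return SPAN_K / 2 ** bits

print(f"12-bit LSB: {lsb(12):.5f} K")   # ~0.0098 K, too coarse by itself
print(f"18-bit LSB: {lsb(18):.6f} K")   # ~0.00015 K

# Oversampling/decimation: each 4x oversample adds ~1 effective bit,
# so a 12-bit SAR ADC with 4^6 = 4096x oversampling ~ 18 effective bits.
print(f"12-bit + 4096x oversampling: {lsb(12 + 6):.6f} K")
```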

RetiredEE
Reply to  Greg F
January 16, 2019 8:48 pm

Greg F - thank you for the comment. I would suggest that 12-bit resolution would not meet the specified accuracy/resolution of 0.001 K, as over the 40 K specified measurement range 12 bits would be 0.0097 K, without any margin outside the range. They were probably not SAR ADCs, which as you say are rather power hungry, but possibly sigma-delta types, which have a slower acquisition rate but can achieve higher resolution. They are typically lower power than the higher speed SAR ADCs. You may be right that the higher resolution devices were less prevalent at that time, although there were devices being introduced during that period. These were out of the price range for the systems I was designing then, so I didn't look at them much. There are other ADC methods that yield higher resolution, such as dual or multi-slope integration systems, although these tend to be a bit more fussy to implement. I have not seen any description of the actual measurement approach used in the ARGO systems. It would be interesting to find out how they actually achieved the specifications. If anyone can provide more information on the sensor design it would be fascinating.

Analog Design Engineer
Reply to  RetiredEE
January 17, 2019 8:25 am

There is a book by Marc Le Menn called ‘Instrumentation and Metrology in Oceanography’ which has some detail about the measurement process, calibration etc. Part of it is free to view on the google books link below.

https://books.google.ie/books?id=oWwANyKd8k8C&pg=PT25&source=gbs_selected_pages&cad=2#v=onepage&q&f=false

RetiredEE
Reply to  RetiredEE
January 17, 2019 3:54 pm

Analog Design Engineer: Thank you for the reference. Some very interesting material which I’m adding to my reference list.
The discussion, as far as I can discern, relates to the accuracy specification of the ARGO temperature sensor. In general, from what I've seen, industry gives the accuracy specification relative to a traceable standard under laboratory conditions, implying these are consistent across the operating conditions of the ARGO system. "Accuracy" specifications are relative to some traceable standard and are not necessarily related to the absolute uncertainty of the measurement. To know the measurement uncertainty you would need to know the instrumentation uncertainty as well as the uncertainty through the entire calibration chain. As Crispin points out below, that can propagate through the series of measurements.
I guess I would still like to know how the temperature measurement is used in producing the modeled heat content estimates they report for ocean heat content and average temperature. The error analysis would be interesting and would likely shed light on the actual uncertainty in the computations.

Crispin in Waterloo but really in Beijing
Reply to  Greg F
January 17, 2019 3:17 am

There are two uncertainties: the sampling error (which we can’t do much about) and the uncertainty about the measurements.

All uncertainties propagate if there are mathematical procedures used. In this case, the readings are added and the total divided by the number of readings. There are spatial factors applied but let’s assume they are absolute “numbers” not “readings” with an uncertainty.

Even when the uncertainty is only ±0.001 (that is, 0.001±0.001), summing thousands of readings and finding the average increases the uncertainty, and the magnitude increases with the accumulation of each individual uncertainty. Uncertainties do not "cancel out". The uncertainty is a reflection of what is probable, in this case that the answer lies between those extremes 68% of the time, with a 32% chance that it is outside that range.

Let’s attempt a sample calculation which, for an average, adds uncertainties in quadrature. For 1000 readings the average is [SUM(1:1000)]/1000 = n

The uncertainty for 1000 readings of equal quality is

SQRT[(±0.001^2)*1000] = ±0.0316

So correctly expressed, the average is n±0.033
For 10,000 readings the answer is n±0.100
For 100,000 readings the answer is n±0.316
For 2,000,000 readings the answer is n±1.41 degrees

No one knows the average temperature of the oceans within 0.001 C so obviously the ocean heat content cannot be known with greater precision than the temperature.

If you do not agree with this statement, argue with Harvard:
http://ipl.physics.harvard.edu/wp-uploads/2013/03/PS3_Error_Propagation_sp13.pdf

Tom Schaefer
January 16, 2019 12:35 pm

I have an idea for satellite altimetry: could you fly a three-satellite constellation that uses laser links to measure the distance to its partners every few nanoseconds to within a few wavelengths, then use a Kalman filter to precisely determine their velocity and relative position (maybe borrow algorithms from the Navy's Cooperative Engagement Capability), and then feed this to a geo-model to very precisely determine their altitude?

tty
Reply to  Tom Schaefer
January 16, 2019 1:07 pm

That is exactly what GRACE and GRACE-FO do. They determine the gravity field with great precision, but are no good for altimetry.

troe
January 16, 2019 1:04 pm

Always impressed with the intellect and knowledge of many who post here. Thanks for the education.

paul courtney
January 16, 2019 1:12 pm

Rud: Good post, thanks. I see these expensive, complex devices to measure an indirect AGW effect and can't help wondering why they didn't devise an experiment to measure CO2 and temp (we could measure temp in C, like a European!) in a way that might just verify their theory. Seems like a vast Rube Goldberg approach to make sure they AVOID measuring the very thing they are after. Are these shiny satellites and floats just a distraction?

Gerry Parker
January 16, 2019 1:43 pm

I performed a design for ARGO in the mid-1990s while employed at a sonobuoy company, and had a terrible time finding temperature sensors. I remember that first sensor having something like a 1C accuracy, and the buoy a lifetime on the order of six months or a year.

tty
January 16, 2019 1:46 pm

Actually I would say that the Argo buoys are not good enough to determine OHC, irrespective of how good their sensors are, for several reasons.

Coverage is incomplete. Areas with sea ice (5% of ocean area) are not covered, and there are a number of other large areas without buoys, e.g. the Sea of Okhotsk and Indonesia-Carpentaria. The lack of coverage of the Arctic and Antarctic means that the two most important ocean areas for OHC, where NADW and AABW originate, are not covered.

They do not go deep enough. 2000 meters is quite insufficient; it means that almost half of the ocean volume is not sampled at all. It is true that salinity in the deep ocean is rather constant (c. 34.6-35.0 PSU), but temperature is emphatically not constant. The deep ocean is dominated by two water masses, the North Atlantic Deep Water (NADW) and the Antarctic Bottom Water (AABW). The temperature of the NADW is +2 to +4 C, that of the AABW -0.8 to +2 C. Without measuring the volumes and temperatures of the AABW and the NADW it is quite impossible to determine OHC.

This amounts to the following: we are not measuring the amount of heat going down into the deep ocean around Antarctica and Greenland (no buoys there), we are not measuring the (varying) amount of heat moving through the deep ocean conveyor (no buoys there either) while it is slowly warmed by geothermal heat, and then we measure the amount of heat coming up in upwelling areas and simply assume that any changes are due to current climate, which is demonstrably untrue. And upwelling areas are also badly covered by ARGO, since currents normally flow away from them.

It is usually claimed that this is unimportant because the deep water is only replaced quite slowly (on the order of every thousand years), but this ignores the fact that this deep water is constantly coming back to the surface in upwelling areas, carrying along 'historical' heat changes that have nothing whatsoever to do with recent climate. There was a recent paper that demonstrated this for the North Pacific.

Here is Judith Curry’s insightful comments on this:

https://judithcurry.com/2019/01/14/ocean-heat-content-surprises/

She also points out that geothermal heat is a significant factor for OHC. Without measurements near the bottom this is almost unknown. It is usually stated to be about 0.1 Wm^-2, but the uncertainty is very large.

Reply to  tty
January 16, 2019 3:25 pm

Geothermal heating of the deep basins is not only an “almost unknown”, it cannot be assumed to be constant over time, either.

Rud Istvan
Reply to  tty
January 16, 2019 3:57 pm

I tried to show that with depth in meters by latitude, in the post's mysteriously inverted image.
Deep geothermal heating is a plausible but unproven hypothesis.
A simple observation: if it were generally true, then one should see a deep ocean anomaly over sea floor spreading sites. But we do not.

Geoff Sherrington
Reply to  Rud Istvan
January 16, 2019 4:41 pm

RI,
Not so sure. I’ve seen figures for metres per second ocean current velocity around deep vents. IIRC, the heat dispersal/dilution could be rapid. Also, there is little argument that geothermal heat enters the deep oceans. Has it been detected yet by any instrument? I simply do not know. Geoff.

tty
Reply to  Geoff Sherrington
January 17, 2019 2:37 am

Yes, it has been measured at a considerable number of submarine drilling sites by e.g. the DSDP, but the sampling is of course very sparse.

And no, it would make a measurable anomaly over spreading ridges. The effect would be on the order of tenths of degrees over the millennial residence time of the thermohaline circulation. Not that anyone has tried to measure it, as far as I know.

John Hedinger
Reply to  tty
January 16, 2019 3:58 pm

And, since the floats “float”, they don’t give readings of the same geographic position each time.

Charlie
January 16, 2019 2:04 pm

Precise measurements between the ocean surface and the sensor's phase center are one thing (hopefully atmospheric delay is measured directly rather than estimated), but satellite orbit accuracy, in terms of ECEF X/Y/Z position and velocity of the satellite in the WGS earth reference frame, is a big challenge to maintain and have close to real-time knowledge of.

For GPS signals at L band, ionospheric delays are handled by noting the differential delay of signal propagation using the L1 and L2 frequencies. Tropospheric delays are handled by models such as the Klobuchar model, which are mathematical formulas.

For GPS satellites at 10,000 miles altitude, it takes 6-10 worldwide monitoring stations, with all the slant range data from the remote tracking sites being sent in real time back to Colorado, for the AF to estimate and then refine the precise broadcast ephemeris in the navigation message for the 25 or so satellites that are operational; the error budget is in the cm range (50 or so). It costs a ton more in computer and human time, infrastructure investments, and additional measurements to get below 5 cm for GPS.

So is there a published error budget for this satellite based measurement system, in terms of space segment errors and ground segment errors (biases and standard deviations), on a daily basis?

Since there are a lot of significant digits being sent, it's not a precision problem but an accuracy problem. Do they have ground based benchmark altitudes they can check their latest estimates of sea surface altitude against?

Then there is the spheroid, the ellipsoid, and the geoid. The altitudes that they generate between the instrument phase center and the sea surface would be in reference to the geoid, so that the gravity undulations are accounted for.

Then strong sustained wind fields will raise and lower local sea levels, so these effects, as well as tidal fluctuations, which are easier to model, have to be accounted for.

Air pressure changes without winds will also affect ocean levels.

A lot harder to do than GPS, which only has to worry about one other 'segment', the user segment, which is contained in the user's GPS receiver hardware and software.

knr
January 16, 2019 2:32 pm

Even if you accept that the instruments have the ability to take the measurements, the massive issue that remains is that there simply are not enough of them to support the idea of 'average ocean anything', and the number seen now has simply not been collecting data for long enough to offer any real meaning over time.
In short, as usual, it's 'better than nothing' being sold as 'settled science'.

bit chilly
Reply to  knr
January 16, 2019 3:49 pm

+1. There are many clever people posting here, but sometimes they miss the most obvious problems. ARGO buoys measure the temperature of the water they travel within. Anyone claiming the ARGO data is an accurate representation of the entire layer of ocean above 2000m depth is either grossly mistaken or telling lies.

Rud Istvan
Reply to  bit chilly
January 16, 2019 4:16 pm

An opinionated assertion. Now, based on ARGO mission intent/design, prove your assertion true within the ARGO design intent.
You cannot, because you imagine accuracy/precision that was never intended nor specified. The tell is what you DID NOT specify as intended 'accuracy'.

You missed the whole point of this post. Which was: ARGO suffices for its design intent, while Jason3 does not. I said claiming (or claiming failure of) ARGO mission accuracy/precision beyond its design intent is an unfair fool's errand, even if some 'climate scientists' and skeptics both try to do so.

Robert B
January 16, 2019 3:50 pm

Interesting that they realised that there was a problem when the wrong political result turned up. This is the greatest systematic error in The Science - identifying only systematic errors that nullify the warming. Harassing the contrarians amplifies it.

SeaBird seems to be creating instruments for the environmental custodian industry. A typical $1500 ultra high resolution instrument will come with specifications such as "resolution 0.001° or 0.0001°, accuracy ±0.05°C between -80 and 250°C". You really need to be wary of claims that there is little drift after such a trip.

StephenD
January 16, 2019 4:23 pm

Hi Rud,
great article.

Appears to be a broken link, try http://www.argo.ucsd.edu.

Erik Magnuson
Reply to  StephenD
January 16, 2019 9:18 pm

Stephen,

I noticed that too, figured “uscd” was a typo for “ucsd”.

Geoff Sherrington
January 16, 2019 4:33 pm

Here is additional recent info to add to the welcomed article by Rud Istvan.
National measurement laboratories exist in many countries, for when independent, specialist answers are needed. I asked Britain's National Physical Laboratory, Teddington, questions like these:
Q: “Does NPL have a publication that gives numbers in degrees for the accuracy and precision for the temperature measurement of quite pure water under NPL controlled laboratory conditions?
At how many degrees of total error would NPL consider improvement impossible with present state-of-art equipment?”
A: “NPL has a water bath in which the temperature is controlled to ~0.001 °C, and our measurement capability for calibrations in the bath in the range up to 100 °C is 0.005 °C. However, measurement precision is significantly better than this. The limit of what is technically possible would depend on the circumstances and what exactly is wanted. We are not aware of any documentary standard setting out such limits.”
……………………………………….

Water bath performance is a better guide than straight thermometer accuracy, which is little more than the performance of its voltmeter.
Brief summary comments like those above should be treated with caution. But, the NPL ‘capability’ of 0.005 C under controlled conditions does not sit easily with the claimed Argo figure of 0.001 C.
There is a need for a detailed, independent study of signal:noise in the Argo operational environments. Can any reader reference one?
The whole credibility of Argo lies between 0.001 C and 0.01 C as the correct accuracy figure. That has to be got right before much else has meaning. Geoff.

January 16, 2019 4:51 pm

What a great post. Thorough and professional research, clear and well documented. Interesting, and relevant analysis and presentation.
Thank you sir.

January 16, 2019 4:51 pm

Rud

Forgive me if you answered this, or it's inherent in your essay (as you know, I'm not in any way scientific), and I usually read the comments for clues but don't have time tonight. But are the ARGO buoys used as satellite sea level indicators? I.e., are signals bounced off them to give a sea level reading every time they surface, or are signals simply transmitted from the buoy to the satellite relaying depth, salinity, temperature, etc.?

Personally, I can't fathom (forgive the pun) how a satellite signal can be bounced off the sea surface, even thousands of times a minute (or a second), and then return a reading that's accurate to a fraction of a millimetre. Individual waves, squalls, swells, typhoons, jumping fish, ships' wakes, and passing whales must all affect the measurements, which surely can't be calculated down to a level of accuracy engineers can't achieve even when building a structure on dry land.

I guess an Argo buoy would at least give a reasonable fixed point of reference that might at least eliminate lapping waves.

Sorry for the stupid question.

Phil Rae
Reply to  HotScot
January 16, 2019 7:30 pm

HotScot

These are 2 distinct systems and not related to one another. The Jason satellites use LIDAR or some similar ranging technology to measure sea height and therefore suffer from all the inherent problems you mention, as well as wind, atmospheric pressure & tidal influences, amongst others.

The ARGO buoys just use a different set of satellites (Iridium network) to send back their data. There’s no connection between the 2 networks.

Reply to  Phil Rae
January 17, 2019 12:20 am

Phil Rae

Much obliged. It just seems silly to this layman that the Argo buoys aren’t used as a reference point of some description to assist with sea level measurements. But I don’t even know if it’s possible.

I guess it’s not.

Ted S
January 16, 2019 5:37 pm

Given that each drifting ARGO float measures a volume of water equivalent to 8 times the volume of the Great Lakes (Lakes Superior, Michigan, Huron, Erie, and Ontario) combined, the error bars must be large indeed.

https://wattsupwiththat.com/2015/10/08/theres-life-in-the-old-pause-yet/#comment-1620775

Also see the comment by Richard Verney just below the included link addressing the expedient removal of the floats showing cooling without calibration testing to justify their removal.
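Ted's eight-Great-Lakes figure checks out roughly (a sketch; the ocean area and Great Lakes volume are standard reference values, not from this thread):

```python
OCEAN_AREA_KM2 = 3.6e8        # standard round figure for ocean surface area
PROFILE_DEPTH_KM = 2.0        # ARGO profiles the top 2000 m
GREAT_LAKES_KM3 = 22_671      # combined volume, standard reference value
N_FLOATS = 3800

upper_ocean_km3 = OCEAN_AREA_KM2 * PROFILE_DEPTH_KM  # ignores shelves < 2000 m deep
per_float_km3 = upper_ocean_km3 / N_FLOATS
print(f"~{per_float_km3:,.0f} km^3 per float")                 # ~189,000
print(f"~{per_float_km3 / GREAT_LAKES_KM3:.1f} x Great Lakes") # ~8.4
```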

Tom Abbott
January 16, 2019 7:16 pm

Has there ever been a case where a ship has collided with one of these buoys?

AlexS
January 16, 2019 10:02 pm

To the author:
You should have put a link to your first post.

Adrian
January 17, 2019 6:55 am

I’ll try again but my comments re thermocouples having to have a fixed internal reference seem to never make it.

John Andrews
January 17, 2019 10:35 pm

Thanks, Rud. Great conversation.

Charlie
January 18, 2019 11:59 am

Does anyone have any information on the manufacture - quality of materials, precision of parts, testing in relevant conditions? What effect does change in pressure and temperature have on materials? 150 cycles of up to 2000m of water pressure are quite extreme, and I would be concerned with the hydraulic bladder and pressure sensor. Pressure, movement, presence of quartz sediment, and salinity could cause problems. Any defects which allowed gas to enter parts which were then subjected to pressures of 2000m of water and then raised back to the surface could cause expansion cracks.

Charlie
January 18, 2019 12:37 pm

Crustal thickness varies, there are plumes of upwelling magma in the mantle, there are areas where tectonic plates are spreading and releasing magma and CO2, and all these factors vary over time. Does anyone care to put numbers to temperature, CO2, and energy?

WBWilson
January 18, 2019 12:40 pm

Late to the thread, as usual, but one comment to add. During this current Ice Age the Earth’s oceans have not been this cold for hundreds of millions of years. And the long term trend (over the last 2 million years) is still downward.

A little more heat would be a good thing.

Gary Palmgren
January 20, 2019 10:30 am

Even if you have perfect thermometers, there is no perfect place to put them. Temperatures vary across the sample. You don’t just set the oven temperature, you need a meat thermometer deep in the turkey and those thermometers are only inches apart.

So the ARGO floats know where they are within 100 meters at the surface. Where are they 1000 meters down in the ocean currents? How accurate is the depth? How do they compensate for the varying air pressure above the ocean? I would guess more than 10 cm of likely error. A 100 meter diameter circle 0.1 meters deep contains about 800 metric tons of water. What is the temperature variation in that volume at the more interesting 50-100 meter depth? That is your sampling error.

Hartmut Hoecht
January 22, 2019 8:19 am

I have specifically asked the ARGO program office for a system error budget, and they replied 'we don't know'. Seabird does not respond to this question. I suggest there are many contributing factors: the processor, conversions, power supply, the antenna system, the satellite data link error up and down, the switching error on the satellite, the re-conversion to analog on the ground, etc., etc. A dark hole. The 0.002 degree thermistor accuracy is meaningless in this context.

Frank
January 22, 2019 3:38 pm

Rud: I wrote the ARGO info email address posted on their home page about testing of sensors after years in the field and received the reply below. A limit on lifetime drift is far more useful than a limit on 6 month drift.

“Thanks for your questions about the temperature sensors on Argo floats. In fact, the temperature sensors are very stable and are supposed to be accurate to +/- 0.002 degrees C over the float’s lifetime. The average float lifetime is 4.5 years, but many floats have lasted 10 years and there has been no detectable drift in the temperature sensors. Salinity sensors definitely drift over time and there is a delayed mode quality control procedure to address this and correct salinity back to close to shipboard CTD quality.

Over the years, floats have occasionally been recovered to look at the sensors and try to determine what has happened over their lifetime. This is particularly helpful for sensors in development. It is usually an expensive process since the floats are designed to be in the open ocean and sending a ship out to get them is costly.

Our Data FAQ page has some additional information about sensor accuracy and drift. http://www.argo.ucsd.edu/Data_FAQ.html

Frank
Reply to  Frank
January 22, 2019 3:46 pm

Rud: Looking at the 1900 m and 1500 m ARGO temperatures at the link below, I see 15-year rises of about 0.02 and 0.03 K. If sensors are stable to 0.002 K, these should be real. I’ll need to stop ignoring what I read about OHC below 700 meters.

http://climate4you.com/