Argo, Temperature, and OHC

Guest Post by Willis Eschenbach

I’ve been thinking about the Argo floats and the data they’ve collected. There are about 4,000 Argo floats in the ocean. Most of the time they are asleep, a thousand metres below the surface. Every 10 days they wake up and slowly rise to the surface, taking temperature measurements as they go. When they reach the surface, they radio their data back to headquarters, slip beneath the waves, sink down to a thousand metres and go back to sleep …

At this point, we have decent Argo data since about 2005. I’m using the Argo dataset 2005-2012, which has been gridded. Here, to open the bidding, are the ocean surface temperatures for the period.

[Animation: Argo_Surf_Temp_2005_2012]

Figure 1. Oceanic surface temperatures, 2005-2012. Argo data.

Dang, I like that … so what else can the Argo data show us?

Well, it can show us the changes in the average temperature down to 2000 metres. Figure 2 shows that result:

[Image: Argo_Avg_0m_to_2000m_2005_2012]

Figure 2. Average temperature, surface down to 2,000 metres depth. Temperatures are volume-weighted.

The average temperature of the top 2000 metres is six degrees C (43°F). Chilly.
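
For concreteness, here is a minimal sketch of how a volume-weighted mean like the six degrees quoted above might be computed from a gridded product. The array names, grid spacing and layer thicknesses are my own illustrative assumptions, not the actual variables of the Argo gridded dataset.

```python
import numpy as np

def volume_weighted_mean(temp, layer_thickness_m, lat_deg):
    """Volume-weighted mean temperature over depth, latitude and longitude.

    temp              : array (n_depth, n_lat, n_lon) in degC, NaN where no data
    layer_thickness_m : array (n_depth,), thickness of each depth layer
    lat_deg           : array (n_lat,), cell-centre latitudes in degrees
    """
    # Horizontal cell area scales with cos(latitude) on a regular lat-lon grid.
    area_w = np.cos(np.deg2rad(lat_deg))[None, :, None]          # (1, n_lat, 1)
    vol_w = layer_thickness_m[:, None, None] * area_w            # (n_depth, n_lat, 1)
    vol_w = np.broadcast_to(vol_w, temp.shape).copy()
    vol_w[np.isnan(temp)] = 0.0                                  # ignore missing cells
    return np.nansum(temp * vol_w) / vol_w.sum()

# Example with synthetic data: ten 200 m layers on a 1-degree grid.
rng = np.random.default_rng(0)
temp = 20.0 * np.exp(-np.arange(10)[:, None, None] / 3.0) + rng.normal(0, 0.1, (10, 180, 360))
print(volume_weighted_mean(temp, np.full(10, 200.0), np.arange(-89.5, 90.0, 1.0)))
```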

We can also take a look at how much the ocean has warmed and cooled, and where. Here are the trends in the surface temperature:

[Image: trend ocean surface temps argo 2005 2012]

Figure 3. Decadal change in ocean surface temperatures.

Once again we see the surprising stability of the system. Some areas of the ocean have warmed at 2°C per decade, some have cooled at 1.5°C per decade. But overall? The warming is trivially small, 0.03°C per decade.
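
As a sketch of how the per-cell trend maps and that small global number can be computed (my own illustrative code assuming a monthly gridded field, not the exact processing behind Figures 3 and 4):

```python
import numpy as np

def decadal_trend(field, lat_deg):
    """Per-cell decadal trend and the area-weighted global mean trend.

    field   : array (n_time, n_lat, n_lon), monthly values in degC
    lat_deg : array (n_lat,), cell-centre latitudes in degrees
    """
    n_t = field.shape[0]
    t_decades = np.arange(n_t) / 120.0                 # months -> decades
    t_anom = t_decades - t_decades.mean()

    # Least-squares slope per grid cell: cov(t, y) / var(t).
    # (Cells missing some months are handled only approximately here.)
    y = field - np.nanmean(field, axis=0)
    slope = np.nansum(t_anom[:, None, None] * y, axis=0) / np.sum(t_anom ** 2)

    # Area-weighted (cos latitude) global mean of the per-cell trends.
    w = np.cos(np.deg2rad(lat_deg))[:, None] * np.isfinite(slope)
    global_trend = np.nansum(slope * w) / w.sum()
    return slope, global_trend                         # degC per decade
```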

Next, here is the corresponding map for the average temperatures down to 2,000 metres:

[Image: trend ocean 0to2000m temps argo 2005 2012]

Figure 4. Decadal change in average temperatures, 0-2000 metres. Temperatures are volume-averaged.

Note that although the changes are smaller in magnitude, the trends down to 2000 metres are geographically similar to the trends at the surface.

Figure 5 shows the global average trends in the top 2,000 metres of the ocean. I have expressed the changes in another unit, 10^22 joules rather than °C, to show them as variations in ocean heat content.

[Image: OHC argo 0to2000 2005to2012 loess decomp]

Figure 5. Global ocean heat content anomaly (10^22 joules). Same data as in Figure 4, expressed in different units.

The trend in this data (6.9 ± 0.6 × 10^22 joules per decade) agrees quite well with the trend in the Levitus OHC data, which is about 7.4 ± 0.8 × 10^22 joules per decade.
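
A back-of-the-envelope check of that conversion, using nominal round numbers of my own choosing (ocean area, seawater density and specific heat), not the exact constants behind Figure 5:

```python
# Rough conversion from a 0-2000 m mean temperature change to ocean heat content.
OCEAN_AREA_M2 = 3.6e14    # ~3.6e8 km^2 of ocean surface (nominal)
DEPTH_M = 2000.0
RHO = 1025.0              # seawater density, kg/m^3 (nominal)
CP = 3990.0               # seawater specific heat, J/(kg K) (nominal)

def ohc_change_joules(delta_temp_c):
    """Heat required to warm the 0-2000 m layer by delta_temp_c degrees C."""
    mass_kg = RHO * OCEAN_AREA_M2 * DEPTH_M
    return mass_kg * CP * delta_temp_c

# 0.02 degC per decade, in units of 1e22 joules per decade:
print(ohc_change_joules(0.02) / 1e22)   # roughly 6, the same order as the trend above
```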

Anyhow, that’s the state of play so far. The top two kilometers of the ocean are warming at 0.02°C per decade … can’t say I’m worried by that. More to come, unless I get distracted by … oooh, shiny!

Regards,

w.

SAME OLD: If you disagree with something I or anyone said, please quote it exactly, so we can all be clear on exactly what you object to.

232 Comments
Paul Westhaver
March 2, 2014 3:41 pm

Willis Eschenbach says:
March 2, 2014 at 2:39 pm
Paul Westhaver says:
March 2, 2014 at 12:26 pm
I don’t see how 4000 devices can produce a 4-dimensional (x,y, temp, time) image that has that level of resolution in each frame…
Willis Eschenbach then says:
You’re looking at Nyquist backwards. Since it’s physically sampled at ~30000 samples per frame, we can’t detect anything changing on a smaller physical scale than that. In fact, we can’t really detect anything changing on a smaller scale than twice that size …
___________________________________________________________________________
Nope. I don’t think so. In order to detect a frequency f you need to sample at roughly 2 x the frequency f.
In Figure 1, if that animation is generated with Argo data, which is only 4000 buoys, then the actual image resolution belies the implied artistry. If, rather, the image is generated with satellite imagery, from a higher-resolution sensor array, say 60,000 cells, then the 30,000 pixels seem approximately reasonable.
Since the image is titled 0-2000 meters, it can’t be sub-surface temperatures if a satellite sensor was used. I assume Argos were used.
So either Figure 1 is generated with a satellite OR it is at least 90% an artist’s rendering.
So. Was the image in Fig 1. generated with ~4000 thermometers, or ~60,000?
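
For what it’s worth, here is a toy sketch (synthetic positions, not the real float tracks, and not Willis’s gridding code) of the arithmetic behind that question: roughly 4,000 floats surfacing every ~10 days give on the order of 12,000 profiles per month, which get binned into grid cells and averaged; anything drawn at finer resolution than the float spacing is interpolation, not new information.

```python
import numpy as np

rng = np.random.default_rng(1)
n_profiles = 4000 * 3                                    # ~3 surfacings per float per month
lats = np.degrees(np.arcsin(rng.uniform(-1, 1, n_profiles)))   # area-uniform latitudes
lons = rng.uniform(-180, 180, n_profiles)
temps = 28.0 * np.cos(np.deg2rad(lats)) + rng.normal(0, 0.5, n_profiles)

# Bin profiles into 5-degree cells; cells with no profile stay NaN.
cell = 5.0
n_lat, n_lon = int(180 / cell), int(360 / cell)
sums = np.zeros((n_lat, n_lon))
counts = np.zeros((n_lat, n_lon))
i = np.clip(((lats + 90) / cell).astype(int), 0, n_lat - 1)
j = np.clip(((lons + 180) / cell).astype(int), 0, n_lon - 1)
np.add.at(sums, (i, j), temps)
np.add.at(counts, (i, j), 1)
gridded = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

print("cells with at least one profile this 'month':", int((counts > 0).sum()), "of", n_lat * n_lon)
print("mean of the gridded cells: %.1f C" % np.nanmean(gridded))
```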

George E. Smith
March 2, 2014 3:46 pm

“””””…..Paul Westhaver says:
March 2, 2014 at 12:58 pm
george e. smith says:
March 2, 2014 at 12:36 pm
Why all this sudden interest in the Nyquist sampling theorem ?
____________________________________________________
George,
My interest in adhering to the Nyquist-Shannon ……””””
Paul, you didn’t catch that my question was chastisal; not at you, but at the level of ignorance among people who deal with data samples.
We are all here chatting on a communication network that simply would not exist but for the fact that communication engineers DO NOT neglect their Nyquist duties. I’m not familiar with Claude Shannon having much to do with the Nyquist theorem. He of course does have his own pedestal when it comes to the signal-to-noise ratio of bandwidth-limited systems, and the data capacity of noisy channels.
All of the gee whizz statistics that climatists do on measurements of weather/climate variables, is total nonsense, when they don’t even have valid samples being input as data.
The spectrum foldback of undersampled signals, results in spurious signals that are now inside the signal bandwidth, so they are impossible to remove with filtering, without also altering real signal information, which itself will corrupt the measurement.
And as Willis pointed out, if you under sample by only a factor of two, the spectrum folds back all the way to zero frequency, which gives aliasing noise at zero frequency, which happens to be the average of the signal; exactly the information that was being sought; that no longer is valid.
No amount of statistical prestidigitation, can buy one a reprieve from the consequences of a Nyquist violation.
[The mods admit this is only the second chastisal comment ever submitted to the web. 8<) Mod]
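
To make the fold-back point concrete, here is a toy example (a synthetic sine wave, nothing to do with the actual Argo sampling): a roughly daily oscillation sampled only once every 10 days aliases down to a period of about 1,000 days, and over a one-year record it no longer averages out to zero.

```python
import numpy as np

f_true = 0.901          # cycles per day, far above the Nyquist limit of the sampling
dt = 10.0               # days between samples -> Nyquist frequency = 0.05 cycles/day
t = np.arange(0, 365, dt)
samples = np.sin(2 * np.pi * f_true * t)

# The apparent (aliased) frequency after folding into [0, 1/(2*dt)]:
fs = 1.0 / dt
f_alias = abs(f_true - round(f_true / fs) * fs)          # ~0.001 cycles/day
print("aliased frequency: %.3f cycles/day" % f_alias)
print("record mean: %.2f (a pure sine should average to ~0)" % samples.mean())
```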

Billy Liar
March 2, 2014 3:49 pm

Willis (E) said:
The part that doesn’t fit your preconceptions is that Josh Willis had already written about the recent ocean cooling … so he was surprised and had to retract some claims when he noted that the purported cooling didn’t agree with the sea level data he had accepted as valid.
Willis (E) – do you think (J) Willis was right to throw out actual measurements from Argo floats because he believed some satellite fiction about sea level? Envisat, of course, agreed with the floats but didn’t last long enough.
http://www.c3headlines.com/2011/04/eu-satellite-documents-huge-sea-level-decline-that-us-scientists-refuse-to-discuss-or-publish.html

Billy Liar
March 2, 2014 3:50 pm

Tag failure! Italic is supposed to stop after first para.

George E. Smith
March 2, 2014 3:58 pm

“””””…..Willis Eschenbach says:
March 2, 2014 at 3:37 pm
George E. Smith says:
March 2, 2014 at 3:25 pm
“””””…..RichardLH says:
March 2, 2014 at 1:24 pm…..”””””
OK ! I surrender.
Clearly, nuking Hiroshima and Nagasaki didn’t do anything much; on average. Well, just divide by n and you will see it’s not much.
So sorry for the kerfuffle, Dr. Trenberth; yes, the sun does shine 24/7/52 continuously but dimly at the south pole. How could I have gotten so confused? Of course an input trickle of energy, no matter how small, will produce as high a temperature as you like; you just have to wait a while for the average to add up. It’s a Heisenberg thing: dT × dt ≥ h / 2π

Paul Westhaver
March 2, 2014 4:17 pm

George,
I understand your point and I’ll even go one further and acknowledge that W.E. is entitled to do whatever he wants regardless of Nyquist. My persistence was 1) to encourage due diligence on our part, because we should and are able, and 2) to get some actual detail about the data set source. It seemed way too pretty a plot based on the number of available coincident (same depth notwithstanding) temp measurements.
In any event, no personal chastisement taken…. today.
Cheers.

March 2, 2014 5:03 pm

Here’s the original paper by Josh Willis showing the sharp reversal in ocean heating to cooling in the original data when the ARGO system came on board in 2003-2004:
Figure showing the original data:
http://3.bp.blogspot.com/-Y0SBIZuHTsI/UxPRKFEi68I/AAAAAAAAFzI/dtHsmluws6Y/s1600/www.junkscience.org+Greenhouse+heat_2006_1.pdf.png
The paper and the correction published the following year [all in same pdf]:
http://www.junkscience.org/Greenhouse/heat_2006_1.pdf
The suggestion of ocean cooling in the 2006 paper caused quite a stir in the climate community, so a “software problem” was found in the cold floats and the data was thrown out for those. Did any floats run too hot?
Draw your own conclusions on the merits of throwing out the cold floats from the published correction above.

Mike Wryley
March 2, 2014 5:20 pm

This was a pretty good blog so far, esp. with WE’s epiphany on samples per month; lots of good points from divergent disciplines.

RoHa
March 2, 2014 5:37 pm

“The top two kilometers of the ocean are warming at 0.02°C per decade … can’t say I’m worried by that.”
O.K., smart Alec. So what are we doomed from now?

george e smith
March 2, 2014 6:19 pm

WE’s map of the Argo distribution from 2-Mar-2014 sure looks like a quite reasonable poke around the oceans. I suppose one could deduce, from knowledge of their exact locations, what the likely maximum useful bandwidth of spatial signals would be. Remember that undersampling simply hides from you what went on in between samples (Argo buoys).
I’m more than elated with what WE’s movies show about what goes on out there in the oceans during a typical year.
Yes, if I lived on an islet in the middle of an Argo hole, I’d feel jilted. So I’d ask for a grant to get our own Argo buoy for the islet. But remember that 2,000 metres of depth doesn’t get you very close to too many islets.

Mervyn
March 2, 2014 7:09 pm

About those 4,000 Argo floats in the ocean… I seriously worry about the maintenance program relating to these Argo floats. I really do! And so should everyone else.

Matthew R Marler
March 2, 2014 7:10 pm

Willis, thanks again. I have not had time yet to read all comments and replies — I’ll check back later.
Two questions: 1) What is the function that is fitted to the data to get the seasonal variation? 2) Is the “residual” the difference between the Loess smooth and the fitted seasonal variation? Sorry if I missed that.
I sympathize with everyone on the issue of standard errors: yes they would be good, but they are hard work for spatio-temporally correlated data, especially when the “space” is 3D, not merely 2D as with Kriging. It’s even harder when, as with the Argo floats, they are drifting.
I think everyone understands that you understand that the conversion from temps to heat is an uncertain process. It is still useful to do something first, instead of postponing every detail until later. You could work with these data a long, long time. God Speed!
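
Since the exact decomposition behind Figure 5 isn’t spelled out in the post, here is one plausible way such a “loess decomp” might be reproduced (an assumption on my part, not necessarily Willis’s method): a monthly-climatology seasonal component plus a LOWESS smooth, with the residual being what is left after removing both.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def decompose_monthly(y, frac=0.4):
    """Seasonal + LOWESS decomposition of a monthly series (>= 1 full year)."""
    y = np.asarray(y, dtype=float)
    months = np.arange(len(y)) % 12

    # Seasonal component: mean anomaly of each calendar month.
    clim = np.array([y[months == m].mean() for m in range(12)])
    seasonal = clim[months] - clim.mean()

    # LOWESS smooth of the deseasonalized series (the slow curve in the plot).
    x = np.arange(len(y), dtype=float)
    smooth = lowess(y - seasonal, x, frac=frac, return_sorted=False)

    residual = y - seasonal - smooth
    return seasonal, smooth, residual
```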

ECK
March 2, 2014 7:29 pm

I am truly impressed with the volume of responses generated in response to Willis! It’s gratifying to see that a person so versed in statistics and math is so widely read and (mostly) appreciated. Keep it up Willis! I enjoy these discussions even though I’m not quite up on the math any more (I’m old).

ferdberple
March 2, 2014 7:43 pm

Willis Eschenbach says:
March 2, 2014 at 2:27 pm
In any case, I fear that based only on your memory, you’ve made an unsubstantiated, unpleasant, and untrue accusation of scientific malfeasance against Josh Willis …
===========
Nonsense. I’m talking about the process, not the people. I’ve pointed out that making adjustments based on the observed cooling trend is a form of experimenter expectation bias. It is reminiscent of using thermometer data to selectively eliminate some sets of tree rings during “calibration”.
Statistically, the floats that are “too cool” are telling you something. They are telling you that the Argo data is likely not as accurate as believed. If there are floats that are “too cool”, then there may be undetected floats that are “too warm”. The latter would not trigger a search, because a warming trend would match expectations.
This was the problem with tree ring calibration. It gave a false statistical measure of accuracy. It made the tree rings that passed calibration look more accurate statistically than they were. In reality the tree rings that did not pass calibration were telling you that tree rings were not accurate thermometers.
As for my memory:
jeffguenther8 says:
March 2, 2014 at 2:12 pm
http://earthobservatory.nasa.gov/Features/OceanCooling/page1.php
“Basically, I used the sea level data as a bridge to the in situ [ocean-based] data,” explains Willis, comparing them to one another figuring out where they didn’t agree. “First, I identified some new Argo floats that were giving bad data; they were too cool compared to other sources of data during the time period. It wasn’t a large number of floats, but the data were bad enough, so that when I tossed them, most of the cooling went away. But there was still a little bit, so I kept digging and digging.”

ferdberple
March 2, 2014 8:46 pm

Selectively including/excluding floats or tree rings based on temperature is “selecting on the dependent variable”, since it is temperature you are seeking to study, under the assumption that floats or tree rings provide a measure of temperature.
http://www.nyu.edu/classes/nbeck/q2/geddes.pdf
Most graduate students learn in the statistics courses forced upon them that selection on the dependent variable is forbidden, but few remember why, or what the implications of violating this taboo are for their own work.
http://poli.haifa.ac.il/~levi/pitfalls.html
This is the basis for warning about the hazards of “selecting on the dependent variable”. This expression refers, not only to the deliberate selection of cases according to their scores on this variable, but to any mode of selection correlated with the dependent variable (i.e., tending to select cases that have higher, or lower, values on that variable) once the effect of the explanatory variable is removed. If such a correlation exists, causal inference will tend to be biased (Collier, 1995, 461).
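
Here is a toy simulation (entirely synthetic numbers, no real Argo or tree-ring data) of the bias being described: the true trend is zero and every instrument drifts randomly, but only the cool-drifting instruments are culled, so the mean trend of the survivors comes out spuriously warm.

```python
import numpy as np

rng = np.random.default_rng(42)
n_instruments, n_years = 200, 10
years = np.arange(n_years)

bias = rng.normal(0.0, 0.2, n_instruments)             # per-instrument drift, degC/yr
noise = rng.normal(0.0, 0.1, (n_instruments, n_years))
records = bias[:, None] * years + noise                # true climate trend = 0

slopes = np.polyfit(years, records.T, 1)[0]            # fitted trend of each instrument

keep_all = slopes.mean()
cool_culled = slopes[slopes > -0.1].mean()             # only "too cool" instruments removed

print("mean trend, all instruments:         %+.3f degC/yr" % keep_all)
print("mean trend, cool instruments culled: %+.3f degC/yr" % cool_culled)
```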

Ed, Mr. Jones
March 2, 2014 8:58 pm

Willis,
It appears, during the period of the animation, that the warmest waters achieve higher latitudinal amplitude in the Northern Hemisphere than in the Southern. I wonder if this holds over longer periods . . .
Sometimes I visualize the NH & SH as two horizontally opposed cylinders connected by a single intake/exhaust manifold . . . .

March 2, 2014 9:07 pm

Um, if they nap at 1,000 m, how do we know what the temperature is at 2,000 m? Same program as alleging that the deep-ocean warmth is trapped beneath a salinity inversion in Antarctica. The fresher water is how deep? The temperature/salinity/density relationship is non-linear, and the amount of freshwater is trivial in comparison to ocean volume.
Beware the constant effort to bury the missing heat in the deep ocean. The Argo floats were clearly designed to measure the mixed layer. Kick me, please, but I suspect they just extrapolated down another thousand meters.

March 2, 2014 9:21 pm

A couple more questions and observations.
What is the vertical resolution in meters of the temperature readings in each sounding?
Figure 2, from the 11:42 am reply to rgb, has 42% of all the ocean with between 1 and 24 soundings over the entire period. That is at best 4 per year. But do they tend to cluster: 2 or 3, ten days apart, then a long hiatus before the next buoy drifts into the cell?
Does the data show the time of recording as well as the date?
The long and short of it is, is there a diurnal component to the surface temperature and is it recorded?
If an Argo float samples the surface at 13:30 on one day, will it be about 13:30 ten days later, or does it let the time drift? Do they sample the surface around the clock?
Figure 2 shows much more blotchiness in the SH Pacific than I would have guessed for 6 years of data. Perhaps there is some divergence and convergence of float paths at 1000 m where they drift. Downwelling draws floats to concentrate, upwelling disperses them. If that is true, then floats would be preferentially sampling downwelling, saltier, warmer water rather than the upwelling water. An interesting test would be whether high-concentration cells show a slightly warmer temperature than nearby low-concentration cells.
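
If anyone wants to run the clustering check suggested above, here is a rough sketch of how it might look with pandas; the column names (‘lat’, ‘lon’, ‘date’) are assumptions about a profile table, not the actual Argo file layout.

```python
import numpy as np
import pandas as pd

def sounding_gaps(profiles, cell_deg=1.0):
    """Gaps (in days) between successive soundings within each grid cell.

    profiles : DataFrame with 'lat', 'lon' (degrees) and 'date' (datetime).
    """
    df = profiles.copy()
    df["cell"] = (
        np.floor((df["lat"] + 90) / cell_deg).astype(int) * 1000
        + np.floor((df["lon"] + 180) / cell_deg).astype(int)
    )
    df = df.sort_values(["cell", "date"])
    gaps = df.groupby("cell")["date"].diff().dt.days.dropna()
    return gaps   # e.g. gaps.describe() distinguishes clustering from long hiatuses
```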

March 2, 2014 9:33 pm

On ferdberple’s caution that floats that are too cool may sometimes be discarded:
An interesting test would be to see for each ARGO ID (there must be one in the data), are there any missing dates in its history?

Michael Whittemore
March 2, 2014 9:33 pm

This blog post averages out the temperatures from 0-2000 meters, but isn’t science (peer-reviewed) saying that the lower part of the ocean is seeing an increase?

george e smith
March 2, 2014 9:56 pm

“””””…..Ed, Mr. Jones says:
March 2, 2014 at 8:58 pm
Willis,
It appears, during the period of the animation, that the warmest waters achieve higher latitudinal amplitude in the Northern Hemisphere than in the Southern. I wonder if this holds over longer periods . . .”””””
Couldn’t possibly be that northern winters occur around perihelion, while southern winters occur at aphelion, and are therefore colder and longer ??
Nah ! Can’t be that easy; must be some other reason.

george e smith
March 2, 2014 10:08 pm

“””””…..Paul Westhaver says:
March 2, 2014 at 4:17 pm
George,……”””””
Paul, you realize, I wasn’t criticizing Willis in any way. He ingeniously presented the available data from Argo, to give us some insight that would be hard to get by other means.
But he never claimed or intimated to reveal anything going on in between the buoys. So he accepted the spatial signal bandwidth that the Argo sampling dealt him. It was the suggestion by others that some regions are under-represented, so things are going on in there that we don’t know about (they are); but Willis did not try to expand his level of detail knowledge beyond the resolution (or bandwidth) that the Argo arrays provide for.

Paul Westhaver
March 2, 2014 10:19 pm

George,
No, I made no assessment about what you said to that effect at all. I didn’t consider what you said beyond the words you wrote.
My question (as yet unanswered except by you) was one of clarification, precipitated by my knowledge of signal processing and image analysis.
It seems to me that the animation really should be considered an “artist’s simulated rendering” rather than a plot of data. The space filling is fine with me, but it should be labelled as such. Before I made such a claim, I wanted to be sure I understood the data set.
So, I consider the animated plot a vague approximation rendered by an artist based on a limited data set. Were this in a scientific journal, authored by a fat, goatee-sporting, megalomaniacal climate Nazi, I would be less congenial.

RACookPE1978
Editor
March 2, 2014 10:45 pm

Michael Whittemore says:
March 2, 2014 at 9:33 pm
This blog post averages out the temperatures from 0-2000 meters, but isn’t science (peer-reviewed) saying that the lower part of the ocean is seeing an increase?

Those are the claims. But there are no measurements across the oceans, in the deep oceans, showing any of those guesses are actually true. Further, there is absolutely NO evidence of ANY historical trend backing up those claims over any period of time.
Worse, even ‘IF’ there were historical evidence of any heat energy from the atmosphere getting “stored” into the deep oceans, heat transfer is – at EVERY energy change – ONLY a function of the difference in temperature or state. Water is 1000 times denser than air, and has a different heat coefficient. If the atmosphere heats up by 1/5 of one degree, as it “might have” heated up between 1975 and 1998, the ocean – AT MAXIMUM – can only heat up by 1/1000 of that amount, or about 1/1000 x 1/5 = 1/5000 of one degree, in those same 25 years. Because the atmosphere has NOT heated up between 1997 and 2014 due to CO2 – or ANYTHING ELSE – over a 17.5-year period, the ocean could not have heated up during that same period. “IF” something else (unknown) prevented the atmosphere from heating up during those 17.5 years since 1997, but drove ALL of that energy into the oceans between 1997 and 2014, the mechanism remains unknown. What “changed” between 1945 (when cooling started and CO2 began increasing), 1973 (when heating started but CO2 continued to increase), 1997 (when heating stopped but CO2 continues to increase) and 2014, to cause “some” of the heat to go into the ocean (somehow) and “some” of it to go into the atmosphere at different rates over different periods of time, obviously must also be unknown.
Now, even “if” the deep ocean were to warm by 1/5 x 1/1000 of one degree, then IT could only heat the air again at the end of the cycle by 1/5 x 1/1000 of one degree. In other words, nothing.
These government-paid “scientists” are getting paid billions of dollars per year to issue government-needed propaganda for their government-funding sources and the government-granting bureaucratic heads in the White House. Not facts, not science. Propaganda.