Krige the Argo Probe Data, Mr. Spock!

A few weeks ago I wrote a piece highlighting a comment made in "Earth's Energy Imbalance and Implications" by James Hansen et al. (hereinafter H2011). Some folks said I should take a real look at Hansen's paper, so I have done so twice: first a quick look, "Losing Your Imbalance", and now this study. The claims and conclusions of the H2011 study rest mainly on the ocean heat content (OHC), as measured in large part by the data from the Argo floats, so I thought I should look at that data. The Argo temperature and salinity measurements form a great dataset that gives us much valuable information about the ocean. The H2011 paper utilizes the recent results from "How well can we derive Global Ocean Indicators from Argo data?" by K. von Schuckmann and P.-Y. Le Traon (hereinafter SLT2011).

Figure 1. Argo float. Complete float is about 2 metres (6′) tall. SOURCE: Wikipedia

The Argo floats are diving floats that operate on their own. Each float measures one complete vertical temperature profile every ten days, going down to either 1,000 or 2,000 metres depth, and reports each dive's results by satellite before the next dive.

Unfortunately, as used in H2011, the Argo data suffers from some problems. The time span of the dataset is very short. The changes are quite small. The accuracy is overestimated.  Finally, and most importantly, the investigators are using the wrong method to analyze the Argo data.

First, the length of the dataset. The SLT2011 data used by Hansen is only 72 months long, which limits the conclusions we can draw from it. H2011 gets around that by showing only a six-year moving average, not just of this data but of all the data used. I really don't like it when only smoothed data is shown and the raw data is not, as Hansen has done.

Second, the differences are quite small. Here is the record as shown in SLT2011. They show the data as changes in upper ocean heat content (OHC), in joules per square metre. I have converted the OHC change (J/m2) to the corresponding change in degrees Celsius for the water they are measuring, a one-metre-square column of water 1,490 metres deep. As you can see, SLT2011 is discussing very small temperature variations. The same is true of the H2011 paper.
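For anyone who wants to check that conversion, here is a minimal back-of-envelope sketch; the seawater density and specific heat are round-number assumptions on my part, not values taken from SLT2011.

```python
# Convert a change in upper-ocean heat content (J/m^2) into the implied
# temperature change (deg C) of a 1 m x 1 m column of seawater 1,490 m deep.
RHO = 1025.0     # seawater density, kg/m^3 (assumed round number)
CP = 3990.0      # seawater specific heat, J/(kg K) (assumed round number)
DEPTH = 1490.0   # depth of the SLT2011 column, m (10 m down to 1,500 m)

COLUMN_HEAT_CAPACITY = RHO * CP * DEPTH   # J per m^2 per deg C, roughly 6.1e9

def ohc_change_to_degC(delta_ohc_j_per_m2):
    """Temperature change implied by a given change in column heat content."""
    return delta_ohc_j_per_m2 / COLUMN_HEAT_CAPACITY

# Example: a 1e8 J/m^2 change in upper-ocean heat content
print(ohc_change_to_degC(1e8))   # about 0.016 deg C
```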

Figure 2. Upper ocean temperatures from Schuckmann & Le Traon, 2011 (SLT2011). Grey bars show one sigma errors of the data. Red line is a 17-point Gaussian average. Vertical red line shows the error of the Gaussian average at the boundary of the dataset (95% CI). Data digitized from SLT2011, Figure 5 b), available as a comma-separated text file here.

There are a few things of note in the dataset. First, we’re dealing with minuscule temperature changes. The length of the gray bars shows that SLT2011 claims that we can measure the temperature of the upper kilometer and a half of the ocean with an error (presumably one sigma) of only ± eight thousandths of a degree … 

Now, I hate to argue from incredulity, and I will give ample statistical reasons further down, but frankly, Scarlett … eight thousandths of a degree error in the measurement of the monthly average temperature of the top mile of water of almost the entire ocean? Really? They believe they can measure the ocean temperature to that kind of precision, much less accuracy?

I find that very difficult to believe. I understand the law of large numbers and the central limit theorem and how that gives us extra leverage, but I find the idea that we can measure the temperature of four hundred million cubic kilometres of ocean water to a precision of ± eight thousandths of a degree to be … well, let me call it unsubstantiated. Others who have practical experience in measuring the temperatures of liquids to less than a hundredth of a degree, feel free to chime in, but to me that seems like a bridge way too far. Yes, there are some 2,500 Argo floats out there, and on a map the ocean looks pretty densely sampled. Figure 3 shows where the Argo floats were in 2011.

Figure 3. Locations of Argo floats, 2011. SOURCE

But that's just a chart. The world is unimaginably huge. In the real ocean, down to a kilometre and a half of depth, that's one Argo thermometer for each 165,000 cubic kilometres of water … I'm not sure how to give an idea of just how big that is. Let's try it this way. Lake Superior is the largest lake in the Americas, visible even on the world map above. How accurately could you measure the average monthly temperature of the entire volume of Lake Superior with one Argo float? Sure, you can let it bob up and down and drift around the lake, taking three vertical profiles a month. But even then, each measurement will only cover a tiny part of the entire lake.

But it's worse for Argo. Each Argo float, each dot in Figure 3, represents a volume as large as 13 Lake Superiors … with one lonely Argo thermometer …

Or we could look at it another way. There were about 2,500 Argo floats in operation over the period covered by SLT2011. The area of the ocean is about 360 million square kilometres, so each Argo float represents an area of about 140,000 square kilometres, a square about 380 km (240 mi) on each side. One Argo float for all of that. Each dive cycle takes ten days: the float descends to about 1,000 metres and drifts there for nine days, then either rises from there or first descends to about 2,000 metres, and finally rises to the surface at about 10 cm (4″) per second over about six hours, profiling the temperature and salinity as it rises. So we get three vertical temperature profiles from 0-1,000 or 0-1,500 or 0-2,000 metres each month, depending on the particular float, to cover an area of 140,000 square kilometres … I'm sorry, but three vertical temperature profiles per month to cover an area of 60,000 square miles, a mile deep, doesn't scream "thousandths of a degree temperature accuracy" to me.
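For anyone who wants to redo the per-float arithmetic, here is a minimal sketch; the ocean area, float count, measured volume and Lake Superior volume are round figures of mine, not numbers taken from SLT2011.

```python
# Rough per-float coverage arithmetic (round-number assumptions throughout).
OCEAN_AREA_KM2 = 360e6        # total ocean area, km^2 (approx.)
N_FLOATS = 2500               # Argo floats in operation (approx.)
MEASURED_VOLUME_KM3 = 413e6   # upper-ocean volume covered in SLT2011 (approx.)
LAKE_SUPERIOR_KM3 = 12100     # volume of Lake Superior, km^3 (approx.)

area_per_float = OCEAN_AREA_KM2 / N_FLOATS            # ~144,000 km^2
side_of_square = area_per_float ** 0.5                # ~380 km on a side
volume_per_float = MEASURED_VOLUME_KM3 / N_FLOATS     # ~165,000 km^3
superiors_per_float = volume_per_float / LAKE_SUPERIOR_KM3   # ~13-14 Lake Superiors

print(area_per_float, side_of_square, volume_per_float, superiors_per_float)
```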

Here's a third way to look at the size of the measurement challenge. Those of you who have been out of sight of land in a small boat know how big the ocean looks from the deck. Suppose the deck of the boat is a metre (3′) above the water, and you stand up on deck and look around: nothing but ocean stretching all the way to the horizon, a vast immensity of water on all sides. How many thermometer readings would it take to get the monthly average temperature of just the ocean you can see, to a depth of one mile? I would say … more than one.

Now, consider that each Argo float has to cover an area that is more than 2,000 times the area of the ocean you can see from your perch standing there on deck … and the float is making three dives per month … how well do the measurements encompass and represent the reality?

There is another difficulty. Figure 2 shows that most of the change over the period occurred in a single year, from about mid 2007 to mid 2008. The change in forcing required to change the temperature of a kilometre and a half of water by that much is about 2 W/m2 over that year-long period. The "imbalance", to use Hansen's term, is even worse when we look at the amount of energy required to warm the upper ocean from May 2007 to August 2008: that requires a global "imbalance" of about 2.7 W/m2 over the period.
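Here is a sketch of the conversion behind those numbers, using the same assumed column heat capacity as above; the 0.01°C and one-year figures are illustrative round numbers, not values digitized from Figure 2.

```python
# Average flux (W/m^2) needed to warm a 1,490 m deep seawater column by a
# given amount over a given number of days (round-number assumptions).
RHO, CP, DEPTH = 1025.0, 3990.0, 1490.0       # kg/m^3, J/(kg K), m
COLUMN_HEAT_CAPACITY = RHO * CP * DEPTH       # ~6.1e9 J per m^2 per deg C

def implied_flux_w_per_m2(delta_t_degC, days):
    """Average flux needed to warm the column by delta_t_degC over 'days'."""
    return delta_t_degC * COLUMN_HEAT_CAPACITY / (days * 86400.0)

# Example: roughly 0.01 deg C of column warming in one year
print(implied_flux_w_per_m2(0.01, 365))       # about 1.9 W/m^2
```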

Now, if that were my dataset, the first thing I’d be looking at is what changed in mid 2007. Why did the global “imbalance” suddenly jump to 2.7 W/m2? And more to the point, why did the upper ocean warm, but not the surface temperature?

I don't have any answers to those questions; my first guess would be "clouds" … but before I used that dataset, I'd want to go down that road and find out why there was such a big jump in 2007. What changed, and why? If our interest is in global "imbalance", there's an imbalance to study.

(In passing, let me note that an incorrect simplifying assumption is used to eliminate ocean heat content in order to arrive at the canonical climate equation. That canonical equation is

Change In Temperature = Sensitivity times Change In Forcing

The error is to assume that the change in oceanic heat content (OHC) is a linear function of surface temperature change ∆T. It is not, as the Argo data confirms … I discussed this error in a previous post, “The Cold Equations“.  But I digress …)
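For completeness, here is that bookkeeping in symbols (the notation is mine, not H2011's): the forcing change is balanced by the feedback response plus the rate of heat storage, and the canonical equation only follows if the storage term is dropped or assumed proportional to the temperature change.

```latex
% Energy balance with the heat-storage term kept explicit (sketch, my notation):
%   \Delta F = change in forcing, \Delta T = surface temperature change,
%   \lambda  = climate feedback parameter, dH/dt = rate of (mostly oceanic) heat uptake
\Delta F \;=\; \lambda\,\Delta T \;+\; \frac{dH}{dt}
% The canonical form \Delta T = S\,\Delta F follows only if dH/dt = 0 (equilibrium)
% or if one assumes dH/dt = \kappa\,\Delta T, i.e. that ocean heat uptake is a
% linear function of \Delta T, which is the assumption questioned above.
```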

The SLT2011 Argo record also has an oddity shared by some other temperature records. The swing over the whole time period is about a hundredth of a degree. The largest one-year jump in the data is about a hundredth of a degree. The largest one-month jump in the data is about a hundredth of a degree. When short and long time spans show swings of the same size, it's hard to say a whole lot about the data; it makes the data very difficult to interpret. For example, the imbalance necessary to produce the largest one-month change in OHC is about 24 W/m2. Changes in OHC like that would be worth looking at, to see a) if they're real and b) if so, what changed, before moving forwards …
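The same back-of-envelope conversion for a one-month jump, again with round-number assumptions for the column's heat capacity:

```python
# ~0.01 deg C warming of a 1,490 m seawater column in one month (30 days),
# using assumed round numbers for density (1025) and specific heat (3990).
print(0.01 * 1025 * 3990 * 1490 / (30 * 86400))   # about 23.5 W/m^2
```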

In any case, that was my second issue, the tiny size of the temperature differences being measured.

Next, coverage. The Argo analysis of SLT2011 only uses data down to 1,500 metres depth. They say that the Argo coverage below that depth is too sparse to be meaningful, although the situation is improving. In addition, the Argo analysis only covers from 60°N to 60°S, which leaves out the Arctic and Southern Oceans, again because of inadequate coverage. Next, it starts at 10 metres below the surface, so it misses the crucial surface layer which, although small in volume, undergoes large temperature variations. Finally, their analysis misses the continental shelves because it only considers areas where the ocean is deeper than one kilometre. Figure 4 shows how much of the ocean volume the Argo floats are actually measuring in SLT2011: about 31%.

Figure 4. Amount of the world’s oceans measured by the Argo float system as used in SLT2011.

In addition to the volume measured by the Argo floats, Figure 4 shows that there are a number of other oceanic volumes. H2011 includes figures for some of these, including the Southern Ocean, the Arctic Ocean, and the abyssal waters. Hansen points out that the source he used (Purkey and Johnson, hereinafter PJ2010) says there is no temperature change in the waters between 2 and 4 km depth, which is most of the water shown on the right side of Figure 4. It is not clear how the bottom waters are warming without the middle waters warming. I can't think of how that might happen … but that's what PJ2010 says: the blue area on the right, representing half the oceanic volume, is not changing temperature at all.

Neither H2011 nor SLT2011 offers an analysis of the effect of omitting the continental shelves or the thin surface layer. In that regard, it is worth noting that a ten-metre surface layer like that shown in Figure 4 can change by a full degree in temperature without much problem … and if it does so, that is about the same change in ocean heat content as the 0.01°C of warming of the entire volume measured by the Argo floats. So that surface layer is far too large a factor to be simply omitted from the analysis.
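A quick sketch of that comparison; since the per-metre heat capacity is the same for both volumes, only the product of layer thickness and temperature change matters.

```python
# Heat content change per square metre of ocean scales with
# (layer thickness) x (temperature change), so the common per-metre
# heat capacity cancels out of the comparison.
surface_layer = 10 * 1.0      # 10 m layer warming by 1.0 deg C     -> 10 metre-degC
argo_column   = 1490 * 0.01   # 1,490 m column warming by 0.01 deg C -> ~15 metre-degC
print(surface_layer, argo_column)   # same order of magnitude
```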

There is another problem with the  figures H2011 use for the change in heat content of the abyssal waters (below 4 km). The cited study, PJ2010, says:

Excepting the Arctic Ocean and Nordic seas, the rate of abyssal (below 4000 m) global ocean heat content change in the 1990s and 2000s is equivalent to a heat flux of 0.027 (±0.009) W m−2 applied over the entire surface of the earth. SOURCE: PJ2010

That works out to a claimed warming rate of the abyssal ocean of 0.0007°C per year, with a claimed 95% confidence interval of ± 0.0002°C/yr … I'm sorry, but I don't buy it. I do not accept that we know the rate of the annual temperature rise of the abyssal waters to the nearest two ten-thousandths of a degree per year, no matter what PJ2010 might claim. The surface waters are sampled regularly by thousands of Argo floats; the abyssal waters see the odd transect or two per decade. I don't think our measurements are sufficient.
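Here is a rough sketch of how that flux converts into a warming rate. Note that the abyssal volume below 4,000 m is an assumed round figure of mine, not a number from PJ2010, so treat the result as order-of-magnitude only.

```python
# Convert a heat flux applied over the whole Earth into an abyssal warming rate.
# The abyssal volume is an assumed round figure (NOT taken from PJ2010).
EARTH_AREA_M2 = 5.1e14        # surface area of the Earth, m^2
FLUX_W_M2 = 0.027             # PJ2010's quoted equivalent flux, W/m^2
ABYSSAL_VOLUME_M3 = 1.5e17    # assumed volume of ocean below 4,000 m, m^3
RHO, CP = 1040.0, 3900.0      # deep seawater density and specific heat (approx.)

joules_per_year = FLUX_W_M2 * EARTH_AREA_M2 * 365 * 86400
degC_per_year = joules_per_year / (ABYSSAL_VOLUME_M3 * RHO * CP)
print(degC_per_year)          # ~0.0007 deg C/yr with these assumptions
```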

One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world. There is an associated uncertainty that is sometimes not considered. This is the uncertainty of how much your measurement actually represents the entire volume or area being measured.

The underlying problem is that temperature is an "intensive" quality, whereas something like mass is an "extensive" quality. Measuring these two kinds of things, intensive and extensive variables, is very, very different. An extensive quality is one that changes with the amount (the "extent") of whatever is being measured. The mass of two glasses of water at 40° is twice the mass of one glass of water at 40°. To get the total mass, we just add the two masses together.

But do we add the two 40° temperatures together to get a total temperature of 80°? Nope, it doesn’t work that way, because temperature is an intensive quality. It doesn’t change based on the amount of stuff we are measuring.

Extensive qualities are generally easy to measure. If we have a large bathtub full of water, we can easily determine its mass. Put it on a scale, take one single measurement, you’re done. One measurement is all that is needed.

But the average temperature of the water is much harder to determine. It requires simultaneous measurements of the water temperature in a number of places, and the number of thermometers needed depends on the accuracy you want and the amount of variation in the water temperature. If there are warm spots or cold spots in the tub, you'll need a lot of thermometers to get an average that is accurate to, say, a tenth of a degree.

Now recall that instead of a bathtub with lots of thermometers, for the Argo data we have a chunk of ocean that’s 380 km (240 miles) on a side with a single Argo float taking its temperature. We’re measuring down a kilometre and a half (about a mile), and we get three vertical temperature profiles a month … how well do those three vertical temperature profiles characterize the actual temperature of sixty thousand square miles of ocean? (140,000 sq. km.)

Then consider further that the abyssal waters have far, far fewer thermometers way down there … and yet they claim even greater accuracies than the Argo data.

Please be clear that my argument is not about the ability of large numbers of measurements to improve the mathematical precision of the result. We have about 7,500 Argo vertical profiles per month. With the ocean surface divided into 864 gridboxes, if the standard deviation (SD) of the depth-integrated gridbox measurements is about 0.24°C, that is enough to give us mathematical precision of the order of magnitude that they have stated. The question is whether the SD of the gridboxes is really that small, and if so, how it got that small.
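As a sketch of that precision arithmetic, treating the profiles as independent random draws (the generous case) and using the 0.24°C standard deviation assumed above:

```python
# Standard error of a global monthly mean, treating ~7,500 Argo profiles as
# independent samples with an assumed depth-integrated SD of 0.24 deg C.
import math

N_PROFILES = 7500      # Argo vertical profiles per month (approx.)
ASSUMED_SD = 0.24      # assumed SD of depth-integrated gridbox values, deg C

standard_error = ASSUMED_SD / math.sqrt(N_PROFILES)
print(standard_error)  # about 0.003 deg C, i.e. the stated order of magnitude
```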

They discuss how they did their error analysis. I suspect that their problem lies in two areas. One is that I see no error estimate for the removal of the "climatology", the historical monthly average, from the data. The other involves the arcane method used to analyze the data by gridding it both horizontally and vertically. I'll deal with the climatology question first. Here is their description of their method:

2.2 Data processing method

An Argo climatology (ACLIM hereinafter, 2004–2009, von Schuckmann et al., 2009) is first interpolated on every profile position in order to fill gappy profiles at depth of each temperature and salinity profile. This procedure is necessary to calculate depth-integrated quantities. OHC [ocean heat content], OFC [ocean freshwater content] and SSL [steric (temperature related) sea level] are then calculated at every Argo profile position as described in von Schuckmann et al. (2009). Finally, anomalies of the physical properties at every profile position are calculated relative to ACLIM.

Terminology: a “temperature profile” is a string of measurements taken at increasing depths by an Argo float. A “profile position” is one of the preset pressure levels at which the Argo floats are set to take a sample.

This means that if there is missing data in a given profile, it is filled in using the "climatology", the long-term average of the data for that month and place. Now, this is going to introduce an error, likely not a large one, and it is one that they account for.

What I don't find accounted for in their error calculation is any error estimate related to the final sentence in the paragraph above. That sentence describes the subtraction of the ACLIM climatology from the data. ACLIM is an "Argo climatology", a month-by-month average of the temperatures at each depth level.

SLT2011 refers this question to an earlier document by the same authors, SLT2009, which describes the creation of the ACLIM climatology. I find that there are over 150 levels in the ACLIM climatology, as described by the authors:

The configuration is defined by the grid and the set of a priori information such as the climatology, a priori variances and covariances which are necessary to compute the covariance matrices. The analyzed field is defined on a horizontal 1/2° Mercator isotropic grid and is limited from 77°S to 77°N. There are 152 vertical levels defined between the surface and 2000m depth … The vertical spacing is 5m from the surface down to 100m depth, 10m from 100m to 800m and 20m from 800m down to 2000m depth.

So they have divided the upper ocean into gridboxes, and each gridbox into layers, to give gridcells. How many gridcells? Well, 360 degrees longitude * 2 * 180 degrees latitude * 2 * 70% of the world is ocean * 152 layers = 27,578,880 oceanic gridcells. Then they've calculated the month-by-month average temperature of each of those twenty-seven-odd million oceanic volumes … a neat trick. Clearly, they are interpolating like mad.

There are about 450,000 discrete ocean temperatures per month reported by the Argo floats. That means that each of their twenty-seven-odd million gridcells gets its temperature taken, on average, about once every five years.
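The gridcell arithmetic, as a sketch using the round numbers above (the ACLIM grid actually stops at 77°N/S, so the true count is somewhat lower):

```python
# Rough count of ACLIM gridcells and how often each one gets a measurement.
half_degree_cells = (360 * 2) * (180 * 2)   # 1/2-degree cells over the whole globe
ocean_fraction = 0.70                        # roughly 70% of the surface is ocean
layers = 152                                 # vertical levels in ACLIM

gridcells = half_degree_cells * ocean_fraction * layers
print(gridcells)                             # ~27.6 million oceanic gridcells

argo_temps_per_month = 450_000               # discrete Argo temperatures per month (approx.)
months_between_visits = gridcells / argo_temps_per_month
print(months_between_visits / 12)            # ~5 years between samples of a given cell
```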

That is the “climatology” that they are subtracting from each “profile position” on each Argo dive. Obviously, given the short history of the Argo dataset, the coverage area of 60,000 sq. miles (140,000 sq. km.) per Argo float, and the small gridcell size, there are large uncertainties in the climatology.

So when they subtract a climatology from an actual measurement, the result contains not just the error in the measurement; it contains the error in the climatology as well. When we subtract, the errors add "in quadrature", meaning the resulting error is the square root of the sum of the squares of the individual errors. It also means that the big error rules, particularly when one error is much larger than the other. The temperature measurement at the profile position carries just the instrument error; for Argo, that's ± 0.005°C. The climatology error? Who knows, when the volumes are only sampled once every five years? But it's much larger than the instrument error …
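In symbols, with a purely hypothetical climatology error thrown in for illustration:

```latex
% Errors add in quadrature when the climatology is subtracted from a measurement:
\sigma_{\mathrm{anomaly}} \;=\; \sqrt{\sigma_{\mathrm{instrument}}^{2} + \sigma_{\mathrm{climatology}}^{2}}
% e.g. with the 0.005 degC Argo instrument error and a purely hypothetical
% 0.05 degC climatology error:
\sqrt{(0.005)^{2} + (0.05)^{2}} \;\approx\; 0.050\ ^{\circ}\mathrm{C}
% The larger error dominates the result.
```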

So that's the main problem I see with their analysis. They're doing it in a difficult-to-trace, arcane, and clunky way. Argo data, and temperature data in general, does not occur in some gridded world. Doing the things they do with the gridboxes and the layers introduces errors. Let me show you one example of why. Figure 5 shows the 5-metre depth layers used in the shallower upper section of the climatology, along with the records from one Argo float temperature profile.

Figure 5. ACLIM climatology layers (5 metre). Red circles show the actual measurements from a single Argo temperature profile. Blue diamonds show the same information after averaging into layers. Photo Source

Several things can be seen here. First, there is no data for three of the climatology layers. A larger problem is that when we average into layers, in essence we assign the averaged value to the midpoint of the layer. The trouble arises because in the shallows the Argo floats sample at slightly less than 10-metre intervals, so the upper measurements sit just above the bottom edge of their layers. As a result, when they are averaged into the layers it is as though the temperature profile has been hoisted upwards by a couple of metres. This introduces a large bias into the results. In addition, the bias is depth-dependent, with the shallows hoisted upwards but deeper sections moved downwards. The error is smallest below 100 metres, but gets large quite quickly after that because of the change in layer thickness to 10 metres.
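A minimal sketch of that displacement effect; the sample depths here are illustrative only, not taken from an actual float profile.

```python
# Illustrative only: samples roughly 9.5 m apart (made-up depths, not a real float),
# binned into 5 m climatology layers, with each value assigned to the layer midpoint.
LAYER = 5.0                                     # layer thickness in the shallows, m
sample_depths = [9.5, 19.0, 28.5, 38.0, 47.5, 57.0]   # assumed sample depths, m

for depth in sample_depths:
    layer_index = int(depth // LAYER)           # which 5 m layer the sample falls in
    midpoint = layer_index * LAYER + LAYER / 2  # depth the averaged value is assigned to
    shift = midpoint - depth                    # negative = hoisted upward in the water column
    print(depth, midpoint, shift)
```

With made-up depths like these, the shallow samples get shifted upward by a couple of metres, and the shift shrinks and then reverses with depth, which is the depth-dependent bias described above.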

CONCLUSIONS

Finally, we come to the question of the analysis method, and the meaning of the title of this post. The SLT2011 document goes on to say the following:

To estimate GOIs [global oceanic indexes] from the irregularly distributed profiles, the global ocean is divided into boxes of 5° latitude, 10° longitude and 3-month size. This provides a sufficient number of observations per box. To remove spurious data, measurements which depart from the mean at more than 3 times the standard deviation are excluded. The variance information to build this criterion is derived from ACLIM. This procedure excludes about 1 % of data from our analysis. Only data points which are located over bathymetry deeper than 1000 m depth are then kept. Boxes containing less than 10 measurements are considered as a measurement gap.

Now, I'm sorry, but that's just a crazy method for analyzing this kind of data. They've taken the actual data. Then they've added "climatology" data where there were gaps, so everything was neat and tidy. Then they've subtracted the "climatology" from the whole thing, with an unknown error. Then the data is averaged into gridboxes of five by ten degrees, and into 150 levels of varying thickness below the surface, and then those are averaged over a three-month period … that's all unnecessary complexity. This is a problem that once again shows the isolation of the climate science community from the world of established methods.

This problem, of having vertical Argo temperature profiles at varying locations and wanting to estimate the temperature of the unseen remainder based on those profiles, is not novel at all. In fact, it is precisely the situation faced by every mining company with regard to its test drill hole results. Exactly as with the Argo data, the mining companies have vertical profiles of the subsurface composition at variously spaced locations. And just as with Argo, from that information the mining companies need to estimate the parts of the underground world that they cannot see.

But these are not AGW-supporting climate scientists, for whom mistaken claims mean nothing. These are guys betting big bucks on the outcome of their analysis. I can assure you that they don't futz around dividing the area up into rectangular boxes and splitting the underground into 150 layers of varying thicknesses. They'd laugh at anyone who tried to estimate an ore body using such a klutzy method.

Instead, they use a mathematical method called “kriging“. Why do they use it? First, because it works.

Remember that the mining companies cannot afford mistakes. Kriging (and its variants) has been proven, time after time, to provide the best estimates of what cannot be measured under the surface.

Second, kriging provides actual error estimates, not the kind of “eight thousandths of a degree” nonsense promoted by the Argo analysts. The mining companies can’t delude themselves that they have more certainty than is warranted by the measurements. They need to know exactly what the risks are, not some overly optimistic calculation.
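For readers who have never seen it, here is a bare-bones sketch of ordinary kriging in one dimension, hand-rolled with numpy. The exponential variogram and its parameters are illustrative assumptions of mine; a real Argo analysis would start with a proper variogram study (as the mining folks do), handle anisotropy, and be far more careful.

```python
# Bare-bones 1-D ordinary kriging sketch (illustrative only).
import numpy as np

def variogram(h, sill=1.0, rng=200.0):
    """Exponential variogram model; the sill and range are made-up parameters."""
    return sill * (1.0 - np.exp(-np.abs(h) / rng))

def ordinary_krige(x_obs, z_obs, x_new):
    """Return (estimate, kriging variance) at x_new from observations (x_obs, z_obs)."""
    n = len(x_obs)
    # Ordinary kriging system: semivariances between observations, bordered by the
    # Lagrange-multiplier row/column that forces the weights to sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(x_obs[:, None] - x_obs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(x_new - x_obs)
    sol = np.linalg.solve(A, b)
    weights, lagrange = sol[:n], sol[n]
    estimate = weights @ z_obs
    krig_variance = weights @ b[:n] + lagrange   # the built-in error estimate
    return estimate, krig_variance

# Toy example: sparse "temperatures" along a 1,000 km transect (made-up numbers).
x_obs = np.array([0.0, 150.0, 400.0, 650.0, 900.0])   # km
z_obs = np.array([10.2, 10.5, 9.8, 10.1, 10.4])       # deg C
est, var = ordinary_krige(x_obs, z_obs, 500.0)
print(est, var)
```

The point is the second number: along with the estimate, kriging returns a variance that grows as the prediction point gets farther from the data, which is exactly the kind of honest, location-dependent error estimate being asked for here.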

At the end of the day, I'd say throw out the existing analyses of the Argo data, along with all of the inflated claims of accuracy. Stop faffing about with gridboxes and layers; that's high-school stuff. Get somebody who is an expert in kriging, and analyze the data properly. My guess is that a real analysis will show error intervals that render many of the estimates useless.

Anyhow, that’s my analysis of the Hansen Energy Imbalance paper. They claim an accuracy that I don’t think their hugely complex method can attain.

It’s a long post, likely inaccuracies and typos have crept in, be gentle …

w.

167 Comments
December 31, 2011 8:00 pm

Wonder how the El Nino and La Nina events show up in the data sets. Also, what about the temperature profiles of the various oceans/regions separately? Who is to say that the temperature profile of the ocean isn’t cyclic? When you have currents and possible other factors to consider, the temperature profile of the relatively small fraction of the ocean can change without the total heat content changing. Still, at the end of the day, no matter what, the overall temperature change is so small, even if real, it is nothing more than a curiosity to study and absolutely nothing to worry about.

George
December 31, 2011 8:04 pm

“Remember that the mining companies cannot afford mistakes. Kriging (and its variants) has been proven, time after time, to provide the best estimates of what cannot be measured under the surface.”
Sounds like a job for Steve McIntyre. I wonder what he would say about it.

Richard G
December 31, 2011 8:06 pm

I would say that over the last 4 decades the *Precision* of the data collection has improved, but we really cannot know what the accuracy is if we did not collect it originally. As an example please refer to Willis’ post here.
Hansen’s Arrested Development
http://wattsupwiththat.com/2011/12/20/hansens-arrested-development/#more-53430
The CERES satellite provides Hansen with extremely precise data, but he doesn't trust its *accuracy* so he adjusts it to bring it into conformance with his expectations.

BrianP
December 31, 2011 8:23 pm

Having taken oceanographic measurements for years I can tell you there's much more going on below the surface than people will admit. Just like the air, there are rivers of water flowing and meandering around almost at random. Warm blobs of water, cold blobs of water. How can you possibly measure that?

randomengineer
December 31, 2011 8:38 pm

Willis, the accuracy ought to be fine. The sensors are typically sampled N times per actual reported sample and ought to have an internal crossref to compensate for sensor drift. As to sensor repeatability, the internal crossref and firmware linearity correction code ought to work. I spent a lot of years making ridiculously high precision NIST traceable instruments, and the error bars they're reporting seem OK to me. Point is that you ought to look up the Argo mfg data and verify that the sensors have NIST traceability for the expected temp bounds.

Alan S. Blue
December 31, 2011 8:50 pm

This is precisely my main complaint with the standard surface temperature analysis.
You have a -point-source- instrument. It was NIST calibrated as a point-source instrument. You’re using it for point-source weather analysis – which was the primary intent of the vast majority of sites.
But… then they’re turning around and using the exact same 0.1C stated instrument error for a point-source measurement as a reasonable estimate of the error in the measurement of the entire gridcell’s temperature. This ignores the issues of extensive completely empty gridcells ‘getting’ an estimated value (with the associated overly optimistic error estimates!).
The standard evening news weather maps for your local city demonstrate the relative levels of 'error as a gridcell measurement device' of a randomly placed instrument, and 0.1C is laughably optimistic.

DJ
December 31, 2011 9:06 pm

As Willis notes… Having worked in labs doing critical measurements, and dealing with calibration and “significant figures”, my experience with the reality of temperature measuring devices causes me to raise a red flag at .008Deg resolution claims. Especially with the mechanical limitations of the devices in question, in the working environment, how they’re calibrated, etc.
Simply as a practical matter, I understand where randomengineer is coming from, but knowing how scientists tend to throw stuff together…… Have the grad student get out the Omega catalog and order some type T’s……
…. And add a comment to your code…
;fudge factor
..Then tell the DOE in your weekly report that "Progress in all areas is excellent".

AndyG55
December 31, 2011 9:10 pm


“If the error bands should be considerably larger, does this suggest the twin possibilities of either substantially cooler or substantially warmer trends?”
NO! It suggests that we can't make any determination about what is happening.

Philip Bradley
December 31, 2011 9:39 pm

Outstanding analysis, Willis.
I started out disagreeing with you on the sheer size of the oceans issue. As long as you have a sufficiently large number of (random) measurements, and Argo has far in excess of that number, the size of the oceans is irrelevant when it comes to determining any warming trend.
But when you got into climatology adjustments, gridding and interpolation, the alarm bells started ringing.
This is the same (or similar) dubious methodology that the climate models use.
Hopefully a real statistician will get access to the raw Argo data and give us a proper analysis. Until then I’ll be considerably more cautious about drawing conclusions from the published Argo data.
And to follow up on Bob’s comments. Downwelling currents could be warming the deep oceans, but there are no measurements to support this conjecture.

DJ
December 31, 2011 9:41 pm

Ok, it looks like the buoys use a “Scientific Thermistor Model WM 103”. I can’t find anything on these thermistors. Admittedly I did find some high accuracy units advertised at accuracies of .002-.004Deg. by some other company. I remain a bit skeptical.

RockyRoad
December 31, 2011 9:54 pm

Having done a lot of kriging in my profession as a mining engineer/geologist, I’ve always wondered if there was any way it could be applied to climate data. I’ve used kriging on composited drill hole samples to generate 3-d block models for global reserve estimates, on blast holes to outline grade control boundaries on day-to-day mining in open-pits, on surface geochemical samples to determine trends of anomalous mineralization, and I’ve used the estimation variance that is a byproduct of the procedure as a means of defining targets for development drilling to expand reserves at an operating mine. I’ve even used the technique on alluvial diamondiferous gravels in Africa to determine thickness of the gravels with surprising success.
However, in all the above applications, the first requirement is to determine the spatial correlation between the samples. The technique for this is known as the variogram, which divides samples into pairs separated by increasing distances; in addition, the pairs at increasing distances are generated using relatively narrow directional windows (usually 15 degree increments) around the three orthogonal planes. After being plotted on paper and taped into a 3-d model, it is fairly easy to determine the spatial orientation of the oblate spheroid of the sample correlation as well as the distance of the major, minor, and intermediate axes.
The intercept of each variogram curve with the origin (which should be the same regardless of the direction inspected) defines the "nugget effect", which is the inherent noise of the sample set. The variogram curves rise with distance until the curve levels off, after which there is no further correlation of the sample values; the distance to the inflection point should be different for each direction inspected—it would be highly unusual to find a spherical range of influence because almost all things in nature display some degree of anisotropy. The sample set is said to have no correlation if the variogram curves display no downward trend for closer sample sets, either because the sampling method is inherently corrupted or the distance between samples is too great or a mixture of sample sets representing a variety of correlations exists; in such situations the kriging methodology breaks down and you might just as well apply current climate science "fill-in-the-box" procedures and do an area- (or volume-) weighted calculation using inverse distance. (By the way, I've never understood the way "climate scientists" apply the temperature of one place to another simply because the other had a missing value; in mining you'd get fired for such blatant shenanigans.)
The critical factor in all this is trying to get a handle on the spatial correlation of the sample set being studied. Mining typically targets the concentration of a valuable compound or metal that is the result of geologic processes, which usually include hydrothermal or mechanical fluids, temperature gradients, lithologic inhomogeneities, and structural constraints such as faults and bedding planes. And usually the system from which the samples are derived isn't in constant motion like the ocean. (I suppose taking the value of each ARGO "sample" at exactly the same time would fix the ocean in place and give one a fighting chance to determine if there is any sample correlation for that time period, although shifting currents later would change the orientation of all sets of correlations.)
Should a variogram analysis of the ARGO data indeed find some semblance of sample correlation, the defined model of anisotropy would be used in the kriging algorithm, which can be used to generate either a 2- or 3-dimensional model. The interesting thing about kriging is that it is considered a best linear unbiased estimator. After modeling, various algorithms can be used (for example bi-cubic spline) for fitting a temperature gradient to the block values and determine an overall average (although in mining it is essentially a worthless exercise to find the average value of your deposit—nobody is encouraged to mine “to the average” as that is a definite profit killer).
Admittedly, as has been noted by other comments, highly sophisticated methods of determining metal values in deposits can cause disastrous results if ALL significant controls on the distribution are not accounted for. I’ve worked at operations where major faults have divided the precious metals deposit into a dozen different zones of rock—the variography of each zone must be determined separately from all the others and blocks within those zones estimated (kriged) using only that zone’s anisotropy. If focus to such details isn’t emphasized, the model results would be less than ideal and may even be worthless.
In summary, I’m trying to think of a way zone boundaries could be delineated in the ocean since I’m pretty sure one 3-d or even 2-d variogram model wouldn’t be sufficient and my concluding remark is: good luck on using kriging. You’re first going to have to figure out the variography and I’m not necessarily volunteering (even though I have variography and kriging software) but it would be a great project if the grant money was sufficient. (Where’s my Big Mining check?)

Larry Fields
December 31, 2011 10:01 pm

Here’s my stoopid question of the day. If my aging memory is firing on all cylinders, the project with the Argo buoys started in 2003, and there was a substantial sharp temperature decline initially, followed by a rebound, and a very slow rate of increase thereafter. Isn’t Hansen cherry-picking the time frame of the ‘study’, in order to ‘support’ the foregone conclusion? If so, I’m shocked, I tell you, shocked!
Now I’ve gotta go, and read the link on kriging. It sounds fascinating, as Mr. Spock would say.

Graeme No.3
December 31, 2011 10:47 pm

Willis you say “It is not clear how the bottom waters are warming without the middle waters warming.”
Because giant squid come to the surface at night and pack suitcases of heat, which they drag down to the stygian depths. All that agitation and splashing causes the increasing number of hurricanes that are occurring (as I’m sure you’ve noticed).
That is quite as believable as a lot of AGW theory, and you can’t deny it because as soon as one of the ecoloons hears that a sceptic has denounced the theory, it will be all over the net as proven fact. They will have to change their mascot and stop bothering polar bears.
Just tell them to sarc off/
Happy New Year to you and all, and may it bring an increase in common sense to all who need it.

December 31, 2011 11:04 pm

For a start, the term “heat content” shows a lack of understanding of physics, for it should be “thermal energy content” and it is important to understand that thermal energy can interchange with gravitational potential energy as warmer water rises or colder water sinks – as happens in ocean currents. So there is nothing intrinsically “fixed” in so-called ocean heat content. Indeed, some energy can easily flow under the floor of the ocean into the crust and mantle, or at least reduce the outward flow.
What really affects Earth’s climate is the surface temperature of the seas and oceans which brings about close equilibrium (in calm conditions) with the very lowest levels of the atmosphere. Thus the climate in low lying islands like Singapore is very much controlled by ocean temperatures. So too, for example, is the rate of Arctic ice formation and melting governed by the temperatures and rates of flow of currents from the Atlantic to the Arctic Oceans. To me the main value in discussing ocean thermal energy content is to emphasise that it dominates land surface energy in a ratio of about 15:1 and thus, I say, sea surface temperatures should bear that sort of weighting over land temperatures when calculating global temperature means.
When we understand the dominance of sea surface temperatures in the scheme of things, it becomes apparent that we should seek historic ocean data, perhaps sometimes having to consider islands in key locations such as Jan Mayen Island (within the Arctic Circle) which I have mentioned in another post. See the record here: http://climate-change-theory.com/janmayen.jpg and also note the 200+ year record for the albeit larger island of Northern Ireland here http://climate-change-theory.com/ireland.jpg
Then of course we have Roy Spencer’s curved trend of sea surface data http://climate-change-theory.com/latest.jpg and Kevin Trenberth’s curved plot (on SkS) both of which are now in decline: http://climate-change-theory.com/seasurface.jpg
What does this data show? Well, certainly there’s no sign of any hockey stick. There is indication of warmer temperatures in the Arctic in the 1930’s (substantiated here http://climate-change-theory.com/arctic1880.jpg ) and indications in this last plot of a huge 4 degree (natural) rise in the Arctic from 1919 to 1939. There was also a significant increase of about 2.2 degrees in Northern Ireland between 1816 and 1828 – all “natural” it would seem and completely eclipsing the 0.5 degree (also natural) rises between 1910 and 1940 and 1970 and 2000. Yes, note the similarity before and after carbon dioxide levels took off – http://earth-climate.com/airtemp.jpg
Note also how the curved trend in Spencer’s plot of all lower atmosphere satellite data seems to be coming in from the left (in 1979) from higher levels – and clearly has a lower mean gradient than those standard plots which give greater weighting to land surfaces. I suggest that either one or both of the following explains this: (a) urban crawl must have had some effect, no matter what anyone says to the contrary (b) possible questionable choice (and elimination) of certain land based records with a view to specifically creating an apparent hockey stick effect, emphasised of course by simplistic weighting of about 30% for land measurements based on surface area, rather than about 6.5% based on thermal energy content.
Trust sea surface temperatures I say. What a shame those NASA measurements failed (?) on October 4, 2011. Perhaps they were too threatening to survive! Anyway, 2011 at sea surface undoubtedly did close with a mean less than that for 2003: http://climate-change-theory.com/2003-2011.jpg
Enjoy a cooler New Year everyone!

December 31, 2011 11:06 pm

“However, in all the above applications, the first requirement is to determine the spatial correlation between the samples. The technique for this is known as the variogram, which divides samples into pairs separated by increasing distances;”
The argo deployment plan is driven by Ocean Models. FWIW

UK Sceptic
December 31, 2011 11:09 pm

H2011. A jam sensation bursting with hand picked cherries you’ll want to spread thickly on your waffles. As recommended by James Hansen.

GeoLurking
December 31, 2011 11:13 pm

No, these guys don’t use kriging, they use kludging.

December 31, 2011 11:26 pm

Willis, you are right on again. I have been kriging geological stuff: metallic and non-metallic ore, coal, kerogen and lots of other stuff since the early 80's. You are correct, it works as described. In the oceans it will work too, but not to anything like thousandths of a degree. That is why they don't use it for ARGO; it won't give you the values predicted by your models or your ideology/dogma. It is obvious the precision and accuracy needed to achieve thousandths is more dream than reality. In most metallic ores, for example, tenths are easy, hundredths maybe, it depends, and for thousandths I would be laughed out of the room. Ore bodies are static and way smaller than oceans. All Hansen et al. are doing is masturbating, and not even doing that very well. What these guys are doing is not science. I think it approaches the paranormal.

Philip Bradley
January 1, 2012 12:25 am

There was also a significant increase of about 2.2 degrees in Northern Ireland between 1816 and 1828 – all “natural” it would seem
That was warming after the 1815 Tambora eruption cooling event (year without a summer).
That the warming extended for a decade indicates how long volcanic aerosols hang around (causing cooling).

January 1, 2012 12:25 am

I still remember J. Willis saying “I kept digging and digging” when Argo results were too cold for them. He kicked out all buoy data which seemed too cold. And every Argo update since has more positive trend. I do not believe it.

January 1, 2012 12:36 am

“thingadonta says:
December 31, 2011 at 7:41 pm
I am not an expert in kriging and resource analysis, but I can give you an example where some whizz bang mathematician using dubious statisitical methods came up with a resource model for a gold resource at Ballarat in Australia recently”
Guess who was chairman of this debacle – one very poor economist and government climate advisor called Garnaut
happy and healthy New Year to you Willis and other contributors

Geoff Sherrington
January 1, 2012 1:40 am

In 2006 I suggested by email to Phil Jones that he gets into geostatistics (which includes kriging). He said they had looked at it. Nothing more. Perhaps, post Climategate, we now know that the message went to an inappropriate person.
Willis, your paragraph says it all in one:
“One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world. There is an associated uncertainty that is sometimes not considered. This is the uncertainty of how much your measurement actually represents the entire volume or area being measured.”
This alone is one good reason for climate workers to give complete data to others more skilled in practical work – to allow proper error estimates.

old44
January 1, 2012 1:56 am

To put it into perspective, one Argo thermometer taking three samples per month in 165,000 cubic kilometres of water is the equivalent of measuring Port Phillip Bay once every 183 years or Port Jackson every 5,646 years.

crosspatch
January 1, 2012 1:58 am

At some point they are going to run out of “adjustments” they can make.

old44
January 1, 2012 2:01 am

Dennis Nikols Professional Geologist says:
December 31, 2011 at 11:26 pm
In regards to Hansen et al., are you suggesting practice doesn't make perfect?
