Krige the Argo Probe Data, Mr. Spock!

A few weeks ago I wrote a piece highlighting a comment made in the paper “Earth’s Energy Imbalance and Implications” by James Hansen et al. (hereinafter H2011). Some folks said I should take a real look at Hansen’s paper, so I have done so twice, first a quick look at “Losing Your Imbalance”, and now this study. The claims and conclusions of H2011 are based mainly on the ocean heat content (OHC), as measured in large part by the data from the Argo floats, so I thought I should look at that data. The Argo temperature and salinity measurements form a great dataset that gives us much valuable information about the ocean. H2011 utilizes the recent results of “How well can we derive Global Ocean Indicators from Argo data?” by K. von Schuckmann and P.-Y. Le Traon (hereinafter SLT2011).

Figure 1. Argo float. Complete float is about 2 metres (6′) tall. SOURCE: Wikipedia

The Argo floats are diving floats that operate on their own. Each float measures one complete vertical temperature profile every ten days, going down to either 1,000 or 2,000 metres depth, and reports each dive’s results by satellite before the next dive.

Unfortunately, as used in H2011, the Argo data suffers from some problems. The time span of the dataset is very short. The changes are quite small. The accuracy is overestimated. Finally, and most importantly, the investigators are using the wrong method to analyze the Argo data.

First, the length of the dataset. The SLT2011 data used by Hansen is only 72 months long, which limits the conclusions we can draw from it. H2011 gets around that by showing only a six-year moving average, not just of this data but of all the data used. I really don’t like it when only smoothed data is shown and the raw data is not, as Hansen has done.

Second, the differences are quite small. Here is the record as shown in SLT2011. They show the data as annual changes in upper ocean heat content (OHC) in units of joules per square metre. I have converted the OHC change (Joules/m2) to units of degrees Celsius change for the water they are measuring, which is a one-metre-square column of water 1,490 metres deep. As you can see, SLT2011 is discussing very small temperature variations. The same is true of the H2011 paper.

Figure 2. Upper ocean temperatures from Schuckmann & Le Traon, 2011 (SLT2011). Grey bars show one sigma errors of the data. Red line is a 17-point Gaussian average. Vertical red line shows the error of the Gaussian average at the boundary of the dataset (95% CI). Data digitized from SLT2011, Figure 5 b), available as a comma-separated text file here.

There are a few things of note in the dataset. First, we’re dealing with minuscule temperature changes. The length of the grey bars shows that SLT2011 claims we can measure the temperature of the upper kilometre and a half of the ocean with an error (presumably one sigma) of only ± eight thousandths of a degree …

Now, I hate to argue from incredulity, and I will give ample statistical reasons further down, but frankly, Scarlett … eight thousandths of a degree error in the measurement of the monthly average temperature of the top mile of water of almost the entire ocean? Really? They believe they can measure the ocean temperature to that kind of precision, much less accuracy?

I find that very difficult to believe. I understand the law of large numbers and the central limit theorem and how that gives us extra leverage, but I find the idea that we can measure the temperature of four hundred million cubic kilometres of ocean water to a precision of ± eight thousandths of a degree to be … well, let me call it unsubstantiated. Others who have practical experience in measuring the temperatures of liquids to less than a hundredth of a degree, feel free to chime in, but to me that seems like a bridge way too far. Yes, there are some 2,500 Argo floats out there, and on a map the ocean looks pretty densely sampled. Figure 3 shows where the Argo floats were in 2011.

Figure 3. Locations of Argo floats, 2011. SOURCE

But that’s just a chart. The world is unimaginably huge. In the real ocean, down to a kilometre and a half of depth, that’s one Argo thermometer for each 165,000 cubic kilometres of water … I’m not sure how to give an idea of just how big that is. Let’s try it this way. Lake Superior is the largest lake in the Americas, visible even on the world map above. How accurately could you measure the average monthly temperature of the entire volume of Lake Superior with one Argo float? Sure, you can let it bob up and down and drift around the lake, taking three vertical profiles a month. But even then, each measurement will cover only a tiny part of the entire lake.

But it’s worse for Argo. Each of the Argo floats, each dot in Figure 3, represents a volume as large as 13 Lake Superiors … with one lonely Argo thermometer …

Or we could look at it another way. There were about 2,500 Argo floats in operation over the period covered by SLT2011. The area of the ocean is about 360 million square km, so each Argo float represents an area of about 140,000 square kilometres, which is a square about 380 km (240 mi) on each side. One Argo float for all of that. Each dive cycle takes ten days: the float goes down to about 1,000 metres and stays there for nine days. Then it either rises from there, or it first descends to about 2,000 metres, and then rises to the surface at about 10 cm (4″) per second, profiling the temperature and salinity over about six hours as it rises. So we get three vertical temperature profiles, from 0-1,000 or 0-1,500 or 0-2,000 metres depending on the particular float, each month to cover an area of 140,000 square kilometres … I’m sorry, but three vertical temperature profiles per month covering an area of 60,000 square miles and a mile deep doesn’t scream “thousandths of a degree temperature accuracy” to me.
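For anyone who wants to check the arithmetic, here is a quick sketch in Python. The constants are just the round numbers used above, not official Argo specifications, so treat the output as ballpark figures only.

```python
# Back-of-the-envelope check of the per-float coverage figures in the text.
# All constants are round numbers assumed for illustration, not Argo specifications.

OCEAN_AREA_KM2 = 360e6        # approximate global ocean surface area
N_FLOATS = 2500               # approximate number of Argo floats over the period
MEASURED_VOLUME_KM3 = 400e6   # rough volume of the layer actually sampled (see text)
LAKE_SUPERIOR_KM3 = 12100     # approximate volume of Lake Superior

area_per_float = OCEAN_AREA_KM2 / N_FLOATS               # ~144,000 km2
side_of_square = area_per_float ** 0.5                    # ~380 km on a side
volume_per_float = MEASURED_VOLUME_KM3 / N_FLOATS         # ~160,000 km3
lakes_per_float = volume_per_float / LAKE_SUPERIOR_KM3    # ~13 Lake Superiors

print(f"Area per float:   {area_per_float:,.0f} km2 (a square ~{side_of_square:.0f} km on a side)")
print(f"Volume per float: {volume_per_float:,.0f} km3 (~{lakes_per_float:.0f} Lake Superiors)")
```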

Here’s a third way to look at the size of the measurement challenge. For those who have been out of sight of land in a small boat, you know how big the ocean looks from the deck? Suppose the deck of the boat is a metre (3′) above the water, and you stand up on deck and look around. Nothing but ocean stretching all the way to the horizon, a vast immensity of water on all sides. How many thermometer readings would it take to get the monthly average temperature of just the ocean you can see, to the depth of one mile? I would say … more than one.

Now, consider that each Argo float has to cover an area that is more than 2,000 times the area of the ocean you can see from your perch standing there on deck … and the float is making three dives per month … how well do the measurements encompass and represent the reality?

There is another difficulty. Figure 2 shows that most of the change over the period occurred in a single year, from about mid 2007 to mid 2008. The change in forcing required to change the temperature of a kilometre and a half of water that much is about 2 W/m2 for that year-long period. The “imbalance”, to use Hansen’s term, is even worse when we look at the amount of energy required to warm the upper ocean from May 2007 to August 2008. That requires a global “imbalance” of about 2.7 W/m2 over that period.
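Here is the conversion I’m using, as a small Python sketch. The seawater density and heat capacity are ordinary textbook values that I have assumed for illustration; the exact figures don’t change the order of magnitude.

```python
# Rough conversion from a temperature change in the 1,490 m SLT2011 column to the
# implied "imbalance" in W/m2. Density and heat capacity are assumed textbook values.

RHO = 1025.0       # seawater density, kg/m3 (assumed)
CP = 4000.0        # seawater specific heat, J/(kg K) (assumed)
DEPTH = 1490.0     # depth of the measured column, metres (10 m to 1,500 m)

def implied_flux_wm2(delta_t_c, days):
    """W/m2 needed to warm a 1 m2 column, DEPTH metres deep, by delta_t_c in 'days' days."""
    joules_per_m2 = RHO * CP * DEPTH * delta_t_c
    return joules_per_m2 / (days * 86400.0)

# A rise of ~0.01 C in the column over one year implies roughly 2 W/m2:
print(f"{implied_flux_wm2(0.01, 365):.1f} W/m2")
```

The same conversion, applied to the digitized May 2007 to August 2008 rise, gives the ~2.7 W/m2 figure quoted above.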

Now, if that were my dataset, the first thing I’d be looking at is what changed in mid 2007. Why did the global “imbalance” suddenly jump to 2.7 W/m2? And more to the point, why did the upper ocean warm, but not the surface temperature?

I don’t have any answers to those questions; my first guess would be “clouds” … but before I used that dataset, I’d want to go down that road and find out why there was such a big jump in 2007. What changed, and why? If our interest is in global “imbalance”, there’s an imbalance to study.

(In passing, let me note that an incorrect simplifying assumption is used to eliminate ocean heat content in order to arrive at the canonical climate equation. That canonical equation is

Change in Temperature = Sensitivity × Change in Forcing (∆T = λ ∆F)

The error is to assume that the change in oceanic heat content (OHC) is a linear function of surface temperature change ∆T. It is not, as the Argo data confirms … I discussed this error in a previous post, “The Cold Equations”. But I digress …)

The SLT2011 Argo record also has an oddity shared by some other temperature records. The swing over the whole time period is about a hundredth of a degree. The largest one-year jump in the data is about a hundredth of a degree. The largest one-month jump in the data is about a hundredth of a degree. When short and long time spans show the same swings, the data is very difficult to interpret. For example, the imbalance necessary to give the largest one-month change in OHC is about 24 W/m2. Before moving forwards, changes in OHC like that would be worth looking at to see a) if they’re real and b) if so, what changed …
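Again, the arithmetic, as a sketch with the same assumed constants as before:

```python
# Implied imbalance for the largest one-month jump (~0.01 C) in the SLT2011 record.
# Same illustrative density/heat capacity assumptions as in the earlier sketch.

RHO, CP, DEPTH = 1025.0, 4000.0, 1490.0   # kg/m3, J/(kg K), m (assumed)
dT, days = 0.01, 30                        # ~0.01 C in roughly one month

flux = RHO * CP * DEPTH * dT / (days * 86400.0)
print(f"Implied one-month imbalance: {flux:.0f} W/m2")   # ~24 W/m2
```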

In any case, that was my second issue, the tiny size of the temperature differences being measured.

Next, coverage. The Argo analysis of SLT2011 only uses data down to 1,500 metres depth. They say that the Argo coverage below that depth is too sparse to be meaningful, although the situation is improving. In addition, the Argo analysis only covers from 60°N to 60°S, which leaves out the Arctic and Southern Oceans, again because of inadequate coverage. Further, it starts at 10 metres below the surface, so it misses the crucial surface layer which, although small in volume, undergoes large temperature variations. Finally, their analysis misses the continental shelves because it only considers areas where the ocean is deeper than one kilometre. Figure 4 shows how much of the ocean volume the Argo floats are actually measuring in SLT2011: about 31%.

Figure 4. Amount of the world’s oceans measured by the Argo float system as used in SLT2011.

In addition to the amount measured by Argo floats, Figure 4 shows that there are a number of other oceanic volumes. H2011 includes figures for some of these, including the Southern Ocean, the Arctic Ocean, and the Abyssal waters. Hansen points out that the source he used (Purkey and Johnson, hereinafter PJ2010) says there is no temperature change in the waters between 2 and 4 km depth. This is most of the water shown on the right side of Figure 4. It is not clear how the bottom waters are warming without the middle waters warming. I can’t think of how that might happen … but that’s what PJ2010 says, that the blue area on the right, representing half the oceanic volume, is not changing temperature at all.

Neither H2011 nor SLT2011 offers an analysis of the effect of omitting the continental shelves, or the thin surface layer. In that regard, it is worth noting that a thin ten-metre surface layer like that shown in Figure 4 can easily change by a full degree in temperature … and if it does so, that would be about the same change in ocean heat content as the 0.01°C of warming of the entire volume measured by the Argo floats. So that surface layer is far too large a factor to be simply omitted from the analysis.
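That comparison is easy to check, since the change in heat content of a column is proportional to its thickness times its temperature change; here is the sketch:

```python
# Heat content change scales with (layer thickness) x (temperature change), so the two
# cases can be compared directly without worrying about density or heat capacity.

surface_layer = 10.0 * 1.0       # a 10 m surface layer warming by 1.0 C
argo_column   = 1490.0 * 0.01    # the 1,490 m Argo column warming by 0.01 C

print(f"Surface layer: {surface_layer:.1f} metre-degrees")   # 10.0
print(f"Argo column:   {argo_column:.1f} metre-degrees")     # 14.9 -- same order of magnitude
```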

There is another problem with the figures H2011 uses for the change in heat content of the abyssal waters (below 4 km). The cited study, PJ2010, says:

Excepting the Arctic Ocean and Nordic seas, the rate of abyssal (below 4000 m) global ocean heat content change in the 1990s and 2000s is equivalent to a heat flux of 0.027 (±0.009) W m−2 applied over the entire surface of the earth. SOURCE: PJ2010

That works out to a claimed warming rate of the abyssal ocean of 0.0007°C per year, with a claimed 95% confidence interval of ± 0.0002°C/yr.  … I’m sorry, but I don’t buy it. I do not accept that we know the rate of the annual temperature rise of the abyssal waters to the nearest two ten-thousandths of a degree per year, no matter what PJ2010 might claim. The surface waters are sampled regularly by thousands of Argo floats. The abyssal waters see the odd transect or two per decade. I don’t think our measurements are sufficient.
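For what it’s worth, here is how I get that warming rate. Note that the abyssal volume below 4,000 metres is my own assumed round figure, not a number from PJ2010, so the result is only approximate.

```python
# Rough reconstruction of the abyssal warming rate implied by PJ2010's 0.027 W/m2.
# The abyssal volume and the deep-water properties are assumed round figures,
# NOT values taken from PJ2010.

EARTH_AREA = 5.1e14         # m2, surface area of the Earth
FLUX = 0.027                # W/m2 applied over the whole Earth (PJ2010)
SECONDS_PER_YEAR = 3.156e7
ABYSSAL_VOLUME = 1.6e17     # m3, assumed volume of ocean below 4,000 m (illustrative)
RHO, CP = 1040.0, 3900.0    # deep seawater density and specific heat (assumed)

joules_per_year = FLUX * EARTH_AREA * SECONDS_PER_YEAR
warming_per_year = joules_per_year / (RHO * CP * ABYSSAL_VOLUME)
print(f"Implied abyssal warming: {warming_per_year:.4f} C/yr")   # ~0.0007 C/yr
```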

One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world. There is an associated uncertainty that is sometimes not considered. This is the uncertainty of how much your measurement actually represents the entire volume or area being measured.

The underlying problem is that temperature is an “intensive” property, whereas something like mass is an “extensive” property. Measuring these two kinds of things, intensive and extensive variables, is very, very different. An extensive property is one that changes with the amount (the “extent”) of whatever is being measured. The mass of two glasses of water at 40° temperature is twice the mass of one glass of water at 40° temperature. To get the total mass, we just add the two masses together.

But do we add the two 40° temperatures together to get a total temperature of 80°? Nope, it doesn’t work that way, because temperature is an intensive property. It doesn’t change based on the amount of stuff we are measuring.

Extensive properties are generally easy to measure. If we have a large bathtub full of water, we can easily determine its mass. Put it on a scale, take one single measurement, you’re done. One measurement is all that is needed.

But the average temperature of the water is much harder to determine. It requires simultaneous measurement of the water temperature in enough places. The number of thermometers required depends on the accuracy you need and the amount of variation in the water temperature. If there are warm spots or cold parts of the water in the tub, you’ll need a lot of thermometers to get an average that is accurate to say a tenth of a degree.

Now recall that instead of a bathtub with lots of thermometers, for the Argo data we have a chunk of ocean that’s 380 km (240 miles) on a side with a single Argo float taking its temperature. We’re measuring down a kilometre and a half (about a mile), and we get three vertical temperature profiles a month … how well do those three vertical temperature profiles characterize the actual temperature of sixty thousand square miles of ocean? (140,000 sq. km.)

Then consider further that the abyssal waters have far, far fewer thermometers way down there … and yet they claim even greater accuracies than the Argo data.

Please be clear that my argument is not about the ability of large numbers of measurements to improve the mathematical precision of the result. We have about 7,500 Argo vertical profiles per month. With the ocean surface divided into 864 gridboxes, if the standard deviation (SD) of the depth-integrated gridbox measurements is about 0.24°C, that is enough to give us mathematical precision of the order of magnitude that they have stated. The question is whether the SD of the gridboxes is that small, and if so, how it got that small.
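Here is that back-of-the-envelope calculation as a sketch. The 864 boxes are the 5° by 10° boxes between 60°S and 60°N; the 0.24°C gridbox SD is the assumption in question, and the calculation treats the boxes as independent and equal-area, which they are not.

```python
import math

# How an ~0.008 C precision can fall out of the arithmetic IF the gridbox standard
# deviation really is ~0.24 C. Boxes are treated as independent and equal-area.

N_LAT = 24                      # 5-degree bands from 60S to 60N
N_LON = 36                      # 10-degree bands around the globe
n_boxes = N_LAT * N_LON         # 864 boxes, as in the text

profiles_per_month = 7500
profiles_per_box = profiles_per_month / n_boxes      # ~8.7 profiles per box per month

sd_gridbox = 0.24                                    # assumed SD of gridbox values, C
sem = sd_gridbox / math.sqrt(n_boxes)                # standard error of the global mean

print(f"{n_boxes} boxes, {profiles_per_box:.1f} profiles per box per month")
print(f"Standard error of the mean: {sem:.4f} C")    # ~0.008 C
```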

They discuss how they did their error analysis. I suspect that their problem lies in two areas. One is that I see no error estimate for the removal of the “climatology”, the historical monthly average, from the data. The other problem involves the arcane method used to analyze the data by gridding it both horizontally and vertically. I’ll deal with the climatology question first. Here is their description of their method:

2.2 Data processing method

An Argo climatology (ACLIM hereinafter, 2004–2009, von Schuckmann et al., 2009) is first interpolated on every profile position in order to fill gappy profiles at depth of each temperature and salinity profile. This procedure is necessary to calculate depth-integrated quantities. OHC [ocean heat content], OFC [ocean freshwater content] and SSL [steric (temperature related) sea level] are then calculated at every Argo profile position as described in von Schuckmann et al. (2009). Finally, anomalies of the physical properties at every profile position are calculated relative to ACLIM.

Terminology: a “temperature profile” is a string of measurements taken at increasing depths by an Argo float. A “profile position” is one of the preset pressure levels at which the Argo floats are set to take a sample.

This means that if there is missing data in a given profile, it is filled in using the “climatology”, or the long-term average of the data for that month and place. Now, this is going to introduce an error, not likely large, and one that they account for.

What I don’t find accounted for in their error calculation is any error estimate related to the final sentence in the paragraph above. That sentence describes the subtraction of the ACLIM climatology from the data. ACLIM is an “Argo climatology”, which is a month-by-month average of the average temperatures of each depth level.

SLT2011 refers this question to an earlier document by the same authors, SLT2009, which describes the creation of the ACLIM climatology. I find that there are over 150 levels in the ACLIM climatology, as described by the authors:

The configuration is defined by the grid and the set of a priori information such as the climatology, a priori variances and covariances which are necessary to compute the covariance matrices. The analyzed field is defined on a horizontal 1/2° Mercator isotropic grid and is limited from 77°S to 77°N. There are 152 vertical levels defined between the surface and 2000m depth … The vertical spacing is 5m from the surface down to 100m depth, 10m from 100m to 800m and 20m from 800m down to 2000m depth.

So they have divided the upper ocean into gridboxes, and each gridbox into layers, to give gridcells. How many gridcells? Well, 360 degrees longitude * 2 * 180 degrees latitude * 2 * 70% of the world is ocean * 152 layers = 27,578,880 oceanic gridcells. Then they’ve calculated the month-by-month average temperature of each of those twenty-five million oceanic volumes … a neat trick. Clearly, they are interpolating like mad.

There are about 450,000 discrete ocean temperatures per month reported by the Argo floats. That means that each of their 25 million gridcells gets its temperature taken on average about once every five years.
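The bookkeeping, as a sketch using the figures quoted above:

```python
# Gridcell bookkeeping for the ACLIM climatology, using the figures quoted in the text
# (the 77N/77S limit of the actual grid is ignored, as it is in the text).

lon_cells = 360 * 2          # 1/2-degree grid in longitude
lat_cells = 180 * 2          # 1/2-degree grid in latitude
ocean_fraction = 0.70
levels = 152

ocean_gridcells = lon_cells * lat_cells * ocean_fraction * levels
argo_obs_per_month = 450_000

months_between_visits = ocean_gridcells / argo_obs_per_month
print(f"Oceanic gridcells: {ocean_gridcells:,.0f}")                       # ~27.6 million
print(f"Each gridcell sampled roughly once every {months_between_visits:.0f} months "
      f"(~{months_between_visits / 12:.0f} years)")
```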

That is the “climatology” that they are subtracting from each “profile position” on each Argo dive. Obviously, given the short history of the Argo dataset, the coverage area of 60,000 sq. miles (140,000 sq. km.) per Argo float, and the small gridcell size, there are large uncertainties in the climatology.

So when they subtract a climatology from an actual measurement, the result contains not just the error in the measurement. It contains the error in the climatology as well. When we are doing subtraction, errors add “in quadrature”. This means the resultant error is the square root of the sum of the squares of the errors. It also means that the big error rules, particularly when one error is much larger than the other. The temperature measurement at the profile position has just the instrument error. For Argo, that’s ± 0.005°C. The climatology error? Who knows, when the volumes are only sampled once every five years? But it’s much more than the instrument error …
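The quadrature arithmetic looks like this. The climatology error in the sketch is purely a guess for illustration; my point is precisely that we don’t know it, only that it must be much larger than the instrument error.

```python
import math

# Errors add in quadrature when one series is subtracted from another. The climatology
# error here is an illustrative guess only -- the text's point is that it is unknown.

instrument_error = 0.005      # C, stated Argo sensor accuracy
climatology_error = 0.05      # C, assumed for illustration only

combined = math.sqrt(instrument_error**2 + climatology_error**2)
print(f"Combined error: {combined:.4f} C")   # ~0.050 C -- the larger error dominates
```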

So that’s the main problem I see with their analysis. They’re doing it in a difficult-to-trace, arcane, and clunky way. Argo data, and temperature data in general, does not occur in some gridded world. Doing the things they do with the gridboxes and the layers introduces errors. Let me show you one example of why. Figure 5 shows the 5-metre depth layers used in the shallow upper section of the climatology, along with the records from one Argo float temperature profile.

Figure 5. ACLIM climatology layers (5 metre). Red circles show the actual measurements from a single Argo temperature profile. Blue diamonds show the same information after averaging into layers. Photo Source

Several things can be seen here. First, there is no data for three of the climatology layers. A larger problem is that when we average into layers, in essence we assign that averaged value to the midpoint of the layer. The problem with this procedure arises because in the shallows the Argo floats sample at slightly less than 10-metre intervals, so the upper measurements are just above the bottom edge of the layer. As a result, when they are averaged into the layers, it is as though the temperature profile has been hoisted upwards by a couple of metres. This introduces a large bias into the results. In addition, the bias is depth-dependent, with the shallows hoisted upwards but deeper sections moved downwards. The error is smallest below 100 metres, but gets large quite quickly after that because of the change in layer thickness to 10 metres.
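To make the layer problem concrete, here is a synthetic illustration. The temperature profile and the sample depths are invented; the point is only to show what happens when a sample taken just above a layer boundary gets assigned to the layer midpoint.

```python
import numpy as np

# Synthetic illustration of the layer-averaging bias: a made-up linear profile sampled
# just above each 5 m layer boundary, then assigned to the layer midpoints. None of
# these numbers come from a real Argo profile.

def true_temp(depth_m):
    """Assumed profile: 20 C at the surface, cooling 0.05 C per metre of depth."""
    return 20.0 - 0.05 * depth_m

sample_depths = np.array([9.0, 19.0, 29.0, 39.0, 49.0])   # ~10 m spacing, near layer bottoms
samples = true_temp(sample_depths)

layer_index = (sample_depths // 5).astype(int)       # which 5 m layer each sample falls in
layer_midpoints = 5.0 * layer_index + 2.5            # depth the layer value is assigned to

# Each layer holds one sample here, so the "layer average" is just that sample, now
# treated as if it were measured ~1.5 m shallower than it actually was:
bias = samples - true_temp(layer_midpoints)
for z, b in zip(layer_midpoints, bias):
    print(f"layer midpoint {z:4.1f} m: assigned value is {b:+.3f} C off the true value there")
```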

CONCLUSIONS

Finally, we come to the question of the analysis method, and the meaning of the title of this post. The SLT2011 document goes on to say the following:

To estimate GOIs [global oceanic indexes] from the irregularly distributed profiles, the global ocean is divided into boxes of 5° latitude, 10° longitude and 3-month size. This provides a sufficient number of observations per box. To remove spurious data, measurements which depart from the mean at more than 3 times the standard deviation are excluded. The variance information to build this criterion is derived from ACLIM. This procedure excludes about 1 % of data from our analysis. Only data points which are located over bathymetry deeper than 1000 m depth are then kept. Boxes containing less than 10 measurements are considered as a measurement gap.
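To see what that recipe amounts to in practice, here is a schematic of the box-averaging step in Python. The box sizes, the 3-sigma cut against the climatology, and the 10-measurement minimum follow the quoted description; the data values, positions, and climatology statistics are all invented.

```python
import numpy as np

# Schematic of the quoted box-averaging procedure, applied to made-up anomalies.
# 5-deg latitude x 10-deg longitude x 3-month boxes, 3-sigma exclusion, 10-obs minimum.

rng = np.random.default_rng(0)
n = 100_000
lat = rng.uniform(-60, 60, n)
lon = rng.uniform(-180, 180, n)
month = rng.integers(0, 12, n)
value = rng.normal(0.0, 0.24, n)      # depth-integrated anomalies, C (invented)

clim_mean, clim_sd = 0.0, 0.24        # "climatology" statistics (invented)

lat_idx = ((lat + 60) // 5).astype(int)
lon_idx = ((lon + 180) // 10).astype(int)
box_id = lat_idx * 1000 + lon_idx * 10 + month // 3

keep = np.abs(value - clim_mean) <= 3 * clim_sd       # drop >3-sigma "spurious" data
box_means = {}
for b in np.unique(box_id[keep]):
    vals = value[keep][box_id[keep] == b]
    if len(vals) >= 10:                               # boxes with <10 obs become gaps
        box_means[b] = vals.mean()

print(f"{len(box_means)} boxes filled out of {len(np.unique(box_id))}")
```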

Now, I’m sorry, but that’s just a crazy method for analyzing this kind of data. They’ve taken the actual data. Then they’ve added “climatology” data where there were gaps, so everything was neat and tidy. Then they’ve subtracted the “climatology” from the whole thing, with an unknown error. Then the data is averaged into gridboxes of five by ten degrees, and into 150 levels of varying thickness below the surface, and then those are averaged over a three-month period … that’s all unnecessary complexity. This is a problem that once again shows the isolation of the climate science community from the world of established methods.

This problem, of having vertical Argo temperature profiles at varying locations and wanting to estimate the temperature of the unseen remainder based on the profiles, is not new at all. In fact, it is precisely the situation faced by every mining company with regard to their test drill hole results. Exactly as with the Argo data, the mining companies have vertical profiles of the composition of the subsurface reality at variously spaced locations. And again just as with Argo, from that information the mining companies need to estimate the parts of the underground world that they cannot see.

But these are not AGW-supporting climate scientists, for whom mistaken claims mean nothing. These are guys betting big bucks on the outcome of their analysis. I can assure you that they don’t futz around dividing the area up into rectangular boxes and splitting the underground into 150 layers of varying thicknesses. They’d laugh at anyone who tried to estimate an ore body using such a klutzy method.

Instead, they use a mathematical method called “kriging”. Why do they use it? First, because it works.

Remember that the mining companies cannot afford mistakes. Kriging (and its variants) has been proven, time after time, to provide the best estimates of what cannot be measured under the surface.

Second, kriging provides actual error estimates, not the kind of “eight thousandths of a degree” nonsense promoted by the Argo analysts. The mining companies can’t delude themselves that they have more certainty than is warranted by the measurements. They need to know exactly what the risks are, not some overly optimistic calculation.
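For readers who have never seen it, here is a minimal sketch of ordinary kriging in one dimension. The variogram model and the data are invented purely for illustration; a real analysis would fit the variogram to the data and deal with anisotropy in two or three dimensions, as several commenters with mining experience note below. The point to notice is the last line: the method hands you an error estimate at every location along with the estimate itself.

```python
import numpy as np

def gamma(h, nugget=0.01, sill=1.0, vrange=50.0):
    """Exponential semivariogram -- an assumed model, not fitted to anything."""
    h = np.abs(h)
    return np.where(h == 0, 0.0, nugget + (sill - nugget) * (1.0 - np.exp(-h / vrange)))

def ordinary_krige(x, z, x0):
    """Ordinary kriging estimate and kriging variance at location x0."""
    n = len(x)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = gamma(x[:, None] - x[None, :])
    K[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(x - x0)
    w = np.linalg.solve(K, b)          # kriging weights plus a Lagrange multiplier
    estimate = w[:n] @ z
    variance = w @ b                   # the error estimate comes with the answer
    return estimate, variance

# Invented sample locations (say, km along a transect) and measured values:
x = np.array([0.0, 30.0, 45.0, 100.0])
z = np.array([10.2, 10.6, 10.5, 9.8])
est, var = ordinary_krige(x, z, x0=60.0)
print(f"Estimate at x = 60: {est:.2f}, kriging standard deviation: {np.sqrt(var):.2f}")
```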

At the end of the day, I’d say throw out the existing analyses of the Argo data, along with all of the inflated claims of accuracy. Stop faffing about with gridboxes and layers; that’s high-school stuff. Get somebody who is an expert in kriging, and analyze the data properly. My guess is that a real analysis will show error intervals that render many of the estimates useless.

Anyhow, that’s my analysis of the Hansen Energy Imbalance paper. They claim an accuracy that I don’t think their hugely complex method can attain.

It’s a long post, likely inaccuracies and typos have crept in, be gentle …

w.

ChE
December 31, 2011 5:18 pm

Using a good platinum RTD, that kind of accuracy is probably possible, but doesn’t the heat from the probe have effects on the water temp of that order?

Lew Skannen
December 31, 2011 5:33 pm

Students! If you found physics 101 tedious because you had to keep fussing about with error bars and weren’t allowed to just write down any number which popped out of your calculator, don’t be disheartened!
There is a branch of science just for you!!!!
In fact Climatology needs you!
It pays well and will get you endless conferences around the world.
You will never have to ‘show your working’ and you will always have the option of explaining away a bad result by a quick ad hoc addition to your theory.
You will never be wrong.
Our guarantee: No Hypothesis ever rejected!
Join Up Now!
Become a Climatologer and your powers of prediction will outstrip those of even Astrologers.

Paul Martin
December 31, 2011 5:34 pm

Figure 5 seems to be missing (an internal UUID reference from the original document, I suspect).
[Thanks, fixed. -w.]

Editor
December 31, 2011 5:39 pm

Happy and Properous New Year to you, Willis. You wrote, “It is not clear how the bottom waters are warming without the middle waters warming.”
Just an example of how it could happen, “could” being the operative word: If the waters from the mid levels are being subducted to the deep ocean (through Meridional Overturning Circulation) at a rate that is comparable to the rate at which the mid levels are warmed, then the mid levels would remain flat while the lower levels are warmed.

Editor
December 31, 2011 5:45 pm

Don’t you just love it when you notice the typo in your comment after you click “Post Comment”? Properous? Oy vey! It’s the thought that counts, I guess.

AndyG55
December 31, 2011 5:47 pm

They are mistaking individual instrument accuracy for the final accuracy of a kludged together averaging procedure of sparse data.
DOH !!!
Same when they do their Land Temp calculations to get their meaningless Global Average.
These guys ARE NOT SCIENTISTS !!!

Lance Wallace
December 31, 2011 5:49 pm

Willis, Do the ARGO people provide estimates of the bias and precision of the temperature (and depth) measurements? I would think that such measurements would need to be made on two linked ARGO floats, so that identical conditions prevail throughout the entire dive; lab measurements of the equipment would not suffice without the actual pressures encountered on the dive.

John Silver
December 31, 2011 5:58 pm

30 years is one climate data point,
60 years is two climate data points,
90 years is 3 climate data points,
120 years is 4 climate data points,
……………………………………et cetera
Thou must obey the WMO and wait a couple of centuries.

mike
December 31, 2011 5:58 pm

wouldn’t the 2004 Argo data wipe out the 0.008 deg anyway?
http://cbdakota.files.wordpress.com/2011/09/fourfatalpiecesargobouys.gif

December 31, 2011 6:24 pm

I was always taught by my statistics professors: You can not have a greater precision in your estimates of error than your measuring devices.

DirkH
December 31, 2011 6:26 pm

Looks to me like kriging is an optimization problem where you want to find the optimal or near-optimal solution that minimizes a defined criterion. In the case of the oceans, one would have to take into account the currents, in other words, water masses move in a more or less well known way. Makes it a lot more complicated than the static models used in mining, but very interesting.

A physicist
December 31, 2011 6:30 pm

Lance Wallace asks: Do the ARGO people provide estimates of the bias and precision of the temperature (and depth) measurements?

Lance, to my mind the most stringent real-world calibration tests are conducted by the ARGO manufacturer on ARGO floats that are recovered (by accident) after several years at sea. According to the retrieval tests documented in Accuracy and Stability of Argo SBE 41 and SBE 41CP CTD Conductivity and Temperature Sensors (PDF available free), the ARGO instruments are remarkably accurate, remarkably stable … and (as seagoing equipment must be) they are immensely robust too. For example:

“[ARGO float serial number] WMO 1900169 experienced a pressure sensor malfunction after 10 profiles, and ceased descending to its normal park depth (1000m). It floated on the surface for 9 months before being picked up by a fisherman. It sat in port, in the sun, for 1+ year (personal communication, Dana Swift, Univ. of Washington).”

Note: despite this hard use, far outside its operating specifications, this recovered ARGO float’s temperature sensor remained in calibration.
Much further information about ARGO’s remarkable floats can be found on the manufacturer’s site.
And as Willis says, the central limit theorem helps too, for the common-sense reason that the average value of many noisy temperature measurements has lower uncertainty than any individual temperature measurement.

Tony Hansen
December 31, 2011 6:38 pm

Why don’t we just measure whatever is in Jim’s ‘pipeline’?

December 31, 2011 6:40 pm

I interviewed a physics professor at the University of Washington a few weeks ago whose team spent several years measuring ocean temperatures by means of sonar. He believed the accuracy was far greater than with these buoys. Their methods seem to have covered pretty large regions of the ocean. Most of their measuring was done in what was purportedly the height of the temperature rise.
He said they found no significant changes in water temperature. He is not a fan of AGW. I don’t hear much about their work in the context of AGW discussions, though.

AndyG55
December 31, 2011 6:48 pm

“I was always taught by my statistics professors: You can not have a greater precision in your estimates of error than your measuring devices.”
Your measuring procedure also affects your accuracy, as does the number of samples taken.
As is pointed out, 2000 sample points across the whole of the globe’s oceans is very sparse indeed.

John
December 31, 2011 6:58 pm

If the error bands should be considerably larger, does this suggest the twin possibilities of either substantially cooler or substantially warmer trends?

Camburn
December 31, 2011 7:16 pm

The ARGO data period is too short to make any conclusions.
The ARGO data, at this point in time, is too sparse in relation to the volume/change being measured to be quantitative and robust. We all know the XBT data was fraught with errors which no degree of smoothing can erase. The error bars remain so large that anything conclusive is a trojan horse.
In approx 20-25 years, one could use the ARGO data for round-about calculations. Till that time, the interpretation of said data is wishful thinking at best.
The published papers concerning AGW as of late are showing that the scientists who publish have lost touch with science, and that the folks who are peer reviewing this published junk must have gotten their PhD from the University of Antarctica … a mythical school with numerous graduates.
It is really sad to observe how low the bar has become in climate science. No one who thinks objectively and critically could use the current state of climate science to plan anything with any degree of certainty.

John Garrett
December 31, 2011 7:18 pm

Missing:
7,000 quintillion joules of heat energy
If located, call Kevin Trenberth.
Reward offered.

Camburn
December 31, 2011 7:19 pm

John@ 6:58:
You are totally correct. The error bars go in both directions.

Richard G
December 31, 2011 7:32 pm

We must consider the important distinction between precision and accuracy.
Example: A 1 meter diameter target with a 5 centimeter bulls eye. 5 shots clustered within the bulls eye demonstrate both high precision and high accuracy. The same shot cluster offset 20 cm right of center demonstrates high precision but low accuracy.
@AndyG55
“They are mistaking individual instrument accuracy for the final accuracy of a kludged together averaging procedure of sparse data.” >>>>
@A physicist
” the ARGO instruments are remarkably accurate, remarkably stable … and (as seagoing equipment must be) they are immensely robust too.”
To be more precise, we often mistake instrumental *precision* for final accuracy.

thingadonta
December 31, 2011 7:41 pm

I am not an expert in kriging and resource analysis, but I can give you an example where some whizz bang mathematician using dubious statistical methods came up with a resource model for a gold resource at Ballarat in Australia recently, which ultimately resulted in a $400 million write-off for one of the larger gold companies in the world (Lihir Gold), because they used fancy mathematics not backed up by basic geological characteristics, and used a resource method that was invalid. The geologists who understood the style of gold distribution (it was fundamentally ‘spotty’) and rock characteristics could have told them this from the start, but as usual, the mathematicians didn’t listen to the field geologists; they went ahead with the mine plan, and it was a complete fiasco. Just an interesting case study for those who understand kriging and its relationship with field geology.

Shrnfr
December 31, 2011 7:47 pm

However, for the law of large numbers to hold true, they must all be drawn from populations that live in L2. If one is drawn from a population with a dimension less than 2, the CLT no longer applies and the sample variance diverges to infinity. Due to the mathematically chaotic nature of the processes, it is not a given that all rungs live in L2. I would thus approach the error bars with caution.

cohenite
December 31, 2011 7:48 pm

David Marshall says:
“I interviewed a physics professor at the University of Washington a few weeks ago whose team spent several years measuring ocean temperatures by means of sonar. He believed the accuracy was far greater than with these buoys. Their methods seem to have covered pretty large regions of the ocean. Most of their measuring was done in what was purportedly the height of the temperature rise.
He said they found no significant changes in water temperature. He is not a fan of AGW. I don’t hear much about their work in the context of AGW discussions, though.”
How does SONAR measure temperature? That sounds very interesting.
The Hansen paper and the recent activity in the AGW camp about OHC is no doubt in response to Knox and Douglass’s paper discussed here:
http://wattsupwiththat.com/2011/01/06/new-paper-on-argo-data-trenberths-ocean-heat-still-missing/

Shrnfr
December 31, 2011 7:50 pm

How I turned variables into rungs, Iwill never know, but typing on aniPad with my dog on NY lap may explain it. sorry.

Mike Wryley
December 31, 2011 7:57 pm

The supposedly time-invariant half-lives of radioisotopes have been changing as of late. Some folks are suggesting that some unknown emission from the sun is the cause. Maybe the same phenomenon can affect a temperature measuring system. Or it could be a Windows (gag) update.

December 31, 2011 8:00 pm

Wonder how the El Nino and La Nina events show up in the data sets. Also, what about the temperature profiles of the various oceans/regions separately? Who is to say that the temperature profile of the ocean isn’t cyclic? When you have currents and possible other factors to consider, the temperature profile of the relatively small fraction of the ocean can change without the total heat content changing. Still, at the end of the day, no matter what, the overall temperature change is so small, even if real, it is nothing more than a curiosity to study and absolutely nothing to worry about.

George
December 31, 2011 8:04 pm

“Remember that the mining companies cannot afford mistakes. Kriging (and its variants) has been proven, time after time, to provide the best estimates of what cannot be measured under the surface.”
Sounds like a job for Steve McIntyre. I wonder what he would say about it.

Richard G
December 31, 2011 8:06 pm

I would say that over the last 4 decades the *Precision* of the data collection has improved, but we really cannot know what the accuracy is if we did not collect it originally. As an example please refer to Willis’ post here.
Hansen’s Arrested Development
http://wattsupwiththat.com/2011/12/20/hansens-arrested-development/#more-53430
The CERES satellite provides Hansen with extremely precise data, but he doesn’t trust its *accuracy* so he adjusts it to bring it into conformance with his expectations.

BrianP
December 31, 2011 8:23 pm

Having taken oceanographic measurements for years I can tell you there is much more going on below the surface than people will admit. Just like the air, there are rivers of water flowing and meandering around almost at random. Warm blobs of water, cold blobs of water. How can you possibly measure that?

randomengineer
December 31, 2011 8:38 pm

Willis, the accuracy ought to be fine. The sensors are typically sampled N times per actual reported sample and ought to have an internal crossref to compensate for sensor drift. As to sensor repeatability, the internal crossref and firmware linearity correction code ought to work. I spent a lot of years making ridiculously high precision NIST traceable instruments, and the error bars they’re reporting seem OK to me. Point is that you ought to look up the argo mfg data and verify that the sensors have NIST traceability for the expected temp bounds.

Alan S. Blue
December 31, 2011 8:50 pm

This is precisely my main complaint with the standard surface temperature analysis.
You have a -point-source- instrument. It was NIST calibrated as a point-source instrument. You’re using it for point-source weather analysis – which was the primary intent of the vast majority of sites.
But… then they’re turning around and using the exact same 0.1C stated instrument error for a point-source measurement as a reasonable estimate of the error in the measurement of the entire gridcell’s temperature. This ignores the issues of extensive completely empty gridcells ‘getting’ an estimated value (with the associated overly optimistic error estimates!).
The standard evening news weather maps for your local city demonstrate the relative levels of ‘error as a gridcell measurement device’ of a randomly placed instrument, and 0.1C is laughably optimistic.

DJ
December 31, 2011 9:06 pm

As Willis notes… Having worked in labs doing critical measurements, and dealing with calibration and “significant figures”, my experience with the reality of temperature measuring devices causes me to raise a red flag at .008Deg resolution claims. Especially with the mechanical limitations of the devices in question, in the working environment, how they’re calibrated, etc.
Simply as a practical matter, I understand where randomengineer is coming from, but knowing how scientists tend to throw stuff together…… Have the grad student get out the Omega catalog and order some type T’s……
…. And add a comment to your code…
;fudge factor
..Then tell the DOE in your weekly report that “Progress in all areas is excellent”.

AndyG55
December 31, 2011 9:10 pm


“If the error bands should be considerably larger, does this suggest the twin possibilities of either substantially cooler or substantially warmer trends?”
NO! It suggests that we can’t make any determination about what is happening.

December 31, 2011 9:39 pm

Outstanding analysis, Willis.
I started out disagreeing with you on the sheer size of the oceans issue. As long as you have a sufficiently large number of (random) measurements, and Argo has far in excess of that number, the size of the oceans is irrelevant when it comes to determining any warming trend.
But when you got into climatology adjustments, gridding and interpolation, the alarm bells started ringing.
This is the same (or similar) dubious methodology that the climate models use.
Hopefully a real statistician will get access to the raw Argo data and give us a proper analysis. Until then I’ll be considerably more cautious about drawing conclusions from the published Argo data.
And to follow up on Bob’s comments. Downwelling currents could be warming the deep oceans, but there are no measurements to support this conjecture.

DJ
December 31, 2011 9:41 pm

Ok, it looks like the buoys use a “Scientific Thermistor Model WM 103”. I can’t find anything on these thermistors. Admittedly I did find some high accuracy units advertised at accuracies of .002-.004Deg. by some other company. I remain a bit skeptical.

RockyRoad
December 31, 2011 9:54 pm

Having done a lot of kriging in my profession as a mining engineer/geologist, I’ve always wondered if there was any way it could be applied to climate data. I’ve used kriging on composited drill hole samples to generate 3-d block models for global reserve estimates, on blast holes to outline grade control boundaries on day-to-day mining in open-pits, on surface geochemical samples to determine trends of anomalous mineralization, and I’ve used the estimation variance that is a byproduct of the procedure as a means of defining targets for development drilling to expand reserves at an operating mine. I’ve even used the technique on alluvial diamondiferous gravels in Africa to determine thickness of the gravels with surprising success.
However, in all the above applications, the first requirement is to determine the spatial correlation between the samples. The technique for this is known as the variogram, which divides samples into pairs separated by increasing distances; in addition, the pairs at increasing distances are generated using relatively narrow directional windows (usually 15 degree increments) around the three orthogonal planes. After being plotted on paper and taped into a 3-d model, it is fairly easy to determine the spatial orientation of the oblate spheroid of the sample correlation as well as the distance of the major, minor, and intermediate axes.
The intercept of each variogram curve with the origin (which should be the same regardless of the direction inspected) defines the “nugget effect”, which is the inherent noise of the sample set. The variogram curves rise with distance until the curve levels off, after which there is no further correlation of the sample values; the distance to the inflection point should be different for each direction inspected—it would be highly unusual to find a spherical range of influence because almost all things in nature display some degree of anisotropy. The sample set is said to have no correlation if the variogram curves display no downward trend for closer sample sets, whether because the sampling method is inherently corrupted, the distance between samples is too great, or a mixture of sample sets representing a variety of correlations exists; in such situations the kriging methodology breaks down and you might just as well apply current climate science “fill-in-the-box” procedures and do an area- (or volume-) weighted calculation using inverse distance. (By the way, I’ve never understood the way “climate scientists” apply the temperature of one place to another simply because the other had a missing value; in mining you’d get fired for such blatant shenanigans.)
The critical factor in all this is trying to get a handle on the spatial correlation of the sample set being studied. Mining typically targets the concentration of a valuable compound or metal that is the result of geologic processes, which usually include hydrothermal or mechanical fluids, temperature gradients, lithologic inhomogeneities, and structural constraints such as faults and bedding planes. And usually the system from which the samples are derived isn’t in constant motion like the ocean. (I suppose taking the value of each ARGO “sample” at exactly the same time would fix the ocean in place and give one a fighting chance to determine if there is any sample correlation for that time period, although shifting currents later would change the orientation of all sets of correlations.)
Should a variogram analysis of the ARGO data indeed find some semblance of sample correlation, the defined model of anisotropy would be used in the kriging algorithm, which can be used to generate either a 2- or 3-dimensional model. The interesting thing about kriging is that it is considered a best linear unbiased estimator. After modeling, various algorithms can be used (for example bi-cubic spline) for fitting a temperature gradient to the block values and determine an overall average (although in mining it is essentially a worthless exercise to find the average value of your deposit—nobody is encouraged to mine “to the average” as that is a definite profit killer).
Admittedly, as has been noted by other comments, highly sophisticated methods of determining metal values in deposits can cause disastrous results if ALL significant controls on the distribution are not accounted for. I’ve worked at operations where major faults have divided the precious metals deposit into a dozen different zones of rock—the variography of each zone must be determined separately from all the others and blocks within those zones estimated (kriged) using only that zone’s anisotropy. If focus to such details isn’t emphasized, the model results would be less than ideal and may even be worthless.
In summary, I’m trying to think of a way zone boundaries could be delineated in the ocean since I’m pretty sure one 3-d or even 2-d variogram model wouldn’t be sufficient and my concluding remark is: good luck on using kriging. You’re first going to have to figure out the variography and I’m not necessarily volunteering (even though I have variography and kriging software) but it would be a great project if the grant money was sufficient. (Where’s my Big Mining check?)

December 31, 2011 10:01 pm

Here’s my stoopid question of the day. If my aging memory is firing on all cylinders, the project with the Argo buoys started in 2003, and there was a substantial sharp temperature decline initially, followed by a rebound, and a very slow rate of increase thereafter. Isn’t Hansen cherry-picking the time frame of the ‘study’, in order to ‘support’ the foregone conclusion? If so, I’m shocked, I tell you, shocked!
Now I’ve gotta go, and read the link on kriging. It sounds fascinating, as Mr. Spock would say.

Graeme No.3
December 31, 2011 10:47 pm

Willis you say “It is not clear how the bottom waters are warming without the middle waters warming.”
Because giant squid come to the surface at night and pack suitcases of heat, which they drag down to the stygian depths. All that agitation and splashing causes the increasing number of hurricanes that are occurring (as I’m sure you’ve noticed).
That is quite as believable as a lot of AGW theory, and you can’t deny it because as soon as one of the ecoloons hears that a sceptic has denounced the theory, it will be all over the net as proven fact. They will have to change their mascot and stop bothering polar bears.
Just tell them to sarc off/
Happy New Year to you and all, and may it bring an increase in common sense to all who need it.

December 31, 2011 11:04 pm

For a start, the term “heat content” shows a lack of understanding of physics, for it should be “thermal energy content” and it is important to understand that thermal energy can interchange with gravitational potential energy as warmer water rises or colder water sinks – as happens in ocean currents. So there is nothing intrinsically “fixed” in so-called ocean heat content. Indeed, some energy can easily flow under the floor of the ocean into the crust and mantle, or at least reduce the outward flow.
What really affects Earth’s climate is the surface temperature of the seas and oceans which brings about close equilibrium (in calm conditions) with the very lowest levels of the atmosphere. Thus the climate in low lying islands like Singapore is very much controlled by ocean temperatures. So too, for example, is the rate of Arctic ice formation and melting governed by the temperatures and rates of flow of currents from the Atlantic to the Arctic Oceans. To me the main value in discussing ocean thermal energy content is to emphasise that it dominates land surface energy in a ratio of about 15:1 and thus, I say, sea surface temperatures should bear that sort of weighting over land temperatures when calculating global temperature means.
When we understand the dominance of sea surface temperatures in the scheme of things, it becomes apparent that we should seek historic ocean data, perhaps sometimes having to consider islands in key locations such as Jan Mayen Island (within the Arctic Circle) which I have mentioned in another post. See the record here: http://climate-change-theory.com/janmayen.jpg and also note the 200+ year record for the albeit larger island of Northern Ireland here http://climate-change-theory.com/ireland.jpg
Then of course we have Roy Spencer’s curved trend of sea surface data http://climate-change-theory.com/latest.jpg and Kevin Trenberth’s curved plot (on SkS) both of which are now in decline: http://climate-change-theory.com/seasurface.jpg
What does this data show? Well, certainly there’s no sign of any hockey stick. There is indication of warmer temperatures in the Arctic in the 1930’s (substantiated here http://climate-change-theory.com/arctic1880.jpg ) and indications in this last plot of a huge 4 degree (natural) rise in the Arctic from 1919 to 1939. There was also a significant increase of about 2.2 degrees in Northern Ireland between 1816 and 1828 – all “natural” it would seem and completely eclipsing the 0.5 degree (also natural) rises between 1910 and 1940 and 1970 and 2000. Yes, note the similarity before and after carbon dioxide levels took off – http://earth-climate.com/airtemp.jpg
Note also how the curved trend in Spencer’s plot of all lower atmosphere satellite data seems to be coming in from the left (in 1979) from higher levels – and clearly has a lower mean gradient than those standard plots which give greater weighting to land surfaces. I suggest that either one or both of the following explains this: (a) urban crawl must have had some effect, no matter what anyone says to the contrary (b) possible questionable choice (and elimination) of certain land based records with a view to specifically creating an apparent hockey stick effect, emphasised of course by simplistic weighting of about 30% for land measurements based on surface area, rather than about 6.5% based on thermal energy content.
Trust sea surface temperatures I say. What a shame those NASA measurements failed (?) on October 4, 2011. Perhaps they were too threatening to survive! Anyway, 2011 at sea surface undoubtedly did close with a mean less than that for 2003: http://climate-change-theory.com/2003-2011.jpg
Enjoy a cooler New Year everyone!

December 31, 2011 11:06 pm

“However, in all the above applications, the first requirement is to determine the spatial correlation between the samples. The technique for this is known as the variogram, which divides samples into pairs separated by increasing distances;”
The argo deployment plan is driven by Ocean Models. FWIW

December 31, 2011 11:09 pm

H2011. A jam sensation bursting with hand picked cherries you’ll want to spread thickly on your waffles. As recommended by James Hansen.

December 31, 2011 11:13 pm

No, these guys don’t use kriging, they use kludging.

December 31, 2011 11:26 pm

Willis, you are right on again. I have been kriging geological stuff: metallic and non-metallic ore, coal, kerogen and lots of other stuff since the early 80’s. You are correct, it works as described. In the oceans it will work too, but not to anything like thousandths of a degree. That is why they don’t use it for ARGO; it won’t give you the values predicted by your models or your ideology/dogma. It is obvious the precision and accuracy needed to achieve thousandths is more dream than reality. In most metallic ores, for example, tenths are easy, hundredths maybe, it depends, and thousandths I would be laughed out of the room. Ore bodies are static and way smaller than oceans. All Hansen et al. are doing is masturbating and not even doing that very well. What these guys are doing is not science. I think it approaches the paranormal.

January 1, 2012 12:25 am

There was also a significant increase of about 2.2 degrees in Northern Ireland between 1816 and 1828 – all “natural” it would seem
That was warming after the 1815 Tambora eruption cooling event (year without a summer).
That the warming extended for a decade indicates how long volcanic aerosols hang around (causing cooling).

January 1, 2012 12:25 am

I still remember J. Willis saying “I kept digging and digging” when Argo results were too cold for them. He kicked out all buoy data which seemed too cold. And every Argo update since has a more positive trend. I do not believe it.

January 1, 2012 12:36 am

“thingadonta says:
December 31, 2011 at 7:41 pm
I am not an expert in kriging and resource analysis, but I can give you an example where some whizz bang mathematician using dubious statistical methods came up with a resource model for a gold resource at Ballarat in Australia recently”
Guess who was chairman of this debacle – one very poor economist and government climate advisor called Garnaut
happy and healthy New Year to you Willis and other contributors

Geoff Sherrington
January 1, 2012 1:40 am

In 2006 I suggested by email to Phil Jones that he get into geostatistics (which includes kriging). He said they had looked at it. Nothing more. Perhaps, post-Climategate, we now know that the message went to an inappropriate person.
Willis, your paragraph says it all in one:
“One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world. There is an associated uncertainty that is sometimes not considered. This is the uncertainty of how much your measurement actually represents the entire volume or area being measured.”
This alone is one good reason for climate workers to give complete data to others more skilled in practical work – to allow proper error estimates.

old44
January 1, 2012 1:56 am

To put it into perspective, one Argo thermometer taking three samples per month in 165,000 cubic kilometres of water is the equivalent of measuring Port Phillip Bay once every 183 years or Port Jackson every 5,646 years.

crosspatch
January 1, 2012 1:58 am

At some point they are going to run out of “adjustments” they can make.

old44
January 1, 2012 2:01 am

Dennis Nikols Professional Geologist says:
December 31, 2011 at 11:26 pm
In regard to Hansen et al., are you suggesting practice doesn’t make perfect?

January 1, 2012 2:28 am

It would be interesting to hear from a submariner. I know they do a bit of lurking above or below layer boundaries; I wonder how the argo floats measure these.

Peter Miller
January 1, 2012 2:55 am

It is a long time since I used kriging for orebody analysis, but one thing I remember causing problems is ensuring you always compare apples with apples, not pears. Kriging is only accurate when you are analysing the same ‘population group’. In mining terms that means analysing the same orebody (mines often have multiple ore zones, each with its own individual genesis).
In this instance, my guess is different depths, latitudes etc. would represent different ‘population groups’. I suspect this concept is far too complex for the typical ‘climate scientist’, who prefers Mannian Maths, or one of its derivatives.

LazyTeenager
January 1, 2012 2:56 am

Dennis Nikols, Professional Geologist, says
Ore bodies are static and way smaller than oceans. All Hansen et al. are doing is masturbating and not even doing that very well. What these guys are doing is not science. I think it approaches the paranormal.
——————–
Dennis, you seem to have overlooked a rather significant fact. The rocky crust is not homogeneous and is highly discontinuous; the oceans are very, very continuous and very nearly homogeneous.

Old England
January 1, 2012 3:28 am

Willis,
You say “If there are warm spots or cold parts of the water in the tub, you’ll need a lot of thermometers to get an average that is accurate to say a tenth of a degree.”
Seems right to me – given the nature of water and the flows within it, it would be near impossible to achieve even in a bath tub.
What would be interesting would be to take a volume of water similar to a bath tub, heat it with a known, measurable quantity of heat (to the top layer?), swirl it gently and then take temperature measurements at different places and depths. Use the temperature readings and the volume, in isolation from the known heat input, to calculate the heat input to the water. The difference between the calculated and the real, measured heat input would show the potential inaccuracy / error margin of calculations based on the Argo sampling. My guess is the error margin would be significant.
Just a thought – and great post.

January 1, 2012 3:59 am

Phillip: Thanks for pointing that out regarding the 1815 Tambora eruption. There was however also subsequent warming above the trend. The long-term trend was not affected by this, but it shows such events can have a significant short-term effect.
It is interesting to note that Roy Spencer makes a point of showing the 1991 Mt Pinatubo cooling on his plots http://climate-change-theory.com/latest.jpg – this cooling thus amplifying the short term warming which followed until 1998, though this was also amplified by the lead up to the largest El Niño on record.
I dispute that volcanic cooling is due to aerosols, however, as ozone depletion is a more likely cause and we have seen proof in 2011 that back radiation from a cooler atmosphere cannot have any warming (or cooling) effect on a warmer surface. (See my posts which SkS could not answer and so deleted. http://climate-change-theory.com/SKS111223D.jpg )
Another interesting long-term trend is seen in US surface temperature records, which show only about 0.03 deg C per decade for 1880-2006: http://earth-climate.com/image318.gif Quite possibly the US scientists were more careful than those in some other countries to locate stations so as to avoid urban sprawl – and I note they don’t know much about hockey sticks. (There is quite enough information in data from 1880 to show no hockey stick – or perhaps a boomerang – exists.) Even so, we should not claim to be able to extrapolate a 120-year trend for more than about another 40 to 50 years. As you may know, I believe 400+ year cooling will start after about 2058 based on natural cycles I discuss at http://earth-climate.com

Viv Evans
January 1, 2012 4:02 am

Willis wrote:
“One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world. “
(My bold)
This isn’t just one problem of climate science, I think it is the main problem, the hard work of a handful of climate scientists notwithstanding.
I suggest it would also be worthwhile to show graphs in addition to the one in Fig 2, where the left-hand scale represents steps of 1ºC, rather than the minuscule 0.01ºC shown.
Mind, that would make the ‘rising heat, we’re going to fry’-curves look less scary …

Speed
January 1, 2012 4:11 am

People with experience in instrumentation will immediately question the accuracy and precision reported in the discussion paper. This paper discusses one possible source of errors.

We strongly suggest the adjustment of all known pressure drifts in Argo data.
These adjustments will improve the consistency and accuracy of the hydrographic dataset
obtained by Argo Program. Even small errors should be corrected because the potential
impact of a small bias in Argo data pressures from uncorrected pressure sensor drifts
could be quite significant for global ocean heat content anomaly estimates. In the worst
case, a bias could introduce an artifact comparable in magnitude to ocean heat content
changes estimated for the later half of the 20th century.

If the pressure sensor of a float has a 5-dbar bias, the float reports the profiles
with a warm surface layer that is falsely thick, and a thermocline that is falsely deep, both
of which tend to increase the oceanic heat content in the most of the ocean.

From …
Argo Float Pressure Offset Adjustment Recommendations
http://prelude.ocean.washington.edu/dmqc3/pub/argo_float_press_offset_adjustment.pdf
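As a rough illustration of the effect described in that quote, the sketch below shifts a made-up thermocline profile by about 5 metres and integrates the spurious column heat content. The profile shape and seawater properties are assumptions chosen for scale only; this is not the Argo processing chain.

import numpy as np

rho, cp = 1025.0, 3990.0            # seawater density (kg/m^3) and specific heat (J/kg/K), approximate
dz = 1.0
z = np.arange(0.0, 1500.0, dz)      # depth grid, metres

def temp_profile(depth):
    # Toy profile: ~20 C mixed layer decaying toward ~3 C at depth (an assumption, not Argo data)
    return 3.0 + 17.0 * np.exp(-depth / 200.0)

true_T = temp_profile(z)
biased_T = temp_profile(np.maximum(z - 5.0, 0.0))   # profile reported ~5 m too deep: warm layer looks thicker

extra_heat = rho * cp * np.sum(biased_T - true_T) * dz          # spurious heat content, J/m^2
mean_dT = extra_heat / (rho * cp * z.size * dz)                 # equivalent mean warming of the column
print(f"spurious heat from a ~5 m depth bias: {extra_heat:.2e} J/m^2 "
      f"(~{mean_dT*1000:.0f} millidegrees over the 1500 m column)")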

January 1, 2012 4:15 am

Philip – you said “That was warming after the 1815 Tambora eruption cooling event (year without a summer)” But actually 1814 went 6 degrees below the long term trend and cooling had started before the eruption. Also, the moving average for the subsequent warming went about 0.6 degrees above the trend from about 1823 to 1833. Aerosols don’t cause cooling for the same reasons I have outlined in other posts regarding carbon dioxide not causing warming. Volcanic eruptions can however affect the ozone layer.

EternalOptimist
January 1, 2012 4:21 am

I am not a scientist, or a kridger.
Are we not measuring the temperature of the currents, rather than the entire body of water? Isn’t it the currents that perform most of the oceanic mixing?

David L
January 1, 2012 4:34 am

For this type of data one would employ a “split plot” statistical analysis with the proper error term. The proper error is NOT just the error of instrument replication (the variability within an instrument making repeated measurements on a given sample) but would also include variation of measurement over time (drift), variation between instruments (reproducibility), and variations in location and depth (how precisely do they measure the same exact geographic location? Within inches, feet, miles?).
They employ statistics as if it were the same instrument in the same location over a short time frame. In reality they have multiple instruments in multiple locations, taking measurements at multiple depths, over long periods of time.
For a “science” that heavily relies on statistics it’s shocking how little basic statistics they understand!
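David L’s point about the error term can be illustrated with a toy root-sum-square of independent error components; the magnitudes below are placeholders chosen for illustration, not Argo specifications.

import math

# Independent error sources combine in quadrature; instrument repeatability alone
# is usually the smallest piece. All magnitudes here are illustrative assumptions.
components_degC = {
    "instrument repeatability": 0.002,
    "sensor drift over deployment": 0.005,
    "between-instrument reproducibility": 0.005,
    "spatial sampling (location/depth) representativeness": 0.05,
}

total = math.sqrt(sum(v**2 for v in components_degC.values()))
for name, v in components_degC.items():
    print(f"{name}: +/- {v:.3f} C")
print(f"combined (root-sum-square): +/- {total:.3f} C")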

January 1, 2012 5:11 am

Fascinating. It seems to me that while statistical techniques from geology may be useful, the reality is that the statistical tools for handling OHC need to be developed based on the known complexities of the system under consideration. I would be interested in hearing from the two or three commenters with backgrounds in kriging as to its origins and how the techniques have developed in response to actual validated results. The comments on identifying different zones strike me as being extraordinarily relevant given the known variations in currents and geothermal activity.

Claude Harvey
January 1, 2012 5:42 am

How many times can the world “go for” the same gang of magicians teasing a tiny “signal” out of vast amounts of data through the use of “sophisticated and complex statistical techniques”? I believe that is how certain parties concluded Antarctica was about to catch fire and roast all the penguins. Don’t hear much about that one anymore.

Bill Illis
January 1, 2012 5:42 am

Really good article Willis. Thanks for doing this; it is important that we understand what the underlying data is about.
Just noting that the data does go back to 2003 and 2004. Schuckmann used these years when she arrived at the 0.77 W/m2/yr figure in her 2009 paper. This number was widely quoted and used by Hansen in a previous paper similar to the current one. It was later surmised, however, that the Argo distribution wasn’t dense enough and the climatology used was not accurate enough to use these two years. How come no one said anything about that until very recently? The 0.77 W/m2 is still being used by some today. The (insufficient but still worthwhile talking about) data from 2004 was about the same as 2005, and 2003 was lower than 2004.
Second, there are a huge number of calculations, interpolations and assumptions required to generate this data. As with Hansen, one can imagine that every one of von Schuckmann’s choices regarding those assumptions favoured a rise in OHC over time. I haven’t heard Josh Willis say anything about this latest paper (which is a signal by itself).
http://pielkeclimatesci.wordpress.com/2011/02/07/where-is-the-missing-argo-upper-ocean-heat-data/
Regardless of whether the data is accurate enough to use, the new OHC rise number is 0.54 W/m2/yr in the 0 to 1500 metre ocean. The 0 to 700 metre layer is quite flat at about 0.16 W/m2/yr (the newest numbers bump that up to 0.2 W/m2/yr or so). I’m not sure how the math works with 0 to 1500 metres being so much higher than the 0 to 700 metre ocean.

Richard M
January 1, 2012 6:13 am

When doing voter polls the typical error rate is given as ±3%. This is usually based on 1,000 or more samples out of millions of voters. So I suspect we’re looking at something in the same ballpark in this case. I think polling science is probably a good methodology to apply to Argo: each buoy is “voting” on the result.
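For reference, the ±3% polling figure comes from the binomial margin of error; the catch is that it only carries over to Argo if each “vote” is independent and representative of the whole ocean, which is exactly the assumption in dispute here. A minimal sketch:

import math

# Classic 95% margin of error for a proportion near 50% with n independent samples
# (1.96 is roughly the 95% normal quantile).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: +/-{margin_of_error(1000)*100:.1f}%")   # ~3.1%
print(f"n=2500: +/-{margin_of_error(2500)*100:.1f}%")   # ~2.0%
# This shrinks as 1/sqrt(n) only for independent, representative samples;
# spatial coverage and correlation are what the post above is questioning.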

January 1, 2012 6:27 am

Mr. Eschenbach,
You don’t need a fluid specialist to support your incredulity. Think of sampling a temperature in the air and claiming that ridiculous accuracy. Also note that the ocean is not a monolithic body of stable water. It has surface currents, many, many layers with independent currents, upwelling zones, downwelling zones, etc.
Would anyone pop a weather balloon up in the atmosphere and claim they could calculate the atmospheric temperature 500 km away? Of course not.
Whoever made this claim is a novice at planetary phenomena.

Bob
January 1, 2012 6:31 am

I’ve always been very impressed with the accuracy and precision of weather/climate measurements. All the years I struggled to get reliable, reproducible temperature resolutions to one decimal place and climate scientists get 3-5 decimal place resolution routinely. And then they can resolve meaningful trends of 0.0002°, which can be extrapolated out for centuries. According to this article, they don’t need, relatively, a lot of data to do this. Truly amazing that they would publish such stuff.

January 1, 2012 6:35 am

BTW, Bottom water warmth without surface response (yet) is easy to understand. Mid ocean ridges and subduction zones. I keep harping on this, but folks need to address the inner core heat AND how that heat escapes. All calculations I have seen for heat flowing from the core are based on stable and calm regions without the volcanic structure we see in ocean ridges and trenches.
But these places are where the inner core transfers the most heat – at the bottom of the oceans. If we see an uptick in magma or heat moving through these features, the lowest layers of the ocean will warm.
And we may not see that heat hit surface areas (upwelling zones) for decades.

Bill Illis
January 1, 2012 7:06 am

I’ve charted up the NODC (0 to 700 metres) Ocean Heat Content numbers versus these new ones from Schuckmann (0 to 1500 metres).
Schuckmann 2011 has a much higher trend in the overlap period, but things are not always what they seem at first glance.
http://img97.imageshack.us/img97/9177/schuckmannvsnodcohc.png
It does look like there is too much variability in the Schuckmann data to simply use a straight-line trend. The NODC (0 to 700 metre OHC) actually increases over the period by more (+2.6 W/m2) than the Schuckmann (0 to 1500 metre OHC) data does (+2.3 W/m2). The Schuckmann trend of 0.54 W/m2/yr is also declining over the period, so that it is negative at the end of the period.
This is why I always want the actual numbers in one of these studies.

Steve Keohane
January 1, 2012 7:53 am

Willis, I got stuck on Fig. 2. ±1 sigma is not the 95% confidence level; ±2 sigma is. If the graph is showing ±1 sigma, then (by putting a ruler on my screen) ±3 sigma is about ±0.01°, which pretty much covers the whole dataset, rendering any trend or difference in the measurements meaningless.
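A quick check of the sigma-to-confidence conversion Steve is using, for a normal distribution:

import math

# Coverage of +/- k sigma for a normal distribution
for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2))     # P(|X| <= k sigma)
    print(f"+/-{k} sigma covers {coverage*100:.1f}%")
# ~68.3%, ~95.4%, ~99.7%; the exact 95% level is +/-1.96 sigma.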

Dennis Kuzara
January 1, 2012 8:01 am

It is not clear how the bottom waters are warming without the middle waters warming. I can’t think of how that might happen …
Have you considered this?
Not all heat sources are external. Heat is constantly being generated within the earth and obviously flows outward, warming the ground and the water from below. The Earth’s internal heat comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%). Mean heat flow is 65 mW/m2 over continental crust and 101 mW/m2 over oceanic crust. There are places like Yellowstone national park and the oceanic rifts where it is much greater than the mean.
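As a scale check on those heat-flow numbers, here is a back-of-the-envelope conversion of the quoted ~101 mW/m2 mean oceanic flux into warming of a 1,500 m column per year, assuming none of the heat escapes. The seawater properties are approximate; this says nothing about local hot spots such as ridges.

rho, cp = 1025.0, 3990.0        # kg/m^3 and J/(kg K), approximate seawater values
flux = 0.101                    # W/m^2, the mean oceanic-crust figure quoted above
depth = 1500.0                  # m, the Argo-analysis column
seconds_per_year = 365.25 * 24 * 3600

dT_per_year = flux * seconds_per_year / (rho * cp * depth)
print(f"~{dT_per_year*1000:.2f} millidegrees C per year")   # on the order of half a millidegree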

January 1, 2012 8:07 am

Another outstanding article Willis !!
========================================================================
Now, I hate to argue from incredulity, and I will give ample statistical reasons further down, but frankly, Scarlett … eight thousandths of a degree error in the measurement of the monthly average temperature of the top mile of water of almost the entire ocean? Really? They believe they can measure the ocean temperature to that kind of precision, much less accuracy?
========================================================================
My first thought was “And eight thousandths of a degree is bad, why???” (Even if anyone accepts that it is the correct trend/measurements.)
The volume of water in the oceans is still mind boggling. Not sure if everyone can appreciate how immense the oceans really are.

Tom Bakewell
January 1, 2012 8:25 am

This article is a fine example of why I love WUWT. Great technical writing augmented with cogent discussions offered by knowledgeable folks. Wonderful job you do, Sir Anthony.

REPLY:
Thanks but the credit here in this story goes to Willis Eschenbach – Anthony

Lance of BC
January 1, 2012 8:25 am

ARGO=GIGO
http://earthobservatory.nasa.gov/Features/OceanCooling/page1.php
“First, I identified some new Argo floats that were giving bad data; they were too cool compared to other sources of data during the time period. It wasn’t a large number of floats, but the data were bad enough, so that when I tossed them, most of the cooling went away. But there was still a little bit, so I kept digging and digging.”…
…when [Willis] factored the too-warm XBT measurements into his ocean warming time series, the last of the ocean cooling went away.
“So the new Argo data were too cold, and the older XBT data were too warm, and together, they made it seem like the ocean had cooled,” says Willis. The February evening he discovered the mistake, he says, is “burned into my memory.” He was supposed to fly to Colorado that weekend to give a talk on “ocean cooling” to prominent climate researchers. Instead, he’d be talking about how it was all a mistake.
More,
http://sonicfrog.net/?p=4820

ferd berple
January 1, 2012 8:38 am

Willis, here are some new plots of Argo showing E and W hemispheres 60N to 60S, 2004 thru 2011. These were made with the latest “Global Argo Marine Atlas” viewer, downloaded from their website.
http://www.flickr.com/photos/57706237@N05/6613084529/lightbox/
http://www.flickr.com/photos/57706237@N05/6613108605/lightbox/
These might make a good addition to your article, to give the reader some idea of how flat the temperatures actually are. Maybe if you blow up the scale as per Hansen to 0.01 degrees there might be a detectable change, but when you put this into context as in these plots, there is no trend.

FerdinandAkin
January 1, 2012 9:03 am

Camburn says:
December 31, 2011 at 7:16 pm
… must have gotten their PhD from the University of Antarctica … a mythical school with numerous graduates.

I believe the University you are referring to is actually Wossamotta U. (located in FROSTBITE FALLS, a small town in the state of Minnesota, in Koochiching County, near to MOOSYLVANIA. )

ferd berple
January 1, 2012 9:11 am

Looking at these plots I can see a very small increase in near surface temperature between 0-180E. Look at the area of the 22C seasonal scalloping.
http://www.flickr.com/photos/57706237@N05/6613084529/lightbox/
However, looking at the other plot 180E-360E, it appears that the 22C area is decreasing slightly from 2004-2011
http://www.flickr.com/photos/57706237@N05/6613108605/lightbox/
Is it possible that 1/2 of the world (East) is warming and the other half is cooling (West), as divided by Greenwich and the international date line? Or could this in fact be evidence that Argo’s accuracy is not nearly good enough to rely on 0.01 C?
Or is this evidence that China and India are actually warming the world with their emissions, contrary to what is said about aerosols? While the West is cooling the oceans with its high technology such as fracking and tar sands?

Robert of Ottawa
January 1, 2012 9:16 am

Fallacy of false precision: take 1,000 measurements with an accuracy of 1 degree, and the average is NOT suddenly good to 1/1000 of a degree. You may end up with a number with four decimal places, but unless the errors are purely random, the accuracy is still only about 1 degree.
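A small simulation makes the distinction concrete: averaging beats down random error roughly as 1/sqrt(N), but a shared calibration bias survives any amount of averaging. The numbers below are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
true_value = 15.0
n = 1000

# Case 1: purely random +/-1 degree noise -> the mean does tighten (~1/sqrt(N))
random_only = true_value + rng.normal(0.0, 1.0, n)
# Case 2: the same noise plus a shared 0.5-degree calibration bias -> averaging cannot remove it
with_bias = random_only + 0.5

print(f"random only: mean error = {abs(random_only.mean() - true_value):.3f} C")
print(f"with bias  : mean error = {abs(with_bias.mean()  - true_value):.3f} C")
# Extra decimal places in the mean are only meaningful for the random part;
# systematic errors pass straight through the average.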

DJ
January 1, 2012 9:32 am

With the sudden (to me at least) revelation that the Argo buoys log temperatures to within 0.008 deg (remarkable resolution and accuracy by any measure), it makes the issue of Trenberth’s “missing heat” even more interesting.
How could the buoys have missed this heat, no matter how stealthy, on its path downward?

ferd berple
January 1, 2012 10:06 am

Looking at Figure 3 above, in the Pacific there is a line clear of floats at the equator. Right away the Argo data is suspect, because that is most likely the equatorial counter-current – a sub-surface river of cold water that flows eastward to return the excess water that builds up in the western Pacific due to the prevailing easterlies.
Excluding this from the Argo data is unlikely to provide an accurate estimate of Pacific Ocean temperatures. From personal experience I can confirm this is an area of cold water. It may also solve a mystery of nature, as we caught a juvenile blue marlin, weighing about 5 pounds while sailing between Palmyra and Samoa when we passed into an area in which the counter current was running on the surface. Air temperatures were decidedly chilly for the equator.

January 1, 2012 10:10 am

Excellent post.
Willis clearly explains how big the oceans are. That, perhaps, could be emphasised; all those 2,500 Argo floats, each representing some 50,000 or so square miles of water. That’s a circle with a radius of about 130 miles. To have a horizon distance of 130 miles it is necessary to have a height of eye of about 20,500 feet. All the water you can see from that height – higher than Mount Kilimanjaro or Mount McKinley (just) – is represented by one Argo float.
Many years ago, when I was at sea, it was the practice to measure the temperature of oil cargoes [crude oil, mostly, for me] with three temperatures – to cover five centre tanks; and four more to cover five pairs of wing tanks. [Yes, that’s right, a majority of tanks did not have any temperature taken at all.] Temperature, from tables, gives a density [to four significant figures]; the volume was measured to the nearest half-inch or centimetre [depending on whether the volume tables were in Imperial or metric]. By nifty multiplication, it was possible to get an answer to three decimal points of a tonne – something like 251 872.126 tonnes. In reality, the first two digits were accurate, and the third was probably quite close – say 251,900 plus or minus 200 tonnes. But the company liked the [utterly spurious] accuracy of measuring to the nearest kilogram.
And we only had to be within 0.2% of the shore figure [say 500 tonnes] for there to be no protest on either side.
None of this took into account any deformation of the hull, and only lip-service was paid to heel and trim errors, unless they were substantial.
And we appear to have been utter paragons of accuracy compared with some of the castles in the air, built on the Argo data as described.
Happy and Healthy New Year to all.

Roger Andrews
January 1, 2012 10:11 am

A few more comments from another mining consultant who has done a lot of kriging.
As Rocky Road and others have noted above, you can’t do kriging until you can define your kriging parameters, and to define them you use a thing called a variogram, which plots sample variance or covariance or whatever (there are numerous different ways of doing it) against sample separation. What you would like to see is a plot that starts low at short separations, climbs upwards as separation increases and then flattens out. From such a plot you can estimate the three parameters you need to define the kriging search ellipsoid and the kriging weights – the nugget (where the plot intersects the y-axis), the range (the distance at which the plot flattens out) and the sill (the y-axis level at which the plot flattens out). Getting these data will, however, be a complicated and uncertain exercise because you will have to take the time variable into account (results may vary from month to month or from year to year) and because different types of variograms may give you quite different kriging parameters.
But if you don’t get an interpretable variogram you can’t do any kriging.
So the first step is to run some variograms and see what you get.
And if you get interpretable variograms you then have to decide whether you want to krige anyway. Kriging is often represented as the answer to a statistician’s prayer, but it’s basically just another way of averaging the data, and as noted in a couple of earlier comments it can give you seriously screwed-up results if you don’t know what you are doing. Kriging is in fact just as GIGO-prone as any other spatial-averaging method.
And what’s my professional opinion of kriging? Well, after having used it to construct hundreds of orebody models over the last twenty years I’ve concluded that averaging grades using an inverse-distance-to-some-higher-power operator usually gives more representative results. But this applies only to land that doesn’t move. I have no idea what might happen over a shifting ocean.
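For readers who have not met a variogram, here is a minimal one-dimensional experimental semivariogram on synthetic data with a built-in correlation length of roughly 100 km. It is a sketch of the first step Roger describes, not an analysis of Argo data.

import numpy as np

rng = np.random.default_rng(1)
x = np.arange(0.0, 1000.0, 2.0)                        # sample locations along a line (say, km)
white = rng.normal(0.0, 1.0, x.size + 49)
signal = np.convolve(white, np.ones(50) / np.sqrt(50), mode="valid")   # correlated over ~100 km
z = signal + rng.normal(0.0, 0.5, x.size)              # plus uncorrelated "nugget" noise

def semivariogram(x, z, lag_edges):
    dx = np.abs(x[:, None] - x[None, :])
    dz2 = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(x.size, k=1)                  # count each pair once
    dx, dz2 = dx[iu], dz2[iu]
    return np.array([0.5 * dz2[(dx > lo) & (dx <= hi)].mean()
                     for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])

lag_edges = np.linspace(0.0, 300.0, 13)                # 25 km lag bins
for h, g in zip(0.5 * (lag_edges[:-1] + lag_edges[1:]), semivariogram(x, z, lag_edges)):
    print(f"lag ~{h:5.1f} km: gamma = {g:.2f}")
# Read off the nugget (the intercept), the range (where it flattens, ~100 km here)
# and the sill (the plateau). If no such structure appears, there is nothing to krige.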

peter_dtm
January 1, 2012 10:14 am

LazyTeenager says:
January 1, 2012 at 2:56 am
Dennis Nikols, Professional Geologist, says:
Ore bodies are static and way smaller than oceans. All Hansen et al. are doing is masturbating and not even doing that very well. What these guys are doing is not science. I think it approaches the paranormal.
——————–
Dennis, you seem to have overlooked a rather significant fact. The rocky crust is not homogeneous and is highly discontinuous; the oceans are very, very continuous and very nearly homogeneous.
——————–
I suggest two things
Several trips to the beach over a year or two, preferably weekly; and go paddling or swimming on every visit (which will instruct you on just how un-homogeneous the top layer of sea water is)
and
A quick look-up of ocean currents; the penetration of the Amazon River into the South Atlantic; and the effects when two currents flow over each other (the Agulhas current is a good starting point, but so is the Gulf Stream)
Not only are there differences in temperature, but also composition – look up the Plimsoll line and why it has saved so many lives and ships. If you dig deep enough you may even find out which load line to load to if you go from South America (say Sao Francisco do Sul) via Belem to Archangel – and would you load differently in December or June? Why? (Or why not?)
The oceans are definitely not homogeneous; nor are they continuous (either horizontally or vertically).

randomengineer
January 1, 2012 10:17 am

OK, I spent some time looking now.
The wm 103 sensor is from http://www.sensorsci.com and is listed at +/- 0.5 deg C accuracy.
In the 10 to 15 deg C range there is a difference of 4190 ohms resistance, i.e. 838 ohms/deg C.
That would be 15713 @ 15 deg to 19903 @ 10 deg.
The reported “error bar” (1 sigma) is climate statistics; instrumentation that is NIST-traceable and usable will have a 3 sigma value (at least in the industry I have been involved with). In this case the implied 3 sigma is 0.025 (3 x 1 sigma), but my experience in instrumentation says that it’s common to see tight 1 sigma values while tight 3 sigma is *difficult*, which is why industry specs call for a 3 sigma rating. As such, the actual 3 sigma, since it’s unreported, is likely in the 0.05 range (about 6 times the 1 sigma, not 3).
I don’t know how the Argo is configured internally regarding sensor handling and firmware, nor the pressure sensor drift correction.
My guess is that they are not internally compensating or applying firmware linearity, but rather reading ohms resistance and cross-referencing it with pressure and depth in a table. Remember there are 4190 ohms between just 10 and 15 C, so the false resolution of 838 ohms per degree seems to be how they derive the 0.008 1 sigma.
In short by all appearances it’s worse than Willis thought. There appears to be no way to resolve temp changes inside the claimed resolution, and CERTAINLY not within the 3 sigma.
Could be that I’m full of crap, too.
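Taking randomengineer’s figure of roughly 838 ohms per degree at face value (unverified against the actual Argo sensor), here are two scale checks. The 0–100 kohm ADC span below is a hypothetical choice, and real front ends are usually limited by noise, self-heating and reference drift rather than bit count.

# Using the ~838 ohms/degC sensitivity quoted in the comment above (an assumption, not a spec)
sensitivity = 838.0          # ohms per deg C
claimed_sigma = 0.008        # deg C

print(f"0.008 C corresponds to ~{sensitivity * claimed_sigma:.1f} ohms of resistance change")

# Nominal resolution of an ideal 24-bit ADC over a hypothetical 0-100 kohm span;
# this ignores noise, reference drift and self-heating, which are the usual limits.
full_scale_ohms = 100_000.0
lsb = full_scale_ohms / 2**24
print(f"ideal 24-bit LSB: {lsb*1000:.1f} milliohms -> {lsb/sensitivity*1e6:.1f} micro-degC")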

Patrick
January 1, 2012 10:19 am

Hi
I am extremely sceptical regarding the claimed long term accuracy of these devices.
I had a quick look at the buoy website. The temperature is measured by a thermistor, compared with a reference Vishay resistor (the article does not reveal what type). The voltage across the thermistor and resistor is measured by a 24-bit ADC. The article makes a huge fuss about the long-term stability of the thermistor and resistor, but glosses over the stability of the crystal oscillator that is used to clock the ADC. There is also a curious lack of information about the long-term drift of the reference voltage that provides the potential difference across the resistors (other than saying it is AC excitation). Also, the temperature stability of the various ovens that are required to keep the drift to a minimum is not quoted.
±8 thousandths of a degree OVER A YEAR?? Pull the other one!
cheers
Patrick

Theo Goodwin
January 1, 2012 10:30 am

RockyRoad says:
December 31, 2011 at 9:54 pm
“The intercept of each variogram curve with the origin (which should be the same regardless of the direction inspected) defines the “nugget effect”, which is the inherent noise of the sample set. The variogram curves rise with distance until the curve levels off, after which there is no further correlation of the sample values; the distance to the inflection point should be different for each direction inspected—it would be highly unusual to find a spherical range of influence because almost all things in nature display some degree of anisotropy. The sample set is said to have no correlation if the variogram curves display no downward trend for closer sample sets—whether because the sampling method is inherently corrupted, or the distance between samples is too great, or a mixture of sample sets representing a variety of correlations exists; in such situations the kriging methodology breaks down and you might just as well apply current climate science “fill-in-the-box” procedures and do an area- (or volume-) weighted calculation using inverse distance. (By the way, I’ve never understood the way “climate scientists” apply the temperature of one place to another simply because the other had a missing value; in mining you’d get fired for such blatant shenanigans.)”
Brilliant work, RockyRoad. You have given a clear example of the difference between using statistics that apply to a “population” whose characteristics are known and to a “population” which is entirely fictitious. The way “climate scientists” apply the temperature of one place to another simply because the other had a missing value should bring automatic termination, because they are “applying” an apple to an orange, though we are talking about imaginary “climate science” apples and oranges.
If you do not know some characteristics of the “population” studied then you have no basis for drawing those “cells” and claiming that they are comparable. Climate scientists have no reason for claiming that the temperature measurements they are comparing are measurements of the same thing. Other commenters have made the same point in a practical way when they point out that the climate scientists have no idea about, say, “rivers” of temperature that meander all over the place. (What will climate scientists say? Somehow they get away with the all-purpose excuse that “it averages out.”)

Nick Shaw
January 1, 2012 10:40 am

Is it just me, or does everybody believe you can put any kind of precision instrument in, or even on, the ocean for more than a month and expect it to perform exactly as it did when first placed in position? In my experience, with a boatload of equipment to measure speed, bottom contour and fish finding, as well as cameras, without cleaning once a month or so they quickly go out of calibration. Oh, I know, we’re dealing with the best stuff money can buy, but salty ocean water laden with everything from biologicals to chemicals kicks the bejesus out of any equipment made by man right smartly! How often are these buoys taken from the water, cleaned and calibrated? My guess is not often enough!
And this is completely beyond the ludicrous idea that the ocean’s temperature could be measured with any accuracy whatsoever using this method, as Willis has so ably pointed out here!

Sal Minella
January 1, 2012 10:41 am

Willis,
You had me at your precision and accuracy argument. Much the same argument can be used when looking at global annual atmospheric temperature data. I’d love to see a discussion of the atmospheric data with regard to measurement technology, sensor placement, lack of sensors early on (1850s – 1970s), adherence to scientific principles when reading and recording data, human bias in reading and recording data, etc.
Is it possible that the atmospheric data set (1850 – present) is precise and accurate enough to support claims that we can detect global-annual-average atmospheric temperature changes to +/- .01 degree C?

peter_dtm
January 1, 2012 10:41 am

randomengineer says:
January 1, 2012 at 10:17 am
Do they say what their digitisation level is? Are they using a 10-bit, 16-bit or 32-bit A-to-D chip?
I came across a HART pressure device the other day still using a 10-bit A-to-D and claiming some stupid accuracy/resolution of 4.00000 mA to 20.00000 mA – mainly because it came up to the PLC/SCADA via a 64-bit PLC input card…

randomengineer
January 1, 2012 11:27 am

peter_dtm — Do they say what their digitisation level is ?
Someone else above posted that it’s 24 bit.

January 1, 2012 11:30 am

Willis,
“If there are warm spots or cold parts of the water in the tub, you’ll need a lot of thermometers to get an average that is accurate to say a tenth of a degree.”
That’s really not the question. While they talk about the average temperature, what we are really talking about is the best estimate of the unobserved water. I’ve done this average-temperature-of-a-pool thought experiment over at Lucia’s; maybe I should do it here.
Imagine you have a very large pool of water and I ask you what the temperature of the water is.
Well, if all you know is physics, then by just looking at the water you can note that it is not freezing (32F) and not simmering (say 180F). So, knowing only physics and nothing else, your best estimate of the temperature of the water is (180+32)/2 = 106F. That guess will be least wrong. Note I am assuming you are not sensing the air temperature. All you know is physics and the measurements I give you.
What does that mean? When we talk about the average temperature of the water, what we really should say is this: “our best estimate of the unobserved water temperature.” So, standing by that pool, if I restrict myself to only what I know about physics, I can say my best estimate is 106F. I have not measured the temperature. I only know physics and what I see; the water isn’t frozen, and it’s not simmering. So, I estimate 106F.
This guess will minimize the error. If I ask you to guess the temperature at the edge of the pool, your best estimate is 106F; at the center of the pool, 106F. What that means is this: if I choose to measure the pool, my inevitable error will be minimized by this guess. Now, if I told you the air temperature one inch above the water was 72, what would you estimate the water at? You probably wouldn’t guess 106F, would you?
Now I place one thermometer in the center of the pool at the surface. It reads 72F. I now have new information about the temperature of the water. My guess of 106 was way off – 34F off – but it was based only on my knowledge of physics and the visual appearance of the water.
Now, I ask you a question: given what you know about physics and heat transfer, I ask you to predict the temperature at ANY other place on the surface of the pool. What is your best estimate? Well, it’s not 106F, and it’s not 32F. Your best estimate is 72F. That guess will minimize your error. Can you see what assumption we are making, and can you see how physics plays a role in that assumption?
Now we place a second thermometer in the pool at the edge. It reads 73F, and I ask you another question: predict the temperature at a point halfway between the center and the edge of the pool.
Again, using what you know about heat transfer and the conductivity of water, you can make a more informed estimate. It won’t be 106F, and not 72 or 73. To minimize your error, what do you estimate? Why, you estimate 72.5 of course; that minimizes your error. So I place a thermometer there and I measure 72.5. Wow, good guess. Now I add a few more thermometers, some higher than 73, some lower than 72, and I create a grid. Each grid cell has a temperature and each is different. We are not really calculating an average; we are creating a better estimate of the unobserved water temperature. We are saying: if you place a thermometer anywhere in this grid, the temperature will be close to that value. You can’t guess a more accurate value based on the information you have. (Well, a real physics model might help you here to improve it somewhat.)
Then we repeat this the next day. The thermometer at the center of the pool reads 73, the one at the edge 74. Now estimate the one in between. Your job is to get the best estimate you can, given your prior information. Well, you would guess 73.5. That’s the best you can do (unless you want to model it). We assume that changes in the observed points are tracked by changes in the unobserved points.
What can we say about the temperature of the pool? Our best knowledge says that it increased by 1 degree. What’s that mean? It means that if we knew the temperature at location x,y yesterday, then today our best estimate of the temperature at that position will be +1. Will these estimates be perfect? No. Do we have any basis to say that the unobserved location will go down in temperature, based on the information we have? No.
Does this mean that it is impossible for that temperature to be -1 in that location from what it was yesterday? No. But based on our knowledge, our best estimate is +1 for every unobserved location.
We do this all the time:
When we read that it was cold in the LIA, that there were frost fairs in London, what does that tell us about the rest of the world? What do we assume about the rest of the world, and why?
When we read the post the other day telling us that proxies from one part of the ocean imply a warmer MWP, what does that imply about the unobserved parts of the world? I don’t see you guys making the same arguments on those threads. Why? Because the assumption of uniformity between the observed and the unobserved suits your purpose, and because, absent information to the contrary, that assumption works more often than not.
So, Argo gives you nothing more than the best estimate of the unobserved water temperature/heat content. That’s an odd way to put it, so instead people call it “the average”. But it’s not. The underlying assumption is that the heat content varies smoothly between measurement points. That assumption can be tested – but only by making the measurements with the same equipment in the same manner. It can also be tested by decimating the field, although not as rigorously.
The precision of your measurements is just a function of the number of measurements. The accuracy of your estimate is based on an assumption. Accuracy is always based on an assumption. (At the limit we assume the laws of physics don’t change in between measurements.) Here are the assumptions we can use:
1. You can assume that the heat varies smoothly between measurement points and calculate a number.
2. You can question that assumption and say nothing, or write a blog post.
3. You can prove that assumption wrong by making new measurements using the same equipment.
Questioning that assumption (#2) is pretty weak. Here is why it is weak.
Back to the pool. We see that big pool; it measures 72F at the center. I ask you to guess the temperature at the edge. You have two choices:
1. You assume the heat varies smoothly and guess 72 (it’s 73, so you are off by 1).
2. You say, “I question whether I can guess the temperature here; it may not vary smoothly,” and you make no estimate.
Which statement is wrong? Well, 1 is “wrong” in that the answer is slightly off. But estimates are always wrong – always and forever. However, we are able to do things and take action only BY making assumptions.
If we avoid action because our estimates may be wrong, then we are really stuck. I assume the next piece of concrete I step on will be as solid as the last piece of concrete. I have to, or I could not walk to the store. I would just sit here and assume that my seat will continue to be a solid object.
But what about 2? In number 2, the person isn’t “wrong” about the temperature; they are just questioning – merely questioning. In #2, the person has used the formal ability to question an assumption to really deny that we have any knowledge. That’s wrong in a different way. It’s a pragmatically indefensible position, and impossible to maintain consistently.
Wrapping up: Argo doesn’t give you an average temperature (that probably doesn’t exist); what it gives you is a very precise (yes, the precision is warranted) estimate of the “unobserved” temperature of the ocean.
This estimate is based on an assumption. You can question that assumption, but there is only one way to prove it wrong: make more measurements. Even then, you will still be left with the assumption…
We always have assumptions. We cannot act without them. Each of you has assumptions. When you read a paper that says location X was warmer in the MWP, you make assumptions about the unobserved locations. When you see some evidence from the LIA, you make assumptions about unobserved locations.
The difference is this: Warmista want to use the kind of assumption we make all the time (that unobserved values vary like observed values) to take certain actions. You object to the actions, so IN THIS CASE you object to the assumptions. In other cases you embrace the very same assumption.
If you want to object to the underlying assumptions (be skeptical), then you need to practice consistency. If you want to prove the assumption wrong, you need to get busy making some floats.
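Mosher’s pool argument is, in effect, spatial interpolation under a smoothness assumption. A minimal sketch using inverse-distance weighting (kriging would replace these weights with variogram-based ones) reproduces his numbers; the coordinates are arbitrary choices for illustration.

import numpy as np

obs_x = np.array([0.0, 10.0])        # centre and edge of the pool (arbitrary units)
obs_T = np.array([72.0, 73.0])       # observed temperatures, deg F (from the comment)

def idw(x, obs_x, obs_T, power=2, eps=1e-9):
    # Inverse-distance weights encode the "varies smoothly" assumption
    w = 1.0 / (np.abs(x - obs_x) + eps) ** power
    return np.sum(w * obs_T) / np.sum(w)

print(f"halfway point estimate: {idw(5.0, obs_x, obs_T):.1f} F")   # 72.5, as in the comment
# The estimate is only as good as the smoothness assumption; a cold inflow at the
# halfway point would falsify it, and only new measurements could reveal that.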

Bill Illis
January 1, 2012 11:30 am

If we can’t use the Argo network to arrive at a precision that gets us to 0.X W/m2/yr or 0.00X C/yr (which is where the numbers will actually be), then why did we put 3,000 of them out there?
I guess any data is better than none.

RockyRoad
January 1, 2012 11:42 am

Geoff Sherrington says:
January 1, 2012 at 1:40 am

In 2006 I suggested by email to Phil Jones that he get into geostatistics (which includes kriging). He said they had looked at it. Nothing more. Perhaps, post Climategate, we now know that the message went to an inappropriate person.

If Phil Jones can’t do an Excel spreadsheet, he would literally gag on geostatistics. (I’m surprised Michael Mann didn’t pick up on this earlier for the ARGO buoy data and invent argostatistics. The resulting shape? Fishysticks.)

Theo Goodwin
January 1, 2012 11:44 am

Steven Mosher says:
January 1, 2012 at 11:30 am
Once again, you fail to distinguish between the pool, which is a population whose characteristics are known and easily checkable, and the endless, wild and free ocean whose characteristics are neither known nor easily testable. If half the pool is always shaded you have no doubts about that. If half the ocean is under the influence of clouds you will not know that unless you do some ball busting work to first learn it and then confirm it.
All Hansen has done is make every possible simplifying assumption about the ocean, and the sum total of them can be easily summed up: the oceans of the world are uniform with regard to all characteristics whatsoever. Now how stupid is that?
Warmists need therapy for deficiency of empirical instincts.

Theo Goodwin
January 1, 2012 12:01 pm

Nick Shaw says:
January 1, 2012 at 10:40 am
“Is it just me or does everybody believe you can put any kind of precision instrument in, or even on, the ocean for more than a month and expect it to perform exactly how it did when first placed in position?”
Warmists don’t do physical hypotheses at all, so they will have no clue what you are saying. No doubt they have never sampled their Argo measuring devices by pulling some out of the oceans and testing for changes.

Steve Oregon
January 1, 2012 12:03 pm

They’ve taken data, added, subtracted & averaged.
They also collaborated, compiled, prepared, filed and distributed.
It doesn’t matter if any of it is either accurate or useful.
Their point is there’s a whole lotta measuring going on round here.
Busy bureaucrats making busy work measuring for the sake of measuring. Measuring everything everywhere with layers of processing for the sake of processing.
Much of it is simply using tax money to provide activists a means to turn their hobby interests into careers.
So what is the real difference between all of this data and having none of it?
Suppose none of it was available? OMG? What would science do?

Pat Moffitt
January 1, 2012 12:05 pm

Willis,
We can use freshwater river temperatures to understand the problems with sensor coverage you describe, and the attribution claims for small changes in temperature over time. The inherent difficulty is that for both ocean and river temperature changes, the full system complexity must be considered.
A number of papers have tried to extract a climate signal out of river temperatures. The Kaushal et al. 2010 paper, “Rising stream and river temperatures in the United States”, is an example attributing temperature increases to UHI and climate.
But do these temperature increases have anything at all to do with climatic factors? The answer is yes, no, maybe and we don’t know. The complexity of a given river’s hydrology and its changes over time – channel width, vegetative shading, depth, velocity, sediment porosity, macrophytes, tributary land use, etc. – ALL impact a river’s temperature. It is often the changes in a river’s or ocean’s state that are at the heart of temperature change.
An example: many of the increasing river temperatures I have reviewed over the last few decades are the result of changes in sediment load, not increasing air temperatures. As the sediment load of a river increases, its channel widens in response. This widened channel increases the surface area subject to warming and also reduces the channel percentage shaded by trees. The increased channel width generally translates into decreased depth – another heat problem. With the wider channel we see decreased velocities during base and low flow conditions and increased time exposure to the sun for any given unit of flow. More importantly, the lower velocities promote settling of finer-grained particles, reducing the porosity of the river’s substrate. Perhaps 25% of a “healthy” river’s flow travels in the hyporheic zone (the sub-gravel water transport). The surface water is “pushed” into the hyporheic zone in some areas of the river and “upwells” back into the surface water in others. The finer sediments accumulating at lower velocity inhibit a river’s ability to “push” water into these sub-gravel “air conditioned” reserves. I have seen maximum peak summer temperatures rise as much as 7C following the collapse of an old mill-dam and its accompanying sediment release.
We cannot make any claims about the climatic impact on river temperature without accounting for the changes in the river’s hydrologic state described in the above sediment example.
And the ocean is even more complex and less understood than rivers. We cannot ascribe minute changes in ocean temperature unless we understand the changes in the ocean state, including long-term changes in Ekman transport and other ocean circulation changes, including exchanges between the deeper water (below the Argo range) and the upper layers, or changes in zonal wind patterns for the more surface layers. The problem is we have little understanding of the ocean circulation patterns and don’t know if the ocean operates in more than one stable mode. We didn’t even know of the PDO until the 1990s, and we only have hints about the existence of longer-term and perhaps more important ocean cycles. Consider, if you will, the coral bleachings of a decade ago – were they the result of increased air temperature or the result of a “natural” period of reduced cold-water upwellings?
I remain skeptical of temperature attribution and continue to be shocked at scientists making claims of an ability to elicit a climate signal out of tiny temperature trends operating within a highly complex – and insufficiently understood – self-organizing system. I do, however, applaud Argo’s engineering and its mission of collecting essential raw data needed for our journey towards understanding.

Doug in Seattle
January 1, 2012 12:14 pm

Kriging works for drill data primarily because the method allows one to factor in directional trends, something very much present in mineral data. It can also be used to rectify irregularly spaced data into a grid, which I suspect is the principal reason it is used here.

January 1, 2012 12:22 pm

Willis:
We are constantly reminded by your reviews, as well as those by other posters at WUWT, that the scientific literature’s claim to be peer reviewed is an empty claim of quality, honesty, and accuracy. Hansen’s paper is no exception. He may have the physics of what might happen to an unbalanced earth energy budget, but the data analyses performed to justify the physics are unbelievable beyond reason. How could the people who reviewed H2011, listed in the acknowledgments of the paper, have agreed that the paper was worthy of putting Goddard’s reputation on the line? It makes me think that the reviewers do not care. Unfortunately, over and over again, commentators and blog authors at this website identify errors in analysis (or thinking, for that matter) that should have been picked up by the people who act to review papers before they are published. In many cases it is obvious even on a casual reading that something isn’t correct.
I don’t know if you are remunerated for your efforts to bring insight into the world of climate science, but you should be, and you should be hired by the so-called scientific community as one who will carefully review a paper prior to publication. The confusion we find in climate science today is a direct result of a network of peers approving each other’s work rather than dissecting it for scientific integrity and worthiness. It could have saved billions! Now the worms are out of the can and can’t be put back. Some day this collection of junk science will make some sociologist famous as a classic example of cabals in science gone amok.
Thank you for your efforts and for the insights you have shared in climate science and happy 2012.

richard verney
January 1, 2012 12:26 pm

Too few data points.
Too short a record.
No point in discussing,
Come back in 30 years and maybe, just maybe, it will be possible to analyse something of significance.

highflight56433
January 1, 2012 12:31 pm

I recall that the speed, shape and timing of a SONAR pulse changes with changes in water temperature. 🙂 For more on this, join the Navy. Or maybe ask a dolphin or whale… 🙂
Do ocean currents tend to congregate debris as in Argo debris?

Doug Proctor
January 1, 2012 12:35 pm

The fundamental problem is the small changes that occur during the time period studied. You don’t need high data density when the changes are large and widespread, any more than Gallup needs to survey every American when the political position of the electorate is raging one way.
This is an excellent example, and well explained, of a basic problem I see in the AGW wars. The differences are so small that high-powered computers/computations and significant “adjustments” are required to see them. The “data” doesn’t show enough to be free of the suspicion that the methods of dissection introduce the very patterns that are being sought. And, worse, that by throwing everything local into a global pot, the “stew” and the stew’s characteristics are an artefact. What is measured and discussed is not the world but the swirling colours that result from enthusiastic mixing and matching.
Think of a Jackson Pollock painting: splashes of colours that have harmony and pattern in them, a feeling of a complete, intended piece, when in fact the Pollock is essentially chaos arrested at a specific point, and composed of paints that are monotones that live separate, barely connected lives (outside of the factory) in their tins.
I think some, perhaps a lot, of the warmist “proofs” are artefacts of mathematical collection, adjustment and manipulation. You can’t go backward and find in the small what you believe you have found in the big.
CAGW is like deriving an understanding of gravity by studying galaxies but being unable to explain the apple falling from trees.

Al Gored
January 1, 2012 12:47 pm

“Get somebody who is an expert in kriging, and analyze the data properly.”
Is that someone Steve McIntyre?

Anthony Scalzi
January 1, 2012 12:47 pm

cohenite says:
December 31, 2011 at 7:48 pm
How does SONAR measure temperature; that sounds very interesting.
——-
I’m guessing that sonar can measure temperature because the temperature of the water influences the density, which in turn affects the speed of the sonar signals, which can be measured with the sonar equipment.

January 1, 2012 1:04 pm

Cohenite:
“How can sonar measure water temperature?”
Speed of sound varies with temperature – in water, as in air.

January 1, 2012 1:17 pm

Cohenite, Anthony: I’m guessing that sonar can measure temperature because the temperature of the water influences the density, which in turn affects the speed of the sonar signals, which can be measured with the sonar equipment.
Yes, they measured the speed of sound over large distances — hundreds of miles. I think he said they could measure temperature variations to within a hundredth of a degree, or so. Ultimately the project was closed down due to hysteria about marine mammals being harmed by the noise — though he said it was no louder than a whale’s call.
Maybe I’ll ask him if there’s a good article on-line, giving their findings.
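The sensitivity behind that acoustic approach can be sketched with a commonly quoted simplified sound-speed formula (Medwin’s approximation, with the coefficients as I recall them, so treat them as approximate rather than authoritative):

# Why long-path acoustics can resolve tiny temperature changes: a sketch only.
def sound_speed(T, S=35.0, z=1000.0):
    """Approximate speed of sound in seawater (m/s); T in deg C, S in ppt, z depth in m."""
    return (1449.2 + 4.6*T - 0.055*T**2 + 0.00029*T**3
            + (1.34 - 0.010*T) * (S - 35.0) + 0.016*z)

T = 10.0
per_degC = (sound_speed(T + 0.01) - sound_speed(T - 0.01)) / 0.02   # local sensitivity, m/s per deg C
path = 1_000_000.0                                                  # a 1,000 km path, metres
d_travel = path / sound_speed(T + 0.01) - path / sound_speed(T)
print(f"~{per_degC:.1f} m/s per deg C near {T} C; a 0.01 C change shifts "
      f"a 1,000 km travel time by about {abs(d_travel)*1000:.0f} ms")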

RockyRoad
January 1, 2012 1:38 pm

Steven Mosher says:
January 1, 2012 at 11:30 am

Willis,
“If there are warm spots or cold parts of the water in the tub, you’ll need a lot of thermometers to get an average that is accurate to say a tenth of a degree.”
That’s really not the question. While they talk about the average temperature, what we are really talking about is the best estimate of the unobserved water.

If you want to object to the underlying assumptions (be skeptical), then you need to practice consistency. If you want to prove the assumption wrong, you need to get busy making some floats.

Actually, Steven, you’re probably right—there’s undoubtedly a need for a lot more buoys, but without actually doing any variography to see if there is any statistical correlation in the temperatures of the current ARGO buoys, one must just make a guess. But Willis is right, too, since his statement also points to the need for more measurements and, by extension, the determination of proper sample density with more buoys.
First, let’s consider a gold deposit that has an inherent assay variability (range of influence) of 200 feet, yet the initial sample density is on 500-foot spacings (assume a square grid)—you’d not see any sample correlation from the horizontal variography (ignore the vertical for now). You’d start to pick up correlation when the sample spacing was reduced to less than 200 feet, and even as it got smaller you might see complications such as nested structures (two or more ranges of influence superimposed) until your sample spacing got ridiculously small (and expensive), so there’s the potential for a lot of complexity in all this, but let’s keep it simple for now.
So let’s assume our 2,500 ARGO buoys are positioned on a somewhat regular grid of 245 miles apart (60,000 sq miles of influence each) but let’s start with a single row of 24 such buoys evenly spaced, say, 10 miles apart to determine initial correlation in a temperature variogram. (I’m admittedly guessing at the granularity of ocean currents so maybe that’s too much but also perhaps too little, depending on the location in the ocean, but 24 buoys should be a good start, although putting them down the current rather than across the current will get drastically different results so anisotropy will become a factor.) From that information, the number of needed infill buoys could be determined and (assuming no limit to the money spent and positioned on an orthogonal grid to accommodate the anisotropy), the ocean’s temperature could eventually be kriged and a global estimation variance determined. With this the precision of the estimate can be calculated and overlain on Hansen’s graph to see if Hansen’s temperature estimate is statistically significant. If not, Hansen’s line is just another worthless noodle on the world’s spaghetti plate of climate graphs.
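As a cross-check on the buoy-density numbers used in this sub-thread, assuming the standard figure of roughly 139 million square miles of ocean surface:

import math

ocean_sq_mi = 139_000_000          # approximate global ocean surface area
floats = 2500

area_per_float = ocean_sq_mi / floats
spacing = math.sqrt(area_per_float)      # side of an equivalent square cell
print(f"~{area_per_float:,.0f} sq mi per float, i.e. a grid spacing of roughly {spacing:.0f} miles")
# ~55,600 sq mi per float and ~236-mile spacing -- in the same ballpark as the
# 50,000-60,000 sq mi and ~245-mile figures quoted in the comments.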

Pat Moffitt
January 1, 2012 1:48 pm

Steven Mosher says:
“Imagine you have a very large pool of water and I ask you what the temperature of the water is.”
Your pool analogy is a poor one, because in a pool we would expect no significant temperature gradients or large internal circulation patterns around such gradients. It is not simply the number of temperature sensors along some horizontal axis that matters, but also the spatial arrangement of the sensors needed to capture the vertical temperature circulation into and out of the Argo-sampled depth range. We cannot therefore assume a simple temperature gradation between two sensors, but must also include the upwelling component.
I don’t disagree with your comment:
“Argo doesn’t give you an average temperature (that probably doesn’t exist); what it gives you is a very precise (yes, the precision is warranted) estimate of the ‘unobserved’ temperature of the ocean. This estimate is based on an assumption.”
Perhaps for me the biggest issue is that this is tax-funded work. We can comment all we like without being asked to build our own floats. Any claim – especially one that may have a critical impact on government policy and regulations – must first pass the smell test. Given the distance between sensors and the sensor array’s failure to capture essential vertical and geographic profiles, Willis is absolutely correct to write a post saying the heat recorded may not rise above the noise, and to call for some defensible error bars and statistical analysis. The Argo data collection is to be applauded – some of the takeaways from that data, not so much. We are often way too confident in what we know.

January 1, 2012 2:13 pm

To DJ (from a fellow DJ Cotton): Trenberth’s missing energy was rubbished by Knox and Douglass: http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf
In any event, energy imbalance at TOA does not prove carbon dioxide was the cause. If thermometers tell us the world is warming or cooling naturally, then we would expect natural imbalance. I don’t know why they bother with it.

January 1, 2012 2:33 pm

There is nothing particularly uniform in ocean temperatures due to the currents shown here http://www.climate-change-theory.com/currents.jpg so I really don’t know how they can estimate total thermal energy accurately.

ferd berple
January 1, 2012 2:41 pm

Viv Evans says:
January 1, 2012 at 4:02 am
I suggest it would also be worthwhile to show graphs in addition to the one in Fig 2, where the left-hand scale represents steps of 1ºC, rather than the minuscule 0.01ºC shown.
Maybe the place to start is to double check Hansen’s work. Here is what Argo has to say:
http://www.flickr.com/photos/57706237@N05/6615370673/in/photostream
I’ve included the settings to recreate this for yourselves. I leave it to the WUWT contributors to decide. Does the choice of 2005 as a starting point create a false picture of warming? Are we looking at a case of cherry picking to make it appear as though there is warming?
When you plot the Argo data at a scale that corresponds to the natural range of ocean temperatures, there is no apparent warming.
http://www.flickr.com/photos/57706237@N05/6615370673/in/photostream

3x2
January 1, 2012 3:09 pm

So, let me see if I can guess how this will play out in AR5….
1) Play with the slight ‘cooling’ trend and tease out a ‘warming’ one.
2) Convert deg C into Joules so we have some big scary numbers.
(so far, so good)
3) Define the colour scale such that 0.01 deg C is doom red à la Sherwood (08) and cover a graphic of the world’s oceans with huge scary red patches.
4) Bury the methodology deep within references where no ‘policy maker’ or ‘journalist’ will ever go.
5) Rely on the inevitable cut and paste ‘journalism’ to promote the story of ‘boiling’ oceans.
Did I miss anything?

highflight56433
January 1, 2012 3:40 pm

Once again there is a reliance on the assumption that the general public, along with the politicians, is too ignorant to see through another scamming waste of resources. Our resources.

JimF
January 1, 2012 3:50 pm

@RockyRoad says:
December 31, 2011 at 9:54 pm
and
Andrews says:
January 1, 2012 at 10:11 am
Excellent comments regarding the use of geostatistics (i.e., “kriging” in the broader sense) in mining geology. I share your skepticism that it would apply here. The data are too sparse, and the natural variations (maybe better: boundary conditions) in the material being sampled are both unknown and perhaps unknowable. (Notwithstanding the “ocean is homogeneous” concept one of my favorite idiots – LazyTeenager, January 1, 2012 at 2:56 am – puts forward while disparaging the comments of someone who knows something about the subject.)
As to Mosher’s (January 1, 2012 at 11:30 am) enormous and tendentious screed, I guess one can summarize: Examine every horse’s mouth, both pro and con on the issue of “Climate Science/Change”. The “good guys” can be as wrong as the “bad guys”. I have no problem with that, except that in my view one party is likely to be mistaken, the other is likely to be lying. And that is a saddening thought.
Happy New Year to all.

January 1, 2012 4:00 pm

Philip – you said “That was warming after the 1815 Tambora eruption cooling event (year without a summer)” But actually 1814 went 6 degrees below the long term trend and cooling had started before the eruption. Also, the moving average for the subsequent warming went about 0.6 degrees above the trend from about 1823 to 1833. Aerosols don’t cause cooling for the same reasons I have outlined in other posts regarding carbon dioxide not causing warming.
There were several volcanic eruptions starting in 1812 and culminating in Tambora, the biggest eruption in 1500 years.
Aerosols cool the climate by completely different mechanisms to GHG warming so I’d be surprised to say the least if they don’t affect the climate ‘for the same reasons’.

HAS
January 1, 2012 4:08 pm

It is interesting to see the way in which the basic idea of using sample measurements to test assumptions about the nature of the underlying data and models gets lost in the blink of an eye.
von Schuckmann (2009) assumes an Argo climatology – the fit or predictive skill of the interpolations is not tested, even at the grid scale; next von Schuckmann (2010) uses this to fill in data and eliminate anomalous points, and deduces “a trend” in some of the indices. Why these should be in a linear relationship with time is unexplained – and therefore no statistical testing is possible, first of the fit against the assumptions (everything stationary and normal, one assumes?) and then against the model of the real world being evaluated. Then Hansen (2011) pops along to quote these linear relationships and conclude that this “provides fundamental verification of the dominant role of the human-made greenhouse effect in driving global climate change”.
The point about Kriging isn’t so much that it might help with modeling this particular kind of physical system, more the point that it demands formality in addressing these underlying assumptions.

HAS
January 1, 2012 4:18 pm

Oh, and BTW, Mosher (January 1, 2012 at 11:30 am) and others miss the point that your existing data set gives you the ability to test the assumptions about the underlying processes, and helps tell you just how warranted they are.
It is the failure to do this that is the basic criticism of this body of work, particularly as it heads toward Hansen’s conclusion.

Keith
January 1, 2012 4:36 pm

Willis:
Several years ago there was a notice to all users of the ARGO float data informing them of a 10–30% rate of Druck pressure sensor failures. Over time the affected sensors developed a progressive negative offset in measured pressure. Sea-Bird stopped shipping floats until they could identify and fix the problem. How that affected data accuracy leaves room for further investigation.

Kevin Kilty
January 1, 2012 5:03 pm

Steven Mosher says:
January 1, 2012 at 11:30 am
Willis,
“If there are warm spots or cold parts of the water in the tub, you’ll need a lot of thermometers to get an average that is accurate to say a tenth of a degree.”

So, Argo gives you nothing more than the best estimate of the unobserved water temp/heat content. That’s an odd way to put it, so instead people call it “the average”. But it’s not. The underlying assumption is that the heat content varies smoothly between measurement points. That assumption can be tested, but only by making the measurements with the same equipment in the same manner. It can also be tested by decimating the field, although not as rigorously.

Mosher, I really believe that you are going to a lot of trouble, unnecessarily, to avoid using the term average. In fact your “unobserved water temperature” is really just ocean heat content. But ocean heat content is a bulk measure that is very much like another bulk measure known as the average–at least in terms of measurement issues.

The precision of your measurements is just a function of the number of measurements.

Precision usually means how closely successive measurements repeat, but it might also mean how closely statistics from independent samples repeat, as a way of verifying closeness to an unknown value (accuracy). If by precision you mean how closely two randomly located sets (samples) of ARGO temperature measurements would reproduce a statistic like ocean heat content, then what you say is not generally so. The errors in measurement have to compensate for one another statistically if what you say is to be true. If errors in measurements are highly correlated with one another, then you might gain nothing with more measurements. Geophysical data often contain correlated errors. In the case of the buoys, perhaps currents tend to round them up into particular locations with correlated biases. As a related example: Walter Munk sampled TOPEX/Poseidon data at the locations of global tidal gauges and showed that these tidal gauge stations provided a biased set of data that tended to over-estimate sea level rise by a factor of two; one could not get a better estimate of sea level rise by placing more tidal gauges in a similar manner. It is hard to believe that global data sets can contain large systematic biases, but that is what happened.
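Kevin Kilty’s point about correlated errors can be made concrete with a few lines of Python. Under the simplifying assumption of a common pairwise error correlation rho, the variance of the mean is sigma^2/N times (1 + (N − 1)·rho), so adding measurements stops helping once the correlated term dominates. The numbers below are hypothetical.

```python
import numpy as np

# Standard error of the mean of N measurements with per-measurement
# standard deviation sigma and a common pairwise error correlation rho:
#   Var(mean) = sigma^2 / N * (1 + (N - 1) * rho)
# With rho = 0 the familiar sigma/sqrt(N) applies; with rho > 0 the error
# floors out near sigma*sqrt(rho) no matter how many measurements you add.
def sem_correlated(sigma, n, rho):
    return sigma * np.sqrt((1.0 + (n - 1.0) * rho) / n)

sigma = 0.1   # hypothetical per-profile error, degC
for n in (100, 1_000, 10_000, 100_000):
    print(f"N = {n:>7}  independent: {sem_correlated(sigma, n, 0.0):.5f}  "
          f"rho = 0.01: {sem_correlated(sigma, n, 0.01):.5f} degC")
# Independent errors shrink without bound; correlated errors stall near
# sigma * sqrt(rho) = 0.01 degC in this example.
```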

Roger Andrews
January 1, 2012 6:01 pm

The disputatious question of whether the Argo buoys are located close enough together to define ocean temperature distribution to within acceptable limits can be resolved by running variograms. If the variogram range is greater than the buoy spacing there’s no problem. If it’s less we need more buoys. It’s as simple as that.
And if we don’t want to bother with variograms we can get an idea of how sensitive the results are to sample spacing simply by re-estimating temperatures with the data from every other buoy removed to see how much difference it makes. Given the +/- 300 km spacing shown in Fig.3 my guess would be, not much.
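Roger Andrews’s decimation check is straightforward to prototype. The sketch below uses synthetic buoy temperatures (not Argo data) simply to show the mechanics: estimate the mean from the full network, then from each half with every other buoy removed, and compare the differences with the claimed error bar.

```python
import numpy as np

# Synthetic stand-in for a buoy network (positions and temperatures
# invented for illustration only).
rng = np.random.default_rng(7)
n_buoys = 2500
temps = 10.0 + rng.normal(scale=0.5, size=n_buoys)   # hypothetical 0-1500 m means

full_mean = temps.mean()
half_a = temps[0::2].mean()    # every other buoy
half_b = temps[1::2].mean()    # the complementary half

print(f"all buoys : {full_mean:.4f} degC")
print(f"half A    : {half_a:.4f} degC   (diff {half_a - full_mean:+.4f})")
print(f"half B    : {half_b:.4f} degC   (diff {half_b - full_mean:+.4f})")
# If halving the network barely moves the estimate, the spacing is probably
# adequate for the quantity being estimated; if it moves it by more than the
# claimed error bar, the claimed precision is suspect.
```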

January 1, 2012 7:19 pm

The first sentence in Chapter 5 of the latest IPCC report says:
“The oceans are warming. Over the period 1961 to 2003, global ocean temperature has risen by 0.10°C from the surface to a depth of 700 m.”
That’s 0.002°C per year. What’s the problem?

January 1, 2012 7:23 pm

Did Argo floats steal Trenberth’s missing energy? Could be. Kevin Trenberth and John Fasullo had an article in the 16 April 2010 issue of Science titled “Tracking Earth’s Energy.” They measured the net incoming and outgoing energy to determine how much energy remained in the Earth system. They showed that these energy flows pretty much balanced out until 2004. But then a mysterious energy deficit began to form. It got worse, and by 2009 eighty percent of the net radiation energy was simply disappearing. They had high confidence in their measurements and stated that “Since 2004 ~3000 Argo floats have provided regular temperature soundings of the upper 2000 m of the ocean, giving new confidence in the ocean heat content assessment – …” So what do you know: new equipment comes on line and energy does a disappearing act! If I had been the reviewer I would have sent them back to check the equipment until the discrepancy was resolved. Since that was left unresolved, they resorted to guessing at random possibilities like: “Is the warming associated with the latest El Nino a manifestation of the missing energy reappearing?”

January 1, 2012 7:27 pm

Oh, I forgot, that change in 2007–2008? Does that correspond to when Dr. Josh Willis and his team corrected erroneous data and modified the data with a computer model?
http://en.wikipedia.org/wiki/Argo_(oceanography)
(Wikipedia being what it is, doesn’t exactly say that anymore, but it did at one time.)

Septic Matthew
January 1, 2012 7:40 pm

Steve Mosher: Wrapping up. Argo doesnt give you an average temperature ( that probably doesnt exist) what it gives you is a very precise ( yes the precision is warranted) estimate of the “unobserved” temperature of the ocean.
Willis, what you have shown can be summarized thus: if the precision of the buoys is less than Hansen et al claim (which you don’t assert), and if the spatial variation in temperature is greater than Hansen et al estimate (another claim that you do not assert), then the precision claimed by Hansen et al is not supportable. You surely could be correct, but you don’t provide evidence that the Hansen et al claim of precision is inaccurate. Kriging could in principle provide an improvement if the correlations of the errors of the “adjacent” buoys were high enough, but each buoy is precise and they are separated by long distances (and they are not generally at the same depths at the same time), so that is unlikely. Hansen’s estimate of precision is only wrong by a significant amount if the unsampled regions have much different temperatures (compared to the estimated variability) from the sampled regions.
In my experience estimates of precision are over-optimistic, and this one probably is as well. But you have not shown empirically that it is seriously in error.

Pat Moffitt
January 1, 2012 7:53 pm

Roger Andrews says:
“If the variogram range is greater than the buoy spacing there’s no problem. If it’s less we need more buoys. It’s as simple as that.”
I’m not so convinced this is solved by a “simple variogram” computation. Let’s use the bathtub analogy again. The ocean water segment being measured by Argo, unlike a tub, is not bounded, and it sees chaotic exchanges with water from above and below the Argo measurement range. So if we use the bathtub analogy, we must place some unknown number of drains and pumped inlets between each sensor. We cannot know when or if any one of these drains will open, causing the pump to run, AND we cannot know the volumes or the temperature differential when and if the exchanges occur. How then do we assume any degree of spatial dependence between the sensors?
Basically, how do you determine the “right” number of sensors for a tub using a variogram when, unbeknown to you, someone continually sneaks into the bathroom, drains out some quantity of water, and turns on the tap at random temperatures to replace it? I’m just not sure how we account for chaotic ocean upwelling/circulation patterns that vary in time, intensity and place in a way that gives us meaningful insight, given the small measured sensor delta T. Sensor location must capture the points of exchange, and this is knowledge we do not have.

Theo Goodwin
January 1, 2012 8:49 pm

Bill Illis says:
January 1, 2012 at 11:30 am
“If we can’t use the Argo network to arrive at a precision that gets us to 0.X W/m2/yr or 0.00X C/yr (which is where the numbers will actually be at), then why did we put 3,000 of them out there?”
The data might be useful for other purposes but Hansen is misusing it.

RockyRoad
January 1, 2012 9:30 pm

People should consider that there are two errors (variances): Sample variances (by itself a great big field), and estimation variances (the process of applying sample values to whole volumes, at least within the boundaries of interest, an even bigger field). It appears sample variances for these ARGO buoys are relatively small, whereas the estimation variance is potentially huge. The only way to know the degree of correlation between sample buoys (if any exists at all) is to run the buoy data through a variogram program. Until that happens, we’re all guessing. And until that is done the only thing we can do is to skip the estimation step entirely and use the samples as point values without any spatial separation or volumetric attribution and just average them (we can all do a “Hansen”), then argue about what it all means, if anything.
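For readers unfamiliar with the distinction RockyRoad is drawing, here is a minimal ordinary-kriging sketch in Python. The buoy positions, temperatures and variogram parameters are all invented for illustration; a real application would first fit the variogram to the Argo data itself. The point is that kriging returns both an estimate at an unsampled location and an estimation (kriging) variance for it, which is the quantity missing from the simple-average approach.

```python
import numpy as np

def spherical(h, nugget, sill, rng_):
    """Spherical semivariogram model (a common choice; parameters are illustrative)."""
    h = np.asarray(h, dtype=float)
    g = np.where(
        h >= rng_,
        nugget + sill,
        nugget + sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3),
    )
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(xy, z, target, gamma):
    """Estimate the value at `target` and its kriging (estimation) variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d)          # semivariogram between sample points
    A[:n, n] = 1.0                # unbiasedness constraint (weights sum to 1)
    A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - target, axis=1))
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ z
    krig_var = w @ b[:n] + mu     # estimation variance at the target
    return estimate, krig_var

# Hypothetical buoy positions (km) and 0-1500 m mean temperatures (degC).
xy = np.array([[0, 0], [300, 0], [0, 300], [300, 300], [150, 450]], float)
z = np.array([10.02, 10.05, 9.98, 10.04, 10.00])
gamma = lambda h: spherical(h, nugget=0.0002, sill=0.002, rng_=400.0)

est, var = ordinary_kriging(xy, z, np.array([150.0, 150.0]), gamma)
print(f"kriged estimate: {est:.4f} degC, estimation std: {np.sqrt(var):.4f} degC")
```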

RockyRoad
January 1, 2012 9:37 pm

HAS says:
January 1, 2012 at 4:08 pm


The point about Kriging isn’t so much that it might help with modeling this particular kind of physical system, more the point that it demands formality in addressing these underlying assumptions.

That’s true, for the reasons I’ve stated above. Doing a proper mathematical analysis of this data might convincingly show it can’t tell us much at this point. That would have to be my (safe) position until math demonstrates the data is sufficient for the task at hand.

HAS
January 1, 2012 9:48 pm

I’m not sure that some of the commentators have quite thought through the fun to be had running the temp readings through Kriging in 4-D (space-time). Hansen et al aren’t about an estimate of average temperatures from averages across the globe – they’re talking about changes in those temperatures over time.
I do have a nasty feeling however that there would need to be quite a bit of data manipulation in the time dimension to get the requisite assumptions for the technique to hold together.

Brian H
January 1, 2012 11:07 pm
DirkH
January 1, 2012 11:39 pm

Septic Matthew says:
January 1, 2012 at 7:40 pm
“Willis, what you have shown can be summarized thus: if the precision of the buoys is less than Hansen et al claim (which you don’t assert), and if the spatial variation in temperature is greater than Hansen et al estimate (another claim that you do not assert), then the precision claimed by Hansen et al is not supportable.”
Science is not a crime scene; there is no “in dubio pro reo”; he who claims extraordinary precision carries the burden of proof. You’ve got it upside down. (Well, OK, CAGW science COULD be a crime scene, given all the conflicts of interest, but that’s a different theme.)

AndyG55
January 2, 2012 12:28 am

I have another question or 2…
Who funds the ARGO project?
What happens to ARGO funding if they find no or minimal SST rise over the next, say, 5–10 years?

Aus_skeptic_atm
January 2, 2012 1:43 am

First post, currently a climate skeptic (from Australia) and only qualified as a technologist.
I’ve read every single post here, but what about getting back to the original post’s question: “how do you explain the huge energy injection into the oceans in 2008 (only)?”

wermet
January 2, 2012 4:14 am

LazyTeenager says: January 1, 2012 at 2:56 am

… The rocky crust is not homogeneous and is highly discontinuous, the oceans are very very continuous and very nearly homogeneous.

LazyT – you have obviously never had any dealings with any group that uses sonar. The ocean is neither continuous nor homogeneous. In fact, the ocean is highly stratified in both temperature and salinity. There are many shifting thermal layers and inversions. These phenomena routinely confound the analysis of sonar data, sometimes to the point of making it nearly impossible. They reflect and distort the transmission of acoustic energy through the oceans.
I suspect that the world is far more complex and untidy than you (or climate scientists) have ever imagined. Please remember that while it may be easy to understand the *basic* principles of most fields of science, the devil is always in the details.

Speed
January 2, 2012 4:26 am

Steven Mosher said, “Imagine you have a very large pool of water and I ask you what the temperature of the water is.”
Imagine that the very large pool is Lake Superior and that the only instrument you have is a single Argo float that you can place in just one location and that you must estimate the energy content of the lake every 10 days over five years. Estimate the precision and accuracy of your measurement.

aeroguy48
January 2, 2012 5:07 am

How much does one of those suckers cost? Who pays for them? Oh never mind.
signed;
A bent over taxpayer

January 2, 2012 5:48 am

AndyG55 says: January 2, 2012 at 12:28 am
Who funds the ARGO project?
There’s this huge money pot:
U.S. Global Change Research Program
http://www.ucar.edu/oga/pdf/FY12_USGCRP.pdf
http://www.climatescience.gov/infosheets/ccsp-8/#funding
Total $39.504 Billion since 1989
What happens to ARGO funding if they find no or minimal SST rise over the next, say, 5–10 years?
Dr. Josh Willis and his team ride to the rescue to correct and adjust the erroneous data?

RockyRoad
January 2, 2012 6:52 am

HAS says:
January 1, 2012 at 9:48 pm

I’m not sure that some of the commentators have quite thought through the fun to be had running the temp readings through Kriging in 4-D (space-time). Hansen et al aren’t about an estimate of average temperatures from averages across the globe – they’re talking about changes in those temperatures over time.
I do have a nasty feeling however that there would need to be quite a bit of data manipulation in the time dimension to get the requisite assumptions for the technique to hold together.

Indeed, HAS! This just might put the horsepower of the Cray XK6 (referenced here recently on WUWT at http://wattsupwiththat.com/2011/12/23/friday-funny-new-noaa-supercomputer-gaea-revealed/ ) to good use. The 4th dimension of the modeling would certainly show parts of the ocean warming, other parts cooling, and many parts staying essentially the same, and the graphic output for each time increment could make an amazing history for all to view. However, I’m still of the (admittedly unsubstantiated) opinion that the current number of buoys is insufficient for such a project.
At the same time, there’s no reason atmospheric temperature data couldn’t also be run through geostatistical methods, except that the current data set probably has an even higher level of variation than the ARGO buoys and would need an even greater number of temperature stations to produce a truly statistically substantive model. The resulting estimation variance would probably be so high as to be an embarrassment to all climate scientists, but reducing it to acceptable levels might finally get us to the point of something believable – if only the US weren’t so far in debt that adding the proper number of stations would be a completely irresponsible request for money the government simply doesn’t have.

Pat Moffitt
January 2, 2012 9:04 am

AndyG55 says:
“What happens to ARGO funding if they find none or minimal SST rise over the next say 5-10 years ?”
I would hope funding for Argo is not threatened. While I have some issues with the very early Argo heat interpretation, the floats are providing multi-parameter raw data sorely needed for understanding the world’s oceans. Big science today is too much modeling and too little raw data collection. I continue to applaud projects like Argo that focus on sampling and analysis.

January 2, 2012 10:48 am

First, the length of the dataset. The SLT2011 data used by Hansen is only 72 months long. This limits the conclusions we can draw from the data. H2011 gets around that by only showing a six-year moving average of not only this data, but all the data he used. I really don’t like it when raw data is not shown, only smoothed data as Hansen has done.
If this is a NASA publication, I believe that current policy requires that all of the raw data and methods be posted on the web by the time of publication, so you should be able to get it. An extremely amusing way to analyze it would be to hire professionals to do it, e.g. employ SAS as contractors or (as you say) find a kriging firm and krige away.
Once again, though, the fluctuation-dissipation theorem seems apropos. As you note, the largest short-time-scale fluctuation appears to be on the order of both the (admitted) error and the overall variation in the smoothed data. This is very suspicious. I suspect that an honest computation of R^2 would yield a very small value, that is, there is no discernible linear trend in the data. (Where by “discernible” I mean “statistically valid”.) As you also note, there is something rather absurd about that — a few months where the ocean heats far, far more than can be reasonably explained by any known physical mechanism. It would be very interesting to compare these months to known solar state, to see if what is going on is something like inductive heating of the ocean itself at depth due to variation of the coupled geosolar magnetic field. It really isn’t terribly plausible that this much heating would occur, this fast. This isn’t “greenhouse warming” — CO_2 concentrations don’t fluctuate anywhere nearly enough to explain this. It can’t easily be solar warming — not down to the depths involved, not over the entire globe.
Is it (mostly) spatially localized? Is it stratified (and confined to the very smallest depths they sampled)? Were there enormous storms and turbulence, major changes in salinity, a “turning over” of the water column (everywhere?)?
This is really puzzling. You get a big set of interesting data. You cook it carefully so that it shows a warming, albeit one that is so tiny that nobody could possibly care. It contains a number of very interesting things — real surprises, if you think about them, things that are either clues concerning some actual new relevant physics or anomalies that show that the actual errors are (say) 3-10 times larger than what you are admitting and this is basically a straight line with no visible warming at all. And then you make no effort at all to understand it?
The raw data itself is bound to be better, and far more informative.
rgb

January 2, 2012 5:24 pm

Willis writes “One problem here, as with much of climate science, is that the only uncertainty that is considered is the strict mathematical uncertainty associated with the numbers themselves, dissociated from the real world.”
This nails it. And it totally applies to proxy reconstructions.

January 2, 2012 6:04 pm

Some Random Engineer writes “Willis the accuracy ought to be fine. The sensors are typically sampled N times per actual reported sample and ought to have an internal crossref to compensate for sensor drift. …etc”
And this is a perfect example of someone who, like Tamino, is focused on the numbers and not on reality. Whether one square metre of water in the ocean can be measured to the nearest 1,000th of a degree or not has no bearing on how well we can know the temperature of the whole ocean.

Alan Wilkinson
January 2, 2012 7:26 pm

The flaw in Steve Mosher’s criticism is that Willis is not critiquing the result; he is critiquing the confidence in that result, and particularly whether the accuracy, stability and confidence are sufficient to detect a trend. It is irrelevant that this may (or may not) be the best available. The test is whether it is good enough to prove the result that Hansen claims.

EFS_Junior
January 2, 2012 8:22 pm

An entirely overly verbose post devoted to nothing more than an argument from incredulity.
That dog don’t hunt.

Gary Swift
January 3, 2012 7:04 am

I have some experience with measuring temperature in the real world. I work at a major fresh bread factory. We make up to 1.5 million units per week. Bread is all about time, temperature and humidity. If we had the same sample rate as the Argo network, we would have only ten sample points per week. We run five days a week, so that’s two samples per day. If we tried to run our mixer, proof box and oven with only two temp measurements per day, bread would be in very short supply.

Bruce Stewart
January 3, 2012 8:21 am

Thank you for highlighting this great open data set.
Another candidate for what happened in 2007 might be that the floats began to reach a relatively uniform global distribution.

Kevin Kilty
January 3, 2012 1:17 pm

Willis Eschenbach says:
January 3, 2012 at 2:46 am
Here’s my interesting thought before bed .. 2:34 AM.
The “standard error of the mean” gives the error for something like their monthly Argo average of the top 1.5 km of the global ocean. They say the error of that monthly mean (average) is ± 0.008°C. The formula for the standard error of the mean is that it is the standard deviation divided by the square root of “N”, the number of datapoints….

And even this works only if the data points are independent, identically distributed observations. But the only way to determine this is through calculating cross-correlation. The authors admit in their paper “…This [method] takes into account the reduced number of degrees of freedom to estimate error on the mean value for a given box (through the covariance matrix). Note that this effect is not negligible….”
Your Figure 2 above, Willis, shows that one sigma of OHC is about ±0.01°C, so 1.96 sigma, the 95% confidence interval, is about ±0.02°C. At this level of significance a person cannot exclude 0.00 as the trend.
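Kevin Kilty’s caveat about independent, identically distributed points can be illustrated in a few lines of Python. The monthly series below is synthetic (it is not the SLT2011 data); the point is only to show how lag-1 autocorrelation shrinks the effective number of independent samples and widens the standard error of the mean.

```python
import numpy as np

def effective_n(series):
    """Effective number of independent samples for an AR(1)-like series,
    using the lag-1 autocorrelation: N_eff = N * (1 - r1) / (1 + r1)."""
    x = np.asarray(series, float) - np.mean(series)
    r1 = np.sum(x[:-1] * x[1:]) / np.sum(x * x)
    n = len(series)
    return n * (1.0 - r1) / (1.0 + r1), r1

# Synthetic 72-month anomaly series (not the SLT2011 series) with mild persistence.
rng = np.random.default_rng(0)
anoms = np.zeros(72)
for t in range(1, 72):
    anoms[t] = 0.6 * anoms[t - 1] + rng.normal(scale=0.01)

n_eff, r1 = effective_n(anoms)
naive_sem = anoms.std(ddof=1) / np.sqrt(len(anoms))
adj_sem = anoms.std(ddof=1) / np.sqrt(n_eff)
print(f"lag-1 autocorrelation r1 = {r1:.2f}")
print(f"naive SEM    = {naive_sem:.5f} degC  (assumes 72 independent months)")
print(f"adjusted SEM = {adj_sem:.5f} degC  (N_eff ~ {n_eff:.0f})")
```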

PaulL
January 4, 2012 2:36 am

Some interesting comments in here. Responding to a couple (just from having bothered to read the whole thread):
– some people are questioning the accuracy of the floats themselves, largely on the argument that anything in the ocean is likely to be unreliable. Be that as it may, it sounds like they’ve retrieved some floats over time and tested them, and the calibration remained sound. Without evidence to the contrary, that sounds like a reasonable cross-check.
– some people seem to be suggesting that there isn’t enough information. Reasonably enough, some other people are saying we never have perfect information and have to go with what we have. Which is also fair. I think the important question is whether this information shows anything alarming enough to justify taking action. I’d say that evidence of 0.01 degrees of warming over 5 years, or 0.1 degrees over 50 years, isn’t something that needs urgent action. And if, further, there is a question about whether the measurement is even accurate enough to tell us that it’s 0.01 vs. −0.01 degrees, then there is even less reason to take action. So the real question here is whether the results are alarming enough, or accurate enough, to justify any action. I’d say no. To put it another way, we have three possible options: sound the alarm and spend money; decide there’s definitely not a problem and do nothing; or agree that this data is neither complete nor alarming enough for us to act on despite being incomplete, and therefore wait for more complete data. The last seems the right answer here.

Steve Keohane
January 4, 2012 7:37 am

Willis Eschenbach says:
January 3, 2012 at 12:58 am
Steve Keohane says:
January 1, 2012 at 7:53 am
Willis, got stuck on fig.2. +/- 1 sigma is not 95% confidence level, +/- 2 sigma is. If the graph is showing +/- 1 sigma, then +/- 3 sigma is +/- .01°, by putting a ruler on my screen, which pretty much covers the whole dataset, rendering any trend or differences in the measurements meaningless.
Unfortunately, they didn’t say whether the error was one or two sigma.

Fig. 2 has a labeling of 1σ Gaussian boundary error… perhaps I misunderstand what that means. From implementing Deming’s six-sigma process control methods in ICs c. 1980, I thought the Gaussian distribution to be +/- 3σ, with 95% of a ‘normal’ distribution at +/- 2σ. The third sigma gets you another 4.7% of the distribution. So a ‘normal’ Gaussian distribution can be expected to contain 99.7% of the dataset.
The graph itself shows a 1σ boundary error. The text beneath the graph claims this is a 95% CI. Both cannot be true.

RockyRoad
January 4, 2012 11:32 am

Willis Eschenbach says:
January 3, 2012 at 12:47 am

Rocky, many thanks for your ideas. Since each Argo buoy provides one temperature profile every ten days, seems to me you could use a ten day sliding window to analyze the data, and treat that as one instant in time. Treat that whole chunk of data (~ 3000 “drill-holes” scattered randomly around the ocean) as being like 3,000 drill-holes in some ore-body.

Hi, Willis…
Like you, I’m also extremely skeptical of their data-handling methods. Why on earth these “climate science” folks have such a serious phobia about missing data is beyond me, yet at the same time they have no problem introducing bias into their data using “in-fill” procedures that are simply not warranted and likely unproven. The only reason ever to in-fill missing data is if a completely independent source of data exists from which the temperature could reliably be derived, and I know of no such source. (I once worked up a data in-fill procedure on numerous 300-meter angled core holes through rock whose silicified mineral zones caused extensive deviation of the drill, and hence extensive problems with the variography, because the precious-metal values weren’t in a straight line as I had assumed. The geologist sitting the rig had logged the mineralization, and I was able to use it to establish a very close correlation between mineralized zones and hole deviation, based on down-hole surveys where angles of deviation were taken every 5 meters on half the holes; applying the correction factor to the other half resulted in much better variograms for the entire dataset. As a check, the correction was also applied to holes of known deviation to measure the efficacy of the adjustment algorithm, which was considered acceptable.) It sounds like the ARGO data is being adjusted and filled in based on other temperature data, which is a form of data incest and should never be used. If there are problems with missing data, they should address those problems so future data isn’t missed; otherwise, leave it missing! Fill-in data will definitely introduce bias and has no justification whatsoever.
And you’re right—the ARGO profiles can be handled just like drill-hole data from most ore deposits (with the exception of placer gold, but there the problem is one of sample support and meaningful gold analysis, not the geostatistical methodologies employed). One problem I see is that these ARGO temperature profiles are not vertical. I’ve no idea what the current speed is at each of these buoys, but surface currents of oceans around the world generally range from 1.5 to 2.5 m/s, compared to an ascent rate of 0.1 m/s for the buoy, so the profile is leaning significantly (and in different directions for different buoys depending on their location—some are diverging and some are converging, and at the same time there’s probably eddying and mixing going on at some buoys). At an average current speed of 2 m/s, a six-hour buoy ascent will see 43 km of horizontal movement from the current alone. Complicating this is the high probability that the current is not moving at a constant speed or in a steady direction as the buoy ascends. And based on differing salinity measurements and the interesting temperature curve presented in your Figure 5, it is likely that the buoys traverse several zones of the ocean that shouldn’t be separated when applying the variography. (I’d be surprised if there are actually 152 zones like they’ve defined, but working on the actual data would either support or negate their interpretation.)
There are several things to look for when generating variograms that determine how reliable the data is:
First, the larger the nugget effect is with respect to the sill, the more the data itself is suspect (larger sample variance).
Second, if points defining the line running from the nugget effect to the sill portion of the variogram show a lot of scatter, the problem is likely that the samples are poorly positioned and/or represent several zones mixed together.
Third: Down-profile temperature datasets should lend themselves to variography, but all temperature intervals should be composited to the 20-m length (unless looking at each subset separately). This means averaging the 5 m samples in the top 100 meters to be 20 m in length, averaging the 10 m samples in the next 700 meters to be 20 m in length, and leaving the rest of the 20-m samples as they are, then combining the set. Try this on an individual profile, since the zone configuration of even adjacent profiles is undoubtedly different (indicating horizontal as well as vertical zoning). If compositing is not applied and all data points are used together, the sample support is different for each length, and this violates basic tenets of geostatistics. However, try it and compare the differences. (A minimal compositing sketch appears after this comment.)
Fourth: Nested structures in the variogram (manifested by inflections in the slope from the nugget to the sill) generally indicate superimposed zones although this is generally seen only when there is an abundance of data compared to the range.
Fifth: The down-profile variogram should, when properly composited, indicate the vertical range of influence. Start with one hole, add adjacent holes, and increase the inclusion and see what happens to the down-hole variograms. If the variography falls apart, the likely culprit is different zones encountered in the various profiles. Do the same but in a horizontal orientation starting with half a dozen adjacent holes (the vertical and horizontal windows should be about 15 degrees). If there is any variography at all (you probably won’t see a nugget effect), add adjacent profiles until something shows up. One advantage of these independently floating buoys is that some should be closer together than others, so initially target groups with closer horizontal spacing to overcome the problems associated with a sparse dataset.
My guess is that the down-profile variography will show a variogram structure but I’m betting the horizontal spacing isn’t close enough to get a horizontal signature. However, I would be happy to be wrong but start with a single composited profile and even though it is displaced a significant distance while recording, at least it is moving with the current an not the current so it may approximate a vertical traverse.
I’d do the basic variography on a single temperature profile first before going further since that’s the basis of the kriging. Then if things hold together, you can start adding data, adjusting timeframes, and come up with all sorts of interesting results. Overall, this sounds like a fascinating project, but also a lot of work. Good luck, Willis!
PS> I remember doing some geostatistical work using polar coordinates, but that’s something to consider later on, since that might eliminate the N/S edge effect in your kriged model.
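As a companion to RockyRoad’s third point above, here is a minimal Python sketch of compositing an irregularly sampled profile onto uniform 20 m intervals before variography. The sampling scheme and temperatures are hypothetical, chosen only to mimic the 5 m / 10 m / 20 m spacing he describes.

```python
import numpy as np

def composite_to_20m(depths, temps, top=0.0, bottom=1500.0, length=20.0):
    """Average an irregularly sampled profile onto uniform 20 m composites,
    so every value has the same support before variography (assumption:
    each reading represents the interval around its nominal depth)."""
    edges = np.arange(top, bottom + length, length)
    composites = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depths >= lo) & (depths < hi)
        composites.append(temps[mask].mean() if mask.any() else np.nan)
    return edges[:-1] + length / 2.0, np.array(composites)

# Hypothetical profile mimicking the sampling scheme described above:
# 5 m spacing in the top 100 m, 10 m spacing to 800 m, 20 m spacing below.
depths = np.concatenate([np.arange(0, 100, 5),
                         np.arange(100, 800, 10),
                         np.arange(800, 1500, 20)]).astype(float)
temps = 20.0 * np.exp(-depths / 300.0) + 2.0      # made-up thermocline shape

mid_depths, comp = composite_to_20m(depths, temps)
print(f"{len(depths)} raw readings -> {np.isfinite(comp).sum()} composites of 20 m")
print("first few composites (degC):", np.round(comp[:5], 3))
```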

RockyRoad
January 4, 2012 11:34 am

Correction: “…and even though it is displaced a significant distance while recording, at least it is moving with the current and not against the current, so it should approximate a vertical traverse.”

RockyRoad
January 4, 2012 12:37 pm

Willis, I have mistakenly used “hole” or “drill hole” where I should have used “profile” occasionally in the above description–Sorry about that; can’t teach this old geology dog new tricks, apparently!

RockyRoad
January 4, 2012 6:14 pm

Another correction: ” And based on differing salinity measurements and the interesting temperature curve presented in your Figure 5, it is likely that the buoys traverse several zones of the ocean that should be separated when applying the variography.”

RockyRoad
January 4, 2012 9:19 pm

Willis Eschenbach says:
January 4, 2012 at 7:40 pm

That implies that with a hundredth of the data points, we could have accuracy one decimal point less than what they claim. That means they are claiming that 25 Argo floats, each measuring 3 profiles per month, should be able to measure the temperature of the top 1.5 km (nearly a mile) of the entire immensity of the global ocean to a precision of 0.08°C. I don’t buy that in the slightest.

Thanks, Willis, and I agree – as you have shown, their claimed precision is way overstated. But I do believe climate science is ready to be examined by the suite of sophisticated geostatistical tools now available, which will, at the very least, demonstrate that many of their claims are unsubstantiated. (Their reliance on “egostatistical” methodologies will someday succumb to the more robust “geostatistical” approaches and leave them with nothing but excuses.)
I wish I was independently wealthy–I’d take on the challenge just to see where it all goes. Let us know how it all works out in a future thread, naturally.
RockyRoad

OzJuggler
January 5, 2012 12:10 am

Willis,
kriging was mentioned in the TAR Chapter 10, according to the ClimateGate emails, but perhaps it has not been touched since then? Here was their early reasoning, according to 1339.txt:

Other non-linear techniques are kriging and analogs, whose performance were compared by Biau et al., (1999) and von Storch (1999). Kriging resulted in better specifications of averaged quantities but too low variance, whereas analogs returned the right variance but lower correlations. Also analogs can be usefully constructed only on the basis of a large data set.

But what is the context of that comparison and what does it mean? You’re gonna love this:

The most systematic and comprehensive study so far is that one by Wilby et al. (1998) and Wilby and Wigley (1997). They compared empirical transfer functions, weather generators, and circulation classification schemes over the same geographical region using climate change simulations and observational data.

Yes, that’s right, kriging the observations didn’t match kriging the models… so it seems they threw out kriging… and presumably kept the models.
Your tax dollar hard at work.

Brian H
January 5, 2012 3:39 am

Willis Eschenbach commented on Krige the Argo Probe Data, Mr. Spock!.

Many thanks, Rocky, for the interesting information. Infilling data always makes me very, very nervous.

Yes, it’s logically impossible to get more information by infilling. You can only obscure or “overwrite” what’s there if you change it at all. The convenience factor — making the data table format match the needs of a program or algorithm — invites and almost enforces over-interpretation and over-extrapolation. Bias is inevitably introduced; consider how choice is made between alternative “infilling” methods. There is no pre-existing standard for such, so short of using a random number generator to do the infilling, it’s going to be done with prettified numbers.
No footnote or asterisk saying, “Caution: infilling may have left soft patches. Watch your step!” is adequate to offset the damage done.

Steve Keohane
January 6, 2012 6:35 am

Willis Eschenbach says:
January 4, 2012 at 10:55 pm
Steve Keohane says:
January 4, 2012 at 7:37 am

Thanks Willis. Sorry for being OT; getting hung up on that graph made me gloss over the precision/infilling problems, the real thrust of your fine article.