Guest Post by Willis Eschenbach
Anthony has an interesting post up discussing the latest findings regarding the heat content of the upper ocean. Here’s one of the figures from that post.
Figure 1. Upper ocean heat content anomaly (OHCA), 0-700 metres, in zettajoules (10^21 joules). Errors are not specified but are presumably one sigma. SOURCE
He notes that there has been no significant change in the OHCA in the last decade. It’s a significant piece of information. I still have a problem with the graph, however, which is that the units are meaningless to me. What does a change of 10 zettajoules mean? So following my usual practice, I converted the graph to more familiar units, degrees C. Let me explain how I went about that.
To start with, I digitized the data from the graph. Often this is far, far quicker than tracking down the initial dataset, particularly if the graph contains the errors. I work on the Mac, so I use a program called GraphClick; I’m sure the same or better is available on the PC. I measured three series: the data, the plus error, and the minus error. I then put this data into an Excel spreadsheet, available here.
Then all that remained was to convert the change in zettajoules to the corresponding change in degrees C. The first number I need is the volume of the top 700 metres of the ocean. I have a spreadsheet for this. Interpolated, it says 237,029,703 cubic kilometres. I multiply that by 62/60 to adjust for the density of salt vs. fresh water, and multiply by 10^9 to convert to tonnes. I multiply that by 4.186 megajoules per tonne per degree C. That tells me that it takes about a thousand zettajoules to raise the upper ocean temperature by 1°C.
Dividing all of the numbers in their chart by that conversion factor gives us their chart, in units of degrees C. Calculations are shown on the spreadsheet.
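That arithmetic is easy to check in a few lines of code. The sketch below simply re-runs the stated calculation, using the figures from the post (the interpolated volume, the 62/60 density ratio, and 4.186 MJ per tonne per °C):

```python
# Re-running the conversion described above; all figures are from the post.
VOLUME_KM3 = 237_029_703        # top 700 m of the ocean, cubic kilometres
DENSITY_RATIO = 62 / 60         # salt water vs. fresh water density adjustment
SPECIFIC_HEAT = 4.186e6         # joules per tonne per degree C

mass_tonnes = VOLUME_KM3 * 1e9 * DENSITY_RATIO   # 1 km^3 of fresh water = 1e9 tonnes
joules_per_degC = mass_tonnes * SPECIFIC_HEAT    # energy to warm the layer by 1 C
zettajoules_per_degC = joules_per_degC / 1e21    # 1 zettajoule = 1e21 joules

print(round(zettajoules_per_degC))  # about a thousand, as stated
```

This comes out near 1,025 ZJ per degree, confirming the “about a thousand zettajoules per 1°C” figure, so Figure 2 is essentially Figure 1 divided by that constant.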
Figure 2. Upper ocean heat content anomaly, 0-700 metres, in degrees C.
I don’t plan to say a whole lot about that (I’ll leave it to the commenters), other than to point out the following facts:
• The temperature was roughly flat from 1993-1998. Then it increased by about one tenth of a degree in the next five years to 2003, and has been about flat since then.
• The claim is made that the average temperature of the entire upper ocean of the planet is currently known to an error (presumably one sigma) of about a hundredth of a degree C.
• I know of no obvious reason for the 0.1°C temperature rise 1998-2003, nor for the basically flat temperatures before and after.
• The huge increase in observations post-2002 from the addition of the Argo floats didn’t reduce the error by a whole lot.
My main question in this revolves around the claimed error. I find the claim that we know the average temperature of the upper ocean with an error of only one hundredth of a degree to be very unlikely … the ocean is huge beyond belief. This claimed ocean error is on the order of the size of the claimed error in the land temperature records, which have many more stations, taking daily records, over a much smaller area, at only one level. Doubtful.
I also find it odd that the very large increase in the number of annual observations due to the more than 3,000 Argo floats didn’t decrease the error much …
As is common in climate science … more questions than answers. Why did it go up? Why is it now flat? Which way will the frog jump next?
Regards to everyone,
w.
Willis: Please note the pre-Argo values are probably MEANINGLESS.
This whole exercise is a “faith based” activity that makes the snake-handling/Bible-thumping Pentecostals look RATIONAL!
I think a superficial examination of the “source data” and the derivation of the 1950s-to-2000 part of the curve will illustrate its unreliability.
THEREFORE THE ONLY INFORMATION WE HAVE FROM THIS STUDY IS THE CURRENT INFORMATION WHICH PRETTY MUCH CORRELATES WITH THE FLAT TROPOSPHERIC TEMPERATURES OF THE LAST 17 YEARS.
Equilibrium system, period. STABLE. Negative feedback. Willis’ thunderstorm thermostats.
gymnosperm says:
February 26, 2013 at 7:52 am
Is anyone else having trouble getting on this site? I have been continuously blocked by ad popups along the top that prevent anything else from loading. This happens both when I approach from my browser and my wordpress reader. I can only get through with multiple refreshes.
I am using Firefox (version 18.0.2 now). I don’t remember ever having any problems or ads popping up when accessing this site.
@Mark Benson 8:28 am
In fact, no one is making any claims about the temperature of the ocean.
The calculation is about the *change* in average temperature of the ocean: dT = dQ/(mc), and the uncertainty attaches to dT, not to T.
That claim might wash with Stevenson Screens that are bolted to the ground, assuming that nothing is built or destroyed around them and nothing moves but the air. All of those caveats are usually false.
But ARGO floats move with currents: 3D velocity fields in a complicated geometry where the instantaneous divergence is zero, but with a non-zero curl. From measurement to measurement, the floats are recording a different piece of the ocean, representing a different proportion of the whole. The only way to get a delta-average-T is to know the average-T at different times.
Your “to dT, not to T” argument holds no water, much less an ocean.
“If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties.” —Francis Bacon (1605) The Advancement of Learning, Book 1, v, 8
Juraj V. says:
February 26, 2013 at 3:47 am
‘Google “NASA correcting ocean cooling”. That JPL guy openly admits that he threw away all “too cool” buoy ARGO data, since consensus says it must be warming. I have barely any faith even in ARGO since then, whatever fine system it has been designed to be.’
That too-cool data exists suggests that all the buoys could be miscalibrated (why would only the cool ones be in error?), and that suggests there is instrument bias that must be incorporated into measurement errors. If there hasn’t been an effort to determine buoy biases and how they should be handled in measurement estimates, the buoy program is failing its scientific purposes.
Stephen Rasey says: “From measurement to measurement, the floats are recording a different piece of the ocean, representing a different proportion of the whole.”
Levitus et al 2012 discuss such concerns explicitly in their Supplementary Material. You should read it. It’s really just applying the statistics of sampling to the ARGO data.
Uzurbrain says: “Where are they getting these accurate thermometers?”
Their temperature measurements are accurate to 1.5 C or less, but nowhere near your 10^-4 C to 10^-5 C. Levitus et al 2012 show this explicitly in their paper, and discuss it. Really, you should at least read and understand a paper before dismissing it.
How well calibrated are the Argo floats before deployment? Have any been retrieved and calibrated? How were they calibrated – against one temp or the whole temperature range? And under what pressures?
With 510 million square km of ocean and assuming a random distribution of floats, for a confidence interval of 0.01 °C – within the sensitivity of your graph at 0.01 °C – you would need A LOT more floats. Tens of thousands.
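As a rough cross-check of that “tens of thousands” intuition, here is the textbook sample-size formula for a simple random sample. The spread sigma of local temperature anomalies is an assumed, purely illustrative value here, not a published figure:

```python
# Hypothetical sample-size estimate; sigma is an ASSUMED illustrative value.
sigma = 0.5     # assumed std. dev. of local upper-ocean anomalies, deg C
margin = 0.01   # desired half-width of the confidence interval, deg C
z = 1.96        # z-score for 95% confidence

n = (z * sigma / margin) ** 2   # classic n = (z*sigma/E)^2 for a simple random sample
print(round(n))                 # on the order of ten thousand floats
```

Under these assumptions the answer lands near 10,000, in line with the commenter’s order of magnitude; note that a different sigma moves the requirement quadratically.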
Richard Verney, you say:
Richard, that is total and absolute bullshit. I give you the Lie Direct on that.
I even dedicated an entire post to the question of the absorption of IR. In that post I answered your damn questions, over and over and over.
The fact that you didn’t like my answers doesn’t entitle you to make false claims about whether I’ve answered, Richard. That’s underhanded and untrue, and I expect better of you.
See my post called Radiating the Oceans for a full discussion of this issue, including my answers to Richard that he’s trying to pretend never happened …
Richard, I expect an apology, or we’re done discussing forever. I won’t have dealings with a man who tells lies in an attempt to damage my reputation.
w.
Phobos says:
February 26, 2013 at 8:58 am
Jim G says: “Sampling error derives not only from a lack of significant numbers of observations but also from a lack of representativeness of those samples collected to the universe being sampled and from the methods and/or equipment used to collect those samples (siting and calibrations come to mind).”
“This is true for *any* measurement of *anything*. All data is model-dependent — all of it.”
Agreed. The point is that, per the examples I gave, “climate science” is particularly weak on the collection of representative data, and error confidence limits do not take quality of data into consideration.
Further on the step warming. There is a poorly documented step warming in 1976 that has been associated with the PDO phase change from cool to warm, another oceanic phenomenon. Can you get similar upper ocean data for this time period?
MikeR says:
February 26, 2013 at 6:53 am
Can someone give me some background on this issue – I’m totally confused. Graphs aside, surely no one is measuring the heat content of the ocean! Isn’t it correct that they are measuring temperature, with buoys and such? So even if someone decided to convert that into heat content for some reason beyond my understanding, why should we convert it back? Rather, what is the basic data available on temperature, above 700 meters or below or whatever? Do we know what’s been happening in the last decade and before, and how does it depend on depth? I had heard that there is “missing heat”, that some are guessing that it passed down into the very deep ocean beyond reach of our instruments… what are the facts about this?
Thanks.
Curt says:
February 26, 2013 at 8:51 am
Yes, what they are measuring is temperature, whether with the older XBTs or the newer Argo floats. It is sensible, in principle at least, to convert this to energy units using thermal capacitance values, because energy is conserved, and in all calculations, the conservation equations are the key ones to be solved. Average temperature levels don’t really have a physical meaning, but average energy units do.
So what Willis is really doing here is a form of back-calculation, to get a feeling for the sensitivity of the original temperature measurements.
———————————————
Let me add to Curt’s comments. The ocean is heated directly by absorbing solar radiation. The atmosphere is primarily warmed by the oceans rather than the reverse. Since heat flows from warmer to cooler, any warming of the atmosphere would not, on average, exceed that of the oceans, which for the last decade have warmed at a rate of about 0.3 C per century. If one wants to do energy balances, then it makes sense to convert the measured temperatures to heat units; but if you want to know what to expect for surface air temperatures, pay attention to Willis.
If heat has been lost to the ocean depths below 700 m, it will not be coming back to heat the surface. Temperatures at those depths are only a few degrees. There aren’t many places on earth cold enough to be warmed by the deep waters.
Phobos says: February 26, 2013 at 9:54 am
Uzurbrain says: “Where are they getting these accurate thermometers?”
Their temperature measurements are accurate to 1.5 C or less, but nowhere near your 10^-4 C to 10^-5 C.
——————————->
In which case their error band should be about 100 times larger than shown. Just because you can get an electronic instrument to show a reading to 5 decimal places does not mean that is the accuracy. You claim +/- 1.5 C; that is about +/- 3 F. Even with 10 thousand measurements averaged, after considering accuracy, calibration errors, measurement errors, and ambient effects upon the measurement creating errors, you end up with GARBAGE IN, GARBAGE OUT. Even a moron would agree that you cannot express a change in the water content of a lake in milliliters when you are taking your sample measurements with a barrel or even a liter jug, regardless of how many “samples” you take and average. The math does not support the conclusions.
Willis Eschenbach says:
“Richard, that is total and absolute bullshit”.
I am assuming that this is a technical term used in climate science, and that “total and absolute bullshit” differs from just plain bullshit?
Jim G says: “per the examples I gave, “climate science” is particularly weak on collection of representative data and error confidence limits.”
How so? There are ~3000 ARGO buoys in the oceans, with ~750 replaced annually. How many more do you think are needed to get meaningful statistics? Budgets have limitations….
Willis: you rock. What a genius way to cut through the clutter of “data” and try to elucidate the “meaning” and, in this case, the fact that the “meaning” is completely lost in the measurement errors. Thanks. These graphs offer no support for policy recommendations, in fact, the reverse. Anyone using them for that should be ignored.
There was no need to digitize the data from the graph.
From Anthony’s post of February 25,
http://wattsupwiththat.com/2013/02/25/fact-check-for-andrew-glickson-ocean-heat-has-paused-too/
click on the link directly below the 0-700 m Heat Content Anomaly graph:
http://oceans.pmel.noaa.gov/
Click “Data” in the menu at the top of the page, click on “Global 0-700m Ocean Heat Curve” to see the data http://oceans.pmel.noaa.gov/Data/OHCA_700.txt
It describes the errors as SE (standard error).
It should be obvious that the water the ARGO buoys measured in 1993 isn’t the same water they measured in 2012 – the water moves on (sinks, evaporates, rises, mixes) and is replenished from God knows where, while the buoys are relegated to a certain stratum of the oceans. So this whole study is of dubious value.
Willis’ calculations and discussion are good and interesting, but it’s an unquantifiable, dynamic/non-static “population” from inception. To NOT find a change over time would be the exceptional case.
All discussion is dependent on the accuracy of the data. If the data cannot be known to this precision, then we have no basis for discussion.
But didn’t the data start out as temperatures, that were THEN converted to zeta-joules? I don’t think you measure, but calculate zeta-joules.
Huge numbers of data points in different places and at different times do not reduce error estimates; only repeated measurements of the same unchanging parameter reduce measurement errors. Each time you measure a different thing that is itself changing, you get back to your fundamental measurement limitation.
So I doubt that the accuracy is as good as claimed.
Uzurbrain says: “Just because you can get an electronic instrument to show a reading to 5 decimal places does not mean that is the accuracy. You claim +/- 1.5 C; that is about +/- 3 F.”
Again, _read the paper_. 1.5 C is their largest uncertainty. Most are less than 0.5 C, and many less than 0.25 C (Supp Mat, Figure S12). Hence the uncertainty of the average can be small.
RockyRoad says: “It should be obvious that the water the ARGO buoys measured in 1993 isn’t the same water they measured in 2012 – the water moves on (sinks, evaporates, rises, mixes) and is replenished from God knows where, while the buoys are relegated to a certain stratum of the oceans. So this whole study is of dubious value.”
By this argument, you can’t measure the average air temperature in your back yard. Is that really what you think?
ferdberple says:
February 26, 2013 at 8:04 am
CO2 lags temperature in the modern records as well, which is strong evidence that CO2 is not a forcing agent in global temperatures. Rather, global temperatures are forcing CO2, and some other mechanism is causing climate change.
With a chance of -again- starting the same discussions:
CO2 changes lag temperature changes in the modern record, but human caused CO2 emissions lead the increase of CO2. Not temperature. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_co2_acc_1900_2004.jpg
and at an incredible stable ratio:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
Temperature can’t be the cause: seawater at equilibrium gives a maximum of 16 ppmv/°C (Henry’s Law). And vegetation goes the opposite way (more uptake with higher temperatures), which leaves a net response of about 8 ppmv/°C over multi-decades to multi-millennia.
Thus a maximum of 8 ppmv since the depth of the LIA. The rest of the 100+ ppmv increase is human induced.
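One way to read Engelbeen’s bookkeeping, laid out as explicit arithmetic. The ~1 °C of warming since the depth of the LIA is an assumed round number for illustration; the 16 and 8 ppmv/°C figures are from his comment:

```python
# Sketch of the comment's carbon bookkeeping; the 1 C warming figure is ASSUMED.
ocean_outgassing = 16.0    # max ppmv CO2 per deg C from seawater (Henry's Law)
vegetation_uptake = -8.0   # ppmv per deg C: vegetation absorbs more when warmer
warming_since_lia = 1.0    # assumed deg C of warming since the Little Ice Age

net_natural_rise = (ocean_outgassing + vegetation_uptake) * warming_since_lia
observed_rise = 100.0      # the "100+ ppmv" increase cited in the comment

print(net_natural_rise)                  # at most ~8 ppmv from temperature alone
print(observed_rise - net_natural_rise)  # remainder attributed to human emissions
```

Under these assumptions, temperature alone explains at most ~8 ppmv of the rise, leaving 90+ ppmv to other causes.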
That doesn’t give a clue about what CO2 does to temperature. My own estimate: around 1°C for a CO2 doubling, mostly benign for nature, including humans…
Doug Proctor says: “Huge numbers of data points in different places and at different times do not reduce error estimates; only repeated measurements of the same unchanging parameter reduce measurement errors. Each time you measure a different thing that is itself changing, you get back to your fundamental measurement limitation.”
——————————–
It is simply statistical sampling – much like what a candy company does as its boxes come off the assembly line, in order to measure and control quality. They don’t measure the same box over and over again; they sample a subset of the boxes over time. With a sufficiently sized sample, the difference between the sampled set and the entire population is small – this is, after all, the basis of all statistical reasoning.
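The candy-box point is easy to demonstrate with a toy simulation: draw ever-larger samples from a synthetic “population” and watch the standard error of the sample mean shrink roughly as 1/sqrt(n). All numbers below are made up purely for illustration:

```python
import random
import statistics

random.seed(42)  # reproducible illustration
# Synthetic "population" of 100,000 box weights (or water temperatures).
population = [random.gauss(20.0, 1.5) for _ in range(100_000)]

standard_errors = {}
for n in (30, 300, 3000):
    sample = random.sample(population, n)        # sample without replacement
    se = statistics.stdev(sample) / n ** 0.5     # standard error of the mean
    standard_errors[n] = se
    print(f"n={n:5d}  mean={statistics.mean(sample):.3f}  SE={se:.4f}")
```

Each tenfold increase in sample size cuts the standard error by about a factor of three, without ever measuring the same box twice.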
” Lew Skannen says:
February 25, 2013 at 10:53 pm
I am always interested in error bars. I suspect that they are not really wanted, and everyone in the modelling world would be happier if they would just disappear, but they stubbornly refuse to do so. They are cleaned up and brought out for show when they need to be accounted for, but they are expected to behave themselves, not make a scene, not speak unless spoken to, and certainly not relax and reveal any more of themselves than strictly necessary.
In reality, if they were allowed to be themselves and behave as they would at home, I suspect the error bars would undo their corsets and belt buckles and flop out all over the place.
This would then spoil the image of the neat, tidy, prim and proper graph because it would be indistinguishable from a page of wall to wall error bars running rampant. A bit like a ball room dancer at a Hells Angels long weekend booze up.”
Thank you, Lew. Probably half my comments on various climate change sites revolve around questioning the error analysis. I was trained in chemistry, and it’s amazing how quickly error analysis builds a large error bar. I believe that if proper error analysis were performed on climate measurements, we would be able to authoritatively state the variations in average temperature between winter and summer for the past two decades, and not much else.
I suspect that Phobos has a job paid at least in part with public funds. Is he given free rein to post his climate alarmism on blogs constantly throughout the work day? As a hard-bitten taxpayer, I would like to have a discussion with his boss — and with his boss’s boss.