Guest Post by Willis Eschenbach
Anthony has an interesting post up discussing the latest findings regarding the heat content of the upper ocean. Here’s one of the figures from that post.
Figure 1. Upper ocean heat content anomaly (OHCA), 0-700 metres, in zettajoules (ZJ, 10^21 joules). Errors are not specified but are presumably one sigma. SOURCE
He notes that there has been no significant change in the OHCA in the last decade. It’s a significant piece of information. I still have a problem with the graph, however, which is that the units are meaningless to me. What does a change of 10 zettajoules mean? So, following my usual practice, I converted the graph to more familiar units, degrees C. Let me explain how I went about that.
To start with, I digitized the data from the graph. Often this is far, far quicker than tracking down the initial dataset, particularly if the graph contains the errors. I work on the Mac, so I use a program called GraphClick; I’m sure the same or better is available on the PC. I measured three series: the data, the plus error, and the minus error. I then put this data into an Excel spreadsheet, available here.
Then all that remained was to convert the change in zettajoules to the corresponding change in degrees C. The first number I need is the volume of the top 700 metres of the ocean. I have a spreadsheet for this. Interpolated, it says 237,029,703 cubic kilometres. I multiply that by 62/60 to adjust for the density of salt vs. fresh water, and multiply by 10^9 to convert cubic kilometres to tonnes. I multiply that by 4.186 megajoules per tonne per degree C. That tells me that it takes about a thousand zettajoules to raise the upper ocean temperature by 1°C.
Dividing all of the numbers in their chart by that conversion factor gives the same chart in units of degrees C. Calculations are shown on the spreadsheet.
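For anyone who wants to check the arithmetic without opening the spreadsheet, here is a minimal Python sketch of the same conversion. The constants are the approximate values quoted above, not authoritative figures:

```python
# Reproduce the zettajoule-to-degree conversion described above.
# All constants are the approximate values quoted in the post.

volume_km3 = 237_029_703        # volume of the top 700 m of the ocean, km^3
density_ratio = 62 / 60         # salt vs. fresh water, ~1.033
tonnes = volume_km3 * 1e9 * density_ratio   # 1 km^3 of fresh water ~ 1e9 tonnes
megajoules_per_tonne_degC = 4.186           # specific heat of water

heat_capacity_ZJ_per_degC = tonnes * megajoules_per_tonne_degC * 1e6 / 1e21
print(f"{heat_capacity_ZJ_per_degC:,.0f} ZJ per degree C")  # ~1,025

def zj_to_degC(anomaly_zj):
    """Convert a 0-700 m OHCA value in ZJ to the equivalent temperature change."""
    return anomaly_zj / heat_capacity_ZJ_per_degC

print(f"{zj_to_degC(100):.3f} deg C")  # a 100 ZJ anomaly is roughly 0.1 deg C
```

The result, about 1,025 ZJ per degree C, is the conversion factor that takes Figure 1 to Figure 2.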
Figure 2. Upper ocean heat content anomaly, 0-700 metres, in degrees C.
I don’t plan to say a whole lot about that (I’ll leave it to the commenters), other than to point out the following facts:
• The temperature was roughly flat from 1993-1998. Then it increased by about one tenth of a degree in the next five years to 2003, and has been about flat since then.
• The claim is made that the average temperature of the entire upper ocean of the planet is currently known to an error (presumably one sigma) of about a hundredth of a degree C.
• I know of no obvious reason for the 0.1°C temperature rise 1998-2003, nor for the basically flat temperatures before and after.
• The huge increase in observations post 2002 from the addition of the Argo floats didn’t reduce the error by a whole lot.
My main question in this revolves around the claimed error. I find the claim that we know the average temperature of the upper ocean with an error of only one hundredth of a degree to be very unlikely … the ocean is huge beyond belief. This claimed ocean error is on the order of the size of the claimed error in the land temperature records, which have many more stations, taking daily records, over a much smaller area, at only one level. Doubtful.
I also find it odd that the very large increase in the number of annual observations due to the more than 3,000 Argo floats didn’t decrease the error much …
As is common in climate science … more questions than answers. Why did it go up? Why is it now flat? Which way will the frog jump next?
Regards to everyone,
w.
It is pretty obvious that the step up is related to the super El Nino of 1998. In the satellite record it is clear that this super El Nino initiated a step warming that raised global temperature by a third of a degree in four years. It was obvious too that this step warming had an oceanic origin and was not anthropogenic. Your upper ocean temperature data confirm this, and provide additional evidence that may lead to an understanding of this rare phenomenon, which happened only once in the twentieth century.
“garymount says:
February 26, 2013 at 4:58 am
John Eggert says: February 26, 2013 at 3:44 am
Willis: Small (5%) quibble. Your heat capacity is for pure water at 4 °C, atmospheric pressure. None of which, generally, apply to sea water.
– – – –
Are you sure about that John?
“I multiply that by 62/60 [1.03333…] to adjust for the density of salt vs. fresh water…”
That looks like about 3.3% to me, in the ball park.”
As well as adjusting for the density of salt vs fresh water, one needs to adjust for their relative specific heat capacities. The SHC of fresh water is about 4180 J/kg/K. The SHC of seawater is about 3900 J/kg/K, about 6.7% lower. AFAIK, the adjustment factor is not very sensitive to temperature or pressure.
But, as John Eggert says, it is only a minor point (and it is one that I have seen a well known climate scientist overlook).
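To put numbers on the size of the quibble, here is a back-of-envelope sketch. The figures are the round numbers quoted in this exchange (a 1.025 density ratio is assumed here, rather than the 62/60 used in the head post); what matters for the conversion is the product of density and specific heat:

```python
# Back-of-envelope check of the density and specific-heat adjustments.
# Round numbers from this thread; both are approximate. The ZJ-to-degC
# conversion depends on the product rho * c (volumetric heat capacity).

rho_fresh, shc_fresh = 1.000, 4186   # tonnes/m^3, J/(kg*K)
rho_sea,   shc_sea   = 1.025, 3900

print(f"density:       {(rho_sea / rho_fresh - 1) * 100:+.1f}%")   # ~ +2.5%
print(f"specific heat: {(shc_sea / shc_fresh - 1) * 100:+.1f}%")   # ~ -6.8%
net = (rho_sea * shc_sea) / (rho_fresh * shc_fresh) - 1
print(f"net effect:    {net * 100:+.1f}%")                         # ~ -4.5%
```

The two adjustments partially offset, so the net effect on the converted temperatures is a few percent, consistent with calling it a minor point.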
If the error bars are as small as indicated, how is it that the Argo data have needed to be adjusted to such a degree? Argo data initially showed ocean cooling, until the floats that showed the cooling were eliminated from the data set.
The large increase in temperature from 1997 to 2003 works out to 0.12 °C over six years. Give me a break. There is no known process that could warm the upper oceans by that amount in so short a time without also frying the land surface of the earth.
I call BS on the error bars.
John Marshall says:
February 26, 2013 at 2:27 am
To your point: have you ever listened to a baseball game when the announcer tells everyone that it is 128 °F in the outfield, or watched the pre-game when a player holds a thermometer on the field to show how hot it is? The stands reduce convection. The atmosphere cools the surface.
I am sure there are differences between real turf and the fake stuff with regard to temperature in the outfield.
Folks may remember the scandal over the Argo adjustments. This is clearly a case of confirmation bias. When the Argo data started coming in it didn’t show the expected picture. It showed the oceans were cooling, so a hunt was made to eliminate the floats that were showing cooling because they must clearly be in error.
I submit that no such hunt would have been made for sensors that were showing warming, because that was showing the expected result. Given that there should on average be as many faulty sensors reading high as faulty sensors reading low, given the large number of floats, it is likely that by eliminating the low reading sensors we now have an unbalanced population of faulty sensors reading high. This would explain the large jump in ocean temps 1997-2003 around the time Argo was first deployed.
One could just as easily reduce the average height of adult humans by eliminating the male portion of the population, or raise the average height by eliminating the female portion. By taking lots of measurements you could claim statistically that your error bars were very small. But it wouldn’t be true.
Ben Wouters says:
February 26, 2013 at 3:35 am
Ben, I have no idea if geothermal heat is factored into the assorted climate models in use, or how a warm “black body” behaves. But I worked below ground in the Homestake Gold Mine for a period of time, and know from personal experience that below the frost line the temperature goes up X degrees for every Y increase in depth. The Earth is a warm body that, even without any solar input, directly impacts the temperature of the seas and atmosphere.
garymount says:
February 26, 2013 at 4:58 am
From the Engineering ToolBox, specific heat of sea water. About a 6% difference, but the overall contention remains the same:

Water, sea (36 °F): 3.93 kJ/(kg·K)
I think everyone has the Tisdale function backwards. ENSO transfers energy from the ocean to the atmosphere, so ocean enthalpy should have decreased after the 1997 El Nino. The atmosphere should show an increase, and it did very briefly, but has been flat ever since. The 1997 inflection point keeps showing up in strange places. It also happens to be the beginning of the acceleration of the magnetic north pole motion, from about 10 to over 50 kilometers per year.
FauxScienceSlayer says:
February 26, 2013 at 6:20 am
http://www.sciencedirect.com/science/article/pii/S0921818112001658
More, from the paper’s highlights:

• Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.
• Changes in ocean temperatures explain a substantial part of the observed changes in atmospheric CO2 since January 1980.
• Changes in atmospheric CO2 are not tracking changes in human emissions.
The same thing happened with the ice cores: early studies of temperature and CO2 failed to take lag and lead times into account, which made it look like CO2 was driving temperature. Something Al Gore “conveniently” forgot to mention in his movie.
CO2 lags temperature in the modern records as well, which is strong evidence that CO2 is not a forcing agent in global temperatures. Rather, global temperatures are forcing CO2, and some other mechanism is causing climate change.
This is further confirmed by the failure of the CO2-based climate models to predict observed temperatures going forward. This should have been the nail in the coffin for CO2 theory; in any other branch of science it would have been. Except that CO2 theory is closely tied to environmental fears over fossil fuels and pollution, and it is this fear that is preventing an honest scientific examination of the cause-and-effect issue. Instead we get opportunists like Gore and Packy using their positions of authority to knowingly suppress information for personal gain.
Sampling error derives not only from a lack of significant numbers of observations but also from a lack of representativeness of the samples collected relative to the universe being sampled, and from the methods and/or equipment used to collect those samples (siting and calibrations come to mind). These issues plague much of the “data” used in “climate science”: tree rings, sediments, etc., as well as poorly measured temperatures.
I don’t think converting the OHCA to temperature anomaly is actually helpful. It relies on the unstated assumptions that the top 700 m of ocean are well mixed and in thermal equilibrium. I doubt that either of these assumptions is valid.
It is simple to imagine that different temperature profiles from 0-700m would have identical heat content but very different climatic effects and appearances on, say, satellite temperature measurements. A thin, warm layer lying atop a steady decline in temperature might conceivably drive higher levels of evaporation than a relatively cool, deep top layer with relatively warmer water as you approach 700m.
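A toy calculation makes the point. The two profiles below are invented purely for illustration, and the constant volumetric heat capacity is a simplification:

```python
import numpy as np

# Toy illustration: two invented 0-700 m temperature-anomaly profiles with
# the same column heat content but very different surfaces.
# The constant volumetric heat capacity is a simplification.

dz = 1.0                                    # layer thickness, metres
z = np.arange(0, 700, dz)                   # depth grid

warm_lid = np.where(z < 50, 1.0, 0.0)       # +1 degC confined to a 50 m surface layer
diffuse = np.full(z.shape, 50.0 / 700.0)    # ~+0.07 degC spread over the whole column

rho_c = 4.0e6                               # volumetric heat capacity, J/(m^3 * K), approx.

for name, profile in [("warm lid", warm_lid), ("diffuse", diffuse)]:
    q = rho_c * np.sum(profile) * dz        # heat-content anomaly, J per m^2 of surface
    print(f"{name:8s}: {q:.2e} J/m^2, surface anomaly {profile[0]:+.2f} degC")
```

Both columns hold exactly the same heat-content anomaly (2 × 10^8 J per square metre of surface), yet one shows a full degree at the surface and the other less than a tenth of a degree.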
If we’re ultimately concerned with the energy balance of the planet, let’s look at it in terms of total energy rather than the temperature of a very thin layer of the whole system.
Willis wrote: “He notes that there has been no significant change in the OHCA in the last decade.”
1) There is much more to the ocean than the 0-700 m layer.
2) Even for this layer, the statement is incorrect: The OLS 10-year trend there is 44 TW, with a 2-sigma uncertainty of 30 TW. That’s statistically significant warming.
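For readers who want to check a figure like that, here is a minimal sketch of the standard OLS calculation. The series below is an invented placeholder, not the NODC data, and simple OLS error bars ignore autocorrelation, so treat it as illustrative only:

```python
import numpy as np

# Sketch: OLS trend and 2-sigma uncertainty from annual OHC values.
# The series is an invented placeholder; substitute the real 0-700 m
# OHCA data to check the 44 +/- 30 TW figure quoted above.

years = np.arange(2003, 2013)
ohc_zj = np.array([60., 62., 61., 65., 64., 68., 70., 69., 72., 74.])

x = years - years.mean()
slope = (x * (ohc_zj - ohc_zj.mean())).sum() / (x ** 2).sum()   # ZJ per year
resid = ohc_zj - ohc_zj.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (len(years) - 2) / (x ** 2).sum())

tw_per_zj_yr = 1e21 / (365.25 * 86400) / 1e12   # 1 ZJ/yr ~ 31.7 TW
print(f"trend: {slope * tw_per_zj_yr:.0f} +/- {2 * se * tw_per_zj_yr:.0f} TW (2-sigma)")
```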
Willis wrote: “I find the claim that we know the average temperature of the upper ocean with an error of only one hundredth of a degree to be very unlikely … the ocean is huge beyond belief.”
We don’t know the average temperature of the upper ocean to 0.01 C, and no one is claiming that we do.
In fact, no one is making any claims about the temperature of the ocean.
The calculation is about the *change* in average temperature of the ocean: dT = dQ/(mc), and the uncertainty attaches to dT, not to T.
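A quick worked example, taking the roughly 1,025 ZJ/°C heat capacity derived in the head post and the roughly 10 ZJ error bars visible in Figure 1:

```latex
\Delta T \;=\; \frac{\Delta Q}{m c}
\;\approx\; \frac{10~\mathrm{ZJ}}{1025~\mathrm{ZJ}/^{\circ}\mathrm{C}}
\;\approx\; 0.01\,^{\circ}\mathrm{C}
```

That is where the hundredth-of-a-degree figure under discussion comes from: it is the uncertainty in the change, not in the absolute temperature.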
Where are they getting these accurate thermometers? There are a lot more errors than they think. I have worked with "precision" laboratory-standard NBS-traceable thermometers for calibrating plant equipment. These instruments cost $5,000 to $10,000, and are just 0.05% accurate!

When you "calibrate" another thermometer, the "calibrated" thermometer will be LESS accurate than the source. Then you have the process of calibrating the source against a reference. If done in a typical "calibrating" lab it will not be to the actual NBS reference standard but a "proxy." Thus, more errors.

Now you have to throw in the "measurement" while in use. Essentially every electronic temperature-reading device will introduce another error based upon the ambient temperature of the area surrounding the electronics (think of the buoy the equipment is in while taking these readings). The most accurate "precision" measuring devices are designed for "Laboratory Use Only." With the probe in a location away from the measuring instrument, the temperature displayed will change with the laboratory's ambient temperature. Typical numbers are on the order of 0.0001 to 0.00001 per degree change in ambient temperature (and this is for the BEST, costliest laboratory-grade instruments). Not much, but that means that when the buoy is at the surface (85 °F) and then sinks to 700 metres (35 °F) you have just introduced an error of at least 0.0050! All of that RMS large-number-averaging B/S will not remove that error; it will be the same for every one!

Oh, and that reminds me: proper use of RMS averaging of multiple readings assumes that the errors are random and more or less equally distributed on either side of the correct reading. The statistical theory behind RMS averaging assumes this. Any good mathematician will tell you this. You can't prove that 7 is the most likely number from a roll of dice if you are testing this theory with loaded dice, regardless of how many "samples" you take! You will only prove they are loaded.
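As a quick check of the arithmetic in that comment (the coefficient and the temperatures are the commenter's figures, not instrument specifications):

```python
# Quick check of the ambient-drift arithmetic in the comment above.
# The coefficient and temperatures are the commenter's figures, not a
# datasheet; the point is that the drift is systematic, not random.

coeff_per_degF = 0.0001          # readout drift per degF of ambient change
surface_F, depth_F = 85, 35      # ambient swing between surface and 700 m

drift = coeff_per_degF * (surface_F - depth_F)
print(drift)   # 0.005 -- same sign for every float, so averaging won't remove it
```

Because the drift has the same sign for every float making the surface-to-depth excursion, it behaves as a systematic error, which is the commenter's point about averaging.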
Nick Stokes says:
February 25, 2013 at 10:41 pm
Willis,
” I find the claim that we know the average temperature of the upper ocean with an error of only one hundredth of a degree to be very unlikely … the ocean is huge beyond belief. This claimed ocean error is on the order of the size of the claimed error in the land temperature records, which have many more stations, taking daily records, over a much smaller area, at only one level. “
But the ocean temperature isn’t noisy.
###
You don’t know much about physical oceanography. E.g., you weasel about lack of wind at depth, while ignoring the well known and well documented existence of the ocean’s wind equivalent, which represents a far more energetic system than mere gas. You also seem to be under some delusion that ocean water is well mixed. It is not. One example would be the bubbles or lenses of sharply bounded lower- or higher-density water, whose temperature is generally higher (but can also be lower) than the surrounding volume containing them. Until the mechanisms that create these structures and govern their behavior are understood, no one can claim to understand the ocean well enough to be certain of anything, least of all the total heat content of the ocean from a few distribution-biased measurements.
@Nick Stokes 10:41pm:
Define noisy in the context of knowing its average temperature to 0.03 deg C when the fluid body has a range of 0 to 32 deg C over the world, variable by season, latitude, longitude, hour of the day, and depth.
Interannual atmospheric variability forced by the deep equatorial Atlantic Ocean, Brandt-2011, Nature 473,497–500(26 May 2011).
http://www.nature.com/nature/journal/v473/n7348/fig_tab/nature10013_F2.html
Figure 2 shows E-W velocities (color) as a function of depth (-20 to 20 cm/sec), with peak-to-peak reversals in as little as 300 m of depth, repeatedly. (Y-axis: depth to 3500 m; x-axis: time.) Location: 0 N, 23 W. (Moored, non-Argo, data)

This is only one moored buoy from one location over a two-year span. But the shocking thing is the shear, the contra-flow of currents stacked vertically over one spot on the equator. Admittedly, the velocities are not huge, but at differences of more than 1/3 of a knot they are not insignificant either.
Surface Currents in the Atlantic Ocean
http://oceancurrents.rsmas.miami.edu/atlantic/gulf-stream_2.html
is a good index page to many good maps of surface drift buoys (1978-2003): a total-dataset spaghetti map (Figure 3), individual buoys entrained in the Florida Current and then meandering in the Atlantic (Figs. 6, 7), and vertical-profile transects of the Gulf Stream velocity field (Fig. 13).
Is it noisy? Or is it signal? It sure isn’t uniform.
People send me stuff!
This is an interview with J. Gregory (IPCC) and Chambers. Even if you don’t understand Swedish, have patience, because the interviews in English will surprise many of you. It’s a completely new tone when it comes to sea level rise. (It’s the upper podcast link on the page.)
http://sverigesradio.se/sida/artikel.aspx?programid=1650&artikel=5410247
Enjoy!!
If the error bars are correct and are 1 sigma, then multiply them by 3 to get to p<0.1 and see what happens. The delta in ’98 is meaningless. The whole curve lies within the noise! It may be suggestive to some, but for the whole time period it still lies within the noise.
GIGO!
MikeR says:
February 26, 2013 at 6:53 am
“Can someone give me some background on this issue – I’m totally confused. Graphs aside, surely no one is measuring the heat content of the ocean! Isn’t it correct that they are measuring temperature, with buoys and such?”
Yes, what they are measuring is temperature, whether with the older XBTs or the newer Argo floats. It is sensible, in principle at least, to convert this to energy units using thermal capacitance values, because energy is conserved, and in all calculations, the conservation equations are the key ones to be solved. Average temperature levels don’t really have a physical meaning, but average energy units do.
So what Willis is really doing here is a form of back-calculation, to get a feeling for the sensitivity of the original temperature measurements. Yes, oversampling and averaging can reduce the uncertainty in measurements, but that only goes so far, before any systematic errors can overwhelm the remaining random errors.
I, too, have many questions about the OHC values and the conversion from using older methods to Argo data in the years leading up to 2003. I have seen several claims that the jump seen in this plot is not reflected in several other data sets that should also be affected by a true temperature rise. I don’t have time now to look for references – can anyone find them?
Don’t forget, the uncertainty of an average is less than the uncertainty of any of what it’s averaging.
For example, if you have two temperatures T1 and T2, each with a measurement uncertainty of dT, the uncertainty dA of the average is
dA = dT/sqrt(2)
For N points the denominator is sqrt(N).
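A small simulation (all numbers invented for illustration) shows both the sqrt(N) rule and the caveat raised a few comments up: averaging beats down only the random part of the error, while a systematic bias shared by the instruments survives intact:

```python
import numpy as np

# Monte Carlo illustration: averaging N independent measurements shrinks
# the random error as 1/sqrt(N), but a shared systematic bias does not
# average away. All numbers are invented for illustration.

rng = np.random.default_rng(42)
true_T, sigma, bias = 10.0, 0.5, 0.05   # degC

for n in (10, 100, 10_000):
    means = np.array([
        (true_T + bias + rng.normal(0.0, sigma, n)).mean()
        for _ in range(2_000)
    ])
    print(f"N={n:6d}: spread={means.std():.4f} "
          f"(sqrt rule predicts {sigma / np.sqrt(n):.4f}), "
          f"residual offset={means.mean() - true_T:+.4f}")
```

The spread of the averages shrinks as 1/sqrt(N), exactly as the formula says, but the +0.05 offset never gets any smaller.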
Levitus et al. are averaging a very large number of temperature measurements. Their 2012 Supplementary Material shows this in Figure S12: the dTs range from 1.50 C, but they are averaging thousands of measurements all across the ocean.
Their Supp Material gives a detailed exposition of their error handling techniques. It’s worth reading.
Jim G says: “Sampling error derives not only from a lack of significant numbers of observations but also from a lack of representativeness of the samples collected relative to the universe being sampled, and from the methods and/or equipment used to collect those samples (siting and calibrations come to mind).”
This is true for *any* measurement of *anything*. All data is model-dependent — all of it.
RE: ocean cooling between 2003-2005.
According to NASA: “other indicators of ocean heat—satellite measurements of the balance of incoming and outgoing energy at the top of the atmosphere, and satellite and buoy data on sea level rise—didn’t show the cooling trend.”
However, UAH shows a cooling from +0.3 to -0.2, and SST also shows a decline for this period.
Why does this not make sense?
In addition to my comment, even the graphs above show a decline in the period.
richard verney says:
February 26, 2013 at 1:42 am
…Accordingly, before any DWLWIR can reach the oceans it has to first find its way through the wind swept spray and spume which lies immediately above the oceans. Given the optical absorption of LWIR in water, for practical purposes if there is even just 6 microns of wind swept spray and spume lying above the oceans at most only about 25% of DWLWIR even gets to reach the top surface of the oceans. If there is more than 6 microns of windswept spray and spume, even less than 25% of DWLWIR could penetrate this barrier. This is an issue which seems to be overlooked by those promoting the AGW meme.
It may well be the case that in force 5 conditions and above, none of the DWLWIR even reaches the very top layer of the oceans because it cannot penetrate the IR block consisting of the wind swept spray and spume that exists immediately above but divorced from the ocean below.
Ah, doesn’t a significant amount of that spray fall back into the ocean? Transporting at least some of that DWLWIR energy it absorbed to the top layer of the ocean?
richard verney says:
February 26, 2013 at 2:04 am
It would appear that it cannot be conducted downwards since the energy flux is upwards not downwards at the top of the ocean and there is no known mechanism whereby conduction can take place against the direction of energy flux.
Yes, this is a very straightforward point that is mostly avoided in the ocean-warming discussion by alarmists. The downward infrared can warm only the surface of the ocean, as heat transfer cannot happen against the temperature gradient.
“It is well known that temperatures at the sea surface are typically a few-tenths degrees Celsius cooler than the temperatures some tens of centimeters below [Saunders, 1967; Paulson and Simpson, 1981; Wu, 1985; Fairall et al., 1996; Wick et al., 1996; Donlon et al., 2002].”
If the ocean surface is getting warmer, then the mass below will get warmer as well, since it will not be able to cool as fast, and might store more heat from the sun; however, to my understanding the DLIR stops at the surface of the ocean.
Therefore with known SST we may evaluate the temperature of the ocean below. No increase in SST means no increase in the total heat content once the heat transfer below is in balance.