Guest Post by Willis Eschenbach
Previously, we discussed the errors in Levitus et al. here in An Ocean of Overconfidence.
Unfortunately, the supplemental information for the new Levitus et al. paper has not been published. Fortunately, WUWT regular P. Solar has located a version of the preprint containing their error estimate, located here. This is how they describe the start of the procedure that produces their estimates:
From every observed one-degree mean temperature value at every standard depth level we subtract off a climatological value. For this purpose we use the monthly climatological fields of temperature from Locarnini et al. [2010].
Now, the “climatology” means the long-term average (mean) of the variable. In this case, it is the long-term average for each 1° X 1° gridcell, at each depth. Being a skeptical type of fellow, I thought, “How much data do they actually have?” It matters because the less data they have, the larger the expected error in the long-term mean, which is called the “standard error of the mean” (SEM).
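Just to put numbers on that idea, here is a minimal R sketch (mine, nothing from the paper), assuming an illustrative within-gridcell spread of 0.5°C:

deep_sd = 0.5                      # assumed spread of deep-ocean temperatures in a gridcell (deg C), illustrative only
n_obs   = c(100, 10, 3)            # number of observations in the gridcell
round(deep_sd / sqrt(n_obs), 3)    # expected SEM: 0.050, 0.158, 0.289 deg C

With a hundred observations the mean is pinned down reasonably well; with three, the standard error is more than half the assumed spread.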
Regarding the climatology, they say that it is from the World Ocean Atlas 2009 (WOA09), viz: ” … statistics at all standard levels and various climatological averaging periods are available at http://www.nodc.noaa.gov/OC5/WOA09F/pr_woa09f.html “
So I went there to see what kind of numbers they have for the monthly climatology at 2000 metres depth … and I got this answer:
The temperature monthly climatologies deeper than 1500 meters have not been calculated.
Well, that sux. How do the authors deal with that? I don’t have a clue. Frustrated at 2000 metres, I figured I’d get the data for the standard error of the mean (SEM) for some month, say January, at 1500 metres. Figure 1 shows their map of the January SEM at 1500 metres depth:
Figure 1. Standard error of the mean (SEM) for the month of January at 1500 metres depth. White areas have no data. Click on image for larger version. SOURCE
YIKES! In 55 years, only 5% of the 1° X 1° gridcells have three observations or more for January at 1500 metres … and they are calculating averages?
Now, statistically cautious folks like myself would look at that and say “Well … with only 5% coverage, there’s not much hope of getting an accurate average”. But that’s why we’re not AGW supporters. The authors, on the other hand, forge on.
Not having climatological data for 95% of the ocean at 1500 metres, what they do is take an average of the surrounding region, and then use that value. However, with only 5% of the gridcells having 3 observations or more, that procedure seems … well, wildly optimistic. It might be useful for infilling if we were missing, say, 5% of the observations … but when we are missing 95% of the ocean, that just seems goofy.
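Purely as an illustration (a toy sketch of mine, not the authors’ actual objective-analysis scheme), here is what region-average infilling looks like in R when only 5% of the cells contain any data:

set.seed(42)
grid = matrix(NA, 20, 20)                            # toy 20 x 20 "ocean"
filled = sample(length(grid), 0.05 * length(grid))   # only 5% of the cells have data
grid[filled] = rnorm(length(filled), mean = 4, sd = 0.5)
infill = grid
for (i in 1:nrow(grid)) for (j in 1:ncol(grid)) {
  if (is.na(grid[i, j])) {
    rows = max(1, i - 2):min(nrow(grid), i + 2)      # 5 x 5 "influence region"
    cols = max(1, j - 2):min(ncol(grid), j + 2)
    infill[i, j] = mean(grid[rows, cols], na.rm = TRUE)   # NaN if no neighbour has data
  }
}
mean(is.nan(infill))   # fraction of cells that cannot be infilled at all

Even in this toy setup a sizeable fraction of the cells have nothing at all nearby to borrow from, and the rest of the empty ones get “filled” from only one or two distant readings.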
So how about at the other end of the depth scale? Things are better at the surface, but not great. Here’s that map:
Figure 2. Standard error of the mean (SEM) for the month of January at the surface. White areas have no data. Click on image for larger version. Source as in Fig. 1
As you can see, there are still lots and lots of areas without enough January observations to calculate a standard error of the mean … and in addition, for those that do have enough data, the SEM is often greater than half a degree. When you take a very accurate temperature measurement and subtract from it a climatology with a ± half-degree error, you greatly reduce the precision of the result.
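The arithmetic of that precision loss is just standard error propagation; here is a minimal sketch with assumed numbers (a ±0.01°C measurement and a ±0.5°C climatology):

meas_err = 0.01                               # assumed measurement error (deg C)
clim_err = 0.50                               # assumed climatology error (deg C)
round(sqrt(meas_err^2 + clim_err^2), 3)       # errors of a difference add in quadrature: ~0.5 deg C

The anomaly inherits essentially the full climatology error, no matter how good the thermometer is.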
w.
APPENDIX 1: the data for this analysis was downloaded as a netCDF file from here (WARNING-570 Mb FILE!). It is divided into 1° gridcells and has 24 depth levels, with a maximum depth of 1500 metres. It shows that some 42% of the gridcell/depth/month combinations have no data. Another 17% have only one observation for the given gridcell and depth, and 9% have two observations. In other words, the median number of observations for a given month, depth, and gridcell is 1 …
APPENDIX 2: the code used to analyze the data (in the computer language “R”) is:
require(ncdf)   # the older "ncdf" package; on newer systems the equivalent calls are in "ncdf4"

# open the WOA09 monthly climatology and pull out the fields we need
mync = open.ncdf("temperature_monthly_1deg.nc")
mytemps   = get.var.ncdf(mync, "t_gp")   # temperature values
tempcount = get.var.ncdf(mync, "t_dd")   # number of observations per gridcell/depth/month
myse      = get.var.ncdf(mync, "t_se")   # standard error of the mean

# -2147483647 is the file's fill/missing value flag
allcells = length(which(tempcount != -2147483647))   # gridcell/depth/month combinations that exist
twocells = length(which(tempcount == 2))             # combinations with exactly two observations
twocells / allcells                                  # fraction with exactly two observations

# distribution of observation counts
hist(tempcount[which(tempcount != -2147483647)], breaks = seq(0, 6000, 1), xlim = c(0, 40))

# mark missing values as NA, then look at one depth level for January
tempcount[which(tempcount == -2147483647)] = NA
whichdepth = 24                                                   # depth level 24 = 1500 metres
zerodata   = length(which(tempcount[, , whichdepth, 1] == 0))     # cells with no January observations
totaldata  = length(which(!is.na(tempcount[, , whichdepth, 1])))  # cells that exist at this depth
under3data = length(which(tempcount[, , whichdepth, 1] < 3))      # cells with fewer than 3 observations
length(tempcount[, , whichdepth, 1])                              # total cells, including missing-flag cells
1 - under3data / totaldata                                        # fraction with 3 or more observations
APPENDIX 3: A statistical oddity. In the course of doing this, I got to wondering about how accurate the calculation of the standard error of the mean (SEM) might be when the sample size is small. It’s important since so many of the gridcell/depth/month combinations have only a few observations. The normal calculation of the SEM is the standard deviation divided by the square root of N, the sample size.
I did an analysis of the question, and I found out that as the number of samples N decreases, the normal calculation of the SEM progressively underestimates the SEM more and more. At a maximum, if there are only three data points in the sample, which is the case for much of the WOA09 monthly climatology, the SEM calculation underestimates the actual standard error of the mean by about 12%. This doesn’t sound like a lot, but it means that instead of 95% of the data being within the 95% confidence interval of 1.96 * SEM of the true value, only about 80% of the data is in the 95% confidence interval.
Further analysis shows that the standard calculation of the SEM needs to be multiplied by
1 + 0.43 × N^-1.2
to be approximately correct, where N is the sample size.
I also tried using [standard deviation divided by sqrt(N-1)] to calculate the SEM, but that consistently overestimated the SEM at small sample sizes.
The code for this investigation was:
# standard error of the mean: standard deviation / sqrt(N)
sem = function(x) sd(x, na.rm = T) / sqrt(length(x))
# or, alternate sem function using N-1:
# sem = function(x) sd(x, na.rm = T) / sqrt(length(x) - 1)
nobs = 30000            # number of trials per sample size
ansbox = rep(NA, 20)    # fractional SEM underestimate, indexed by sample size
for (sample in 3:20) {
  mybox    = matrix(rnorm(nobs * sample), sample)   # each column is one random sample of size "sample"
  themeans = apply(mybox, 2, mean)                  # the 30,000 sample means
  thesems  = apply(mybox, 2, sem)                   # the 30,000 estimated SEMs
  # actual spread of the means versus the average estimated SEM, as a fraction
  ansbox[sample] = round(sd(themeans) / mean(thesems) - 1, 3)
}
ansbox
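As a usage note, the simulated underestimates stored in ansbox can be compared directly with the fitted correction term quoted above (treat the coefficients 0.43 and -1.2 as an approximate fit, not as theory):

N = 3:20
cbind(N, simulated = ansbox[N], fitted = round(0.43 * N^-1.2, 3))   # the two columns track each other closely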
The NOAA is too busy emulating the EPA to bother much about accuracy:
http://www.americanthinker.com/2012/04/where_are_the_rolling_heads_from_noaa.html
I remain in awe of those of you who are not on a government payroll yet devote time and energy to reach for the truth. I applaud you all and am thankful that people like you exist.
Willis,
I was hoping you could confirm or refute how I was interpreting what the S.I. says about the std errors they used. It’s rather hard to digest and is confused by their equation using the symbol sigma to mean different things (at least according to the adjacent text).
I did not manage to completely understand what they were doing, because they got their equation from a textbook that I don’t have, and their description suggests that sigma means three different things in the same equation. That indicates that they are confused about what they are doing, or are making undeclared assumptions that different statistics are interchangeable.
Aside from the sigma problem, my reading of the S.I. was that when they did not have sufficient data they were using the SE from cells with lots of data.
This would be blatantly wrong and could explain the puzzle you posed in the black-jack thread.
They must be getting a SE from somewhere, and it is not from the data, because it is not there. I think this may be the key to the puzzle.
Lawrie Ayres says:
April 25, 2012 at 2:47 am
I remain in awe of those of you who are not on a government payroll yet devote time and energy to reach for the truth. I applaud you all and am thankful that people like you exist.
…Like Lawrie, me too!
PS I’m wondering whether they are confusing variance and std error. There would be some loose justification for assuming the variance of adjacent cells is similar. But I get the impression that they are applying the std error from a well sampled cell to the single-reading cells, thus getting a totally spurious assessment of the propagated error, which is the basis for their overall uncertainty.
I do enjoy reading your posts, the clarity and humor backed up by really excellent scientific analysis.
I’ve always wondered what average sea and land temperatures can actually tell us.
How can the average temperature of the Sahara, which can swing from +50 to -5 centigrade in one day tell us anything?
From my nescient view, a temperature reading of the sea would seem to represent more warmth than the same temperature reading on land. I also don’t understand how the stored heat and cold in the oceans and ice caps affect an average temperature. I don’t see how it could be calculated.
Taking various temperatures of the sea and finding an average seems to me to be like a color blind person trying to determine a single color of marbles in a box of multi-colored marbles which are constantly changing color.
This is an interesting site (though not helpful) showing current sea weather as determined by ship locations:
http://www.sailwx.info/wxobs/watertemp.phtml
I also wonder how accurate the readings from buoys and voluntary observing ships are. Meteo-France has a blacklist of buoys and vos:
http://www.meteo.shom.fr/qctools/
@Evan Thomas: If you see a measured number like 15 degrees, that’s only half the story. The other half of the story is the accuracy of the measurement. If I tell you that it’s 15 degrees outside, but my thermometer reads +/- 10 degrees, you wouldn’t really pay much attention to me.
Similarly, if I do a statistical calculation and tell you the answer is 15 degrees, that’s only half of the story. What kind of variability would we expect around that number, given the model or procedure I used? Even if my measurements are +/- 0.01 degrees, if my statistical procedure introduces an additional uncertainty of +/- 2.5 degrees I need to acknowledge that my total uncertainty is more than +/- 2.5 degrees and not tell people that it’s +/- 0.01 degrees because my thermometers are so accurate.
In the paper Willis is discussing, there are (at least) three sources of variability: 1) how accurate are the measurement devices, 2) since they’re taking averages, how much variability is there in the average, where they do have data, and 3) how much data do they have and how well does it represent the huge ocean (i.e. how much data do they NOT have)? Each of these sources adds more uncertainty to the final answer.
If you only have three readings for one area of ocean and you take the average (assuming the thermometer has zero error, which is obviously unrealistic), that average will have a large variability/uncertainty associated with it. As I mentioned in a previous post, when you average three temperature readings and the standard error of that mean is 0.5 degrees, you would say that the 95% confidence interval around that average is about 2.5 degrees. That is, at a commonly used standard of statistical certainty, the actual average temperature at that location could be 2.5 degrees warmer or cooler than the calculated average.
And that’s just at that measured point. If we then try to make statements about average temperatures 100 miles away, inferred from the closest measured averages, things get even more complicated.
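For the curious, here is a quick R check of that three-reading example (my sketch; the 0.5 degree standard error is assumed, and the width uses the t-distribution appropriate for a sample of three, i.e. 2 degrees of freedom):

sem_3 = 0.5                                  # assumed standard error of the mean (deg C)
round(qt(0.975, df = 2) * sem_3, 2)          # qt(0.975, 2) is about 4.30, giving ~2.15 deg C either side

which is in the ballpark of the “about 2.5 degrees” quoted above.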
BOY, what a shell game! Now you see the thousands of data points we base our error bars on and now you don’t.
I’d actually appreciate it if someone could check my logic on this one…
I just read the SkS analysis of Levitus et al (2012), just to see how the most strident alarmists were looking at the data. Obviously they are elated to find somewhere to put the heat that’s missing from the atmosphere, so they not only accept the findings uncritically but use them to paint a horrifying picture of the future. I think they went off the rails with the following:
From Levitus, Putting Ocean Heat in Perspective:
“We have estimated an increase of 24×10^22 J representing a volume mean warming of 0.09°C of the 0-2000m layer of the World Ocean. If this heat were instantly transferred to the lower 10 km of the global atmosphere it would result in a volume mean warming of this atmospheric layer by approximately 36°C (65°F).”
I immediately wondered where all this extra heat came from. Even the most dire assessments of CO2’s ability to trap long wave radiation couldn’t account for more than a fraction of the energy added to the deep oceans. Obviously there are forces in play that utterly dwarf GHG’s effects.
They also voice the conviction that part of this hidden heat will meander back to the atmosphere and global warming will resume. My question is how they can possibly know, since this is supposed to be “science”, how that will happen. Clearly there is enough energy there to devastate the planet, so if there is a mechanism that will release it, then we might as well pack our tent and go home. We’re not going to chase away 36C of warming by taxing carbon. Roughly half of that energy was accumulated in just the past 20 years, so presumably it could be released in a similar time frame.
None of this passes even a cursory common-sense test. A vast amount of energy is discovered to be accumulating in the deep oceans, so if that’s true then GHG’s are the least of our worries. The planet seems to have narrowly averted catastrophe from some unknown influence, and the oceans seem to have shielded us from a disaster of incomprehensible proportions. Somehow the relatively tiny effects of CO2 have been hidden there as well, and that portion of the heat (and only that portion) will eventually be returned to the atmosphere?
This sounds so ridiculous that I can’t help but think that I missed a fundamental part of the analysis. I haven’t read the whole paper and haven’t looked at much beyond the scale of the energy they claim to have found, so perhaps some of this is explained by them.
I love this site!
re 36C “This sounds so ridiculous that I can’t help but think that I missed a fundamental part of the analysis. ”
That’s because it is ridiculous. All that is saying is that the heat capacity of water is hugely bigger than that of air. That fact would not shock or surprise anyone.
The catch is the “if”: this heat will never be released quickly, because there’s no reason or mechanism by which that could happen. In fact, this is GOOD news and rather reassuring. What it means is that we have some storage heaters that will be able to slow the cooling of the atmosphere during the next 20 or 30 years of solar minima.
This is a main part of the reason why our climate is so stable and why we are here to talk about it.
The engineer in me has a question – what is the reference condition for zero energy? The freezing point of water? If so, have they corrected for salinity and pressure/depth? The statement that the ocean contains “X 10^22 Joules” is meaningless unless you define the reference point. If I set the reference point high enough, the delta over time looks huge. If I set the reference point low enough (say absolute zero), then small changes in temperature mean small changes in “heat content”.
On some of the unaccounted-for errors and flawed methods. Please refer to the paper linked at the head of the article (the line numbers below are from the preprint and help to find the text).
810 If an ODSQ analysed value is A(i,j) = ΣCn (the first-guess value F(i,j)
811 is a constant and we exclude it here)
We exclude the “first guess” value AND its uncertainty, which never gets mentioned or taken into account. This uncertainty must be as large as or larger than that of the current study, since it comes from an older one when less data was available. When you take a difference, the errors add. Instantly their uncertainty is less than half what it should be.
Equation 4 shows how to calculate the compound error based on the standard deviations of each element. HOWEVER, this requires a sufficiently large sample at each point to derive a valid statistic. There must be enough data for the presumed gaussian distribution of the errors to be accurately represented. You cannot use the S.D. of one reading (which is undefined) as an estimate of its uncertainty.
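To make that last point concrete, a quick R aside (mine, not the paper’s equation 4, which I am not reproducing here; the weights and SDs below are hypothetical):

sd(c(3.1))              # the SD of a single reading is NA -- there is no spread to estimate
w = c(0.5, 0.3, 0.2)    # hypothetical weights for a weighted mean
s = c(0.4, 0.6, 0.5)    # hypothetical per-point standard deviations
sqrt(sum(w^2 * s^2))    # a generic propagated (compound) error for the weighted combination

An error-propagation formula of this kind has nothing to work with if the per-point SDs cannot be estimated in the first place.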
The authors recognise this problem, but the way they get around it is by gross simplification and inappropriate substitution of other SD numbers.
844 Next we assume in equation (5) that σ_Cn = σ_0 and we have the standard error of the
845 mean of the objectively analyzed value:
833 The standard deviation (σ_0) of all observed ODSQ mean anomalies within the
834 influence region surrounding an ODSQ at gridpoint (i,j) is defined as:
… see equation 6 …
838 in which C̄n (the bar denoting an average) is the average of the “N” ODSQ anomalies that occur within the influence
839 region.
So here they substitute the SD of the data within each cell with the SD of the *cell averages*. This is close to meaningless. They are in no way equivalent.
In this way, a one-off reading, which could be far from the true average of a 100 km x 100 km by 100 m deep slab of ocean, is deemed to have a temperature uncertainty equal to the SD of the means of the surrounding cells in the “influence region”. How can the number of cells in the region determine the accuracy of our one reading as a representation of its cell? If it has more neighbours, does it become more accurate?
And this is not just some odd-ball peculiarity I have pulled out to be awkward; as Willis points out, this is the median situation: the most common case.
So after all the heavy-duty, scary-looking maths they wave at the reader, they end up throwing it all out of the window and making a trivial and inappropriate substitution.
The still-impressive equation 7 simply reduces to σ_a = k × σ_0, where k is a sum over all the weights of the nice gaussian they initially said they were going to use. In reality we see they don’t. I have not calculated that sum, but if it does not end up being equal to one, I’d want to know why not. The weighting they don’t even use should not affect the result!
The next processing stage seems to be more legitimate except that it is meaningless if the values from the first stage are not valid.
So the bottom line is that all the fancy maths is a farce. They are not accounting in any way for the uncertainty of using single samples to represent large volumes of ocean, and they are totally ignoring the certainly larger uncertainty in the “first guess” values.
Are they being deceptive? I don’t know. I suppose the other possibility is incompetence.
Referring to my handy reference book “Introduction to Engineering Experimentation” (Wheeler/Ganji, 1996), chapter 6.4, and applying it here:
For a given gridcell (cube) the sample size is very small with respect to the spatial variance; therefore for each cell we must use the Student’s t distribution (dependent on the number of samples and the confidence required) to estimate a particular cell’s confidence interval at the time of measurement. This states that the ½ confidence interval equals the t-distribution value times the sampled standard deviation divided by the square root of the number of samples.
For example let’s say a cell has three concurrent sample values of 285.00 (Kelvin), 286.00 (Kelvin) and 287.00 (Kelvin). The mean is (285.00+286.00+287.00)/3 = 286.00K. The sampled (though not necessarily the true) standard deviation is (((285-286)^2+(286-286)^2+(287-286)^2)/2)^0.5 = 1.00K. 95% confidence corresponds to a probability of the mean temperature falling outside of the confidence interval of 0.05. Using a Student’s t-table, degrees of freedom = #samples - 1 = 3-1 = 2. The t-distribution value from the table at 2 degrees of freedom and .05/2 = .025 is 4.303. The estimated ½ confidence interval is the t-distribution value times the sampled standard deviation divided by the square root of the number of samples. This is 4.303*1.00K/(3)^0.5 = 2.48K. In other words, with only this data in hand, the best estimate at 95% confidence is 286K +/- 2.48K. Likewise at 90% confidence the result would be 286K +/- 1.69K. So even if you had hundredths of a degree accuracy on the measuring device, your precision is wiped out by the spatial variance combined with the low sample size. If your other cells have similar sample sizes and standard deviations, further processing to get the global average won’t improve the approximate size of this error.
Now let’s work this backwards; we want to guess how many samples we need to get +/-0.1K at 95% confidence. We don’t know the number of samples yet, so we have to estimate using an initial guess and our low-sample-size data. I’ll assume the sample size is large (>30) so we get to use the normal distribution; using 1K as an estimate for the standard deviation, we get an estimate of (1.96*1K/.1K)^2 = 384 samples (in that grid cell at that time). So if you have a standard deviation of 1.00K and you want to achieve +/-0.1K at 95% confidence, based on the three samples we took we estimate that we will need 384 samples to achieve that!
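For anyone who wants to reproduce that arithmetic, here is a short R check (same assumed three samples, and the same large-sample normal approximation for the last step):

x = c(285, 286, 287)                                     # the three assumed readings (K)
n = length(x)
round(qt(0.975, df = n - 1) * sd(x) / sqrt(n), 2)        # ~2.48 K half-interval at 95%
round(qt(0.950, df = n - 1) * sd(x) / sqrt(n), 2)        # ~1.69 K half-interval at 90%
round((qnorm(0.975) * 1 / 0.1)^2)                        # ~384 samples needed for +/- 0.1 K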
Willis my question to you is: what do the real world ARGO standard deviations and sample sizes per grid cell look like (on average)? Is that something you can calculate easily with the given data? We could obtain an estimate for the spatial sampling error by using the average sample size per cell and the average standard deviation per cell (all for a given sample time) and running it through this process. This would provide a minimum spatial sampling error estimate (we would actually expect it to be larger). We could also make a minimum guess on how many samples would be needed per cell for a given timeframe.
Taliesyn says:
April 25, 2012 at 9:38 am
What we are looking at is the change in the ocean’s heat content, not the absolute heat content. As a result it doesn’t matter what we take as the reference condition. If the next year the relevant volume (e.g. 0-2000 metres depth) is warmer, and we know the volume, we can calculate the change in heat content from year 1 to year 2.
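A rough R sketch of that calculation, using assumed round numbers (the 0-2000 metre mass is the ~673 quadrillion tonnes mentioned elsewhere in this thread, and the specific heat of seawater is taken as roughly 3985 J/kg/K):

mass_kg = 6.73e20       # approximate mass of the 0-2000 m ocean layer (kg)
cp      = 3985          # approximate specific heat of seawater, J/(kg K)
dT      = 0.09          # reported warming of the layer (deg C)
mass_kg * cp * dT       # ~2.4e23 J, i.e. roughly the 24 x 10^22 J reported in the paper

The change in heat content follows from the change in temperature alone; the reference point cancels out.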
All the best. If that’s not clear or if you have more questions, ask ’em.
w.
Hmmm says:
April 25, 2012 at 10:23 am
Fascinating analysis, Hmmm, very interesting.
Regarding your question, there is a variety of standard deviations and sample sizes in the Argo data depending on the year in question, the location, and the size of the gridcell. See my earlier articles on Argo for more information on these variables.
The problem I see is not so much with the Argo data, it is with the “climatology” data. They use very little Argo data for the climatology, in order to maintain continuity. This is almost all old-school, drop-the-thermometer-overboard-on-a-line data, and as my maps above show, there’s not much of it, particularly at depth.
Since all of the data (including Argo and XBT data) has the climatology subtracted from it, and since the errors in the climatology are much, much larger than in the Argo data, the error amounts are ruled by the climatology errors and not the Argo errors.
Thanks for your very clear explanation,
w.
Hmmm, good reply Hmmm.
Here’s what Willis posted in the other thread about the supposed accuracy expressed as temperature equivalent.
>>
Here’s the problem I have with this graph. It claims that we know the temperature of the top two kilometres (1.2 miles) of the ocean in 1955-60 with an error of plus or minus one and a half hundredths of a degree C …
It also claims that we currently know the temperature of the top 2 kilometers of the global ocean, which is some 673,423,330,000,000,000 tonnes (673 quadrillion tonnes) of water, with an error of plus or minus two thousandths of a degree C …
>>
Now you calculated:
>>
So if you have a standard deviation of 1.00K and you want to achieve +/-0.1K @95% confidence, based on the three samples we took we estimate that we will need 384 samples to achieve that!
>>
So in order to get the claimed level of accuracy you’d need a couple of gazillion data points for each cell. That gels with an intuitive idea of how difficult it would be to get that kind of accuracy, and with Willis’ statement that incredulity is sufficient to dismiss such a fanciful claim.
Now, to be realistic, in 100 m of ocean over 100 km x 100 km you must be looking at a temperature range of the order of 10 kelvin. You can be “95%” certain your one reading is within that range.
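Carrying Hmmm’s back-of-the-envelope method a bit further (same normal approximation; the 1 K and 2.5 K standard deviations are assumptions, and this ignores any further error reduction from averaging many cells together):

n_needed = function(sdev, half_width) round((qnorm(0.975) * sdev / half_width)^2)
n_needed(1.0, 0.002)    # ~960,000 samples per cell for +/- 0.002 deg C with a 1 K spread
n_needed(2.5, 0.002)    # ~6,000,000 with a 2.5 K spread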
Who needs actual valid data when you have faith and models.
Hmmm says: “Using a Student’s t-table, degrees of freedom = #samples - 1 = 3-1 = 2. The t-distribution value from the table at 2 degrees of freedom and .05/2 = .025 is 4.303.”
What does the table say for #samples = 1, i.e. zero DF? That is the dominant case here.
P. Solar, that puts it off the charts! Basically you know nothing about your confidence interval with only 1 data point or 0. It tells you that you need way more buoys…
Hmmm said: “Basically you know nothing about your confidence interval with only 1 data point or 0. It tells you that you need way more buoys…”
Quite correct. I run a regression forecast for tropical cyclones in the West Pacific. In regressions where the initial conditions include only 1 or 2 years, I refuse to even calculate a confidence interval. In the rare instance of a perfect 1.00 correlation coefficient, I also refuse to calculate a confidence interval, having no error information to base it on. I fully realize that a regression like that simply does not have enough data to have made an error yet. Thanks for the insightful analysis, guys, especially Willis and Hmmm…
JohnH says: April 25, 2012 at 8:08 am
I’d actually appreciate it if someone could check my logic on this one…
………..
From Levitus, Putting Ocean Heat in Perspective:
“We have estimated an increase of 24×10^22 J representing a volume mean warming of 0.09°C of the 0-2000m layer of the World Ocean. If this heat were instantly transferred to the lower 10 km of the global atmosphere it would result in a volume mean warming of this atmospheric layer by approximately 36°C (65°F).”
I immediately wondered where all this extra heat came from. Even the most dire assessments of CO2’s ability to trap long wave radiation couldn’t account for more than a fraction of the energy added to the deep oceans. Obviously there are forces in play that utterly dwarf GHG’s effects.
I think JohnH’s logic is correct here – this all fails the commonsense test.
I look forward to some further replies to JohnH’s thoughts.
JohnH says:
April 25, 2012 at 8:08 am
Yeah, I’d call that “Putting Heat into Ocean Alarmism”. That kind of comparison is just designed to scare, and nothing else. “If this heat were instantly transferred to your bathtub it would result in a volume mean warming of …”
w.
The more I read this paper the more concerned I get.
On page 4, at lines 84-86, Levitus et al. state:
The way I’m reading this, the authors did not actually calculate the heat content (enthalpy) of the water at each depth with the passage of time. Instead they assumed that the salinity, specific heat (heat capacity), and density did not change over time. The only variable they changed was ocean temperature.
As a chemical engineer, that series of assumptions gives me considerable heartburn. I see two core problems. First, it is not reasonable to assume that the salinity will remain the same at a given depth with time. Second, the thermodynamic properties of salt water vary considerably with salinity. In particular specific heat (heat capacity) and enthalpy.
Of particular concern is that the heat capacity of salt water varies so much with salinity that I don’t feel it can be treated as a constant when calculating changes in the enthalpy (heat content) of sea water.
You can see the variation of specific heat with salinity here: http://web.mit.edu/seawater/ (look for “Downloads” at the bottom and click on “Tables of properties in pdf format”).
I became doubly concerned when Levitus stated:
This implies there are “significant drift problems” with the salinity data.
To get a rough idea of the potential for error I selected a temperature of 10C to evaluate. I selected 10C largely because this is roughly equivalent to the temperature of sea water at a depth of 600 feet, and because using that value minimized the amount of interpolation needed. For simplification, I ignored adiabatic cooling with depth (largely because the data I had handy is at atmospheric pressure).
For typical ocean temperatures with depth see here: http://www.windows2universe.org/earth/Water/images/sm_temperature_depth.jpg
I used the following data
At 10C and a salinity of 30 ppt the enthalpy of salt water = 39.8 kJ/kg
At 10C and a salinity of 40 ppt the enthalpy of salt water = 39.1 kJ/kg
From this I calculated:
The average change in enthalpy per unit salinity = 0.07 kJ/kg ppt
The average enthalpy of ocean water at 10C & an average salinity of 35 ppt = 39.45 kJ/kg
(deliberately not rounded)
The error in calculating the heat content by missing one unit of salinity at 10C = 0.7 / 39.45 * 100 = 1.77% error (say 1.8%)
Now a 1.8% error for a 1 ppt change is cause for concern, because ocean salinity varies from 32 to 37 ppt and can be as low as 16 ppt in the Black Sea.
(see here: http://www.onr.navy.mil/focus/ocean/water/salinity1.htm)
As a check, I also calculated how much of a salinity change would be needed to equal the entire 0.09 C temperature change proposed by the authors, noting that a decrease in salinity will result in an increase in ocean temperature at the same ocean heat content.
At 10.09 C and a salinity of 35 ppt the enthalpy of salt water = 39.81 kJ/kg (not rounded)
Change in enthalpy with a 0.09 C temperature change = (39.81 – 39.45)
= 0.37 kJ/kg
Change in salinity equal to a 0.09 C temperature change = 0.37 / 0.70
= 0.53 ppt
As a final check, I recalculated the figures above on a volume basis, after taking density variations with salinity into account (data not shown). The figures above did not change significantly.
From the above, it appears that the authors failed to properly take salinity’s impact on heat content and enthalpy into account. Coupled with the authors’ stated concerns regarding “significant drift problems” with the salinity data… well, it leaves me with serious doubts.
Couple this with Willis’s concerns, and… the whole thing appears to be a real mess.
Now I’m not saying that the world’s oceans are becoming more dilute. But I do think it is reasonable to say that ocean salinity varies widely at a given location with depth and time, and that these measurable changes should be taken into account.
Furthermore, I think it would be reasonable to suggest that annual Ocean Heat Content (OHC) should be calculated from the enthalpies derived using the temperature, pressure, and salinity values captured by the individual instruments, and not by concocting questionable values and calling them “heat content anomalies”.
Regards,
Kforestcat
P.S. There is always a possibility I’m misinterpreting Levitus’s approach or I’m missing something fundamental. Please let me know if you see a flaw in my analysis.
The World Ocean Atlas 09 search function is your friend in this; the following graphs are from there.
[WOA09 salinity climatology graphs]
Other than in small enclosed areas (that are not studied by Levitus), salinity in the ocean varies from about 36 psu in the North Atlantic down to 34 in the North Pacific. That would make a difference of about a quarter of a percent in specific heat at 10°C.
However, the change in any one location over a year is small, with a standard deviation in the most variable areas of 0.3 psu.
And this 0.3 psu change makes a change (at 10°C) of about 0.04% in the specific heat …
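To put rough numbers on that in R (my own crude linear approximation of the specific heat of seawater at 10°C, assuming about 4.19 kJ/kg/K for fresh water and about 3.99 kJ/kg/K at 35 psu, not the MIT tables themselves):

cp_at = function(S) 4.19 - S * (4.19 - 3.99) / 35          # kJ/(kg K), crude linear fit in salinity
round(100 * (cp_at(34) - cp_at(36)) / cp_at(35), 2)         # ~0.29% across the 34-36 psu spread
round(100 * (cp_at(34.85) - cp_at(35.15)) / cp_at(35), 3)   # ~0.04% for a 0.3 psu change

Both are in line with the percentages above.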
So I’d say they are justified in using the climatology rather than the actual measured salinity for a given month and depth level, since the error is quite small.
The larger problem, of course, is the lack of climatology data for much of the world, getting worse with depth.
Finally, thanks immensely for the Properties of Water pdf, I’ll use that a lot.
w.