Guest Post by Willis Eschenbach
Well, I was going to write about hourly albedo changes, honest I was, but as is often the case I got sidetracked. My great thanks to Joanne Nova for highlighting a mostly unknown paper on the error estimate for the Argo dataset entitled “On the accuracy of North Atlantic temperature and heat storage fields from Argo” by R. E. Hadfield et al., hereinafter Hadfield2007. As a bit of history, three years ago in a post entitled “Decimals of Precision” I pointed out inconsistencies in the prevailing Argo error estimates. My calculations in that post showed that their claims of accuracy were way overblown.
The claims of precision at the time, which are unchanged today, can be seen in Figure 1(a) below from the paper Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty, by Norman G. Loeb et al. (paywalled here), hereinafter Loeb2012.
Figure 1. This shows Fig. 1(a) from Loeb2012. ORIGINAL CAPTION: a, Annual global averaged upper-ocean warming rates computed from first differences of the Pacific Marine Environmental Laboratory/Jet Propulsion Laboratory/Joint Institute for Marine and Atmospheric Research (PMEL/JPL/JIMAR), NODC, and Hadley, 0–700 m
I must apologize for the quality of the graphics, but sadly the document is paywalled. It’s OK, I just wanted to see their error estimates.
As you can see, Loeb2012 is showing the oceanic heating rates in watts per square metre applied over each year. All three groups report about the same size of error. The error in the earliest data is about 1 W/m2. However, the size of the error starts decreasing once the Argo buoys started coming on line in 2006. At the end of their record all three groups are showing errors well under half a watt per square metre.
Figure 2. This shows Fig. 3(a) from Loeb2012. Black shows the available heat for storage as shown by the CERES satellite data. Blue shows heating rates to 1800 metres, and red shows heating rates to 700 metres. ORIGINAL CAPTION: a, Global annual average (July to June) net TOA flux from CERES observations (based on the EBAF-TOA_Ed2.6 product) and 0–700 and 0–1,800 m ocean heating rates from PMEL/JPL/JIMAR
Here we see that at the end of their dataset the error for the 1800 metre deep layer was also under half a watt per square metre.
But how much temperature change does that half-watt per square metre error represent? My rule of thumb is simple.
One watt per square metre for one year warms one cubic metre of the ocean by 8°C
(Yeah, it’s actually 8.15°C, but I do lots of general calcs, so a couple of percent error is OK for ease of calculation and memory). That means a half watt for a year is 4°C per cubic metre.
So … for an 1800 metre deep layer of water, Loeb2012 is saying the standard error of their temperature measurements is 4°C / 1800 = about two thousandths of a degree C (0.002°C). For the shallower 700 metre layer, since the forcing error is the same but the mass is smaller, the same error in W/m2 gives a larger temperature error of 4°C / 700, which equals a whopping temperature error of six thousandths of a degree C (0.006°C).
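For those who want to check the rule of thumb and the conversions above, here is the arithmetic in R, using nominal values for seawater density and specific heat (slightly different constants give the more exact 8.15°C):

rho <- 1025                        # nominal seawater density, kg/m3
cp  <- 3850                        # nominal seawater specific heat, J/(kg K)
secs_per_year <- 365.25 * 24 * 3600

# one watt per square metre for one year, into one cubic metre
1 * secs_per_year / (rho * cp)     # ~8 degrees C

# temperature error implied by a half-watt annual error
0.5 * 8 / 1800                     # 0-1800 m layer: ~0.002 degrees C
0.5 * 8 / 700                      # 0-700 m layer:  ~0.006 degrees C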
I said at that time that this claimed accuracy, somewhere around five thousandths of a degree (0.005°C), was … well … highly unlikely.
Jo Nova points out that, curiously, although the paper was written in 2007, it got little traction at the time or since. I certainly hadn’t read it when I wrote my post cited above. The following paragraphs from their study are of interest:
ABSTRACT:
…
Using OCCAM subsampled to typical Argo sampling density, it is found that outside of the western boundary, the mixed layer monthly heat storage in the subtropical North Atlantic has a sampling error of 10–20 Wm−2 when averaged over a 10° x 10° area. This error reduces to less than 10 Wm−2 when seasonal heat storage is considered. Errors of this magnitude suggest that the Argo dataset is of use for investigating variability in mixed layer heat storage on interannual timescales. However, the expected sampling error increases to more than 50 Wm−2 in the Gulf Stream region and north of 40°N, limiting the use of Argo in these areas.
and
Our analysis of subsampled temperature fields from the OCCAM model has shown that in the subtropical North Atlantic, the Argo project provides temperature data at a spatial and temporal resolution that results in a sampling uncertainty in mixed layer heat storage of order 10–20 Wm−2. The error gets smaller as the period considered increases and at seasonal [annual] timescales is reduced to 7 ± 1.5 Wm−2. Within the Gulf Stream and subpolar regions, the sampling errors are much larger and thus the Argo dataset will be less useful in these regions for investigating variability in the mixed layer heat storage.
Once again I wanted to convert their units of W/m2 to a temperature change. The problem I have with the units many of these papers use is that “7 ± 1.5 Wm−2” just doesn’t mean much to me. In addition, the Argo buoys are not measuring W/m2, they’re measuring temperatures and converting them to W/m2. So my question upon reading the paper was, how much will their cited error of “7 W/m2” for one year change the temperature of the “mixed layer” of the North Atlantic? And what is the mixed layer anyhow?
Well, they’ve picked a kind of curious thing to measure. The “mixed layer” is the top layer of the ocean that is mixed by both the wind and by the nightly overturning of the ocean. It is of interest in a climate sense because it’s the part of the ocean that responds to the changing temperatures above. It can be defined numerically in a number of ways. Basically, it’s the layer from the surface down to the “thermocline”, the point where the ocean starts cooling rapidly with depth. Jayne Doucette of the Woods Hole Oceanographic Institution has made a lovely drawing of most of the things that go on in the mixed layer. [For unknown reasons she’s omitted one of the most important circulations, the nightly overturning of the upper ocean.]
Figure 3. The mixed layer, showing various physical and biological processes occurring in the layer.
According to the paper, the definition that they have chosen is that the mixed layer is the depth at which the ocean is 0.2°C cooler than the temperature at ten metres depth. OK, no problem, that’s one of the standard definitions … but how deep is the mixed layer?
Well, the problem is that the mixed layer depth varies by both location and time of year. Figure 4 shows typical variations in the depth of the mixed layer at a single location by month.
Figure 4. Typical variations of the depth of the mixed layer by month. Sorry, no provenance for the graph other than Wiki. Given the temperatures I’m guessing North Atlantic. In any case, it is entirely representative of the species.
You can see how the temperature is almost the same all the way down to the thermocline, and then starts dropping rapidly.
However, I couldn’t find any number for the average mixed layer depth anywhere. So instead, I downloaded the 2°x2° mixed layer depth monthly climatology dataset entitled “mld_DT02_c1m_reg2.0_Global.nc” from here and took the area-weighted average of the mixed layer depth. It turns out that globally the mixed layer depth averages just under sixty metres. The whole process for doing the calculations including writing the code took about half an hour … I’ve appended the code for those interested.
Then I went on to resample their 2°x2° dataset to a 1°x1° grid, which of course gave me the same answer for the average, but it allowed me to use my usual graphics routines to display the depths.
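One simple way to do that resampling in R (not necessarily the exact routine I used, but it gives the same averages), with mldmap being the 90 x 180 matrix built in the code appended at the end:

mldmap1 <- kronecker(mldmap, matrix(1, 2, 2))   # each 2 deg cell becomes a 2x2 block of 1 deg cells
dim(mldmap1)                                    # 180 x 360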
Figure 5. Average mixed layer depth around the globe. Green and blue areas show deeper mixed layers.
I do love climate science because I never know what I’ll have to learn in order to do my research. This time I’ve gotten to explore the depth of the mixed layer. As you might imagine, in the stormiest areas the largest waves mix the ocean to the greatest depths, which are shown in green and blue. You can also see the mark of the El Nino/La Nina along the Equator off the coast of Ecuador. There, the trade winds blow the warm surface waters to the west, and leave the thermocline closer to the surface. So much to learn … but I digress. I could see that there were a number of shallow areas in the North Atlantic, which was the area used for the Argo study. So I calculated the average mixed layer depth for the North Atlantic (5°N-65°N, 0°W-90°W). This turns out to be 53 metres, about seven metres shallower than the global average.
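Here’s a sketch of that sub-area calculation in R, using the 1°x1° version. Note that I’m assuming here that the rows run south to north and the columns run east from Greenwich; check the file layout before relying on this:

lats <- seq(-89.5, 89.5, 1)                   # assumed 1 deg row centres, S to N
lons <- seq(0.5, 359.5, 1)                    # assumed column centres, E from 0
wts  <- matrix(cos(lats * pi/180), 180, 360)  # area weights, one per latitude row

narows <- which(lats >= 5 & lats <= 65)       # 5N to 65N
nacols <- which(lons >= 270)                  # 270E-360E, i.e. 90W to 0W
weighted.mean(mldmap1[narows, nacols], wts[narows, nacols], na.rm = TRUE)  # ~53 metres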
Now, recalling the rule of thumb:
One watt per square metre for one year raises one cubic metre of seawater about eight degrees.
Using the rule of thumb with a depth of 53 metres, one W/m2 over one year raises 53 cubic metres (mixed layer depth) of seawater about 8/53 ≈ 0.15°C. However, they estimate the annual error at seven W/m2 (see their quote above). This means that Hadfield2007 are saying the Argo floats can only determine the average annual temperature of the North Atlantic mixed layer to within plus or minus 1°C …
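Or in the same R terms:

# one W/m2 for one year, spread through a 53-metre mixed layer
8 / 53        # ~0.15 degrees C
# their 7 W/m2 annual sampling error
7 * 8 / 53    # ~1.06 degrees C, call it plus or minus one degree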
Now, to me that seems reasonable. It is very, very hard to accurately measure the average temperature of a wildly discontinuous body of water like oh, I don’t know, say the North Atlantic. Or any other ocean.
So far, so good. Now comes the tough part. We know that Argo can measure the temperature of the North Atlantic mixed layer with an error of ±1°C. Then the question becomes … if we could measure the whole ocean with the same density of measurements as the Argo North Atlantic, what would the error of the final average be?
The answer to this rests on a curious fact—assuming that the errors are symmetrical, the error of the average of a series of measurements, each of which has its own inherent error, is smaller than the average of the individual errors. If we are averaging N items, each of which has the same error E, then the error of the average scales as
sqrt(N)/N
So for example if you are averaging one hundred items each with an error of E, your error is a tenth of E [ sqrt(100)/100 ].
If the N errors are not all equal, on the other hand, then what scales by sqrt(N)/N is not the error E but
sqrt(E^2 + SD^2)
where SD is the standard deviation of the errors.
Now, let’s assume for the moment that the global ocean is measured at the same measurement density as the North Atlantic in the study. It’s not, but let’s ignore that for the moment. Regarding the 700 metre deep layer, we need to determine how much larger in volume it is than the volume of the NA mixed layer. It turns out that the answer is that the global ocean down to 700 metres is 118 times the volume of the NA mixed layer.
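As a rough cross-check on that factor of 118, using round numbers (these are approximations, not the exact areas from my gridded calculation):

global_ocean_area <- 3.61e14    # m2, round number for the world ocean
na_ocean_area     <- 4.0e13     # m2, rough ocean area within 5N-65N, 0-90W

(global_ocean_area * 700) / (na_ocean_area * 53)    # ~119, i.e. the factor of 118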
Unfortunately, while we know the mean error (7 W/m2 = 1°C), we don’t know the standard deviation of those errors. However, they do say that there are many areas with larger errors. So if we assumed something like a standard deviation of say 3.5 W/m2 = 0.5°C, we’d likely be conservative; it may well be larger.
Putting it all together: IF we can measure the North Atlantic mixed layer with a mean error of 1° C and an error SD of 0.5°C, then with the same measurement density we should be able to measure the global ocean to
sqrt(118)/118 * sqrt( 1^2 + 0.5^2 ) ≈ 0.1°C
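And here’s a small Monte Carlo sketch of that same calculation, just to confirm the scaling; it simulates 118 blocks whose individual error sizes average 1°C with a standard deviation of 0.5°C:

set.seed(42)
N <- 118; E <- 1; SD <- 0.5

# give each of the N blocks its own error size, mean E, spread SD
errsize <- rnorm(N, mean = E, sd = SD)

# repeatedly draw one error per block and average the N of them
trials <- replicate(20000, mean(rnorm(N, mean = 0, sd = abs(errsize))))

sd(trials)                       # empirical error of the average
sqrt(N)/N * sqrt(E^2 + SD^2)     # the formula above: ~0.103, call it 0.1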
Now, recall from above that Loeb2012 claimed an error of something like 0.005°C … which appears to be optimistic by a factor of about twenty.
And my guess is that underestimating the actual error by a factor of 20 is the best case. I say this because they’ve already pointed out that “the expected sampling error increases to more than 50 Wm−2 in the Gulf Stream region and north of 40°N”. So their estimate doesn’t even hold for all of the North Atlantic.
I also say it is a best case because it assumes that a) the errors are symmetrical, and that b) all parts of the ocean are sampled with the same frequency as the upper 53 metres of the North Atlantic. I doubt if either of those is true, which would make the uncertainty even larger.
In any case, I am glad that once again, mainstream science verifies the interesting work that is being done here at WUWT. If you wonder what it all means, look at Figure 1, and consider that in reality the error bars are twenty times larger … clearly, with those kinds of errors we can say nothing about whether the ocean might be warming, cooling, or standing still.
Best to all,
w.
PS: I’ve been a bit slow writing this because a teenage single mother and her four delinquent children seem to have moved in downstairs … and we don’t have a downstairs. Here they are:
CUSTOMARY REQUEST: If you disagree with someone, please quote the exact words you find problems with, so that all of us can understand your objection.
CODE: These days I mostly use the computer language “R” for all my work. I learned it a few years ago at the urging of Steve McIntyre, and it’s far and away the best of the dozen or so computer languages I’ve written code in. The code for getting the weighted average mixed layer depth is pretty simple, and it gives you an idea of the power of the language.
# specify URL and file name -----------------------------------------------
mldurl="http://www.ifremer.fr/cerweb/deboyer/data/mld_DT02_c1m_reg2.0.nc"
mldfile="Mixed Layer Depth DT02_c1m_reg2.0.nc"

# download file -----------------------------------------------------------
download.file(mldurl,mldfile)

# extract and clean up variable (90 rows latitude by 180 columns longitude by 12 months)
nc=open.ncdf(mldfile)
mld=aperm(get.var.ncdf(nc,"mld"),c(2,1,3)) # the "aperm" changes from 180 rows x 90 cols to 90 x 180
mld[mld==1.000000e+09]=NA # replace missing values with NA

# create area weights ------------ (they use a strange unequal 2° grid with the last point at 89.5°N)
latline=seq(-88,90,2)
latline[90]=89.5
latline=cos(latline*pi/180)
latmatrix2=matrix(rep(latline,180),90,180)

# take array gridcell averages over the 12 months
mldmap=rowMeans(mld,dims = 2,na.rm = T)

dim(mldmap) # checking the dimensions of the result, 90 latitude x 180 longitude
[1] 90 180

# take weighted mean of gridcells
weighted.mean(mldmap,latmatrix2,na.rm=T)
[1] 59.28661

Willis, here is a link to the paper:
Loeb, et al. Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. (Nature Geoscience Vol 5 February 2012)
URL: http://www.met.reading.ac.uk/~sgs02rpa/PAPERS/Loeb12NG.pdf
Thanks, Frederick, much appreciated.
w.
Graeme L. Stephens et al, An update on Earth’s energy balance in light of the latest global observations. Nature Geoscience Vol. 5 October 2012
URL: http://www.aos.wisc.edu/~tristan/publications/2012_EBupdate_stephens_ngeo1580.pdf
I think these estimates of energy imbalance are based on a 2011 paper by Hansen and others that revised the earlier (Hansen, 2005) estimates of energy imbalance to the new estimate of +0.58 W/m2.
(Earth’s energy imbalance and implications, Atmos. Chem. Phys., 11, 13421-13449, 2011)
Just so we’re all on the same page: If their sampling errors are different at different places then we are no longer talking about random variances in the production of the floats. We are talking about variations in the water temperature that aggravate the float. That is, the floats’ accuracy is vastly worse than stated under real-world conditions. And we’re only getting under 50 W/m^2 based on ‘unfavourable’ local conditions.
So we can’t take the float fleet as a set and reduce their errors based on the multiplicity of them existing in different places and measuring different depths at different times. That’s the no brainer. But because it is also the case that it is the temporally changing environmental conditions of a given float that are inducing these errors, then we cannot even average down the errors for a single float. Not based on the design and calibration of the float. To do so we would necessarily have to have the same float making repeat measurements at depth in the same conditions. And we simply do not have that occurring.
So if we’re talking about a design tolerance — appropriately converted however — of 7.5 W/m^2 then it says nothing about the error under real-world conditions. If I assume the various floats are being audited and verified for sanity checks, then we cannot state that any float has a known accuracy better than 50 W/m^2 until and unless it has been independently validated.
But since this is all purely observational skew — and thus purely correlative — it does not establish that we will continue to have any given audited amount of error at a specific location. To know that we’d have to know the conditions that lead to these errors and then go through the model, experiment cycle until we knew what we had to know.
Even if that was simply that we needed to build new buoys with additional instrumentation installed such that we could overcome the errors being induced.
I was quite disappointed when I saw this article. The whole thing is based on a fundamental error. You simply cannot use sqrt(N)/N to calculate the error, as described by numerous comments here. This should be crystal clear to everyone.
I have been wondering why sceptics have swallowed the claimed error of 0.005 C in ARGO data so easily. And I must say I am not satisfied to find out that the reason is that even prominent sceptics have no basic knowledge of error calculations.
The good thing is it is easy to make things better. Now you should start asking for proper error estimates for ARGO and any other data sets. If you don’t know the error of your measurements, your data is pretty much useless. For Argo data you can do the calculations yourself (or maybe better, hire a professional) since the raw data is available.
Hire a professional? Maybe a professional might offer to help Pro Bono, as I don’t think Willis is getting paid, I know I’m not getting paid.
I tried to execute your R code but the statement nc=open.ncdf(mldfile)
produces the following error:
Error in R_nc_open: Invalid argument
Error in open.ncdf(mldfile) :
Error in open.ncdf trying to open file Mixed Layer Depth DT02_c1m_reg2.0.nc
I thank you in advance for your suggestions.
You should verify that the file, mldfile, was properly downloaded from the url, mldurl.
I think the file needs to be downloaded in binary mode
download.file(mldurl,mldfile,mode="wb")
I think it also needs a
library(“ncdf”)
command (you must have figured that). And of course, you have to have the package ncdf installed.
Thanks for the clarification, Nick. Regarding the download, if it won’t download for some reason you can always just download it manually.
And yes, you do need the
library(ncdf)
for it to work.
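So for anyone following along at home, the corrected opening of the script is:

library(ncdf)   # requires the "ncdf" package to be installed

mldurl="http://www.ifremer.fr/cerweb/deboyer/data/mld_DT02_c1m_reg2.0.nc"
mldfile="Mixed Layer Depth DT02_c1m_reg2.0.nc"
download.file(mldurl,mldfile,mode="wb")   # binary mode, per Nick's fix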
w.
Willis
Always a joy to read your independent approach. After reading Jo Nova’s post, I checked out the 9 papers that cited Hadfield. One of them had its own estimate of the variance, 0.05 (°C)², the square root of which works out to about 0.22°C, not far from your 0.1°C. Another paper referenced by one of the 9 was very close to the Hadfield estimate of about 0.5°C, at 0.48°C. So we have three papers and now your estimate, all in the range of 0.1–0.5°C.
If readers are interested, I made the two papers available on the public Dropbox link (see my comment at Jo Nova–#39).
Thanks, Lance. In fact, although both you and Jo Nova seem to think that the Hadfield estimate is 0.5°C, that actually was not their estimate of the error in the Argo temperature field. Instead, it is a somewhat vague description of the difference between their cruise transect data and the Argo data for the same line of locations.
To get the error for an actual volume you need to take their results (which only relate to a single line through the ocean) and expand them to the 3-D mixed layer. The results of that analysis are the ones I quoted in the head post.
They later specify more exactly that the annual sampling error is 7 W/m2.
As far as I can see this is the only area-averaged error data in the study. It is well reported in that they’ve given the sampling error, the values for two different time frames (monthly and annual), and the area involved. Once I calculated the depth, it allowed me to convert their error (7 W-years/m2) into a temperature error of 1°C. Note that this 1°C is different from your reported “Hadfield estimate of about 0.5C” because the 1°C is measuring a different thing. It is measuring the annual error in a 10°x10°x53 metres block of the North Atlantic.
Finally, let me say that the claim that there is an “Argo error” of a certain size is inadequately specified. As my head post shows, the same density of measurements leads to a 1°C error in the North Atlantic mixed layer, but would give a global 0-700 metre error of about a tenth of that. Which one of those is the “Argo error”, 1°C or 0.1°C?
Without specifying the area (global or some defined sub-area) and the depth (0-700 metres, 0-53 metres, or …) and the time interval (month, year), the error is inadequately defined, and as such is without real meaning.
As a result, I would say that your claim over at Jo Nova’s is comparing apples, oranges, and potatoes …
Best regards,
w.
“Now, recall from above that Loeb2012 claimed an error of something like 0.005°C … which appears to be optimistic by a factor of about twenty.”
Because the matter has been raised before about the ability of the ARGO instrument itself, I can report the following:
Yesterday I spent a few hours calibrating Resistance Temperature Detectors (RTDs) and observed their drift relative to each other in a bucket of room temperature water. These are high-end 4-wire RTDs and are being read using an instrument that reads 0-100.0000 millivolts. The RTD is the type of device that is in the ARGO. The claimed 1-year best range accuracy is 0.06 C. The repeatability is, I think, within 0.03.
Findings:
The two RTDs I tested were initially different by 0.105 degrees, for which I entered an offset correction to one to bring them in line. The variability, one relative to the other, was almost always within 0.004 Deg C. That supports the contention that they are accurate to 0.01 degrees and undermines the claim that anything floating around in the ocean can read to a precision of 0.002, which is the minimum requirement to secure a believable precision of 0.005 (assuming they are calibrated). No way. First the devices are not precise enough, and second they are not accurate enough. The ARGO instrumentation is not claimed to be more precise than 0.01, and the 1-year accuracy is probably not better than 0.06.
A note about the claims that adding up a lot of measurements and calculating the average ‘increases the accuracy’. That only applies to multiple measurement of the same thing. The ARGO Floats are not measuring the same thing, they measure different things each time. Each measurement stands alone with its little 0.06 error bar attached. If there were thousands of measurements of the same location, the centre of the error band is known with great precision, but the error range is inherent to the instrument. It remains 0.06 degrees for any and all measurements. Knowing where the middle of the error bar is, does not tell us where within that range the data point really lies.
And speaking of lies: “…an error of something like 0.005°C.”
“A note about the claims that adding up a lot of measurements and calculating the average ‘increases the accuracy’. That only applies to multiple measurement of the same thing.”
Exactly right!
There was a discussion a few months ago in which I seemed to be the only one who was willing to recognize that measurements taken at different times and places cannot give an accuracy, when averaged, greater than the accuracy of each reading.
This can be easily proven by a simple thought experiment.
And assuming that disparate measurements can be treated using the same statistical methods as repeat measurements of the same thing seems to be done over and over again in climate science.
The entire subject of precision and accuracy is played fast and loose in climate studies.
Crispin: There are climatologists implying (without saying it openly) that if you measure a temperature with 1,000 thermometers with an error bound of 1 degree you get an overall accuracy of 0.001 degree.
Climatology is definitely a post-normal science.
Curious George:
Well, what they really mean is that they know, using statistical analysis, where the middle of the error range is. The innumerate public upon which CAGW relies has little idea about things like this. If the measuring device has an error of +-1 degree, each measurement is known to +- 1 degree. The reason we can’t know the ocean temperature to 0.005 C is that the ARGO measurements are not made in the same locale.
I use computer-logged scales a lot. Let’s say we know the number is going to be transmitted from the read head to the nearest gram and that it is varying – like the “Animal” function of a platform scale which can weigh a live, moving animal. We read it 100 times per second and average the readings. That is a completely different ‘thing’ from getting 100 measurements from 100 scales with 100 different objects on them. Remember these ocean temperatures are used to calculate the bulk heat content, not the ‘average temperature’. Finding the average temperature would require making well-spaced measurements in 3D. Is anyone claiming to have done that? Do they average the temperatures first or calculate the heat content per reading? It matters. Volume +-1% x temp +-1% cannot give a heat content +-0.1%. Adding up 1000 such results does not give a total +-0.1%.
Suppose the scale I am reading 100 times is actually jiggling slightly and the voltage from the sender is slightly variable (because I am reading the signal more precisely, ‘down in the mud’). Getting 40 readings of 1000 g and 60 readings of 1000.1 g from a 1 g resolution scale tells me that the mass is very likely to be 1000.06 g with a high level of confidence (a real confidence, not an IPCC opinion). The read head does not have to be set up to give one gram readings to do this. The numbers might only be ‘certifiable’ at a 5 g resolution, but if I have access to the raw voltage such little sums can be done. The ‘certified’ accuracy of 5 g is for every single measurement reported. That is a very different situation. I want many ‘opinions’ of a single mass, or a varying mass or single temperature and then will use normal methods to calculate the centre point of the range and a StD and CoV.
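A quick R sketch of that dithering effect, with made-up numbers (a true mass of 1000.06 g, integer-gram readings, half a gram of real jiggle in the signal):

set.seed(1)
true_mass <- 1000.06                               # grams
reads <- round(true_mass + rnorm(1000, 0, 0.5))    # 1 g resolution readings
table(reads)                                       # mostly 1000s, some 999s and 1001s
mean(reads)                                        # ~1000.06: the average beats 1 g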
In order to take ocean measurements and use them, it must be remembered that each reading ‘stands alone’. There is no way to get even a second opinion of each. Each must therefore be treated according to their certified accuracy (akin to the 5 g certification) and used to calculate the heat content of the local water.
Yes they are measuring ‘the same ocean’ but the numbers are used to show local variation. Averaging them and claiming both to have considered local spatial variation and to have improved the accuracy is ‘not a valid step’.
There are lots of analogies. Measure the temperature in 1000 homes within 1 degree. What is the average temperature in all homes and what is the accuracy of that result? If you measured 5000 homes to within 5 degrees, do you get the same results? Now measure 100 randomly selected homes to within 1 degree and estimate the temperature of the other 900. How good are those numbers? +-0.1 degrees? I think not.
Lastly, use 4000 available numbers not randomly distributed (ARGO floats) and estimate the temperature at all other possible positions a float might be located. Calculate the bulk temperature of the ocean. What is the result and what is the accuracy? Ten times better than any one reading? Calculate the heat content of the ocean. Ditto.
Lovely sources: Darrell Huff, How to Lie With Statistics (W.W. Norton, 1954), Chapter 4: “Much Ado about Practically Nothing”, pp. 58-59 which is quite apropos. I found it at the webpage
http://www.fallacyfiles.org/fakeprec.html which cites T. Edward Damer, Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments (Third Edition) (Wadsworth, 1995), pp. 120-122: “Fallacy of Fake Precision”
and
David Hackett Fischer, Historians’ Fallacies: Toward a Logic of Historical Thought (Harper & Row, 1970), pp. 61-62: “The fallacy of misplaced precision”.
Alias: Fake Precision / False Precision / Misplaced Precision / Spurious Accuracy
Taxonomy: Logical Fallacy > Informal Fallacy > Vagueness > Overprecision
Your quandary with the standard deviation of the mean error (“Unfortunately, while we know the mean error (7 W/m2 = 1°C), we don’t know the standard deviation of those errors.”) prompted me to search in vain for any association between sea surface temperatures, errors, and the Poisson statistical distribution.
http://ds.data.jma.go.jp/tcc/tcc/library/MRCS_SV12/figures/2_sst_norm_e.htm
http://www.pmel.noaa.gov/pubs/outstand/haye1160/surfaceh.shtml
“The mean latent heat flux is approximately 45 W m−2, which again agrees with Reed’s result, and the standard deviation is about 12 W m−2.”
There was a method to my madness and perhaps a more exhaustive search might be fruitful. The Poisson distribution has an interesting property that its variance is equal to its mean. And standard deviation is the square root of variance. With the Poisson distribution, if you know the mean, you know the standard deviation.
Also, a crawl space is a nice enough “downstairs” for your house guests, not much more annoying than bats upstairs in a cabin I rented.
Neil, you say: “The Poisson distribution has an interesting property that its variance is equal to its mean.”
Dang, I’d forgotten that. The distribution of errors is at least pseudo-Poisson, so the best guess is that the variance equals the mean: in temperature terms, a mean error of 1°C gives a variance of 1, and thus a standard deviation of 1°C as well. That would increase my estimate by sqrt(2)/sqrt(1.25), which is about 25% larger.
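In numbers:

sqrt(118)/118 * sqrt(1^2 + 1^2)    # ~0.13 C with SD = 1 C
sqrt(2)/sqrt(1.25)                 # 1.26, i.e. about 25% larger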
w.
you could randomly sample Argo and very likely arrive at the normal distribution due to the CLT.
for example, rather than trying to average every argo in a grid, then averaging all the grids, sample them randomly instead. Pick the grids randomly and pick a float randomly in each grid. Repeat for each time slice (perhaps hourly?). Then average these to compute your trends.
what you will have afterwards should be normal, allowing you to do all sorts of argo analysis that climate science hasn’t yet dreamed of. And more importantly, your average should be much more reliable, as should your variance.
averaging averages is statistical nonsense dreamed up by climate scientists. it smears the average and hides the variance.
and since your data is normal when sampled randomly, the standard deviation gives you the standard error.
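here’s a sketch of the idea in R, with made-up per-gridcell readings standing in for the real argo data:

# one vector of "float readings" per grid cell, deliberately skewed
cells <- replicate(500, rexp(20, rate = 1/15), simplify = FALSE)

one_sample <- function() {
  picks <- sample(length(cells), 100)                      # 100 random grid cells
  mean(sapply(picks, function(i) sample(cells[[i]], 1)))   # one random float in each
}

means <- replicate(10000, one_sample())
hist(means)    # near-normal despite the skewed inputs, per the CLT
sd(means)      # the standard error of the randomly sampled average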
@ferd
I don’t think it’s as easy as you make it sound. As I recall, the floats run their measurement cycle only 3 times per month (actually every 10 days). You are unlikely to get much in the way of coordinated measurement, and certainly not hourly.
Crispin…
I agree with you. Multiple measurements increasing the accuracy only applies when you are measuring the same thing! So the only thing that is varying is the measurement, not what is being measured.
So if I am interpreting correctly, the error in measurement of temperatures from Argo is about 1C, and the estimated temperature change in sea water temperature is on the order of about 0.1C. Then we can say nothing about the average temperature of the upper sea layer ~60 m deep worldwide, so we don’t know whether the ocean in that mean depth of water is increasing, decreasing, or not changing.
Thus, we don’t know how the temperature of the upper ocean waters behaves, and we will not until the uncertainty of the temperature measuring device(s) has been reduced by an order of magnitude or more, and we have measured for a sufficient number of years at a spatial density sufficient to compute an average temperature change of sea water to ~60 m in all the seawater around the world.
I think the estimated ocean t change WAG is on the order of .2 degrees per century.
My understanding is that “sqrt(N)/N” only applies where the data is homogeneous. That is, it would apply if multiple measurements were being made of the temperature of one sample (say a cubic metre) of ocean. It does not apply where data is heterogeneous, such as in multiple measurements of entirely different bits of ocean. If it did apply we could have millions of boat owners stick a finger in the water and with enough measurements we’d get the error of finger-temperature-estimates down to thousandths of a degree.
I would expect that any single reported measurement was actually multiple samples taken over a short period of time, like a few hundred to thousands over a 1 sec period (reported at 1 Hz). But you would need to look at the ARGO design spec to know exactly what that was.
At the point where you want a measurement, it is easy enough to take a bunch of samples for your measurement, and the design team would have to know about the issues with a single sample and account for it (well, they might not, but boy would that be dumb).
I found this spec for one of the buoy designs.
Now what is the time constant of the sensors?
Likely dependent on the mass and thermal conductivity of the sensor housing and the water being tested.
I haven’t had the chance to read the doc’s posted on the topic.
I think it is sufficient that the measurand is well defined and that it can be characterized by an essentially unique value. Ref. ISO Guide to the expression of uncertainty in measurement. Section 1.2 and 4.2.3.
I further believe that the average temperature of a defined volume of water over a defined period is sufficiently well defined in this respect. The sampling should be random and representative of the volume and the period. The standard uncertainty of the average value can then be calculated as the standard deviation of all your measurements divided by the square root of the number of measurements.
This however does not provide you any information about the variability of your measurand over time, or the average value for another volume. The only thing it provides you information about is the standard uncertainty of the average value for the defined volume over the defined period. If you randomly divide your measurements into two groups, and calculate the average value and the standard uncertainty for each of these two groups, the standard uncertainty of the average value provides you information about how large a difference you can expect between the two average values.
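A small R sketch of that split-half check, with made-up measurements standing in for the real ones:

x <- rnorm(400, mean = 10, sd = 1)     # measurements of one volume over one period
u <- sd(x) / sqrt(length(x))           # standard uncertainty of the average

grp <- sample(rep(1:2, each = 200))    # random split into two groups
abs(mean(x[grp == 1]) - mean(x[grp == 2]))   # difference between the two averages
2 * u                                        # the expected order of that difference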
Yes, the water is moving. The floats are moving as well. I do not see how an average latitude bias would not, over time, be introduced.
Also I do not see how there would not be a tendency for the floats to gather in certain ocean currents and locations; also potentially producing a systemic bias.
Are either of these factors adjusted for, and how in the hell could one do that adjustment?
another good question to me…
Well, from my point of view, an error bar under these conditions is an underestimated error bar.
But “we don’t know” seems to be for some reason unacceptable.
Indeed. Of course, fortunately, our land-based stations are not moving nearly as much.
So, in our infinite wisdom (sarc), we change stations, eliminate long-running stations, and homogenize data up to 1200 km away.
David, the distribution of the floats is indeed a concern. However, they seem to scatter widely and reasonably evenly. Here’s an analysis I did a couple of years ago showing the number of samples per 100,000 sq. km. over the duration of the Argo program.
As you can see, the distribution is pretty even.
w.
Thanks Willis, yet your answer confuses me;
———-
“Here’s an analysis I did a couple of years ago showing the number of samples per 100,000 sq. km. over the duration of the Argo program.”
=========================
Is this the total measurements over the duration of the program? If so, how does that illustrate how they have changed relative to time, i.e., the first year versus the last year?
David A, that’s total Argo profiles up to the date of the analysis, from memory 2013. It says nothing about how distribution has changed over time.
w.
“As you can see, the distribution is pretty even.”
I suppose this depends on what one considers “pretty even”.
Anywhere between 0 and >96 measurements per 10k sq.km.?
And large patches and latitudes showing a lot of clumpiness.
And says nothing about temporal distribution.
The harder I look, the worse it seems.
In fact, I can honestly say IT IS WORSE THAN I THOUGHT!
Loving them satellites more every day.
So perhaps the error bars are even larger than indicated?
Willis,
Would it perhaps be possible to extract an independent estimate of the accuracy by looking at ARGO float pairs that happen to be in each other’s neighborhood? It’d be interesting to see one of your famous scatter plots of average (squared) temperature difference versus distance.
Frank
Willis,
Interesting question, but I don’t think you’d get accuracy out of it. Instead, you’d just be measuring the “correlation distance”, the distance at which the average correlation of temperatures drops below some given level.
w.
Willis.
Off topic sorry, but in your albedic meanderings, you mention albedo decreasing with temperature up to 26C, what would be the mechanism for this decrease?
One factor would be cloudiness. On average cold seas have more clouds than warm ones (the ITCZ is an exception).
Another would be the amount of nutrients in the water. Warm water can hold less nutrients in solution than cold. This means much fewer pelagic organisms and consequently clearer, more translucent water that absorbs more light.
Many people think that warm tropical seas are biologically rich. They are not, they are biological deserts. There are parts of the South Central Pacific where the particle count is lower than in lab quality distilled water.
Thank you tty.
I thought it must be something like fewer clouds, but I hadn’t thought about the clearer water, and I should’ve, because I came from the UK and now live in the Philippines.
No, hang on, that’s not right: murky water would not have higher albedo, maybe less.
“Murky” water would have higher albedo. Small particles scatter light, and part of it is scattered back towards the surface/sky. And if you have practical experience of the sea you will have noticed that tropical waters are deep blue while oceans at higher latitudes are greenish and distinctly lighter in color.
Backscattering is lowest in the tropics as you can see here:
http://oceancolor.gsfc.nasa.gov/cgi/l3
However it is true that this effect is relatively small at lower latitudes. ERBE showed that the ocean surface albedo only varies between about 0.08-0.13 at low latitudes and only rises above 0.20 north of 50 N and south of 60 S (=south of the Antarctic convergence):
http://www.eoearth.org/view/article/149954/
“Another would be the amount of nutrients in the water. Warm water can hold less nutrients in solution than cold.”
I believe this is incorrect.
For solids and liquids, solubility in water generally increases with temperature.
For gasses, it decreases.
I believe cold water is often more nutrient laden because it often is the result of upwelling of deep water which has been enriched in nutrients for reasons other than temperature.
“There are parts of the South Central Pacific where the particle count is lower than in lab quality distilled water.”
I think if this is the case you need to find a new source of distilled water.
One thing I have noted is that, for the northern hemisphere, colder periods coincide with increased plankton production, which in turn increases the biomass of all fish species in the relevant areas. The gadoid outburst is one such episode. These cold periods obviously follow warm periods, so the question I have is: how much energy can massive increases in plankton production consume?
I’m a lazy fellow at heart and one of the laziest pastimes I have is reading. I love reading science papers, posts and research, because of all the grunt work, measurements, searching for data and stuff that you scientists do. It’s something I could never be bothered to get off my arse to do myself. It’s also why I smile when a warmist asks me “are you a scientist?” Why would I go to that amount of trouble when there are so many people running around measuring and collecting data for me?
Being a lazy person, I also have an incredible ability to cut through the superfluous and see the fastest and easiest ways to an outcome.
So onto the topic of oceans. What percentage of water in the ocean is being measured? Answer, a tiny fraction. As a lazy person that’s all I need to know, but hey, if it makes a scientist feel good about themselves and gives purpose to their lives to run around like a headless chicken claiming this or that about its temperature, who am I to spoil their game?
Is there any information that I can look at that can give me an idea of temperature? As far as I can tell, there is only one. It’s quick and easy and suits lazy people to a tee! It’s also the only thing that has relevance to the global warming scaremongering. That is global sea ice. A lazyman like me can take in the info at a glance of a couple of pictures and go “nope, the ocean isn’t warming”. End of story, no need to send me a cheque!
Sea level is probably a better indication of ocean temperature, if you can remove all the complications associated with it.
“Sea level is probably a better indication of ocean temperature, if you can remove all the complications associated with it.”
Piffle.
There are so many things that affect sea level besides temp, it would be looking for a needle in an ocean of needles.
Tony, June 6, 2015 at 11:21 pm:
This is an important point and has been discussed before in relation to the claimed 0.005 K accuracy of ARGO measurements.
If you measure the same thing 100 times you divide the uncertainty by 10 (with all assumptions about the nature of the errors etc.).
The fundamental folly is that 3000 ARGO measurements are NOT 3000 measurements of the same thing: the global average temp or whatever, because an average is a statistic itself, not a measurable quantity.
BTW Willis, the Met Office have just released the NMAT2 data that Karl et al. used to remove the hiatus.
http://www.metoffice.gov.uk/hadobs/hadnmat2/data/download.html
Unfortunately for the moment it is only in less accessible NetCDF format.
Since you are proficient in R maybe you could extract something more useful.
It is to the credit of the M.O. that they have been responsive to demands for the data, but it is not yet provided in the more readily accessed ASCII formats like their other datasets.
Neither is there any global time series that can be compared to Karl’s manipulations.
Thanks Mike.
I assume that based on: “We know that Argo can measure the temperature of the North Atlantic mixed layer with an error of ±1°C.” … then instead of “…then with the same measurement density we should be able to measure the global ocean to … 0.1°C ” … it should be ±1°C.
In the same way, having lots of folk put their fingers in the water, doesn’t help with accuracy.
Mike, I think you posted first but mine is higher up.
“The fundamental [folly] is that 3000 ARGO measurements are NOT 3000 measurements of the same thing: ”
Precisely! But these guys have been getting away with an additional misrepresentation: that more precisely locating the centre of the error range with multiple measurements of the same thing (which ARGO’s do not) reduces the error range. It just uses statistics to better locate the mean. These guys are claiming that the error range is reduced, even though it is inherent in the instrument.
Given that the ARGO measurements are of different things, no such claim can be made for any numbers – they all stand alone. Without even starting to look at the representativeness of the sampling, the instrument itself cannot produce a temperature value to within 0.005 degrees C.
How does this get past peer review?
“Given that the ARGO measurements are of different things, no such claim can be made for any numbers”
Measurements are taken at least once per second (and this spec doesn’t actually mean they aren’t taking multiple submeasurements in the 1-second period), per the ARGO requirements spec.
Therefore it is not unreasonable to call those multiple measurements of the same thing at least for each reported data point.
“Given that the ARGO measurements are of different things, no such claim can be made for any numbers – they all stand alone.”
So glad to see this being recognized. And it is not just for ARGO numbers … but for all temps measured at all the surface stations.
This same statistical fallacy is used, or so it seems, to overstate the accuracy of the average global temperature taken from all surface readings.
I have seen many instances of this.
They are treated as if they are multiple measurements of the same thing.
Even if it was just different days at the same location, it is still a different quantity being measured, and errors are compounded, not reduced, by adding a bunch of them up.
Menicholas
Be careful with terminology: when things are multiplied the errors are literally compounded. When averaged they are treated differently. One can average readings with different accuracies, but that is not compounding either. A few inaccurate numbers are more like ‘contaminating’ the result. Please see my note above on false precision.
micro6500 multiple readings per second:
I agree that is how they get a number to report. So suppose there were 10 readings taken per 1 second. The float is moving at the time. Well, hardware cannot get a 0.01 degree reading in 0.1 seconds. It literally takes time to get a reading that good – and it is done by taking multiple readings and ‘treating’ them. So we are up against physical limits. How good is the motherboard? How fast can it produce a certifiable 0.01 degree reading? It means getting 0.005 accuracy to report ‘correctly’.
Ask: Can it do that, that fast? Check a really expensive instrument. It will give a reading to 0.01 precision but the number may not be that accurate: precision 0.01 and accuracy 0.06 (measured against a reference temperature). A $5000 instrument cannot produce 10 x 0.01 +-0.005 readings per second. How good is an ARGO? Someone surprise me! The RTD is not that accurate.
Oceanographers are using the reported temperature, which is +-0.03 to 0.06. Whatever the numbers are, averaging two locations does not increase the accuracy of the result.
Your phone digitizes at 48 kHz or more, could be over 200 kHz, and in the ’80s there were A/D converters running in the MHz range.
Sampling speed isn’t an issue. The time constant could be, but there’s no reason it could not sample multiple times per second, and it could have multiple sensors as well.
They built to a spec, and had to test to a spec, all of these issues could be accounted for.
I’m not saying they are, but they could be.
It is unreasonable. Time is not the determining factor for uniqueness.
Each measurement is from different equipment, at a different position, at a different depth, at a different pressure level, under different conditions (clear, storm,…), through different sensors, with different electronics, different wires and possibly different coding for the hardware chipsets.
This equipment is not managed for consistent quality. It is put to sea, literally, and left for the duration. Every piece of deployed equipment is liable to degrade over time.
Multiple measurements taken by the same individual piece of equipment might provide a reasonable average for that specific measurement, but as equipment ages while journeying through the oceans surfacing and plummeting through the depths not every measurement offers any assurance of continued fantastic accuracy or precision.
The equipment accuracy at time of deployment is likely to be the best of its lifespan. Without thousands of floating technicians maintaining quality, that quality level is temporary and possibly illusory.
Simple it is not.
First I wasn’t referring to measurements from a different buoy.
Second, I never said it was simple.
What I said was that based on mission requirements, there are technical solutions that provide stable, repeatable measurements for a long mission lifecycle. I followed that with a couple of methods by which a single buoy could provide highly reliable temperature measurements: multiple sensors, and multiple sub-samples per reported reading.
Multiple sensors could provide higher accuracy, multiple samples higher precision.
Lastly, the spec and design docs would be key to knowing what was done.
Crispin in Waterloo but really in Yogyakarta, June 7, 2015 at 5:08 pm:
You bring up a good point about the time constant of the measuring apparatus. All that is being reported is the static accuracy (0.005 C), which is believable. They say nothing about the dynamic accuracy.
I encourage everyone to get a grip on the difficulty involved in getting an accurate, repeatable measurement to a precision of 0.01 degrees C. It is not done in a flash. Look at the very precise instruments available on the Internet. Look at the frequency at which readings can be obtained and the precision for different time intervals.
Each ARGO reading made is in a different vertical and spatial position. Each one is presumed to be valid. Very accurate readings take about three seconds. A powerful specialised instrument is possible. Is that what the ARGO has in it? How accurate is the one second reading, if that is the frequency?
All the specs on how accurate temperature apparatus works are in the manuals. There is a lot of thought in it, but nothing floating around the oceans provides ‘averages’ to 0.005 degrees!
Comments please Willis? Is your “0.1°C” flawed?
BTW, Phil Jones uses the same technique to claim his ridiculously high accuracy for weather station temperatures. His paper was on the CRU page (which now has bad links). 7 billion people with their fingers in the air estimating temperatures, should score at least the equivalent accuracy.
Willis:
What are those critters?
An old Yankee would guess fox of some sort or juvenile coyote with a real coloring problem.
Eastern New England, here. Back in the day, we never saw fox at all, as they were few and shy. Now they can be seen in quiet places, but seldom.
As for coyote, I have never seen one, but from 11:00PM to 2:00 AM they can be heard regularly.
Raccoons can climb up to upper cabinets, get cereal from boxes, open refrigerators, trash your whole house, and look at you to say “who me?”.
Ask me how I know.
A raccoon can be tamed; sometimes a fully wild raccoon can act tamed, if it is in its interest to do so.
Never think a raccoon is domesticated, ever.
Ask me how I know.
micro6500…
Measuring multiple times with the same instrument, which has the same biases, is not the same thing as measuring the same cubic metre of sea water with multiple instruments. The same quantity has to be measured multiple times with multiple instruments to increase accuracy in accordance with the math used in this blog post.
Scott, multiple measurements by the same instrument are used to remove instrument errors, and can be used to improve the precision of that measurement. Unless ARGO has multiple sensors (which it could), it, as well as just about all of the temperature measurements (land as well as satellite), is made with a single sensor.
Yes, Micro6500,
I don’t think that the satellite measurements claim 0.005 C accuracy since it is only one sensor per satellite. I think they claim only 0.1 C accuracy. Multiple measurements with one sensor improves accuracy but does not remove instrument bias. It seems to me that the accuracy of the average of all ARGO measurements for making an average temperature result of the whole ocean would depend on the range of measurements you are talking about. For example, if the water temperature at the equator were 100C and 0C at the poles then the accuracy you could claim would not be as good as if the equator were 60C and the poles were 10C. I don’t see anything in the error calculations that takes this into account. The closer all the ARGO thermometers are to measuring the same thing, the more accurate they could be as an average. Correct me if I am wrong.
I don’t buy the accuracy they claim for ocean temp. I’m not sure I buy the accuracy of a single buoy, though it could be possible. I do think the measurements are most likely precise.
“The same quantity has to be measured multiple times with multiple instruments to increase accuracy in accordance with the math used in this blog post.”
Yes, otherwise it is the same as using an instrument to calibrate itself.
As far as I can tell, this is one of the few wiki articles that is untainted by biased alterations.
Everyone should know all of this stuff:
http://en.wikipedia.org/wiki/Accuracy_and_precision
Micro6500 – I do not believe that ISA (the Instrument Society of America) or ANSI (American National Standards Institute) agree with that, and I know of no standard that allows it, including the Nuclear Regulatory Commission requirements.
Why is measuring something once a second better than measuring something continually with an analog device?
Menicholas
June 7, 2015 at 1:58 pm
“The same quantity has to be measured multiple times with multiple instruments to increase accuracy in accordance with the math used in this blog post.”
Yes, otherwise it is the same as using an instrument to calibrate itself.
It can reduce measurement noise. The biases remain.
Foxes. Gray foxes.
w.
When steam is coming from the oceans, and the surface is covered with boiled fish, we will know for sure.
In the meantime, I will point out that sometimes when I am in the sea I detect a warm patch around the lower half of my body. It dissipates quickly, though. If this phenomenon is widespread (and I know other people have had the same experience) eventually the accumulated heat will raise the temperature of the oceans by noticeable amounts.
RoHa,
If there is a little kid standing near you when this happens, I think I can tell you how the warm patch of water forms.
Off topic, but that Win 10 upgrade button reappeared today. Here’s the easy and official way to squash it permanently –
Can I turn off the notifications?
Yes. Click “Customize” in the System Tray [that’s the area in the lower right where the upgrade button appears. Hit the UP arrow to expand it] and turn off the Get Windows 10 app notifications in the menu that comes up.
Mike
At 1905 Z I tried that.
Little pesky icon has gone – as I write.
Definitely – fingers crossed.
My thanks!
Auto
@2020 Z today – the delightful icon [well, I do not wish to be offensive] is back.
Tried again.
Maybe 3rd time lucky. Perhaps.
Auto
In this case we have no idea what the energy imbalance at ToA may be. See CERES EBAF Net Balancing.
Willis
This is a very interesting article. The answer to your question is we do not know and cannot tell.
I would like you to consider the following, and I would like to see your views.
As I repeatedly state, the key to understanding the climate on this water world of ours is to understand the oceans. If one is concerned with GLOBAL warming, then since the oceans are the heat pump of the planet and distribute surplus solar energy which is input in the equatorial and tropical oceans, in 3 directions (namely pole-wards, and via ocean overturning and the thermohaline circulation to depth), it is only the oceans that need to be measured, investigated and assessed. ARGO should have been rolled out when the satellite measurements were launched in 1979. This was a missed opportunity, but that is water under the bridge.
All the data sets have their own issues, but with the exception of the CO2 Mauna Loa and perhaps the satellite temp sets, all have huge unacknowledged margins of error. Now, I do not know whether CO2 does anything of significance, but what we do know is that Climate Sensitivity (if any) to CO2 is less than a combination of natural variation plus the error bounds of our various measuring devices, and that is why we have been unable to detect the signal to CO2 in any temperature data set. Thus if natural variation is small and the error bounds are small, then Climate Sensitivity is small. If natural variation and error bounds are large, Climate Sensitivity could theoretically be large. So maybe there is a role for CO2, and your article begs the question: if the oceans are warming, why and how is this taking place?
In praise of our host, it is interesting to consider whether all Watts are equal, or does it matter where within the system the Watts may exist or penetrate? Personally, I consider that not every watt is of equal significance.
You state: “The “mixed layer” is the top layer of the ocean that is mixed by both the wind and by the nightly overturning of the ocean. It is of interest in a climate sense because it’s the part of the ocean that responds to the changing temperatures above.” This raises the questions: precisely what energy is getting into the mixed layer? Is it simply solar (the amount of insolation may vary from time to time due to changes in patterns of cloudiness, or levels of atmospheric particulates) or is it solar plus DWLWIR (the latter increasing over time due to increasing levels of CO2 in the atmosphere, whether of anthropogenic origin or otherwise)?
You state: “My rule of thumb is simple. One watt per square metre for one year warms one cubic metre of the ocean by 8°C “ Let us consider that in relation to the K&T energy budget cartoon and the absorption characteristics of LWIR in water.
According to K&T, there is on average some 324 W/m2 of backradiation “absorbed by the surface”, and some 168 W/m2 of solar insolation “absorbed by the surface.” What is interesting about this is that the oceans are a selective surface; the effect of this is that there is about as much DWLWIR absorbed in a volume represented by just 3 MICRONS as there is solar energy absorbed in a volume represented by a depth of about 3 metres!
The above is a rule of thumb assessment. In practice almost no solar is absorbed in the top few microns, but some 50% of all incoming solar is absorbed within the top 1 metre, 80% within the top 10 metres, and only 20% makes it past 10 metres to depth, with some small part getting down to about 100 metres. By way of contrast, the absorption characteristics of LWIR are:
It can be seen that 50% of LWIR is fully absorbed in 3 microns and 60% within 4 microns. However, that is the vertical penetration of LWIR; since DWLWIR is omni-directional, with much of it arriving at a grazing angle of less than 30 degrees, it follows that at least 60% of all DWLWIR must be fully absorbed within just 3 MICRONS.
The upshot of the above is that in the volume represented by the top 3 metres of the ocean there is about 109 W/m2 of solar (i.e., 168 W/m2 x 65%), and in the volume represented by the top 3 MICRONS of the ocean there is about 194 W/m2 of DWLWIR (i.e., 324 W/m2 x 60%). In ballpark terms there is nearly twice as much energy from DWLWIR contained in a volume a million times smaller! Pause to consider the implications that this inevitably leads to.
If DWLWIR is truly being absorbed and if it is sensible energy capable of performing sensible work in the environ in which it finds itself, it would cause the oceans to boil off, from the top down, unless it can be sequestered to depth, and thereby diluted by volume, at a rate faster than the rate that it would otherwise be driving evaporation.
The first question to consider is how much evaporation would be driven, and at what rate, if 3 MICRONS of water are receiving and absorbing 194 W/m2 of DWLWIR. Using your rule of thumb (8°C per watt per year per cubic metre), the temperature (if my early-morning maths after Saturday-night partying is right) is raised by about 16.5°C per second. In tropical seas of about 27°C to 31°C, the entire top few MICRONS would be boiling within 4 seconds. Whilst the oceans are vast, after some 4 billion years there would not be much ocean slopping around the surface of planet Earth. The oceans would be in the atmosphere, having been boiled off from the top down.
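A quick numerical check of that rate, using the rule of thumb and the figures above (a minimal sketch in R; the 194 W/m2 and the 3-micron depth are the assumptions set out in this comment, not measured values):
[sourcecode]
# Sanity check on the claimed skin-layer warming rate.
# Rule of thumb: 1 W/m2 for one year warms 1 m3 of ocean by ~8 C.
flux  <- 194       # W/m2 of DWLWIR assumed absorbed (figure from the comment)
depth <- 3e-6      # m; the assumed 3-micron absorption depth
secs_per_year <- 365.25 * 24 * 3600

warming_per_year   <- 8 * flux / depth               # C/year for the thin layer
warming_per_second <- warming_per_year / secs_per_year
warming_per_second                                   # ~16.4 C per second
[/sourcecode]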
So how is the DWLWIR absorbed in the top few MICRONS mixed into the mixed layer, and at what rate?
First, there is conduction. This is a non-starter, since the energy flux at the very top is upwards (the top microns, and indeed millimetres, are cooler than the bulk below) and (unless we are wrong about how energy can transfer) energy cannot swim against the direction of energy flow/flux. See the ocean temperature profile, with plot (a) being the nighttime and plot (b) the daytime profile.
http://disc.sci.gsfc.nasa.gov/oceans/additional/science-focus/modis/MODIS_and_AIRS_SST_comp_fig2.i.jpg
Second, it cannot be ocean overturning. As you note this is a diurnal (“nightly”) event so for example during the day it is not mixing the DWLWIR energy being absorbed in the top 3 MICRONS and which is causing that volume to rise in temperature at a rate of 16.5°C per second. In any event, even when operative, it is (in relative terms) a slow mechanical process quite incapable of sequestering the energy absorbed in the top few microns to depth at a rate faster than that energy would otherwise drive copious evaporation.
Third, there is the action of the wind and the waves. However, this again is, in relative terms, a slow mechanical process which is also inoperative, or effectively inoperative, for much of the time. Consider that according to Stanford University, “The global average 10-m wind speed over the ocean from measurements is 6.64 m/s” (see: http://web.stanford.edu/group/efmh/winds/global_winds.html),
which is the speed horizontal to the surface. 6.64 m/s is just under 24 km/h, which means the average condition over the oceans is BF4, covering the range 19.7 to 28.7 km/h. If the average condition over the ocean is BF4, it follows that much of the ocean must, for much of the time, be experiencing conditions of BF2 and below (after all, we know that there are storms, cyclones and hurricanes, and these have to be offset elsewhere by benign conditions). BF2 is described as “Light breeze. Small wavelets. Crests of glassy appearance, not breaking” and BF1 as “Light air. Ripples without crests.” Note the reference to “glassy appearance”: even in BF2 conditions the surface is not being broken. The surface tension of water is such that in these very light conditions there is no effective mixing of the very top surface of the ocean by the wind and waves (there are no waves, just ripples or at most wavelets). Indeed, there must be areas (e.g., in harbours, inland lakes, particularly crater lakes) where for lengthy periods the conditions encountered at surface level are BF0, with effectively no wind and mill-pond calm, which means that the top MICRONS in these areas are not being mixed at all.
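As a cross-check on that Beaufort placement, the standard empirical relation v = 0.836 · B^(3/2) m/s gives the same answer (a minimal sketch in R; the relation is a textbook approximation, not from the Stanford source):
[sourcecode]
# Convert the Stanford mean ocean wind speed to km/h and a Beaufort number.
# Empirical relation (an assumption here): v (m/s) = 0.836 * B^(3/2).
v_ms  <- 6.64                    # global mean 10-m ocean wind speed, m/s
v_kmh <- v_ms * 3.6              # ~23.9 km/h, as stated above
B     <- (v_ms / 0.836)^(2/3)    # ~4.0, i.e. Beaufort force 4
c(v_kmh, B)
[/sourcecode]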
I have never seen anyone put forward a physical process which could effectively mix the top MICRONS at a vertical penetrative rate faster than the rate at which copious evaporation would be driven by the DWLWIR energy absorbed in these few MICRONS. If this energy cannot be sequestered to depth (and thereby diluted and dissipated) at a rate faster than the rate of evaporation, there is a significant issue.
I do not profess to have answers, but I do see significant issues such that my considered view is that if the oceans are warming this is only because of changes in the receipt of solar insolation or slight changes in the profile and distribution of currents (laterally and/or vertically). It would appear to be the result of natural phenomena, not anthropogenic activity (unless shipping and/or fishing is having an adverse impact on a micro biological level because as you note in your article, biology plays a role in the mixed layer).
I look forward to hearing your comments.
PS. You state: “The “mixed layer” …is of interest in a climate sense because it’s the part of the ocean that responds to the changing temperatures above.” However, is it not the oceans that warm the air above them, and are responsible for the air temperatures immediately above them, rather than the air that heats the oceans?
If one is in the middle of the ocean, and if there is little wind speed, the air temperature is almost the same as the ocean surface temperature day and night.
Further, the heat capacities of these media are such that air a degree or so warmer than the ocean could perform little heating of the ocean. It is the sun that heats the oceans, although the amount of solar insolation received is variable.
Richard: Isn’t there a big mismatch, something like 50%, between the claimed absorbed energy at the surface and the total evaporation from the oceans? There is far too little evaporation to represent that much incoming energy. The water is not heating enough. Where is the error? How is the energy transferred to the lower atmosphere without evaporating a heck of a lot more rain, or does it condense immediately, transfer heat to the air, then precipitate in the top few mm of the water surface?
I think everyone who is interested in this, get an IR thermometer and measure the sky.
Those 324/168 W/m^2 numbers require a lot of averaging over conditions.
Clear-sky Tsky in the 8-14µ range has a BB temp from, say, 0°F down to -70 to -80°F (at N41, W81); clouds increase the temp, from thin high clouds adding 5-10°F all the way to cloud that reads only 5-10°F colder than the air temp.
Clear sky with low humidity is in the range of the 168 W/m^2; the 324 would have to be humid with heavy cloud cover some place warm. And while this might be prevalent in a lot of places, I can’t imagine it could average that high.
More measurements need to be taken.
Instead of wasting time taking more measurements, be aware that an IR thermometer is designed to operate in a wavelength range called the ‘atmospheric window’, i.e. it is not supposed to see radiation from the atmosphere (sky).
Which is why I included the wavelength. But you can turn that measurement into W/m^2 and then add the specified CO2 flux, and it is still far below the quoted numbers.
It also means that most of the surface sees that temp, which as you say is a window to space.
And when you point that IR thermometer at the sky, WHAT are you really measuring? What material is the IR thermometer calibrated for (air, wood, steel, or water)? Makes a big difference. Different materials need different lenses. FACT. Then what is in the range of focus for the lens on the IRT? MORE GIGO
“And when you point that IR thermometer at the sky, WHAT are you really measuring? What material is the IR thermometer calibrated for (air, wood, steel, or water)? Makes a big difference. Different materials need different lenses. FACT. Then what is in the range of focus for the lens on the IRT? MORE GIGO”
No, they don’t need different lenses; if anything, think of it as a pinhole camera with one pixel.
What it’s detecting is a flux of IR. It’s calibrated against the 8-14µ portion of a BB spectrum. Real surfaces have an emissivity depending on material and surface texture; mine is adjustable for emissivity, but all that really accounts for is fewer photons entering the detector than the expected amount (which is why mine has a K-type thermocouple built in). I found a paper that placed the emissivity of the sky at ~0.69, but compared to e=0.95 all lowering it does is make the sky even colder.
Pekka pointed out the cut at 14µ, but as I mentioned you can convert the temp to a flux, add the CO2 flux in, and turn it back to a temp. It’s still frigid.
From the ground looking up, any downwelling IR enters the detector. NASA even has a page on measuring the temperature of the sky with a handheld IR thermometer.
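For anyone who wants to repeat the exercise, here is a minimal sketch in R of that temperature-to-flux round trip. Treating the IRT reading as a whole-spectrum blackbody temperature ignores the 8-14µ band limit, so this is only a crude illustration, and the -40°F reading is just an illustrative clear-sky value from the range quoted above:
[sourcecode]
# Rough round trip: IRT sky reading -> equivalent BB flux -> back to a temp.
sigma  <- 5.67e-8                          # Stefan-Boltzmann constant, W/m2/K4
F_to_K <- function(f) (f - 32) / 1.8 + 273.15

T_sky  <- F_to_K(-40)                      # 233.15 K, illustrative clear-sky reading
flux   <- sigma * T_sky^4                  # ~168 W/m2 equivalent blackbody flux
T_back <- (flux / sigma)^0.25 - 273.15     # closes the loop at -40 C
c(T_sky, flux, T_back)
[/sourcecode]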
Micro6500
That does not seem to agree with the data Fluke has:
http://support.fluke.com/raytek-sales/Download/Asset/IR_THEORY_55514_ENG_REVB_LR.PDF
and is the way I was taught to calibrate IRTs.
There is nothing in that doc that is at odds with what I’ve said. And I can’t help that Raytek doesn’t also include a K-type thermocouple for an added method of determining emissivity.
The key is spot size; mine is 30:1, and since the sky fills nearly a full 180 degrees, spot size isn’t an issue. It’s all about the amount of IR radiation entering the sensor from any source, in this case the sky.
http://www.patarnott.com/atms411/pdf/StaleyJuricaEffectiveEmissivity.pdf
richard verney June 7, 2015 at 4:23 am
My views? My view, I fear to say, is “TLDR”. Boil it down some and make it plain just what it is you want me to consider, and I’m more than happy to give it a go. But that’s just too diverse and vague to understand what you want my views on.
Thanks,
w.
“TLDR”
I had to look that up.
Glad I did.
“Too long, didn’t read” is a very important thing to keep in mind when commenting.
I had to skip it too, even though I generally read every comment and usually like reading what Mr. Verney has to say.
Ya gotta break it into smaller bites.
Richard:
The key point you are missing in looking at the surface layer of the ocean and its opacity to longwave infrared is the comparable longwave emission of this surface layer. If the surface layer absorbs virtually all LWIR in the top few microns, then virtually all of its thermal emissions will be from the top few microns as well, as it would absorb any LWIR from below before it could “escape” to the atmosphere.
And since the surface is generally warmer than the atmosphere it is in radiative exchange with, the upward LWIR flux density will usually be larger than the downward LWIR. Picking typical (“average”) numbers from K&T, this very thin surface layer will be absorbing ~324 W/m2 of downward LWIR and emitting ~390 W/m2 of upward LWIR. So the net radiative flux for this surface layer would be ~66 W/m2 upward. So in no way would the downward LWIR cause boiling off of this surface layer.
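A minimal sketch of that balance in R (the 288 K surface temperature is an assumed global mean of ~15°C; the 324 W/m2 is the K&T figure quoted above):
[sourcecode]
# Net LWIR at the ocean skin using the K&T "average" numbers.
sigma  <- 5.67e-8            # Stefan-Boltzmann constant, W/m2/K4
T_surf <- 288                # K, assumed global mean surface temperature (~15 C)
up     <- sigma * T_surf^4   # ~390 W/m2 emitted by the skin
down   <- 324                # W/m2 DWLWIR absorbed (K&T figure)
up - down                    # ~66 W/m2 net upward: the skin is a net radiator
[/sourcecode]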
I find it useful when thinking of the radiative behavior of water with regard to LWIR to consider it like more familiar opaque objects, like rocks. We don’t have any trouble considering the exchange of the surface of a rock to shortwave or longwave radiation in this way. (Of course, the rock can’t have convective mixing under the surface.)
Of course, the rock can’t have convective mixing under the surface.
Well, actually it can, but it takes a few hundred million years.
Richard, the energy absorbed from the Sun (that is not captured long term at depth) is removed by radiation up, conduction/convection, and evapotranspiration. The radiation up is reduced from the value it would have in the case of no back-radiation, and the other means of removal (conduction/convection and evapotranspiration) make up the difference, so that the energy balance is maintained. There is never an average net gain in energy to the surface from absorbed back-radiation, only a reduction in net radiation up, which is compensated for by the other processes. Thus back-radiation never heats the surface ON AVERAGE (it can sometimes at night, when the water temperature is cooler than the air temperature, or where currents move cold water into areas of high air temperature).
Downwelling long wave radiation is a fiction. In thermodynamics it is the temperature difference between the skin layer of the oceans and the effective temperature of the sky above that matters. For the water is almost always warmer than the sky above, which means it emits a higher thermal radiation flux upward than it absorbs from above. And only their difference, the net heat transport, has any thermodynamic meaning.
Therefore radiative transfer in the thermal IR range almost always cools the (several microns thick) skin layer, sometimes does so mightily.
The case of short wave radiation is an entirely different issue, because its color temperature is high (~6000 K), way higher than anything on the surface of planet Earth. In contrast to the thermal IR case, the oceans never emit radiation in this frequency range, therefore the net heat transfer equals the radiation flux absorbed.
Sounds like the most concise explanation I have heard on this subject in a very long time.
It is very disheartening to read, over and over again, days long and very contentious arguments on the subject of how ordinary materials, at ordinary temperatures, on and near the surface of the Earth, cool down or heat up in response to various inputs from well known sources.
If I’m not mistaken, Argo claimed to discover a “calibration” error in their floats around 2007, after years of not noticing it and no ocean warming. Then they claimed to “fix” this problem and suddenly there was a very slight upward trend in ocean temps. They changed past data also. It was just another data “boo boo” that turned a flat or cooling trend into a slight warming trend. There has never been a corrected mistake during this whole global warming campaign that shifted the temperature trend in the opposite direction. I suppose that is a coincidence.
Not ARGO, that was XBT: one-off, cable-connected sounding devices thrown off the back of ships.
They showed cooling so they must have been defective, they were removed.
NMAT data from ships requires “corrections” for the changing height of ships’ decks (which is not recorded in metadata and has to be guessed on the basis of what ‘typical’ ships looked like), and then assumptions about how the air temperature varies with height above sea level involve further guessing games.
This is deemed by Karl et al as more reliable than a purpose-built floating sensor fleet. So the scientific instruments are “corrected” to fit the guesstimated ‘adjusted’ NMAT data.
Why wouldn’t they just remove the data from those ship divers completely then
Errors were reportedly found, and adjustments made to BOTH XBT and Argo data.
Some Argo buoys were found to be making depth-calculation errors, according to the researchers, and were eliminated from the records, thus removing the ‘cooling trend’ which was developing.
I will find and post the links in the morning, when I’m on my computer.
“Why wouldn’t they just remove the data from those ship divers completely then”
Because then they would have no old data at all.
You find what you are looking for, not what you are not looking for. This is called bias.
The important thing is that every single adjustment made, past and present, large and small, has the same effect…it makes the predetermined conclusion of the warmistas…the ones doing the adjusting…appear to be justified.
Every single past datum was flawed in the precise way that would bollix up this conclusion, until it was belatedly discovered and “fixed”. And fixed again. And again…
A seven year old could see through such nonsense.
The climate obsessed response will be that the inaccuracy means the models are correct and it is a failure of the measurements to accurately gather the data. The heat really could be hiding, so it must be hiding.
So it is still worse than they thought.
The lack of supporting reality (storms, pack ice, floods, droughts, sea level rise) only means it’s a failure of *those* data sources as well.
For the climate obsessed, everything works together to show the handiwork of CO2.
OT, but –
just wondering if the US can watch live the G7 gathering at Elmau:
the world’s leaders lifted one by one by military helicopters to an alpine summit, the last hundred metres covered by CO2-neutral electric caddies through the pastures, to discuss ecological footprints.
Never avoid laughability.
Regards – Hans
The entire mess we see in ‘climate science’ is the imposition of ‘absolute accuracy’ that is FAKE. These ‘precise’ measurements are FANTASY.
This leads to fraud. When climatologists claim their various instruments can measure the temperature of the entire planet and all the oceans far more accurately to a finer degree than thermometers in laboratories, we are seeing people lying.
They do this because the temperature changes they are claiming are so minimally tiny, they can only be detected by observing weather changes which are tricky to quantify. We do have clues as to whether or not the general climate is growing colder or hotter overall but this is seen mainly in hindsight with the annual frost line shifting north or southwards over time.
Of course, we do have very alarming information that all Ice Ages begin and end very suddenly which is both puzzling and quite scary and we still don’t know for certain what mechanism aside from the sun, is causing this.
If it is the sun as I suspect, then seemingly small changes in solar energy output have a dire ability to suddenly flip the climate of this planet from one extreme to the other. This information is highly important because the fantasy that we will roast to death due to a small increase in CO2 makes no sense at all since we are very obviously at the tail end of the present Interglacial era.
All these ‘measurements’ were set up by NOAA and NASA to prove ‘global warming’, not to track the real situation we are facing. They are attempts at gaming the data to create a false picture to justify a gigantic energy use tax imposed on all humanity. There is zero interest in understanding what is really going on, which is why, if the data doesn’t give ‘global warming’ signals, they toy with the data to create this out of thin air.
This is why they ignore any incoming data showing global cooling over the last 8,000 years. Ignoring this is getting harder and harder but they work hard at doing this. It is, of course, destroying the concept of how science works, alas.
I’ve adapted Willis’ code to extract the NMAT2 data just released by Hadley (thanks, Willis):
[sourcecode]
# http://www.metoffice.gov.uk/hadobs/hadnmat2/data/download.html
### extract NMAT from netCDF format.
library(ncdf)   # provides open.ncdf() and get.var.ncdf()

nmat_url  = "http://www.metoffice.gov.uk/hadobs/hadnmat2/data/HadNMAT.2.0.1.0.nc"
nmat_file = "HadNMAT.2.0.1.0.nc"
download.file(nmat_url, nmat_file, mode = "wb")   # binary mode for netCDF
nc   = open.ncdf(nmat_file)
nmat = get.var.ncdf(nc, "night_marine_air_temperature_anomaly")
nmat[nmat == -999.] = NA    # mask the missing-value flag (compare values, not the name)
[/sourcecode]
now to get it into a more accessible format.
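Something like the following might do it (a sketch only; the dimension variable names are assumptions that need checking against print(nc)):
[sourcecode]
# Flatten the lon x lat x time array into a long data frame. The names
# "longitude", "latitude" and "time" are guesses - confirm them first.
lon  <- get.var.ncdf(nc, "longitude")
lat  <- get.var.ncdf(nc, "latitude")
time <- get.var.ncdf(nc, "time")

nmat_df <- expand.grid(lon = lon, lat = lat, time = time)
nmat_df$anom <- as.vector(nmat)   # R unrolls arrays first-dimension-fastest,
                                  # matching expand.grid's row order
write.csv(nmat_df, "hadnmat2_long.csv", row.names = FALSE)
[/sourcecode]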
I do admire a man who picks up on an idea and runs with it …
What is the problem that you have with the format?
w.
Willis says: One watt per square metre for one year warms one cubic metre of the ocean by 8°C.
——-
Believing you are using Q = m * Cp * ΔT, I think your statement can only be true if you think the sun shines 24 hours a day.
Cp of sea water is 3.93 kJ/kg·K
m is 1024 kg (one cubic metre of seawater at 1024 kg/m^3)
ΔT is 8 K
Q is 32,194,560 J
Seconds in a year: 31,536,000
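The same check as a short script (constants as above; with these figures it comes out nearer 7.8°C than 8°C):
[sourcecode]
# Check the 8 C per W/m2 per year rule of thumb: Q = m*Cp*dT solved for dT.
Cp   <- 3930        # J/kg/K, specific heat of seawater (figure above)
rho  <- 1024        # kg/m3, density of seawater
secs <- 31536000    # seconds in a (non-leap) year

dT <- 1 * secs / (rho * Cp)   # 1 W/m2 into a 1 m3 column for one year
dT                            # ~7.8 C, close to the 8 C rule of thumb
[/sourcecode]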
mkelly June 7, 2015 at 5:45 am
Thanks, mkelly. In fact, in climate science almost all intermittent variables are simply divided by 4 (the ratio of the area of a disk to that of a sphere of the same radius R) to convert them to a 24/7 average.
This includes the forcings, which regardless of whether they are intermittent (solar) or semi-constant (DLR) are always given as global 24/7 averages so they can be compared to each other.
Regards,
w.
If the solar constant is 1,366 +/- 0.5 W/m^2, why is ToA 340 (+10.7/-11.2) W/m^2 as shown on the plethora of popular heat balances/budgets? Collect an assortment of these global energy budget/balance graphics. The variations between some of them are unsettling. Some use W/m^2, some use calories/m^2, some show simple percentages, some a combination. So much for consensus. What they all seem to have in common is some kind of perpetual-motion heat loop with back radiation ranging from 333 to 340.3 W/m^2 without a defined source. BTW, the additional RF due to CO2 from 1750-2011 is about 2 W/m^2 spherical, i.e. 0.6%.
Consider the earth/atmosphere as a disc.
Radius of earth is 6,371 km, effective height of atmosphere 15.8 km, total radius 6,387 km.
Area of 6,387 km disc: PI()*r^2 = 1.28E14 m^2
Solar Constant……………1,366 W/m^2
Total power delivered: 1,366 W/m^2 * 1.28E14 m^2 = 1.75E17 W
Consider the earth/atmosphere as a sphere.
Surface area of 6,387 km sphere: 4*PI()*r^2 = 5.13E14 m^2
Total power above spread over spherical surface: 1.75E17/5.13E14 ≈ 341.5 W/m^2
One fourth. How about that! What a coincidence! However, the total power remains the same.
1,366 * 1.28E14 = 341.5 * 5.13E14 = 1.75E17 W
Big power flow times small area = lesser power flow over bigger area. Same same.
(Watt is a power unit, i.e. energy over time. I’m switching to English units now.)
In 24 hours the entire globe rotates through the ToA W/m^2 flux. Disc, sphere, same total result. Total power flow over 24 hours at 3.41 Btu/h per W delivers a heat load of:
1.75E17 W * 3.41 Btu/h/W * 24 h = 1.43E19 Btu/day
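The same geometry as a short script (the 15.8 km effective atmosphere height is the assumption used above):
[sourcecode]
# The disc-vs-sphere factor of four, spelled out.
r      <- 6371e3 + 15.8e3   # m: earth radius plus the assumed atmosphere height
S0     <- 1366              # W/m2, solar constant
disc   <- pi * r^2          # intercepting cross-section, ~1.28E14 m2
sphere <- 4 * pi * r^2      # full surface, ~5.13E14 m2
S0 * disc / sphere          # = S0/4 = 341.5 W/m2 exactly
[/sourcecode]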
Thank you Willis – you have done interesting work with the ARGO system/data and reached a credible conclusion.
I have so little time these days that I have developed a highly successful empirical approach to these matters.
Observations:
Warmists have repeatedly demonstrated unethical conduct.
Warmists have repeatedly misrepresented facts.
Warmists have repeatedly made false alarmist predictions that have NOT materialized to date and have a STRONGLY NEGATIVE predictive track record. Their predictions have consistently been false.
Conclusion:
Accordingly, every claim of warmists should be viewed as false until it is CONCLUSIVELY PROVEN by them to be true.
This is a logical approach to evaluating the work of scoundrels and imbeciles.
To date, this empirical approach has worked with remarkable success – dare I say with precision and accuracy.
Best to all, Allan 🙂
Allan
QUOTE
Conclusion:
Accordingly, every claim of warmists should be viewed as false until it is CONCLUSIVELY PROVEN by them to be true.
END QUOTE
So –
Conclusion:
Accordingly, every claim of warmists should be viewed as false until it is CONCLUSIVELY PROVEN by folk, without a dog in the fight, to be true.
Now, doesn’t that look a bit better?
Auto
Thank you Auto – but are you assuming you can find an informed individual who does not have “a dog in the fight”? 🙂
http://wattsupwiththat.com/2012/09/07/friday-funny-climate-change-is-not-a-joke/#comment-1074966
[excerpt]
The “climate skeptics” position is supported by these FACTS, and many others:
– there has been no net global warming for 10-15 (now 15-20) years despite increasing atmospheric CO2;
– the flawed computer climate models used to predict catastrophic global warming are inconsistent with observations; and
– the Climategate emails prove that leading proponents of global warming alarmism are dishonest.
The political left in Europe and North America have made global warming alarmism a matter of political correctness – a touchstone of their religious faith – and have vilified anyone who disagrees with their failed CAGW hypothesis. The global warming alarmists’ position is untenable nonsense.
I dislike political labels such as “left” and “right”. Categorizing oneself as “right wing” or “left wing” tends to preclude the use of rational thought to determine one’s actions. One simply chooses which club to belong to, and no longer has to read or think.
To me, it is not about “right versus left”, it is about “right versus wrong”. Rational decision-making requires a solid grasp of science, engineering and economics, and the global warming alarmists have abjectly failed in ALL these fields. Their scientific hypothesis has failed – there is no global warming crisis. Their “green energy” schemes have also failed, producing no significant useful energy, squandering scarce global resources, driving up energy costs, harming the environment, and not even significantly reducing CO2 emissions! The corn ethanol motor fuel mandates could, in time, be viewed as crimes against humanity.
It is difficult to imagine a more abject intellectual failure in modern times than global warming alarmism. The economic and humanitarian tragedies of the Former Soviet Union and North Korea provide recent comparisons, a suitable legacy for the cult of global warming alarmism.
Allan, and Auto,
This really cuts to the chase.
Agree 100%.
If knowing that they are lying would settle anything, or prevent bad policies from being instituted, there would be no point in saying anything else about the whole CAGW meme.
Unfortunately, this is not the case.
People who are not yet convinced must be convinced, and for that to occur, false information must be refuted, convincing arguments must be fashioned and honed, and the whole Warmista movement denounced and discredited.
There has been a pretty good jump in Ocean heat content numbers over the last six months after several periods of decline/flat estimates.
We’ll have to see if the new higher levels continue into the next sets of data, but the accumulation rates will have to be recalculated higher if they do.
http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc2000m_levitus_climdash_seasonal.csv
http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc_levitus_climdash_seasonal.csv
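For anyone wanting a quick look, a sketch in R that reads the first file (the column layout is an assumption; inspect the header before trusting it):
[sourcecode]
# Quick look at the NODC 0-2000 m heat content series linked above.
url <- "http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc2000m_levitus_climdash_seasonal.csv"
ohc <- read.csv(url)
head(ohc)    # confirm which columns hold the date and the global OHC anomaly
plot(ohc[[1]], ohc[[2]], type = "l",
     xlab = "Year", ylab = "OHC anomaly")
[/sourcecode]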
Willis, could you address my questions here.
http://wattsupwiththat.com/2015/06/06/can-we-tell-if-the-oceans-are-warming/#comment-1956153