A Small Margin Of Error

Guest Post by Willis Eschenbach

I see that Zeke Hausfather and others are claiming that 2018 is the warmest year on record for the ocean down to a depth of 2,000 metres. Here’s Zeke’s claim:

Figure 1. Change in ocean heat content, 1955 – 2018. Data available from the Institute of Atmospheric Physics (IAP). 

When I saw that graph in Zeke’s tweet, my bad-number detector started flashing bright red. What I found suspicious was that the confidence intervals seemed far too small. Not only that, but the graph is measured in a unit that is meaningless to most everyone. Hmmm …

Now, the units in this graph are “zettajoules”, abbreviated ZJ. A zettajoule is a billion trillion joules, or 1E+21 joules. I wanted to convert this to a more familiar number, which is degrees Celsius (°C). So I had to calculate how many zettajoules it takes to raise the temperature of the top two kilometres of the ocean by 1°C.

I go over the math in the endnotes, but suffice it to say at this point that it takes about twenty-six hundred zettajoules to raise the temperature of the top two kilometres of the ocean by 1°C. 2,600 ZJ per degree.

Now, look at Figure 1 again. They claim that their error back in 1955 is plus or minus ninety-five zettajoules … and that converts to ± 0.04°C. Four hundredths of one degree Celsius … right …

Call me crazy, but I do NOT believe that we know the 1955 temperature of the top two kilometres of the ocean to within plus or minus four hundredths of one degree.

It gets worse. By the year 2018, they are claiming that the error bar is on the order of plus or minus nine zettajoules … which is three thousandths of one degree C. That’s 0.003°C. Get real! Ask any process engineer: determining the average temperature of a typical swimming pool to within three thousandths of a degree would require a dozen thermometers or more …

The claim is that they can achieve this degree of accuracy because of the Argo floats. These are floats that drift down deep in the ocean. Every ten days they rise slowly to the surface, sampling temperatures as they go. At present (well, as of three days ago) there were 3,835 Argo floats in operation.

Figure 2. Distribution of all Argo floats which were active as of January 8, 2019.

Looks pretty dense-packed in this graphic, doesn’t it? Maybe not a couple dozen thermometers per swimming pool, but dense … however,  in fact, that’s only one Argo float for every 93,500 square km (36,000 square miles) of ocean. That’s a box that’s 300 km (190 miles) on a side and two km (1.2 miles) deep … containing one thermometer.
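For anyone who wants to check that density figure, here’s a minimal R sketch. The ocean-area value is my own round-number assumption (about 361 million square km of ocean); different published figures move the answer by a percent or two either way.

ocean_area = 361e6               # square km of ocean, an assumed round figure
n_floats = 3835                  # active Argo floats
area_each = ocean_area/n_floats  # square km of ocean "covered" by each float
print(round(area_each))          # roughly 94,000 square km per float
print(round(sqrt(area_each)))    # roughly 307 km on a side for an equivalent square box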

Here’s the underlying problem with their error estimate. As the number of observations goes up, the error bar only decreases as one divided by the square root of the number of observations. And that means that if we want to get one more decimal in our error, we have to have a hundred times the number of data points.

For example, if we get an error of say a tenth of a degree C from ten observations, then if we want to reduce the error to a hundredth of a degree C we need one thousand observations …
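You can see that square-root scaling with nothing more than random numbers. Here’s a quick R sketch; the 0.3°C spread of the individual readings is purely illustrative.

set.seed(42)
for (n in c(10, 1000)) {
  sims = replicate(1000, mean(rnorm(n, mean=15, sd=0.3)))  # 1000 simulated "averages" of n readings
  cat(sprintf("N = %4d readings -> standard error of the mean ~ %.4f degC\n", n, sd(sims)))
}
# prints roughly 0.095 degC for N = 10 and 0.0095 degC for N = 1000 ...
# ten times the precision needs a hundred times the observations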

And the same is true in reverse. So let’s assume that their error estimate of ± 0.003°C for 2018 data is correct, and it’s due to the excellent coverage of the 3,835 Argo floats.

That would mean that we would have an error of ten times that, ± 0.03°C, if there were only 38 Argo floats …

Sorry. Not believing it. Thirty-eight thermometers, each taking three vertical temperature profiles per month, to measure the temperature of the top two kilometers of the entire global ocean to plus or minus three hundredths of a degree?

My bad number detector was still going off. So I decided to do a type of “Monte Carlo” analysis. Named after the famous casino, a Monte Carlo analysis means that you are using random data in an analysis to see whether your answer is reasonable.

In this case, what I did was to get gridded 1° latitude by 1° longitude data for ocean temperatures at various depths down to 2000 metres from the Levitus World Ocean Atlas. It contains the long-term monthly average at each depth for each gridcell. Then I calculated the global average for each month from the surface down to 2000 metres.

Now, there are 33,713 1°x1° gridcells with ocean data. (I excluded the areas poleward of the Arctic/Antarctic Circles, as there are almost no Argo floats there.) And there are 3,825 Argo floats. On average some 5% of them are in a common gridcell. So the Argo floats are sampling on the order of ten percent of the gridcells … meaning that despite having lots of Argo floats, still at any given time, 90% of the 1°x1° ocean gridcells are not sampled. Just sayin …
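Here’s that coverage arithmetic as a quick R check, using the numbers from the paragraph above (the 5% shared-gridcell figure is the estimate given there, not a measured value).

n_cells = 33713                  # 1°x1° gridcells with ocean data, ex-polar
n_floats = 3825                  # Argo floats
shared = 0.05                    # fraction of floats assumed to share a gridcell with another float
sampled = n_floats*(1-shared)/n_cells
print(round(100*sampled))        # about 11% of gridcells sampled at any given time ...
print(round(100*(1-sampled)))    # ... so about 89% unsampled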

To see what difference this might make, I did repeated runs by choosing 3,825 ocean gridcells at random. I then ran the same analysis as before—get the averages at depth, and then calculate the global average temperature month by month for just those gridcells. Here’s a map of typical random locations for simulated Argo locations for one run.

Figure 3. Typical simulated distribution of Argo floats for one run of Monte Carlo Analysis.
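For readers who want to reproduce the general idea, here’s a bare-bones R sketch of the resampling. This is not my actual script — the “ocean” below is synthetic stand-in data, and the column names and volume weights are made-up illustrations — but it shows the mechanics: pick random gridcells, compute the weighted monthly average, and compare it to the average over all gridcells.

set.seed(1)
n_cells = 33713; n_floats = 3825
woa = expand.grid(cell=1:n_cells, month=1:12)        # one row per gridcell per month
basetemp = 4 + 10*runif(n_cells)                     # made-up per-cell climatology
woa$temp = basetemp[woa$cell] + 0.5*sin(2*pi*woa$month/12)
woa$weight = runif(n_cells)[woa$cell]                # made-up per-cell volume weights

global_mean = function(d) sapply(split(d, d$month), function(m) weighted.mean(m$temp, m$weight))
one_run = function() {
  picked = sample.int(n_cells, n_floats)             # random "Argo" gridcells
  global_mean(woa[woa$cell %in% picked, ])           # monthly averages from the sample only
}
truth = global_mean(woa)                             # averages using every gridcell
runs = replicate(50, one_run())
print(2*sd(as.vector(sweep(runs, 1, truth))))        # rough 95% CI of the sampling error, degC
# (the value here depends entirely on the made-up spread of the synthetic data)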

And in the event, I found what I suspected I’d find. Their claimed accuracy is not borne out by experiment. Figure 4 shows the results of a typical run. The 95% confidence interval for the results varied from 0.05°C to 0.1°C.

Figure 4. Typical run, average global ocean temperature 0-2,000 metres depth, from Levitus World Ocean Atlas (red dots) and from 3,825 simulated Argo locations. White “whisker” lines show the 95% confidence interval (95%CI). For this run, the 95%CI was 0.07°C. Small white whisker line at bottom center shows the claimed 2018 95%CI of ± 0.003°C.

As you can see, using the simulated Argo locations gives an answer that is quite close to the actual temperature average. Monthly averages are within a tenth of a degree of the actual average … but because the Argo floats only measure about 10% of the 1°x1° ocean gridcells, that is still more than an order of magnitude larger than the claimed 2018 95% confidence interval for the IAP data shown in Figure 1.
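To put that gap into the units of Figure 1, here’s the conversion using the ~2,631 ZJ per degree factor worked out in the math notes below.

zjoulesperdeg = 2631      # ZJ to warm the 0-2000 m ocean by 1°C (see math notes)
run_ci = 0.07             # typical 95%CI from one Monte Carlo run, in °C
claimed_ci = 9            # claimed 2018 95%CI, in ZJ
print(round(run_ci*zjoulesperdeg))              # about 184 ZJ equivalent
print(round(run_ci*zjoulesperdeg/claimed_ci))   # about 20 times the claimed uncertainty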

So I guess my bad number detector must still be working …

Finally, Zeke says that the ocean temperature in 2018 exceeds that in 2017 by “a comfortable margin”. But in fact, it is warmer by only 8 zettajoules … which is less than the claimed 2018 error. So no, that is not a “comfortable margin”. It’s well within even their unbelievably small claimed error, which they say is ± 9 zettajoules for 2018.

In closing, please don’t rag on Zeke about this. He’s one of the good guys, and all of us are wrong at times. As I myself have proven more often than I care to think about, the American scientist Lewis Thomas was totally correct when he said, “We are built to make mistakes, coded for error.”

Best regards to everyone,

w.

PS—when commenting please quote the exact words that you are discussing. That way we can all understand both who and what you are referring to.

Math Notes: Here is the calculation of the conversion of zettajoules to degrees of warming of the top two km of the ocean. I work in the computer language R, and these are the actual calculations. Everything after a hashmark (#) in a line is a comment.

# sw_cp() and gsw_rho() are assumed here to come from the marelac and gsw packages respectively
library(marelac)
library(gsw)

heatcapacity=sw_cp(t=4,p=100) # specific heat capacity of seawater, with temperature and pressure at 1000 m depth
print(paste(round(heatcapacity), "joules/kg/°C"))
[1] "3958 joules/kg/°C"

seadensity=gsw_rho(35,4,1000) # density, with temperature and pressure at 1000 m depth
print(paste(round(seadensity), "kg/cubic metre"))
[1] "1032 kg/cubic metre"

seavolume=1.4e9*1e9 #cubic km * 1e9 to convert to cubic metres
print(paste(round(seavolume), "cubic metres, per levitus"))
[1] "1.4e+18 cubic metres, per levitus"

fractionto2000m=0.46 # fraction of ocean above 2000 m depth per Levitus

zjoulesperdeg=seavolume*fractionto2000m*seadensity*heatcapacity/1e21
print(paste(round(zjoulesperdeg), "zettajoules to heat 2 km seawater by 1°C"))
[1] "2631 zettajoules to heat 2 km seawater by 1°C"

z1955error = 95 # 1955 error in ZJ
print(paste(round(z1955error/zjoulesperdeg,2),"°C 1955 error"))
[1] "0.04 °C 1955 error"

z2018error = 9 # 2018 error in ZJ
print(paste(round(z2018error/zjoulesperdeg,3),"°C 2018 error"))
[1] "0.003 °C 2018 error"

yr2018change = 8 # 2017 to 2018 change in ZJ
print(paste(round(yr2018change/zjoulesperdeg,3),"°C change 2017 - 2018"))
[1] "0.003 °C change 2017 - 2018"
372 Comments
January 11, 2019 7:14 pm

There are nearly 4000 of them in service now, but that number was much lower in prior years.
The first units went in in 1999, and by the year 2000 there were about a hundred.
Over the next seven years a few hundred were added every year, and the 3000 target level was reached sometime in 2007.
One might also wonder about the testing and calibration phase and how often each unit is recalibrated?
Ever? Are they sure they all go to the proper depth?

Anyway, in 2009 there was a ten-year meeting, and how to improve the distribution was one topic of discussion.
So it must be assumed that the distribution has not been as good as it is now for many of those years.
Here is a site which gives a map and a count, and it is interactive:
http://wo.jcommops.org/cgi-bin/WebObjects/Argo

January 11, 2019 7:25 pm

An animation of the roiling SST corroborates that the claimed accuracy is bogus. http://www.ospo.noaa.gov/Products/ocean/sst/anomaly/anim.html

A. Patterson Moore
January 11, 2019 7:57 pm

Thanks, Willis. Your BS detector is finely tuned. The calculated error bars in Zeke’s graph are a joke. If they really wanted to know, they would run some experiments to find out. Put a couple of hundred floats in one 50,000 square mile grid cell, widely dispersed. Your error bar is the difference in their measurements. Chances are, we are talking error bars in tens of degrees, not tenths of a degree.

Red94ViperRT10
Reply to  A. Patterson Moore
January 14, 2019 6:59 am

That’s not error, that’s standard deviation, σ.

January 11, 2019 8:01 pm

Hmm, if x amount of energy is equal to three one thousandths of a degree averaged out over the whole ocean, it seems to me that one might consider this in many ways to decide if it passes any possible credibility test.
Let’s say that instead of the gain in temperature being dispersed evenly throughout the ocean down to 2000 meters, we instead had a situation where the entire ocean stayed exactly the same temp, except that an area of three one thousandths of the surface warmed by one degree and down to 200 meters.
Same amount of heat, no? (check me on that…it is late and I was up early)
Three one thousandths of the ocean is, if I figured it right and roughly speaking, about a square 1000 km on a side (361,000,000 sq km x 3 / 1000).

This is an area about the size of the country of Chile, minus one fifth (Chile = 1,250,000 sq km).
IOW…small.
A tiny slice of the Humboldt Current, which runs from the southern ocean up the west side of South America offshore of Chile, and is an upwelling current.
OK?
Consider that…all of the zeta joules they are talking about is like keeping the entire part they are measuring exactly the same, and raising one tiny sliver of the Pacific ocean by 1 degree! (If I have it figured out correct, proportionally…again, check me on that)
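(For anyone checking that proportion, here is a minimal R sketch using the round figures given above; the 361 million sq km ocean area is the usual textbook value.)

ocean_area = 361e6               # square km of ocean surface
patch = ocean_area*3/1000        # three one-thousandths of the surface
print(round(patch))              # about 1,083,000 square km ...
print(round(sqrt(patch)))        # ... roughly a square 1,040 km on a side
# warming that patch by 1°C over the full 0-2000 m column stores the same heat
# as warming the entire 0-2000 m ocean by 0.003°C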

Now consider that the top 2000 meters is a small fraction of the ocean.
How small?
Here is a link to a map of Earth with bathymetric color coding of the ocean. 2000 meters is near the middle of the color scale. There are giant areas three to four times that deep…most of the ocean in fact. And considerable portions over 5 times that deep.
Plus…and this is interesting, huge areas that are nowhere close to 2000 meters!
All of the continental shelves. Considerable areas. Bigger in the aggregate by many times over than 1 million sq km.
The last thought I had…why report a result in joules, for water temp changes over a global ocean?
Well, it sounds like a lot of heat.
But also it seems to me that if they made a scary map, it would not look very scary with the warming spots graduated in thousandths of a degree, no matter what they did.
Here is that world atlas with bathymetry readings:

http://wo.jcommops.org/cgi-bin/WebObjects/Argo

Conclusion: The more you think about this, and look at maps, and consider that they are measuring a small fraction of the ocean, which varies tremendously in temp from pole to pole and top to bottom, the weaker this all looks.
Hard to believe, in fact, that the signal is as big as the noise, and hard to make an argument, no matter how you slice it, that they are not just measuring, if anything real at all, just water moving around which is very non-uniform in temp.

Red94ViperRT10
Reply to  Menicholas
January 11, 2019 9:02 pm

DuckDuckGo finds the average depth of the world’s oceans at 3,796 m or 3,730 m, depending on whose study you select, so the top 2,000 m should be slightly more than half that.

Reply to  Red94ViperRT10
January 11, 2019 10:06 pm

Seems I attached the wrong link. Here is the ocean bathymetry data.
Large areas of the ocean are very shallow, far under 2000 meters.
And most of the rest is abyssal plain, far deeper than 2000 meters.
Large parts of the Southern Ocean are not covered. The Arctic has no coverage.
Having said that, I see that I may have been somewhat imprecise to say that the top 2000 meters is a small fraction of the total ocean volume.
It is, using the average number you gave without checking it myself, just under half of the total volume.
My mistake. Thank you for pointing that out.
One error I made was neglecting that the map is not an equal area projection.
But I do not think this invalidates my observation: that they are claiming an increase in total ocean heat without measuring anything like the entire water column.
Do you agree with my point that one million square km raised by one degree is what we are talking about here? And that this is a tiny slice of the total?
And that, intuitively, this demonstrates that the headline claim by Zeke and whoever compiled the number he quoted is almost surely unjustifiable?
IOW…they have no idea from this data what the total heat content of the ocean is, or how much it may or may not have changed by?
It is not even a rounding error, as near as I can tell.
And that is not even taking into account that the numbers may be massaged and invented data, as recounted by the earlier observation by the same people that the oceans were cooling, not warming, and they then “fixed it”, so that it agreed with the alarmist claims that the sea level rise is due to warming of the ocean as a whole?
http://planetolog.com/maps/map-world/big/bathymetric-world-map.jpg

Reply to  Menicholas
January 11, 2019 10:50 pm

Darn, did it again.
Those 200s should each be 2000 up top.

Red94ViperRT10
Reply to  Menicholas
January 12, 2019 9:22 am

Yes, I like your depiction, you could hold the entire ocean constant and warm just a small piece of it by 1ºC and get the same change in OHC.

Red94ViperRT10
Reply to  Menicholas
January 12, 2019 7:33 pm

BTW, the fact that large parts of the ocean are <2,000 meters deep, does this mean the Argo buoys, if located in one of those areas, would submerge until they hit bottom, then begin their ascent while recording temperatures? That being the case, then that reduces the percentage of the oceans the buoys just completely leave out.

Reply to  Red94ViperRT10
January 12, 2019 10:47 pm

I do not know the answer to that question, but from the past 24 hours of reading various sources and previous articles, it appears that the current practice of descending to 2000 metres is a recent change, made sometime in the last five years. Since the 2007 date of reaching the 3000-unit coverage goal, the distribution and sampling protocol have been changed several times.
So, whatever is now the case in terms of the volume of water being, theoretically at least, sampled by the probes, was not true in the past. It used to be that far less of the total ocean volume was sampled.
Which makes their time series and the claimed uncertainty bar in past years, even more dubious.
Which in turn makes any claims of unprecedented ocean heat content even more meaningless.
And for consistency, let’s just keep in mind that whatever they are saying their data indicates was not what they were finding several years back, before they completely altered it. Altered data from people who change any result they do not like is not data.
There is little reason to suppose it correlates in any fashion to objective reality.

January 11, 2019 8:03 pm

” The 95% confidence interval for the results varied from 005°C to 0.1°C.”

A missing decimal in 0.05?

January 11, 2019 8:17 pm

“In closing, please don’t rag on Zeke about this. He’s one of the good guys,…”

When he is spewing BS climate science nonsense, when he should know better, he SHOULD be “ragged on.”

“We can now say with confidence…” – Zeke

I wouldn’t buy a used car from that guy, much less let him inform government policy that wants to restrict my liberties and make energy unaffordable in the name of faux alarmism, just so he can keep getting DOE intramural grants and promotions in a laboratory chock-full of rent-seekers.

Reply to  Willis Eschenbach
January 12, 2019 2:00 am

Yeah, these folks are either clueless and incompetent or just plain crooks.
Sorry, but I’ve had it with this mendacity and/or stupidity. You are giving these people FAR too much credence. Ridicule is our most powerful weapon. See Alinsky’s Rules.

Phil
January 11, 2019 8:19 pm

Temperature is an intensive variable. The average of two intensive data points is not straightforward. What is the average of a container A at a temperature of 80°C and a container B at a temperature of 20°C? The formula (t(A) + t(B)) / 2 is not necessarily applicable to finding the average of two intensive data points. The average temperature of the containers (if they were poured into another container C that is a perfect insulator and allowed to come to equilibrium) cannot be calculated without knowing the sizes of both containers. If container A is a pasta pot and container B is a thimble, the average temperature of the two containers would be very close to 80°C and NOT 50°C.

It is well known that there are warm and cold currents in the oceans. In order to obtain an estimate of the average temperature of the oceans, the temperatures of the warm and cold currents would need to be measured, as well as the amount of water in each current, and then that would have to be combined with a measurement of the temperature of the non-current parts of the ocean and a measurement of the amount of water in each non-current part of the ocean (a tropical sector would be warmer than one at high latitudes). There isn’t enough data to be able to even begin to estimate the heat content of the ocean, except by ignoring the existence of ocean currents, and that is just for starters.

Mr. Hausfather’s assertions of uncertainty completely ignore the fact that temperature is an intensive variable and that each buoy is not measuring the state of the oceans as a whole. Since each buoy is not measuring the state of the oceans as a whole, there is no basis to claim a statistical miracle. There is no way to tell if a particular buoy data point is being measured within an ocean current where the temperature may be many whole integers of degrees C different from that in an area of the ocean a short distance away that is not part of a current. That alone would represent a systemic uncertainty that makes all of his calculations meaningless.
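(A minimal R illustration of that intensive-variable point, with made-up container sizes — an 8-litre pasta pot and a 2 ml thimble are illustrative assumptions, not measurements:)

vol = c(pot=8.0, thimble=0.002)     # litres, illustrative sizes
temp = c(pot=80, thimble=20)        # °C
print(mean(temp))                   # naive average of the two readings: 50°C
print(weighted.mean(temp, vol))     # volume-weighted mixed temperature: about 79.99°C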

Phil
Reply to  Phil
January 11, 2019 8:42 pm

I got carried away by Willis’ conversion to temperature. The conversion to joules of the Argo data is an attempt to avoid the criticism of averaging intensive variables. However, the conversion to joules masks the uncertainty in the conversion as an estimate has to be made of the volume of ocean water represented by each Argo data point. The uncertainty of the estimate of the volume of ocean water represented by each temperature data point is not disclosed and is effectively assumed to be zero. Therefore the gist of my comment I think still applies. You have to watch the little red ball very carefully when this shell game is being played. It is very easy to get distracted.

John Shotsky
January 11, 2019 8:23 pm

Measurements are never accurate. They are ALWAYS estimates. On a digital thermometer that shows 70.1, 70.2 and 70.4, you cannot average to a temperature of 70.23. Why? Because you can NEVER average measurements to a decimal degree of accuracy better than the lowest resolution of any measurement in the series. Why? Because you don’t know if the thermometers were about to click up to 70.2, 70.3 and/or 70.3 – or vice versa. You don’t KNOW the threshold inside the thermometer. That (new) average would result in 70.26. But wait – the temperature was not changed. All that was said is that we don’t know what the thermometer was about to click to…All three ‘could’ click one way or the other, without the temperature actually changing. And, if one thermometer was in whole degrees, the rest in tenths, the average would have to be in whole degrees – because you don’t know if it is about to click up or down one tick. So ITS resolution defines the resolution for the series. There is a huge difference between measurements (estimates) and counts. Counts might be the number of people in a stadium. Each count is exact. The average can be to 6 decimals – not a problem because you can average counts, but not measurements. As an instrument designer for many years, this was drilled into me. And my BS detector goes off any time ANYONE talks about measurements with results in sub-tenths of a degree. It is meaningless unless both the resolution of the measuring device AND its error band are known.
To refresh, look up significant digits in measurement. The last digit is always an estimate, and you cannot average estimates.
So, ALL of the above and original charting is meaningless. Follow the math laws to ascertain valid results.

Rob_Dawg
January 11, 2019 8:28 pm

Willis mentions:
> Looks pretty dense-packed in this graphic, doesn’t it? Maybe not a couple dozen thermometers per swimming pool, but dense

This is a great idea. All the big schools have swimming pools. A diving pool would be even better. Instrument the heck out of one for temperature over spring break. I’d be willing to bet there are instantaneous variations well outside any instrument error. I’d also bet that no matter how good the instruments many of them will have drifted a bit by the time the kids get back from Daytona Beach.

Phodges
January 11, 2019 8:47 pm

Does the increase in atmospheric CO2 produce enough additional W/m2 to heat that volume of water?

January 11, 2019 8:53 pm

I will say this about why I think the climatists use OHC in zettajoules rather than the recorded temps.
The reason is that the latter is a whole bunch of disparate methods and instrumental records, while the OHC is a combined estimate of all those temps. Thus, converting it all to OHC, they think they can get away with stringing together different records into one data set. Sort of Mike’s Nature trick for ocean temp records.

Björn
January 11, 2019 9:12 pm

Willis, I think you made a small error when calculating the coverage of the Argo floats. You state there is one float per 133,000 square km. But 3835*133,000 is a few square km north of 510 million km². That number is the total area of the earth. The oceans cover approximately 70.8% of the earth’s surface, which equals 362 million square km, give or take. That gives me 362e6/3835 ~ 94,394 km² as the base area of the box covered, so the base side length is ca. 307 km instead of the 365 you get. I do not this affects your argument much, and do not know if it has any bearing on your calculations, but just in case it does, correct and update if needed.

geoff@large
Reply to  Willis Eschenbach
January 13, 2019 10:42 pm

Hi Willis,

Great work. Small mistake. “…in fact, that’s only one Argo float for every 93,500 square km (36,000 square miles) of ocean. That’s a box that’s 300 km (190 miles) on a side and two km (1.2 miles) deep … containing one thermometer”.

Just for the record since your comment will be quoted – I used the global ocean volume from NOAA (https://web.archive.org/web/20150311032757/http://ngdc.noaa.gov/mgg/global/etopo1_ocean_volumes.html) of 1.335 billion km3 (of course not squared). If I divide by the number of Argo bathymeters, that gives 348,110 km3 for each Argo (gross). Dividing that by the average depth of 3700 meters and multiplying by the measured depth of 2000 meters I get 188,167 km3. Dividing by 2 km gives a box on the sides of 306 km each side, each box 94,083 km3. Practically the same as your calculation, only with your typo of squared corrected to cubed. I’m sure you’ve done it correctly before and were just too busy pointing out the ridiculousness of what is being proposed to have been accurately measured. And just to use your favorite comparison, that’s 19.12 Lake Michigans (the lake being 4920 km3).

Since the true average depth of the ocean is not known (last I checked we have mapped about 10% of the ocean floor) this figure could be off by a few percent either way, but your point remains the same.

geoff@large
Reply to  geoff@large
January 13, 2019 11:21 pm

Oops, I did it myself. Each box being measured by one Argo float is 188,167 km3, with the box being 306km by 306km (93,636 km2) times 2km deep. So each Argo float is supposed to measure the equivalent of 38.24 Lake Michigans.

geoff@large
Reply to  geoff@large
January 14, 2019 12:22 am

So for the record:

Using the global ocean volume from NOAA (https://web.archive.org/web/20150311032757/http://ngdc.noaa.gov/mgg/global/etopo1_ocean_volumes.html) which is 1.335 billion km3 (of course not squared). Dividing by the number of Argo bathymeters, that gives 348,110 km3 for each Argo (gross). Dividing that by the average depth of 3700 meters and multiplying by the measured depth of 2000 meters I get 188,167 km3. Dividing by 2 km gives a box on the sides of 306 km each side (so 94,083 km2) for the same total of 188,167 km3.

For comparison, that’s 38.24 Lake Michigans (the lake being 4920 km3).
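(The same arithmetic as a short R check, using the figures quoted above:)

ocean_vol = 1.335e9                  # km3, NOAA total ocean volume
per_float = ocean_vol/3835           # about 348,110 km3 of ocean per Argo float (gross)
top2000 = per_float*2000/3700        # about 188,167 km3 of that within the top 2,000 m
base_area = top2000/2                # about 94,083 km2 base area of a 2 km deep box
print(round(c(per_float, top2000, base_area, sqrt(base_area))))
print(round(top2000/4920, 1))        # roughly 38 Lake Michigans per float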

Björn
Reply to  Björn
January 11, 2019 9:30 pm

argh, meant to write “I do not think this affects …” instead of the actual “I do not this affects …” at one place in the comment above; did not review before posting.

Clyde Spencer
January 11, 2019 9:12 pm

Willis,
You said, “And that means if we want to get one more decimal in our error, we have to have a hundred times the number of data points.” I think that this is a best case scenario. It has long been the practice of land surveyors to take multiple readings of an angle turned to improve the precision. However, the circumstances are the same operator using the same instrument on a fixed value. The assumption is that the only variance is random and normally distributed.

However, when measuring a variable, such as temperatures, one is dealing with both the range in temperatures and the random error related to the instrumentation. The range in temperature is probably at least one or two orders of magnitude larger than the random error of measurement.

I would ask the question of why it is common in most (if not all) sciences other than climatology and oceanography to state uncertainty as +/- 2 standard deviations rather than one standard error of the mean?

Red94ViperRT10
January 11, 2019 9:13 pm

Willis,

Now, there are 33,713 1°x1° gridcells with ocean data. … And there are 3,825 Argo floats. On average some 5% of them are in a common gridcell. So the Argo floats are sampling on the order of ten percent of the gridcells…”

(At this point I must resist the urge to say, “It’s worse than we thought!” But it is.) You left out the part where the buoys take measurements at different depths. So the cells to be measured (I haven’t seen the data, how are the depths divided? …so how many depths are there?) each have multiple depths, let’s call them cubes, though I’m pretty sure they’re not, times those 33,713 gridcells, times 12 months. And we only have 3,825 floats to measure all of that. I’d do the arithmetic myself but I don’t have the number of depths, but I’m thinking it comes out to way less than 10% coverage, maybe even <1%? When doing field sampling of equipment I was told to observe at least 10% of the total population; since we haven’t got there yet, I don’t think we can conclude anything meaningful from the data points we have.

angech
January 11, 2019 9:22 pm

“In closing, please don’t rag on Zeke about this. He’s one of the good guys,”
Like Rosenstein I guess.
He is unfailingly polite, unlike me.
He produces the “data”.
But he pushes the global warming agenda severely.
He often pops up in a triumvirate with Mosher and Nick when an inconvenient truth emerges for their belief system.
He is honest in the data he states, but as with USHCN in the past the real messages are hidden in what he does not say.
He has clearly stated in the past that the warming record automatically updates and devalues past historical data. He has said that, as of 4 years ago, nearly half the USHCN stations no longer present data. No one listens.
As stated the actual purported change in temperature of the upper 2000 m is extremely small. He knows that no one will care about a change of 0.03C plus the large error bars but still goes all out to push it as hard as possible.
As a Princess Bride comment he will appreciate: shame on you, Zeke.

angech
January 11, 2019 9:25 pm

“90% of the 1°x1° ocean gridcells are not sampled. Just sayin …”
Data is data; we can only use what we have.
The problems with Argo data are shifting positions of the buoys, breakdown of the buoys, unreliability of the thermometers used.
Just have to take this into account with the error range.

January 11, 2019 9:47 pm

One of our Australian politicians had a saying as to how he worked politics: “Whatever it takes.”

The Warmers Lobby obviously think the same.

To hell with data and accuracy; it’s the final result that counts. And what is that? Certainly it has nothing to do with saving the planet.

It’s just a grab for power. True, both the Russians and now the Chinese tried Communism, but they both failed. We will learn from their errors and end up with a perfect worldwide system.

MJE

SMS
January 11, 2019 9:59 pm

Why not determine the increase in ocean heat content using tide gauge results? Expansion of the oceans due to warming and cyclic melting of Greenland are the two components that provide most of the answer.

Looking back on tide gauge results suggests that the ocean warming implied by Zeke is normal and has existed since the last ice age ended, and most probably cycled through the other ice ages as well.

Zeke is just cherry picking.

GregK
Reply to  SMS
January 12, 2019 4:05 am

Note also that the very deep ocean is cooling
https://www.sciencedaily.com/releases/2019/01/190104121426.htm
[I think that this was discussed recently]

And a thought….would not an increase in sea ice lead to a rise in sea level?
Maybe not a large increase but the density of sea ice is less than the water it floats in and the ice must displace a volume equal to its weight.

Hugs
Reply to  GregK
January 12, 2019 5:01 am

Some of the deep ocean is cooling. As a whole, we really don’t know. But remember attribution rule one: cooling is always meaningless and natural, whereas warming is alarming and man-made. Even El Niños.

MeanOnSunday
January 11, 2019 10:24 pm

Your simulation seems very generous, as you are taking the monthly averages as a fixed known quantity. But they themselves are only estimates, with errors according to variation with time during the month, positional variation within the grid square, etc. If you look at the data where you have multiple floats in the same grid square in the same month, you could get a crude approximation of this variability within grid-month. Then in your Monte Carlo process you would have to generate your fixed average plus a random component with the appropriate variance.

The idea of any kind of accuracy or precision before the last 15 years just seems ridiculous. Making adjustments to 1/100th of a degree for how much the water temperature changed while a guy dragged up water in his canvas bucket and sat for a minute waiting for a mercury thermometer to stabilize? When those measurements were taken, the magnitude of the potential errors was well understood. Now we have so-called scientists who perform statistical analysis as if the data has no sampling error, and as if they can with incredible precision find conversion factors between different measurement techniques.

J Mac
January 11, 2019 10:38 pm

Willis,
Minor tweak: “As the number of observations goes up, the error bar decreases by one divided (by?) the square root of the number of observations. “

Excellent re-analysis of Hausfather’s flawed assertions!

January 11, 2019 10:48 pm

Willis, you mentioned about 2 years ago that Zeke was a “good smart guy”, yet he still hadn’t answered your question about the sawtooth record where the scalpel removes the recalibration information.

Willis Eschenbach January 30, 2017 at 7:58 pm
Sadly, Stephen, that question still isn’t answered. I saw Zeke Hausfather at the recent AGU meeting and he said they were looking at the issue … however, given that that has been the answer since June 2014, I have to confess that I figured his statement would sell at a significant discount from full retail price …
It’s too bad, because both Zeke and Mosher are good smart guys … does make a man wonder.
w.

Has this “good guy” ever answered you on this fundamental problem at the core of BEST analysis?

See this June 12, 2017 at 11:03 am comment for a traceback of the discussion.

Prjindigo
January 11, 2019 10:51 pm

You need to increase that “100 times as many data points” to “100 plus the inverse of the error margin times as many data points” because you must overcome the error to begin with.

Prjindigo
January 11, 2019 10:53 pm

oops… +”times the inverse of the error of individual units allowed over lifetime” as well. Sorry.

James McCown
January 11, 2019 11:36 pm

Besides the fact that the 350 zettajoule increase corresponds to less than one tenth of one degree Celsius, there is an additional question:

Have Trenberth and Hausfather come up with a satisfactory explanation of how that heat got sucked into the oceans in the first place?