Guest Post by Willis Eschenbach
We have gotten three more years of data for the CERES dataset, which is good; more data is always welcome. However, one of the sad things about the CERES dataset is that we can’t use it for net top-of-atmosphere (TOA) radiation trends. Net TOA radiation is what comes in (downwelling solar) minus what goes out (upwelling longwave and reflected solar). The difference between the two is the energy that is being stored, primarily in the ocean.
The problem is that according to the raw, unadjusted CERES data, there’s an average net TOA radiation imbalance of ~ 5 W/m2 … and that amount of imbalance would have fried the planet long ago. That means that there is some kind of systematic error between the three datasets (solar, reflected solar, and longwave).
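For scale, here is a back-of-envelope conversion (a sketch only; the area and year-length constants are round figures) of what a sustained 5 W/m2 imbalance means in energy terms:

```python
# Back-of-envelope: what a sustained 5 W/m2 global imbalance adds up to.
EARTH_AREA = 5.1e14        # m^2, Earth's surface area (round figure)
SECONDS_PER_YEAR = 3.156e7

imbalance = 5.0            # W/m^2, raw unadjusted CERES net TOA
joules_per_year = imbalance * EARTH_AREA * SECONDS_PER_YEAR
print(f"{joules_per_year:.1e} J/yr")  # ~8.0e+22 J/yr
```

That is roughly six times the energy accumulation implied by the 0.85 W/m2 figure discussed below, every single year, which is why the raw number cannot be physical.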
So, the CERES folks have gone for second best. They have adjusted the CERES imbalance to match the Levitus ocean heat content (OHC) data. And not just any interpretation of the Levitus data. They used the 0.85 W/m2 imbalance from James Hansen’s 2005 “smoking gun” paper. Now to me, starting by assuming that there is a major imbalance in the system seems odd. In any case, since the adjustment is arbitrary, the CERES trends in net TOA radiation are arbitrary as well. Having said that, here’s a comparison of what the Levitus ocean heat content (OHC) data says, with what the CERES data says.
Figure 1. CERES and Levitus ocean heat content data compared. The CERES data was arbitrarily set to an average imbalance of +0.85 W/m2 (warming).
I must admit, I don’t understand the logic behind setting the imbalance to +0.85 W/m2. If you were going to set it to something, why not set it to the actual trend over the period of the CERES data? My guess is that it was decided early on, say in 2006, when the trend was much closer to +0.85 W/m2 and people still believed James Hansen. In any case, the way they’ve set it doesn’t tell us much. Let’s see what else we can learn from the two datasets. First let’s take a look at the full Levitus dataset, and its associated error estimates.
Figure 2. The Levitus ocean heat content (OHC) dataset (upper panel), and its associated error.
I gotta say, I’m simply not buying those errors. Why would the error in 2005 be the same as the error in 1955?
In any case, we’re interested in the period during which the CERES and the Levitus datasets overlap, which is March 2000 to February 2013. To compare the two, we can adjust the CERES trend to match the Levitus data. Figure 3 shows that relationship. I’ve included the error data (light black lines).
Figure 3. Ocean heat content, with the trend of the CERES data re-adjusted to match the Levitus data. Light black lines show standard error of the Levitus data.
Now, I’m sure that you all can see the problems. In the CERES data, the change from quarter to quarter is always quite small. And this makes sense. The ocean has huge thermal mass. But according to the Levitus data, in a single quarter the ocean takes huge jumps. These lead to excursions that are much larger than the error bars.
To visualize this, we can plot up the quarter-to-quarter changes in ocean heat content. Figure 4 shows that relationship.
Figure 4. Quarterly changes in the ocean heat content. Note that this shows the quarterly change in OHC, so the units are different from those in Figures 1 and 3. Standard errors of the quarterly change are larger than those of the quarterly data, because two errors are involved in the distance between the two points.
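The error propagation described in the caption can be sketched in one line, assuming the two quarterly errors are independent and so add in quadrature:

```python
import math

# Standard error of a quarter-to-quarter difference: if successive
# quarterly OHC values have independent standard errors e1 and e2,
# the error of the difference adds in quadrature.
def diff_error(e1, e2):
    return math.sqrt(e1**2 + e2**2)

# With equal errors, the difference error is sqrt(2) ~ 1.41x larger
# than the error of a single quarterly value.
print(diff_error(1.0, 1.0))  # ~1.414
```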
As Figure 4 highlights, the disagreements between the Levitus and the CERES data are profound. For some 60% of the Levitus data, the error bars do not intersect the CERES data …
Conclusions? Well, my first conclusion is that I put much more stock in the CERES data than I do in the Levitus data. This is because of the very tight grouping of the CERES data in Figures 3 and 4. Here are the boxplots of the data shown in Figure 4:
Figure 5. Boxplots of the quarter-to-quarter differences of the Levitus and CERES datasets.
Remember that the tight grouping of the CERES data is the net of three different datasets—solar, reflected solar, and longwave. If you can get that tight a group from three datasets, it indicates that even though their accuracy is not all that hot, their precision is quite good. It is for that reason that I put much more weight on the CERES data than the Levitus data.
And as a result, all that this does is reinforce my previous statements about the error bars of the Levitus data. I’ve held that they are way too small … and both Figures 3 & 4 show that the error bars should be at least twice as large.
Next, the CERES data doesn’t vary a lot from a straight line. In particular, it doesn’t show the change in trend between the early and the later part of the Levitus record.
Finally, the CERES data provides a very precise measurement of the quarterly changes in OHC. Not only is their overall variation quite small, but they are highly autocorrelated. In no case are they greater than 0.5e+22 joules.
So for me, until the Levitus quarter-to-quarter changes get down to well under 1e+22 joules, I’m not going to put a whole lot of weight on the Levitus data.
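To put those joule figures in flux terms, here is a small conversion sketch (the area and quarter-length constants are round figures):

```python
EARTH_AREA = 5.1e14                 # m^2, Earth's surface area (round figure)
SECONDS_PER_QUARTER = 3.156e7 / 4   # ~7.9e6 s

def quarterly_joules_to_wm2(joules):
    """Convert a quarterly change in system energy (J) to the
    equivalent sustained global TOA flux (W/m^2)."""
    return joules / (SECONDS_PER_QUARTER * EARTH_AREA)

print(quarterly_joules_to_wm2(1e22))    # ~2.5 W/m^2
print(quarterly_joules_to_wm2(0.5e22))  # ~1.2 W/m^2
```

So a 1e+22 joule quarterly jump in the Levitus data is equivalent to a sustained ~2.5 W/m2 global flux for the whole quarter, which gives a sense of how large those excursions are.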
Regards,
w.
NOTE: see my previous post for the data and code.
Beware these words throughout all science: “adjusted data”
(As I’m sure you know… just adding a halfpenny’s worth of comment)
charles the moderator says:
January 5, 2014 at 5:50 am
Thanks, Charles. A couple of comments. First, what we see in the Figures above is the annual anomaly after the annual cycle has been removed. The normal seasonal cycles don’t show up at all.
Second, whatever warms the ocean also melts the ice and evaporates the water. As a result, the melting and evaporation will add to any OHC swings.
And that means that the melting and the evaporation is already included in the CERES data above … and thus, in the immortal words of Jim Hansen, it’s worse than we thought …
w.
Joe Born says:
January 5, 2014 at 6:29 am
I don’t know any way to get better numbers. You are right that more will melt and freeze than I estimated.
The main point, however, is that the overwhelming majority of the ice signal is cyclical, and so it will be removed because we are only looking at the non-seasonal (anomaly) signal.
w.
thanks REJ. 130 TW working out to 0.25 W/m2 seemed low to me because somewhere it was cited that photosynthesis efficiency was 3-6%. If global PS is only 0.25 W/m2 versus average sunlight at 340 W/m2 (1361 / 4, area of circle vs surface area of sphere), it would mean that only 1.2-2.5% of the surface of earth was engaged. Land is 30%, but some PS also occurs in the ocean/water.
The value of 60×10^22 J/yr is 1900 TW, which is a huge 3.7 W/m2, requiring PS on 18-36% of earth’s surface, which seems high to me. Anyways, should photosynthesis be included in the accounting for the difference between inbound and outbound radiation? Or is life not important?
I did say 5 W/m2 is a lot. Considering the 340 W/m2 inbound, that’s 1.5%, which is perhaps not bad considering the complexity in making this measurement.
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
In 2000 global ice area was around the 1979-2008 average used as the base in that plot. At the beginning of 2013 it was down nearly 1.5×10^6 km^2, about 1/4 of your ballpark annual swing calcs, i.e. 0.1e+22 joules.
Clearly there has been substantial loss in thickness in areas that are still covered year round, so that needs scaling up by some factor. The effect would be of the order of magnitude to be visible on figure 3.
I know there have been a number of papers from GRACE and other satellite estimations but they had massive uncertainty figures even when they were being honest about it. However, it looks big enough to be counted.
Since we seem to have regained as much ice in the last 12 months, it could account for the uptick at the end.
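The ice-area ballpark above can be sanity-checked against the latent heat of fusion. A sketch only; the ~2 m mean thickness is my assumption, not a measured value:

```python
# Sanity check on the ice-area energy estimate. The ~2 m mean
# thickness is an assumption, not a measured value.
AREA_LOSS = 1.5e12       # m^2 (the 1.5e6 km^2 drop in global ice area)
THICKNESS = 2.0          # m, assumed mean ice thickness
ICE_DENSITY = 917.0      # kg/m^3
LATENT_FUSION = 3.34e5   # J/kg, latent heat of fusion of ice

energy = AREA_LOSS * THICKNESS * ICE_DENSITY * LATENT_FUSION
print(f"{energy:.1e} J")  # ~9.2e+20 J, i.e. about 0.1e+22 joules
```

With those assumptions the melt energy comes out near the 0.1e+22 joule figure quoted, so the ballpark is at least internally consistent.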
Nic Lewis says:
January 5, 2014 at 7:50 am
Thanks, Nic. The 0.85 W/m2 number comes from an actual area-weighted average of the full EBAF Ed2.7 dataset. Let me check the period you mention …
Nope. The average of the toa_net_all dataset from July 2005 to June 2010 is 0.84. Go figure …
w.
timetochooseagain says:
January 5, 2014 at 9:45 am
1360.9 W/m2 …
w.
Joe Chang, you jumped a decimal point on me. It was 6 not 60. However, thinking about the numbers, I believe that most estimates given for productivity are net and not gross. So, this estimate at 60 PgC y-1 is low by a factor of two. The real number should be greater than 10^23 joules per year. These are large numbers; you should read the paper and see how they are generated.
One thing that I think could produce a systematic imbalance in CERES is surface reflection at low incidence angle. This particularly affects the polar regions on open water, and even more so on the stiller melt pond water.
Satellite measurements are essentially downward looking but have to be protected against getting a direct flash when their orbit puts them facing the sun as they come over the pole, so they have a shutter which protects the radiometer in this position.
Most of the time they will not be measuring surface reflected solar because they are not in the right position and when they are they flip the shutter and don’t get any data.
We are all familiar with sun reflected off water and at low incidence it can be a large proportion of the light which is reflected.
If we are talking about 5/1361 that’s only 0.36% of full incoming flux.
This specular reflection is never taken into account in “albedo” figures. Neither is it accounted for in PIOMAS and other ice models, as far as I have been able to ascertain.
It will only be a very small fraction of the exposed surface which is concerned by this but then 0.36% is “a very small fraction”.
PS, this reflection will be much more important with the larger exposed area in Arctic and melt ponds since 2007.
Well darn, there goes my first guess as to where the problem is coming from.
So we are left with: CERES has systematically underestimated the reflected solar radiation by about 5 W/m^2, systematically underestimated the outgoing longwave by about 5 W/m^2, or it’s a shared underestimate and some comes from too little reflected solar, some from too little outgoing longwave.
Greg says:
January 5, 2014 at 10:28 am
Greg and others, let me thank you for your speculations. To focus them, however, let me remind everyone that whatever climate phenomena you come up with to explain the large quarterly changes in the Levitus data, you also need to explain why these phenomena have not affected the CERES data.

I do like your idea about the changes in the global sea ice area, however. Because these changes do NOT seem to show up in the CERES data, this could provide further evidence that the temperature of the earth is thermoregulated.
For folks information, here’s what we really have to explain—the changes (and the lack of changes) in the CERES data.
Figure S1. Decomposition of CERES system energy content. Since this contains all changes in system energy and not just ocean heat content (OHC), I have named it accordingly. Note the range bars at the right side of each panel, which show the relative sizes of the residuals at the various scales.
The panel of interest is panel three, “Trend”. This shows the variations in the total system energy from all causes.
w.
Willis writes:
“all that this does is reinforce my previous statements about the error bars of the Levitus data. I’ve held that they are way too small … and both Figures 3 & 4 show that the error bars should be at least twice as large.”
How large do you think Figures 3 and 4 suggest the error bars should be? If these are one-standard-deviation error bars on the ocean heat content, then crudely we would expect 68% of CERES points within one standard deviation, 95% within two standard deviations, and 99.7% within three standard deviations.
Of course, if there are errors on the CERES data (there surely are, but I’m not sure how large) or other sources of heat storage that others have pointed to, then we would expect somewhat more disagreement than that.
If there are some outliers greater than 3 standard deviations, that probably just reflects some lack of normality?
Overall, I don’t think Figures 3 & 4 make a strong case that errors are underestimated. But to back that up I would have to look at more quantitative calculations rather than just eyeballing.
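For reference, the Gaussian coverage fractions quoted above can be computed directly (a sketch assuming normally distributed errors and one-standard-error bars):

```python
import math

# Expected fraction of points falling OUTSIDE k standard errors,
# assuming Gaussian errors.
def frac_outside(k):
    return 1.0 - math.erf(k / math.sqrt(2.0))

print(frac_outside(1))  # ~0.32: expect ~32% outside 1-sigma bars
print(frac_outside(2))  # ~0.046
print(frac_outside(3))  # ~0.0027
```

Against the ~32% expected outside one-sigma bars, the ~60% figure Willis reports is high, but as noted, settling it properly would need the actual residuals rather than eyeballing.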
Willis, I have been thinking about your data and I think that you have uncovered a case of deception. Why do I say that? If you are a scientist or engineer and your hardware is not getting good data, you want to fix it in the worst way. Either the data is good, or there would have been a team of technical specialists, engineers, and scientists working the problem or proposing a new instrument. What does this mean? It means that they understand what is happening and want to keep it from the public. The data has been faked to cover something that they feel would not be to their advantage if it were widely known.
Willis, look at the headline graph on this article on arctic ice extent:
http://climategrog.wordpress.com/category/periodic-analysis/
compare to your extracted “trend” line.
General drop from 1997 to 2007, post-2007 recovery for a couple of years, and recovery at the end (your trend starts its upturn earlier than the Arctic).
Now look at global ice area here.
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
Similar picture, except that the global upturn at the end starts late 2011, like your trend.
How about you grab the global ice data and try to overlay it on the trend? 😉
Here is a simple experiment. You are trying to cut a board that must be exactly 8′ 7-13/64″ long for a shelf in your closet. The only, and I mean ONLY, thing you have to make any measurements with is an old hardware store yard stick. The smallest readable graduations are 1/4 inch. You have no string, rope, building square, framing triangle, etc., just the yard stick. I don’t care how many times you measure the board and how many times you average each measurement or use “RMS averaging”, you will never get the correct length. Eventually you may cut a board and it may fit the shelf, but it probably is not the exact length.
Data available on the internet about the ARGO buoys provides information about the accuracy. To achieve that accuracy the instrument must be calibrated to an NBS-traceable standard, normally in an environmentally controlled area. The facility I worked with had NBS-traceable monitors with alarms on several walls and a double-door-lock entrance. Readings of that accuracy would in turn be obtainable only if the equipment were used in laboratory conditions (an environmentally controlled area, including at least temperature, humidity, and pressure conditions equal to the conditions of calibration, +/- a few degrees). The ARGO probes are, from my understanding, subject to about a 50 degrees F change from bottom to top of travel. From the Seabird technical references you can find that you will get about a 1% error from a temperature change of 50 degrees C.
Now explain how they compensate/correct for the fact that different buoys will have different surface and lower sample point temperatures.
With that fixed, explain how they correct for the fact that some buoys may take longer ascending/descending than others (on purpose or otherwise) and this will cause different errors (equipment inside the buoy warms/cools faster/slower), and the errors will be different ascending than descending.
1. The fact that you have 3000 buoys does not mean that you have 3000 samples of the same temperature that can be averaged using the RMS accuracy rules (square root of the sum of the squares – in the industry we usually said RMS). In my study and review of the use of RMS averaging in temperature measurement during the development of the Instrument Society of America (ISA) Standard on this topic we concluded that all measurements MUST be of the exact same entity at the exact same time under the exact same conditions.
2. The fact that you add up 3000 surface level temperatures (e.g., numbers between 30 and 90 degrees F), then divide by 3000 and get a number out to 3 or more decimal points, does not mean that is the temperature to within anything better than +/- 1.5 degrees C (or 0.25%; in fact you must use the WORST accuracy to be accurate). PERIOD. The RMS accuracy rules for averaging samples do not apply. IT DOES NOT WORK THAT WAY.
3. Also, the accuracy for essentially every instrument I have ever worked with, including precision laboratory standard instruments, when expressed in terms of %, is percent of the MAXIMUM reading for the range selected. (It is a common misconception that the 0.01% means percent of the reading; this is not normally the case, even for instruments selling for more than $10,000. Read the fine print on the accuracy specifications.)
All of this tells me that their error bands are MUCH larger. I will allow that the “trend” is shown, assuming that the trend is not caused by other environmental factors affecting the measuring equipment. E.g., are more buoys in an area that is warming and fewer in an area that is not warming?
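The random-versus-systematic distinction underlying points 1-3 can be shown with a toy simulation (all numbers purely illustrative): averaging many independent readings does shrink random scatter by roughly 1/sqrt(N), but it cannot touch a bias shared by every instrument, which is why calibration rather than averaging is the crux.

```python
import random

# Toy simulation: averaging shrinks RANDOM error, not SYSTEMATIC bias.
# All numbers are illustrative only.
random.seed(1)
N = 3000
TRUE_TEMP = 15.0
BIAS = 0.5        # systematic offset shared by every instrument
NOISE = 1.5       # per-reading random error (standard deviation)

readings = [TRUE_TEMP + BIAS + random.gauss(0.0, NOISE) for _ in range(N)]
mean = sum(readings) / N

# The mean lands near TRUE_TEMP + BIAS: the random scatter has
# averaged down to ~NOISE/sqrt(N) ~ 0.03, but the 0.5 bias remains.
print(mean)
```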
Willis, my fault entirely. I read your post just after getting up this morning (UK time). I don’t think I was fully awake. You are, of course, correct. A constant or near constant TOA imbalance will produce a trend in net energy storage. I realised my mistake when I took a closer look at your Fig 1.
Thanks for your reply.
Want to make the warmists sick? Ask them how that AGW is working for them when we have snow owls in California: http://www.miamiherald.com/2014/01/03/3850232/snowy-owl-invasion-of-us-extends.html#
The Levitus data: more regionalism? Data is not distributed evenly enough, so different groupings result in different mathematically derived averages that drive the end result? A Yamal problem, whereby one or two strong “regional” datapoints dominate the computation?
Determinable by a colour coding of initial data that generates the curves. Or frequency breakdown of data by coding of regions.
response to a comment upstream: No the air does not need to heat first before the oceans warm. SW infrared zooms right through dry air, bypassing molecular components such as CO2. It can be reflected for sure (by clouds and other forms of water vapor), but the air can be cold as hell yet the ocean will warm from SW IR.
Willis, you say:
“The average of the toa_net_all dataset from July 2005 to June 2010 is 0.84. Go figure …”
Well, that’s weird. I downloaded the monthly 03/2000 – 06/2013 TOA Net Flux – All-Sky global data for CERES_EBAF-TOA_Ed2.7 and took the mean of months 65 to 124. It is neither 0.58 nor 0.84, but 0.623. Beats me…
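One mundane possibility in this sub-thread is index bookkeeping. A hypothetical helper (the functions and names are mine, not from either author's code) for checking the month window, counting March 2000 as month 1:

```python
# Hypothetical bookkeeping helper: with March 2000 as month 1,
# July 2005 is month 65 and June 2010 is month 124.
def month_index(year, month, start_year=2000, start_month=3):
    """1-based month number counting from the first CERES month."""
    return (year - start_year) * 12 + (month - start_month) + 1

assert month_index(2005, 7) == 65
assert month_index(2010, 6) == 124

def window_mean(series, first, last):
    """Mean of 1-based months first..last inclusive."""
    vals = series[first - 1:last]
    return sum(vals) / len(vals)
```

Slicing `[65:124]` instead of `[64:124]` silently drops July 2005 from a zero-based array, which is one easy way two people can get different means from the same file.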
Doug, there may be some truth in that but the main difference between OHC and CERES is that OHC is just one part (albeit the largest part) of what CERES is measuring. Or to be more accurate what Willis is calling CERES here which I’m assuming is the cumulative integral of CERES TOA radiation budget.
What is labelled “trend” in the latest graph, which I think is a detrended, low-pass filtered (SVD?) component of it, seems to resemble changes in ice area, which is a (questionable) proxy for ice volume, which is also an energy term via latent heat of fusion.
As ice is freezing, it’s dumping latent heat back into the system and this is visible in the TOA budget.
@Willis main post:
The problem is that according to the raw, unadjusted CERES data, there’s an average net TOA radiation imbalance of ~ 5 W/m2 … and that amount of imbalance would have fried the planet long ago. That means that there is some kind of systematic error between the three datasets (solar, reflected solar, and longwave).
Systematic Error. I think it is best to remember from where the “CERES dataset” comes. CERES data comes from a device on sun-synchronous satellites (TERRA, AURA, AQUA) which are in 720 km high, 99 min. orbits. TERRA only sees the earth at solar 10 am to 11 am (depending on latitude), making its equatorial pass at 10:30 am. AURA and AQUA are in the same orbital plane train, 8 minutes apart, making sunlight equatorial passes at 1:30 pm. But that does not cover the entire CERES dataset.
In the CERES dataSET, 12 pm, 1 pm, 2 pm, 3 pm, etc. coverage comes from low-resolution geosynchronous MODIS data from GOES satellites, which are converted (SOMEHOW!!) into CERES data using the high-res CERES data from the 10:30 am and 1:30 pm passes. The CERES DATASET is really GOES data recalibrated from CERES data from about 10:30 am and about 1:30 pm.
Just how well the recalibrated GOES solar 3:00 pm data would match real CERES data at a solar 3:00 pm orbit is at present an unanswerable question, because there is no solar 3 pm CERES data collected.
(continuation of 3:37 pm above)
Willis (main post):
A poorly understood adjustment for the systematic error within the insufficiently calibrated GOES data embedded in 85% of the CERES dataset is the only smoking gun worth remembering.
More from my Oct 10, ’13 post
Frankly, I am not surprised that the raw CERES dataset, which is GOES hourly data calibrated by CERES at only two times of the day, has a systematic error in heat (either plus or minus). What I am surprised to find is the bald-faced adjustment of the data to justify a claim of “missing heat”.
There is no “missing heat”. There is only the inability to measure the heat flux with the precision needed. What IS missing is honesty.
http://www.geoffstuff.com/The%20problem%20-solar%20irradiance.JPG
http://www.atmos-chem-phys.net/13/3945/2013/acp-13-3945-2013.pdf
http://spot.colorado.edu/~koppg/TSI/index.html#references
http://spot.colorado.edu/~koppg/TSI/