Guest Post by Willis Eschenbach
Among the recent efforts to explain away the effects of the ongoing “pause” in temperature rise, there’s an interesting paper by Dr. Anny Cazenave et al. entitled “The Rate of Sea Level Rise”, hereinafter Cazenave14. Unfortunately it is paywalled, but the Supplementary Information is quite complete and is available here. I will reproduce the parts of interest.
In Cazenave14, they note that in parallel with the pause in global warming, the rate of global mean sea level (GMSL) rise has also been slowing. Although they get somewhat different numbers, this is apparent in the results of all five of the groups processing the satellite sea level data, as shown in the upper panel “a” of Figure 1 below.

Figure 1. ORIGINAL CAPTION: GMSL rate over five-year-long moving windows. a, Temporal evolution of the GMSL rate computed over five-year-long moving windows shifted by one year (start date: 1994). b, Temporal evolution of the corrected GMSL rate (nominal case) computed over five-year-long moving windows shifted by one year (start date: 1994). GMSL data from each of the five processing groups are shown.
Well, we can’t have the rate of sea level rise slowing; that doesn’t fit the desired message. So they decided to subtract out the inter-annual variations in the two components that make up the sea level—the mass component and the “steric” component. The bottom panel shows what they ended up with after they calculated the inter-annual variations and subtracted them from each of the five sea level processing groups’ results.
So before I go any further … let me pose you a puzzle I’ll answer later. What was it about Figure 1 that encouraged me to look further into their work?
Before I get to that, let me explain in a bit more detail what they did. See the Supplemental Information for further details. They started by taking the average sea level as shown by the five groups. Then they detrended that. Next they used a variety of observations and models to estimate the two components that make up the variations in sea level rise.
The mass component, as you might guess, is the net amount of water either added to or subtracted from the ocean by the vagaries of the hydrological cycle—ice melting and freezing, rainfall patterns shifting from ocean to land, and the like. The steric (density) component of sea level, on the other hand, is the change in sea level due to changes in the density of the ocean as the temperature and salinity change. The sum of the changes in these two components gives us the changes in the total sea level.
Next, they subtracted the sum of the mass and steric components from the average of the five groups’ results. This gave them the “correction” that they then applied to each of the five groups’ sea level estimates. They describe the process in the caption to their graphic below:
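As best I understand their procedure, the bookkeeping can be sketched in a few lines of code. To be clear, the series below are made-up stand-ins, not their data, and the variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 216   # Jan 1994 - Dec 2011

# stand-ins for the five groups' monthly GMSL anomalies (mm)
groups = [rng.normal(0, 2, months) + 0.25 * np.arange(months) for _ in range(5)]

# stand-ins for the modeled components (mm)
mass = rng.normal(0, 1.5, months)
steric = rng.normal(0, 1.0, months)

mean_gmsl = np.mean(groups, axis=0)

# the "correction": the gap between the modeled components and the mean GMSL
correction = (mass + steric) - mean_gmsl

# ...applied identically to each group's series
corrected = [g + correction for g in groups]
```

Note what falls out of this bookkeeping: the mean of the corrected series is exactly the modeled mass + steric series, whatever the data looked like.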

Figure 2. This is Figure S3 from the Supplemental Information. ORIGINAL CAPTION: Figure S3: Black curve: mean detrended GMSL time series (average of the five satellite altimetry data sets) from January 1994 to December 2011, and associated uncertainty (in grey; based on the dispersion of each time series around the mean). Light blue curve: interannual mass component based on the ISBA/TRIP hydrological model for land water storage plus atmospheric water vapour component over January 1994 to December 2002 and GRACE CSR RL05 ocean mass for January 2003 to December 2011 (hybrid case 1). The red curve is the sum of the interannual mass plus thermosteric components. This is the signal removed to the original GMSL time series. Vertical bars represent the uncertainty of the monthly mass estimate (of 1.5 mm22, 30, S1, S3; light blue bar) and of the monthly total contribution (mass plus thermosteric component) (of 2.2 mm, ref. 22, 30, 28, 29, S1, S3; red bar). Units : mm.
So what are they actually calculating when they subtract the red line from the black line? This is where things started to go wrong. The blue line is said to be the detrended mass fluctuation, including inter-annual storage on land as well as in water vapor. The black line is said to be the detrended average of the GMSL. The red line is the blue line plus the “steric” change from thermal expansion. Here are the difficulties I see, in increasing order of importance. However, any one of the following difficulties is sufficient in and of itself to falsify their results.
• UNCERTAINTY
I digitized the above graphic so I could see what their correction actually looks like. Figure 3 shows that result in blue, including the 95% confidence interval on the correction.
Figure 3. The correction applied in Cazenave14 to the GMSL data from the five processing groups (blue).
The “correction” that they are applying to each of the five datasets is only statistically different from zero for 10% of the datapoints. This means that 90% of their “correction” is not distinguishable from random noise.
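The test here is nothing fancy: count the data points whose confidence interval excludes zero. A sketch of the calculation, with synthetic numbers standing in for the digitized correction, and using their stated 2.2 mm monthly uncertainty as the interval half-width:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for the digitized "correction" (mm), roughly noise-sized
correction = rng.normal(0, 1.0, 200)

# their stated monthly uncertainty of 2.2 mm, used as the CI half-width
ci_halfwidth = np.full(correction.size, 2.2)

# a point is distinguishable from zero only if its CI excludes zero
significant = np.abs(correction) > ci_halfwidth
frac_significant = significant.mean()
```

With a noise-sized correction and a 2.2 mm interval, only a small fraction of points clear the bar, which is the situation shown in Figure 3.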
• TREND
In theory they are looking at just inter-annual variations. To get these, they describe the processing. The black curve in Figure 2 is described as the “mean detrended GMSL time series” (emphasis mine). They describe the blue curve in Figure 2 by saying (emphasis mine):
As we focus on the interannual variability, the mass time series were detrended.
And the red curve in Figure 2 is the mass and steric component combined. I can’t find anywhere that they have said that they detrended the steric component.
The problem is that in Figure 2, none of the three curves (black: GMSL, blue: mass, red: mass + steric) is fully detrended, although all of them are close. The black curve trends up and the other two trend down.
The black GMSL curve still has a slight trend, about +0.02 mm/yr. The blue mass curve goes the other way, about -0.06 mm/yr. The red curve exaggerates that a bit, taking the total trend of the modeled components to about -0.07 mm/yr. And that means that the “correction”, the difference between the red curve showing the mass + steric components and the black GMSL curve, does indeed have a trend as well. Since the two trends have opposite signs, their magnitudes add, giving about a tenth of a mm per year.
Like I said, I can’t figure out what’s going on in this one. They talk about using the detrended values for determining the inter-annual differences to remove from the data … but if they did that, then the correction couldn’t have a trend. And according to their graphs, nothing is fully detrended, and the correction most definitely has a trend.
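For reference, here's what “detrended” should mean in practice: fit a straight line and remove it, after which the fitted slope of the residual is zero by construction. A minimal sketch with a hypothetical series:

```python
import numpy as np

def trend_per_year(y, dt_years):
    """OLS slope of y against time, in y-units per year."""
    t = np.arange(len(y)) * dt_years
    slope, intercept = np.polyfit(t, y, 1)
    return slope

def detrend(y, dt_years):
    """Remove the OLS straight-line fit from y."""
    t = np.arange(len(y)) * dt_years
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

rng = np.random.default_rng(2)
months = 216                                   # Jan 1994 - Dec 2011
y = 3.2 * np.arange(months) / 12 + rng.normal(0, 2, months)  # ~3.2 mm/yr + noise

resid = detrend(y, 1 / 12)   # fitted slope of resid is zero by construction
```

So if all three curves had actually been put through something like `detrend`, none of them could show a residual trend, and neither could their difference.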
• LOGIC
The paper includes the following description regarding the source of the information on the mass balance:
To estimate the mass component due to global land water storage change, we use the Interaction Soil Biosphere Atmosphere (ISBA)/Total Runoff Integrating Pathways (TRIP) global hydrological model developed at Météo-France (ref. 22). The ISBA land surface scheme calculates time variations of surface energy and water budgets in three soil layers. The soil water content varies with surface infiltration, soil evaporation, plant transpiration and deep drainage. ISBA is coupled with the TRIP module that converts daily runoff simulated by ISBA into river discharge on a global river channel network of 1° resolution. In its most recent version, ISBA/TRIP uses, as meteorological forcing, data at 0.5° resolution from the ERA Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (www.ecmwf.int/products/data/d/finder/parameter). Land water storage outputs from ISBA/TRIP are given at monthly intervals from January 1950 to December 2011 on a 1° grid (see ref. 22 for details). The atmospheric water vapour contribution has been estimated from the ERA Interim reanalysis.
OK, fair enough, so they are using the historical reanalysis results to model how much water was being stored each month on the land and even in the air as well.
Now, suppose that their model of the mass balance were perfect. Suppose further that the sea level data were perfect, and that their model of the steric component were perfect. In that case … wouldn’t the “correction” be zero? I mean, the “correction” is nothing but the difference between the modeled sea level and the measured sea level. If the models were perfect the correction would be zero at all times.
Which brings up two difficulties:
1. We have no assurance that the difference between the models and the observations is due to anything but model error, and
2. If the models are accurate, just where is the water coming from and going to? The “correction” that gets us from the modeled to the observed values has to represent a huge amount of water coming and going … but from and to where? Presumably the El Nino effects are included in their model, so what water is moving around?
The authors explain it as follows:
Recent studies have shown that the short-term fluctuations in the altimetry-based GMSL are mainly due to variations in global land water storage (mostly in the tropics), with a tendency for land water deficit (and temporary increase of the GMSL) during El Niño events and the opposite during La Niña. This directly results from rainfall excess over tropical oceans (mostly the Pacific Ocean) and rainfall deficit over land (mostly the tropics) during an El Niño event. The opposite situation prevails during La Niña. The succession of La Niña episodes during recent years has led to temporary negative anomalies of several millimetres in the GMSL, possibly causing the apparent reduction of the GMSL rate of the past decade. This reduction has motivated the present study.
But … but if that’s the case then why isn’t this variation in rainfall being picked up by the whiz-bang “Interaction Soil Biosphere Atmosphere (ISBA)/Total Runoff Integrating Pathways (TRIP) global hydrological model”? I mean, the model is driven by actual rainfall observations, including all the data of the actual El Nino events.
And assuming that such a large and widespread effect isn’t being picked up by the model, in that case why would we assume that the model is valid?
The only way that we can make their logic work is IF the hydrologic model is perfectly accurate except it somehow manages to totally ignore the atmospheric changes resulting from El Nino … but the model is fed with observational data, so how would it know what to ignore?
• OVERALL EFFECT
At the end of the day, what have they done? Well, they’ve measured the difference between the models and the average of the observations from the five processing groups.
Then they have applied that difference between the two to the individual results from the five processing groups.
In other words, they subtracted the data from the models … and then they added that amount to the data. Let’s do the math …
Data + “Correction” = Data + (Models – Data) = Models
How is that different from simply declaring that the models are correct, the data is wrong, and moving on?
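The algebra is trivial to verify with any numbers at all (placeholders below):

```python
import numpy as np

data = np.array([2.8, 3.1, 2.5, 3.3])     # placeholder observations
models = np.array([3.2, 3.4, 3.0, 3.1])   # placeholder model output

correction = models - data
corrected = data + correction

# corrected is identical to models, whatever the inputs were
```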
CONCLUSIONS
1. Even if the models are accurate and the corrections are real, the size doesn’t rise above the noise.
2. Despite a claim that they used detrended data in calculating their corrections, their graphic display of that data shows that all three datasets (GMSL, mass component, and mass + steric components) contain trends.
3. We have no assurance that the “correction”, which is nothing more than the difference between observations and models, is anything more than model error.
4. The net effect of their procedure is to transform observational results into modeled results. Remember that when you apply their “correction” to the average mean sea level, you get the red line showing the modeled results. So applying that same correction to the five individual datasets that make up the average mean sea level is … well … the word that comes to mind is meaningless. They’ve used a very roundabout way to get there, but at the end they are merely asserting that the models are right and the data is wrong …
Regards to all,
w.
PS—As is customary, let me ask anyone who disagrees with me or someone else to quote the exact words that you disagree with in your reply. That way, we can all be clear about what you object to.
PPS—I asked up top what was the oddity about the graphs in Figure 1 that made me look deeper. Well, in their paper they say that the same correction was applied to the data of each of the processing groups. Unless I’m mistaken (always possible), this should amount to a uniform shift of each month’s worth of data. In other words, the adjustment for each month was the same for all datasets, whether it was +0.1 or -1.2 or whatever. It was added equally to that particular month in the datasets from all five groups.
Now, there’s an oddity about that kind of transformation, of adding or subtracting some amount from each month. It can’t uncross lines on the graph if they start out crossed, and vice versa. If they start out uncrossed, their kind of “correction” can’t cross them.
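A toy demonstration of that point: adding the identical per-month offset to every dataset leaves the month-by-month ordering of the curves, and hence any crossings, exactly as they were:

```python
import numpy as np

rng = np.random.default_rng(3)
months = 24
a = rng.normal(0, 1, months)          # one group's monthly series
b = rng.normal(0, 1, months)          # another group's
offset = rng.normal(0, 1, months)     # the same monthly "correction" for both

# the sign of (a - b) each month says which curve is on top;
# adding the identical offset to both leaves that sign unchanged
before = np.sign(a - b)
after = np.sign((a + offset) - (b + offset))
```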
With that in mind, here’s Figure 1 again:
I still haven’t figured out how they did that one, so any assistance would be gratefully accepted.
DATA AND CODE: Done in Excel, it’s here.
ATheoK says:
March 28, 2014 at 8:54 pm
I’m ashamed to see that ‘sky ending on Lewny’s name; people of Polish and Ukrainian ancestry everywhere are similarly embarrassed.
===========================
They should be, it’s for the first case worldwide. We have our own Lew here in Brazil: Ricardo Lewandowsky, a Supreme Court Justice as pathetic as his namesake.
Alcheson says:
March 28, 2014 at 8:49 pm
Thanks, Alcheson. It stops in 2010 because it’s a five-year centered average, so it stops two years before the end of the record.
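A toy illustration of the endpoint effect, with a made-up annual record (not the actual data):

```python
import numpy as np

years = np.arange(1993, 2013)   # a made-up annual record, 1993-2012
window = 5                       # five-year centered window
half = window // 2

# the window only fits where there are two full years on each side,
# so the last computable center sits two years before the end
centers = years[half:len(years) - half]
```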
w.
Just occurred to me … shouldn’t they go back and also apply the corrections to the pre-1998 data? If they did, wouldn’t that once again show a decreasing rate of sea level rise? Comparing corrected data to the uncorrected prior rate is dishonest.
I don’t mean to be dismissive either of the paper or this critique, since it’s important to figure out how sea level data should be analyzed in the context of other data, and models. But the basic conclusions of Cazenave’s paper do seem to be strongly supported by more recent data, which show that sea level has resumed its upward march: http://sealevel.colorado.edu/content/2014rel2-global-mean-sea-level-time-series-seasonal-signals-retained
we should forgive the alarmists,
http://discovermagazine.com/2010/jul-aug/29-why-scientific-studies-often-wrong-streetlight-effect
Why Scientific Studies Are So Often Wrong: The Streetlight Effect
Researchers tend to look for answers where the looking is good, rather than where the answers are likely to be hiding.
oh and spare a thought for Spain on Earth hour day, hope the electrics don’t fail them
Snowfall warning for Spain
A total of 24 provinces will be under hazard warning (yellow) or significant risk (orange) this Saturday
Gamecock says:
March 29, 2014 at 5:46 am
Gamecock, if you have EVIDENCE to back up your claim that the sea level data is affected by the changes in the mid-ocean ridge, this would be the time to bring it up … it’s an interesting hypothesis, but without data it’s not much use.
In any case, I doubt that the changes in the seafloor in 20 years are enough to make any difference in this analysis …
w.
Bill_W says:
March 29, 2014 at 6:50 am
Some other folks and I have been pointing out, as loudly as we can, that people are dying. See for example “We Have Met The 1% And He Is Us” and “James Hansen’s Policies Are Shafting The Poor”.
The main point is not the assigning of blame or responsibility, however. The main point is that the insane war on carbon is impoverishing and killing the poor.
w.
It is almost as if, on any given day, Any Brazen Knave might put out a “study” in hopes of sucking in some money.
Sometimes they succeed, indeed…
george e smith says:
March 29, 2014 at 9:40 am
Good question, george. To measure the sea surface to 1 mm accuracy from a satellite 1,300 km above the surface you have to be precise to one part in 10^9 …
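The arithmetic behind that figure is just a ratio; a one-liner check, using the rough 1,300 km altitude mentioned above:

```python
# resolving the sea surface to 1 mm from ~1,300 km up means resolving
# about one part in 1.3 billion of the measured range
altitude_mm = 1300 * 1000 * 1000        # ~1,300 km expressed in millimetres
required_precision = 1.0 / altitude_mm  # roughly one part in 10^9
```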
I assume that is among the reasons for the difference in e.g. the trends shown in Figure 1 as calculated by the five processing groups.
w.
It is not directly obvious; nonetheless, there are implicit decisions and tough choices behind this paper.
Their grant contract is nearing its end, the grantor is asking about results, and the recipient organization is insisting on a peer (cough, pal, cough, spouse) reviewed published paper to bolster the organization’s ranking.
Which is worse, after eighteen months of loose schedules, fine meals, fine residences and sun-drenched trips: to be found with near zero research work, or to deliver a gobbledygook manifesto aiding the distraction claims for the cause of the pause?
That’s easy, though not as easy as the eighteen months of good living: writ a quickie paper using math jargon, funky math and quintuple speech. Plus they need the paper to justify their current (g)rant requests.
After all, what is one more dodgy paper amongst so many alarmist dodges; and they’re guaranteed alarmist cries of pleasure and blind veneration.
/sarc
Willis, if they are going to apply corrections to the recent satellite data to show that the increase is continuing at the same rate as it was earlier, don’t they also need to apply those corrections to the earlier data as well if they want to have a valid comparison? What happens to the earlier data if they apply these same corrections to it? If these corrections, applied to the earlier data, make it even more in disagreement with the tide gauge data, then I think that shows these corrections are likely inappropriate. To me, it’s kind of like “Mike’s Nature Trick”: applying a correction only to the part of the data they don’t like.
rgbatduke says:
March 29, 2014 at 5:39 am “It’s a bit more complicated than that — ”
While global average mean sea level rise may be more complicated than that, when it comes to what each specific area needs to prepare for, tide gauges are the only ones that matter, because they are actually measuring the REAL change in sea level relative to that specific location.
For example, if your local tide gauges are showing a decreasing sea level in your area with time, it makes absolutely no sense to base your local laws and environmental policies on the assumption that the area will be underwater in the near future. Same goes for the opposite: if mean sea level were dropping but your local tide gauges showed rapid relative sea level rise occurring in your particular area, you had better plan accordingly and NOT base your regulations on the decreasing MEAN sea level. So, in the end, it is the tide gauges that are the ONLY important measure with respect to policy. The rest is just an academic exercise.
Hello,
Sorry for being off topic, but your graphs, Willis: they are almost always very clear, very nice looking, and combine different information, or use a background picture, in a way that blends in very well. What tool do you use for making these nice-looking graphs?
— Mats —
If the heat is hiding in the deep oceans, why has the sea volume (and hence its level) gone down in recent years? Options are: it’s evaporating more (which might be the case if there were greater surface temperatures), there is less fresh water feeding into it, or it is cooling.
“””””…..Willis Eschenbach says:
March 29, 2014 at 11:02 am
george e smith says:
March 29, 2014 at 9:40 am
I discovered, that you could not get very accurate results; dependent on measuring the actual optical length in air of this device. …..”””””
They do get a break that I didn’t, Willis: presumably they use a laser probe, so at least they can forget the dispersion of the atmospheric refractive index.
In determining the optical length of the resonant cavity in an FP etalon, you have to account for the fact that there is a non-mechanical phase shift on reflection, due to the optical parameters of the etalon reflecting mirrors (which were simply silver in my gizmo), but that translates into just another unknown in the optical length. You can resolve a small fraction of a wavelength because of the sharp, multiple-beam interference in the FP, whereas a Michelson interferometer gives you sinusoidal fringes, with a lot less fractional resolution. The problem is that you might be able to determine the fractional fringe to 0.01 wavelength (these days), but the mechanical uncertainty could be ten wavelengths, due to micrometer accuracy.
The trick is that by using a number of wavelengths, each giving some integer uncertainty in the fringe count, you can add the observed fractional fringe to, say, a dozen sequential “guesses” at the integer fringe count, and do that for four or five wavelengths some distance apart; then only one of those guesses will yield the same cavity length for all wavelengths, and that resolves the micrometer uncertainty.
Modern FPs with dielectric mirrors, have way more resolving power than the Edmunds Scientific, type of gizmo, we had in school.
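If I understand the description, that is the classic “method of exact fractions”; here is a rough sketch in code of how the integer-order ambiguity gets resolved. All the numbers are invented for illustration:

```python
def exact_fractions(frac_orders, wavelengths_nm, d_guess_nm, search=15):
    """Recover a cavity's optical length from the fractional fringe orders
    observed at several wavelengths, plus a coarse micrometer estimate."""
    lam0 = wavelengths_nm[0]
    n0_center = round(2 * d_guess_nm / lam0)   # coarse guess at integer order
    best_d, best_err = None, float("inf")
    for n0 in range(n0_center - search, n0_center + search + 1):
        d_cand = (n0 + frac_orders[0]) * lam0 / 2   # candidate cavity length
        # how badly this candidate misses the observed fractions elsewhere
        err = 0.0
        for lam, f in zip(wavelengths_nm[1:], frac_orders[1:]):
            resid = (2 * d_cand / lam - f) % 1.0
            err += min(resid, 1.0 - resid)
        if err < best_err:
            best_d, best_err = d_cand, err
    return best_d

# invented example: true length 123456.7 nm, micrometer reading off by ~3 um
wavelengths = [632.8, 543.5, 594.1, 611.9]             # nm
d_true = 123456.7
fracs = [(2 * d_true / w) % 1.0 for w in wavelengths]  # the "observations"
d_found = exact_fractions(fracs, wavelengths, d_guess_nm=d_true + 3000.0)
```

Only one candidate integer order makes all four wavelengths agree, which is the whole point of using several lines some distance apart.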
Robert Brown
Sure, the British do have centuries-old ship quays. There is one some 400 years old a few hundred yards from my home. It’s very difficult to discern any sea level rise over several hundred years, but levels are greatly complicated by the fact that land can be either rising or falling as a result of glacial action, and this rise or fall is likely to be greater than the actual change in water volume caused by melting glaciers or thermal expansion. Global averages for sea levels are as nonsensical as global averages for land temperatures.
Tonyb
There is a limit to how much heat you can hide in the deep ocean, because you are constrained by the thermosteric expansion coefficient of about 150-300 ppm/°C, depending on pressure, salinity, and temperature, for almost all of the volume. If the rise of the oceans since 1900 at a fairly steady 2 mm/year were 100% thermal expansion, with no melting glaciers, etc., then given the average ocean depth of about 4000 m, 0.002m/4000m = 0.5 ppm/year. That translates to a temperature change of 0.5ppm/(150-300ppm/°C) = 0.0017 to 0.0033°C/year. If you multiply that by the ocean volume of 1.37×10^9 cubic km at 1 cal/degree/cc, and divide by the surface area of the Earth, you get (1.37×10^24 cc)(1 cal/degree/cc)(4.184 J/cal)(0.0017 to 0.0033 degrees/year)/((31,536,000 seconds/year)(5.1×10^14 m2)) = 0.59 to 1.19 W/m2.
So the extreme upper bound on the heat you can hide in the deep ocean that is consistent with the recent level of sea level rise, distributed over the globe, is about 1.2 W/m2. The real number has to be less, probably quite a bit less. The IPCC estimates the total net anthropogenic forcing to be 1.5W/m2, so we are pushing the edge of the envelope of plausibility now. Continued failure of sea level rise to accelerate will invalidate the ocean as a hiding place for anthropogenic warming, if it hasn’t already been invalidated.
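For anyone who wants to check it, here is that back-of-the-envelope arithmetic in code form, with the same inputs as above (note that 0.5 ppm/yr divided by a 150-300 ppm/°C coefficient gives roughly 0.0017-0.0033 °C/yr, and hence roughly 0.6-1.2 W/m2):

```python
# Back-of-envelope: if all recent sea level rise were thermal expansion,
# how much heat flux could the deep ocean be absorbing?
rise_m_per_yr = 0.002                      # ~2 mm/yr
depth_m = 4000.0                           # mean ocean depth
coeff_low, coeff_high = 150e-6, 300e-6     # thermosteric expansion, per deg C

frac_per_yr = rise_m_per_yr / depth_m      # 5e-7 per year, i.e. 0.5 ppm/yr

dT_high = frac_per_yr / coeff_low          # warming rate if coeff = 150 ppm/C
dT_low = frac_per_yr / coeff_high          # warming rate if coeff = 300 ppm/C

heat_cap_J_per_degC = 1.37e24 * 4.184      # 1.37e9 km^3 of water at 1 cal/cc/C
seconds_per_yr = 3.1536e7
earth_area_m2 = 5.1e14

def implied_forcing_W_m2(dT_per_yr):
    """Heat flux over the whole Earth implied by warming the ocean that fast."""
    return heat_cap_J_per_degC * dT_per_yr / (seconds_per_yr * earth_area_m2)

lo = implied_forcing_W_m2(dT_low)          # ~0.6 W/m2
hi = implied_forcing_W_m2(dT_high)         # ~1.2 W/m2
```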
“Gamecock, if you have EVIDENCE to back up your claim that the sea level data is affected by the changes in the mid-ocean ridge, this would be the time to bring it up … it’s an interesting hypothesis, but without data it’s not much use.”
I can’t quantify that which the world’s scientists haven’t quantified.
http://en.wikipedia.org/wiki/Ocean_basin
“The Atlantic ocean and the Arctic ocean are good examples of active, growing oceanic basins, whereas the Mediterranean Sea is shrinking. The Pacific Ocean is also an active, shrinking oceanic basin, even though it has both spreading ridge and oceanic trenches.”
What is the net effect of these changes? No one knows.
The ocean basin is affected by tectonics, volcanism, and sedimentation. I say that the belief that these have no effect on sea level is the radical position.
http://www.ucmp.berkeley.edu/fosrec/Metzger3.html
“The creation of new sea-floor at mid-ocean spreading centers and its destruction in subduction zones is one of the many cycles that causes the Earth to experience constant change.”
http://www.nature.com/nature/journal/v278/n5700/abs/278161a0.html
“We show here that the mean annual load of suspended sediment at Óbidos, Brazil is between 8 and 9 × 10^8 tonnes yr−1”
If I didn’t mess up a zero, that’s 1.7 trillion pounds of material every year from the Amazon basin onto the continental shelf.
http://www.earthobservatory.nasa.gov/IOTD/view.php?id=1257
500 million tons a year for the Mississippi River.
http://eoimages.gsfc.nasa.gov/images/imagerecords/1000/1257/modis_mississippi_sed_lrg.jpg
I think I have reasonably proved the ocean basin is NOT a fixed size. Until it is measured in detail, and monitored for change, discussion of causes of changes in sea level is gross speculation. Focusing on water mass alone fails.
pnprice says:
March 29, 2014 at 10:34 am
Mmmm … well, not really. You’ve confused yourself by using the Colorado data with the seasonal trends included. You need to look at the same graph but with the seasonal variations removed …


What she is talking about is the kink that started in 2006, which is still clearly visible. Draw a line from the start to that hump and you’ll see the problem …
Let me recommend, however, that you actually run the numbers yourself, which are available here. Here’s a simple analysis. I’ve used 5-year centered trends just as they did.
As to the “basic conclusions” of the paper, I have no idea what they might be. What they did is take the data, add the difference between that and the models, and declare victory … which means that their basic conclusion is that models trump data.
Finally, I was stunned that they didn’t include any data on the actual El Nino over the time period in question. Their ostensible hypothesis was that the lack of sea level rise was due to El Ninos … why wouldn’t they look at that directly?
I just took a quick squint. I find no correlation between El Nino (using the Oceanic Nino Index) and sea level at any lag or lead. However, I find a correlation between El Nino and sea level trends. Comparing five-year centered gaussian averages of El Nino and 5-year sea level trends, there is a negative correlation of -0.5 between El Nino today and sea level trends two years later. Unfortunately, it’s a long ways from statistically significant, p=0.1 …
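For anyone who wants to repeat the check, the lead/lag correlations can be computed along these lines; the series here are toy stand-ins with a built-in two-year lag, not the actual ONI or trend data:

```python
import numpy as np

def lag_correlation(x, y, lag):
    """Pearson correlation between x today and y 'lag' steps later."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(4)
months = 120
oni = rng.normal(0, 1, months)   # stand-in for the Oceanic Nino Index

# build a stand-in "trend" series that follows the index two years later
trend = -0.5 * np.roll(oni, 24) + 0.3 * rng.normal(0, 1, months)

# scan a few leads (in months); the 24-month lag shows the negative relation
corrs = {lag: lag_correlation(oni, trend, lag) for lag in (0, 12, 24, 36)}
```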
All the best,
w.
matsibengtsson says:
March 29, 2014 at 11:48 am
Thanks, Mats. Only Figure 3 in this is my graph, and it was done in Excel. I just add the background picture in Excel as a part of the chart.
I also do a number of graphs in my other tool of choice, the R computer language. It can do the heavy lifting numerically, but I’m still very much a beginner as far as my graphics there go … haven’t figured out yet how to put a background on an R chart.
w.
Tonyb says:
March 29, 2014 at 12:18 pm
In your historical analyses of temperature data, have you perchance ever studied the frequency of super El Niños & La Niñas during the Medieval Warm Period & Little Ice Age Cold Period, as compared with the Modern Warm Period?
CACA advocates are fervently hoping & praying for another super El Niño this year:
http://thinkprogress.org/climate/2014/03/26/3417812/el-nino-extreme-weather-global-temperature/
Meanwhile, local sea levels keep on rising at about the same rate as usual since the Roman Warm Period. Australia seems to me a good place to study sea level changes, as it wasn’t heavily glaciated even during the LGM.
UnfrozenCavemanMD says:
March 29, 2014 at 12:23 pm
Dang … I do admire a man who runs the numbers himself.
And I particularly like the idea of bounding the thermal gain using the steric expansion. I’ll have to look into that, unless you beat me to it.
w.