Sea Water Level, Fresh Water Tilted

Guest Post by Willis Eschenbach

Among the recent efforts to explain away the effects of the ongoing “pause” in temperature rise, there’s an interesting paper by Dr. Anny Cazenave et al. entitled “The Rate of Sea Level Rise”, hereinafter Cazenave14. Unfortunately, it is paywalled, but the Supplementary Information is quite complete and is available here. I will reproduce the parts of interest.

In Cazenave14, they note that in parallel with the pause in global warming, the rate of global mean sea level (GMSL) rise has also been slowing. Although they get somewhat different numbers, this slowing is apparent in the results of all five of the groups processing the satellite sea level data, as shown in the upper panel “a” of Figure 1 below.

Figure 1. ORIGINAL CAPTION: GMSL rate over five-year-long moving windows. a, Temporal evolution of the GMSL rate computed over five-year-long moving windows shifted by one year (start date: 1994). b, Temporal evolution of the corrected GMSL rate (nominal case) computed over five-year-long moving windows shifted by one year (start date: 1994). GMSL data from each of the five processing groups are shown.

Well, we can’t have the rate of sea level rise slowing, doesn’t fit the desired message. So they decided to subtract out the inter-annual variations in the two components that make up the sea level—the mass component and the “steric” component. The bottom panel shows what they ended up with after they calculated the inter-annual variations, and subtracted that from each of the five sea level processing groups.

So before I go any further … let me pose you a puzzle I’ll answer later. What was it about Figure 1 that encouraged me to look further into their work?

Before I get to that, let me explain in a bit more detail what they did. See the Supplemental Information for further details. They started by taking the average sea level as shown by the five groups. Then they detrended that. Next they used a variety of observations and models to estimate the two components that make up the variations in sea level rise.

The mass component, as you might guess, is the net amount of water either added to or subtracted from the ocean by the vagaries of the hydrological cycle—ice melting and freezing, rainfall patterns shifting from ocean to land, and the like. The steric (density) component of sea level, on the other hand, is the change in sea level due to the changes in the density of the ocean as the temperature and salinity changes. The sum of the changes in these two components gives us the changes in the total sea level.

Next, they subtracted the sum of the mass and steric components from the average of the five groups’ results. This gave them the “correction” that they then applied to each of the five groups’ sea level estimates. They describe the process in the caption to their graphic below:
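As I read their description, the bookkeeping of that procedure can be sketched as follows. All the series here are synthetic stand-ins, not the actual data; the only point is the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 216  # January 1994 through December 2011

# Synthetic stand-ins for the real series (mm)
groups = rng.normal(0, 1.5, size=(5, months))  # five groups' detrended GMSL estimates
mass = rng.normal(0, 1.0, size=months)         # modeled mass component
steric = rng.normal(0, 0.8, size=months)       # modeled thermosteric component

mean_gmsl = groups.mean(axis=0)                # average of the five groups
correction = mean_gmsl - (mass + steric)       # mean observation minus model

# The same monthly correction is then removed from each group's series ...
corrected = groups - correction
# ... so the mean of the corrected series is exactly the modeled mass + steric series.
```

Note what the last line implies: once the identical correction is removed from every group, the five-group average collapses onto the model by construction.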

Figure 2. This is Figure S3 from the Supplemental Information. ORIGINAL CAPTION: Figure S3: Black curve: mean detrended GMSL time series (average of the five satellite altimetry data sets) from January 1994 to December 2011, and associated uncertainty (in grey; based on the dispersion of each time series around the mean). Light blue curve: interannual mass component based on the ISBA/TRIP hydrological model for land water storage plus atmospheric water vapour component over January 1994 to December 2002 and GRACE CSR RL05 ocean mass for January 2003 to December 2011 (hybrid case 1). The red curve is the sum of the interannual mass plus thermosteric components. This is the signal removed to the original GMSL time series. Vertical bars represent the uncertainty of the monthly mass estimate (of 1.5 mm; refs 22, 30, S1, S3; light blue bar) and of the monthly total contribution (mass plus thermosteric component) (of 2.2 mm; refs 22, 28, 29, 30, S1, S3; red bar). Units: mm.

So what are they actually calculating when they subtract the red line from the black line? This is where things started to go wrong. The blue line is said to be the detrended mass fluctuation, including inter-annual storage on land as well as in water vapor. The black line is said to be the detrended average of the GMSL. The red line is the blue line plus the “steric” change from thermal expansion. Here are the difficulties I see, in increasing order of importance. However, any one of the following difficulties is sufficient in and of itself to falsify their results.


I digitized the above graphic so I could see what their correction actually looks like. Figure 3 shows that result in blue, including the 95% confidence interval on the correction.

Figure 3. The correction applied in Cazenave14 to the GMSL data from the five processing groups (blue).

The “correction” that they are applying to each of the five datasets is only statistically different from zero for 10% of the datapoints. This means that 90% of their “correction” is not distinguishable from random noise.
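A quick way to check a claim like that, given digitized corrections and an uncertainty for each point, is simply to count how many points have a confidence interval that excludes zero. A minimal sketch, with made-up numbers rather than my digitized values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical digitized monthly corrections (mm), standing in for Figure S3 values
correction = rng.normal(0, 1.0, size=216)
ci_half_width = 2.2  # mm; the paper's stated monthly uncertainty on mass + thermosteric

# A point is distinguishable from zero only if its 95% CI excludes zero.
significant = np.abs(correction) > ci_half_width
fraction_significant = significant.mean()
```

If `fraction_significant` comes out small, most of the correction is indistinguishable from noise.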


In theory they are looking at just the inter-annual variations, and they describe how those were extracted. The black curve in Figure 2 is described as the “mean detrended GMSL time series” (emphasis mine). They describe the blue curve in Figure 2 by saying (emphasis mine):

As we focus on the interannual variability, the mass time series were detrended.

And the red curve in Figure 2 is the mass and steric component combined. I can’t find anywhere that they have said that they detrended the steric component.

The problem is that in Figure 2, none of the three curves (black: GMSL, blue: mass, red: mass + steric) are detrended, although all of them are close. The black curve trends up and the other two trend down.

The black GMSL curve still has a slight trend, about +0.02 mm/yr. The blue mass curve goes the other way, about -0.06 mm/yr. The red curve exaggerates that a bit, taking the total trend of the mass plus steric components to -0.07 mm/yr. And that means that the “correction”, the difference between the red curve showing the mass + steric components and the black GMSL curve, does indeed have a trend as well, which is the sum of the two, or about a tenth of a mm per year.

Like I said, I can’t figure out what’s going on in this one. They talk about using the detrended values for determining the inter-annual differences to remove from the data … but if they did that, then the correction couldn’t have a trend. And according to their graphs, nothing is fully detrended, and the correction most definitely has a trend.
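The check itself is straightforward: fit a least-squares line and look at the slope. A series that has truly been detrended should have a numerically zero slope, so any residual trend in a published “detrended” curve is a red flag. A sketch on a synthetic monthly series:

```python
import numpy as np

def trend_mm_per_yr(series, samples_per_year=12):
    """Least-squares linear trend of an evenly sampled series, in mm/yr."""
    t = np.arange(len(series)) / samples_per_year
    return np.polyfit(t, series, 1)[0]

rng = np.random.default_rng(2)
# Synthetic series: a small trend (0.1 mm/yr) plus noise, 216 months
t = np.arange(216) / 12
raw = 0.1 * t + rng.normal(0, 1, 216)

# Detrend by subtracting the fitted line; the residual trend should vanish.
detrended = raw - np.polyval(np.polyfit(t, raw, 1), t)
```

Run `trend_mm_per_yr` on each digitized curve; if the answer isn't effectively zero, the curve wasn't detrended.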


The paper includes the following description regarding the source of the information on the mass balance:

To estimate the mass component due to global land water storage change, we use the Interaction Soil Biosphere Atmosphere (ISBA)/Total Runoff Integrating Pathways (TRIP) global hydrological model developed at Météo-France (ref. 22). The ISBA land surface scheme calculates time variations of surface energy and water budgets in three soil layers. The soil water content varies with surface infiltration, soil evaporation, plant transpiration and deep drainage. ISBA is coupled with the TRIP module that converts daily runoff simulated by ISBA into river discharge on a global river channel network of 1° resolution. In its most recent version, ISBA/TRIP uses, as meteorological forcing, data at 0.5° resolution from the ERA Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF). Land water storage outputs from ISBA/TRIP are given at monthly intervals from January 1950 to December 2011 on a 1° grid (see ref. 22 for details). The atmospheric water vapour contribution has been estimated from the ERA Interim reanalysis.

OK, fair enough, so they are using the historical reanalysis results to model how much water was being stored each month on the land and even in the air as well.

Now, suppose that their model of the mass balance were perfect. Suppose further that the sea level data were perfect, and that their model of the steric component were perfect. In that case … wouldn’t the “correction” be zero? I mean, the “correction” is nothing but the difference between the modeled sea level and the measured sea level. If the models were perfect the correction would be zero at all times.

Which brings up two difficulties:

1. We have no assurance that the difference between the models and the observations is due to anything but model error, and

2. If the models are accurate, just where is the water coming from and going to? The “correction” that gets us from the modeled to the observed values has to represent a huge amount of water coming and going … but from and to where? Presumably the El Nino effects are included in their model, so what water is moving around?

The authors explain it as follows:

Recent studies have shown that the short-term fluctuations in the altimetry-based GMSL are mainly due to variations in global land water storage (mostly in the tropics), with a tendency for land water deficit (and temporary increase of the GMSL) during El Niño events and the opposite during La Niña. This directly results from rainfall excess over tropical oceans (mostly the Pacific Ocean) and rainfall deficit over land (mostly the tropics) during an El Niño event. The opposite situation prevails during La Niña. The succession of La Niña episodes during recent years has led to temporary negative anomalies of several millimetres in the GMSL, possibly causing the apparent reduction of the GMSL rate of the past decade. This reduction has motivated the present study.

But … but if that’s the case then why isn’t this variation in rainfall being picked up by the whiz-bang “Interaction Soil Biosphere Atmosphere (ISBA)/Total Runoff Integrating Pathways (TRIP) global hydrological model”? I mean, the model is driven by actual rainfall observations, including all the data of the actual El Nino events.

And assuming that such a large and widespread effect isn’t being picked up by the model, in that case why would we assume that the model is valid?

The only way that we can make their logic work is IF the hydrologic model is perfectly accurate except it somehow manages to totally ignore the atmospheric changes resulting from El Nino … but the model is fed with observational data, so how would it know what to ignore?


At the end of the day, what have they done? Well, they’ve measured the difference between the models and the average of the observations from the five processing groups.

Then they have applied that difference between the two to the individual results from the five processing groups.

In other words, they subtracted the data from the models … and then they added that amount back to the data. Let’s do the math …

Data + “Correction” = Data + (Models – Data) = Models

How is that different from simply declaring that the models are correct, the data is wrong, and moving on?
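The algebra is easy to verify numerically; with any two series at all, the data drops out entirely (synthetic numbers, obviously):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(0, 1, 100)    # any "observed" series
models = rng.normal(0, 1, 100)  # any "modeled" series

correction = models - data      # the difference between models and data
result = data + correction      # "corrected" observations

# result is identical to the models; the observations have vanished.
```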


1. Even if the models are accurate and the corrections are real, the size of the correction doesn’t rise above the noise.

2. Despite a claim that they used detrended data for their calculations for their corrections, their graphic display of that data shows that all three datasets (GMSL, mass component, and mass + steric components) contain trends.

3. We have no assurance that the “correction”, which is nothing more than the difference between observations and models, is anything more than model error.

4. The net effect of their procedure is to transform observational results into modeled results. Remember that when you apply their “correction” to the average mean sea level, you get the red line showing the modeled results. So applying that same correction to the five individual datasets that make up the average mean sea level is … well … the word that comes to mind is meaningless. They’ve used a very roundabout way to get there, but at the end they are merely asserting that the models are right and the data is wrong …

Regards to all,


PS—As is customary, let me ask anyone who disagrees with me or someone else to quote the exact words that you disagree with in your reply. That way, we can all be clear about what you object to.

PPS—I asked up top what was the oddity about the graphs in Figure 1 that made me look deeper. Well, in their paper they say that the same correction was applied to the data of each of the processing groups. Unless I’m mistaken (always possible), this amounts to a uniform shift of each month’s worth of data. In other words, the adjustment for each month was the same for all datasets, whether it was +0.1 or -1.2 or whatever. It was added equally to that particular month in the datasets from all five groups.

Now, there’s an oddity about that kind of transformation, of adding or subtracting some amount from each month: it preserves crossings. It can’t uncross lines on the graph if they start out crossed, and if they start out uncrossed, their kind of “correction” can’t cross them.
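That’s easy to see numerically: adding the same per-month offset to every dataset leaves the sign of every pairwise difference, and hence every crossing, unchanged (synthetic series again):

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.normal(0, 1, 60)       # one group's monthly values
b = rng.normal(0, 1, 60)       # another group's monthly values
offset = rng.normal(0, 1, 60)  # one shared "correction" per month

# Which curve is on top in a given month is given by the sign of the difference.
before = np.sign(a - b)
after = np.sign((a + offset) - (b + offset))
# before and after are identical: the shared offset cancels in every difference.
```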

With that in mind, here’s Figure 1 again:

Figure 1 redux …

I still haven’t figured out how they did that one, so any assistance would be gratefully accepted.

DATA AND CODE: Done in Excel, it’s here.



What happens to the amount of water vapor in nino and nina states, given this extract:-
“So, are the satellite estimates reliable? Well, in order to answer that, we have to learn a little bit about how they were actually constructed.
Unfortunately, satellite altimeters don’t actually measure sea levels directly. Instead, they measure the length of time it takes light signals sent from the satellite to bounce back. In general, the longer the signal takes, the further the satellite is from the sea surface. So, in theory, this measurement could be converted into a measure of the sea surface height, i.e., the mean sea level.
However, the conversion is complicated, and a number of other factors need to be estimated and then taken into account. For instance, the distance of the satellite from the Earth’s surface varies slightly as it travels along its orbit, because the gravitational pull of the Earth is not exactly uniform – see the Wikipedia page on “geoid”, and the maps in Figure 19.
So, in order to convert a particular “satellite-sea surface distance” into a sea level measurement, the “satellite-Earth’s surface distance” also needs to be independently measured, e.g., using the DORIS system.
Another complexity is that light takes slightly longer to travel when travelling through water vapour than dry air. So, the water vapour concentrations associated with a given satellite reading also need to be estimated, and accounted for.”
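For what it’s worth, the bookkeeping described in that extract boils down to subtracting a corrected range from an independently determined orbit height. A toy version, with every number invented purely for illustration:

```python
# All values are hypothetical, in metres, chosen only to show the arithmetic.
satellite_altitude = 1_347_000.123  # height above the reference surface (e.g. via DORIS)
measured_range = 1_346_999.456      # from the radar round-trip time, uncorrected
wet_tropo_delay = 0.150             # apparent extra path from water vapour (estimated)

# Water vapour makes the range look longer than it is, so subtract the delay ...
true_range = measured_range - wet_tropo_delay
# ... then sea surface height is orbit height minus the corrected range.
sea_surface_height = satellite_altitude - true_range
```

The point of the sketch is how many separately estimated terms stand between the raw timing measurement and a “sea level” number.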


I had a letter that I can’t find but wish I could. I sent it to Richard Torbay years ago, saying that I felt Tim Flannery’s prediction (like Al Gore’s) of impending sea level rises was flawed. I got a letter back from the BOM that predicted a 177 mm rise by 2050. Not cm; well, I am sure we can deal with 6.5 inches without selling one’s expensive waterfronts.


The Remains of The Day, This Day.
LOL !!!!
1. Check
2. Check
3. Check
4. Check
Once upon a time ago I had respect for Anny Cazenave.
Not now.
Nothing at all. Nothing remains.
The Remains of the Day, This Day.


The problem is with me; I don’t have the patience nor the knowledge to look at graphs, I had enough of them in school, so I depend on someone to interpret them. We wouldn’t have fresh water unless rain fell on land, and why? Because the sea evaporates and, with the help of galactic subatomic particles, creates clouds. Sounds simplistic? Well, I think it is.

If Trenberth were right about all that missing heat ending up deep in the oceans, wouldn’t it cause some noticeable rise in sea levels?

David Riser

If it were a linear function, besides maintaining the crossing nature, the difference between them wouldn’t change either. The second graph is definitely more compressed. Perhaps they mislabeled the Y axis, or some other FUBARism.
David Riser

You can read the paper here.

More on this paper, which claims that the missing heat is still increasing in the oceans causing thermal expansion, but that sea level rise has decelerated because a model conveniently says ENSO made it rain more over land [and less over the oceans]!
The authors also find that even with this huge adjustment to sea level rise, there is no evidence of acceleration over the past 20 years, which means there is no evidence of a human influence on sea levels.
The authors redeem themselves a bit in the conclusion and appear to contradict their earlier statements in the paper: “Although progress has been achieved and inconsistencies reduced, the puzzle of the missing energy remains, raising the question of where the extra heat absorbed by the Earth is going. The results presented here will further encourage this debate as they underline the enigma between the observed plateau in Earth’s mean surface temperature and continued rise in the Global Mean Sea Level [GMSL].”
Climate science has sunk just like the ‘missing heat’ to the depths of the ocean trying to explain away the “pause” of both global warming and global sea level rise, using synthetic data generated by climate models that can be programmed to obtain any result one desires.

Alan Robertson

She’s a data tweaker, but still shows sea level rise slowed, not that I worry, here @ 1300′.


I presume they detrended each of the lines separately. So each has a slightly different value applied, because each has a slightly different gradient in the original.

Mike Bromley the Kurd

The FIRST sentence after the abstract:
“Precisely estimating present-day sea-level rise caused by anthropogenic global warming is a major issue that allows assessment of the process-based models developed for projecting future sea level.”
I beg your pardon? A major issue “allows assessment”? How does one “precisely estimate” something? What the hell are you people smoking? Let’s just look at this for a second. A precise estimate can be used to ‘project’ future sea level? Er, isn’t this just a goddamn GUESS? Thanks, Nick Stokes, for providing us a link to the paper…paywalled?? What nature of CRAP is paywalled these days!! I think the reviewers are borderline illiterate if they can allow such a cumbersome statement of import in the INTRODUCTORY SENTENCE……..
“Sea-level rise is indeed one of the most threatening consequences of ongoing global warming, in particular for low- lying coastal areas that are expected to become more vulnerable to flooding and land loss.”
…I guess…
yep, Willis, you are braver than most to even TRY to ‘analyze’ the ‘data’ that allowed these otherwise talentless zombies to precisely estimate guess the reasons for an imperceptible change in sea level rate. What gets in my craw is their singular cause…that global warming is the ONLY cause of plus-delta sea level….then they smear it with meaningless drivel accompanied by ‘sciency-looking’ graphs and stats. Amazing, truly amazing.


Why does their graph stop just before 2010? Seems they have data all the way out to about 2012. Is it because the La Nina would put a big downward spike at the end of their graph (even in the corrected “b” version) and make it look bad?


Our measurements of a surface in motion are getting more precise.
But are we just measuring Jell-O after it’s been disturbed, and within our limited timeframe?

Still, at the end of the day, tide gauges are the only ones that matter. They are the ones most accurate and tell you most accurately exactly where you have to (or not) worry about sea level encroachment (regardless of the cause). And if you have thousands of them around the world you will also get the most accurate measure of average sea level rise (or fall). If a satellite tells me my house should be under water, but I am standing in the back yard and my tide gauge says all is fine, which one should I believe?

Rud Istvan

Willis, one of your best nonsense paper deconstructions yet. Absolutely spot on. Your summary logic is irrefutable. Many thanks for a good read and a great laugh.

About the de-trending etc : the way I read it, they de-trended the data to get the inter-annual variation, which they then subtracted from the data (not from the de-trended data). Under figure 2 it says “This is the signal removed to the original GMSL time series” [presumably they meant ‘from’ not ‘to’]. The end -result of that would presumably be just the trend, and part b of the graph is pretty close to horizontal – ie. just the trend.

Leonard Lane

I don’t get it. If the objective is to somehow quantify trends in sea levels, why start by de-trending the data, then add, subtract, sum, and subtract again? Maybe I missed a step or two. But when looking for trends, why de-trend before you look for trends? My head is starting to hurt. How do things like this get published?


Well, we can’t have the rate of sea level rise slowing, doesn’t fit the desired message. So they decided to subtract out the inter-annual variations in the two components that make up the sea level—the mass component and the “steric” component. The bottom panel shows what they ended up with after they calculated the inter-annual variations, and subtracted that from each of the five sea level processing groups.
The above is formatted as if it’s part of the caption for Fig.1. Was it really part of the original caption? It reads like your perspective, though.
[Thanks, Katherine, well spotted. I wondered where that paragraph had gotten to … it got swept up in the caption. -w.]

Tom Cubbage

Applying Occam’s Razor and the great ocean flow patterns, all the hot water is at the top and all the cold is at the bottom. There can be no deep hot current. Period. That’s my conclusion and I am sticking with it.

Looks like some very dodgy pseudoscience. How to tell? Remove the trend the simple way.
Take the original CU data and calculate the rate of change by taking differences; the average of all the difference data = -0.114.
Now subtract -0.114 from each difference, which moves the difference line up by exactly 0.114,
then reverse the now-adjusted (0.114) difference data via a cumulative sum and bingo, the trend is no longer.
Whether intentionally or unintentionally, this is what they have done. They have made the detrended data just a widdle bit positive. Doesn’t have to be much (only 0.114!!)
From Willis’ excel, the diff between before and after (ave. for each set):
0.20 0.24 0.32 0.19 0.24
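The recipe described above is easy to reproduce. Here is a sketch on a synthetic series; the -0.114 above comes from the spreadsheet, while all the numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 216
# Synthetic "sea level" series: a built-in trend of 0.3 mm/month plus noise.
x = 0.3 * np.arange(n) + rng.normal(0, 2, n)

diffs = np.diff(x)               # month-to-month rate of change
adjusted = diffs - diffs.mean()  # shift every difference by the mean rate
# Integrate the adjusted differences back into a level series.
rebuilt = np.concatenate(([x[0]], x[0] + np.cumsum(adjusted)))

# The rebuilt series ends exactly where it began: the net trend is gone.
```

Subtract slightly less than the mean difference, as the commenter suggests, and you instead leave the series “a widdle bit positive”.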


Computerised modelling takes up many person-years of climate scientists’ time and energy, so it is not surprising that they write justifications for their investment and their product. Naturally their justification will support their choice of addiction.

Dodgy Geezer

Has anyone estimated the amount of water delivered to our planet from space? That could be used as a convenient fudge-factor.

Crispin in Waterloo but really in Johannesburg

The money quote from Willis is: “The only way that we can make their logic work is IF the hydrologic model is perfectly accurate except it somehow manages to totally ignore the atmospheric changes resulting from El Nino … but the model is fed with observational data, so how would it know what to ignore?”
Incorporating the comment from DocMartyn above, what we have here is not a set of observations and a set of model outputs; we have two sets of model outputs, one of which is assumed to be perfect.
Fundamentally, the authors trust the model of water sloshing around the hydrosphere more than they trust the model of sea levels produced by the satellite pings-times-fudge-factors-and-corrections-applied as people now think they should be, i.e. with current understandings and within the limits of their equipment.
This tends to weaken the level of nefariousness one might say is there, but is really just telling us which model they trust most.
Personally I would trust the model with the least number of steps, assumptions and fiddles. Thus I will trust the satellite results more than the rainfall (etc) and steric modeling version of the same thing (which is also based on observations that have corrections and assumptions).
As for uncrossing lines, well spotted. Is it perhaps an artifact of the smoothing method? Is there a different method used in the two sets that produce the graphs? That at least is a simple explanation and could produce the same effect.


Thermal changes to sea water result in a rise, or fall, in the sea level. But there are other inputs, many of which are not measured.
Sedimentary rates, total erosion rates, plate tectonic effects on the sea floor, continental crust growth, all affect sea levels and mainly to increase them.


Anny says in her abstract “However, over the last decade a slowdown of this rate, of about 30%, has been recorded. It coincides with a plateau in Earth’s mean surface temperature evolution, known as the recent pause in warming”
Occam’s razor has something to say about this and it doesn’t include modelled output comparisons.


Anny also writes in the paper “This decreasing GMSL rate coincides with the pause observed over the last decade in the rate of Earth’s global mean surface temperature increase, an observation exploited by climate sceptics to refute global warming and its attribution to a steadily rising rate of greenhouse gases in the atmosphere”
Oh and “exploited” by climate sceptics ? Nice one Anny. Thanks for painting a big “biased” target on your paper. It makes it much easier for true sceptics to find papers that are likely to be trash.

This is yet another major embarrassment of the models and claims concerning SLR. The ocean is basically a giant thermometer. It is asserted that missing heat is disappearing into the deep ocean (raising its temperature by an amount far smaller than the reliable resolution of their measuring devices, which is nevertheless reported as if it were a meaningful number). At the same time, satellites are doing to SLR what they have done to claims about the surface temperature record — they are starting to impose a strict upper bound to sanity, because not even the most egregious recomputation of net satellite SLR can make a negative rate into a positive rate, and it appears that SLR has gone negative with the last La Nina. This paper is attempting to claim not that SLR is continuing to increase (the data confounds that), but that they can somehow infer a continuing increase in the total liquid plus vapor water (as opposed to ice) in the global water system, so that they can assert that glaciers and polar icecaps are continuing to melt. However, SLR in the past has never been presented with the opposite correction, the correction one might reasonably expect from water running off land and leaving the atmosphere or adding to glacial ice. For good reason — we have no meaningful data on any of the above — we have at best sparse samples on a huge planet. GRACE might soon give us a better picture of the big ice repositories and for that matter the ocean, but so far the news from GRACE is bad news for SLR as it shows that if anything SLR is being slightly inflated by the satellites compared to the actual mass that is supposedly raising the sea level.
Negative SLR is a disaster for all claims for warming, because the ocean is a giant thermometer. One is stuck between a rock and a hard place. If one asserts that the water is ending up in the air, it should show up in increased clouds and albedo and cooling. If one asserts that the water is ending up on land (but not as ice) then one’s claims of drought and famine “evaporate” — quite the opposite — and one expects to see the water show up in the air, in increased cloudiness, in cooler land surface temperatures as it evaporates and increases latent heat losses. If one asserts that the water is getting bound up into ice, well, that’s the worst possible news for catastrophists, as it implies that the entire global subtext of shrinking glaciers and icecaps is false, or at least down there in the noise in the data. And even then one is constrained by sea surface data and ARGO data.
The GCMs are dying a death of a thousand cuts. Global surface temperatures aren’t increasing the way that they predict. Direct satellite measurements (and multiple soundings) of the troposphere limit any attempt to further monkey with the already corrupted station data they’ve adjusted in the past to create warming when the data refused to cooperate with the models. SSTs have for decades formed another major surface constraint on the models, and ARGO promises to do the same to depth. Now SLR is behaving precisely the way it did after the early 20th century warming that almost precisely matched the late 20th century warming — it is flattening out and returning to the plodding ~2 mm/year rate after a comparatively brief sojourn at a higher rate during the single 15 year stretch of late 20th century warming visible in the average surface record. Even the IPCC is acknowledging that there are no species that have been driven to extinction by purported warming. Even the IPCC is forced to acknowledge that global weather has not displayed increased frequency or severity of storms (although the media and politicians seeking to sell catastrophe haven’t gotten the message). Even the IPCC has to acknowledge that the world is not suffering from an unusual number of droughts or floods, that in fact the world is pretty boring in this regard (especially compared to the 1930’s, for example, or the early 1600’s). Even the IPCC is forced to acknowledge “The Hiatus” (as they call it) and to try to explain the increasing divergence between model predictions and temperature, because they can read a graph and can see as well as anyone that if things continue as they are trending, no possible amount of curve-jiggering is going to fix AR6 to hide the fact that the GCMs in CMIP5 are toast, and with them, all of their egregious predictions of catastrophe.
The current AR5-fest is quite possibly the last hurrah of this particular golden horde. The data are at the outer range of their ability to claim that “warming continues”, and all they need is a FEW ways to jigger things like SLR that are failing to conform to their certainty that the models are more accurate than measured reality for just a bit longer and MAYBE, if they go for broke in this last brief interval before pesky nature quite possibly reveals that the “science” they’ve been presenting is far, far from certain, they can convince world leaders to beggar themselves and murder millions of people a year for decades in the name of catastrophic anthropogenic global warming by outlawing inexpensive energy throughout all human civilization.

Steve Case

alcheson said at 8:57 pm
Still, at the end of the day, tide gauges are the only ones that matter.

Besides that, the satellite data is managed by a small number of scientists who depend on government grants for a living.
As President Eisenhower said in 1961:
“The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”

Steve Keohane

Thanks Willis. I notice the pause has SLR at 8″ a century, down from the ‘catastrophic’ 12″ per century.


Still, at the end of the day, tide gauges are the only ones that matter. They are the ones most accurate and tell you most accurately exactly where you have to (or not) worry about sea level encroachment (regardless of the cause). And if you have thousands of them around the world you will also get the most accurate measure of average sea level rise (or fall). If a satellite tells me my house should be under water, but I am standing in the back yard and my tide gauge says all is fine, which one should I believe?
It’s a bit more complicated than that — warm water is more buoyant than cold water because it expands, meaning that the ocean isn’t perfectly isostatic or isobaric. Further, the Earth isn’t spherical, is rotating, isn’t uniform in its mass distribution, and that mass distribution is moving at a significant rate as the tectonic plates move around and crust erupts and subducts. Non-uniform mass equals non-uniform distribution of oceanic water — it actually experiences a significant lateral attraction to oceanic boundaries which causes water to pile up at continental shores relative to mid-ocean, or concentrates water in places like the Gulf or Mediterranean that are surrounded by land at a comparatively short distance relative to the surface curvature. Tide gauge data reflects a mix of all of this — the gauge itself can upheave or subside along with the coast it is sitting on, local warming or cooling of the ocean it measures (many tide gauges sit on river mouths and are subject to warming when the outflowing river warms) — response to warming or cooling mid ocean or to continental or land shifts, or simply response to slow changes in the patterns of tides all can contribute to variation that may or may not be significant. SOME tidal gauges are likely to show significant changes that aren’t GLOBALLY significant at all, and one has to hope that the “noise” factors are unbiased so that an average does reflect an oceanic mean.
The satellites — especially with the inclusion of GRACE to correct for gravitation — are likely to eventually do a much better job, but it is as you say good to have the coastal tide gauge data as a boundary condition and sanity check. It is entirely possible for the two to differ year to year, but it is not reasonable for the two to systematically diverge, just as it is not reasonable for RSS and UAH LTT to systematically diverge from the global average surface temperature.
We are actually getting to where it is no longer possible for ANY single source of global data to be jiggered by much because there are simply too many checks and balances. It is especially interesting that this time coincides with the pause, the hiatus, the plateau in global surface temperatures. GISS and HADCRUT are pretty much stuck, forcing the invention of entirely new ways of re-averaging surface data in order to maintain a warming rate since both have levelled off, but the new surface averages are being born into a world bounded by RSS and SSTs, and if they go up when the latter do not change (or go up egregiously over intervals in the past where the latter did not change) the game is over.
I suspect that we’re entering a period where the game, if any gaming has indeed been occurring, is over. Short of direct corruption, actually falsifying data, I don’t see any future in trying to continue to adjust processing methodology to produce the “missing heat”. The GCMs are sooner or later going to have to come face to face with the data itself in such a way that no amount of jiggering will save them, and I think a lot of climate scientists are seeing this and it is having a sobering effect on the entire discipline.


“The mass component, as you might guess, is the net amount of water either added to or subtracted from the ocean by the vagaries of the hydrological cycle”
johnmarshall beat me to it: the container is not a fixed size. They make the absolutely ridiculous assumption that “the ocean” has one size, and all surface level variability is from the addition or subtraction of water, mid ocean ridge be damned.

Steve from Rockwood

“detrended”…might as well call it a “cat in the hat” filter. We stuff it in here and it comes out there.

John Moore

Surely the test is how far up the beach or cliff the tide comes at Spring High Tide, and whether that is different to what it was fifty to a hundred years ago? Yes, I know wind strength and direction will make a difference. According to the Met Office and the Ordnance Survey, Mean Sea Level has risen just 7.5 inches since 1915, and that is how maps of the UK are calibrated.


She lost me at hubris before we could get really hot and sweaty with complexity and chaos.
Another time perhaps?


Robert Brown, I assume you are also rgbatduke. I appreciate the depth of your feelings on this, but one should not call it murder — perhaps negligent homicide, as the people advocating policies that end up causing deaths in the 3rd world are just too lazy or stupid to look at other arguments and realize they may be responsible for others’ deaths.

Subjective circular reasoning has been and is the biggest flaw in the CAGW argument and is not objective science. This morning I read the news article that the administration is using the CAGW argument to “justify” controlling anthropogenic emissions of methane. That is not science.

R. de Haan
Pamela Gray

Now wait just a gol dern minute. With CO2 rise, we can’t say that it is all just recycled stuff, even from burning oil. NOOOOO! It’s all “new” stuff added (so they say in essence we create CO2 out of nothing). We can’t just subtract it and say now look.
But this paper says that rain is different and they can subtract it from their data???? Well, turn about is fair play. If we can’t subtract CO2 rise from the data, they can’t subtract rain.
Yeh, I know I am being too simplistic, but there are times I just want to think about the bottom line and the 1 foot space in front of me. These are the times I wish we had a simple-minded pres in office who quaffs a cheap beer fairly regularly and names stupid when he sees it.

Steve from Rockwood

The mass component, as you might guess, is the net amount of water either added to or subtracted from the ocean by the vagaries of the hydrological cycle—ice melting and freezing, rainfall patterns shifting from ocean to land, and the like. The steric (density) component of sea level, on the other hand, is the change in sea level due to the changes in the density of the ocean as the temperature and salinity changes. The sum of the changes in these two components gives us the changes in the total sea level.

Why would they remove these effects if they were adding to total sea-level rise? Isn’t it important to know what the total sea-level rise is? After all, it’s not just the steric component that floods the town.
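For concreteness, the quoted decomposition — and the paper’s correction step of subtracting the detrended interannual part of (mass + steric) from the observed series — can be sketched with entirely made-up numbers. This is my own toy illustration, not Cazenave14’s actual data or code:

```python
import numpy as np

# Toy illustration (invented numbers, not the paper's data) of the quoted
# decomposition: observed GMSL change = mass component + steric component.
# The paper's correction subtracts the *detrended* interannual part of
# (mass + steric) from the observed series, which leaves the trend intact.

years = np.arange(1994, 2012)
trend = 3.2 * (years - years[0])                          # mm, hypothetical trend
interannual = 5.0 * np.sin(2 * np.pi * (years - years[0]) / 4.0)  # ENSO-like wiggle
observed = trend + interannual

# Detrend the (mass + steric) component estimate, then subtract it:
components = interannual + trend
fit = np.polyfit(years, components, 1)                    # linear fit
detrended = components - np.polyval(fit, years)
corrected = observed - detrended

# The corrected series now lies on the fitted straight line:
print(np.allclose(corrected, np.polyval(fit, years)))     # True
```

In this toy case the interannual wiggle is removed perfectly because the “component estimate” is the wiggle itself by construction; the real question, as raised above, is whether the modeled mass and steric components are accurate enough for the subtraction to mean anything.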


Robert Brown, I assume you are also rgbatduke. I appreciate the depth of your feelings on this, but one should not call it murder — perhaps negligent homicide, as the people advocating policies that end up causing deaths in the 3rd world are just too lazy or stupid to look at other arguments and realize they may be responsible for others’ deaths.
Yeah, you are right. But then again:
…no less than the World Health Organization is perfectly happy to attribute “140,000 excess deaths since 1970 by 2004” to global warming.
Quite aside from the fact that this is total bullshit, who is tallying the deaths that are occurring because of the money we’re spending on measures that even the proponents of the measures agree will make no substantive difference in future global warming even if the most pessimistic predictions are right? Perhaps especially so.


What about all the fresh water that people have pumped up out of the ground or impounded upon it? If researchers really want to measure the human contribution to sea level rise, how about something more meaningful than CO2?


A long time ago in high school physics we became acquainted with the concept of the “universal-constant variable fudge factor” (U). When you boil it down, it’s just the ratio of expected results (X) to observed results (O), or U = X/O. When you know something is true, you don’t have to put up with pesky experimental noise. You can never be disappointed, at least initially, when your final result (F) is simply F = O*U. That was a very good lesson. Knowledge of the process has served us well, helping us keep an eye out for people using other similar methodologies.
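The fudge-factor arithmetic is worth writing out, because it makes the circularity explicit: since U = X/O, the “final” result F = O*U collapses to X no matter what was observed. A two-line sketch:

```python
# The "universal-constant variable fudge factor": U = X/O, so the "final"
# result F = O*U is always the expected value, regardless of observation.

def fudged_result(expected, observed):
    U = expected / observed   # the fudge factor
    return observed * U       # "final" result, always equals expected

print(fudged_result(42.0, 17.3))   # ~42.0, whatever was observed
print(fudged_result(42.0, 999.9))  # ~42.0 again
```

The observation cancels out algebraically, which is the whole joke: the procedure cannot fail, and therefore cannot inform.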
I had a hard time with Willis’ post because it was never clearly stated the authors were trying to remove the el niño / la niña variations. Whether that is a good idea or not is a separate question. It seems they were successful in removing the “inter-annual” variations. Whether they have accurately described the steric and hydrological components in sea level rise is dubious. By adjusting parameters, no doubt any desired result could be obtained. Why should we be surprised they got their desired outcome?
As pointed out, the authors missed the last la niña. Do they need to adjust their method to make that last variation match their expectations? Keep watching Nature Climate Change for more! Oh, how I feel the need to spend $199 for full access to that barrel of BS. Indeed, Nature calls. Gotta go.

Martin 457

Thanks, Willis. Another excellent deconstruction of a poor paper.
But it is not only that Cazenave2014 is poorly thought out; it was meant to mislead, which in my opinion is even worse.

george e smith

When I hear about “papers” like this one that Willis reports on, I don’t have time to check all the math and logic, so I often just read them as anecdotes, unless somehow they move my curiosity.
Back when I was an undergrad, I did a lab experiment, using a 1cm Fabry Perot etalon, to measure the wavelengths of some lines in the Neon spectrum. The experiment procedure, described in the lab manual, used some ingenious tricks, to get around the limits of mechanical measurements of the quartz spacer in the etalon. But the procedure was easy to follow, and the lab took a couple of hours to measure half a dozen lines.
But I got into this marvelous instrument in a big way, and ended up spending a good part of a month of my time, which resulted in rewriting the lab manual, on that experiment.
I discovered that you could not get very accurate results, since they depended on measuring the actual optical length in air of this device. I ended up having to account for changes in the refractive index of the air due to air temperature, barometric pressure, and humidity, and I also corrected for the dispersion of the refractive index of moist air over the total range of wavelengths that were measured.
I ended up measuring about 23 lines in the Neon spectrum, to a couple of parts in 10^8. This was way beyond what they had thought that etalon was capable of.
So when I hear that someone is measuring time of flight of photons, in the entire earth atmosphere thickness, I start to see wheels spinning in my head.
So what did they use for continuous monitoring of the refractive index profile of all that atmosphere thickness, when I found it was significant over just one cm of relatively stationary air?
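To put a rough number on that concern (my own illustration, using an approximate index of refraction for air; the exact value depends on wavelength, temperature, pressure, and humidity):

```python
# Why the refractive index of air matters: an interferometric measurement
# sees the optical path n * L, so a small change dn in the air's index
# produces a fractional wavelength error of roughly dn / n.
# Illustrative values only.

n_air = 1.000273   # approximate index of dry air, visible light, near STP
dn = 1.0e-6        # plausible drift from temperature/pressure/humidity changes

fractional_shift = dn / n_air
print(f"{fractional_shift:.2e}")  # about 1e-6
```

A part-per-million drift in the index swamps a measurement aiming at parts in 10^8, which is presumably why the commenter had to track temperature, pressure, humidity, and dispersion even over a single centimeter of air.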


Thanks Willis. I notice the pause has SLR at 8″ a century, down from the ‘catastrophic’ 12″ per century.
Which is, incidentally, almost exactly the 140 year average rate:
[figure: global average sea level, 1870–2008 (US EPA); image not preserved]
even including the brief “catastrophic” interval from 1930 to 1950 and the even briefer one from 1990 to 2009 (which conveniently enough is smoothed to avoid the hiatus, starting both rises a full decade after the actual warming). If that one is too noisy, you can always try this, generated from 23 “stable” tide gauge records:
Wow, an almost perfectly linear trend from 1900 to the present, without so much as a hint of a hockey stick to be found.
Of course it is always useful to put the last century in perspective:
where both MWP and LIA are clearly evident as blips on an otherwise remarkably smooth curve. Oh, you can’t see much on this scale of tens to hundreds of meters? Try this:
Not horribly consistent, but at least it shows the supposed noise. Curious how there is a clear uncertainty of over a meter in even remarkably recent data from various location-specific proxies. Santa Catarina, Senegal, Rio — all bouncing up and down by meters according to data with uncertainties given as a few tens of centimeters over a timescale of a few hundred years, when there is no friggin’ way that global sea level varied by more than a few tens of centimeters.
BTW, for all its warts, sea level is arguably one of the best real smoothed thermometers out there, and note that the Holocene record at higher resolution shows sea level as being damn close to flat over the last 1000 years, flat within a few tens of centimeters. Even the LIA/MWP/RWP blips probably aren’t real — there is so much unbelievable spread in the data that the error bars presented are clearly meaningless, underestimating the actual error by meters. One could do better by using direct historical/geological/archeological data associated with actual harbors and ports over the post-Columbus era, although I don’t have any idea whether or where such a study might be found. But surely the British have ship quays that are centuries old in England, in India, and elsewhere around their empire with very clear records of mean sea level (and with equally clear opportunity to read off the past by means of carbon dating of barnacle layers and so on).
The thermometer of the ocean reveals basically no catastrophic alteration of the rate of global warming associated with anthropogenic CO_2 production in a way that is probably more accurate than any number of bristlecone pines. Sure, sea level is multivariate and not a simple linear proxy of temperature, but it is a slowly varying, mostly linear proxy of temperature and it would require an awe-inspiring coincidence for its multivariate dependence to have almost precisely cancelled its temperature-linked expansion to maintain a nearly perfectly linear, incredibly slow rate of increase across one or more centuries. The lack of any sort of resolvable fluctuation-correlation with either the late 20th century warming or the advent of rising CO_2 in general, the lack of any sort of structural relationship between the rise in CO_2 and rise in sea level (beyond the fact that both are monotonic) is a further problem.

Gary Pearse

As with all climatic records, I’ve always contended that if we are headed to a catastrophic future on all these elements, then the raw data will serve well to show this – probably 50 stations globally would do the job. The effort put into squeezing meaningless precision from data records is in itself the work of people who must in their subconscious be concerned that we may not be heading for disaster and it may need a helping hand. If the sea is expected to rise a few or many metres in a century, why quibble over a millimetre or two in the present? If we are heading for a 4-5C increase in a century, or even 2C, why must we be adjusting the present record by a tenth or two? The progress would be inexorable and obvious from the raw data. We definitely need competition in the record-keeping business.

One of the nice things about computer models is that you can easily adjust things to give the needed outcome. In the old days you had to hand-graph the data and were stuck with the results, as re-jiggering the data was too much work and you would have to start all over again for each change. Computer programs today make things so easy, and you can just point to “The Computer did it”, so it must be correct! pg


“and even in the air as well”
In which case it would presumably show up in the atmospheric water vapour content, which Humlum’s and Vonder Haar’s analyses show to be effectively trendless post ~1980, and for which Solomon et al. show a 10% decrease in the decade post 2000.

Willis Eschenbach

Nick Stokes says:
March 28, 2014 at 8:04 pm

You can read the paper here.

Thanks, Nick.