The Ups and Downs of Sea Level

Guest Post by Willis Eschenbach

Much has been made in AGW circles of the sea level forecast of Vermeer and Rahmstorf in “Global sea level linked to global temperature” (V&R2009). Their forecast of sea level rise was much larger than that of the IPCC Fourth Assessment Report (AR4). Their results have been hyped at places like RealClimate as being much more realistic than the IPCC estimates.

So I figured I’d see how Vermeer and Rahmstorf are faring to date. Their results for each of the IPCC “scenarios” are archived here, and the first thirty years of their estimates are presented along with nearly twenty years of actual observations in Figure 1.

Figure 1. Satellite-based sea level observations (blue line), along with the V&R2009 sea level estimates corresponding to the various IPCC future scenarios. Sea level observations from the University of Colorado.

So … how are the V&R2009 predictions holding up?

Well … or, to be accurate, not well. As you can see, the observed sea level rise is below the lowest of the V&R2009 estimates for the lowest of the IPCC scenarios.

At present, assuming that the distance between their “best” estimate and their “lower” estimate is two standard deviations, the data is now more than four standard deviations below the “best” V&R2009 estimate.

So in answer to how their forecasts are faring, the answer is … very poorly. Abysmally, in fact. Actual observations are four standard deviations below the V&R2009 “best” estimate, and two standard deviations below their “lower” estimate.
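For readers who want to check the arithmetic, here is a rough Python sketch. The numbers are made up for illustration only; the real values come from the archived V&R2009 scenario files. The only assumption carried over from the text is that the gap between the “best” and “lower” estimates spans two standard deviations.

```python
# Hypothetical illustrative numbers, NOT the actual V&R2009 values
best_mm = 60.0    # "best" estimate at some year, mm above the 1990 baseline
lower_mm = 40.0   # "lower" estimate at the same year
obs_mm = 20.0     # observed sea level at that year

sigma = (best_mm - lower_mm) / 2.0          # assume best - lower spans 2 sigma
sds_below_best = (best_mm - obs_mm) / sigma
sds_below_lower = (lower_mm - obs_mm) / sigma

print(sds_below_best, sds_below_lower)      # 4.0 and 2.0 with these numbers
```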

w.

Technote 1 – The Colorado folks have recently included a 0.3mm/year increase in sea levels in their results. They say (possibly correctly) that this is necessary to adjust for the sinking of the ocean floor with the increasing weight of sea water from the melting at the end of the last ice age. However, since neither the IPCC nor the V&R2009 figures include that adjustment, I have not included it in this analysis so that we can compare apples to apples.

Technote 2 – I have aligned the Colorado observational results so that their trend line is zero in 1990, in order that they can be compared directly with the V&R2009 results, which have 1990=0 as their starting point. This also aligns the starting observations with the V&R2009 “best” estimate.
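For anyone wanting to replicate the alignment, here is a minimal Python sketch of the step described in Technote 2, using synthetic placeholder data rather than the actual Colorado record: fit a linear trend to the observations, then shift the whole series so that the trend line passes through zero at 1990.

```python
import numpy as np

# Synthetic placeholder series, standing in for the Colorado observations
years = np.arange(1993, 2011, 0.1)
sea_level = 3.1 * (years - 1993) + np.random.default_rng(0).normal(0, 4, years.size)

# Fit a linear trend and find the trend value at 1990
slope, intercept = np.polyfit(years, sea_level, 1)
offset = slope * 1990 + intercept

# Shift the series so the fitted trend is zero at 1990
aligned = sea_level - offset

# Check: refitting the aligned series gives a trend of ~0 at 1990
s2, i2 = np.polyfit(years, aligned, 1)
print(s2 * 1990 + i2)
```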

Technote 3 – some folks felt that my last post, “Yes, Virginia, there is an FOIA” was short on science content and long on passion … hey, what can I say, I’m a passionate guy. I trust this post will redress the balance in their estimation.

[UPDATE] Steven Mosher has graciously pointed me to a stunning disassembly of V&R2009, at the blog Climate Sanity. Makes my effort above look simplistic by comparison. He shows, among other things, that the V&R formula for sea level leads to ridiculous results when it is fed with actual data rather than IPCC scenarios … quite lethal to their claims. Well done, that man. – w.

Stan Luckhardt
July 4, 2011 7:48 pm

What is modeling, anyhow? Good question. Generally, in the physical sciences and elsewhere, “modeling” is a term with a specific meaning: it is a procedure for numerical fitting and interpolation of an existing set of observational data. Such modeling aims to provide a simplified analytic function, or set of functions, that matches the discrete data points and interpolates between them. A mathematical property of such models is their neighborhood of convergence around the data set; the size of that neighborhood of convergence (or validity) is usually not known exactly, though it can be estimated. In this discussion, we view data as a collection of discrete points embedded in an abstract continuum parameter space. Independent variables might be time, physical location, etc. Dependent variables are the quantities for which observations exist in the data set; so temperature, or the non-thermodynamic quantity “global average temperature” we hear about, would be examples of dependent variables.
Models are validated by measuring a “goodness of fit” to the database. If the model parameters can be adjusted to fit the existing data to some desired degree of accuracy, it is considered a validated model. It is usually not considered valid outside its range of validation; e.g., John von Neumann’s elephant.
What about extrapolation? Modelers are often asked to extrapolate their models beyond the validated database, into the unknown future or elsewhere. These extrapolations are notoriously unreliable for several reasons: (1) models do not obey causality; (2) they may not properly conserve invariants of the underlying physical system; (3) they are often mathematically unstable and exhibit divergent behavior in the limit of large values of the independent variables; and (4) the non-linear regression fits used in climate modeling are especially prone to instability. Such instabilities will inevitably “predict” catastrophic values of the dependent variables as an artifact of the instability itself.
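A toy Python sketch of points (3) and (4), with invented data: two models that fit the same three observations almost equally well inside the validated range disagree enormously when pushed far outside it.

```python
import numpy as np

# Three observations consistent with y = 0.1 * x**2, all inside [0, 2]
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 0.1, 0.4])

lin = np.polyfit(x, y, 1)    # linear fit: small residuals inside the range
quad = np.polyfit(x, y, 2)   # quadratic fit: passes through all three points

# Inside the validated range, the two models are close...
print(np.polyval(lin, 1.0), np.polyval(quad, 1.0))

# ...but extrapolated to x = 100, the quadratic predicts ~1000
# while the line predicts ~20. Nothing in the data picks the winner.
print(np.polyval(lin, 100.0), np.polyval(quad, 100.0))
```

Both models are “validated” by the three points; the data alone cannot tell you which extrapolation to trust.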
Modelers often spice up the mix by invoking sets of model equations that may be solved numerically to propagate the model into the future. Such model equations may have some physics in them, but inevitably they leave out important physical processes. If they included the real physics of the complete system, they would be simulations, not modeling. Simulations obey causality, and usually consist of sets of time dependent PDEs that are directly obtainable from underlying physical laws.
Running long here, but in most fields of physics, models are considered useful tools for data analysis, but their known limitations in range of validity are widely appreciated. There are just too many ways for extrapolations to go wrong.
For these reasons, we question the methodology of climate modeling. It seems too much weight is given to unreliable extrapolations beyond the mathematical range of validity of such models.
Of course, everyone knows all of this. So why are climate models treated as infallible when the methodology is known to be unreliable for extrapolation?
Maybe that’s enough for the moment. Responses welcome. A little dialog is good, but let’s keep it on the top two levels of the Graham hierarchy.

Paul S
July 5, 2011 3:42 am

‘However, since neither the IPCC nor the V&R2009 figures include [GIA] adjustment, I have not included it in this analysis so that we can compare apples to apples.’
V&R2009 is an investigation into the link between sea level and temperature. To test their model you need to remove any known non-climatic causes of sea level change. That means applying the GIA correction and the Chao 2008 reservoir adjustment, which are both mentioned in V&R2009. They wouldn’t be explicitly included in the projection figures because the adjustments are purely a means to correct biases in physical measurements.
Your ‘observations’ curve also looks very wrong at the end. I think the problem is there are only 3 datapoints (usually about 36 for a year) in 2011 so far and you’ve placed too much weight on a small amount of data. At this point you should probably ignore 2011 altogether.
I plotted the Church & White 2011 tide gauge and altimeter data (http://www.psmsl.org/products/reconstructions/church.php) against the V&R2009 projection. This data includes the GIA already, and I applied a 0.2mm/yr adjustment on top to simulate the reservoir adjustment. That represents a fair test of the model projections. The data only goes up to 2009 but, by reference to the Colorado data, you can add about 0.8mm on top of the 2009 figure to get a 2010 approximation.
Looking at my plot the observations are skirting along the lower projection. At this point I think it can be inferred that the V&R2009 model is maybe a little too ‘hot’. For those talking about the IPCC projections I placed them on the same graph and they are nowhere near observations. That said I’m not sure whether or not it’s appropriate to apply GIA and reservoir adjustment for comparison with the IPCC projections since they’re presumably trying to determine what will be the overall sea level change relative to land, regardless of causation.
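For what it’s worth, the adjustment step can be sketched in Python with placeholder numbers (the real series would be the Church & White data): take a GIA-corrected series and add the 0.2 mm/yr reservoir adjustment, accumulated from the 1990 baseline, before comparing with V&R2009.

```python
# Placeholder GIA-corrected series: 3.0 mm/yr from a 1990 baseline
years = list(range(1990, 2010))
gmsl_mm = [3.0 * (y - 1990) for y in years]

# Chao et al. 2008 reservoir adjustment, accumulated from 1990
reservoir_rate = 0.2  # mm/yr
adjusted = [s + reservoir_rate * (y - 1990) for y, s in zip(years, gmsl_mm)]

print(adjusted[-1])  # 2009 value: 3.2 mm/yr * 19 yr = 60.8 mm
```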

Stan Luckhardt
July 5, 2011 6:51 am

It seems there is a developing consensus that Climate Models are almost as reliable as two parameter Linear Regression at predicting the future.

SteveSadlov
July 5, 2011 7:52 pm

The local media in the San Francisco metro have latched onto a notion of “a 55 inch rise by 2100!”
That is in spite of the falling sea level as measured at the Ft. Point tide gauge.
Ask any port around here if their dredging budget is decreasing.
Nonetheless, the meme is now out there and soon there will be takings on a scale that no environmental regs could ever achieve, e.g. being deemed in a future intertidal zone.

JimF
July 5, 2011 8:25 pm

@LazyTeenager says:
July 4, 2011 at 2:20 am re: “viscous”. The only thing viscous here is your brain. Why do you waste people’s time here with your simpleton comments?

Stan Luckhardt
July 5, 2011 11:14 pm

Models cannot predict anything because they do not obey causality, and they are inherently unreliable when extrapolated beyond their validated domain; they are nearly always unstable when the independent variables are pushed outside that domain. This model generates catastrophic “predictions” as an artifact of that instability. More on this at http://syntheticinformation.blogspot.com/2011/07/what-is-modeling.html a draft version of a “think piece” on Modeling and its Inherent Limitations. Hope it’s ok to mention it here.

Stan Luckhardt
July 5, 2011 11:15 pm

Oops, some typos in the last note … apologies. It’s late.

Stan Luckhardt
July 6, 2011 11:37 am

A new, revised and polished version of a think-piece on Modeling and Simulation. It’s not titled “Why Climate Models Suck,” but I suppose it could be. More accurately, we discuss modeling in the abstract, highlight the non-causal nature of climate models, and discuss the inherent unreliability of models extrapolated beyond their range of validation.
TY to WUWT for allowing me to mention this new relevant blog article…. http://syntheticinformation.blogspot.com/2011/07/what-is-modeling.html
