by Joe Born
This is kind of a coda to Willis Eschenbach’s recent post about sea-level rise. In that post Mr. Eschenbach argued that we shouldn’t place much confidence in the reality of the recently observed acceleration. His post was characteristically compelling. But in a sense the question of whether the apparent acceleration is real is secondary to whether the acceleration would mean much even if it were.
How would our resultant conclusions differ if we were certain that Mr. Eschenbach’s skepticism is misplaced? Suppose we knew that the published oceans-average values are precisely accurate and that all tide-gauge locations exhibit them uniformly. Would we then base our expectation of future rise on the observed acceleration? I’m no scientist, but I don’t think we would. Or at least we shouldn’t if we’re serious people. Acceleration is too fickle an indicator.
To place that claim into context, let’s briefly set acceleration aside and look at the trends of which accelerations are the first derivatives. The plots below show that, if we accept the PSMSL data as accurate, the 50-year trend has been increasing for about ten years. Before that it had decreased for a quarter of a century from a mid-century peak of nearly 3 mm/year.
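To make “50-year trend” concrete: it’s the slope of a least-squares line fitted to each 50-year slice of the annual mean-sea-level series. A minimal sketch, using a synthetic series rather than the PSMSL annual means themselves:

```python
# Rolling 50-year trend: slope of a least-squares line over each window.
# The series below is synthetic, standing in for PSMSL annual means (mm).
import numpy as np

def rolling_trend(years, levels, window):
    """Least-squares slope (mm/year) over each `window`-year interval."""
    trends = []
    for i in range(len(years) - window + 1):
        slope, _intercept = np.polyfit(years[i:i + window],
                                       levels[i:i + window], 1)
        trends.append(slope)
    return np.array(trends)

# Illustrative input: an exact 2 mm/year rise, so every window's slope is 2.
years = np.arange(1900, 2011)
levels = 2.0 * (years - 1900)
print(rolling_trend(years, levels, 50))
```

On real data one would plot each window’s slope against its end year; the shape of that curve is what the plots above summarize.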
Those observations are reasonably helpful. They tell us that the latest 50-year trend is in the lower portion of a range that’s prevailed for over a century, a range that would suggest an increase through the end of this century of between 4 and 10 inches.
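The arithmetic behind a figure like “4 to 10 inches by 2100” is just trend times remaining years. The 1.2 and 3.0 mm/year endpoints and the 2018 vantage year below are my assumed reading of that range, not values stated above:

```python
# Convert a sustained trend (mm/year) into a rise by 2100, in inches.
# The trend bounds and the start year are assumptions for illustration.
MM_PER_INCH = 25.4
years_remaining = 2100 - 2018              # assumed vantage year
for trend in (1.2, 3.0):                   # assumed mm/year bounds
    rise_inches = trend * years_remaining / MM_PER_INCH
    print(f"{trend} mm/year -> {rise_inches:.1f} inches by 2100")
```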
Of course, we don’t know that the trend will remain in that range. Maybe we’d get a better sense of whether it will, though, if we shortened the intervals our projections are based on. That way the trend would be more sensitive to the most-recent data.
But here’s the problem. The data tell us it’s a mistake to start making projections based on shorter trend lengths. As the plot below shows, projections of the 2010 level based on past intervals’ trends can err wildly. Note how often the error exceeds the entire difference between the 2010 value and the past interval’s value. The worst projections are the ones based on shorter intervals; those based on 25- and 50-year intervals tend to be better.
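The projection test is easy to reproduce in miniature: fit a trend over a past interval, extend the line to 2010, and compare with the 2010 value. A sketch on a synthetic series whose rise rate oscillates (my stand-in, not the PSMSL record); here, too, the short window misses by far more than the long one:

```python
# Project the 2010 level from a past interval's linear trend and measure
# the error.  Synthetic data: the rise rate oscillates around 2 mm/year.
import numpy as np

def project_to(target_year, years, levels, start, length):
    """Extrapolate the least-squares trend over [start, start+length)."""
    w = (years >= start) & (years < start + length)
    slope, intercept = np.polyfit(years[w], levels[w], 1)
    return slope * target_year + intercept

years = np.arange(1900, 2011)
rate = 2.0 + 1.5 * np.sin((years - 1900) / 10.0)   # mm/year
levels = np.cumsum(rate)                           # mm

actual_2010 = levels[years == 2010][0]
for length in (10, 25, 50):
    error = project_to(2010, years, levels, 1950, length) - actual_2010
    print(f"{length:2d}-year trend from 1950: error {error:+.0f} mm")
```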
I hasten to add that the plot shows only extreme projections; other intervals’ trends project values that are almost exactly correct. But that’s just because the projections vary so widely that they’re bound to be right sometime.
Not only do the trends vary widely but, particularly for the shorter intervals, they also reverse frequently. And that brings us back to accelerations; it tells us that accelerations—the trends’ first derivatives—tend not only to be large but also to be followed by decelerations. The following plots show this.
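Acceleration in this sense is just the first difference of the interval-trend series, and on a series whose rise rate oscillates it flips sign again and again. A sketch on synthetic data:

```python
# Acceleration as the first difference of a rolling 10-year trend.
# Synthetic series: rise rate oscillates around 2 mm/year.
import numpy as np

years = np.arange(1900, 2011)
rate = 2.0 + 1.5 * np.sin((years - 1900) / 10.0)
levels = np.cumsum(rate)

window = 10
trends = np.array([np.polyfit(years[i:i + window],
                              levels[i:i + window], 1)[0]
                   for i in range(len(years) - window + 1)])
accel = np.diff(trends)                 # mm/year^2

signs = np.sign(accel)
reversals = int(np.count_nonzero(signs[1:] != signs[:-1]))
print(f"acceleration sign reversals: {reversals}")
```

Each run of positive accelerations is followed by a run of negative ones; that alternation is the “followed by decelerations” pattern in the plots.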
Those plots also show that accelerations as high as recent ones are not uncommon. So there’s no reason to believe that the current acceleration, real or not, is cause for alarm. It is not unprecedented. And, if the past is prologue, it will be followed by deceleration. To assume the opposite is to ignore the existing record.
In short, basing projections on current acceleration has little to recommend it. But let’s try it anyway:
Obviously, basing projections of today’s level on accelerations would have been worse than basing them on trends. Basing projections of the future on accelerations would be a triumph of hope over experience. So how can scientists profess alarm at signs of acceleration?
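Basing a projection on acceleration amounts to fitting a quadratic — level, trend, and acceleration — over a past interval and extending it. A sketch, again on a synthetic oscillating series of my own rather than the real record, of how the quadratic locks in whatever acceleration the interval happened to show:

```python
# Linear vs. quadratic (acceleration-based) extrapolation to 2010.
# Synthetic series: rise rate oscillates around 2 mm/year.
import numpy as np

years = np.arange(1900, 2011)
rate = 2.0 + 1.5 * np.sin((years - 1900) / 10.0)
levels = np.cumsum(rate)

t = (years - 1950).astype(float)      # centered axis, better conditioning
w = (t >= 0) & (t < 25)               # fit over 1950-1974
lin = np.polyfit(t[w], levels[w], 1)  # level + trend
quad = np.polyfit(t[w], levels[w], 2) # level + trend + acceleration

actual = levels[years == 2010][0]
err_lin = np.polyval(lin, 60.0) - actual     # 2010 is t = 60
err_quad = np.polyval(quad, 60.0) - actual
print(f"linear error {err_lin:+.0f} mm, quadratic error {err_quad:+.0f} mm")
```

In this synthetic case the quadratic’s error is several times the line’s: the fitted acceleration keeps compounding long after the real rate has turned around.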
Now permit me a digression. In The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters, Thomas Nichols decries the public’s failure to trust experts. Dr. Nichols admits that experts sometimes get it wrong, but he argues that this doesn’t justify a layman like me rejecting an expert-proffered proposition on the strength of half an hour’s Googling.
In my own areas of expertise I, too, have experienced frustration at seeing laymen reject facts there’s really little room to doubt. Yet as a layman I’ve seen too many instances in which experts have lacked candor about which propositions fall into that category. In a world in which climate scientists base catastrophic projections on as fickle an indicator as sea-level acceleration, it’s questionable that as laymen we err more in skepticism than in credulity.
That’s why, for example, laymen like me often reject experts’ “social cost of carbon” estimates in favor of back-of-the-envelope calculations. In my case I’ve estimated that over this century CO2 fertilization would yield over a quarter million dollars in increased grain production alone per acre of land lost to a 3 mm/year sea-level rise. So to me increased carbon-dioxide concentration doesn’t seem like such a bad thing.
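The structure of a calculation like that can be sketched in a few lines. The inputs below are round hypothetical placeholders chosen only to show how such a per-acre figure arises; they are not the actual estimate’s inputs:

```python
# Structure of the back-of-the-envelope comparison: global grain gains
# from CO2 fertilization over a century, divided by the acreage lost to
# a 3 mm/year rise.  ALL inputs are hypothetical placeholders.
gain_per_year_usd = 50e9     # hypothetical global CO2 yield gain, $/year
years = 100                  # the century considered
acres_lost = 20e6            # hypothetical acreage lost to ~0.3 m of rise

gain_per_acre_lost = gain_per_year_usd * years / acres_lost
print(f"${gain_per_acre_lost:,.0f} of grain gain per acre lost")
```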
It’s not lost on us laymen that this type of analysis is highly simplistic, that it ignores the myriad facts the experts took into account. We’d like to have better alternatives. But when scientists offer arguments like the acceleration one, our default position is to go with our guts unless the scientist can make a pretty compelling case for his reliability. True, this approach is non-intellectual. But under the circumstances I think it’s realistic.
End of digression.
Mr. Eschenbach did a good job of showing how shallow the evidence is for sea-level acceleration. But let’s not lose sight of how little meaning acceleration has in the first place. Or of who relies on it anyway.