Guest Post by Willis Eschenbach
Gavin Schmidt, who I am given to understand is a good computer programmer, is one of the principals at the incongruously named website “RealClimate”. The name is incongruous because they censor anyone who dares to disagree with their revealed wisdom.
I bring this up because I’m on Twitter, @WEschenbach. You’re welcome to join me there, or at my own blog, Skating Under The Ice … but I digress. I always tweet about my new posts, including my most recent post, Changes in the Rate of Sea Level Rise, q.v.
To my surprise, Gavin responded to my tweet, saying:

I responded, saying:

Now, to paraphrase Pierre de Fermat, “I have an explanation of this claim which the margin of this tweet is too small to contain.” So I thought I’d write it up. Let me start with the money graph from my last post:

Figure 1. 31-year trailing trends in the rate of sea level rise.
Is the sea level rise accelerating in this graph? It depends on which section you choose. It decelerated from 1890 to 1930. Then it accelerated from 1930 to 1960, decelerated to 1975, stayed flat until about 2005, and accelerated since then … yikes.
As I mentioned, until we have some explanation of those changes, making predictions about the future sea levels is a most parlous endeavor …
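For anyone who wants to reproduce a graph like Figure 1, here is a minimal sketch. It assumes that “31-year trailing trend” means the ordinary least-squares slope fitted to the preceding 31 years of data; the function and variable names are illustrative only, not my actual code.

```python
import numpy as np

def trailing_trends(years, msl_mm, window=31):
    """For each year with a full window behind it, return the OLS slope
    (mm/year) of the preceding `window` years of sea level data."""
    trend_years, rates = [], []
    for i in range(window - 1, len(years)):
        t = years[i - window + 1 : i + 1]
        y = msl_mm[i - window + 1 : i + 1]
        rates.append(np.polyfit(t, y, 1)[0])  # slope of the local linear fit
        trend_years.append(years[i])
    return np.array(trend_years), np.array(rates)

# Usage, given arrays `years` and `msl_mm`:
# trend_years, rates = trailing_trends(years, msl_mm)
```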
HOWEVER, Gavin wants to look at the overall changes, so let’s do that. Figure 2 shows the entire record shown in Figure 1, along with lines indicating the best linear fit, and the best accelerating (quadratic) fit.

Figure 2. The Church and White sea level record, along with best-fit linear (no acceleration, blue) and quadratic (acceleration, red) lines.
Now, this is what Gavin is talking about … and yes, it certainly appears that the quadratic (accelerating) red line is a better fit. But that’s the wrong question.
The right question is, is that a significantly better fit? When we have two choices, we can only pick one with confidence if it is a statistically significantly better fit than the other option.
The way that we can measure this is to look at what are called the “residuals”, or sometimes the “residual errors”. These are the distances between the actual data points and the corresponding predicted points on the red or blue fitted line. The line that is a better fit will have, on average, smaller residuals than the other option.
Now, we can use a measure called the “variance” of the residuals to determine which one has the better fit on average. And as you might expect from looking at Figure 2, the variance of the straight-line residuals (no acceleration) is larger (80.2 mm²) than the variance of the residuals of the red line showing acceleration (53.3 mm²) … so the acceleration does indeed give the better fit.
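For concreteness, here is a minimal sketch of that comparison. It assumes the Church and White annual series is available as arrays of years and sea levels in mm; the placeholder data below is there only to make the snippet runnable, and is not the real record.

```python
import numpy as np

def residual_variance(t, y, degree):
    """Fit a polynomial of the given degree and return the variance
    of the residuals (actual minus fitted values)."""
    fitted = np.polyval(np.polyfit(t, y, degree), t)
    return np.var(y - fitted)

# Placeholder data; substitute the real Church and White series.
years = np.arange(1880, 2014)
msl_mm = 1.7 * (years - years[0]) + np.random.normal(0, 9, years.size)

var_linear = residual_variance(years, msl_mm, 1)     # blue line, no acceleration
var_quadratic = residual_variance(years, msl_mm, 2)  # red line, acceleration
print(var_linear, var_quadratic)
```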
But how much better?
There’s no easy way to answer that, so we have to do it the hard way. The hard way means a “Monte Carlo” analysis of the two sets of residuals. We create “pseudo-data”: random data whose statistical characteristics are similar to those of the real residuals. Now, the residuals are not simply random numbers. Instead, they both have a high “Hurst Exponent”, which can be thought of as measuring how much “memory” the data has. If there is a long memory (high Hurst Exponent), then e.g. this decade’s data depends in part on the last decade’s data. And this changes what the pseudo-data looks like.
So what I did was to generate a thousand samples of pseudodata which had about the same Hurst Exponent (± 0.05) as each residual, and which on average had the same variance as each residual. Then, I looked at the variance of each individual example of the groups of pseudo-data, to determine how much of a range the individual variances covered. From that, I calculated the “95% confidence interval” (95%CI), the range in which we would expect to find the variance for that exact type of data.
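Here is a hedged sketch of that Monte Carlo step. It generates pseudo-data as fractional Gaussian noise with a single fixed Hurst exponent via its exact covariance (a simplification of the ±0.05 matching described above), scales it to the target variance, and reads the 95%CI off the percentiles of the simulated variances. The H value and sample size below are placeholders, not the values actually used.

```python
import numpy as np

def fgn_covariance(n, H):
    """Exact autocovariance matrix of unit-variance fractional Gaussian noise."""
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0)**(2*H) - 2.0 * k**(2*H) + np.abs(k - 1.0)**(2*H))
    i, j = np.indices((n, n))
    return gamma[np.abs(i - j)]

def simulated_variances(n, H, target_var, n_sims=1000, seed=0):
    """Draw n_sims pseudo-data series with Hurst exponent H, scaled to the
    target variance; return the sample variance of each series."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(fgn_covariance(n, H))     # imposes the fGn correlation
    sims = (L @ rng.standard_normal((n, n_sims))) * np.sqrt(target_var)
    return np.var(sims, axis=0)

# Placeholders: 134 annual residuals, H = 0.9, straight-line residual variance.
v = simulated_variances(134, 0.9, 80.2)
lo, hi = np.percentile(v, [2.5, 97.5])               # the 95%CI of the variance
print(f"95% CI of the variance: [{lo:.1f}, {hi:.1f}] mm^2")
```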
It turns out that the 95% confidence interval of the variance is not symmetrical about the variance. It is larger on the positive side and smaller on the negative side. This is a known characteristic of the uncertainty of the variance, and it is what I found for this data.
So with that as prologue, here is the comparison of the variances of the two options, acceleration and no acceleration, along with their 95% confidence intervals:

Figure 3. Variance and 95%CI for the acceleration and no acceleration situations.
Here’s the thing. The 95% CI for each of the residuals encompasses the variance of the other residual … and this means that there is no statistical difference between the two. It may just be a random fluctuation, or it might be a real phenomenon. We cannot say at this point.
We can understand this ambiguity by noting that from the start to about 1930, the trailing trend line in Figure 1 shows a strong deceleration in the rate of sea level rise. We have no clear idea why this occurred … but it increases the uncertainty in our results. If there were a clear acceleration from the beginning to the end of the dataset, the uncertainty would be much smaller, and we could say confidently that there was acceleration over the entire period … however, that’s not the case. The trends went up and down like a yo-yo … and no one knows why.
Finally, let me caution Gavin and everyone else against extending such a trend into the future. This happens all the time in climate science, and it is a pernicious practice. If we had extended the decelerating trend back in 1930, we would have predicted a large fall in sea levels by the year 2000 … and obviously, that didn’t happen.
My own strong wish in all of this is that climate scientists should declare a twenty-year hiatus in making long-range predictions of any kind, and just focus on trying to understand the past. Why did the rate of sea level rise decelerate in the early part of the Church and White record, and then accelerate so rapidly? Why did we come out of the Little Ice Age? Why are we not currently in a glacial epoch?
Until we can answer such questions, making predictions for the year 2050 and the like is a fool’s errand …
My best to all, including Gavin. Unlike many folks on Twitter, he tweets under his own name, and I applaud him for that. In my experience, anonymity, whether here or on Twitter, leads to abuse. I also invite him to come here and make his objections, rather than trying to cram them into 280 characters on Twitter, but … as my daughter used to say, “In your dreams, Dad” …
w.
As Always—Please quote the exact words that you are discussing, so we can all understand who and what you are referring to.
PS—Note that I have not included all of the uncertainty in these calculations. Remember that each of the Church and White data points has an associated uncertainty. I have only calculated the statistical uncertainty, as I have not included the uncertainty of the individual data points. This can only increase the uncertainty of the variance of both of the conditions, acceleration and no acceleration.
Why didn’t I include the uncertainty of the individual data points? Work and time … the statistical uncertainty alone was large enough to let me know that there is no statistical difference between acceleration and no acceleration, so I forbore doing a bunch more Monte Carlo analyses which would show even larger uncertainty. So many interesting questions … so little time. Clearly, I need minions to give some of this work to … all the evil global overlords in the comic books have minions, where are the minions of Willis The Merciless? Or at least the educational equivalent of minions … graduate students … my regards to everyone, especially the poor overworked graduate students.
For a moment, I thought the title said: “Guest post by Willis Eschenbach and Gavin Schmidt….”
Nearly had a heart attack….
R
Willis, I mentioned this article in a blog posting:
Hide the Decline Part Deux
https://co2islife.wordpress.co
The last two years of satellite data is pretty flat.
If by ‘pretty flat’, Bill, you mean that the sea level satellite data, as of Dec 2017, is at the highest level ever recorded, then yes, except that it is obviously still rising.
“For ppl (like @mattwridley) who still want to ignore the facts of sea level rise” … that bit of arrogance says it all. No humility, no willingness to concede integrity on the part of other scientists who see things differently (or who suggest that they might be seen differently).
Any person given to doomsaying, name-calling, or indeed any kind of religious fervor in the practice of science is unfit to be called a scientist.
Willis, A- why do your first two Church & White charts contradict each other? The first shows a large decline from 1900 to 1940, while the second shows nothing of the sort. In fact, the second shows a steady rise during that same time frame.
B- In your second C&W chart, why is a rise of 200+ mm with your linear-trend blue line fit considered ‘No Acceleration’ by you?
There is no contradiction, Lou. The two charts are showing very different things. It appears that you are confusing the acceleration (slope of the line in the first chart) with the trend (slope of the line in the second chart).
In mathematical terms the trend is the first derivative of the data and the acceleration is the second derivative.
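To put that in symbols, as a worked version of those definitions: if the fitted curve in Figure 2 is

$$h(t) = a + bt + ct^{2}, \qquad \text{trend} = \frac{dh}{dt} = b + 2ct, \qquad \text{acceleration} = \frac{d^{2}h}{dt^{2}} = 2c .$$

The blue “No Acceleration” line is simply the case c = 0: it can rise 200+ mm over the record while having exactly zero acceleration, because the trend and the acceleration are different quantities.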
Regards,
w.
Thanks for your reply, Willis. So what you’re saying is that including the second derivative data (acceleration) with the first derivative data (trend) to give you an overall trend line is how you claim ‘No Acceleration’ in that particular graph? Isn’t that a mathematical impossibility? Including your second derivative, which is acceleration, in your first derivative trend line is adding acceleration to your trend. So the ‘No Acceleration’ blue line would be incorrect.
You seem to have confused my two questions. In the first, I was questioning your choice of a 1.4 mm chart versus a 250 mm chart, as the first shows no trend line.
Gavin uses Fortran. You can’t think in Fortran.
http://cosy.com/Science/NotationExamples.gif
It takes just a handful of lines in an APL to quantitatively disprove the spectral GHG hypothesis for Venus, ergo, in general.
Are programmers/computer science types drifting away from STEM attitudes and disciplines and getting closer to other, less demanding fields? Some thoughts from Lubos here: ‘The three “main” sections – mathematics, physics, and computer science – were comparably geeky in their own ways. I was recently led to believe that the computer science folks, not particularly at that school but probably also at that school, have drifted most quickly towards the “ordinary people” that are increasingly incorporated into the cultural Marxist scheme of the world.’
https://motls.blogspot.co.uk/2018/05/most-programmers-think-like-folks-in.html
Willis wrote: “So what I did was to generate a thousand samples of pseudodata which had about the same Hurst Exponent (± 0.05) as each residual, and which on average had the same variance as each residual.”
What was the Hurst exponent you used? Why did you add ±0.05 to that exponent before creating the pseudodata?
I’ve experimented with satellite altimetry data before, using Excel to do a multiple linear regression with both t and t². (If you make t = 0 the midpoint of the period, there is no correlation between t and t².) Using this approach, if the 95% confidence interval for the t² coefficient includes zero, the acceleration is not statistically significant. I presume this is the standard approach, but these details are not disclosed.
The tricky part is that sea level data is highly auto-correlated, with a lag-1 correlation of 0.96 (if I remember correctly). The Quenouille correction reduced the number of independent data points from one per month to one every 18 months (for satellite altimetry data). You address this problem using a Hurst exponent, but I have no experience with that approach. In the end, “statistical significance” depends on your assumptions about the nature of the noise present in your data and the rigor with which you analyze it.
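A sketch of the test described above (my reconstruction, not the commenter's actual spreadsheet): regress sea level on centered t and t², then approximate the autocorrelation penalty by plugging Quenouille's effective sample size into the error variance. The function name and the crude degrees-of-freedom adjustment are assumptions of this sketch.

```python
import numpy as np

def acceleration_ci(t, y):
    """95% CI for the t^2 coefficient c of y ~ a + b*t + c*t^2, with the
    standard error inflated via Quenouille's effective sample size
    n_eff = n*(1 - r1)/(1 + r1), r1 = lag-1 autocorrelation of residuals."""
    tc = t - t.mean()                      # centering decorrelates tc and tc^2
    X = np.column_stack([np.ones_like(tc), tc, tc**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = len(y) * (1 - r1) / (1 + r1)   # Quenouille correction
    sigma2 = resid @ resid / max(n_eff - 3, 1.0)  # crude dof adjustment
    se_c = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
    c = beta[2]
    return c - 1.96 * se_c, c + 1.96 * se_c

# If the returned interval spans zero, the acceleration (which equals 2c)
# is not statistically significant at the 95% level.
```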
Whether or not acceleration is “statistically significant” is far less important than the central estimate for acceleration. The acceleration detected in the satellite data is consistent with the IPCC’s central estimate for SLR (about 2X more SLR this century than last), and far below what is needed to reach 1 m or more.
Whether or not acceleration is “statistically significant” is far less important than the central estimate for acceleration.
==================
unless of course the acceleration is actually an oscillation.
fredberpie: There is little chance we need to worry about any oscillations in the near future. With the 5 K of warming at the end of the last ice age, sea level rose about 120 m, or 24 m/K. It has warmed about 1 K over the last century. As ice caps have retreated poleward, there is less ice to melt per K of warming, so the future may not hold anything close to 24 m/K of SLR. Even 10% or 20% of that would be concerning.
The rapid melting at the end of the last ice age ended about 7000 years ago, and SLR slowed below the 20th-century rate around then. As temperatures dropped slightly after the Holocene Climate Optimum, SLR basically came to a stop over the last four millennia – and presumably restarted at the end of the LIA. Until we enter another LIA (which rising GHGs are likely to overwhelm), you won’t need to worry about oscillating SL. The direction isn’t in question – the rate is.
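For scale, the arithmetic behind those figures:

$$\frac{120\ \text{m}}{5\ \text{K}} = 24\ \text{m/K}, \qquad 10\%\text{ to }20\% \text{ of } 24\ \text{m/K} \approx 2.4 \text{ to } 4.8\ \text{m per K of warming.}$$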
Re: “My own strong wish in all of this is that climate scientists should declare a twenty-year hiatus in making long-range predictions of any kind, and just focus on trying to understand the past.”
My prediction — Climate “scientists” (alarmists) depend on long-range doomsday predictions to ensure their income. I predict this will either not change in twenty years or will exactly follow a hockey stick graph until we reach a tipping point of “fed up.”