Guest Post by Willis Eschenbach
Gavin Schmidt, who I am given to understand is a good computer programmer, is one of the principals at the incongruously named website “RealClimate”. The name is incongruous because they censor anyone who dares to disagree with their revealed wisdom.
I bring this up because I’m on Twitter, @WEschenbach. You’re welcome to join me there, or at my own blog, Skating Under The Ice … but I digress. I always tweet about my new posts, including my most recent post, Changes in the Rate of Sea Level Rise, q.v.
To my surprise, Gavin responded to my tweet, saying:

I responded, saying:

Now, to paraphrase Pierre de Fermat, “I have an explanation of this claim which the margin of this tweet is too small to contain.” So I thought I’d write it up. Let me start with the money graph from my last post:

Figure 1. 31-year trailing trends in the rate of sea level rise.
Is the sea level rise accelerating in this graph? It depends on which section you choose. It decelerated from 1890 to 1930. Then it accelerated from 1930 to 1960, decelerated to 1975, stayed flat until about 2005, and accelerated since then … yikes.
As I mentioned, until we have some explanation of those changes, making predictions about the future sea levels is a most parlous endeavor …
HOWEVER, Gavin wants to look at the overall changes, so let’s do that. Figure 2 shows the entire record shown in Figure 1, along with lines indicating the best linear fit, and the best accelerating (quadratic) fit.

Figure 2. The Church and White sea level record, along with best-fit linear (no acceleration, blue) and quadratic (acceleration, red) lines.
Now, this is what Gavin is talking about … and yes, it certainly appears that the quadratic (accelerating) red line is a better fit. But that’s the wrong question.
The right question is, is that a significantly better fit? When we have two choices, we can only pick one with confidence if it is statistically a significantly better fit than the other option.
The way that we can measure this is to look at what are called the “residuals”, or sometimes the “residual errors”. These are the distances between the actual data points, and the predicted data points from the red or blue fitted line. The line which is a better fit will have, on average, smaller error residuals than the other option.
Now, we can use a measure called the “variance” of the residuals to determine which one has the better fit on average. And as you might expect from looking at Figure 2, the variance of the straight-line residuals (no acceleration) is larger (80.2 mm²) than the variance of the residuals of the red line showing acceleration (53.3 mm²) … so the acceleration does indeed give the better fit.
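To make the procedure concrete, here is a minimal Python sketch of fitting both candidate lines and comparing their residual variances. The sea level series below is synthetic stand-in data, not the actual Church and White record:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1880.0, 2010.0)
# hypothetical sea level series (mm): a steady rise plus noise
msl_mm = 1.5 * (years - years[0]) + rng.normal(0.0, 8.0, years.size)

def fit_residuals(x, y, degree):
    """Least-squares polynomial fit of the given degree; return the residuals."""
    coeffs = np.polyfit(x, y, degree)
    return y - np.polyval(coeffs, x)

res_lin = fit_residuals(years, msl_mm, 1)    # straight line: no acceleration
res_quad = fit_residuals(years, msl_mm, 2)   # quadratic: acceleration

print("variance of linear-fit residuals:   ", res_lin.var())
print("variance of quadratic-fit residuals:", res_quad.var())
```

Because the straight line is a special case of the quadratic, the quadratic residual variance can never be larger for the same data; the interesting question is whether the difference is bigger than chance alone would produce.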
But how much better?
There’s no easy way to answer that, so we have to do it the hard way. The hard way means a “Monte Carlo” analysis of the two sets of residuals. We create “pseudo-data”, random data whose statistical characteristics are similar to those of the real residuals. Now, the residuals are not simply random numbers. Instead, they both have a high “Hurst Exponent”, which can be thought of as measuring how much “memory” the data has. If there is a long memory (high Hurst Exponent), then e.g. this decade’s data depends in part on the last decade’s data. And this changes what the pseudo-data looks like.
So what I did was to generate a thousand samples of pseudo-data which had about the same Hurst Exponent (± 0.05) as each set of residuals, and which on average had the same variance as each set of residuals. Then, I looked at the variance of each individual example of the groups of pseudo-data, to determine how much of a range the individual variances covered. From that, I calculated the “95% confidence interval” (95%CI), the range in which we would expect to find the variance for that exact type of data.
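For readers who want to experiment, here is a minimal sketch of that kind of Monte Carlo in Python. Where the post matched the Hurst Exponent of generated samples to within ±0.05, this sketch generates exact-Hurst fractional Gaussian noise directly via a Cholesky factorization (fine at this series length). The Hurst value of 0.75, the series length of 130, and the target variance are stand-in assumptions, not the values computed from the actual residuals:

```python
import numpy as np

def fgn_cholesky(n, hurst):
    """Cholesky factor of the covariance matrix of unit-variance
    fractional Gaussian noise with the given Hurst exponent."""
    k = np.arange(n, dtype=float)
    # autocovariance of fGn at lags 0..n-1
    acov = 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                  + np.abs(k - 1) ** (2 * hurst))
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return np.linalg.cholesky(acov[idx])

def variance_ci(n, hurst, target_var, n_sims=1000, seed=0):
    """95% Monte Carlo interval for the sample variance of pseudo-data
    with the given Hurst exponent and population variance target_var."""
    rng = np.random.default_rng(seed)
    L = np.sqrt(target_var) * fgn_cholesky(n, hurst)
    sims = np.var(L @ rng.standard_normal((n, n_sims)), axis=0)
    return np.percentile(sims, [2.5, 97.5])

# stand-in values: ~130 annual points, Hurst 0.75, variance 80.2 mm^2
lo, hi = variance_ci(n=130, hurst=0.75, target_var=80.2)
print(f"95% CI of the variance: [{lo:.1f}, {hi:.1f}] mm^2")
```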
It turns out that the 95% confidence interval of the variance is not symmetrical about the variance. It is larger on the positive side and smaller on the negative side. This is a known characteristic of the uncertainty of the variance, and it is what I found for this data.
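That asymmetry already shows up in the textbook case of independent Gaussian data, where the scaled sample variance follows a chi-square distribution. The sketch below illustrates it using the straight-line residual variance from above; note that this independence-based interval is narrower than the Monte Carlo one, since it ignores the long memory in the residuals, and the record length of 130 is a stand-in:

```python
import numpy as np

def chi2_quantile(z, df):
    """Wilson-Hilferty approximation to the chi-square quantile at the
    standard-normal point z; accurate to a fraction of a percent here."""
    return df * (1 - 2 / (9 * df) + z * np.sqrt(2 / (9 * df))) ** 3

n, s2 = 130, 80.2        # record length (assumed) and residual variance, mm^2
z = 1.959964             # standard normal 97.5% point
lo = (n - 1) * s2 / chi2_quantile(+z, n - 1)
hi = (n - 1) * s2 / chi2_quantile(-z, n - 1)
print(f"variance {s2}, 95% CI [{lo:.1f}, {hi:.1f}] mm^2")
print(f"below the estimate: {s2 - lo:.1f}, above: {hi - s2:.1f}")
```

Even in this idealized case the interval extends further above the estimate than below it, exactly the behavior described in the post.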
So with that as prologue, here is the comparison of the variances of the two options, acceleration and no acceleration, along with their 95% confidence intervals:

Figure 3. Variance and 95%CI for the acceleration and no acceleration situations.
Here’s the thing. The 95% CI for each of the residuals encompasses the variance of the other residual … and this means that there is no statistical difference between the two. It may just be a random fluctuation, or it might be a real phenomenon. We cannot say at this point.
We can understand this ambiguity by noting that from the start to about 1930, the trailing trend line in Figure 1 shows a strong deceleration in the rate of sea level rise. We have no clear idea why this occurred … but it increases the uncertainty in our results. If there were a clear acceleration from the beginning to the end of the dataset, the uncertainty would be much smaller, and we could say confidently that there was acceleration over the entire period … however, that’s not the case. The trends went up and down like a yo-yo … and no one knows why.
Finally, let me caution Gavin and everyone else against extending such a trend into the future. This happens all the time in climate science, and it is a pernicious practice. If we had extended the decelerating trend back in 1930, we would have predicted a large fall in sea levels by the year 2000 … and obviously, that didn’t happen.
My own strong wish in all of this is that climate scientists should declare a twenty-year hiatus in making long-range predictions of any kind, and just focus on trying to understand the past. Why did the rate of sea level rise decelerate in the early part of the Church and White record, and then accelerate so rapidly? Why did we come out of the Little Ice Age? Why are we not currently in a glacial epoch?
Until we can answer such questions, making predictions for the year 2050 and the like is a fool’s errand …
My best to all, including Gavin. Unlike many folks on Twitter, he tweets under his own name, and I applaud him for that. In my experience, anonymity, whether here or on Twitter, leads to abuse. I also invite him to come here and make his objections, rather than trying to cram them into 240 characters on Twitter, but … as my daughter used to say, “In your dreams, Dad” …
w.
As Always—Please quote the exact words that you are discussing, so we can all understand who and what you are referring to.
PS—Note that I have not included all of the uncertainty in these calculations. Remember that each of the Church and White data points has an associated uncertainty. I have only calculated the statistical uncertainty, as I have not included the uncertainty of the individual data points. This can only increase the uncertainty of the variance of both of the conditions, acceleration and no acceleration.
Why didn’t I include the uncertainty of the individual data points? Work and time … the statistical uncertainty alone was large enough to let me know that there is no statistical difference between acceleration and no acceleration, so I forbore doing a bunch more Monte Carlo analyses which would show even larger uncertainty. So many interesting questions … so little time. Clearly, I need minions to give some of this work to … all the evil global overlords in the comic books have minions, where are the minions of Willis The Merciless? Or at least the educational equivalent of minions … graduate students … my regards to everyone, especially the poor overworked graduate students.
The climate according to Ken MacLeod :
Remember the asteroid that killed off the dinosaurs? Now you would have suspected that the asteroid would have thrown up dust, blotting out the sun and causing catastrophic cooling, wouldn’t you? Not so! It caused global warming, which killed off the dinosaurs. The lesson we have to learn from this, according to MacLeod, is that we are drastically underestimating the warming to come due to our evil ways, and the wrath to come.
https://www.newscientist.com/article/2170015-asteroid-that-killed-the-dinosaurs-caused-massive-global-warming/
[Very localized global warming, of course. .mod]
w – Thanks for yet another very clearly expressed and explained article. One aspect of the statistics/logic interests me: Of Figure 3, you say “The 95% CI for each of the residuals encompasses the variance of the other residual … and this means that there is no statistical difference between the two”. I am not in any way suggesting that this is incorrect, but I wonder if there is a more useful way (to us non-statisticians) of expressing it. What I observe is that in this example each datum point falls well within the 95% range of the other. But the conclusion would have been the same even if each had only just scraped into the error range of the other. So, it seems to me, “no statistical difference” is not really a flat zero in both cases; all it tells you is that 95% certainty has not been reached. In this case, it falls a long way short; in a “just scraped in” situation it would be very close to 95%. I wonder whether there would be value in calculating the percentage error range which hits the other datum point exactly. In other words, identifying the statistical probability of a difference. I’m no statistician, so apologies if I’ve not expressed this correctly.
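The calculation this commenter is suggesting can be read straight off the Monte Carlo output: instead of asking only whether the other fit’s variance falls inside the 95% band, find where it falls in the simulated distribution, which gives an empirical p-value. A sketch, where the simulated values are synthetic stand-ins rather than the actual pseudo-data variances:

```python
import numpy as np

def empirical_pvalue(sim_values, observed):
    """Two-sided empirical p-value of `observed` under the Monte Carlo
    distribution `sim_values`."""
    sims = np.sort(np.asarray(sim_values, dtype=float))
    frac_below = np.searchsorted(sims, observed) / sims.size
    return 2.0 * min(frac_below, 1.0 - frac_below)

rng = np.random.default_rng(3)
sims = rng.normal(loc=10.0, scale=2.0, size=10_000)  # stand-in simulated variances
p_center = empirical_pvalue(sims, 10.0)   # a value well inside the distribution
p_tail = empirical_pvalue(sims, 18.0)     # a value four standard deviations out
print(p_center, p_tail)
```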
W, sure you know this. The various Church and White estimates are comparatively compromised by a changing selection of tide gauges, so they are not strictly comparable. For general support of your otherwise excellent conclusions, see my previous sympatico guest post Sea Level Rise, Acceleration, and Closure at WUWT. Regards.
Thanks, Rud, and I have been following your work there.
w.
ristvan
Am I correct in understanding you to be saying Church & White used different sets of tide gauges for different years?
I was blocked at unrealclimate long ago, for nothing really significant other than asking the wrong question or disagreeing a little bit. I think the scientific sceptics outnumbered the usual sheep so some of us had to go.
Once in a while I check the site for amusement, but there is not much to learn there.
You really can tell some things quickly at a glance. That is the point of graphs.
For someone who is supposed to be an accomplished mathematician, and head of a large government agency, he has a very tenuous grasp of reality.
Michael
It is not about scientific truth, it is about choosing someone competent enough to maintain the status quo after the previous leader. Someone who will reliably sing the same song with emotive conviction. That is why Gavin is in the role.
Regards
In truth, I know, ozonebust. My biggest outstanding objection to the Gavage is that he speaks with an English accent, and is giving the rest of us a bad name.
“who I am given to understand is a good computer programmer”
I was not at all impressed with the ModelE GCM code that has his name all over it. It’s a jumbled mess of spaghetti Fortran, poorly organized and poorly documented. The guts of the radiative transfer portion of the code, Radiation.f, have thousands of floating point constants baked into the code, many of which are poorly documented and many are not documented at all. Having managed code developers in the past, I can’t see how anyone would confuse this code with the work product of a good computer programmer, especially one developing models on which trillions of dollars in counterproductive and ineffective policy are to be based.
Got to stop hitting post so quickly, it should have read, “many of which are poorly documented and many are not documented at all”.
co2isnotevil May 24, 2018 at 6:31 pm
Fixed … I hate typos and un-noticed errors …
And I agree with you about the ModelE code, although I suspect that it was something that “‘jes growed” at the hands of many people. But there’s more.
A decade or more ago, I asked Gavin how they handled the question of the conservation of energy in the GISS computer ModelE. He told me a curious thing. He said that they just gathered up any excess or shortage of energy at the end of each cycle, and sprinkled it evenly everywhere around the globe …
As you might imagine, I was shocked … rather than trying to identify the leaks and patch them, they just munged the energy back into balance.
So I asked, how much on average, and at peaks, is the imbalance in the conservation of energy? … again I was shocked, this time when he said he had no idea …
I said that, as for me, I’d put some kind of Murphy Gauge on that variable, something that would give me a warning if there was a sudden imbalance that was just being swept under the rug.
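A “Murphy Gauge” in this sense is nothing exotic; it is just a guarded check on a conserved quantity. A minimal sketch follows, where the class, the numbers, and the 0.1 W/m² tolerance are purely illustrative and no actual GCM internals are assumed:

```python
import warnings

class EnergyGauge:
    """Track a model's global energy budget and warn on large imbalances."""

    def __init__(self, tolerance_wm2=0.1):
        self.tolerance = tolerance_wm2
        self.history = []          # imbalance recorded at every checked step

    def check(self, step, energy_in, energy_out, storage_change):
        """Record in - out - storage; warn if it exceeds the tolerance."""
        imbalance = energy_in - energy_out - storage_change
        self.history.append(imbalance)
        if abs(imbalance) > self.tolerance:
            warnings.warn(f"step {step}: energy imbalance {imbalance:+.3f} "
                          f"W/m^2 exceeds {self.tolerance} W/m^2")
        return imbalance

gauge = EnergyGauge(tolerance_wm2=0.1)
gauge.check(step=1, energy_in=240.0, energy_out=239.97, storage_change=0.0)  # quiet
gauge.check(step=2, energy_in=240.0, energy_out=238.50, storage_change=0.0)  # warns
```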
He acted like this was both unheard of and goofy …
All of which explains why I said I was “given to understand” that he was a good computer programmer …
w.
Willis Eschenbach May 24, 2018 at 7:28 pm
co2isnotevil May 24, 2018 at 6:31 pm
Got to stop hitting post so quickly, it should have read, “many of which are poorly documented and many are not documented at all”.
Fixed … I hate typos and un-noticed errors …
And I agree with you about the ModelE code, although I suspect that it was something that “‘jes growed” at the hands of many people.
A problem with any ‘legacy code’ that has had many authors and, ‘like Topsy, just growed and growed’!
As I recall Hansen first referred to it in ’83? I think they recently ‘bit the bullet’ and did a rewrite (ModelE2?), rather like Spencer et al. did with UAH. I think the new version is documented and available to anyone who wants to use it.
Phil,
“I think they recently ‘bit the bullet’ and did a rewrite …”
Yes, but the rewrite is just the old code ‘upgraded’ to a newer version of Fortran. The logic, algorithms and comments are basically the same, although they did replace some of the spaghetti with while loops and case statements.
What they should have done is start from scratch and construct a new GCM in a more modern and capable language like C++. Most of it could even be written in Java with only the relatively small performance sensitive pieces implemented in C or C++.
I too used to be shocked by learning such things, such as how the enthalpy of vaporization of water varies by ~5% between the equator and colder latitudes, but the modelers just shrug it off and pretend it won’t make any difference.
They may, or may not, be competent computer programmers, but they are not scientists as we know them, Captain.
That is merely a product of how climate ‘scientists’ consider themselves, despite the evidence to the contrary, to be experts in everything, while at the same time being very unwilling to collaborate with those who are experts, be it in coding or statistics. And there is some sense to that: keeping everything in-house helps to ensure the ‘right results’, while making it harder for those outside their ‘magic circle of confirmation’ to challenge these results.
And when, as is so often the case, you are a third-rate academic, like Gavin, if a first-rate dogmatist, that certainly makes life easier and merely forms part of the smoke and mirrors game played in this area.
I suspect a good programmer and a Fortran chap don’t very often come in the same person.
Any very old software becomes a load of cruft. You should use new tools and new programmers called software architects. They can select a language (R, C++, Python, Ruby…) and make the project testable, robust, and maintainable.
But none can fix wrong assumptions and existence of hand-picked parameters.
hugs says “They can select a language (R, C++, Python, Ruby…) and make the project testable, robust, and maintainable.”.
hugs, I strongly suspect that is exactly the opposite of what the current programmers want.
Hi Willis,
Will you be following your own wish and waiting 20 years before making any more posts about your own climate model? Can you explain why we are not currently in a glacial epoch using your climate model? Making a claim that the temperature is likely to be the same in 20 years’ time as it is today is as much a long-term prediction as any predictions claiming global warming. So you should be following your own wishes and stick to explaining the past.
Extensive glaciation begins in the northern hemisphere when NH temperature drops sufficiently low that winter snow survives summer melting (even at lower elevations, as in much of northern Canada). The most recent NH insolation decrease produced by orbital variations has not been sufficient to drop temperature low enough, and that NH insolation is predicted to start increasing in a few thousand years. So, no glaciation this insolation cycle. It probably takes a really large insolation decrease to initiate glaciation, such as occurred ~118 thousand years ago to begin the last glaciation.
donb
Are you sure about that? Now, sea ice at latitude 60 is NOT land ice between latitudes 60 and 70 north, but in both 2016 and 2017, for the first time in our recorded history (that sounds more dramatic than “since the satellite records began in 1979”), the sea ice in the Bering Sea and Sea of Okhotsk remained frozen through all of August, and all but 3 days in September. Further east, the sea ice in the Gulf of St Lawrence remained frozen many weeks longer than ever before, and the 2016-2017-2018 trends in Hudson Bay are strongly positive (compared on a day-by-day basis). Now, the all-time high sea ice levels of 1980-1983 remain records. But isn’t it remarkable that lower-latitude sea ice (where many times more sunlight is reflected than up north above 70-80 north latitude) is increasing now?
It has been shown that glaciations depend on decreasing obliquity, not precession-associated decreasing northern summer insolation. And obliquity will continue decreasing for another 10,000 years. So, yes glaciation this obliquity cycle unless alarmists are correct about both CO₂ effect and CO₂ long residence in the atmosphere. For what is worth, I think they are wrong on both.
RACookPE1978 May 24, 2018 at 7:34 pm
Are you sure about that? Now, sea ice at latitude 60 is NOT land ice between latitudes 60 and 70 north, but in both 2016 and 2017, for the first time in our recorded history (that sounds more dramatic than “since the satellite records began in 1979”), the sea ice in the Bering Sea and Sea of Okhotsk remained frozen through all of August, and all but 3 days in September.
Care to document that statement? It doesn’t agree with any of the satellite images I’ve seen.
The Bering sea had a historically low maximum this spring and has now melted out.
Nope, not true.
NSIDC regional data for 6 June 2018 still has Bering Sea ice extent up at 14,000 sq km, with Bering Sea ice area lower at 4,500 sq km.
https://nsidc.org/arcticseaicenews/sea-ice-tools/
Both down from their 1980-2010 averages, but neither has melted out. Both a little below where they were in 2016 and 2017 on this date. Same as 2015 though.
Hudson Bay is much higher than 2016 or 2017 on this date, and the Sea of Okhotsk is about the same as the Bering Sea: not quite as high an area as 2016-2017, but still higher than 2015. We shall see what happens through the summer.
Germinio May 24, 2018 at 6:36 pm
You see the part up there at the end of the head post where it says to QUOTE THE EXACT WORDS THAT YOU ARE DISCUSSING?
Come back when you are willing to do that. Until then, you can rot. I have no clue what you are babbling about, but your tone is ugly.
w.
Willis:
You state “My own strong wish in all of this is that climate scientists should declare a twenty-year hiatus in making long-range predictions of any kind, and just focus on trying to understand the past.”
My question is whether or not you will follow your own stated wish and stop making any predictions about the future climate yourself? A claim that the global temperature is stable and will not change due to rising CO2 levels is just as much a prediction as one that claims it will increase.
Germinio May 24, 2018 at 7:56 pm
Are you really a stupid person, or do you just play one on TV? I have no idea what “predictions about the future climate” you think I’ve made, but until you QUOTE THE EXACT WORDS OF MY SO-CALLED “PREDICTIONS” you can stew in your own bile …
w.
Willis,
It is quite simple. You frequently claim that there is a thermo-regulatory engine in operation that controls the temperature. The direct implication of this is that you are claiming that the temperature will be stable and that the effects of increasing CO2 are small to non-existent. This is a definite prediction about the future climate. You might not have stated it as such, but it is a trivial consequence of your theory.
@germinio
“You frequently claim that there is a thermo-regulatory engine in operation that controls the temperature. ”
Who doesn’t claim that?
“The direct implication of this is that you are claiming that the temperature will be stable”
Non sequitur. Your body has a “thermo-regulatory engine in operation that controls the temperature”, but variations nonetheless occur.
“and that the effects of increasing CO2 are small to non-existent.”
Non sequitur again. The IPCC also claims that there is such an engine, but it also claims that CO2 is sort of a control knob with great effect.
Germinio, so basically you are asking Willis to stop doing what you admit that he has never done?
Geronimo,
Please show where Willis made a specific prediction about the future.
tia
I think you should have quoted fully as requested. You misrepresent because you feel offended.
But don’t worry, we know your style.
germinio, you are asking Willis the equivalent of the “when did you stop beating your wife” question. Not clever, and I am glad to see it got the response it deserved.
Geronimo is simply trying to silence skeptics.
dnftt
Hunter,
I am not trying to silence skeptics but merely suggesting that they should follow their own advice. Personally, I think it is far more fun giving them rope and watching them hang themselves. I notice, for instance, that no one here has commented on the theory proposed by Mo Brooks that sea level rise was caused by rocks falling into the ocean. As the saying goes, with friends like those, who needs enemies?
The 1930s were a famously ‘hot’ decade, the warmest on record until the last 20 years or so.
So if rising atmospheric temperatures are causing an increase in the acceleration of sea level rise now, why was there a decrease in the acceleration in the 1930s which had similar atmospheric temperatures?
pauly
The 1730s were the hottest on record until the 1990s. They came to a shuddering halt in 1740 with one of the most severe winters in the record.
It caused Phil Jones to write a paper on the subject and remark that climate variation was greater than had hitherto been realised.
tonyb
As I mentioned, until we have some explanation of those changes, making predictions about the future sea levels is a most parlous endeavor …
Making predictions is a most parlous endeavor. Yet the perils do not seem to deter anyone, least of all the climate speculators themselves.
There is a linear trend in modern times and it is fair to call for an explanation of that before panicking over any perceived changes from that trend.
Eyeballing the data and the two lines in Figure 2, for most years both lines are just as good. There are stretches where the data is below trend and stretches where it is above trend. For all of the past excursions, the data has returned to the trend line. For the period after 2000, the alarmist “prediction” is that this time it’s different, that it’s not going to return to the long-term trend line. Gavin is going to have to invent some better statistics to show that.
Schmidt and Mann both blocked me on Twitter last July for demanding a scientific explanation of how a piece of the Larsen C ice shelf was doing anything to “hold back” the glacier if it was able to float away. I maintained and continue to maintain that it is physically impossible for any solid to do that if it is in compression. Of course they had to block me; it was their only alternative to admitting that they are completely wrong on that point. (Plus my tweets to them and their replies were all deleted from my view, BTW.)
I understand that a person may block someone else from their thread.
I also understand that NO publicly funded institution has any right to block anyone. Everybody is entitled to everything they produce, including any tweets. And they also have no right to delete such public records.
So if PSUclimate is somehow related to Penn State uni, and blocks you, you can sue them, and they will lose.
And if PSU climate is not related, they are not entitled to the claimed affiliation, with just as nasty legal consequences for making the claim.
Another proof they don’t care about the public or the law.
Before you fit anything, add the actual error bars. They aren’t less than 1 mm. Any analysis after that is a hypothetical exercise.
I fail to understand the urge to fit straight lines to every data set available. It is a well known adage that there are no straight lines in nature.
Willis, I notice the rate of sea level rise has been around 1.5-1.6 mm/yr over the whole of the data set. The rate in the North Sea basin was about 1.9 mm/yr over the same period. These rates are way less than the 3 mm/yr measured with satellites. Yet the two tidal gauge data sets are taken thousands of miles apart. This suggests to me that neither is a local aberration, but that the satellite stuff is ‘biased’.
Another point about fitting curves to this kind of data and error estimates. There are two crucial assumptions underlying the least squares method. The first is that your model, the curve, properly represents the underlying process. The second, much less appreciated but equally crucial, in particular for error analysis, is that the residuals are random and follow a Gaussian distribution. The latter is obviously not the case here; the residuals are correlated, with each one partly determined by its predecessor. This mucks up the estimates of significance big time, unfortunately. Perhaps a better method to analyze this kind of data is to use ARIMA models.
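One inexpensive diagnostic for that correlation, before reaching for full ARIMA machinery (e.g. via statsmodels), is the lag-1 autocorrelation of the residuals: values well above zero mean the usual least-squares significance estimates are too optimistic. A self-contained sketch using synthetic series in place of the actual residuals:

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d * d).sum()

rng = np.random.default_rng(1)
white = rng.standard_normal(500)   # independent noise: weak autocorrelation
persistent = np.cumsum(white)      # random walk: strong autocorrelation

print("white noise: ", lag1_autocorr(white))
print("random walk: ", lag1_autocorr(persistent))
```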
Willis, an important point I think you should remember to address is that a higher-order fit ALWAYS has smaller residuals. It’s simply a fact. Saying that with no qualifier, as Gavin did, is ridiculous. Without an analysis of the type you show here (there are other ways to address this), Gavin’s statement is at best foolish.
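This commenter’s point is guaranteed by the algebra of least squares: each added polynomial term can only shrink, or leave unchanged, the residual sum of squares, even when fitting pure noise. A quick numpy demonstration:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 100)
y = rng.standard_normal(100)   # no signal at all, just noise

def rss(degree):
    """Residual sum of squares of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

rss_values = [rss(d) for d in range(6)]
# a higher degree never increases the residual sum of squares
assert all(a >= b - 1e-9 for a, b in zip(rss_values, rss_values[1:]))
```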
Spot on!
+1
I say to Gavin, GISS should get out of climatology and look for aliens. Use your computer programming skill to detect alien signals instead of “adjusting” temperature data. BTW you can’t hear the aliens. She’s just listening to Red Hot Chili Peppers
http://www.btchflcks.com/wp-content/uploads/2016/07/Contact.jpg
She is listening to RHCP? Not even one of the number of bands called “Aliens” or “The Aliens”?
RHCP look like aliens. You think they’re Earthlings?
More important than linear or quadratic, world sea-level has been rising since 1850. What is causing that?
When we measure sea level change, is there any way to account for changes in the volume of the container? If I have a bucket of water and add a handful of sand, the water level goes up. If the earth’s rivers are constantly depositing silt into the oceans, how do I separate the causes of sea level change? Does sea level have any meaning at all when the shape of the container itself is constantly changing?
They have little meaning, full stop, and by that I mean ‘scientific’ meaning; the story around ‘scary headlines’ is a different game. And in this climate ‘science’, it is that game that matters.
I think even Gavin could haul his over-stuffed behind away from a couple of millimeters/yr of rise. Go, Gavin!
Interesting. From your graph, Church & White 2011, it looks like we are not out of the little ice age yet. How did Church & White come up with the “0” around 1975? Since about 1975 the oceans have risen about 2″ or about 1.28/32″ per year. Whoopee doooooo.
Willis:
Have you considered residual plots for the red and blue curves? I don’t have the data, but looking by eye, the blue trendline seems to have residuals that show more trend than the red. If we’re talking bets, that would give the odds edge to the red curve.
There is no money in that.
Willis, thanks for taking and clarifying the first steps in the statistical analysis to see if the apparent sea level “acceleration” is really significant.
Richard Feynman (1974) admonished:
For those wishing to take the next steps in how not to scientifically fool ourselves or the laymen, see the international (JCGM/WG 2008) and NIST (1994) guides to the expression of uncertainty in measurement, neither of which the IPCC appears to dare reference or follow. Note particularly Type B errors, which can include systematic uncertainties and systemic errors. Those can be as large as or larger than the statistical Type A errors Willis was exploring.
Feynman, R.P., 1974. Cargo cult science. Engineering and Science, 37(7), pp.10-13. http://calteches.library.caltech.edu/51/2/CargoCult.htm
JCGM/WG 1 2008 Working Group. Evaluation of measurement data: Guide to the expression of uncertainty in measurement. Tech. Rep. JCGM 100:2008 (BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, 2008).
https://www.bipm.org/en/publications/guides/gum.html
Taylor BN, Kuyatt CE. Guidelines for evaluating and expressing the uncertainty of NIST measurement results. Gaithersburg, MD: US Department of Commerce, Technology Administration, National Institute of Standards and Technology; 1994 Sep 1.
https://dx.doi.org/10.6028/NIST.tn.1297
The increase in SLR is trivial, slow, and changing little if at all.
No matter how the alarmists and climate profiteers declare otherwise.
Willis, you are far more patient and diplomatic than most.
Keep up the good work.
Fig 1. There just has to be a link between this graph and the USA 1930s dustbowl temperatures, which are anomalously high in unadjusted data (and in human experience).
Is the 31-year trailing average effectively moving the peak forward 20 years?
Ah, error measurement, the bane of climate scientists everywhere. Those pesky error bars are always getting in the way, which explains why we never see them associated with, well, anything.