From the “pick one: 90%, 95%, 97%” certainty department comes this oopsie:
Via Bishop Hill:
=============================================================
Doug Keenan has just written to Julia Slingo about a problem with the Fifth Assessment Report (see here for context).
Dear Julia,
The IPCC’s AR5 WGI Summary for Policymakers includes the following statement.
The globally averaged combined land and ocean surface temperature data as calculated by a linear trend, show a warming of 0.85 [0.65 to 1.06] °C, over the period 1880–2012….
(The numbers in brackets indicate 90%-confidence intervals.) The statement is near the beginning of the first section after the Introduction; as such, it is especially prominent.
The confidence intervals are derived from a statistical model that comprises a straight line with AR(1) noise. As per your paper “Statistical models and the global temperature record” (May 2013), that statistical model is insupportable, and the confidence intervals should be much wider—perhaps even wide enough to include 0°C.
It would seem to be an important part of the duty of the Chief Scientist of the Met Office to publicly inform UK policymakers that the statement is untenable and the truth is less alarming. I ask if you will be fulfilling that duty, and if not, why not.
Sincerely, Doug
============================================================
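Keenan's complaint turns on how much the assumed noise model controls the width of a trend confidence interval. As a hedged sketch (not the IPCC's actual computation, and with illustrative phi values rather than estimates from the data), here is the standard variance-inflation factor by which AR(1) residuals widen a naive IID trend interval; his point is that even this widening is too little if the noise has longer memory:

```python
import numpy as np

# Hedged sketch: when trend residuals follow an AR(1) process with lag-1
# autocorrelation phi, the naive IID standard error of the trend is too
# small by roughly this factor, so the choice of noise model directly
# controls the confidence-interval width.

def ar1_inflation(phi: float) -> float:
    """Approximate variance-inflation factor for a trend SE under AR(1) residuals."""
    return float(np.sqrt((1 + phi) / (1 - phi)))

# Illustrative values of phi (assumptions, not estimates from the record):
for phi in (0.0, 0.6, 0.9):
    print(phi, round(ar1_inflation(phi), 2))
```

At phi = 0 this recovers the IID interval; at phi = 0.9 the interval is more than four times wider. A long-memory model widens it further still, which is the substance of the letter.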
To me, this is just more indication that the 95% number claimed by the IPCC wasn’t derived mathematically, but was a consensus of opinion, as was done last time.
Your article asks “Were those numbers calculated, or just pulled out of some orifice?” They were not calculated, at least if the same procedure from the fourth assessment report was used. In that prior climate assessment, buried in a footnote in the Summary for Policymakers, the IPCC admitted that the reported 90% confidence interval was simply based on “expert judgment”, i.e. conjecture. This, of course, raises the question of how any human being can have “expertise” in attributing temperature trends to human causes when there is no scientific instrument or procedure capable of verifying the expert attributions.
So it was either that, or it is a product of sleep deprivation, as the IPCC vice chair illustrated today:
There’s nothing like sleep deprived group think under deadline pressure to instill confidence, right?
The 95% # is total f***ing b*llsh*t.
Paul Vaughan is right. As the temperature continues to decline, the IPCC’s ‘confidence’ continues to rise.
It’s a ‘confidence‘ scam.
Nullius in Verba says:
September 28, 2013 at 1:21 am
You’re doing a great job, even if it appears to be falling on deaf ears. lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time. He or she is mistaking an ensemble statistic for a sample statistic.
(Reply to dbstealey, September 28, 2013 at 12:29 pm)
We’re dealing with kamikaze suicide-bomber types. Status quo daily wuwt operations alone can’t and won’t stop such a viciously aggressive force of nature.
For a thorough scientific treatment of a similar perspective as that which Doug is presenting here, I would recommend people read Cohn and Lins 2005 (“Nature’s Style: Naturally Trendy”). It describes in some detail different calculations and the consequence thereof.
http://www.timcohn.com/Publications/GRL2005Naturallytrendy.pdf
Saying AR(1) CIs provide some insight is, I am afraid, incorrect. It is like saying IID CIs would provide some insight. Well, no, it just gives an arbitrary and meaningless set of CIs which tell us nothing about the real world. Likewise AR(1) tells us nothing about the real world.
For Brandon: you ask a good question, and in fact the CIs do not say we cannot tell the world is warming. There are two separate tests. The first is whether we can tell that the world is warmer than it was 100 years ago. To do this, we would use the error associated with the measurement of global temperature, and I think we can be quite confident that the world has warmed. But this has nothing to do with AR(1) CIs, since the error associated with temperature measuring equipment is not AR(1).
The second question is: can we rule out the possibility of natural variability causing that warming. For this, we need a model for how temperature behaves naturally, and that is where the AR(1) assumption comes in. But the AR(1) assumption is as inappropriate as IID and the CIs generated are meaningless. The consequences of this are outlined in Tim Cohn’s article above.
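The practical consequence of choosing the wrong natural-variability model can be made concrete. This is a hedged sketch of the scaling argument behind Cohn and Lins' result, with an illustrative (not estimated) Hurst exponent:

```python
# Hedged sketch of the long-term-persistence point: under LTP with Hurst
# exponent H > 0.5, the standard error of a sample mean decays as
# n**(H - 1) rather than the classical n**-0.5, so IID or AR(1) intervals
# understate the uncertainty. H = 0.9 below is illustrative, not a value
# estimated from the temperature record.

def ltp_se_ratio(n: int, hurst: float) -> float:
    """How many times wider the LTP standard error is than the IID one."""
    return n ** (hurst - 1) / n ** -0.5

print(round(ltp_se_ratio(130, 0.9), 1))  # roughly 7x wider for ~130 annual values
```

At H = 0.5 the ratio is 1 and the classical result is recovered; the stronger the persistence, the wider the honest interval.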
Sorry for the double post. I felt for clarity (particularly wrt Brandon’s question) I should quote from Cohn and Lins’ conclusions:
[…] with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: natural climatic excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century.
Aargh! Double paste in the above italicised quote – the same text is repeated twice. Sorry 🙁 If a mod could fix I would be hugely grateful… otherwise readers should just read half of it 🙂
If it has been repeated twice, it must be there three times. Can only see one repetition?
Spence_UK (September 29, 2013 at 1:29 am) wrote:
“But could this warming be due to natural dynamics?”
Centennial timescale warming not only “could” be natural — it is natural.
Anthony/Mods
umm
the title
SIGNIFICANT missed an I fellas:-)
[Done. Mod]
As painful and ridiculous the various IPCC ARxx reports are, at least we know there will never be an AR15 report. I wonder, will they go from AR14 to AR16 like hotel floors go from 12 to 14?
Dudley, very good, yes it is repeated once 🙂
Paul, indeed, that is the null hypothesis (although some want to reverse it…)
Bart says:
September 28, 2013 at 12:35 pm
Nullius in Verba says:
September 28, 2013 at 1:21 am
You’re doing a great job, even if it appears to be falling on deaf ears. lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time. He or she is mistaking an ensemble statistic for a sample statistic.
**************
Lund sez: I hope you guys know that I have been teaching graduate time series for 20 years at the PhD level in a math department. Much of the above is gibberish. Fine…tell me I don’t know jack about the topic. That I don’t understand sampling variability versus theoretical means. Somehow, I don’t think my stint as chief editor of the Journal of the American Statistical Association puts me in the statistical hack category.
As for what the best model is, it contains errors that are fractionally differenced (not fully differenced). How do I know this? I’ve done the comparison on the CONUS series (admittedly not the global series).
Nullius, we don’t fit random walk models to temperature series because they are overdifferenced and have physically unreasonable properties. Please read about fractionally differenced, fully differenced (ARIMA), and ARMA series in a time series text. All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.
You might want to read Bloomfield and Nychka (1992), Climatic Change.
Lund@clemson.edu says:
September 29, 2013 at 3:52 pm
This is total gibberish. Your words from previous post:
“Do prove me wrong, but the model you propose has a random walk component, meaning the variance increases linearly in time. That is clearly not the case with this data.”
There is no such “clearly”. A sample series of a random walk does not grow more noisy as you move away from the origin. It simply wanders from the origin, with 1-sigma bounds growing as the square root of time. There is no way to identify by inspection whether this data series has such a non-stationary component as it wanders quite a bit. So clearly, your assertion, that it was “clearly not the case”, is wrong.
“All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.”
Incorrect. The PSD of the detrended temperature series looks like this. There are significant AR(2) processes with energy concentrated at frequencies associated with periods of about 65 and 21 years. You account for these, and I guarantee your conclusions will be wildly different.
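Bart's claim that AR(2) processes concentrate energy at particular periods can be illustrated directly. This is a hedged sketch of how AR(2) coefficients map to a resonant period; the pole radius is an arbitrary illustrative choice, not a value fitted to the temperature data:

```python
import numpy as np

# Hedged sketch: an AR(2) process x_t = a1*x_{t-1} + a2*x_{t-2} + e_t
# concentrates spectral energy near angular frequency w when its poles sit
# at r*exp(+/- 1j*w). The coefficients below target a ~65-year period in
# annual data; the pole radius r = 0.98 is an illustrative assumption.
period, r = 65.0, 0.98
w = 2 * np.pi / period
a1, a2 = 2 * r * np.cos(w), -r * r

# Check: the characteristic roots of z**2 - a1*z - a2 have radius r, angle +/-w
roots = np.roots([1.0, -a1, -a2])
print(np.abs(roots), np.angle(roots))
```

The closer r is to 1, the sharper the spectral peak, which is why such a component can masquerade as a trend over a fraction of its cycle.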
Lund@clemson.edu
Please read the paper by Cohn and Lins I linked to above. It shows if you deal with longer range dependencies properly you DO get different conclusions.
Please note that time series that exhibit long term persistence (such as climate) result in nontrivial biases in estimated parameters. Many people who have classical training in time series analysis miss this important point.
“Were those numbers calculated, or just pulled out of some orifice?”
Not constructive.
… but big thank yous to both Nullius and Lund, who fought bravely and on topic. I wish I could contribute something more meaningful than a compliment from the peanut gallery.
I judged it an even match until Lund collapsed into an argument to the authority of his own resume, and attributed Bart’s comment to Nullius rather than responding to Nullius, September 28, 2013 at 12:17 pm,
which looks to me like the winning answer to the thread.
Don’t be a stranger, Lund. People who enjoy thinking will take the time to read you here, and even if our intent be to attack you, you’ll be reaching a wider and more attentive audience than your Journal ever did. It ain’t that bad.
Bart said “There is no such “clearly”. A sample series of a random walk does not grow more noisy as you move away from the origin. It simply wanders from the origin, with 1-sigma bounds growing as the square root of time.”
Bart, you are very wrong. Your first two sentences contradict each other.
If X_t is a random walk at time t, then
X_t = X_{t-1}+Z_t, t >= 1 , with X_0=0
and Z_t is some IID noise. So X_t = Z_1 + Z_2 + … + Z_t, and Var(X_t) = t times Var(Z_1). I think you have been told this!!!! Standard deviations are the square root of the variance.
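Lund's identity is easy to verify numerically. A quick Monte Carlo sketch (step variance and horizon are arbitrary illustrative choices):

```python
import numpy as np

# Monte Carlo check of the identity above: Var(X_t) = t * Var(Z_1) for a
# random walk X_t = Z_1 + ... + Z_t with IID standard-normal steps, so
# the ensemble variance at time t should be close to t itself.
rng = np.random.default_rng(0)
t, n_walks = 400, 20_000
walks = rng.standard_normal((n_walks, t)).cumsum(axis=1)

ensemble_var = walks[:, -1].var()  # variance across the 20,000 walks at time t
print(ensemble_var)                # close to t = 400
```

Note this is an ensemble statistic, computed across many independent walks at a fixed time, which is precisely the distinction the two sides of this exchange are arguing over.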
Grover Trattingham says:
September 30, 2013 at 10:44 am
“I think you have been told this!!!”
I think I wrote it!!!!!:
The mean square error does not tell you everything you need to know about how a time series varies. You need to know the full autocorrelation.
Random walks look like this (upper plot). Not like the lower plot.
When we speak of “noise”, we generally mean higher frequency variation. Random walks do not display this, as the accumulation of independent increments naturally attenuates high frequencies.
There is very clearly some random walk-like behavior in the temperature data within the timeline of interest. Lund made an error. Perhaps in a rush to write something in opposition, but an error nevertheless.
At the end of the day the “global” climate is simply the sum of its factors – an equation. Some of the factors we know, or think that we know, and can attribute a value to them. There are many similar factors that we know, or think that we know, must feature, but to whose value we can only guess. We know that other factors exist, but we do not know how many and can attribute no weight to them. The equation is, for the moment, unsolvable. It follows that any attempted solution, no matter how esoteric the statistical manipulation, can be allocated a very precise confidence level, and that level is Zero.
Yessir! Knocked that one out of the park!! I’m not a statistician, but I can tell when we’ve degenerated to the point of discussing the size of angels’ arses and the area of a pin’s head. It is incontrovertible fact that GCMs are NOT reality. This is simply the modern argument about determinism taken to a new level of abstraction: it is presently impossible to put all the variables into a program and have it accurately model the real world. Even were it possible to “model” the real world, it could not accurately APE the real world, as we cannot predict all the possible extra-solar input to the system, even if we DO know its impact (currently we do not).
To assert that we know enough about all the variables, and that they are all accurately accounted for in the GCMs, such that those GCMs are accurate enough that a statistical analysis of their results is the equivalent of a statistical analysis of actual historical record, is utter absurdity.
Neither GCMs nor their human, thus fallible, programmers are prescient to any reasonable degree. Anyone who argues to the contrary, regardless of the math they cite or the degrees they claim, is a feather merchant. The Second Law of Thermodynamics is supreme, the universe moves toward increased entropy, and computer models are not reality, and cannot be made to be.
In spite of all the alarmist’s assertions to the contrary, reality persists in being what it is, not what they wish it to be.
Sorry, the preceding in agreement with and appreciation of Ken Harvey:
At the end of the day the “global” climate is simply the sum of its factors – an equation. Some of the factors we know, or think that we know, and can attribute a value to them. There are many similar factors that we know, or think that we know, must feature, but to whose value we can only guess. We know that other factors exist, but we do not know how many and can attribute no weight to them. The equation is, for the moment, unsolvable. It follows that any attempted solution, no matter how esoteric the statistical manipulation, can be allocated a very precise confidence level, and that level is Zero.
Bart,
“lund@clemson.edu appears to think that the variance of a random walk expresses high frequency noise level increasing in time.”
What I think Lund is talking about is an initiated random walk. That’s a random variable that has a known value (zero, say) at the first time, is Bernoulli +/-1 at the second time, a Binomial (probabilities 1/4, 1/2, 1/4 of values -2, 0, 2) at the third time, and so on. The variance of the variable at each timestep increases linearly with time, which is equivalent to saying the standard deviation increases as the square root of time. (Variances of independent variables add.)
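Those probabilities can be confirmed by exhaustive enumeration of the step sequences:

```python
from fractions import Fraction
from itertools import product

# Exhaustive check of the distribution above: an initiated +/-1 random
# walk, two steps after its known starting value, takes the values
# -2, 0, 2 with probabilities 1/4, 1/2, 1/4 (4 equally likely step paths).
counts: dict = {}
for steps in product((-1, 1), repeat=2):
    s = sum(steps)
    counts[s] = counts.get(s, 0) + 1

probs = {value: Fraction(c, 4) for value, c in sorted(counts.items())}
print(probs)  # {-2: Fraction(1, 4), 0: Fraction(1, 2), 2: Fraction(1, 4)}
```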
A random walk, however, extends forever back in time too and has infinite variance at every time; all you can talk about is the variance of differences in position for times with a given separation, and stuff like that. Non-stationary processes are indeed very strange objects, and we tend to abuse the rigorous mathematics somewhat in practical applications. It’s only because we’re discussing a finite segment of data that we can get away with it.
Lund,
“Nullius, we don’t fit random walk models to temperature series because they are overdifferenced and have physically unreasonable properties. Please read about fractionally differenced, fully differenced (ARIMA), and ARMA series in a time series text. All I’m trying to tell you is that if one accounted for the longer memory issues in the series than AR(1) models (yeah, one can do better than AR(1)), you still wouldn’t get different conclusions.”
Ummm. We were talking about ARIMA. ARIMA(3,1,0), to be specific.
ARFIMA, which I presume is what you meant, is a further generalisation. And sure, if you want to argue that ARFIMA would have been an even better choice, I doubt either Doug or I would disagree. Would you agree that the original point still stands – that the IPCC’s use of linear+AR(1) is an untenable choice for a model, given that we know the climate is not linear+AR(1)?
But as for whether its properties are physically unrealistic, bear in mind that a linear trend, if extended backwards a few thousand years, drops below absolute zero. Which is impossible. The point being that the linear trend is only intended as an approximation for a finite segment of the series, and in just the same way the ARIMA model is only an approximation for a finite segment. We can be sure that if we had sufficient data, we would eventually be able to resolve the difference.
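The backward-extrapolation point is back-of-envelope arithmetic. A hedged sketch (the 14 °C global-mean baseline is an illustrative assumption, not a figure from the report):

```python
# Back-of-envelope version of the point above. Extrapolating AR5's linear
# trend (0.85 C over 1880-2012, i.e. ~132 years) backwards from a notional
# 14 C global mean (baseline is an illustrative assumption) crosses
# absolute zero (-273.15 C) in geologically recent time:
trend_per_year = 0.85 / 132
years_back = (14 + 273.15) / trend_per_year
print(round(years_back))  # ~45,000 years
```

Which is the argument: every fitted model here, linear or ARIMA, is only an approximation over a finite segment.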
If you wish to demonstrate that you wouldn’t get different conclusions, that would be interesting.
“You might want to read Bloomfield and Nychka (1992), Climatic Change.”
I prefer Box and Jenkins, thanks.
Now the discussion is getting somewhere.
If a random walk was best, how likely do you think it would be that the mean is not changing in time but the variance is? This is a property of the ARIMA(3,1,0) statistical error model proposed, and should be one reason it is discarded.
Nullius in Verba says:
September 30, 2013 at 2:17 pm
I am speaking of a finite random walk segment as well. Informally, in engineering applications, we usually model such a process as the sampled output of an integrator driven by wideband (effectively “white”, over the frequencies we are interested in), zero mean noise (Wiener Process). We generally assume the input noise is Gaussian which, due to the Central Limit Theorem, is generally a fairly safe assumption. If the input noise has an RMS measure of “sigma”, then the standard deviation of the random walk relative to the initial value grows as sigma * sqrt(time).
The point, though, is that the samples of the random walk are highly correlated in time. If x(t_k) is the process at the kth sampled instant t_k, then the autocorrelation function is E{x(t_k)*x(t_n)} = sigma^2 * min(t_k, t_n). This is not independent noise which flails wildly between the RMS bounds, but rather a slowly evolving trajectory which “walks” up and down over time. Such trajectories typically look like the sample series I plotted in the upper plot here. The standard deviation of sigma * sqrt(time) is a measure of the RMS spread over an ensemble of such processes at the given time, but not of a single sample of such a process.
If you filter out the processes in the temperature record associated with what I suspect is an oceanic-atmospheric resonance at ~65 years, and the ~21 year process which is probably associated with solar cycles, then you are left with something which looks very much like some of these random walk trajectories. There’s no way you can see this by inspection of the raw data. Hence, Lund’s claim that it was “clearly” not there was puzzling. I think it is possible that he momentarily forgot how random walks behave, expecting something more akin to the lower plot at my link above. At least, that was the impression under which I made my comments.
On the other hand, maybe he was saying that any such behavior was clearly not dominant. I can agree with that. Or, maybe he was saying that, as a random walk tends to infinity with time, there clearly cannot be an open ended accumulation over eons of time. I can agree with that, too. But, the answer to that, as you pointed out in a post above, is that over a finite interval of time, a true random walk can be indistinguishable from a stationary low frequency process which is ultimately bounded over time.
lund@clemson.edu says:
September 30, 2013 at 4:32 pm
Please do not disregard my point about the PSD. There are powerful, at-least quasi-cyclical processes evident in the data record.
The ~65 year process, in particular, boosted the underlying trend during the latter portion of the 20th century. This natural (obviously, because it has executed more than two complete cycles within the record, before human influence could have been significant) boost in the trend was mistaken for CO2 accelerated warming. If you remove that cyclical influence, you are left with a modest trend which, again, is natural because it has been in evidence since the end of the LIA, before humans could have had an influence.
Properly analyzed, the data show no statistically significant evidence for CO2 induced warming at all.