Physicist Luboš Motl of The Reference Frame demonstrates how easy it is to show that there has been no statistically significant warming since 1995.
First, since it wasn’t in his original post, here is the UAH data plotted:
By: Luboš Motl
Because there has been some confusion – and maybe deliberate confusion – among some (alarmist) commenters about the non-existence of a statistically significant warming trend since 1995, i.e. in the last fifteen years, let me dedicate a full article to this issue.
I will use the UAH temperatures, whose final 2009 figures are de facto known by now (with sufficient accuracy) because UAH publishes the daily temperatures, too:
Mathematica can calculate the confidence intervals for the slope (warming trend) by concise commands. But I will calculate the standard error of the slope manually.
x = Table[i, {i, 1995, 2009}]    (* the years 1995-2009 *)
y = {0.11, 0.02, 0.05, 0.51, 0.04, 0.04, 0.2, 0.31, 0.28, 0.19, 0.34, 0.26, 0.28, 0.05, 0.26};    (* UAH annual anomalies in °C *)
data = Transpose[{x, y}]
n = 15    (* number of years *)
xAV = Total[x]/n    (* mean year *)
yAV = Total[y]/n    (* mean anomaly *)
xmav = x - xAV;    (* deviations from the means *)
ymav = y - yAV;
lmf = LinearModelFit[data, xvar, xvar];    (* ordinary least-squares fit; xvar is the fitting variable *)
Normal[lmf]    (* the fitted line *)
(* standard error of the slope, per http://stattrek.com/AP-Statistics-4/Estimate-Slope.aspx?Tutorial=AP *)
(* note: ymav uses deviations of y from its mean rather than the fit residuals, which slightly overstates the error *)
slopeError = Sqrt[Total[ymav^2]/(n - 2)]/Sqrt[Total[xmav^2]]
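For completeness, the slope itself can be read off from the same centered sums; a minimal sketch (the name slope is not in the original code, it is introduced here):

slope = Total[xmav*ymav]/Total[xmav^2]    (* least-squares slope in °C per year; multiply by 100 for °C per century *)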
The UAH 1995-2009 slope comes out to 0.95 °C per century. And the standard error of this figure, calculated via the standard formula on the page linked above, is 0.88 °C per century. So this suggests that the positivity of the slope is just a 1-sigma result – noise. Can we be more rigorous about it? You bet.
Mathematica actually has compact functions that can tell you the confidence intervals for the slope:
lmf = LinearModelFit[data, xvar, xvar, ConfidenceLevel -> .95]; lmf["ParameterConfidenceIntervals"]
The 99% confidence interval for the slope is (-1.59, +3.49) °C/century, the 95% interval is (-0.87, +2.80) °C/century, and the 90% interval is (-0.54, +2.44) °C/century. All of these intervals contain both negative and positive numbers, so no conclusion about the sign of the slope can be drawn at the 99%, 95%, or even the 90% confidence level.
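The 99% and 90% figures can presumably be obtained the same way, just by changing the ConfidenceLevel option; a minimal sketch:

lmf99 = LinearModelFit[data, xvar, xvar, ConfidenceLevel -> .99]; lmf99["ParameterConfidenceIntervals"]
lmf90 = LinearModelFit[data, xvar, xvar, ConfidenceLevel -> .90]; lmf90["ParameterConfidenceIntervals"]
(* the second pair in each result is the interval for the slope, in °C per year; multiply by 100 for °C per century *)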
Only at about the 72% confidence level does the interval for the slope first touch zero. That means the probability that the underlying slope is negative equals half of the remaining 28%, i.e. a substantial 14%.
We can only say that it is “somewhat more likely than not” that the underlying trend in 1995-2009 was a warming trend rather than a cooling trend. Saying that warming since 1995 was “very likely” is already far too ambitious a claim, one that the data do not support.

Dave F (20:44:25) :
Why 95?
I’m wondering if it’s because Richard Lindzen has spoken about global warming ending in 1995?
crosspatch (23:06:29) :
The statement of 80% of “observed warming”
UHI is manmade so that is part of manmade global warming. So most of the commenters here believe in ‘manmade global warming’ since they know UHI is happening. 😉
What % of warming is UHI? And if UHI is taken out has there even been any warming in the last ~30 years??
How about trying “Benford’s law” on temperature data? Would it reveal manipulation?
Who, me? Far too lazy.
photon without a Higgs
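A minimal sketch of the Benford’s-law check suggested above, applied to the annual anomaly list y from the head post (note that Benford’s law is only expected for data spanning several orders of magnitude, which anomalies generally do not, so this is illustrative only):

firstDigit[v_] := First[First[RealDigits[Abs[v]]]]    (* leading significant digit of a nonzero value *)
digits = firstDigit /@ Select[y, # != 0 &];
observedFreq = N[Count[digits, #]/Length[digits]] & /@ Range[9]    (* observed first-digit frequencies *)
benfordFreq = N[Log10[1 + 1/#]] & /@ Range[9]    (* Benford's expected first-digit frequencies *)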
[ Stamp prices have gone up at the same time as temperatures have gone up. So it appears stamp prices control temperature. ]
This is the kind of solipsistic argument that gives climate change sceptics a bad name. The methodology used is a good bit more sophisticated than your comment acknowledges.
crosspatch:
[ Any linkage of climate warming and CO2 is pretty much a guess and nothing more. ]
No, this is incorrect. We predict that CO2 will cause warming from scientific information derived from both classical mechanics and quantum theory. We can then do a range of lab experiments and planetary observations to see whether the theory fits the data. The correspondence is reasonably good. There’s a bunch of things in the rest of your post that are plainly incorrect as well (plenty of warming since the 1930s, etc.).
M.Simon:
The standard deviation of the annual average is quite high; your comment about a 10ºC daily range is irrelevant to this – the standard deviation is not the same thing, and it’s important to measure it across a consistent time scale (for the purposes of the present example that would be the standard deviation of the annual mean temperature).
The 1998 hot year features prominently in the top graph. It is not a global feature, as there are many stations with no hot 1998 – all three Australian Antarctic stations (Mawson, Casey, Davis) plus Macquarie Island, for example. I have been doing a simple analysis by subtracting each month in 1997 from each month in 1998. This plot for Meekatharra, Australia, shows a typical result.
http://i260.photobucket.com/albums/ii14/sherro_2008/MeekaJ.jpg?t=1261905136
The 4-lobed difference graph is a feature of almost all I have studied. Is it a valid conclusion that global station data, in some stage of the gridding/interpolation procedure, are converted to quarterly inputs?
I’d be delighted to see readers post other examples of this simple 1998-1997 routine from all over the world. In particular, it does not seem to indicate a month or place where the warming started for the 1998 hot year. I’d do the differencing myself more often, but it usually takes local knowledge to choose data sets that have not been excessively “adjusted”.
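A minimal sketch of that 1998-minus-1997 differencing, using two hypothetical lists of twelve monthly mean temperatures for a single station (the numbers below are placeholders, not real station data):

monthly1997 = {26.1, 25.8, 23.4, 19.7, 15.2, 12.0, 11.3, 13.1, 16.8, 20.5, 23.2, 25.4};    (* placeholder monthly means, °C *)
monthly1998 = {27.0, 26.2, 24.1, 20.3, 15.0, 11.8, 11.9, 13.6, 17.5, 21.2, 24.0, 26.1};    (* placeholder monthly means, °C *)
diff = monthly1998 - monthly1997    (* month-by-month 1998 minus 1997 differences *)
ListLinePlot[diff, PlotRange -> All]    (* the shape of this curve is what shows the lobes described above *)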
Dear readers,
the reason for going back to 1995 is to see how long an interval can end up showing no statistically significant warming, assuming the annual white-noise null hypothesis.
Of course, if one goes before 1995, the warming will become statistically significant with these choices. But 15 years is a pretty impressively long timescale over which the global warming can be seen to be “statistically non-existent” – which tells us something about the non-urgency of the problem, even if the problem exists at all.
By the way, the UAH dataset shows a cooling trend not only from 1998 to 2009, as all of us have heard many times, but also in the intervals 2001-2009, 2002-2009, 2003-2009, 2004-2009, 2005-2009, 2006-2009, and 2007-2009. That’s, in fact, most of the 14 intervals from 1995-2009 through 2008-2009. See
http://motls.blogspot.com/2009/12/no-statistically-significant-warming.html
But clearly, for longer periods than 15 years, one is gradually raising the confidence – the warming trend becomes statistically significant. It surely is statistically significant for the last 30 years.
Also, if you replace the annual data by the monthly data, you may restore the statistical confidence, even for the last 15 years. It means that the null hypothesis “the monthly data since 1995 are white noise with the appropriate parameters” may be robustly falsified. However, this hypothesis is invalid a priori – because the monthly data are continuous and their detailed behavior is more similar to red noise than white noise (the color never becomes white – the autocorrelation never disappears).
However, if you instead postulate that the monthly data are a random walk – red noise – you will be unable to falsify this null hypothesis with the observed data either. There’s just too much noise in them. In fact, you will be surprised by how tiny the observed accumulated temperature change over 15 or 30 years is.
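A minimal sketch of how such a random-walk null could be checked by simulation (not the calculation above; the 0.1 °C monthly step size is a guessed value, not fitted to the data):

steps = 180;    (* 15 years of monthly data *)
observedChange = 0.26 - 0.11;    (* rough 1995-to-2009 change taken from the annual list y above, °C *)
walks = Table[Last[Accumulate[RandomVariate[NormalDistribution[0, 0.1], steps]]], {10000}];
N[Count[walks, c_ /; Abs[c] >= Abs[observedChange]]/Length[walks]]    (* fraction of walks whose net change exceeds the observed one *)

If that fraction is large, the observed net change is unremarkable under the random-walk null, which is the point being made above.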
One must be careful about the null hypotheses. If a null hypothesis is falsified, that doesn’t yet prove a man-made or otherwise “unnatural” signal; it is usually just a consequence of the naivety of the null hypothesis. See more comments about these and other issues at
http://motls.blogspot.com/2009/12/no-statistically-significant-warming.html
Best wishes
LM
Re kdkd (01:10:30) : Stamp prices.
The price of a stamp saying “Organic” or “Green” has gone up, but I guess you mean postage stamps. I’m a keen collector with a modestly good Australia 1913-2009 mint unhinged collection (plus I make a CD that cross-references and illustrates about 4,000 stamps in that period). Feel free: sherro1 at optusnet dot com dot au. End of commercial.
Galen Haugh (23:25:31) :
Good on you. You might have been successful with a job application with our mining company. Alas, I’ve been saying the same as you for some years now. I just get called a denialist. But you can’t deny the presence or absence of an ore deposit outlined by drilling.
Ditto on the falsity of changing collar coordinates. You can’t shift a hole and call it the same as before.
Luboš Motl,
I see you have updated your original article with a much longer consideration of the monthly temperatures I raised in an attempt to try to salvage your conclusions:
http://motls.blogspot.com/2009/12/no-statistically-significant-warming.html
After much verbiage you now say:
“I am convinced that such an improved model could match the autocorrelation and the distribution of increments at all timescales and that the null hypothesis that the underlying trend is zero would statistically survive, too.”
without offering any such model. This is proof by assertion.
But you might want to ask Anthony to update your headpost here so that readers can enjoy your full updated analysis at this site.
You claim in the title that you have produced “a quick mathematical proof.” Your original post was indeed quick. What you have now produced deserves none of those three words.
So I’ve been reading through this thread with a bit of a rubbernecking, passerby-at-an-accident sort of fascination, and I keep coming back to a couple of points raised regarding the utility of anomalies and the ridiculousness of some of the techniques used in the creation of the surface temperature models (my apologies to the posters I reference; working from a mobile makes things difficult sometimes). What I think is getting lost in all these conversations is this:
What is the problem we’re trying to solve… and what is the best way to do it?
It seems to me that we spend a whole lotta time and effort debating issues that are at best 2-3 steps removed from that core question and where there is lack of direction and clarity, confusion reigns.
What, exactly, are we trying to measure and represent by these reconstructions? Is it surface temperature, is it near surface temperature, is it atmospheric temperature, or is it total heat content (which, correct me if I’m wrong here would have to factor in humidity alongside temperature for atmospheric measurements… wouldn’t it)?
Maybe that’s a stupid question, let the flogging commence 😀
kdkd:
When was the last time you conducted a controlled experiment with a planet the size of the earth with a similar atmosphere, geography and geology? And, while you’re at it, explain why it was warmer in the middle ages.
Who is kdkd? And who is kdkd’s “we” ?
Either way, appeal to authority, baffle with BS (bad science), and dismissing a deliberately ludicrous demonstration that “correlation is not causation”… yeah, you’re not doing yourself any favors there, kdkd.
From what I can see here, you’ve got exactly nothing.
Rather than comparing the fuel use at one home over time, a better check on temperature records kept by the gov’t. would be “degree-day” records kept by fuel companies. (And local weather bureaus?) These records could be a fruitful source of insight and add oomph to a critique of the official stats.
If 2010 & 2011 are as cool as 2008, any uptrend will be so attenuated as to lose its rhetorical force, and cooler heads will get more of a hearing.
DirkH
Thanks for posting the Hansen vid. Makes me feel so much better already – happy new year!
I like Lubos’s analysis, not because it shows something significant and not because it shows something insignificant. You see, I believe what it shows is a simple piece of analysis as it should be done: metadata, method, data and results, followed by his conclusions. Unlike the Tam et al he makes no attempt to hide what he is doing.
A quick opinion on trends. The linear-trend trick really has no value in climate analysis. Why? Because climate, as Arrhenius knew, has several ‘failure mechanisms’. If you fit a linear trend across those change points you eliminate the very thing for which you should be searching. In climate analysis, we need to know about the points of inflexion/change, like the cooling and warming points in the 20th century. These inflexion points in climate can occur at intervals from 30 to 100,000 to 500,000 years.
Icarus (13:33:14) :
The long-term warming trend is around 0.13C per decade according to the entire UAH record. What you should be calculating is whether there is any statistically significant deviation from that warming trend – otherwise you’re just grasping at straws.
I’m afraid I have to agree with this. We should be testing the hypothesis that the trend has changed since 1995 and it clearly hasn’t – not in any statistically significant sense at least.
One interesting point, though, is the fact that the years immediately before 1995 were affected by the 1991 Pinatubo eruption. If the data were adjusted to account for the Pinatubo effect (up to 0.5 deg), the non-significant trend could stretch back to the early 1990s.
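A minimal sketch of that alternative test, reusing slope and slopeError from the head-post code (both are in °C per year, so the long-term 0.13 °C/decade trend becomes 0.013 °C per year):

tStat = (slope - 0.013)/slopeError    (* how many standard errors the 1995-2009 slope sits from the long-term trend *)
(* with n - 2 = 13 degrees of freedom, |tStat| well below 2 means no statistically significant deviation *)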
Steve J (19:49:16) :
Funny,
I thought the temperature increase precedes the CO2 increase by 800 years.
Based upon what we know (thanks, Anthony) about the dubious quality of the data, why would any rational being attempt to develop trends from such a small slice of data?
2,500 years should be the min. or maybe 10,000 years.
The entire argument evaporates under those conditions.
Think about how quickly the world’s climate responded to the Pinatubo eruption and I’m sorry to say that your argument evaporates. We see palaeoclimatic changes on scales of hundreds and thousands of years because the causes change at that pace, not the effects – changes in insolation due to periodic changes in the Earth’s orbit play out over many thousands of years. If the forcings change much faster, then the effects play out much faster too, and this is what we are seeing with anthropogenic forcings from CO2, methane, black carbon on snow and so on.
OK, so I think I just answered my own question – h/t to photon without a Higgs in the thread here (http://wattsupwiththat.com/2009/12/26/satellite-measurements-prove-our-quiet-sun-is-cooling-the-upper-thermosphere/) with the link to this http://www.youtube.com/watch?v=Ykgg9m-7FK4
So, disregarding the rest of the stuff in the video on Miskolczi, what we’re trying to get to is the average *surface* temperature of the earth so we can plug it into the energy balance equations
So let me refine my question… is averaging of daily T-Min/T-Max of near surface air temps even a remotely good way of going about this?
Seems to me, with all the uncontrollable variables involved both real (instrument, UHI, other siting issues, etc) and Mann-made (ba-duh-dum – i.e. statistical) in land surface temperature records… couldn’t there be a better way to go about this?
An excellent post in the same vein using GISS data is at http://tamino.wordpress.com/2009/12/15/how-long/#more-2124
Of course, if you are a glacier, it is the cumulative trend of warm temperatures that will melt you all too quickly.
http://glacierchange.wordpress.com/2009/12/19/helm-glacier-melting-away/
So, disregarding the rest of the stuff in the video on Miskolczi, what we’re trying to get to is the average *surface* temperature of the earth so we can plug it into the energy balance equations
Since radiation goes as T^4, average temperature does you no good in an energy balance.
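A minimal numerical illustration of that point, using two made-up temperatures whose plain average corresponds to a different effective radiating temperature:

temps = {250., 330.};    (* two made-up temperatures in kelvin *)
Mean[temps]              (* plain average: 290 K *)
(Mean[temps^4])^(1/4)    (* temperature of a blackbody emitting the average flux: about 299 K *)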
Dave F (21:16:46) :
[Tenuc (17:57:39) …]
“That has always puzzled me a bit too.
Why not just eliminate even carving out a mean for the data? Why not just throw it all against the wall and see what sticks? If it sticks, it is done! Seriously, though, why not just analyze the entire temperature set, every single point? Then you don’t have to deal with the logistical headaches of transporting a temperature to the next closest station. The way I understand radiative forcing, and anyone feel free to correct me if I am wrong, the temperatures should be Higher highs and higher lows, so the entire dataset should exhibit an upward trend, right? So, I say make pasta. Throw it against the wall and see what sticks.”
You are correct when you imply that averaging and homogenising the data is a fruitless exercise. In fact, because climate is driven by deterministic chaos (not randomness as is often stated), by doing this you are losing information content. The very noise which people try to remove so that a long-term climate signal can be seen, is actually the product of the whole intricate system in action.
Each mechanism affects how the others respond, and total system energy is constantly varying at any chosen moment in time. It is impossible to tease out the tiny bit of information about how much CO2 affects the overall system, as it is a micro-scale process easily swamped by macro-scale processes, for example the tilt of Earth’s axis of spin or changes to total albedo.
However, if you just take the individual bits of temperature data and “throw it against the wall and see what sticks”, you will not get any insight. The data granularity is too low by several orders of magnitude for anything useful to be ascertained. The only honest answer that science can give when asked what the future will bring is ‘Climate will continue to change, but we don’t know at the moment the direction or magnitude of these quasi-cyclical changes’.
Tom P (19:00:56) :
Luboš Motl,
The monthly data from UAH is readily available and can be used to provide a much better estimate of the confidence limits – we now have 180 monthly points rather than 15 averaged annual points to look at the variability of the temperature measurements.
Following precisely your method, the trend now drops to 0.94C/century from your figure of 0.95C, not a significant difference. However, with the additional information from twelve times the data, the standard error of this slope drops to 0.31C, substantially less than your figure of 0.88C.
Hence the warming slope is more than three standard deviations above zero, which means we can say with 99.9% confidence, not your 86%, that there has been warming on the basis of the UAH data.
I have followed your algorithm exactly here, though as has been pointed out, for the correlated temperatures in a time series the confidence level will be even higher than 99.9%.
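A minimal sketch of that monthly recalculation using the same standard-error formula as the head post, assuming the 180 monthly UAH anomalies have been loaded into a list ymon (a hypothetical name; the monthly values are not reproduced here):

nm = 180;    (* 15 years of monthly points *)
xm = Table[1995 + (i - 0.5)/12, {i, nm}];    (* month midpoints expressed in years *)
xmDev = xm - Mean[xm]; ymDev = ymon - Mean[ymon];
slopeMonthly = Total[xmDev*ymDev]/Total[xmDev^2]    (* trend in °C per year *)
slopeErrorMonthly = Sqrt[Total[ymDev^2]/(nm - 2)]/Sqrt[Total[xmDev^2]]    (* naive standard error, treating months as independent *)

As the reply below points out, this treats the monthly points as statistically independent, which they are not.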
This is incorrect. The confidence level is lower, not higher. The effect of autocorrelation is to make the estimated standard error lower than it should be; corrected for serial correlation, the standard error will be higher and the confidence level lower. In this case, using the monthly data with no correction for serial correlation, the regression for the trend is:
coefficient std. error t-ratio p-value
———————————————————
const -0.0241097 0.0745042 -0.3236 0.7466
time 0.000776549 0.000258984 2.998 0.0031 ***
which indeed looks very significant. But corrected for serial correlation, it is
coefficient std. error t-ratio p-value
———————————————————
const -0.0241097 0.146453 -0.1646 0.8694
time 0.000776549 0.000484935 1.601 0.1111
The standard error is now doubled, and the trend is no longer significant.
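One common rough way to make that correction, without any special regression package, is to inflate the naive standard error by the AR(1) factor Sqrt[(1 + r)/(1 - r)], where r is the lag-1 autocorrelation of the residuals; a minimal sketch, again reusing the hypothetical monthly list ymon and month axis xm from the sketch a few comments above (this approximation is not necessarily the exact correction used in the output here):

xmDev = xm - Mean[xm]; ymDev = ymon - Mean[ymon];
slopeM = Total[xmDev*ymDev]/Total[xmDev^2];
res = ymDev - slopeM*xmDev;    (* least-squares residuals *)
r1 = Correlation[Most[res], Rest[res]]    (* lag-1 autocorrelation of the residuals *)
seNaive = Sqrt[Total[res^2]/(Length[res] - 2)]/Sqrt[Total[xmDev^2]];    (* uncorrected standard error *)
seCorrected = seNaive*Sqrt[(1 + r1)/(1 - r1)]    (* AR(1)-inflated standard error of the trend *)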
It is ludicrous to expect any prediction to be 100% valid using any length of historical data, be it 10, 100, 1,000, 10,000, … years. It would be like predicting the future trend of a stock purely from past data. It can’t be done. If it could, we’d all be trillionaires by now.
For those of you who think monthly anomaly data is a valid unit … well, why not use daily data? Why not hours, minutes, nanoseconds?
Clearly I could show any trend I wanted by making the units small enough and I could get great statistical verification.
And, for those who seem to have their heads buried in the sand: it was climate scientists, responding to the 10 years of cooling that was getting lots of play in the press, who stated unequivocally that it would have to be 15 years to be meaningful. Don’t complain to me; complain to those scientists. I already stated that I thought a single unit should be a full PDO cycle.
But, now we have reason to ask these climate scientists if they have changed their position on AGW by their very own reasoning.