Here’s something really interesting: two comparisons between model ensembles and three well-known global climate metrics plotted together. The interesting part is what happens in the near present. While the climate models and the climate measurements start out in sync in 1979, they don’t stay that way as we approach the present.
Here are the trends from 1979 to the year shown for HadCRUT, NOAA, and GISTEMP, compared to the trend from 16 AOGCMs driven by volcanic forcings:
A second graph, showing 20-year trends, makes the divergence even more pronounced. Lucia Liljegren of The Blackboard produced both of these, and she writes:
Note: I show models with volcanic forcings partly out of laziness and partly because the period shown is affected by eruptions of both Pinatubo and El Chichon.
Here are the 20-year trends as a function of end year:
One thing stands out clearly in both graphs: in the past few years the global climate models and the measured global climate have been diverging.
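The underlying calculation — an OLS trend over a sliding 20-year window, plotted against the window’s end year — can be sketched as follows. This is a minimal illustration on a synthetic anomaly series, not the actual HadCRUT/NOAA/GISTEMP data or Lucia’s code:

```python
import numpy as np

def rolling_trends(years, temps, window=20):
    """OLS slope for each `window`-year span, reported against its end year.

    Returns (end_years, slopes) with slopes in degrees C per decade.
    """
    end_years, slopes = [], []
    for i in range(window - 1, len(years)):
        x = years[i - window + 1 : i + 1]
        y = temps[i - window + 1 : i + 1]
        slope = np.polyfit(x, y, 1)[0]   # deg C per year
        end_years.append(years[i])
        slopes.append(slope * 10.0)      # convert to deg C per decade
    return np.array(end_years), np.array(slopes)

# Synthetic stand-in for a temperature-anomaly series (illustrative only):
# a 0.2 C/decade trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1979, 2010)
temps = 0.02 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

ends, trends = rolling_trends(years, temps)
```

Plotting `trends` against `ends` gives a curve of the same kind as the graphs above; the wiggle in the last few values is exactly the endpoint sensitivity discussed in the comments below.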
Lucia goes on to say:
I want to compare how the observed trends fit into the ±95 range of “all trends for all weather in all models”. For now I’ll stick with the volcano models. I’ll do that tomorrow. With any luck, HadCrut will report, and I can show it with March Data. NOAA reported today.
Coming to the rescue was Blackboard commenter Chad, who made his own plot of the ±95% confidence intervals from the model ensembles against HadCRUT. He found divergence very similar to Lucia’s plots, beginning around 2006.
So the question becomes: is this the beginning of a new trend, or just short-term climatic noise? Only time will tell for certain. In the meantime it is interesting to watch.



Thom Scrutchin (17:54:00) wrote:
“The new acronym should be AGWA for Anthropogenic Global Warming Alarmism.”
How about GWAVA, Global Warming, Anthropogenic, Very Alarming…
That one sounds silly too…
David Stockwell,
Your analysis is heavily skewed by a single year’s inclusion, 2008. 2009 is shaping up considerably warmer, and so the trend is now being pushed up again.
Given the noise in the data, the end of the trend line will always be wiggling around like the head of a snake. You’ve snapped the snake’s head looking down, as it has before, but the full movie will show it is often pointing above trend. If you use monthly rather than yearly data and calculate up to the present, you’ll see the snake is starting to look up again.
Should a paper on this topic be so time sensitive that its conclusions depend on whether the trend is calculated in 2006, 2008 or 2009?
Tom P
“Given the noise in the data, the end of trend line will always be wiggling around like the head of a snake.”
That is the point: Rahmstorf, Hansen and others made their heavily cited conclusions on the basis of an uncertain pretext. To say, ‘Oh, but it’s only a couple of years’ data and the endpoint is very uncertain’, and then say that Rahmstorf et al.’s analysis is correct and wonderful, is completely inconsistent.
I am not claiming that the trend has turned down. I am claiming that Rahmstorf’s conclusion that the climate models underestimate warming was so fragile that even two years’ data shows it to be bogus.
David,
I see your point, though “fragile” is the right word, not “bogus” – you can’t show Rahmstorf is wrong, just that he hasn’t made the case.
There is certainly a contradiction between your results and Lucia’s analysis. You show that temperatures are near the mid-range of IPCC TAR,
http://landshape.org/enm/wp-content/uploads/2008/10/liite5_paivitetty_rahmstorf.jpg
while Lucia shows them falling below. I think the model comparison is different, but this might also be due to her using long-term linear least-squares trends, which will always fall below even a perfect model that predicts temperatures above a linear trend.
An OLS trend is also a poor choice, as it weights data points at the beginning of the series equally with the most recent. The best estimate of the future trend at any point will be from a Kalman filter, which reduces to an exponentially weighted least-squares trend. Wouldn’t this be the best way of comparing data to the model?
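The exponentially weighted least-squares idea in that comment can be sketched like this. The half-life constant and the synthetic accelerating series are assumptions for illustration, not anything from Lucia’s or David’s actual analyses:

```python
import numpy as np

def ols_trend(x, y):
    """Ordinary least-squares slope: every point weighted equally."""
    return np.polyfit(x, y, 1)[0]

def ewls_trend(x, y, halflife=10.0):
    """Exponentially weighted least-squares slope.

    Recent points count more; a point's weight halves for every
    `halflife` years it lies before the end of the series. This is
    the discounted-least-squares trend the comment alludes to, as a
    simple stand-in for a full Kalman filter on a local linear trend.
    """
    w = 0.5 ** ((x[-1] - x) / halflife)
    # np.polyfit minimizes sum((w_i * r_i)**2), so pass sqrt of the
    # desired weights on the squared residuals.
    return np.polyfit(x, y, 1, w=np.sqrt(w))[0]

# A noise-free accelerating series (illustrative): the recent trend is
# steeper than the whole-period trend, so EWLS exceeds OLS.
x = np.arange(1979, 2010, dtype=float)
y = 0.0002 * (x - 1979) ** 2
```

On such a series `ewls_trend(x, y, 5.0)` comes out above `ols_trend(x, y)`, which is the point at issue: a long equal-weight trend understates a curve that is bending upward, and overstates one bending downward.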
By the way, have you posted a version of your submitted article on this?
Tom P
The specific error I claim is that they made a Type I error: rejecting a null hypothesis when there was no change in trend. While this appears inconsistent with Lucia, it actually makes no claim about her analysis, and I don’t believe you can conclude that the trend has not changed, as that is another test. Lucia uses different methods, so they are not comparable. In any case, their claim was that the trend had moved into the high model range; the data show it has not. That says nothing about whether it has moved into the middle or low range, and there is also the fuzzy issue of how model projections are spliced onto the temperature data, which makes their analysis very poorly structured from an analytical point of view.
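The statistical point — that a few years of noisy data cannot support rejecting the null hypothesis of an unchanged trend — can be illustrated with a back-of-envelope slope confidence interval. This is a generic textbook sketch on synthetic data, not Rahmstorf’s method or mine, and it assumes independent errors (real temperature data are autocorrelated, which widens the true interval further):

```python
import numpy as np

def slope_ci(x, y, z=1.96):
    """OLS slope with an approximate 95% confidence interval.

    Uses the standard regression formula for the slope's standard
    error; with autocorrelated data this *understates* the width.
    """
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = resid @ resid / (n - 2)                    # residual variance
    se = np.sqrt(s2 / ((x - x.mean()) @ (x - x.mean())))
    return slope, slope - z * se, slope + z * se

# An 8-year window with a modest trend buried in noise (illustrative).
rng = np.random.default_rng(1)
x = np.arange(2001, 2009, dtype=float)
y = 0.02 * (x - 2001) + rng.normal(0.0, 0.1, x.size)

slope, lo, hi = slope_ci(x, y)
```

Over windows this short the interval is typically wide enough to contain both the old trend and zero, which is why a claim that the trend has changed, based on the last couple of years, is so fragile.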
If a forecast is wrong, it is not surprising for it to be quickly shown wrong by subsequent data. But the method needs to be held constant. Sure, there are other methods, but then you get into my method versus your method, rather than showing an error of judgment on their chosen method.
I don’t think it’s fair to the journal to post before publication, and editors can reject on that basis.