Guest Post By Werner Brozek, Edited By Just The Facts
In order to answer the question in the title, we need to know what time period is reasonable to take into consideration. We also need to know exactly what we mean by “stalled”. For example, do we mean that the slope of the temperature-time graph must be 0 in order to claim that global warming has stalled? Or do we mean that we have to be at least 95% certain that there has indeed been warming over a given period?
With regards to what a suitable time period is, NOAA says the following:
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
To verify this for yourself, see page 23 of this NOAA Climate Assessment.
Below we present you with just the facts and then you can assess whether or not global warming has stalled in a significant manner. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no significant warming on several data sets. The third section will show how 2012 ended up in comparison to other years. The appendix will illustrate sections 1 and 2 in a different way. Graphs and tables will be used to illustrate the data.
Section 1
This analysis uses the latest month for which data is available on WoodForTrees.org (WFT). (If any data is updated after this report is sent off, I will note it in the comments for this post.) All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month. (A short sketch of this search procedure follows the list below.)
On all data sets below, the times for a slope that is at least very slightly negative range from 8 years and 3 months to 16 years and 1 month:
1. For GISS, the slope is flat since May 2001 or 11 years, 7 months. (goes to November)
2. For Hadcrut3, the slope is flat since May 1997 or 15 years, 7 months. (goes to November)
3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since December 2000 or an even 12 years. (goes to November)
4. For Hadcrut4, the slope is flat since November 2000 or 12 years, 2 months. (goes to December.)
5. For Hadsst2, the slope is flat since March 1997 or 15 years, 10 months. (goes to December)
6. For UAH, the slope is flat since October 2004 or 8 years, 3 months. (goes to December)
7. For RSS, the slope is flat since January 1997 or 16 years and 1 month. (goes to January) RSS is 193/204 or 94.6% of the way to Ben Santer’s 17 years.
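To make the search concrete, below is a minimal sketch in Python of the procedure just described. The function name and the (decimal year, value) input format are illustrative only; WFT does this calculation for you.

```python
# A minimal sketch of the search described above (not WFT's own code):
# find the furthest-back month from which the ordinary least-squares slope
# to the most recent month is still at least slightly negative.
import numpy as np

def longest_flat_period(anomalies, min_months=24):
    """anomalies: chronological list of (decimal_year, value) pairs.
    Returns the earliest decimal year from which the slope to the end of
    the series is <= 0, or None if no such start month exists."""
    t = np.array([a[0] for a in anomalies], dtype=float)
    y = np.array([a[1] for a in anomalies], dtype=float)
    for start in range(len(t) - min_months):
        slope = np.polyfit(t[start:], y[start:], 1)[0]  # degrees C per year
        if slope <= 0:
            return t[start]  # the first qualifying month is the furthest back in time
    return None
```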
The following graph, also used as the header for this article, shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the sloped wiggly line shows how CO2 has increased over this period:
The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted:

Section 2
For this analysis, data was retrieved from WoodForTrees.org and the ironically named SkepticalScience.com. This analysis indicates how long there has not been significant warming at the 95% level on various data sets. The first number in each case (the trend) was sourced from WFT; however, the second, +/- number was taken from SkepticalScience.com. (A short sketch after the list below illustrates the criterion being applied.)
For RSS, the warming is not significant for over 23 years.
For RSS: +0.127 +/-0.136 C/decade at the two sigma level from 1990
For UAH, the warming is not significant for over 19 years.
For UAH: 0.143 +/- 0.173 C/decade at the two sigma level from 1994
For Hadcrut3, the warming is not significant for over 19 years.
For Hadcrut3: 0.098 +/- 0.113 C/decade at the two sigma level from 1994
For Hadcrut4, the warming is not significant for over 18 years.
For Hadcrut4: 0.095 +/- 0.111 C/decade at the two sigma level from 1995
For GISS, the warming is not significant for over 17 years.
For GISS: 0.116 +/- 0.122 C/decade at the two sigma level from 1996
If you want to know the times to the nearest month that the warming is not significant for each set, they are as follows: RSS since September 1989; UAH since April 1993; Hadcrut3 since September 1993; Hadcrut4 since August 1994; GISS since October 1995 and NOAA since June 1994.
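The criterion being applied above is simply whether the two sigma interval (trend ± 2σ) straddles zero. A minimal sketch, with the figures quoted above hard-coded for illustration:

```python
# Sketch: a trend is treated as "not significant" here when the two-sigma
# interval (trend +/- 2*sigma) straddles zero. The values are the ones
# quoted above from the SkepticalScience.com trend calculator (C/decade).
trends = {                      # name: (central trend, 2-sigma half-width)
    "RSS from 1990": (0.127, 0.136),
    "UAH from 1994": (0.143, 0.173),
    "Hadcrut3 from 1994": (0.098, 0.113),
    "Hadcrut4 from 1995": (0.095, 0.111),
    "GISS from 1996": (0.116, 0.122),
}

for name, (trend, two_sigma) in trends.items():
    significant = (trend - two_sigma) > 0.0   # is the lower bound above zero?
    print(f"{name}: {trend:+.3f} +/- {two_sigma:.3f} C/decade -> "
          f"{'significant' if significant else 'NOT significant'} at the two sigma level")
```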
Section 3
This section shows data about 2012 in the form of tables. Each table shows the six data sources along the left, namely UAH, RSS, Hadcrut4, Hadcrut3, Hadsst2, and GISS. Along the top are the following:
1. 2012. Below this, I indicate the present rank for 2012 on each data set.
2. Anom 1. Here I give the average anomaly for 2012.
3. Warm. This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.
4. Anom 2. This is the average anomaly of the warmest year just to its left.
5. Month. This is the month where that particular data set showed the highest anomaly. The months are identified by the first two letters of the month and the last two numbers of the year.
6. Anom 3. This is the anomaly of the month immediately to the left.
7. 11ano. This is the average anomaly for the year 2011. (GISS and UAH were 10th warmest in 2011. All others were 13th warmest for 2011.)
Anomalies for different years:
| Source | 2012 rank | Anom 1 | Warmest | Anom 2 | Month | Anom 3 | 2011 anom |
|---|---|---|---|---|---|---|---|
| UAH | 9th | 0.161 | 1998 | 0.419 | Ap98 | 0.66 | 0.130 |
| RSS | 11th | 0.192 | 1998 | 0.55 | Ap98 | 0.857 | 0.147 |
| Had4 | 10th | 0.436 | 2010 | 0.54 | Ja07 | 0.818 | 0.399 |
| Had3 | 10th | 0.403 | 1998 | 0.548 | Fe98 | 0.756 | 0.340 |
| sst2 | 8th | 0.342 | 1998 | 0.451 | Au98 | 0.555 | 0.273 |
| GISS | 9th | 0.56 | 2010 | 0.66 | Ja07 | 0.93 | 0.54 |
If you wish to verify all rankings, go to the following:
For UAH, see here; for RSS, see here; and for Hadcrut4, see here. Note the number opposite 2012 at the bottom. Then, going up to 1998, you will find that there are 9 numbers above this number. That confirms that 2012 is in 10th place for Hadcrut4. (By the way, 2001 came in at 0.433, or only 0.001 less than 0.434 for 2012, so statistically you could say these two years are tied.)
For Hadcrut3, see here. You have to do something similar to Hadcrut4, but look at the numbers at the far right. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less.
For Hadsst2, see here. View it as for Hadcrut3. It came in 8th place with an average anomaly of 0.342, narrowly beating 2006 by 2/1000 of a degree, as that year came in at 0.340. In my ranking, I did not consider error bars; however, 2006 and 2012 would statistically be a tie for all intents and purposes.
For GISS, see here. Check the J-D (January to December) average and then check to see how often that number is exceeded back to 1998.
For the next two tables, we again have the same six data sets, but this time the anomaly for each month is shown. [The table is split in half to fit; if you know how to compress it to fit the full year, please let us know in the comments.] The last column has the average of all points to the left. (A short sketch at the end of this section shows how the Avg column and the rankings can be reproduced.)
| Source | Jan | Feb | Mar | Apr | May | Jun |
|---|---|---|---|---|---|---|
| UAH | -0.134 | -0.135 | 0.051 | 0.232 | 0.179 | 0.235 |
| RSS | -0.060 | -0.123 | 0.071 | 0.330 | 0.231 | 0.337 |
| Had4 | 0.288 | 0.208 | 0.339 | 0.525 | 0.531 | 0.506 |
| Had3 | 0.206 | 0.186 | 0.290 | 0.499 | 0.483 | 0.482 |
| sst2 | 0.203 | 0.230 | 0.241 | 0.292 | 0.339 | 0.352 |
| GISS | 0.36 | 0.39 | 0.49 | 0.60 | 0.70 | 0.59 |
| Source | Jul | Aug | Sep | Oct | Nov | Dec | Avg |
|---|---|---|---|---|---|---|---|
| UAH | 0.130 | 0.208 | 0.339 | 0.333 | 0.282 | 0.202 | 0.161 |
| RSS | 0.290 | 0.254 | 0.383 | 0.294 | 0.195 | 0.101 | 0.192 |
| Had4 | 0.470 | 0.532 | 0.515 | 0.527 | 0.518 | 0.269 | 0.434 |
| Had3 | 0.445 | 0.513 | 0.514 | 0.499 | 0.482 | 0.233 | 0.403 |
| sst2 | 0.385 | 0.440 | 0.449 | 0.432 | 0.399 | 0.342 | 0.342 |
| GISS | 0.51 | 0.57 | 0.66 | 0.70 | 0.68 | 0.44 | 0.56 |
To see the above in the form of a graph, see the WFT graph below:
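The ranking checks and the Avg column described in this section can be reproduced with a few lines. A minimal sketch (the `monthly` dictionary of year-to-anomaly lists is illustrative and is assumed to have been loaded from one of the sources linked above):

```python
# Sketch of the ranking check: average each year's monthly anomalies and
# count how many years have a higher average than the target year.
def year_rank(monthly, year=2012):
    """monthly: dict mapping year -> list of that year's monthly anomalies."""
    annual = {yr: sum(vals) / len(vals) for yr, vals in monthly.items() if vals}
    warmer = sum(1 for yr, a in annual.items() if yr != year and a > annual[year])
    return warmer + 1   # e.g. 9 warmer years means the target year is 10th

# The Avg column above is just this average; e.g. Hadcrut3 for 2012:
had3_2012 = [0.206, 0.186, 0.290, 0.499, 0.483, 0.482,
             0.445, 0.513, 0.514, 0.499, 0.482, 0.233]
print(round(sum(had3_2012) / 12, 3))   # 0.403, matching the table
```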
Appendix
In this part, we are summarizing data for each set separately.
RSS
The slope is flat since January 1997 or 16 years and 1 month. (goes to January) RSS is 193/204 or 94.6% of the way to Ben Santer’s 17 years.
For RSS the warming is not significant for over 23 years.
For RSS: +0.127 +/-0.136 C/decade at the two sigma level from 1990.
For RSS, the average anomaly for 2012 is 0.192. This would rank 11th. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147 and it will come in 13th.
Following are two graphs via WFT. Both show all plotted points for RSS since 1990. Two lines are then added to the first graph: the first, upward-sloping line is the trend from the point where warming is not significant at the 95% confidence level; the second, flat line is the trend from the point where the slope is zero.
The second graph shows the above, but with two extra lines added: the upper and lower 95% confidence limits. Note that the lower line is almost horizontal but slopes slightly downward. This indicates that, according to RSS, there is a slightly larger than 5% chance that cooling has occurred since 1990. See graph 1 and graph 2.
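For anyone wishing to reproduce lines like these, here is a minimal sketch of the underlying calculation: an ordinary least squares trend through the monthly anomalies, plus the slopes at the two ends of an approximate two sigma interval. Note that the Skeptical Science calculator corrects the slope uncertainty for autocorrelation in the monthly data, which this naive sketch does not, so its interval will be narrower than the quoted figures.

```python
# Sketch: OLS trend through monthly anomalies, plus the slopes at the ends
# of an approximate 95% (two-sigma) interval. No autocorrelation correction,
# so this understates the uncertainty relative to the Skeptical Science tool.
import numpy as np

def trend_with_bounds(t, y, k=2.0):
    """t: decimal years; y: anomalies. Returns (slope, lower, upper) in C/year."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # standard error of an OLS slope estimate
    se = np.sqrt(np.sum(resid**2) / (len(t) - 2) / np.sum((t - t.mean())**2))
    return slope, slope - k * se, slope + k * se
```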
UAH
The slope is flat since October 2004 or 8 years, 3 months. (goes to December)
For UAH, the warming is not significant for over 19 years.
For UAH: 0.143 +/- 0.173 C/decade at the two sigma level from 1994
For UAH the average anomaly for 2012 is 0.161. This would rank 9th. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.130 and it will come in 10th.
Following are two graphs via WFT. Everything is identical as with RSS except the lines apply to UAH. Graph 1 and graph 2.
Hadcrut4
The slope is flat since November 2000 or 12 years, 2 months. (goes to December.)
For Hadcrut4, the warming is not significant for over 18 years.
For Hadcrut4: 0.095 +/- 0.111 C/decade at the two sigma level from 1995
With Hadcrut4, the anomaly for 2012 is 0.436. This would rank 10th. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399 and it will come in 13th.
Following are two graphs via WFT. Everything is identical as with RSS except the lines apply to Hadcrut4. Graph 1 and graph 2.
Hadcrut3
The slope is flat since May 1997 or 15 years, 7 months (goes to November)
For Hadcrut3, the warming is not significant for over 19 years.
For Hadcrut3: 0.098 +/- 0.113 C/decade at the two sigma level from 1994
With Hadcrut3, the anomaly for 2012 is 0.403. This would rank 10th. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340 and it will come in 13th.
Following are two graphs via WFT. Everything is identical as with RSS except the lines apply to Hadcrut3. Graph 1 and graph 2.
Hadsst2
The slope is flat since March 1997 or 15 years, 10 months. (goes to December)
The Hadsst2 anomaly for 2012 is 0.342. This would rank 8th. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273 and it will come in 13th.
Sorry! The only graph available for Hadsst2 is this.
GISS
The slope is flat since May 2001 or 11 years, 7 months. (goes to November)
For GISS, the warming is not significant for over 17 years.
For GISS: 0.116 +/- 0.122 C/decade at the two sigma level from 1996
The GISS anomaly for 2012 is 0.56. This would rank 9th. 2010 was the warmest at 0.66. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2011 was 0.54 and it will come in 10th.
Following are two graphs via WFT. Everything is identical as with RSS except the lines apply to GISS. Graph 1 and graph 2.
Conclusion
Above, various facts have been presented along with the sources from which they were obtained. Keep in mind that no one is entitled to their own facts; it is only in the interpretation of the facts that legitimate discussion can take place. After looking at the above facts, do you think that we should spend billions to prevent catastrophic warming? Or do you think that we should take a “wait and see” attitude for a few years to be sure that future warming will be as catastrophic as some claim it will be? Keep in mind that even the Met Office felt the need to revise its forecasts, and that it believes the 1998 mark will be beaten by 2017. Look at the following. Do you agree?

richardscourtney says:
February 11, 2013 at 3:24 am
And an important consideration in this determination is whether or not (at the 95% level) a zero trend has existed for 15 or more years. This is because in 2008 NOAA reported that the climate models show “Near-zero and even negative trends are common for intervals of a decade or less in the simulations”.
But, the climate models RULE OUT “(at the 95% level) zero trends for intervals of 15 yr or more”.
I explain this with reference, quote, page number and link to the NOAA 2008 report in my post at February 10, 2013 at 5:48 pm.
But you neglect to mention that those models did not include ENSO (clearly stated in the report).
I understand your difficulty. What the models “rule out” nature has done, and this falsifies your cherished models. You need to come to terms with it.
Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.
Philip Shehan says:
More cherry picking.
Let’s just concentrate on one of the data sets shown – the claim that Hadcrut4 temperature data is flat since November 2000.
Look also at the data from 1999. Or compare the entire Mauna Loa data set from 1958 with temperature.
And remember, “statistical significance” here is at the 95% level. That is, even a 94% probability that the data is not a chance result fails at this level.
http://tinyurl.com/atsx4os
==========
Yes, I would agree with your introductory phrase: what you are doing is “more cherry picking”. The usual IPCC deception. Choose a period where both are going in the same direction and pretend this shows causation.
Why did you choose 1958? Let’s see. What does 1928 look like?
http://www.woodfortrees.org/plot/hadcrut4gl/from:1928/mean:12/plot/esrl-co2/from:1928/normalise/scale:0.75/offset:0.2/plot/hadcrut4gl/from:1999/to:2013/trend/plot/hadcrut4gl/from:2000.9/to:2013/trend
Now Mauna Loa record does not go that far back but no one pretends CO2 was dropping from 1930-1960. So what YOU were doing is cherry picking.
What the author was doing was testing the data against a specific claim that was intended to be a falsifiable statement … which turned out to be falsified. There is no “cherry-picking” involved in testing whether a hypothesis fits the claims of its authors.
That is simple, honest application of scientific method.
What part of “simple” and “honest” is causing you problems Professor Cherry-picker?
Werner Brozek says:
February 11, 2013 at 9:13 am
Phil. says:
February 11, 2013 at 5:09 am
Thank you! So I want to be sure I said the correct thing. Earlier, I said the following:
“If you go to:
http://www.skepticalscience.com/trend.php
And if you then start from 1989.67, you would find:
“0.131 ±0.132 °C/decade (2σ)”
In other words, the warming is not significant at 95% since September 1989.”
Is this last statement completely correct or should it be modified in some way? If so, how? Thanks!
You’re welcome. In your last statement, I’d say that the null hypothesis is that no warming took place, i.e. that the real trend is less than or equal to zero. My alternate hypothesis would be that warming did take place; I therefore need to reject the null hypothesis on statistical grounds, and I choose to do so at the 95% level. Using the data we have and doing a ‘one tailed’ test, I would need to show that there is a less than 5% chance of the trend actually being zero or less. In the case you presented, the data shows an ~2.5% chance of zero or below, so the null hypothesis is rejected; therefore warming is significant at the 95% level. I’d prefer to work with the original data and do a Pearson-type test, but I’d expect a similar result.
To fail to reject the null at the 95% level on this basis, the threshold would be at 1.65 sigma, so roughly 0.131±0.216°C/decade based on my back of the envelope calc.
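[A minimal numerical sketch of the argument above, using the quoted RSS figure of 0.131 ± 0.132 C/decade (2σ) and a plain normal approximation:]

```python
# Worked check (normal approximation) of the one-tailed argument above,
# using the quoted RSS-from-1989.67 trend of 0.131 +/- 0.132 C/decade (2 sigma).
from math import erfc, sqrt

trend, two_sigma = 0.131, 0.132
sigma = two_sigma / 2.0

z = trend / sigma                          # ~1.98 standard errors above zero
p_one_tailed = 0.5 * erfc(z / sqrt(2.0))   # P(true trend <= 0), roughly 2.4%

print(f"z = {z:.2f}, one-tailed p = {p_one_tailed:.3f}")
print("two-sigma interval includes zero:", trend - two_sigma < 0)   # True
print("one-tailed significant at 95% (z > 1.645):", z > 1.645)      # True
```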
Phil says: Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that.
===
Oh, you gotta love it.
When the data is “corrected” ….. Hey bud, the DATA is correct, that’s why it’s called the data (singular datum: point of reference). That’s where science starts. Now how about correcting the frigging models?
No one was “correcting” for the lack of ENSO when they were winding up the world at the end of the century. Then the “uncorrected” warming was a “wake up call” etc. etc. Now we have to take it into account.
How about we “correct” for the 60y cycle and the 10y and the 9y and then come back and look at what climate is really doing?
When all the cycles peaked at once it was all fine and dandy to claim the world was about to turn into Venus. Now the wind is blowing in the other direction, it suddenly has to be corrected…. until next time it starts going up again and we can forget the “corrections”.
The sad thing is you guys are probably serious and honestly believe this garbage.
Phil. says:
February 11, 2013 at 10:00 am
Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.
I would be more inclined to agree with you if there had been a strong El Nino and then neutral conditions afterwards. However, as it turns out, the La Ninas that followed the 1998 El Nino effectively cancelled it out. So the slope since 1997, or 16 years and 1 month, is 0; however, the slope from March 2000, or 12 years and 11 months, is also 0. So in my opinion, nature has corrected for ENSO. However, even if it didn’t, there comes a point in time where one has to stop blaming an ENSO from 14 years back for a lack of catastrophic warming. It would be like a new president blaming the old president for things 7 years after taking office. Note that the graph below actually starts in the La Nina region.
http://www.woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997/trend/plot/rss/from:2000.16/trend
P.S. Thank you for
Phil. says:
February 11, 2013 at 10:29 am
richardscourtney says:
February 10, 2013 at 5:48 pm
DirkH says:
February 11, 2013 at 5:48 am
The term “falsification criterion” is highly misleading. There is no “falsification criterion” published by NOAA or anyone else. Here again is the relevant quote:
This defines a discrepancy, not a falsification. There’s no reason to believe NOAA intends these words to be synonyms.
To start with, when an event takes place that was only 5% probable, the only thing we can say for sure is that we are surprised. (Or, in this case, if you’re Phil Jones, you can say that you’re worried.) Sometimes we have tests that will accept or reject a hypothesis (really, rejecting or accepting a null hypothesis) at the 95% level–if we define it that way, and we’re willing to live with the mistakes. We do this in the context of, say, drug testing. We can’t do statistical tests without error, and getting down to a 1% error is very expensive. And, for the final tests, the statistics is all we have. So, some beneficial drugs are rejected, and some useless ones accepted, because that simply cannot be helped.
But in science, when the question is, “what’s going on? How does nature work in this system?” we look at the data, perhaps noting that some of it is surprising, and then view it in context with the rest of the data. Some data may show a problem for the hypothesis (or the model) but there’s no universal, automatic rejection criterion.
Even if you decide that the hypothesis (or computer model) is not working, it may be possible to salvage it. It may be that a modified version will work much better, and the discrepancies between data and prediction may even point the way to refining the hypothesis (or model). This is quite common in science. A great many scientists could give advice on how to do this, and, importantly, on when to give up. But, there aren’t any hard-and-fast rules on it, certainly none so simplistic as a 95% criterion for one test.
So: There is no specific “falsification criterion” for models based on flat temperatures, only an informal definition of “discrepancy.” A discrepancy does not falsify the models; it is interesting enough to merit a closer look.
BUT: there is an even more serious problem with this 95% discrepancy definition: not all of the processes contributing to temperature are explicitly predicted in the models. Specifically, you cannot predict ENSO, solar radiation, or even volcanoes many years into the future. So, instead, the models include future El Nino and La Nina events at random (although with frequency and strength consistent with what we know), and do not consider solar and volcanic forcings at all for the future. (Anybody who understands these things can at least guess how solar brightening or dimming, or volcanic eruptions, would affect the model projections.) These all come into play for “hindcasting,” wherein they run the models using the observed ENSO, solar, and volcanic events. 1998 saw a record-breaking El Nino, followed by La Nina events soon afterwards and also recently. The sun has been quiet lately, too, which should be of great interest to anyone interested in climate issues.
People keep ignoring this. Really, this is not that difficult. Suppose a soccer (football) commentator, doing TV broadcasts of the games, acquired an uncanny knack for predicting the winners and scores of individual games. But, one week, some of the best players on the stronger team are injured in an auto accident and cannot play. They lose. The following week, there’s a surprise announcement; a weaker team acquires a star player and wins handily. The commentator is wrong on both counts. Does everyone say, “he’s lost his touch, he’s no good at predicting!” Or do they say, “He could not have foreseen that”? Reasonable people would say the latter.
So, in looking at recent temperatures, how to deal with ENSO, modeled only statistically in forecasts, and with solar (and volcanic) influences? Rahmstorf and Foster did so with a multivariate regression analysis: http://iopscience.iop.org/1748-9326/7/4/044035/article
This is from 2011, but since then, the issues have been more of the same.
Bob Tisdale has taken issue with the ENSO part, at least:
http://wattsupwiththat.com/2012/11/28/mythbusting-rahmstorf-and-foster/
and
http://wattsupwiththat.com/2012/01/14/tisdale-on-foster-and-rahmstorf-take-2/
Although I think Rahmstorf and Foster have shown that it’s plausible that there is no discrepancy for temperatures over the last two decades and longer–or to put it another way, there is not necessarily any discrepancy–I would like to see some hindcasts run over that time period, showing how ENSO, solar, and volcanoes fit with known physical parameters, not just fitting them to one another.
Science is complicated. We’re used to getting simple explanations that summarize a lot of this complexity into a more digestible form. But some explanations, like “flat temperatures–models falsified” are much too simple. Even a little digging is enough to falsify a misleading explanation such as that one.
Werner,
Thanks for your kind responses.
Although I don’t think you answered your headline question, you’re quite right to point out that the global temperature anomaly series are all showing long periods of time during which temperatures have shown no significant upward trend. It’s clearly a striking departure from the sharp rise that was playing out between 1970 and the late 1990s, and it is well worth drawing attention to this.
I’ve been playing around with LOESS curve fitting to the temperature series, using a 15-year smoothing period (which to me seems to be a way to ask the question about changes in trends without cherry picking too badly), and they’re all showing similar behaviour, with the WoodForTrees index dead flat since 2007, Hadcrut4 showing a downward trend in the last 5 years, but UAH (including the January 2013 figure) still rising slowly.
I’m not sure that it has yet reached the point where anyone can really say that the expectation of continued rise in temperature is disproved or falsified. If you fit a straight line to the first half of the satellite era (when everyone agrees temperatures were rising) and extrapolate it forward to the second half, it really doesn’t look that stupid. OK, most series are currently below the extrapolated trend line, but not by an amount that is exceptional.
Lucia Liljegren (The Blackboard) periodically attempts to analyse the question of whether current temperature observations are compatible (at the 95% confidence level) with the predictions of climate models, or with an assumed 0.2C/decade trend. Her understanding of the impact of autocorrelation on linear trend confidence intervals is way beyond mine. She seems to be of the view that recent observations have been skirting very much along the bottom of, and (depending on choice of model) possibly outside, the 95% envelope, but at this stage it wouldn’t take much of an upward move to bring them back into line.
I fear you may find that your analysis is rather sensitive to what happens next. If temperatures drop, then you are going to be able to report longer and more significant non-warming periods as time goes on (and paradigms will probably have to change!), but if temperatures show much of a rise, the opposite will happen.
The January 2013 figure for UAH of +0.506, which you haven’t included in this analysis, has already invalidated the 8 year flat-or-negative trend that you report above. With that figure included, you’re down to 2008 (4 years) as the longest flat-or-negative trend.
The very impressive over-16-years RSS flat trend includes the January 2013 figure, but if RSS stays at exactly the same level for the rest of 2013, I’m afraid you won’t reach Ben Santer’s 17 full years of no warming, because by Dec 2013, you’ll still have a month to go and the trend will have turned positive!
Werner,
Apologies for the second post, but I wanted also to respond specifically to your comment (February 11, 2013 at 9:28 am) about my suggestion:
You said:
Werner, if you think that the slope of 0.015 has any validity as a predictor of what might happen next, then you have to embrace the entire linear regression model. The linear regression model is telling us both that the slope is 0.015 and that the best estimate of the real value (after eliminating noise) at the end of 2012 is 0.381. You can’t pick out the slope figure and apply it using the Dec 2012 figure of 0.274 as the starting point, because the model tells us we’re already at 0.381 at Dec 2012.
The latest observation, 0.274, is just a single observation. It happens to be 0.108 (roughly 2 standard deviations according to Skeptical Science’s calculator) below the best estimate of the central trend value at Dec 2012. If the linear model is right, that 0.108 deviation is noise. By 2017, we’re as likely to end up 0.108 above the trend as 0.108 below it. So a figure of 0.35 is possible, but so is a figure of 0.57! And the most likely outcome (if the linear model is right) is going to be bang on the current trend line, or round about 0.45, which is comfortably above the 1998 figure.
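[A minimal sketch of the arithmetic in the comment above, with the quoted figures hard-coded: a fitted value of 0.381 at the end of 2012, a slope of 0.015 per year, and about ±0.108 of noise.]

```python
# Sketch of the projection described above: extend the fitted trend value
# forward at 0.015 C/year and allow about +/-0.108 (roughly two standard
# deviations) of noise either side of the trend line.
fitted_end_2012 = 0.381
slope_per_year = 0.015
noise_2sd = 0.108
years_ahead = 5                      # end of 2012 to end of 2017

central_2017 = fitted_end_2012 + slope_per_year * years_ahead
low, high = central_2017 - noise_2sd, central_2017 + noise_2sd
print(f"{low:.3f} {central_2017:.3f} {high:.3f}")
# -> roughly 0.348 / 0.456 / 0.564, i.e. the 0.35, "round about 0.45" and 0.57
#    figures quoted in the comment above
```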
Nigel Harris says:
February 11, 2013 at 12:40 pm
Nigel,
Thanks for your kind responses.
As for my not including January for UAH, that is because UAH has not put the January numbers into the data set that WFT uses. But it gets worse than that: I could only go to November with GISS, Hadcrut3, and WTI, since WFT does not have the December numbers on its site yet. If I see new numbers tonight, I will give an update. Should all numbers appear, then UAH will get shorter and Hadcrut3 and WTI will get longer. I do not know about GISS since, while there was a huge drop in December, they adjusted many numbers upwards last month.
As far as the jump in January was concerned, I did find it odd since ENSO has been dropping for the last 5 months. However sudden rises and drops are not unheard of. See:
Hadcrut4 data is shown from October 2006 to March 2007. From November to January, the anomaly jumped by 0.293. Then from January to February, it dropped by 0.28.
http://www.woodfortrees.org/plot/hadcrut4gl/from:2006.75/to:2007.25
Nigel Harris says:
February 11, 2013 at 1:00 pm
Hello Nigel,
The Hadcrut people did something really strange with regard to their prediction of 0.43. They used a different baseline, where the 1998 value became 0.40 instead of 0.56. That is why I had to offset it by -0.16. Here is what they really should have used:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1997/mean:12
However this makes no difference to my final conclusion.
Note that the final point on the above graph is close to the 2012 average of 0.434. I do not think the slope will go up at 0.015/year. I am just saying that even if it did, you would not reach the 1998 height. That would rival the fastest trends ever recorded. See:
http://news.bbc.co.uk/2/hi/science/nature/8511670.stm
Phil.:
At February 11, 2013 at 10:00 am you perhaps inadvertently misled by only quoting the synopsis in my post of February 11, 2013 at 3:24 am.
Please read my earlier post at February 10, 2013 at 5:48 pm.
In that post I cite, reference, link to and quote the entire NOAA falsification criterion. To save you finding it I copy the pertinent section from it here.
Please note that I specifically stated
“We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.”
So, a claim that I misstated the criterion and did not consider ENSO is a falsehood.
Richard
Phil.:
This is a deliberate addendum to my reply to your post at February 11, 2013 at 10:00 am.
This is separate so it is clear and not hidden among other points.
You wrongly assert:
"Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed."
ENSO is not understood, so no method for removing ENSO can be rationally asserted as being better than any other.
I wrote:
"We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak."
You do not have a point.
Richard
JazzyT:
I read your sophistry at February 11, 2013 at 11:49 am.
If you insist, then I am willing to agree that the criterion be called a ‘discrepancy criterion’ and not the usual ‘falsification criterion’.
So, according to your wording
the models are not falsified when they are “discrepant” with reality.
Perhaps you would explain how “discrepant” they have to be for them to be falsified?
Richard
Werner Brozek says:
February 11, 2013 at 11:28 am
Phil. says:
February 11, 2013 at 10:00 am
“Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.”
I would be more inclined to agree with you if there had been a strong El Nino and then neutral conditions afterwards. However, as it turns out, the La Ninas that followed the 1998 El Nino effectively cancelled it out. So the slope since 1997, or 16 years and 1 month, is 0; however, the slope from March 2000, or 12 years and 11 months, is also 0. So in my opinion, nature has corrected for ENSO. However, even if it didn’t, there comes a point in time where one has to stop blaming an ENSO from 14 years back for a lack of catastrophic warming.
Certainly there comes a time; however, the statistical treatment using the ENSO index suggests that that time is not yet. However, the main point is that you can’t use a criterion based on models which don’t include an ENSO model and apply it to a time series which does include ENSO.
Greg Goodman says:
February 11, 2013 at 10:41 am
Phil says: Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that.
===
Oh, you gotta love it.
When the data is “corrected” ….. Hey bud, the DATA is correct, that’s why it’s called the data (singular datum: point of reference). That’s where science starts. Now how about correcting the frigging models?
The data is correct, but it cannot be compared with the results of models which don’t contain a model for the ENSO phenomenon. Yes, to incorporate a model for ENSO would be nice, but since it’s a phenomenon which doesn’t occur on a regular basis, that’s difficult to do! What has been done is to account for the known events and then compare, which shows no such extended flat period.
richardscourtney says:
February 11, 2013 at 2:52 pm
Phil.:
This is a deliberate addendum to my reply to your post at February 11, 2013 at 10:00 am.
This is separate so it is clear and not hidden among other points.
You wrongly assert
Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.
ENSO is not understood so no method for removing ENSO can be rationally asserted as being better than any other.
I wrote
“We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.”
You do not have a point.
Actually I do. You are attempting to compare apples with oranges. You also assert that no method can be rationally used to correct for ENSO, in which case even the comparisons made using statistical adjustments for ENSO aren’t valid, so your continued quoting of the NOAA criterion isn’t applicable to real-world data. The ENSO phenomenon over the last 17 years is much more than a single event in 1998. So I reiterate, nature has not done it.
Greg Goodman says:
February 11, 2013 at 10:27 am…
You miss my point. I cherry picked 1999 to show that it gave a quite different result to November 2000. The point being that you can cherry pick a short term period to show any trend you like. (My choice of 1958 was not cherry picking. That is when Mauna Loa data starts.)
Furthermore, the concentration on “statistical significance” actually shows precisely why short term data trends are misleading. In the short term the noise swamps the signal, and the shorter the time frame the greater the uncertainty. Take the Hadcrut4 data above, and proceed backwards decade by decade. The trend and 95% confidence limits per decade are as follows:
11/2000: -0.008 ± 0.171
1999: 0.079 ± 0.149
1995: 0.098 ± 0.111
1990: 0.144 ± 0.080
1980: 0.158 ± 0.045
1970: 0.164 ± 0.031
1960: 0.132 ± 0.025
And so on the further back you go.
http://www.skepticalscience.com/trend.php
So those who pick short periods of time and declare them “not statistically significant” are actually explaining why their data should be ignored.
They are setting the data up to fail. Only multidecadal trends are meaningful.
“Has Global Warming Stalled?”
Of course it has. Only the ignorant and the dishonest would claim otherwise.
Phil.:
At February 11, 2013 at 3:12 pm you say
I feed back what I read that to say because, if this is how I understand your comment, then others will, too.
You are saying that when the empirical data don’t agree with the model then the empirical data must be adjusted to agree.
That contravenes every principle of scientific modelling.
Of course, one may want to exclude the effect of ENSO from the data because the model does not emulate ENSO. But nobody understands ENSO and, therefore, parsimony dictates the exclusion needs to be interpolation across – or extrapolation across – an ENSO event. Any other ‘adjustment’ for ENSO is a fudge.
The global temperature time series each shows (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.
I am sure you will want to respond to this and I apologise that I will not see any reply for about a week because I am about to go on one of my frequent but irregular trips which exclude me from communications.
Richard
Philip Shehan says:
February 11, 2013 at 3:27 pm
In the short term the noise swamps the signal, and the shorter the time frame the greater the uncertainty. …..Only multidecadal trends are meaningful.
But on the other hand, if the warming rate is high enough, then 16 years is sufficient. Check out the following for Hadcrut4:
Start of 1995 to end 2009: 0.133 +/- 0.144. Warming for 15 years is NOT significant.
Start of 1995 to end 2010: 0.137 +/- 0.129. Warming for 16 years IS significant.
Start of 1995 to end 2011: 0.109 +/- 0.119. Warming for 17 years is NOT significant.
Start of 1995 to October 2012: 0.098 +/- 0.111. Warming for 18 years is NOT significant.
Werner, based on the ‘one tailed’ test, all of those examples show significant warming at the 95% level.
Phil.,
You sound positively jealous that Werner Brozek has posted such a credible argument.
There’s nothing stopping you from posting your own article, you know. Then you would see all the similar nitpicking comments about whether a test has one tail or two, etc.
Face facts, Werner has made a good case. Global warming has stalled.
UPDATE
UAH has now been updated for January and it has had a huge effect on the length of time that the slope is at least slightly negative. The time dropped from 8 years and 3 months to 4 years and 7 months. So the slope is now negative from June 2008 to January 2013. See the green line at:
http://www.woodfortrees.org/plot/uah/from:2000/plot/uah/from:2008.5/trend
Werner, you are not only cherry picking years, you are picking years and months within a very short time frame and finding one set that is only just significant. You are rather making my point for me.
Phil. says:
February 11, 2013 at 6:57 pm
Werner, based on the ‘one tailed’ test, all of those examples show significant warming at the 95% level.
Phil, I appreciate your help, and from earlier comments, I understand where you are coming from. I realize your expertise in statistics is greater than mine and I see no reason to dispute the above statement. I checked the site at: http://www.skepticalscience.com/trend.php
I found this statement with regards to using this site:
“What can you do with it?
That’s up to you, but here are some possibilities:
Examine how long a period is required to identify a recent trend significantly different from zero.”
You may recall Phil Jones’ comments in 2010 that the warming was not significant for the 15 years from 1995 to 2009, but he later said it was significant for the 16 years from 1995 to 2010. The only thing that makes sense to me is that he used the ‘two tailed’ test that Skeptical Science uses. Would you agree with that? I appreciate you informing me that I cannot mention the 95%. I will therefore play it safe and say that, according to the Skeptical Science site, such and such is where the trend is “significantly different from zero”. Does the Skeptical Science site do things wrongly? Perhaps, although I am not in a position to judge that.
Regards