
Image Credit: WoodForTrees.org
Guest Post By Werner Brozek, Edited By Just The Facts, Update/Additional Explanatory Commentary from Nick Stokes
UPDATE: RSS for October has just come out and the value was 0.207. As a result, RSS has now reached the 204-month, or 17-year, mark. The slope over the last 17 years is -0.000122111 °C per year.
—
The graphic above shows five lines. The long horizontal line shows that RSS is flat from November 1996 to September 2013, a period of 16 years and 11 months, or 203 months. All three programs are unanimous on this point. The two sloped lines that are closer together show the error bars based on Nick Stokes’ Temperature Trend Viewer page, and the two sloped lines that are further apart show the error bars based on SkS’s Temperature Trend Calculator. Nick Stokes’ program gives much tighter error bars, and therefore his times for 95% significance are shorter than those of SkS.

In my previous post on August 25, I said: On six different data sets, there has been no statistically significant warming for between 18 and 23 years. That statement was based on the trend from the SkS page. Based on the trend from Nick Stokes’ page, however, there has been no statistically significant warming for between 16 and 20 years on several different data sets. In this post, I have used Nick Stokes’ numbers in Section 2 as well as in row 8 of the table below. Please let us know what you think of this change. I have asked Nick Stokes to join this thread to answer any questions about the different methods of calculating 95% significance and to defend his chosen method. Nick’s trend page offers numbers for Hadsst3, so I have also switched from Hadsst2 to Hadsst3. WFT offers numbers for Hadcrut3, but I can no longer offer error bars for that set since Nick’s program only does Hadcrut4.
In the future, I am not interested in using the trend methodology/page that offers the longest times, nor the one that offers the shortest times, nor the one that offers the highest consensus. What I am interested in is using the trend methodology/page that offers the most accurate representation of Earth’s temperature trend. I thought it was SkS, but I may have been wrong. Please let us know in the comments whether you think the SkS or the Nick Stokes methodology/page is more accurate, and if you can offer a more accurate one, please let us know that too.
According to NOAA’s State of the Climate In 2008 report:
The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.
In this 2011 paper “Separating signal and noise in atmospheric temperature changes: The importance of timescale” Santer et al. found that:
Because of the pronounced effect of interannual noise on decadal trends, a multi-model ensemble of anthropogenically-forced simulations displays many 10-year periods with little warming. A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.
In 2010 Phil Jones was asked by the BBC, “Do you agree that from 1995 to the present there has been no statistically-significant global warming?”, Phil Jones replied:
Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.
I’ll leave it to you to draw your own conclusions based upon the data below.
Note: If you read my recent article RSS Flat For 200 Months (Now Includes July Data) and just wish to know what is new with the August and September data, you will find the most important new information from row 7 to the end of the table. And as mentioned above, all lines for Hadsst3 are new.
In the sections below, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2013 to date compares with 2012 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.
Section 1
This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is +4 × 10^-4 but it is -4 × 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
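For anyone who wants to reproduce this kind of search, here is a minimal sketch (mine, not the WFT code) of the procedure just described. The input file name and column layout are hypothetical.

```python
# Minimal sketch: find the earliest start month from which the least-squares
# slope of a monthly anomaly series is flat or negative.
# "anomalies.csv" (columns: year, month, anomaly; oldest row first) is hypothetical.
import numpy as np

data = np.loadtxt("anomalies.csv", delimiter=",")
anom = data[:, 2]
t = np.arange(len(anom)) / 12.0                  # time in years

for start in range(len(anom) - 24):              # require at least two years of data
    slope = np.polyfit(t[start:], anom[start:], 1)[0]
    if slope <= 0.0:                             # "at least very slightly negative"
        year, month = int(data[start, 0]), int(data[start, 1])
        print("Slope is flat since %d-%02d" % (year, month))
        break
```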
On all data sets below, the times for a slope that is at least very slightly negative range from 8 years and 9 months to 16 years and 11 months.
1. For GISS, the slope is flat since September 1, 2001 or 12 years, 1 month. (goes to September 30, 2013)
2. For Hadcrut3, the slope is flat since May 1997 or 16 years, 5 months. (goes to September)
3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since December 2000 or 12 years, 10 months. (goes to September)
4. For Hadcrut4, the slope is flat since December 2000 or 12 years, 10 months. (goes to September)
5. For Hadsst3, the slope is flat since November 2000 or 12 years, 11 months. (goes to September)
6. For UAH, the slope is flat since January 2005 or 8 years, 9 months. (goes to September using version 5.5)
7. For RSS, the slope is flat since November 1996 or 17 years (goes to October)
Through September, RSS was 203/204 or 99.5% of the way to Ben Santer’s 17 years; with the October update it reaches the full 17 years.
The next graph shows just the trend lines to illustrate the above for the data sets that can be shown on WFT. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the sloped wiggly line shows how CO2 has increased over this period.

When two quantities are plotted as I have done, the left axis shows only the temperature anomaly. The actual numbers are meaningless, since all slopes are essentially zero and the position of each line is merely a reflection of the base period from which anomalies are taken for each set. No numbers are given for CO2. Some have asked that the logarithm of the CO2 concentration be plotted; however, WFT does not give this option. The upward-sloping CO2 line only shows that while CO2 has been going up over the last 17 years, the temperatures have been flat for varying periods on the various data sets.
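For those who would like to see the logarithm, here is a minimal matplotlib sketch of one way to overlay ln(CO2) on an anomaly plot outside of WFT. The file names and column layouts are hypothetical.

```python
# Sketch: overlay ln(CO2) on a temperature anomaly plot (something WFT cannot do).
# "anomalies.csv" and "co2.csv" (columns: year, month, value) are hypothetical
# files covering the same months.
import numpy as np
import matplotlib.pyplot as plt

anom = np.loadtxt("anomalies.csv", delimiter=",")[:, 2]
co2 = np.loadtxt("co2.csv", delimiter=",")[:, 2]
years = np.arange(len(anom)) / 12.0

fig, ax1 = plt.subplots()
ax1.plot(years, anom)
ax1.set_ylabel("Temperature anomaly (C)")
ax1.set_xlabel("Years since start of series")

# Forcing is roughly proportional to ln(CO2 / reference), so plot the log of the
# concentration ratio rather than the raw ppm values on a second axis.
ax2 = ax1.twinx()
ax2.plot(years, np.log(co2 / co2[0]), color="gray")
ax2.set_ylabel("ln(CO2 / starting CO2)")
plt.show()
```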
The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted:

Section 2
For this analysis, data was retrieved from Nick Stokes’ Temperature Trend Viewer at moyhu.blogspot.com. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to the latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated. (A rough sketch of this kind of calculation follows the list below.)
On several different data sets, there has been no statistically significant warming for between 16 and 20 years.
The details for several sets are below.
For UAH: Since November 1995: CI from -0.001 to 2.501
For RSS: Since December 1992: CI from -0.005 to 1.968
For Hadcrut4: Since August 1996: CI from -0.006 to 1.358
For Hadsst3: Since May 1993: CI from -0.002 to 1.768
For GISS: Since August 1997: CI from -0.030 to 1.326
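As a rough illustration of what these numbers mean, the sketch below walks through a monthly series and reports the earliest start month whose 95% confidence interval on the trend still includes zero. It uses an ordinary least-squares trend widened by the AR(1)/Quenouille adjustment that Nick describes in his commentary further down; it only approximates his page, and the input file is hypothetical.

```python
# Rough sketch: earliest start month for which warming is not statistically
# significant, i.e. the lower bound of the ~95% CI on the trend is negative.
# Plain OLS with an AR(1)/Quenouille correction; Nick Stokes' page and the SkS
# calculator do this more carefully, so treat the output as approximate.
import numpy as np

data = np.loadtxt("anomalies.csv", delimiter=",")    # year, month, anomaly (hypothetical)
anom = data[:, 2]
t = np.arange(len(anom)) / 12.0                      # time in years

def trend_ci(tt, yy):
    """Return (slope, half-width of ~95% CI) in units per year, AR(1)-adjusted."""
    slope, intercept = np.polyfit(tt, yy, 1)
    resid = yy - (slope * tt + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]    # lag-1 autocorrelation
    neff = len(yy) * (1 - r1) / (1 + r1)             # Quenouille effective sample size
    se = np.sqrt(np.sum(resid**2) / (neff - 2) / np.sum((tt - tt.mean())**2))
    return slope, 1.96 * se

for start in range(len(anom) - 120):                 # keep at least ten years in the window
    slope, half = trend_ci(t[start:], anom[start:])
    if slope - half <= 0.0:                          # lower error bar is negative
        year, month = int(data[start, 0]), int(data[start, 1])
        print("No statistically significant warming since %d-%02d" % (year, month))
        break
```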
Section 3
This section shows data about 2013 and other information in the form of a table. The six data sources are listed along the top of the table and repeated partway down so they remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS. Down the first column are the following:
1. 12ra: This is the final ranking for 2012 on each data set.
2. 12a: Here I give the average anomaly for 2012.
3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two digits of the year.
6. ano: This is the anomaly of the month just above.
7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.
8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two digits of the year. (No value is given for Hadcrut3, since Nick’s program does not cover it.)
9. Jan: This is the January, 2013, anomaly for that particular data set.
10. Feb: This is the February, 2013, anomaly for that particular data set, etc.
21. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs by one, presumably due to all months not having the same number of days.
22. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 45 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.
| Source | UAH | RSS | Had4 | Had3 | Sst3 | GISS |
|---|---|---|---|---|---|---|
| 1. 12ra | 9th | 11th | 9th | 10th | 9th | 9th |
| 2. 12a | 0.161 | 0.192 | 0.448 | 0.406 | 0.346 | 0.58 |
| 3. year | 1998 | 1998 | 2010 | 1998 | 1998 | 2010 |
| 4. ano | 0.419 | 0.55 | 0.547 | 0.548 | 0.416 | 0.67 |
| 5. mon | Apr98 | Apr98 | Jan07 | Feb98 | Jul98 | Jan07 |
| 6. ano | 0.66 | 0.857 | 0.829 | 0.756 | 0.526 | 0.94 |
| 7. y/m | 8/9 | 16/11 | 12/10 | 16/5 | 12/11 | 12/1 |
| 8. sig | Nov95 | Dec92 | Aug96 | | May93 | Aug97 |
| Source | UAH | RSS | Had4 | Had3 | Sst3 | GISS |
| 9. Jan | 0.504 | 0.440 | 0.450 | 0.390 | 0.292 | 0.63 |
| 10.Feb | 0.175 | 0.194 | 0.479 | 0.424 | 0.309 | 0.51 |
| 11.Mar | 0.183 | 0.204 | 0.405 | 0.384 | 0.287 | 0.60 |
| 12.Apr | 0.103 | 0.218 | 0.427 | 0.400 | 0.364 | 0.48 |
| 13.May | 0.077 | 0.139 | 0.498 | 0.472 | 0.382 | 0.57 |
| 14.Jun | 0.269 | 0.291 | 0.457 | 0.426 | 0.314 | 0.61 |
| 15.Jul | 0.118 | 0.222 | 0.514 | 0.488 | 0.479 | 0.54 |
| 16.Aug | 0.122 | 0.167 | 0.527 | 0.491 | 0.483 | 0.61 |
| 17.Sep | 0.297 | 0.257 | 0.534 | 0.516 | 0.455 | 0.74 |
| Source | UAH | RSS | Had4 | Had3 | Sst3 | GISS |
| 21.ave | 0.205 | 0.237 | 0.474 | 0.444 | 0.374 | 0.588 |
| 22.rnk | 6th | 8th | 9th | 7th | 6th | 9th |
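As a quick check of row 21, the year-to-date average is simply the mean of the monthly anomalies listed above; for example, for UAH:

```python
# Row 21 check for UAH: mean of the Jan-Sep 2013 anomalies in rows 9-17.
uah_2013 = [0.504, 0.175, 0.183, 0.103, 0.077, 0.269, 0.118, 0.122, 0.297]
print(round(sum(uah_2013) / len(uah_2013), 3))  # prints 0.205, matching "21.ave"
```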
If you wish to verify all of the latest anomalies, go to the following links: UAH (version 5.5 was used since that is what WFT used), RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS.
To see all points since January 2013 in the form of a graph, see the WFT graph below:

Appendix
In this section, we summarize data for each set separately.
RSS
The slope is flat since November 1996 or 16 years and 11 months. (goes to September) RSS is 203/204 or 99.5% of the way to Ben Santer’s 17 years.
For RSS: There is no statistically significant warming since December 1992: CI from -0.005 to 1.968
The RSS average anomaly so far for 2013 is 0.237. This would rank 8th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2012 was 0.192 and it came in 11th.
UAH
The slope is flat since January 2005 or 8 years, 9 months. (goes to September using version 5.5)
For UAH: There is no statistically significant warming since November 1995: CI from -0.001 to 2.501
The UAH average anomaly so far for 2013 is 0.205. This would rank 6th if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2012 was 0.161 and it came in 9th.
Hadcrut4
The slope is flat since December 2000 or 12 years, 10 months. (goes to September)
For HadCRUT4: There is no statistically significant warming since August 1996: CI from -0.006 to 1.358
The Hadcrut4 average anomaly so far for 2013 is 0.474. This would rank 9th if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2012 was 0.448 and it came in 9th.
Hadcrut3
The slope is flat since May 1997 or 16 years, 5 months (goes to September, 2013)
The Hadcrut3 average anomaly so far for 2013 is 0.444. This would rank 7th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the last time a Hadcrut3 record stood unbeaten for more than 10 years. The anomaly in 2012 was 0.406 and it came in 10th.
Hadsst3
For Hadsst3, the slope is flat since November 2000 or 12 years, 11 months. (goes to September, 2013).
For Hadsst3: There is no statistically significant warming since May 1993: CI from -0.002 to 1.768
The Hadsst3 average anomaly so far for 2013 is 0.374. This would rank 6th if it stayed this way. 1998 was the warmest at 0.416. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. The anomaly in 2012 was 0.346 and it came in 9th.
GISS
The slope is flat since September 1, 2001 or 12 years, 1 month. (goes to September 30, 2013)
For GISS: There is no statistically significant warming since August 1997: CI from -0.030 to 1.326
The GISS average anomaly so far for 2013 is 0.588. This would rank 9th if it stayed this way. 2010 was the warmest at 0.67. The highest ever monthly anomaly was in January of 2007 when it reached 0.94. The anomaly in 2012 was 0.58 and it came in 9th.
Conclusion
It appears as if we can accurately say from what point in time the slope is zero, or any other value. However, the period over which warming is statistically significant seems to be more of a challenge: different programs give different results. What I found really surprising was that, according to Nick’s program, GISS shows warming significant at over 95% for start months from November 1996 to July 1997 inclusive, yet for those same nine start months the RSS slope is not even positive! Can we trust both data sets?
———-
Update: Additional Explanatory Commentary from Nick Stokes
Trends and errors:
A trend coefficient is just a weighted average of a time series, which describes the rate of increase. You can calculate it without any particular statistical model in mind.
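In symbols (a standard least-squares identity, not anything specific to either trend page), the trend is a fixed linear combination of the data:

```latex
\hat\beta = \sum_{i=1}^{n} w_i\, y_i ,
\qquad
w_i = \frac{t_i - \bar t}{\sum_{j=1}^{n} (t_j - \bar t)^2} ,
```

so the weights depend only on the time coordinates, not on the temperatures themselves.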
If you want to quantify the uncertainty you have about it, you need to be clear what kind of variations you have in mind. You might want to describe the uncertainty of actual measurement. You might want to quantify the spatial variability. Or you might want to say how typical that trend is given time variability. In other words, what if the weather had been different?
It’s that last variability that we’re talking about here, and we need a model for the variation. In all kinds of time series analysis, ARIMA models are a staple. No-one seriously believes that their data really is a linear trend with AR(1) fluctuations, or whatever, but you try to get the nearest fitting model to estimate the trend uncertainty.
In my trend viewer, I used AR(1). It’s conventional, because it allows for autocorrelation based on a single delay coefficient, and there is a widely used approximation (Quenouille). I’ve described here how you can plot the autocorrelation function to show what is being fitted. The uncertainty of the trend is proportional to the area under the fitted ACF. Foster and Rahmstorf argued, reasonably, that AR(1) underfits, and that an ARMA(1,1) approximation does better. Here is an example from my post. SkS uses that approach, following F&R.
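For readers unfamiliar with the Quenouille approximation mentioned here, its usual textbook form (a general statement, not a claim about the exact code behind either page) replaces the sample size n by an effective size based on the lag-1 autocorrelation r_1 of the residuals:

```latex
n_{\mathrm{eff}} \approx n\,\frac{1 - r_1}{1 + r_1},
\qquad
\sigma_{\mathrm{adj}}(\hat\beta) \approx \sigma_{\mathrm{OLS}}(\hat\beta)\,\sqrt{\frac{1 + r_1}{1 - r_1}},
```

so positively correlated residuals mean fewer effectively independent points and wider error bars on the trend.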
You can see from the ACF that it’s really more complicated. The real ACF does not taper exponentially; it oscillates, with a period of about 4 years, likely ENSO related. Some of that effect reaches back near zero, where the ARIMA fitting is done. If it were taken out, the peak would be more slender than AR(1). But there is uncertainty with ENSO too.
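A minimal sketch of how such an ACF plot can be produced: remove a linear trend from the monthly series and correlate the residuals with lagged copies of themselves (the input file is hypothetical; Nick’s posts overlay the fitted AR(1) and ARMA(1,1) curves on a plot like this).

```python
# Sketch: autocorrelation function of the residuals about a linear trend.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("anomalies.csv", delimiter=",")   # year, month, anomaly (hypothetical)
y = data[:, 2]
t = np.arange(len(y)) / 12.0

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
resid -= resid.mean()

max_lag = 96                                        # eight years of monthly lags
acf = [np.sum(resid[:len(resid) - k] * resid[k:]) / np.sum(resid**2)
       for k in range(max_lag)]

plt.plot(np.arange(max_lag) / 12.0, acf)
plt.axhline(0.0, color="gray")
plt.xlabel("Lag (years)")
plt.ylabel("ACF of detrended residuals")
plt.show()
```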
So the message is, trend uncertainty is complicated.
Nick Stokes says:
November 4, 2013 at 7:41 am
ferd berple says: November 4, 2013 at 5:39 am
“that pretty much sums up climate science. use a model that everyone agrees doesn’t match the characteristics of the underlying data, and then place your faith in the results of the model.”
====================================
It isn’t climate science – it is standard statistics, used by all sorts of people. In fact, these are Box-Jenkins models. You know, the Box who famously said – “all models are wrong, but some are useful”. These are the models he was talking about.
————————————————————————————————————
Nick, it may be “standard statistics” but standard statistics are useless when misapplied to situations that don’t comply with the fundamental assumptions inherent in them.
The fundamental assumption when plotting linear trends is that the data is inherently linear (even if noisy), or at least has a linear component that we can detect. Fitting a linear trend to primarily cyclic data is absolutely meaningless unless you know the nature of the cyclic behaviour. It’s the sort of basic mistake we were warned against as 14-year-olds in O-level mathematics!
To suggest that linear trends are suitable, you’re effectively saying that we know all the cycles in the climate system over all timescales. Because, if we don’t, then the apparent trend we measure may be part of an “up” or “down” slope of a cycle that we haven’t considered.
Do you really have enough hubris to claim that we know everything there is to know about every climate cycle?
Werner Brozek:
Thank you for your post at November 4, 2013 at 10:15 am.
Firstly, please be assured that my questions of Nick Stokes were a sincere attempt to understand what the graph is showing and were not an attempt to denigrate your and/or his work. Indeed, you may have noted that I said his method could be disputed but I was choosing not to distract from the subject with that.
It is many decades since I was doing this stuff ‘by hand’. Forty years ago I could have reeled off a method to determine the confidence limits of the regression, but not now. Given time I could probably find it among all the stuff in the garage, but I am to leave for another of my times away in the morning so I don’t have the time to do that search at the moment: sorry. Anyway, there are probably software packages available because we are only talking about confidence limits of linear regression.
You say
You did the ‘right thing’ but the comment by Ross McKitrick is very relevant and it addresses an issue that ferd berple and joe have independently raised in the thread.
So-called ‘climate science’ uses linear trends, so the use of linear trends is right for consistency with them. The curvature of a line is a model which one fits to the data. The analyst chooses his/her model of the data by fitting the data to a curve of his/her choice: a straight line is the simplest model but it may not be the correct model. However, one can determine if the data is deviating from any model.
2-sigma can be taken to be about 95.4% or, for convenience, 95%.
In my opinion, there are flaws with both the methods you mention (i.e. Stokes and SkS); for example, neither adequately compensates for autocorrelation. However, in my opinion both are useful for the purposes to which you are applying them, so choose whichever you find easiest to use. If you have doubts, ask Steve McIntyre, who has all the detail you need at his fingertips.
I hope this rushed response contains something of use.
Richard
My 56-year-old textbook says the confidence limits on a linear regression equal plus or minus the t statistic times an estimate of the standard error of the dependent variable. The confidence limits are a function of x. When plotted with the regression line they form a hyperbolic envelope centered on the overall mean of x.
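For reference, the textbook result being described (standard least-squares theory, not tied to any particular temperature page) is:

```latex
\hat y(x) \;\pm\; t_{n-2,\,0.975}\; s\,\sqrt{\frac{1}{n} + \frac{(x - \bar x)^2}{\sum_i (x_i - \bar x)^2}},
\qquad
s^2 = \frac{1}{n-2}\sum_i \bigl(y_i - \hat y(x_i)\bigr)^2 ,
```

which widens as x moves away from \bar{x} and so traces the hyperbolic envelope just described.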
richardscourtney says:
November 4, 2013 at 11:03 am
Indeed, you may have noted that I said his method could be disputed but I was choosing not to distract from the subject with that.
Thank you for your comments! But as for distracting from the subject, that is one of the main purposes of this particular thread. Ease of use is not an issue for me. I have seen criticisms of SkS; however, if I had always used Nick’s method, I may have seen criticisms of that instead.
P.S. Nick or Richard, could either of you please elaborate on who does a better job on autocorrelation with respect to the following statement, since that is one of the criticisms that I saw with respect to SkS? Thank you.
In my opinion, there are flaws with both the methods you mention (i.e. Stokes and SkS); for example, neither adequately compensates for autocorrelation.
So we have a global warming ‘pause’ caused by natural climate variations nullifying the otherwise overwhelming cause of global warming / climate change – the monotonically rising level of Carbon Dioxide in the atmosphere.
An amazing coincidence that these natural variations (stadium waves if you will) have matched almost perfectly the rapid warming from the rise in Carbon Dioxide in the atmosphere. It really is amazing when you think about it: the forcing from increasing Carbon Dioxide is logarithmic, yet the sum of the long-term natural oscillations, all of them, of all lengths, from the Sun to the oceans, has matched the warming from Carbon Dioxide almost perfectly for 17 years!!
What a stunning coincidence!!
For those who believe in such coincidences, I have ocean beachfront property in Kansas you will be interested in. You will have to hurry; there is already a queue of politicians and climate ‘scientists’ all eager to buy ….
Joe says: November 4, 2013 at 10:28 am
“The fundamental assumption when plotting linear trends is that the data is inherently linear (even if noisy), or at least has a linear component that we can detect. Fitting a linear trend to primarily cyclic data is absolutely meaningless unless you know the nature of the cyclic behaviour.”
That’s actually not true. A trend coefficient is just a weighted sum that describes the average rate of change over the period. You can apply it to cyclic data; it is the trend for the period stated, even though you know that it will change as time goes on. A regression will tell you that it warmed during spring; we know that isn’t climate change and won’t last, but it did indeed warm during spring.
But I’ll use that as a lead-in to Werner’s question, which I did try to address in my initial note that he put into the post. There are deviations from linearity which we know don’t really behave as any kind of ARIMA stochastic. I linked to this plot, which shows the ACF for HAD 4 for 1980-2013, and the two fitted ARIMA functions. The quoted uncertainty is a scaled area underneath. You can see that AR(1) (my version) is required to fit the first lag, and then seems to undershoot; ARMA(1,1) (SkS) has an extra parameter, and so tracks better for a year or so.
But all Box-Jenkins fitted models do taper away exponentially and are generally positive; the real ACF is different. You can see plots for other indices here; the pattern is the same. And this plot shows how the cyclic behaviour continues for years (in ACF lags).
That’s what Box means by “all models are wrong”. The cyclic component (ENSO, probably) is padding out the variation attributed to ARIMA stochastics. If you take it away (here), then there is less stochastic variation than either SkS or I would allow. But ENSO has its own variability, which would have to be added in.
As I said, it’s complicated. That’s why it is treated with diffidence by scientists and organisations like the Met Office. But the Daily Mail is more fearless.
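A toy illustration of the two points above, using purely synthetic numbers rather than any climate series: the least-squares slope over a window of a pure sine wave is a perfectly well-defined description of that window, yet it changes sign as the window moves and says nothing about what comes next.

```python
# Toy example: least-squares slopes over different windows of a pure sine wave.
import numpy as np

t = np.linspace(0.0, 20.0, 2001)            # 20 synthetic "years"
y = np.sin(2 * np.pi * t / 10.0)            # a 10-year cycle, no underlying trend

for start, stop in [(0.0, 2.5), (2.5, 7.5), (0.0, 10.0)]:
    mask = (t >= start) & (t < stop)
    slope = np.polyfit(t[mask], y[mask], 1)[0]
    print("window %4.1f-%4.1f: slope = %+.3f per year" % (start, stop, slope))
```

The rising quarter-cycle gives a positive slope, the peak-to-trough half-cycle a negative one, and the full cycle a slope near zero; each is a true statement about its own window and a useless guide to the next one.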
Nick Stokes says:
November 3, 2013 at 4:30 pm
Would this model calculate time spent scraping ice/frost, clearing snow, or condensation both inside and outside my car, winter road conditions and such?
I always get up for work earlier than I need to according to my personal models precisely because my models would make me consistently late.
Nick Stokes says:
November 4, 2013 at 2:06 pm
Joe says: November 4, 2013 at 10:28 am
“The fundamental assumption when plotting linear trends is that the data is inherently linear (even if noisy), or at least has a linear component that we can detect. Fitting a linear trend to primarily cyclic data is absolutely meaningless unless you know the nature of the cyclic behaviour.”
======================================================
That’s actually not true. A trend coefficient is just a weighted sum that describes the average rate of change over the period. You can apply it to cyclic data; it is the trend for the period stated, even though you know that it will change as time goes on. A regression will tell you that it warmed during spring; we know that isn’t climate change and won’t last, but it did indeed warm during spring.
—————————————————————————————————————
My apologies, Nick, I was a little imprecise. But I assumed that your statistical knowledge would allow you to understand the very basic principle that I was referring to.
Yes, a linear trend calculated from cyclic data (even if the cycles are unknown) can indeed have useful meaning within the data interval measured. But it has no meaning whatsoever as soon as you move outside that data interval.
So modelling data that you know has large (and unknown) cyclic components as a linear trend, then expecting that trend to give any meaningful information about future data values is a high school error.
Now, either I was mistaken in assuming your statistical understanding reaches that grade school level, or else you were aware and decided to intentionally misunderstand / misrepresent my point.
I won’t waste my breath asking which it was!
Joe says: November 4, 2013 at 3:14 pm
“expecting that trend to give any meaningful information about future data values is a high school error”
Who did that?
It’s a two stage process. First let’s get the current trend right, including any estimate we can make of uncertainty. Then maybe think about the future. I’m at the first stage. Cycles, GHG’s etc are relevant to the second. But the IPCC, Met Office etc are not big on extrapolating trends.
Nick Stokes says:
November 4, 2013 at 3:46 pm
Joe says: November 4, 2013 at 3:14 pm
“expecting that trend to give any meaningful information about future data values is a high school error”
=========================================
Who did that?
————————————————————————————————————————
Only the entire multi-billion £ alarmist industry. But that’s kind of beside the point.
A linear trend fitted to non-linear data can’t give any clues about the future (outside the measured data). So the only conceivable purpose of taking your “first stage” using linear trends (which we know are a false model of the system) is to give a misleading, even dishonest, impression of our understanding of the system.
I say dishonest – reluctantly – because we know the linear model has no future utility, yet we allow it to be presented to, and interpreted by, others as if it does. Honesty would require us all to be saying “actually, it doesn’t mean that at all” every single time a policy maker or activist tries to use that trend to suggest what the future holds.
Unfortunately, statistics like those in the current post above are not being told to Europeans. In the Key Messages section of the EEA Global and European Temperature Assessment published in August 2013, the EEA is telling Europe a different story, namely:
• Between 1990 and 2010, the rate of change in global average temperature has been close to 0.2°C per decade.
• Global mean surface temperature rose rapidly from the 1970s, but has been relatively flat in the last decade mostly due to heat transfer between upper and deep ocean waters.
It probably doesn’t matter that the 17-year tipping point has been reached – the alarmists are very creative at coming up with rationalizations and excuses for saying it doesn’t matter. I think it will take another long series of hard winters and poor summers – another 17 years maybe – before people will entirely stop listening to the alarmists, and even then the alarmists will still claim that it all has been caused by global warming.