Statistical Significances – How Long Is "The Pause"? (Now Includes September Data)

WoodForTrees.org – Paul Clark – Click the pic to view at source

Image Credit: WoodForTrees.org

Guest Post By Werner Brozek, Edited By Just The Facts, Update/Additional Explanatory Commentary from Nick Stokes

UPDATE: RSS for October has just come out and the value was 0.207. As a result, RSS has now reached the 204 month or 17 year mark. The slope over the last 17 years is -0.000122111 per year.

The graphic above shows five lines. The long horizontal line shows that RSS is flat from November 1996 to September 2013, which is a period of 16 years and 11 months or 203 months. All three programs are unanimous on this point. The two sloped lines that are closer together show the error bars based on Nick Stokes’ Temperature Trend Viewer page. The two sloped lines that are further apart show the error bars based on SkS’s Temperature Trend Calculator. Nick Stokes’ program provides much tighter error bars, and therefore his times to reach 95% significance are shorter than those from SkS. In my previous post on August 25, I said: On six different data sets, there has been no statistically significant warming for between 18 and 23 years. That statement was based on the trend from the SkS page. However, based on the trend from Nick Stokes’ page, there has been no statistically significant warming for between 16 and 20 years on several different data sets.

In this post, I have used Nick Stokes’ numbers in section 2 as well as row 8 of the table below. Please let us know what you think of this change. I have asked that Nick Stokes join this thread to answer any questions pertaining to the different methods of calculating 95% significance and to defend his chosen method. Nick’s trend methodology/page offers the numbers for Hadsst3, so I have also switched from Hadsst2 to Hadsst3. WFT offers numbers for Hadcrut3, but I can no longer offer error bars for that set since Nick’s program only does Hadcrut4.

In the future, I am not interested in using the trend methodology/page that offers the longest times. I am not interested in using the trend methodology/page that offers the shortest times. And I am not interested in using the trend methodology/page that offers the highest consensus. What I am interested in is using the trend methodology/page that offers the most accurate representation of Earth’s temperature trend. I thought it was SkS, but I may have been wrong. Please let us know in comments if you think that the SkS or the Nick Stokes methodology/page is more accurate, and if you can offer a more accurate one, please let us know that too.

According to NOAA’s State of the Climate In 2008 report:

The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

In this 2011 paper “Separating signal and noise in atmospheric temperature changes: The importance of timescale” Santer et al. found that:

Because of the pronounced effect of interannual noise on decadal trends, a multi-model ensemble of anthropogenically-forced simulations displays many 10-year periods with little warming. A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.

In 2010, Phil Jones was asked by the BBC, “Do you agree that from 1995 to the present there has been no statistically-significant global warming?” He replied:

Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.

I’ll leave it to you to draw your own conclusions based upon the data below.

Note: If you read my recent article RSS Flat For 200 Months (Now Includes July Data) and just wish to know what is new with the August and September data, you will find the most important new information from row 7 to the end of the table. And as mentioned above, all of the Hadsst3 entries are new.

In the sections below, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2013 to date compares with 2012 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.org (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest when we say the slope is flat from a certain month.
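As a rough illustration of that search, here is a minimal sketch (not the actual WFT procedure; the anomaly list passed in is a hypothetical monthly series, oldest value first):

```python
import numpy as np

def earliest_flat_start(anomalies):
    """Index of the furthest-back start month whose least-squares
    trend to the present is zero or negative."""
    y = np.asarray(anomalies, dtype=float)
    n = len(y)
    for start in range(n - 2):                  # scan oldest start months first
        x = np.arange(n - start)
        slope = np.polyfit(x, y[start:], 1)[0]  # OLS slope per month
        if slope <= 0:
            return start                        # earliest non-positive trend
    return None                                 # every start month trends upward

# Hypothetical usage: earliest_flat_start(rss_monthly_anomalies)
```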

On all data sets below, the different times for a slope that is at least very slightly negative range from 8 years and 9 months to 16 years and 11 months.

1. For GISS, the slope is flat since September 1, 2001 or 12 years, 1 month. (goes to September 30, 2013)

2. For Hadcrut3, the slope is flat since May 1997 or 16 years, 5 months. (goes to September)

3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since December 2000 or 12 years, 10 months. (goes to September)

4. For Hadcrut4, the slope is flat since December 2000 or 12 years, 10 months. (goes to September)

5. For Hadsst3, the slope is flat since November 2000 or 12 years, 11 months. (goes to September)

6. For UAH, the slope is flat since January 2005 or 8 years, 9 months. (goes to September using version 5.5)

7. For RSS, the slope is flat since November 1996 or 16 years, 11 months. (goes to September; with the October update noted above, this reaches 17 years)

On the September data, RSS is 203/204 or 99.5% of the way to Ben Santer’s 17 years.

The next link shows just the lines to illustrate the above, for the data sets that WFT can show. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the sloped wiggly line shows how CO2 has increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the left scale applies only to the temperature anomalies.

The actual numbers are meaningless since all slopes are essentially zero and the position of each line is merely a reflection of the base period from which anomalies are taken for each set. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 17 years, the temperatures have been flat for varying periods on various data sets.
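For readers who want the logarithmic view anyway, it is easy to compute outside WFT. Here is a minimal sketch using the standard Myhre et al. (1998) expression for CO2 forcing (the input series name is hypothetical):

```python
import numpy as np

def co2_log_forcing(co2_ppm, baseline_ppm=None):
    """Convert CO2 concentrations (ppm) to the logarithmic forcing
    dF = 5.35 * ln(C / C0) in W/m^2 (Myhre et al. 1998)."""
    c = np.asarray(co2_ppm, dtype=float)
    c0 = baseline_ppm if baseline_ppm is not None else c[0]
    return 5.35 * np.log(c / c0)

# Hypothetical usage: overlay co2_log_forcing(esrl_monthly_co2) on the
# anomaly plot with your own scaling, since WFT cannot do this itself.
```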

The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Section 2

For this analysis, data were retrieved from Nick Stokes’ moyhu.blogspot.com. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated. (The confidence intervals below are in °C per century.)

On several different data sets, there has been no statistically significant warming for between 16 and 20 years.

The details for several sets are below; a sketch of how such a start month can be found follows the list.

For UAH: Since November 1995: CI from -0.001 to 2.501

For RSS: Since December 1992: CI from -0.005 to 1.968

For Hadcrut4: Since August 1996: CI from -0.006 to 1.358

For Hadsst3: Since May 1993: CI from -0.002 to 1.768

For GISS: Since August 1997: CI from -0.030 to 1.326
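The search behind each of those dates can be sketched as follows. This is a simplified illustration only: it uses a plain OLS confidence interval, whereas Nick’s viewer adjusts for autocorrelation, so the dates it returns will differ; the anomaly series is hypothetical.

```python
import numpy as np

def ols_slope_ci(y, conf_mult=1.96):
    """Ordinary least-squares slope interval: slope +/- conf_mult * std error."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.sum((x - x.mean())**2))
    return slope - conf_mult * se, slope + conf_mult * se

def earliest_not_significant(anomalies):
    """Earliest start month from which the lower bound of the trend CI is
    below zero, i.e. a zero trend cannot be ruled out at that level."""
    y = np.asarray(anomalies, dtype=float)
    for start in range(len(y) - 3):             # scan oldest start months first
        lower, upper = ols_slope_ci(y[start:])
        if lower < 0:
            return start
    return None
```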

Section 3

This section shows data about 2013 and other information in the form of a table. The six data sources are listed along the top, and that header row is repeated partway down the table so it stays visible. The sources are UAH, RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS. Down the columns are the following rows:

1. 12ra: This is the final ranking for 2012 on each data set.

2. 12a: Here I give the average anomaly for 2012.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. Jan: This is the January, 2013, anomaly for that particular data set.

10. Feb: This is the February, 2013, anomaly for that particular data set, etc.

21. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs by one, presumably due to all months not having the same number of days.

22. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 45 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.

Source UAH RSS Had4 Had3 Sst3 GISS
1. 12ra 9th 11th 9th 10th 9th 9th
2. 12a 0.161 0.192 0.448 0.406 0.346 0.58
3. year 1998 1998 2010 1998 1998 2010
4. ano 0.419 0.55 0.547 0.548 0.416 0.67
5. mon Apr98 Apr98 Jan07 Feb98 Jul98 Jan07
6. ano 0.66 0.857 0.829 0.756 0.526 0.94
7. y/m 8/9 16/11 12/10 16/5 12/11 12/1
8. sig Nov95 Dec92 Aug96 n/a May93 Aug97
Source UAH RSS Had4 Had3 Sst3 GISS
9. Jan 0.504 0.440 0.450 0.390 0.292 0.63
10.Feb 0.175 0.194 0.479 0.424 0.309 0.51
11.Mar 0.183 0.204 0.405 0.384 0.287 0.60
12.Apr 0.103 0.218 0.427 0.400 0.364 0.48
13.May 0.077 0.139 0.498 0.472 0.382 0.57
14.Jun 0.269 0.291 0.457 0.426 0.314 0.61
15.Jul 0.118 0.222 0.514 0.488 0.479 0.54
16.Aug 0.122 0.167 0.527 0.491 0.483 0.61
17.Sep 0.297 0.257 0.534 0.516 0.455 0.74
Source UAH RSS Had4 Had3 Sst3 GISS
21.ave 0.205 0.237 0.474 0.444 0.374 0.588
22.rnk 6th 8th 9th 7th 6th 9th

If you wish to verify all of the latest anomalies, go to the following links: UAH (version 5.5 was used since that is what WFT used), RSS, Hadcrut4, Hadcrut3, Hadsst3 and GISS.

To see all points since January 2013 in the form of a graph, see the WFT graph below:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Appendix

In this section, we summarize data for each set separately.

RSS

The slope is flat since November 1996 or 16 years and 11 months. (goes to September) RSS is 203/204 or 99.5% of the way to Ben Santer’s 17 years.

For RSS: There is no statistically significant warming since December 1992: CI from -0.005 to 1.968

The RSS average anomaly so far for 2013 is 0.237. This would rank 8th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2012 was 0.192 and it came in 11th.

UAH

The slope is flat since January 2005 or 8 years, 9 months. (goes to September using version 5.5)

For UAH: There is no statistically significant warming since November 1995: CI from -0.001 to 2.501

The UAH average anomaly so far for 2013 is 0.205. This would rank 6th if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2012 was 0.161 and it came in 9th.

Hadcrut4

The slope is flat since December 2000 or 12 years, 10 months. (goes to September)

For HadCRUT4: There is no statistically significant warming since August 1996: CI from -0.006 to 1.358

The Hadcrut4 average anomaly so far for 2013 is 0.474. This would rank 9th if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2012 was 0.448 and it came in 9th.

Hadcrut3

The slope is flat since May 1997 or 16 years, 5 months (goes to September, 2013)

The Hadcrut3 average anomaly so far for 2013 is 0.444. This would rank 7th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2012 was 0.406 and it came in 10th.

Hadsst3

For Hadsst3, the slope is flat since November 2000 or 12 years, 11 months. (goes to September, 2013).

For Hadsst3: There is no statistically significant warming since May 1993: CI from -0.002 to 1.768

The Hadsst3 average anomaly so far for 2013 is 0.374. This would rank 6th if it stayed this way. 1998 was the warmest at 0.416. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. The anomaly in 2012 was 0.346 and it came in 9th.

GISS

The slope is flat since September 1, 2001 or 12 years, 1 month. (goes to September 30, 2013)

For GISS: There is no statistically significant warming since August 1997: CI from -0.030 to 1.326

The GISS average anomaly so far for 2013 is 0.588. This would rank 9th if it stayed this way. 2010 was the warmest at 0.67. The highest ever monthly anomaly was in January of 2007 when it reached 0.94. The anomaly in 2012 was 0.58 and it came in 9th.

Conclusion

It appears we can say accurately from what point in time the slope is zero, or any other value. However, determining the period over which warming is statistically significant seems to be more of a challenge: different programs give different results. What I found really surprising was that, according to Nick’s program, GISS shows warming significant at over the 95% level for start months from November 1996 to July 1997 inclusive. Yet during those nine months, the slope for RSS is not even positive! Can we trust both data sets?

———-

Update: Additional Explanatory Commentary from Nick Stokes

Trends and errors:

A trend coefficient is just a weighted average of a time series, which describes the rate of increase. You can calculate it without any particular statistical model in mind.
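To make “weighted average” concrete, the ordinary least-squares trend of anomalies y_i observed at times t_i can be written as a linear combination of the data:

```latex
\hat{\beta} = \sum_i w_i \, y_i ,
\qquad
w_i = \frac{t_i - \bar{t}}{\sum_j (t_j - \bar{t})^2}
```

The weights depend only on the observation times, not on the anomalies or on any noise model, which is why the trend itself can be computed without statistical assumptions.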

If you want to quantify the uncertainty you have about it, you need to be clear what kind of variations you have in mind. You might want to describe the uncertainty of actual measurement. You might want to quantify the spatial variability. Or you might want to say how typical that trend is given time variability. In other words, what if the weather had been different?

It’s that last variability that we’re talking about here, and we need a model for the variation. In all kinds of time series analysis, ARIMA models are a staple. No-one seriously believes that their data really is a linear trend with AR(1) fluctuations, or whatever, but you try to get the nearest fitting model to estimate the trend uncertainty.

In my trend viewer, I used AR(1). It’s conventional, because it allows for autocorrelation based on a single lag coefficient, and there is a widely used approximation (Quenouille). I’ve described here how you can plot the autocorrelation function to show what is being fitted. The uncertainty of the trend is proportional to the area under the fitted ACF. Foster and Rahmstorf argued, reasonably, that AR(1) underfits, and an ARMA(1,1) approximation does better. Here is an example from my post. SkS uses that approach, following F&R.
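As a rough sketch of that kind of correction (an illustration of the common Quenouille-style adjustment, not necessarily the exact code behind the trend viewer; the monthly series passed in is hypothetical):

```python
import numpy as np

def ar1_adjusted_trend_ci(y, conf_mult=1.96):
    """OLS trend with its confidence interval widened for AR(1) autocorrelation
    via the Quenouille effective-sample-size approximation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))
    se_adj = se * np.sqrt((1 + r1) / (1 - r1))      # inflate for autocorrelation
    return slope, slope - conf_mult * se_adj, slope + conf_mult * se_adj
```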

You can see from the ACF that it’s really more complicated. The real ACF does not taper exponentially – it oscillates, with a period of about 4 years – likely ENSO related. Some of that effect reaches back near zero lag, where the ARIMA fitting is done. If it were taken out, the peak would be more slender than AR(1). But there is uncertainty with ENSO too.

So the message is, trend uncertainty is complicated.

Comments

November 3, 2013 2:23 pm

“Well she was just seventeen,
Now we know flat’s the mean…”
Apologies to Paul McCartney…

November 3, 2013 2:27 pm

Reblogged this on biting tea and commented:
“The long horizontal line shows that RSS is flat from November 1996 to September 2013, which is a period of 16 years and 11 months or 203 months. All three programs are unanimous on this point.”

Leonard Weinstein
November 3, 2013 2:29 pm

Due to the chaotic underlying nature of climate, and due to the fact that we are likely nearing the end of the Holocene, any variation up, down, or flat for even several decades demonstrates nothing useful. The flat or even downward trend could be a large natural downward trend that has been significantly temporarily overcome by the large human warming effect, or the variation could be totally natural dominated variation. Playing statistics games on such processes is truly just a game, with no meaning. At this point we do not know what is going on or which way the trend will go from here, and to say otherwise is hubris.

cnxtim
November 3, 2013 2:33 pm

With respect Leonard, the IPCC is ONLY about statistics – it drives their funding and is crucial to their future financial support.

DR
November 3, 2013 2:34 pm

It’s an upside down greenhouse effect.

Stephen Richards
November 3, 2013 2:38 pm

Leonard Weinstein says:
November 3, 2013 at 2:29 pm
Yours are the key points. No one can prove that variations in climate are human driven or natural and yet the ÜNIPCC along with every numpty leader in the western world is trying to force us to give up our current civilisation to ameliorate the perpetual changes in climate.

Stephen Richards
November 3, 2013 2:39 pm

No-one seriously believes that their data really is a linear trend with AR(1) fluctuations
I’m not sure that I believe this statement, Nick.

November 3, 2013 3:23 pm

What should we expect if the data are actually going through a max of a long term cycle and the CO2 long term cycle max follows some years later? It isn’t a straight line trend.

michael hart
November 3, 2013 3:36 pm

Unprecedentedly non-alarming?

Richard M
November 3, 2013 3:39 pm

I think it’s now pretty obvious that we will pass the 17 year mark when RSS comes out with their October numbers. Over the past few years the global anomaly has dropped over the winter months. There does not appear to be anything unusual going on that would change that pattern. We might even see the length go to 17 years and 1 month. If we do see the cooling winter cycle the length could be 17 years and 6 months by January.

CodeTech
November 3, 2013 3:44 pm

“The Pause”.
Sounds like a Stephen King novel. But I guarantee it’s keeping some people awake at night filled with fear. They wonder if their gravy train has finally gone off the rails.
It’s a travesty, really.
(Pause, cyclical Peak, tomato, toMAHto)

November 3, 2013 3:46 pm

Stephen Richards says: November 3, 2013 at 2:39 pm
“I’m not sure that I believe this statement, Nick.”

Well, it’s 97% certain :). But anyway, the IPCC etc don’t claim that. You won’t see major claims about stat significance of linear trends based on time series there, and the Met Office famously needed considerable prodding to embark on such an exercise. Temperature is reckoned to respond to forcings, which aren’t linear with time.

rgbatduke
November 3, 2013 3:51 pm

Well, and then there is what happens if you add in the error bars in the underlying data. Let’s take HADCRUT4, for example, which has a claimed sigma of around 0.15 C for the period(s) graphed above. RSS is trickier — they estimate error with Monte Carlo, and the result varies with latitude, but sigma errors appear to be in the range 0.03 C to 0.05 C (it is reasonable that satellite observations would be more systematic and would have smaller error bars than any of the mostly land-based estimates of e.g. GASTA). And let us not forget — we’re still dealing with the claim that GASTA is known to the order of a few tenths of a degree at the same time NASA openly acknowledges that GAST is not known more accurately than the order of a full degree C even in the modern era, a proposition that I find rather dubious (but more dubious when supposedly connected back over 100+ years than over the satellite era).
With admitted uncertainty in the actual data the trend is a lot more difficult to compute, not just from persistence of supposedly irrelevant autocorrelation but from the fact that we cannot possibly know measured quantities precisely and the quantities in question are globally extrapolated averages from (usually remarkably sparse and imprecise) measured quantities sampled irregularly in space and time. The most correct statement one can make about the pitifully short data intervals pictured above is that a) they are nearly trendless; b) the uncertainty in the data, and hence the trend, is great. Since historically climate data has repeatedly exhibited trended stretches of fifty years or more with gains in GASTA (according to e.g. HADCRUT4) that almost precisely match the warming observed in the late 20th century — for example in the early 20th century — talking about “95% confidence intervals” for any purpose but falsifying the GCMs used to predict catastrophic CO_2-linked warming is rather pointless. With regard to the GCMs, a significant fraction of the models contributing to CMIP5 are manifestly inconsistent with all of the anomaly predictions. This doesn’t disprove CAGW, but it does prove that most if not all of the GCMs are unreliable and that to the extent that predictions of CAGW rely on them, the correct answer is that we currently have no idea if CAGW is a correct hypothesis, but it is rather LESS likely to be true as opposed to more likely.
rgb
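A crude way to see how measurement uncertainty of that size feeds into a trend estimate is a Monte Carlo perturbation, sketched below. This is purely illustrative, not RSS’s or anyone else’s actual error model; it assumes independent monthly errors, and the default sigma and input series are hypothetical.

```python
import numpy as np

def trend_spread_from_measurement_error(anomalies, sigma=0.15, n_draws=2000):
    """Refit the linear trend many times after adding N(0, sigma) noise to
    each monthly value, and return the spread (std dev) of the slopes."""
    rng = np.random.default_rng(0)
    y = np.asarray(anomalies, dtype=float)
    x = np.arange(len(y))
    slopes = []
    for _ in range(n_draws):
        perturbed = y + rng.normal(0.0, sigma, size=len(y))
        slopes.append(np.polyfit(x, perturbed, 1)[0])
    return np.std(slopes)
```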

KNR
November 3, 2013 3:51 pm

It does not matter how long it is, the magic of AGW means that no matter how long, it will never be long enough to disprove ‘the cause’. And it’s the same ‘magic’ that means any time period can be used to ‘prove’ the cause.
Drop the idea that it’s science, think politics or religion, and you will see how this game works.

November 3, 2013 3:55 pm

“So the message is, trend uncertainty is complicated.”
that’s an understatement, Nick.

geran
November 3, 2013 3:56 pm

DR says:
November 3, 2013 at 2:34 pm
It’s an upside down greenhouse effect.
>>>>>>
I got to go with DR on this one!

Editor
November 3, 2013 3:59 pm

The bottom line is that there has been no warming for several years. This may be down to natural variability, but please wake me up when we find out one way or the other.
I do find it astonishing that, having been told over and again that we have x-months to save the planet from climageddon, alarmists now admit that any man made warming is so small as to be made invisible by a bit of “natural variation”.

Sun Spot
November 3, 2013 4:05 pm

It’s clear CO2 is NOT a major climate driver !!!

jeanparisot
November 3, 2013 4:12 pm

Does anyone know what is the legal status of land claimed by glacial advance, during the glaciation and after the retreat?

geran
November 3, 2013 4:12 pm

Is there any truth to the rumor that Stokes and Mosher are identical twins, separated at birth?
Seriously, their comments are almost identical, except Stokes knows how to write, so obviously raised in an educated household.

Joe
November 3, 2013 4:15 pm

No-one seriously believes that their data really is a linear trend with AR(1) fluctuations, [but we all use that anyway]
—————————————————————————————————————
Forgive me, Nick, but isn’t that rather like me saying “I know I can’t really drive through rush hour at the speed limit, but, for convenience, I’m going to model my commute as if I can” and then telling the boss it’s him in the wrong when I’m consistently late for work?

November 3, 2013 4:17 pm

fhhaynie says:
November 3, 2013 at 3:23 pm
What should we expect if the data are actually going through a max of a long term cycle and the CO2 long term cycle max follows some years later? It isn’t a straight line trend.
I believe we are actually going in a sort of sine wave as illustrated here as far as temperatures are concerned.
http://wattsupwiththat.files.wordpress.com/2009/03/akasofu_ipcc.jpg
I believe we have peaked the top of the cycle and are headed down. The result is that the straight line will get longer and longer over the next 10 or 20 years.
But as far as CO2 is concerned, it will take the same path steadily upwards but it will have only a minimal impact on temperatures.

November 3, 2013 4:29 pm

Richard M says:
November 3, 2013 at 3:39 pm
I think it’s now pretty obvious that we will pass the 17 year mark when RSS comes out with their October numbers.
The anomaly was 0.257 in September. It only needs to be 0.265 or less in October so the chances are over 50% that it will happen when October’s numbers are out.

Nick Stokes
November 3, 2013 4:30 pm

Joe says: November 3, 2013 at 4:15 pm
“I know I can’t really drive through rush hour at the speed limit, but, for convenience, I’m going to model my commute as if I can”

Oddly, I was going to use that analogy too. Not speed limit, but suppose you work out an average speed for your daily commute. That doesn’t mean that you expect to drive at that speed uniformly; in fact it doesn’t imply any speed model at all. But suppose you then plan to arrive at work reliably on time. You need a model of variability. It doesn’t need to perfectly predict how your journey will go, but it needs to give average variation.
Most people in that situation do do something like that. And it mostly works.

November 3, 2013 4:47 pm

Thanks, Werner, JustTheFacts, Nick. Very good article.
Yes, it is a long time now, too long to dismiss anyway. But the future will be as nature pleases.

pat
November 3, 2013 4:48 pm

3 Nov: UK Daily Mail: David Rose: Global warming ‘pause’ may last for 20 more years and Arctic sea ice has already started to recover
Study says warmer temperatures are largely due to natural 300-year cycles
Actual increase in last 17 years lower than almost every prediction
Scientists likened continuing pause to a Mexican wave in a stadium
Even IPCC report co-authors such as Dr Hawkins admit some of the models are ‘too hot’.
He said: ‘The upper end of the latest climate model projections is inconsistent’ with observed temperatures, though he added even the lower predictions could have ‘negative impacts’ if true.
But if the pause lasted another ten years, and there were no large volcanic eruptions, ‘then global surface temperatures would be outside the IPCC’s indicative likely range’.
Professor Curry went much further. ‘The growing divergence between climate model simulations and observations raises the prospect that climate models are inadequate in fundamental ways,’ she said.
If the pause continued, this would suggest that the models were not ‘fit for purpose’.
http://www.dailymail.co.uk/news/article-2485772/Global-warming-pause-20-years-Arctic-sea-ice-started-recover.html

Geoff Sherrington
November 3, 2013 4:52 pm

Leonard Weinstein writes “….the large human warming effect, or the variation could be totally natural dominated variation. Playing statistics games on such processes is truly just a game, with no meaning. At this point we do not know what is going on or which way the trend will go from here, and to say otherwise is hubris.”
Leonard, given your unquestioned adoption of “the large human warming effect” an effect that is derived from a statistical treatment of data, what does this say about your own hubris?
Why do people assume that there has been a large human warming effect when, despite billions of dollars of research, nobody has been able to definitively show that “human” has anything to do with the subject; or that GHG are linked, by an accepted mathematical equation, to deltaT in the atmosphere, however computed.
Two of the pillars of AGW have crumbled, but people are too ready to look the other way.

Jimbo
November 3, 2013 4:56 pm

We have already passed the 15 year mark and looks like we will pass the 17 year mark. Do you think the IPCC is paying attention?
15 to 17 years

NOAA
The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf

————————————————–

Santer et. al. – June 22, 2012
The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf

When is the referee going to blow the full time whistle? When is the fat lady going to sing her lungs out? The climate models have failed.

Green Sand
November 3, 2013 4:58 pm

Nick Stokes says:
November 3, 2013 at 4:30 pm
“It doesn’t need to perfectly predict how your journey will go, but it needs to give average variation.”

Then after 6 months of commute and not getting to work reliably the responsible employee ditches the model and starts using actual observational data.

geran
November 3, 2013 4:59 pm

Geoff Sherrington says:
November 3, 2013 at 4:52 pm
“Why do people assume that there has been a large human warming effect when, despite billions of dollars of research, nobody has been able to definitively show that “human” has anything to do with the subject; or that GHG are linked, by an accepted mathematical equation, to deltaT in the atmosphere, however computed.
Two of the pillars of AGW have crumbled, but people are too ready to look the other way.”
>>>>>>>>>>
Why can’t I state the facts so succinctly?

Jimbo
November 3, 2013 5:00 pm

Is Dr. Phil Jones getting worried now? It’s all happening right now for ya.

Dr. Phil Jones – CRU emails – 7th May, 2009
‘Bottom line: the ‘no upward trend’ has to continue for a total of 15 years before we get worried.’

Jeff
November 3, 2013 5:08 pm

“jeanparisot says:
November 3, 2013 at 4:12 pm
Does anyone know what is the legal status of land claimed by glacial advance, during the
glaciation and after the retreat?”
I suspect they freeze all the assets of everyone involved in the transaction….

Nick Stokes
November 3, 2013 5:12 pm

“Green Sand says: November 3, 2013 at 4:58 pm
“the responsible employee ditches the model and starts using actual observational data.”

No, there’s always a model based on observations – AR(1) etc is just fancier. But when you say – 20 mins average but allow an extra 10 for traffic – that’s a model based on observation. 10 mins will get you there with 95% probability or whatever. It’s never certain. But you need some plan.

November 3, 2013 5:16 pm

I have a visceral problem with the first graph.
How can you show uncertainty of slope (dy/dx) all emanating from a certain point (1997, 0.235)?
Uncertainty of slope comes with uncertainty of intercept.
So the 5 lines ought to look more like a horizontal hour glass than a fan.

wayne
November 3, 2013 5:28 pm

I see. So that “alow 10 extra minutes”, just in case, so you will be 95% sure to never understate the time and be late is like the positive adjustments to the temperature records when they are homogenized, the stations sliced, diced and excluded to produce “the product”. 😉

Camburn
November 3, 2013 5:46 pm

We all know that CO2, other items being constant, will reflect a certain band of radiation back to earth. The results should be an increase in temperature.
However, climate is chaotic. And it most certainly appears that temperatures want to remain within certain parameters, even in that chaotic existence.
What the “pause” is trying to show us is that we don’t understand the complexity of climate. A slight change in AH can do more in regards to temperatures than a large change in CO2.
In the past, CO2 has been a lagging indicator. The behavior of temperatures during the Holocene seems to confirm that CO2 will continue to be a lagging indicator, not a driver in any shape, fashion or form.

geran
November 3, 2013 5:49 pm

wayne says:
November 3, 2013 at 5:28 pm
I see. So that “alow 10 extra minutes”, just in case, so you will be 95% sure to never understate the time and be late is like the positive adjustments to the temperature records when they are homogenized, the stations sliced, diced and excluded to produce “the product”. 😉
With a gallon of tequila, wayne and I would be the life of the party….
(Or, translating for wayne, wihteoh thgall on tesyksia wanisn and i oule be the lifffr of the preth)

Werner Brozek
November 3, 2013 5:51 pm

Stephen Rasey says:
November 3, 2013 at 5:16 pm
I have a visceral problem with the first graph.
How can you show uncertainty of slope (dy/dx) all emanating from a certain point (1997, 0.235)?

It is from 1996.83 that the slope is 0, so that explains the flat line. According to Nick’s program, the error bars from November 1996 for RSS are “CI from -1.274 to 1.264”. This is per century, so it would be from -0.01274 to +0.01264 per year. So over a period of 16.92 years, the error bar goes to about +/- 16.9 x 0.01274 = 0.215. So I plotted the straight line from November 1996 and then detrended it +0.215 one time and -0.215 the next time. Then I did something similar with the SkS numbers except the numbers there were 0.345 instead of 0.215.
If I did this wrong, please correct me Nick. Thanks!
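A minimal sketch of that construction (an approximation of what the WFT detrend option produces, using only the month count and anomaly level quoted in this discussion):

```python
import numpy as np

# Five lines of the first graph: a flat trend from November 1996 plus
# upper/lower lines tilted by the total trend uncertainty over the span
# (about 0.215 C for Nick's CI and 0.345 C for SkS's, per the comment).
months = np.arange(204)            # roughly Nov 1996 to late 2013
frac = months / months[-1]         # 0 at the start of the span, 1 at the end
flat_value = 0.235                 # level of the zero-slope line

flat = np.full_like(frac, flat_value)
nick_upper = flat_value + 0.215 * frac   # "detrended" by +0.215 over the span
nick_lower = flat_value - 0.215 * frac
sks_upper = flat_value + 0.345 * frac
sks_lower = flat_value - 0.345 * frac
```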

pat
November 3, 2013 6:05 pm

no pause in Hansen’s nuclear advocacy:
3 Nov: CNN: Top climate change scientists’ letter to policy influencers
Editor’s note: Climate and energy scientists James Hansen, Ken Caldeira, Kerry Emanuel and Tom Wigley released an open letter Sunday calling on world leaders to support development of safer nuclear power systems. For more on the future of nuclear power as a possible solution for global climate change, watch CNN Films’ presentation of “Pandora’s Promise,” Thursday, November 7, at 9 p.m. ET/PT…
http://www.cnn.com/2013/11/03/world/nuclear-energy-climate-change-scientists-letter/
***LOL:
3 Nov: CNN: Thom Patterson: Climate change warriors: It’s time to go nuclear
Cavanagh (Ralph Cavanagh of the Natural Resources Defense Council) said the “movie (Pandora’s Promise) attempts to establish the proposition that mainstream environmentalists are pouring into nuclear advocacy today. They aren’t. I’ve been in the NRDC since 1979. I have a pretty good idea of where the mainstream environmental groups are and have been. I’ve seen no movement.”
***Selling nuclear energy to environmentalists is a tough pitch. Hansen acknowledged that many of them won’t easily buy into it. Parts of the community operate like “a religion of sorts, which makes it very difficult,” Hansen said. “They’re not all objectively looking at the pros and cons.”…
http://www.cnn.com/2013/11/03/world/nuclear-energy-climate-change-scientists/index.html
no doubt some will welcome this…but not me.

pat
November 3, 2013 6:09 pm

3 Nov: NYT Dot Earth: Andrew C. Revkin: ‘To Those Influencing Environmental Policy But Opposed to Nuclear Power’
Four climate scientists, three of whom have published in peer-reviewed literature on energy issues (a sampler from Wigley, Hansen and Caldeira), are pressing the case for environmental groups to embrace the need for a new generation of nuclear power plants in a letter they distributed overnight to a variety of organizations and journalists.
Amory Lovins, Joe Romm and Mark Jacobson would disagree, I’d bet. I certainly know many other energy and climate analysts who would sign on in a heartbeat, including the physics Nobel laureate Burt Richter and Energy Secretary Ernest Moniz…
There’s more from Caldeira in a recorded video chat we had awhile back…VIDEO
http://dotearth.blogs.nytimes.com/2013/11/03/to-those-influencing-environmental-policy-but-opposed-to-nuclear-power/?_r=0

November 3, 2013 6:12 pm

Camburn says:
“We all know that CO2, other items being constant, will reflect a certain band of radiation back to earth. The results should be an increase in temperature.”
Let’s combine the charts in this excellent article, and see if your conjecture pans out.
As we see, there is something wrong. Werner Brozek says:
“I believe we have peaked the top of the cycle and are headed down.”
That does seem to be the case, as we see here.
Kudos to Werner for inviting Nick Stokes to be part of the discussion. That is something we don’t see in the alarmist blogosphere.

pat
November 3, 2013 6:18 pm

1 Nov: UK Daily Mail: Hannah Roberts in Rome: Toxic nuclear waste dumped illegally by the Mafia is blamed for surge in cancers in southern Italy
Italian Senate investigating link between pollutants and 50 per cent rise
Classified documents from 1997 reveal poison would kill everyone
Nuclear sludge, brought from Germany, was dumped in landfills
The Italian Senate is investigating a link between buried pollutants and a rise of almost 50 per cent in tumours found in the inhabitants of several towns around Naples.
In classified documents from 1997, only now released to the public, a mafia kingpin warned authorities that the poison in the ground would kill everyone ‘within two decades’.
http://www.dailymail.co.uk/news/article-2483484/Toxic-nuclear-waste-dumped-illegally-Mafia-blamed-surge-cancers-southern-Italy.html#ixzz2jQTNprrW

Latitude
November 3, 2013 6:22 pm

Nick: “And it mostly works.”
==
and you mostly know why…
In this case…if the temps go down….no one has a clue why
..and if temps go up….no one has a clue why
Face it people, they’ve been flat out lying for 15 years…and they are lying now

pat
November 3, 2013 6:33 pm

1 Nov: A Statement from U.S. Secretary of Energy Ernest Moniz Regarding Fukushima
“On Friday, I made my first visit to the Fukushima Daiichi Nuclear Power Station. It is stunning that one can see firsthand the destructive force of the tsunami even more than two and a half years after the tragic events”…
“They (TEPCO) face a daunting task in the cleanup and decommissioning of Fukushima Daiichi, one that will take decades and is being carried out under very challenging conditions.”…
http://energy.gov/articles/statement-us-secretary-energy-ernest-moniz-regarding-fukushima

Nick Stokes
November 3, 2013 7:01 pm

Werner Brozek says: November 3, 2013 at 5:51 pm
“Then I did something similar with the SkS numbers except the numbers there were 0.345 instead of 0.215.
If I did this wrong, please correct me Nick. Thanks!”

I think it’s right if you just focus on the slope – it’s not attempting different line fits. And it shows just how such a slope extreme would look on the graph.
I have said in the past, though, that this testing of zero slope for significance is the wrong way around. That applies to the general narrative of x years of no significant warming. I’ve posted about it here. A stat test succeeds if it rejects the null hypothesis. If it doesn’t, there’s not much to be concluded. It could be noise, just a weak test, or the null could be true. So the logical thing is to do as Lucia does – test a predicted non-zero trend as null.
It’s likely symmetric, so where this diagram says a 0.344 slope is the limit for the zero-slope null, zero is likely to be about the limit for a 0.345 null. So you could say that a trend of 0.345 °C/dec was rejected, FWIW. But it would be better to do that test directly.

Jquip
November 3, 2013 7:02 pm

Nick Stokes: “10 mins will get you there with 95% probability or whatever. It’s never certain. But you need some plan.”
This was an almost reasonable statement. But if 20 min is your average, then ‘plus or minus’ 10 minutes would be the estimate of 95%. Which is what you do when you’re interested in modelling a problem. When you’re interested in betting on a problem then you take only one of +10 or -10, just to see what you should set your betting odds at.
Which is all terribly misplaced, since it presumes that: a) we are using the historical data; b) the choice of which tail to bet off of, for safety/policy reasons, is obvious or well settled.
But then we are quite strictly not using historical averages, just short term and recent trends. And, of course, while we’re all interested in avoiding ice ages — they tend to be terrible for biodiversity — we’re betting the other tail, in that we might get greater biodiversity. So we’re not doing it right, and we’re betting against the bet that is a net ‘win’ in terms of human and environmental interests.

Barclay E MacDonald
November 3, 2013 7:22 pm

Nick Stokes says: “Well, it’s 97% certain :). ”
Nice:))

Mike Bromley the Kurd
November 3, 2013 7:45 pm

CNXTim says:
November 3, 2013 at 2:23 pm
Apologies to Paul McCartney…
Sir Paul has imbibed the Warmista Elixir, unfortunately. You won’t catch him modifying his lyrics, I’m afraid. Yet another in the long line of genuinely talented people who, because of their popularity, embrace the religion.

pat
November 3, 2013 8:06 pm

for James Hansen & Co:
4 Nov: Bloomberg: Pankaj Mishra: India Shouldn’t Buy What Japan Is Selling
Last week in the south Indian city of Pondicherry, I met a friend who had managed to penetrate the security lockdown around Kudankulam, the Russian-built nuclear power station in Tamil Nadu that began partial operations late last month despite strong protests from local villagers.
Kudankulum lies only a few miles away from a coastline that was ravaged by a tsunami in 2004. Opposition to the plant intensified after another intense earthquake and tsunami in March 2011 caused meltdowns at three nuclear reactors at the Fukushima nuclear plant in Japan. Since then, Indian police have deported the few journalists who have tried to report on the protests, sequestered entire villages and levied criminal charges against tens of thousands of locals, some of whom have been accused of sedition and “waging war on the state.” …
It is also true that, as Japan scholar Jeff Kingston points out, the export of technology by Japanese companies is key to Abenomics. Japan is at the center of the global nuclear-industrial complex, which stands to benefit greatly from the continued sale of an outdated and demonstrably dangerous technology to wannabe nuclear powers such as India and Turkey.
Toshiba Corp. owns 87 percent of Westinghouse Electric Co. LLC, which is helping to build a nuclear plant — again, against intense local protests — in the Indian state of Rajasthan; Hitachi Ltd. and Mitsubishi Group are in collaborations with General Electric Co. and the French company Areva SA, whose multiple deals with India make it the real beneficiary of the country’s U.S.-assisted admission to the nuclear club in 2008.
In this scramble for large profits, democratic values such as oversight, accountability and transparency are likely to be trampled into the dust. The case of Tepco shows how a large and networked company can buy the silence of the media as well as of politicians and regulators. Thus, while Fukushima remains volatile, another nuclear catastrophe seems to be developing in India. As in Japan, the full-throated advocacy of nuclear energy by its leaders, and the absence of debate within the Parliament or the mainstream media, reinforces the bitter truth of a line from Slovenian philosopher Slavoj Zizek that Ramana quotes in his book: “It is indeed true that we live in a society of risky choices, but it is one in which only some do the choosing, while others do the risking.”
http://www.bloomberg.com/news/2013-11-03/india-shouldn-t-buy-what-japan-is-selling.html

November 3, 2013 8:12 pm

Mike Bromley the Kurd says:
November 3, 2013 at 7:45 pm
… “Sir Paul has imbibed the Warmista Elixir, unfortunately. You won’t catch him modifying his lyrics, I’m afraid. Yet another in the long line of genuinely talented people who, because of their popularity, embrace the religion.”
*
I sometimes wonder if such stars turn to this particular religion because they fear they are losing popularity. If they felt on top of their game, they wouldn’t need propping up by “being seen to be green”. They embrace it like it’s another badge which might improve their image, as though being seen in the ranks of eco-warriors they will be looked up to.
I guess that goes to show how out of touch with reality they can get.

RoHa
November 3, 2013 8:34 pm

It’s all mine.

Konrad
November 3, 2013 8:35 pm

Latitude says:
November 3, 2013 at 6:22 pm
“Face it people, they’ve been flat out lying for 15 years…and they are lying now”
—————————————————————————————————–
Tom Karl’s early 1985 paper on TOB adjustment is notable for mentioning “Global warming” in the concluding remarks. The paper starts with a quite reasonable adjustment for time zones, but sneaks off into computer programs for applying TOB adjustment for changes between morning and evening surface station readings that make no use of actual station metadata.
There is virtually no warming in US surface station records without adjustments. TOB adjustment is the single largest.
Flat out lying for 15 years? And the rest!

Werner Brozek
November 3, 2013 8:59 pm

UPDATE:
RSS for October has just come out and the value was 0.207. As a result, RSS has now reached the 204 month or 17 year mark. The slope over the last 17 years is -0.000122111 per year.

Thrasher
November 3, 2013 8:59 pm

RSS out for October at +0.207

November 3, 2013 9:03 pm

Let’s extend Nick’s analogy a bit…
I’ve been commuting to work and keeping track of my commute times for 10 years. I figure my commute time is 30 minutes, plus or minus 5 minutes. The modelers take my data and calculate that my commute time is 30.49 minutes, plus or minus 4.558 minutes. They calculate that based on the data, my commute time has been increasing by 0.014 minutes per year. Then they notice that I forgot to write down my times fairly often in the first year or two. No problem, they just ram a linear trend through the data and calculate the missing values from that. Of course if you extend the linear trend far enough back in time, at some point I apparently start arriving at work just before I leave home. Well, let’s just ignore that for now. But now we have a new problem, which is when the modelers were graphing all the data, they noticed that five years ago there was a six month period when my commute times were 45 minutes, not 30. They quickly conclude that since the city is larger now than it was then, and traffic congestion is worse, that my records from that six month period must be wrong, and so they adjust them to match the balance of the data.
I then tell the modelers that there is a new housing development going up that will add 1,000 new cars to the immediate area, and ask if they can tell me how this will impact my commute time. They run their computers ragged, and then advise me that my commute time will increase year over year 10 times as fast as it has been up until now. I’m dubious because there’s new roads and better traffic light sequences, and sure enough, my commute time drops to 26 minutes.
The modelers tell me that this is impossible, there must be something wrong with the way I’m measuring time. Time goes on and my commute time drops to 24 minutes. The modelers tell me I am nuts, my commute time is actually well over 30 minutes. Next year I average 24 minutes again, and 24 minutes the year after. The modelers tell me that it is just a pause. Any day now the awful truth will dawn on me that my commute time is actually increasing so fast that I will never get to work at all. Another year goes by…
Now you know everything you need to know about models.

James Allison
November 3, 2013 9:12 pm

When the temperature is trending up Warmists tell us with absolute certainty that CO2 causes it. When the temperature is flatlining or trending down one of em says “trend uncertainty is complicated” and another adds his support by telling us breathlessly that that is actually an understatement. LOL

Nick Stokes
November 3, 2013 9:43 pm

Konrad says: November 3, 2013 at 8:35 pm
“The paper starts with a quite reasonable adjustment for time zones, but sneaks off into computer programs for applying TOB adjustment for changes between morning and evening surface station readings that make no use of actual station metadata.”

I presume you mean this 1986 paper. The method certainly does require station data on observation times. This 2003 NOAA paper says:
“This time of observation bias is addressed in the adjusted HCN using the method described in Karl et al. [1986]. This adjustment approach requires as input an a priori knowledge of the actual observation time in each year at each HCN station.”

November 3, 2013 9:48 pm

Leonard Weinstein says:
November 3, 2013 at 2:29 pm
Due to the chaotic underlying nature of climate, and due to the fact that we are likely nearing the end of the Holocene, any variation up, down, or flat for even several decades demonstrates nothing useful. The flat or even downward trend could be a large natural downward trend that has been significantly temporarily overcome by the large human warming effect, or the variation could be totally natural dominated variation. Playing statistics games on such processes is truly just a game, with no meaning. At this point we do not know what is going on or which way the trend will go from here, and to say otherwise is hubris.
+++++++++++
Hold on just a minute.
We do know something from the data here. Guess what that is? We know that the models’ predictions and/or projections have been proven to be wrong based on their own metrics.
That you suggest they have no predictive value is true. But that is NOT the point. Again the point is that we do know that the models which predicted doom are incorrect.

RossP
November 3, 2013 9:49 pm

Then you have to take into account the data tampering that has gone on. Steve Goddard shows what NOAA did to September
http://stevengoddard.wordpress.com/2013/11/03/noaa-data-tampering-reaches-a-tipping-point/
I assume Werner is using the adjusted data for the NOAA data set

Brian H
November 3, 2013 10:00 pm

Leonard;
Scientific truth, and the search therefor, is not driving this oxcart. A claimed connection between a so-called “forcing” increase in CO2 and global temperature has been asserted and challenged. Decisions of vast and fatal import have been taken and demanded based on the assertion. Would you say the assertion or the challenge has more to back it at this time?

RossP
November 3, 2013 10:26 pm

I should have added the following link to my comment on data tampering above. Again from Steve Goddard
http://stevengoddard.wordpress.com/2013/11/02/latest-nasa-global-data-tampering/

November 3, 2013 10:31 pm

RossP says:
November 3, 2013 at 9:49 pm
I assume Werner is using the adjusted data for the NOAA data set
I actually do not use NOAA since it is not on WFT. However I do use GISS, and September for GISS was at a record high for the month, shared with 2005, namely 0.74.

Magicjava
November 3, 2013 11:33 pm

We all know that CO2, other items being constant, will reflect a certain band of radiation back to earth. The results should be an increase in temperature.
————
I don’t think we all know that. I don’t think it’s ever been demonstrated that adding *any* kind of greenhouse gas to the atmosphere causes the temperature to increase.
In fact, if you removed the most powerful greenhouse gas, water vapor, from the atmosphere, the temperature wouldn’t go down, it would go *up*! A lot. By about 20 degrees or so, if I remember correctly. This is because water vapor creates clouds which have an overall powerful cooling effect on the planet.
AGW depends on water vapor to get the heating it claims. But the facts are that water vapor has the exact opposite effect: it cools the planet.

David A
November 3, 2013 11:38 pm

Yes, and record adjustments and differential from RSS.

Peter Miller
November 3, 2013 11:54 pm

The Global Warming Industry costs us almost $1.0 billion per day.
Apart from filling the pockets of government and quasi-government bureaucrats and ‘scientists’, is there anything else tangible this incredible waste of money achieves?
Observations, when not routinely manipulated by the data gatekeepers, continue to make a mockery of the models.
The recent mild warming of the past 150 years is no more than a typical natural climate cycle, which this planet has witnessed millions of times before. Man has probably affected this latest natural climate cycle in a very minor way, but we are totally unable to quantify this amount and do not know the weighting (IPCC guesses can be safely ignored) to apply to CO2, soot, aerosols, irrigation, farming etc.

King of Cool
November 4, 2013 12:22 am

Don’t know about hiding in the deep ocean, I reckon the missing heat went underground and has appeared through a vent in the centre of Australia.
Still 2 months to go but we are having a warm one:
http://www.bom.gov.au/climate/updates/articles/a003-2013-temperature.shtml
Good news for the alarmists and tax loving politicians who are now vigorously promoting an emissions trading scheme. Doesn’t matter whether Central Europe has had 5 consecutive winters colder than normal or that the Antarctic re-supply ship Aurora Australis is icebound in record sea ice 10 days behind schedule and still 177 nm from her destination.
You will not hear about this on our ABC. All you will hear about is bush fires and heat waves. Yep, global warming for many is all about the cherries in your own back yard.
Last year was looking dicey for the demon CAGW but now we will need at least another 2-3 years of a line that looks more like a sine curve than a straight line on the temperature anomaly chart before it will be endangered here.

RossP
November 4, 2013 12:25 am

Sorry Werner @10.31, my mistake . I got the two links mixed up.

thisisnotgoodtogo
November 4, 2013 1:07 am

How much CO2 does it take to support the AGW industry?

clovis man
November 4, 2013 1:23 am

Are we allowed to call the late 20th century a ‘blip’ yet?

SandyInLimousin
November 4, 2013 1:39 am

Re Commute to work
When a recession hits, thanks to politicians and bankers, the commute time goes down as does fuel usage and the model can change. For instance you can have a ten minute water cooler chat about the weather. Then there are the seasonal adjustments for school holidays when, in the UK at least, the volume of traffic drops by about 15% and a shorter journey time is the result.
It’s a great analogy.

Disko Troop
November 4, 2013 1:47 am

Take away their computers and make them use slide rules and all this silliness would end.

Scottish Sceptic
November 4, 2013 1:52 am

Thanks Anthony, William and the rest of the crew – as a result of reading one of these papers, I now have the answer and know why these climate models fail. I know why we get these predictions that don’t work. This will take time to put together, but the thanks can come sooner.

Konrad
November 4, 2013 2:21 am

Nick Stokes says:
November 3, 2013 at 9:43 pm
—————————————————————————————
Yes Nick, that would be the one, written 1985, accepted 1986. Tom Karl’s pet rat TOBy, chewing on raw surface station data since 1986 😉
From the abstract-
“A self-contained computer program has been developed which allows a user to estimate the time of observation bias anywhere in the contiguous United States without the costly exercise of accessing 24-hourly observations at first order stations.”
From the conclusion –
“…or temporal analysis of climate, especially climate change…”
OCR is a wonderful thing, is it not? Although I suspect a certain Dr. Pierrehumbert, writer of the sacred texts and player of the devil’s instrument, wishes it had never been invented.
Nick, the real problem is not that adding radiative gases to the atmosphere will not reduce the atmosphere’s radiative cooling ability. Nor that radiative gases act to cool our atmosphere at all concentrations above 0.0ppm. The real problem is that some of those promoting and profiting from AGW knew this before 1995, and this being the age of the Internet, they have left a trail of “climate science” behind them that would fertilise the Simpson desert.

Greg Goodman
November 4, 2013 2:29 am

Good informative post but the way you are representing CO2 is totally misleading. You are deliberately scaling and offsetting so that the slope fills the graph range. You could do that with absolutely anything that has a positive trend and it would not show a damn thing about its relation to surface temps.
You have chosen a particularly useless web tool to show such a comparison because you can do nothing apart from detrending and running means on the data. (Duh!)
Probably the best you can do is scale CO2 to 400ppm full range. This is enough to make the point that it has been steadily rising despite the ‘pause’.
http://www.woodfortrees.org/plot/esrl-co2/from:1975/scale:0.0025/offset/plot/rss/plot/rss/from:1997/trend
Since CO2 “forcing” is supposed to be a log relationship, and you then need to consider how to scale it (i.e. climate sensitivity, which is the big question), this is still not right, but at least you are showing the actual quantity of CO2 rather than rigging it to make your point.
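To make the scaling point concrete, here is a small sketch in Python with made-up numbers rather than the actual WFT series: any monotone series can be linearly rescaled to fill the vertical range of a plot, whereas a fixed scale such as the 0.0025 factor suggested above at least preserves the real ppm quantity.

import numpy as np

years = np.arange(1975, 2014)
co2 = 331.0 + 1.7 * (years - 1975)              # made-up ppm ramp, for illustration only
anoms = 0.2 * np.sin((years - 1975) / 3.0)      # stand-in temperature anomaly series

def fill_range(series, lo, hi):
    # Rescale a series linearly so it spans exactly [lo, hi] on the plot axis.
    s = (series - series.min()) / (series.max() - series.min())
    return lo + s * (hi - lo)

co2_stretched = fill_range(co2, anoms.min(), anoms.max())  # slope fills the frame whatever the units
co2_fixed_scale = co2 * 0.0025                             # 400 ppm maps to 1.0, as suggested above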

richardscourtney
November 4, 2013 2:33 am

Nick Stokes:
I write to ask for a clarification.
Nearly half a century has passed since I used to conduct regression analyses by hand (there were no electronic calculators in those days) but I still possess the ink pens and templates I used to plot graphs of the resulting regression results. Clearly, methods may have changed recently, but I would welcome knowledge of how regression analyses of different time series data sets with different lengths can provide confidence limits of their trends which all radiate from the same single point on a graph.
Please explain how you calculated the confidence limits for each calculated linear trend and what do the plotted confidence limits indicate.
I ask because the plotted values of confidence limits seem to make no sense. There are three confidence bounds which can be calculated from the result of a linear regression of a data series; viz.
1. The confidence limits of the data set.
And
2. The confidence limits of the trend line.
And
3. The confidence limits of the regression.
Simplistically, the confidence limits of the data set are an ‘average’ of the confidence limits for each datum in the set, and result in error bars for the data set. These error bars are two lines which parallel the trend line, and they show the band within which the data can probably be found; e.g. if they are 95% confidence limits for the data set then one in twenty of the data points is probably outside the lines.
Clearly, the confidence limits shown above are not the confidence limits of the data set. They are not a pair of lines which parallel the trend line of each data set.
The confidence limits of the trend line are the range of values within which the slope of the trend is calculated to exist, and they are error bars which have the form of straight lines which intersect with the centre of the trend line. The trend line can be rotated around the central point of the trend line to any value within those limits, and they show the range within which the slope of the trend line can probably be found; e.g. if they are 95% confidence limits then there is a 20:1 probability that the slope of the trend line is within those limits.
Clearly, the confidence limits shown above are not the confidence limits of the trend line. They are not a pair of lines which intersect the centre of the trend line of each data set. Indeed, they all intersect at the same place on the graph for all the data sets which have different lengths.
The confidence limits of the regression are the range of values within which the data and the slope of its linear trend are calculated to exist. They have the form of two curves (one on each side of the trend line) and they are each closest to the trend line at the centre of the trend line. They are error bars which combine the confidence limits of the data set and the trend line; e.g. if they are 95% confidence limits then there is a 20:1 probability that the slope of the trend line is within those limits, and there is a one in twenty of the data points will fall within those limits for any trend line which is within those limits.
Clearly, the confidence limits shown above are not the confidence limits of the regression. They are not a pair of curves for each data set.
So, I write to ask what your calculated confidence limits indicate and how they were calculated.
Richard
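For readers who want to see the distinction Richard draws in numbers, here is a minimal sketch in Python using synthetic monthly anomalies and plain ordinary least squares (no autocorrelation adjustment, so it matches neither Nick's gadget nor SkS exactly). The slope interval corresponds to his second case; the hyperbolic band corresponds to his third.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(204) / 12.0                                # 17 years of monthly data, in years
y = 0.005 * t + rng.normal(0.0, 0.1, t.size)             # near-flat trend plus noise

n = t.size
b, a = np.polyfit(t, y, 1)                               # slope b and intercept a
resid = y - (a + b * t)
s2 = np.sum(resid ** 2) / (n - 2)                        # residual variance
sxx = np.sum((t - t.mean()) ** 2)
tcrit = stats.t.ppf(0.975, n - 2)

se_slope = np.sqrt(s2 / sxx)                             # standard error of the trend slope
slope_ci = (b - tcrit * se_slope, b + tcrit * se_slope)  # limits of the trend, pivoting about t.mean()

se_fit = np.sqrt(s2 * (1.0 / n + (t - t.mean()) ** 2 / sxx))
band_lo = (a + b * t) - tcrit * se_fit                   # the two hyperbolic curves of the
band_hi = (a + b * t) + tcrit * se_fit                   # regression confidence band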

richardscourtney
November 4, 2013 2:40 am

Ooops!
I wrote
e.g. if they are 95% confidence limits then there is a 20:1 probability that the slope of the trend line is within those limits, and there is a one in twenty of the data points will fall within those limits for any trend line which is within those limits.
I intended to write
e.g. if they are 95% confidence limits then there is a 20:1 probability that the slope of the trend line is within those limits, and there is a one in twenty CHANCE ALL the data points will fall within those limits for any trend line which is within those limits.
Sorry, Richard

Joe
November 4, 2013 2:51 am

Nick Stokes says:
November 3, 2013 at 4:30 pm
Joe says: November 3, 2013 at 4:15 pm
“I know I can’t really drive through rush hour at the speed limit, but, for convenience, I’m going to model my commute as if I can”
Oddly, I was going to use that analogy too. Not speed limit, but suppose you work out an average speed for your daily commute. That doesn’t mean that you expect to drive at that speed uniformly; in fact it doesn’t imply any speed model at all. But suppose you then plan to arrive at work reliably on time. You need a model of variability. It doesn’t need to perfectly predict how your journey will go, but it needs to give average variation.
Most people in that situation do do something like that. And it mostly works.
———————————————————————————————————————
But it doesn’t work if you choose a model (such as “drive at the speed limit”) which you know doesn’t reflect reality and we know that simple linear trends don’t reflect any part of the real climate system!
So, while such “simplifications” may be convenient, and may appear to work occasionally, any such appearance is happenstance and can’t be relied on to give any indication at all of the future.

Nick Stokes
November 4, 2013 3:13 am

richardscourtney says: November 4, 2013 at 2:33 am
“Nick Stokes:
I write to ask for a clarification.”

Well, it isn’t my graph. But as I understand, the slopes of the lines are equal to the 95% CI values, computed by my method (AR(1)) and the SkS values. The central line, of course, has the slope of the actual trend. They aren’t themselves fitted lines; the purpose is, I believe, simply to show how the CI extremes look against the actual results.
As I said above, finding that the trend over a period is not significantly different from zero doesn’t really tell you very much. It could still be quite large, and might indeed be consistent with AGW predictions, without being significantly different from zero. It depends on the noise. This plot shows the limits of slopes that could be in the range.

Nick Stokes
November 4, 2013 3:24 am

Joe says: November 4, 2013 at 2:51 am
“we know that simple linear trends don’t reflect any part of the real climate system!”

Well, we know that travelling at constant speed doesn’t reflect any part of city driving. But the idea of average speed is still useful, and it’s how we (and electronic navigators) plan journeys.
A trend is an average rate of change.

Louis Hooffstetter
November 4, 2013 3:53 am

davidmhoffer says:
November 3, 2013 at 9:03 pm
Thanks, David. Your analogy is perfect!

richardscourtney
November 4, 2013 3:56 am

Nick Stokes:
Thankyou for your post at November 4, 2013 at 3:13 am in response to my request for clarification at November 4, 2013 at 2:33 am (and its corrigendum at November 4, 2013 at 2:40 am).
My post explained what I failed to understand about the plotted lines supplied by you in the above graph and why I failed to understand. And I asked

Please explain how you calculated the confidence limits for each calculated linear trend and what do the plotted confidence limits indicate.

Unfortunately, your response leaves me uninformed as to the answers to my requests for explanation. It says

Well, it isn’t my graph. But as I understand, the slopes of the lines are equal to the 95% CI values, computed by my method (AR(1)) and the SkS values. The central line, of course, has the slope of the actual trend. They aren’t themselves fitted lines; the purpose is, I believe, simply to show how the CI extremes look against the actual results.
As I said above, finding that the trend over a period is not significantly different from zero doesn’t really tell you very much. It could still be quite large, and might indeed be consistent with AGW predictions, without being significantly different from zero. It depends on the noise. This plot shows the limits of slopes that could be in the range.

Sorry, but I am told nothing by your statement that “the purpose is, I believe, simply to show how the CI extremes look against the actual results”. I think it says that the plotted confidence limits are the 95% confidence limits of the trend lines but each displaced to the same and wrong position. Indeed, my interpretation seems to be supported by your saying, “This plot shows the limits of slopes that could be in the range”. But I could be wrong in that interpretation.
And I did see your linked comment above; i.e.
http://wattsupwiththat.com/2013/11/03/statistical-significances-how-long-is-the-pause-now-includes-september-data/#comment-1465406
However, frankly, I fail to see how that addresses my request. So, to be clear, I repeat it.
Please explain how you calculated the confidence limits for each calculated linear trend and what do the plotted confidence limits indicate.
Richard

Nick Stokes
November 4, 2013 4:16 am

richardscourtney says: November 4, 2013 at 3:56 am
“Please explain how you calculated the confidence limits for each calculated linear trend and what do the plotted confidence limits indicate.”

Again, it’s Werner’s plot and he calculated the numbers as he indicated. I can tell you how my gadget that he used gets the CI’s. It is explained in this original post, with further commentary here and in linked plots. The standard regression assumption is that the residuals are random variables, which may be correlated. In AR(1), a linear expression can be written which is a linear combination that is expected to be an iid random variable. Its variance is estimated from the sum of squares of residuals, and that scales the CI’s.
In this post I have compared some of my calculations, using the Quenouille approximation, with the results of the R ar() routine. It’s in the table near the end.
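As a rough sketch of the kind of calculation described above, assuming the Quenouille correction takes the usual effective-sample-size form (an illustration only, not necessarily identical to the gadget's code):

import numpy as np
from scipy import stats

def trend_ci_ar1(t, y, conf=0.95):
    # OLS slope with a confidence interval widened for lag-1 autocorrelation of the
    # residuals via an effective sample size; one common form of the adjustment.
    n = t.size
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation of residuals
    neff = max(n * (1.0 - r1) / (1.0 + r1), 3.0)         # effective number of independent points
    sxx = np.sum((t - t.mean()) ** 2)
    se_b = np.sqrt(np.sum(resid ** 2) / (neff - 2.0) / sxx)   # SE based on neff rather than n
    tcrit = stats.t.ppf(0.5 + conf / 2.0, neff - 2.0)
    return b, (b - tcrit * se_b, b + tcrit * se_b)

With strongly autocorrelated residuals the adjusted interval can easily be two or three times wider than the naive OLS interval, which is the whole point of the correction.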

richardscourtney
November 4, 2013 4:57 am

Nick Stokes:
Many thanks for your post at November 4, 2013 at 4:16 am in response to my request for clarification. Especial thanks for this link
http://www.moyhu.blogspot.com.au/2011/11/picture-of-statistically-significant.html
which provides your calculation method.
Firstly, as I understand the calculation method in that link it ignores autocorrelation and assumes the residuals are independent (although you admit this is not true) then calculates the standard error of the trend at each point along the time series.
If I have understood you correctly then that method can be disputed on several grounds, and I am grateful for your links which would assist such dispute. However, I am avoiding such a dispute because my request for clarification is an attempt to understand what is presented in the above graph: my request is NOT an attempt to side-track the thread into discussion of the validity of your method.
I fail to understand how your method provides 95% confidence limits for each of the data sets which consist of lines which radiate from the same point as is presented in the above graph. Indeed, at the intersection of those lines the confidence limits differ from each trend by zero (which is impossible).
I accept that – as you say – “it’s Werner’s plot and he calculated the numbers as he indicated”, but it is your method and you are not objecting to how Werner has presented the results of his calculations. Clearly, you are the person most familiar with your method and its use which is why I am asking you for clarification.
You have kindly explained your method for calculation of 95% confidence limits of the trend. But, if my understanding of your method is correct, then my failure to understand the graph is increased.
I asked

Please explain how you calculated the confidence limits for each calculated linear trend and what do the plotted confidence limits indicate.

And you have answered my request for clarification of “how you calculated the confidence limits for each calculated linear trend” by referencing your method and saying that Werner used it. Importantly, you have not disputed how Werner used your method and presented its results so I understand you to agree that the above graph represents results with which you agree. Hence, I am asking you what the graph shows because it is the results of your method which are being presented.
Hence, I again ask with specific reference to the above graph,
what do the plotted confidence limits indicate?
Please note that I am grateful for your responses so far and I am choosing to not debate the validity of your method. I am trying to understand what is indicated by the results of your method which are displayed in the above graph.
Richard

Nick Stokes
November 4, 2013 5:29 am

richardscourtney says: November 4, 2013 at 4:57 am
“I am trying to understand what is indicated by the results of your method which are displayed in the above graph.”

If, as Werner did, you use my gadget as linked above, either by clicking on the triangle or adjusting the red and blue disks until the time range is right, then it will report that the trend from Nov ’96 to Sep ’13 is -0.005°C/century and the CI’s of that are -1.274 to 1.264°C/century. That is the extent of my contribution to the plot, and the numbers are calculated as in the linked post.
Werner has plotted lines with slopes corresponding to those extreme values, as well as the central value. The numbers shown on the plot are actually the product of the slope with the time interval (that’s what WFT uses in the detrend facility).
What it means is that, although the observed trend was zero, the vagaries of weather could have produced, in the same circumstances, a trend of anything from -1.274 to 1.264. That is, if the random weather could be re-run, the trend would be within that range with 95% probability.
As I said above, this is not the normal logic of statistical testing. Orthodox would be to test whether, given a hypothesis of say warming at 1.2 °/cen, an observation of -0.005 is within the bounds of reasonable (95%) probability. If not, you can reject the hypothesis.
However, that assumes that you had no prior knowledge. If you are testing it because you already suspect it was a “pause”, then the test doesn’t work. The reason is that 1 in 20 events do occur, and if you look where you know there is a better chance of finding them, then they won’t be 1/20 any more.
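The arithmetic behind the plotted lines, as described above, is then just the trend and its CI endpoints (in °C/century) multiplied by the length of the window in centuries; a quick sketch with the numbers quoted:

trend_per_century = -0.005                     # gadget output quoted above
ci_per_century = (-1.274, 1.264)

window_years = 16.0 + 11.0 / 12.0              # Nov 1996 to Sep 2013, 203 months
window_centuries = window_years / 100.0

central_change = trend_per_century * window_centuries            # about -0.001 over the window
ci_change = tuple(c * window_centuries for c in ci_per_century)  # about (-0.22, +0.21) °C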

November 4, 2013 5:39 am

No-one seriously believes that their data really is a linear trend with AR(1) fluctuations, … In my trend viewer, I used AR(1).
=============
that pretty much sums up climate science. use a model that everyone agrees doesn’t match the characteristics of the underlying data, and then place your faith in the results of the model.
linear approximations deliver nonsense when applied to nature, because nature is inherently cyclical with scale independent variability. This causes linear methods to find trends where no trends exist, and to significantly underestimate natural variability.
fit a linear trend to spring-time temperatures and what does it tell you about the coming winter? that the winter will be warmer than the spring. climate science 101.
fit a linear trend to summer-time temperatures, and you will find a “pause” in the spring-time warming. but climate science is confident that by the fall warming will reappear.

Werner Brozek
November 4, 2013 5:57 am

Greg Goodman says:
November 4, 2013 at 2:29 am
Good informative post but the way you are representing CO2 is totally misleading. You are deliberately scaling and offsetting so that the slope fills the graph range. You could do that with absolutely anything that has a positive trend and it would not show a damn thing about its relation to surface temps.
There are different ways of looking at this. According to Nye, the CO2 is rising “extraordinarily fast” as shown by the link below. And if you go by percentages for CO2 versus temperature since 1750, there is no comparison. CO2 went up from 280 ppm to 400 ppm or an increase of 43%. Meanwhile, temperatures went up from about 287.0 K to 287.8 K or an increase of about 0.28%. You raise good points, and while no representation is perfect, in view of the percentages above, I do not agree that my way is “totally misleading”. As a direct result of Dr. Brown’s comments on my last post, I added the following comment to this post:
“The upward sloping CO2 line only shows that while CO2 has been going up over the last 17 years, the temperatures have been flat for varying periods on various data sets.”
NYE: So here’s the point, is it’s rising extraordinarily fast. That’s the difference between the bad old days and now is it’s —
MORANO: Carbon dioxide —
NYE: It’s much faster than ever in history. And so —
Read more: http://newsbusters.org/blogs/noel-sheppard/2012/12/04/climate-realist-marc-morano-debates-bill-nye-science-guy-global-warmi#ixzz2jgCV7ZVs

richardscourtney
November 4, 2013 6:15 am

Nick Stokes:
Many thanks indeed for your post at November 4, 2013 at 5:29 am in response to my request for clarification. That does address what I was trying to determine. Thankyou.
I write to feed back what I understand you to be saying because, as I have repeatedly said, originally in my first post in this thread, at November 4, 2013 at 2:33 am,

I would welcome knowledge of how regression analyses of different time series data sets with different lengths can provide confidence limits of their trends which all radiate from the same single point on a graph.

As I understand it the problem is an error of positioning provided by your gadget which Werner carried over in his plot of the 95% confidence limits. I explain this understanding as follows.
In my first post in this thread, at November 4, 2013 at 2:33 am, I also wrote

The confidence limits of the trend line are the range of values within which the slope of the trend is calculated to exist, and they are error bars which have the form of straight lines which intersect with the centre of the trend line. The trend line can be rotated around the central point of the trend line to any value within those limits, and they show the range within which the slope of the trend line can probably be found; e.g. if they are 95% confidence limits then there is a 20:1 probability that the slope of the trend line is within those limits.

But your gadget – in effect – provides those limits from one end of the assessed trend line. This is an error because, although the numerical value of the confidence limits is the same, it completely alters the possible range of trend lines which can be plotted on a graph.
Importantly, one end of the considered period is assumed to be fixed and the possible trend lines rotate around that point. But the centre of the considered period should be fixed on a graph. Fixing the trend at one end of the period reduces the estimated error to zero at that end and doubles the estimated error at the other end as plotted on the graph.
In the case of the above graph one end point is assumed to be the same for all the analysed time series. And that increases the distortion as presented in the graph. It also has a serious effect on the calculation of the confidence limits of the regression.
As I also said in my first post (with corrigendum)

The confidence limits of the regression are the range of values within which the data and the slope of its linear trend are calculated to exist. They have the form of two curves (one on each side of the trend line) and they are each closest to the trend line at the centre of the trend line. They are error bars which combine the confidence limits of the data set and the trend line; e.g. if they are 95% confidence limits then there is a 20:1 probability that the slope of the trend line is within those limits, and there is a one in twenty chance all data points will fall within those limits for any trend line which is within those limits.

Please explain and refute any misunderstanding which I have stated in this post.
Richard

November 4, 2013 6:15 am

To see why linear methods underestimate natural variability, consider for a moment the annual cycle of temperatures at some location. plot this on a piece of paper, you get the familiar sin/cos curve from high school math class.
now consider that “the pause” is summertime temperatures. fit the linear approximation and you will get the flat line in the opening graph. Assume that the variability is due to noise, and you will get the “V” shaped lines opening up to the right. This is the 95% projection of where your future temperatures must be.
But what you have is nonsense. Look ahead 6 months and the wintertime temperatures will be lower than the lower bound for your 95% projection, and the upper bound for the 95% projection is so high as to be ridiculous.
The problem is the underlying assumption. the assumption that your data (climate) is linear with noise (weather) is wrong. weather is the result of daily and annual cycles in nature. climate is a similar process, with time scales measured in hundreds, thousands and millions of years.
Our mistake is in seeing climate as the average of weather. It isn’t. We measure climate as the average of weather, so our minds create the illusion that climate is the average of weather, but nature isn’t controlled by our measuring process.
Weather is the chaotic response of nature to short term cycles. Climate is the chaotic response of nature to longer term cycles. Both are inherently unpredictable given our current level of understanding, except to the degree that cyclical systems tend to repeat.
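A toy version of this seasonal analogy, with a synthetic annual cycle rather than real station data, shows the failure mode: fit a straight line to the rising quarter of the year and extrapolate it to winter.

import numpy as np

days = np.arange(0, 91)                                    # "spring": the rising quarter of the year
temps = 15.0 + 10.0 * np.sin(2 * np.pi * days / 365.0)     # idealised annual cycle, no noise

slope, intercept = np.polyfit(days, temps, 1)              # linear "trend" fitted to spring only

winter_day = 273
linear_forecast = intercept + slope * winter_day           # keeps climbing, to roughly 48 here
actual_winter = 15.0 + 10.0 * np.sin(2 * np.pi * winter_day / 365.0)   # the cycle is near its minimum, about 5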

richardscourtney
November 4, 2013 6:25 am

ferd berple:
You make a very good point in your post at November 4, 2013 at 5:39 am. This link jumps to it to help people who may have missed it
http://wattsupwiththat.com/2013/11/03/statistical-significances-how-long-is-the-pause-now-includes-september-data/#comment-1465650
The problem is compounded by so-called ‘climate scientists’ having modern computer power and computational packages available to them. They use – as you say – incorrect models which they ‘pull off the shelf’ and amend to suit, but have no real idea of what the calculation does.
Richard

November 4, 2013 6:38 am

imagine for a moment that you were an extremely long lived creature, that years passed as seconds. each year of your life would be about 32 million years long, and your 80 year lifespan would cover about 2.5 billion years.
daily weather would be imperceptible. annual weather would be a 1 second pulse. 100 thousand year ice ages would last slightly more than 1 day. These 100 thousand year cycles would show a warming and cooling cycle very much like we experience day to day. Over time, some “days” would be warmer and some would be cooler – very much like our daily weather. As time scales increase we see hot house earth – summer and we see ice house earth – winter. We see some “months” that are wet and some that are dry – long term climate change.
But in all of this, there is nothing to suggest that climate is the average of weather. Rather, that weather and climate are the same process, differing only in time scales. We create the illusion that climate is the average of weather over time as a result of our measuring process. But this is not the reality.

Gary Pearse
November 4, 2013 6:52 am

wbrozek says:
November 3, 2013 at 10:31 pm
“However I use GISS and September for GISS was at a record high for September that it shared with 2005, namely 0.74.”
Werner, how long do you think the data gatekeepers are going to tolerate this flat-lining? Compare Hadcrut 3 with hadcrut 4 for example. Trust me, there is going to be a hadcrut 5 under all this pressure, and GISS is fiddling with the whole data string on a continuing basis. I suppose the satellite data is open to fiddling, too, although it is in honest hands for the present. Notice that, with arctic ice extent increasing at the fastest rate in decades, there is a big pause in the data stream from US sites – they’ll be shifting the measure from 15% ice to 25% and giving us a big explanation as to why. Remember what happened when sea level measurements were beginning to flatten? They started fiddling that too with an adjustment for crustal rebound, and the actual sea level metric is no more. It’s kinda, sorta like ocean basin volume measurements. The only reason we have a “pause” (a loaded word indeed) is that it sneaked up on them and now the whole world has its eyes on it.

November 4, 2013 7:06 am

The mathematical definition of climate as the 30 year average of weather has created an illusion of climate predictability that does not match reality. Because we define climate as the average of weather, we apply all sorts of statistical properties to climate based on our understanding of averages.
For example, we assume that climate can never “naturally” get hotter than the hottest day in summer, or colder than the coldest day in winter. This is how averages work. So, when we see climate starting to get hotter or colder than past weather, we start to think something “unnatural” is happening. That humans must be changing the climate.
What we fail to consider is that the formal definition of climate as the 30 year average of weather is a human definition. It is not nature’s definition of climate and nature is not in the slightest bound by our definition.
Our formal definition of climate as the 30 year average of weather is fundamentally wrong. It has led climate science down a path of statistical prediction based on mathematical properties that do not match reality. It has caused us to significantly under-estimate natural variability and to seriously over-estimate our confidence levels in our predictions.
Climate is not the average of weather over time, and thus cannot be reliably modeled using weather averages.

November 4, 2013 7:33 am

Gary Pearse says:
November 4, 2013 at 6:52 am
Trust me, there is going to be a hadcrut 5
Thank you for your excellent points. But who needs Hadcrut5 when they can just fix Hadcrut4 which they have already done? See:
http://wattsupwiththat.com/2013/05/12/met-office-hadley-centre-and-climatic-research-unit-hadcrut4-and-crutem4-temperature-data-sets-adjustedcorrectedupdated-can-you-guess-the-impact/

Nick Stokes
November 4, 2013 7:41 am

ferd berple says: November 4, 2013 at 5:39 am
“No-one seriously believes that their data really is a linear trend with AR(1) fluctuations, … In my trend viewer, I used AR(1).
=============
that pretty much sums up climate science. use a model that everyone agrees doesn’t match the characteristics of the underlying data, and then place your faith in the results of the model.”

It isn’t climate science – it is standard statistics, used by all sorts of people. In fact, these are Box-Jenkins models. You know, the Box who famously said – “all models are wrong, but some are useful”. These are the models he was talking about.

November 4, 2013 7:43 am

The starting place for climate models is not with the annual temperature data. That is no different than trying to predict the stock market based on the Dow. You are trying to fit parameters (CO2, aerosols, sunspots, TSI, etc) to essentially random data. You end up with schizophrenic models. They deliver super rational answers that later prove to be crazy.
If you want to know about the market in the short term, you need to know what it looks like in the long term. Otherwise you will miss the pattern. The low long-term rise punctuated with short, steep drops. The novice investor, seeing the long slow rise, predicts that it is now safe to enter the market. Only to be burned by the short, steep drop. Having now lost their money, they leave the market to avoid further loss, while the market responds with a long, slow rise.
The starting place for climate models is the long term climate data, stretching back over thousands and millions of years. When your climate model can accurately capture the ice ages and interglacials, with the rapid pulses of warming and cooling interspersed, it may have some validity in telling us what to expect in the future.
To try and model climate based on thermometer data is no different than stock market chartists trying to predict tomorrow’s stock prices based on yesterday’s results. If you do everything perfectly, your predictions will be almost, but not quite, as good as those of a dart board.

November 4, 2013 7:53 am

Nick Stokes says:
November 4, 2013 at 7:41 am
Box who famously said – “all models are wrong, but some are useful”.
==============
“some are useful” means “most are useless”. The IPCC models clearly demonstrate this.

Nick Stokes
November 4, 2013 8:27 am

richardscourtney says: November 4, 2013 at 6:15 am
“As I understand it the problem is an error of positioning provided by your gadget which Werner carried over in his plot of the 95% confidence limits.”

My gadget does no positioning. It simply calculates a trend from a time series. This is a statistic with standard error and confidence intervals. This is absolutely standard. The Excel function LINEST() returns a regression β with its standard error. Excel does not allow for autocorrelation; I do. Here is Hu McCulloch explaining why. I use the Quenouille adjustment that he describes.
I think Werner’s choice to illustrate the effect of trend uncertainty by showing how the various trends would diverge from a single point is perfectly reasonable. But if you don’t like it, you should take it up with him.

richardscourtney
November 4, 2013 9:27 am

Nick Stokes:
Thankyou for your post at November 4, 2013 at 8:27 am which agrees my understanding stated in my post at November 4, 2013 at 6:15 am but with this exception: you say to me

I think Werner’s choice to illustrate the effect of trend uncertainty by showing how the various trends would diverge from a single point is perfectly reasonable. But if you don’t like it, you should take it up with him.

It is not “perfectly reasonable” for the reasons I explained in my post.
As I said, I asked for clarification from you (and you have kindly provided it) because Werner used your method so he would reasonably have responded that any clarification be obtained from you. The exception you have stated is Werner’s plot of the results he obtained from your method and you say you agree with that plot. As you say, in this circumstance it is reasonable for you to pass this issue to Werner.
I assume Werner is following this thread and, therefore, it seems reasonable for me to assume he has understood the matter because he has not queried it.
I again thank you for the clarifications you have provided.
Richard

Werner Brozek
November 4, 2013 10:15 am

richardscourtney says:
November 4, 2013 at 9:27 am
I assume Werner is following this thread and, therefore, it seems reasonable for me to assume he has understood the matter because he has not queried it.
Yes, I am following the thread although I will not claim to understand all of the technicalities. I agree with what you said earlier (richardscourtney, November 4, 2013 at 2:33 am): “They have the form of two curves (one on each side of the trend line) and they are each closest to the trend line at the centre of the trend line.”
Those two curves would make more sense than what I have shown. However I do not know how to make those curves with the information given.
I am not sure if it is too relevant here, but Ross McKitrick once said something to the effect that you can only tell if a change is significant, but you cannot tell if a straight line is significant. Perhaps I did the wrong thing with the five lines in the top graph.
For RSS, Nick’s version says the slope could be 0 from December 1992 at the 95% level, whereas SkS says it is August of 1989 for the two sigma which is 95.2% if I am not mistaken. But then there is the other issue that I am not sure about. Is the SkS value really 95.2% or is it really 97.6% since there is a 2.4% chance the number is lower and a 2.4% chance the number is higher than the SkS two sigma limits. Do I have that correct?
At the end of the comments on this post, I would just like to know if I should use SkS or Nick’s version for my statistical significances in my future posts. Thank you for your inputs here.

Joe
November 4, 2013 10:28 am

Nick Stokes says:
November 4, 2013 at 7:41 am
ferd berple says: November 4, 2013 at 5:39 am
that pretty much sums up climate science. use a model that everyone agrees doesn’t match the characteristics of the underlying data, and then place your faith in the results of the model.”
====================================
It isn’t climate science – it is standard statistics, used by all sorts of people. In fact, these are Box-Jenkins models. You know, the Box who famously said – “all models are wrong, but some are useful”. These are the models he was talking about.
————————————————————————————————————
Nick, it may be “standard statistics” but standard statistics are useless when misapplied to situations that don’t comply with the fundamental assumptions inherent in them.
The fundamental assumption when plotting linear trends is that the data is inherently linear (even if noisy), or at least has a linear component that we can detect. Fitting a linear trend to primarily cyclic data is absolutely meaningless unless you know the nature of the cyclic behaviour. It’s the sort of basic mistake we were warned against as 14 year olds in O level mathematics!
To suggest that linear trends are suitable, you’re effectively saying that we know all cycles in the climate system over all timescales. Because, if we don’t, then the apparent trend we measure may be part of an “up” or “down” slope of a cycle that we haven’t considered.
Do you really have enough hubris to claim that we know everything there is to know about every climate cycle?

richardscourtney
November 4, 2013 11:03 am

Werner Brozek:
Thankyou for your post at November 4, 2013 at 10:15 am.
Firstly, please be assured that my questions of Nick Stokes were a sincere attempt to understand what the graph is showing and was not an attempt to denigrate your and/or his work. Indeed, you may have noted that I said his method could be disputed but I was choosing not to distract the subject with that.
It is many decades since I was doing this stuff ‘by hand’. Forty years ago I could have reeled off a method to determine the confidence limits of the regression, but not now. Given time I could probably find it among all the stuff in the garage, but I am to leave for another of my times away in the morning so I don’t have the time to do that search at the moment: sorry. Anyway, there are probably software packages available because we are only talking about confidence limits of linear regression.
You say

I am not sure if it is too relevant here, but Ross McKitrick once said something to the effect that you can only tell if a change is significant, but you cannot tell if a straight line is significant. Perhaps I did the wrong thing with the five lines in the top graph.

You did the ‘right thing’ but the comment by Ross McKitrick is very relevant and it addresses an issue that ferd berple and joe have independently raised in the thread.
So-called ‘climate science’ uses linear trends so the use of linear trends is right for consistency with them. The curvature of a line is a model which one fits to the data. The analyst chooses his/her model of the data by fitting the data to a curve of his/her choice: a straight line is the simplest model but it may not be the correct model. However, one can determine if the data is deviating from any model.
2-sigma can be assumed to be 95.2% or for convenience 95%.
In my opinion, there are flaws with both the methods you mention (i.e. Stokes and SkS); for example, neither adequately compensates for autocorrelation. However, in my opinion both are useful for the purposes to which you are applying them so choose whichever you find easiest to use. If you have doubts ask Steve McIntyre who has all the detail you need at his fingertips.
I hope this rushed response contains something of use.
Richard

Reply to  richardscourtney
November 4, 2013 12:37 pm

My 56-year-old textbook says the confidence limits on a linear regression = plus or minus the t statistic times an estimate of the standard error of the fitted value of the dependent variable. The confidence limits are a function of x. When plotted with the regression line they will form a hyperbolic envelope centered on the overall mean of x.

November 4, 2013 11:57 am

richardscourtney says:
November 4, 2013 at 11:03 am
Indeed, you may have noted that I said his method could be disputed but I was choosing not to distract the subject with that.
Thank you for your comments! But as for distracting the subject, that is one of the main purposes of this particular thread. Ease of use is not an issue for me. I have seen criticisms of SkS, however if I had always used Nick’s method, I may have seen criticisms for that.
P.S. Nick or Richard, could either of you please elaborate on who does a better job on autocorrelation with respect to the following statement, since that is one of the criticisms that I saw of SkS? Thank you.
In my opinion, there are flaws with both the methods you mention (i.e. Stokes and SkS); for example, neither adequately compensates for autocorrelation.

Ian W
November 4, 2013 1:53 pm

So we have a global warming ‘pause’ caused by natural climate variations nullifying the otherwise overwhelming cause of global warming / climate change – the monotonically rising level of Carbon Dioxide in the atmosphere.
An amazing coincidence that these natural variations (stadium waves if you will) have matched almost perfectly the rapid warming from the rise in Carbon Dioxide in the atmosphere. It really is amazing when you think about it: the forcing from Carbon Dioxide increasing is logarithmic; yet the sum of the long term natural oscillations – all of them, of all lengths, from the Sun to the oceans – have matched the warming from Carbon Dioxide almost perfectly for 17 years!!
What a stunning coincidence!!
For those who believe in such coincidences – I have an ocean beach front property in Kansas you will be interested in. You will have to hurry; there is already a queue of politicians and climate ‘scientists’ all eager to buy ….

Nick Stokes
November 4, 2013 2:06 pm

Joe says:November 4, 2013 at 10:28 am
“The fundamental assumption when plotting linear trends is that the data is inherently linear (even if noisy), or at least has a linear component that we can detect. Fitting a linear trend to primarily cyclic data is absolutely meaningless unless you know the nature of the cyclic behaviour.”

That’s actually not true. A trend coefficient is just a weighted sum that describes the average rate of change over the period. You can apply it to cyclic data; it is the trend for the period stated, even though you know that it will change as time goes on. A regression will tell you that it warmed during spring; we know that isn’t climate change and won’t last, but it did indeed warm during spring.
But I’ll use that as a lead-in to Werner’s question, which I did try to address in my initial note that he put into the post. There are deviations from linearity which we know don’t really behave as any kind of ARIMA stochastic. I linked to this plot, which shows the ACF for HAD 4 for 1980-2013, and the two fitted ARIMA functions. The quoted uncertainty is a scaled area underneath. You can see that AR(1) (my version) is required to fit the first lag, and then seems to undershoot; ARMA(1,1) (SkS) has an extra parameter, and so tracks better for a year or so.
But all Box-Jenkins fitted models do taper away exponentially and are generally positive; the real ACF is different. You can see plots for other indices here; the pattern is the same. And this plot shows how the cyclic behaviour continues for years (in ACF lags).
That’s what Box means by “all models are wrong”. The cyclic component (ENSO, probably) is padding out the variation attributed to ARIMA stochastics. If you take it away (here), then there is less stochastic variation than either SkS or I would allow. But ENSO has its own variability, which would have to be added in.
As I said, it’s complicated. That’s why it is treated with diffidence by scientists and organisations like the Met Office. But the Daily Mail is more fearless.
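A rough sketch of the comparison described above, using synthetic detrended residuals since the HadCRUT4 series is not reproduced here: the sample autocorrelation function of data with a slow quasi-cycle oscillates and goes negative at long lags, while an AR(1) fit can only decay geometrically from the lag-1 value.

import numpy as np

rng = np.random.default_rng(1)
n = 400
enso_like = np.sin(2 * np.pi * np.arange(n) / 45.0)            # slow quasi-cycle standing in for ENSO
resid = 0.3 * enso_like + rng.normal(0.0, 0.1, n)              # synthetic detrended "residuals"

def acf(x, nlags):
    # Sample autocorrelation function (biased estimator), lags 0..nlags.
    x = x - x.mean()
    c0 = np.dot(x, x) / x.size
    return np.array([np.dot(x[:-k], x[k:]) / (x.size * c0) if k else 1.0
                     for k in range(nlags + 1)])

sample_acf = acf(resid, 60)                   # oscillates, going negative around half the cycle length
ar1_acf = sample_acf[1] ** np.arange(61)      # what an AR(1) fit imposes: positive geometric decay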

clipe
November 4, 2013 2:53 pm

Nick Stokes says:
November 3, 2013 at 4:30 pm

Joe says: November 3, 2013 at 4:15 pm
“I know I can’t really drive through rush hour at the speed limit, but, for convenience, I’m going to model my commute as if I can”
Oddly, I was going to use that analogy too. Not speed limit, but suppose you work out an average speed for your daily commute. That doesn’t mean that you expect to drive at that speed uniformly; in fact it doesn’t imply any speed model at all. But suppose you then plan to arrive at work reliably on time. You need a model of variability. It doesn’t need to perfectly predict how your journey will go, but it needs to give average variation.
Most people in that situation do do something like that. And it mostly works.

Would this model calculate time spent scraping ice/frost, clearing snow, or condensation both inside and outside my car, winter road conditions and such?
I always get up for work earlier than I need to according to my personal models precisely because my models would make me consistently late.

Joe
November 4, 2013 3:14 pm

Nick Stokes says:
November 4, 2013 at 2:06 pm
Joe says:November 4, 2013 at 10:28 am
“The fundamental assumption when plotting linear trends is that the data is inherently linear (even if noisy), or at least has a linear component that we can detect. Fitting a linear trend to primarily cyclic data is absolutely meaningless unless you know the nature of the cyclic behaviour.”
======================================================
That’s actually not true. A trend coefficient is just a weighted sum that describes the average rate of change over the period. You can apply it to cyclic data; it is the trend for the period stated, even though you know that it will change as time goes on. A regression will tell you that it warmed during spring; we know that isn’t climate change and won’t last, but it did indeed warm during spring.
—————————————————————————————————————
My apologies, Nick, I was a little imprecise. But I assumed that your statistical knowledge would allow you to understand the very basic principle that I was referring to.
Yes, a linear trend calculated from cyclic data (even if the cycles are unknown) can indeed have useful meaning within the data interval measured. But it has no meaning whatsoever as soon as you move outside that data interval.
So modelling data that you know has large (and unknown) cyclic components as a linear trend, then expecting that trend to give any meaningful information about future data values is a high school error.
Now, either I was mistaken in assuming your statistical understanding reaches that grade school level, or else you were aware and decided to intentionally misunderstand / misrepresent my point.
I won’t waste my breath asking which it was!

Nick Stokes
November 4, 2013 3:46 pm

Joe says: November 4, 2013 at 3:14 pm
“expecting that trend to give any meaningful information about future data values is a high school error”

Who did that?
It’s a two stage process. First let’s get the current trend right, including any estimate we can make of uncertainty. Then maybe think about the future. I’m at the first stage. Cycles, GHG’s etc are relevant to the second. But the IPCC, Met Office etc are not big on extrapolating trends.

Joe
November 4, 2013 4:23 pm

Nick Stokes says:
November 4, 2013 at 3:46 pm
Joe says: November 4, 2013 at 3:14 pm
“expecting that trend to give any meaningful information about future data values is a high school error”
=========================================
Who did that?
————————————————————————————————————————
Only the entire multi-billion £ alarmist industry. But that’s kind of beside the point.
A linear trend fitted to non-linear data can’t give any clues about the future (outside the measured data). So the only conceivable purpose of taking your “first stage” using linear trends (which we know are a false model of the system) is to give a misleading, even dishonest, impression of our understanding of the system.
I say dishonest – reluctantly – because we know the linear model has no future utility, yet we allow it to be presented to, and interpreted by, others as if it does. Honesty would require us all to be saying “actually, it doesn’t mean that at all” every single time a policy maker or activist tries to use that trend to suggest what the future holds.

herkimer
November 4, 2013 4:54 pm

Unfortunately, statistics like those presented above are not being shown to Europeans. In the Key Messages section of the EEA Global and European Temperature Assessment published in August 2013, the EEA is painting a different picture for Europe, namely:
• Between 1990 and 2010, the rate of change in global average temperature has been close to 0.2°C per decade.
• Global mean surface temperature rose rapidly from the 1970s, but has been relatively flat in the last decade mostly due to heat transfer between upper and deep ocean waters.

Chad Wozniak
November 5, 2013 11:33 am

It probably doesn’t matter that the 17-year tipping point has been reached – the alarmists are very creative at coming up with rationalizations and excuses for saying it doesn’t matter. I think it will take another long series of hard winters and poor summers – another 17 years maybe – before people will entirely stop listening to the alarmists, and even then the alarmists will still claim that it all has been caused by global warming.