On the Difference Between Lord Monckton's 18 Years for RSS and Dr. McKitrick's 26 Years (Now Includes October Data)

Guest post by Werner Brozek, edited by Just The Facts:

WoodForTrees.org – Paul Clark – Click the pic to view at source

To make this discussion easy, I will make the following assumptions. Dr. McKitrick’s data went to April 2014; however, I will assume his data continued to October 2014. I will assume the lower error bar is zero for exactly 26 years into the past. I will also assume the slope since September 1996 is exactly 0, as per the following from Nick Stokes’ site:

Temperature Anomaly trend:
Sep 1996 to Oct 2014
Rate: 0.000°C/Century;
CI from -1.106 to 1.106

First of all, I will discuss Lord Monckton’s slope of zero for a period slightly longer than 18 years, shown by the flat turquoise line above that starts in September 1996. Another way of saying this is that, once we include the error bars, there is a 50% chance that cooling occurred during this time and a 50% chance that warming occurred.

According to my interpretation of the numbers from Nick Stokes’ site, there is a 95% chance that the real slope for this period of over 18 years lies between -1.106 and +1.106 °C/century. The two sloping lines from September 1996 show this range. This implies there is a very small chance of cooling at more than 1.106 °C/century, and an equally small chance of warming at more than 1.106 °C/century.
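Trend figures like the ones above come from an ordinary least squares fit plus a confidence interval. As a rough sketch only (the function below is my own, uses a normal approximation, and assumes independent residuals, whereas Nick Stokes’ calculator corrects for autocorrelation and therefore reports wider intervals):

```python
import math

def ols_trend_ci(y):
    """OLS slope of a monthly series with a naive 95% CI.

    Sketch only: uses the normal approximation (1.96) and assumes
    independent residuals, unlike Nick Stokes' calculator, which
    allows for autocorrelation and so gives wider intervals.
    Returns (slope, lower, upper) in units per month."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((yi - (intercept + slope * i)) ** 2
                 for i, yi in enumerate(y))
    se = math.sqrt(ss_res / (n - 2) / sxx)
    return slope, slope - 1.96 * se, slope + 1.96 * se
```

Multiply a per-month slope by 1200 to get °C/century, the units quoted above.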

Before I discuss Dr. McKitrick’s 26 years, I would like to offer this quote from Peterson et al., 2009: State of the Climate in 2008, American Meteorological Society Bulletin.

“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

From the above, it appears that climate scientists do not attach a huge amount of importance to the time for a slope of zero, but rather to the time that the warming is not statistically significant at the 95% level.

What Dr. McKitrick has found is that for RSS the warming is not statistically significant at the 95% level for 26 years. So if WoodForTrees.org (WFT) gives a warming rate of X °C/year over that period, the error bars are also ±X °C/year.

According to WFT, there has been warming over the last 26 years at the rate of 0.0123944 °C/year. This means we can be 95% sure the real warming rate is 0.0123944 ± 0.0123944 °C/year. Adding and subtracting gives, at the 95% level, a range between 0.0247888 °C/year (0.025 °C/year to two significant digits) and zero. These two bounds are indicated on the graph above starting at November 1988. Since the lower bound is zero, and therefore not positive, it is reasonable to say the warming since November 1988 is not statistically significant, at least according to RSS.
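The arithmetic in this paragraph is simple enough to check directly. A minimal sketch (the variable names are mine):

```python
trend = 0.0123944        # C/year over 26 years of RSS, per WFT
half_width = 0.0123944   # error bar assumed equal to the trend itself

lower = trend - half_width  # 0.0: a zero trend cannot be ruled out
upper = trend + half_width  # 0.0247888, i.e. 0.025 to two significant digits

# Warming is statistically significant only if the whole interval
# lies above zero, which it does not here.
significant = lower > 0
```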

Analogous to the case with no warming, there is a small chance that the warming over the last 26 years is larger than 0.025 °C/year. However, there is an equally small chance that there has been cooling over the last 26 years, according to Dr. McKitrick’s calculations using the RSS data.

In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on some data sets. At the moment, only the satellite data have flat periods of longer than a year. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.org (WFT). All of the data on WFT is also available at the specific sources outlined below. We start with the present and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 × 10^-4 but −4 × 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest when we say the slope is flat from a certain month.
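The rule above amounts to a simple search: step back one month at a time and report the furthest-back start month from which the trend to the present is not positive. A sketch of that search (the helper names are mine, not WFT’s):

```python
def ols_slope(y):
    """Ordinary least squares slope of a series (per time step)."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    return sxy / sxx

def flat_since(anoms):
    """Index of the furthest-back month from which the slope to the
    end of the series is at least slightly negative (<= 0), following
    the convention above; None if no flat period of a year or more."""
    for start in range(len(anoms) - 11):   # require at least 12 months
        if ols_slope(anoms[start:]) <= 0:
            return start
    return None
```

`flat_since` returns 0 for a series that is flat throughout, and None for one that warms throughout.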

1. For GISS, the slope is not flat for any period that is worth mentioning.

2. For Hadcrut4, the slope is not flat for any period that is worth mentioning. Note that WFT has not updated Hadcrut4 since July and it is only Hadcrut4.2 that is shown.

3. For Hadsst3, the slope is not flat for any period that is worth mentioning.

4. For UAH, the slope is flat since January 2005 or 9 years, 10 months. (goes to October using version 5.5)

5. For RSS, the slope is flat since October 1, 1996 or 18 years, 1 month (goes to October 31).

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line at the top indicates that CO2 has steadily increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the left scale shows only the temperature anomaly.

The actual numbers are meaningless since the two slopes are essentially zero. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the two sets.

Section 2

For this analysis, data was retrieved from Nick Stokes’ Trendviewer, available on his website at http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 14 and almost 22 years according to Nick’s criteria. CI stands for the confidence limits at the 95% level.

Dr. Ross McKitrick has also commented on these matters and has slightly different numbers for the three data sets that he analyzed. I will also give his times.

The details for several sets are below.

For UAH: Since June 1996: CI from -0.037 to 2.244

(Dr. McKitrick says the warming is not significant for 16 years on UAH.)

For RSS: Since December 1992: CI from -0.018 to 1.774

(Dr. McKitrick says the warming is not significant for 26 years on RSS.)

For Hadcrut4.3: Since April 1997: CI from -0.010 to 1.154

(Dr. McKitrick said the warming was not significant for 19 years on Hadcrut4.2 going to April. Hadcrut4.3 would be slightly shorter; however, I do not know what difference it would make to the nearest year.)

For Hadsst3: Since December 1994: CI from -0.007 to 1.723

For GISS: Since February 2000: CI from -0.043 to 1.336

Note that all of the above times, regardless of the source, with the exception of GISS, are longer than the 15 years that NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.

Section 3

This section shows data about 2014 and other information in the form of a table. The table lists the five data sources along the top, and the source row is repeated at intervals so it remains visible. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the columns are the following rows:

1. 13ra: This is the final ranking for 2013 on each data set.

2. 13a: Here I give the average anomaly for 2013.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not yet include records set so far in 2014 such as Hadsst3 in June, etc.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0. Periods of under a year are not counted and are shown as “0”.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.

10. McK: These are Dr. Ross McKitrick’s number of years for three of the data sets.

11. Jan: This is the January 2014 anomaly for that particular data set.

12. Feb: This is the February 2014 anomaly for that particular data set, etc.

21. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.

22. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 50 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.

Source   UAH     RSS     Had4    Sst3    GISS
1.13ra   7th     10th    9th     6th     7th
2.13a    0.197   0.218   0.492   0.376   0.59
3.year   1998    1998    2010    1998    2010
4.ano    0.419   0.55    0.555   0.416   0.66
5.mon    Apr98   Apr98   Jan07   Jul98   Jan07
6.ano    0.662   0.857   0.835   0.526   0.92
7.y/m    9/10    18/1    0       0       0
8.sig    Jun96   Dec92   Apr97   Dec94   Feb00
9.sy/m   18/5    21/11   17/7    19/11   14/9
10.McK   16      26      19
Source   UAH     RSS     Had4    Sst3    GISS
11.Jan   0.236   0.261   0.508   0.342   0.68
12.Feb   0.127   0.161   0.305   0.314   0.43
13.Mar   0.137   0.213   0.548   0.347   0.70
14.Apr   0.184   0.251   0.658   0.478   0.71
15.May   0.275   0.286   0.596   0.477   0.78
16.Jun   0.279   0.346   0.620   0.563   0.61
17.Jul   0.221   0.351   0.543   0.551   0.52
18.Aug   0.117   0.193   0.669   0.644   0.69
19.Sep   0.186   0.206   0.593   0.574   0.76
20.Oct   0.243   0.272   0.613   0.529   0.76
Source   UAH     RSS     Had4    Sst3    GISS
21.ave   0.201   0.254   0.565   0.482   0.66
22.rnk   7th     7th     1st     1st     1st
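Row 21 can be verified directly from rows 11 to 20. For example, averaging the ten RSS monthly anomalies reproduces the 0.254 in row 21:

```python
# RSS monthly anomalies for Jan-Oct 2014, from rows 11-20 of the table
rss_2014 = [0.261, 0.161, 0.213, 0.251, 0.286,
            0.346, 0.351, 0.193, 0.206, 0.272]

average = sum(rss_2014) / len(rss_2014)
print(round(average, 3))   # 0.254, matching row 21 for RSS
```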

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 5.5 was used since that is what WFT uses.

http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.3.0.0.monthly_ns_avg.txt

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2014 in the form of a graph, see the WFT graph below. Note that Hadcrut4 is the old version that has been discontinued. WFT does not show Hadcrut4.3 yet.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.

Appendix

In this part, we are summarizing data for each set separately.

RSS

The slope is flat since October 1, 1996 or 18 years, 1 month. (goes to October 31)

For RSS: There is no statistically significant warming since December 1992: CI from -0.018 to 1.774.

The RSS average anomaly so far for 2014 is 0.254. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.

UAH

The slope is flat since January 2005 or 9 years, 10 months. (goes to October using version 5.5 according to WFT)

For UAH: There is no statistically significant warming since June 1996: CI from -0.037 to 2.244. (This is using version 5.6 according to Nick’s program.)

The UAH average anomaly so far for 2014 is 0.201. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.

Hadcrut4.3

The slope is not flat for any period that is worth mentioning.

For Hadcrut4: There is no statistically significant warming since April 1997: CI from -0.010 to 1.154.

The Hadcrut4 average anomaly so far for 2014 is 0.565. This would rank it as 1st place if it stayed this way. 2010 was the warmest at 0.555. The highest ever monthly anomaly was in January of 2007 when it reached 0.835. The anomaly in 2013 was 0.492 and it is ranked 9th.

HADSST3

For HADSST3, the slope is not flat for any period that is worth mentioning.

For HADSST3: There is no statistically significant warming since December 1994: CI from -0.007 to 1.723.

The HADSST3 average anomaly so far for 2014 is 0.482. A new record is guaranteed. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526; this is also prior to 2014. The anomaly in 2013 was 0.376 and it is ranked 6th.

GISS

The slope is not flat for any period that is worth mentioning.

For GISS: There is no statistically significant warming since February 2000: CI from -0.043 to 1.336.

The GISS average anomaly so far for 2014 is 0.66(4). This would rank it as first place if it stayed this way. 2010 was the warmest previously at 0.66(1). The highest ever monthly anomaly was in January of 2007 when it reached 0.92. The anomaly in 2013 was 0.59 and it is ranked 7th.

Conclusion

There are different ways of deciding whether or not we are in a pause. We could say that if we have a flat slope for X number of years, we are in a pause. Or we could say that if the warming is not statistically significant for over 15 years, we are in a pause. Or we could say that as long as the satellite data sets do not break the 1998 record, we are still in a pause.

In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over. What do you think?


198 thoughts on “On the Difference Between Lord Monckton's 18 Years for RSS and Dr. McKitrick's 26 Years (Now Includes October Data)”

  1. I like using the “Big 4” average. I think all four are higher trend than the actual surface trend (2 being satellites and doing LT right, and 2 surface metrics doing it just plain wrong). It’s what we have.
    That takes us back to 2001. This also has the salubrious effect of removing the 1998 El Nino and the severe 1999 – 2000 La Nina that followed. Anywhere between 1998 – 2000 risks being a cherrypick. So one must be careful.

    • I assume you are talking about WTI. It was discontinued in May when we had the last Hadcrut3 reading, so it would not be accurate at 2001 if we had Hadcrut4 and if it were updated to October.

      • Actually that is not much help either! It has Hadcrut4.2 which stopped in July as my last graphic shows. And as we discussed in the last two posts, Hadcrut4.3 is warmer in recent years than Hadcrut4.2. As well the last three months have been hotter than average.
        In addition, WFT uses version 5.5 for UAH which is cooler than version 5.6.

    • Not sure it’s a good idea to mix satellite data with site data, the methodology being so different. At some point, if satellite data keeps telling a different story, we will have to decide which set better describes reality (instead of hiding its impact in an average).

  2. Contrary to the article writer above, I find it hard to find any significant warming whatsoever using the models and methods employed. The models are worse than anything I have seen during the 43 years since my system programmer exam; they lack the most needed parameters. (I myself used 43 parameters back in 1993 to establish sea water levels from the peak of the Stone Age up to 1000 AD.) Checking the so-called “models” shows they lack all the basic Theories of Science one ought to expect anyone calling him/herself a scientist to have learnt!
    Thus the only valid comment is the one we used back in the 70’s: bad input -> bad output

    • On rows 9 and 10 of the table, I give the times from Nick Stokes and Dr. McKitrick. Both agree that the times are longer than 15 years for the three sets both analyzed.

      • Fallacia: argumentum ab auctoritate – a fallacy in argumentation as well as a false assumption. Consensus is a political term with no connection whatsoever to Theories of Science.
        Science axioms:
        In Theories of Science it’s never possible to prove a thesis right, only to falsify a thesis.

    • “The Duhem–Quine thesis (also called the Duhem–Quine problem, after Pierre Duhem and Willard Van Orman Quine) is that it is impossible to test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses). The hypothesis in question is by itself incapable of making predictions. Instead, deriving predictions from the hypothesis typically requires background assumptions that several other hypotheses are correct; for example, that an experiment worked as designed or that previous scientific knowledge was accurate. For instance, as evidence against the idea that the Earth is in motion, some people noted that birds did not get thrown off into the sky whenever they let go of a tree branch. Later theories of physics and astronomy could account for this fact while also positing a moving Earth.
      Although a bundle of hypotheses (i.e. a hypothesis and its background assumptions) as a whole can be tested against the empirical world and be falsified if it fails the test, the Duhem–Quine thesis says it is impossible to isolate a single hypothesis in the bundle. One solution to the dilemma thus facing scientists is that when we have rational reasons to accept the background assumptions as true (e.g. scientific theories via evidence) we will have rational—albeit nonconclusive—reasons for thinking that the theory tested is probably wrong if the empirical test fails.”

      • I have problems with that – that’s as far from Theories of Science as can be.
        Empirical data are valid; models aren’t, no matter who presents them.
        That problem also exists in the 2nd String Theory in one single point (see my blog article regarding string theory on Norah4you). It doesn’t matter if all but one part shows “green light” – as long as there is one single part of a thesis that doesn’t, the thesis falls to pieces.

      • It’s basically the same idea as Occam’s razor – it would be rational to sacrifice the single hypothesis whose repudiation will require us to make the lowest number of changes to our overall world view. However, people become attached to different aspects of their world views, and thus the arguing begins.

      • The short form in which this is commonly expressed in considering a laboratory result is “all other things being equal.” You must simplify or complicate an explanation as necessary to eliminate false-to-fact results. The old physics quiz joke that begins “assume a frictionless, spherical …” also addresses this problem in a tongue-in-cheek manner.
        Assuming all other things are equal works well in laboratories, but is far less satisfactory in accounting for field results. The willful disregard of the truism that reality is always more complex than a lab environment can be regarded as a character flaw. It is a flaw very clearly expressed in the Climategate emails. A good example is Trenberth’s complaint that the data must be wrong. Elegant theory v. messy reality, he chose theory and blamed reality.

  3. How about the “fact” that the 1930s were warmer than today before the adjustments and homogenizations?
    I think that Willis’s Buoy microcosm illustration shows how the temperature records have been altered to show global warming. Don’t know how they did it or why they did it, but I think the temperatures have been adjusted. I like this GISS graph – very alarming???:
    http://suyts.files.wordpress.com/2013/02/image_thumb265.png?w=636&h=294

      • What always amazes me is that the historical data gets adjusted and manipulated and shows a trend, but the readily available recent data that anyone can look at and interpret shows little or no trend. Funny how observation controls the environment; maybe the environment is a quantum event.

    • PLEA TO OTHERS HERE: Is it possible to get this graph in degrees C? Can anyone run it and present it as a pic? It would be a very powerful tool to use in countering ‘global warming’.

    • Even if the 1930s were not warmer than today it makes little difference to people trying to survive life’s every day real problems primarily because they don’t fear minor warming.
      My guess is that the change in temperature the alarmists project is so insignificant anyway, and so well within legitimate error bars and peak-to-peak noise, that there is no rational evidence which demands action.
      You’d have to be a certified loon to allow so much harm because the temperature might go up 1°- 3°C at some magical time in the future.
      Today, we have the resources to fix nearly every problem in the world and what do we do with our well earned wealth?
      The problem with pushing CAGW is that nobody cares enough to do it because:
      a. They don’t believe it themselves as demonstrated by their activities.
      b. Everybody knows alarmists have failed to predict anything correctly.
      c. It’s too late to save us.
      d. The solutions of buying more toys which are not fit for purpose, Cap&Trade and raising taxes will do virtually nothing to affect earth’s temperature.
      I’m not convinced yet either.
      IPCC, please send more fear mongering propaganda.

  4. “In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over.” Did you mean ‘more’ (not “less”) than 15 years in the next to the last sentence?

      • Then you could have said: A temperature increase so steep that it becomes significant in 15 years or less. GUIDE your readers, don’t force them to parse your arguments backwards.

      • Saying “less than 15 years” is ambiguous. It left me scratching my head. One is less than 15. So does that mean that if UAH and RSS show statistically significant warming for only 1 year (perhaps during an El Nino), you would declare the pause over? Perhaps you should specify a range, such as 10-15 years.

      • Excellent point! I should have said more than 10 but less than 15. Here is what Nick’s site said for a certain period for RSS:
        Temperature Anomaly trend
        Nov 1996 to Aug 1998 
        Rate: 23.474°C/Century;
        CI from 18.299 to 28.650;

  5. Twenty six years takes us back to the conspiracy between Jim “Venus Express” Hansen & CO Senator Tim “Populution” Wirth to turn off the air conditioning in the Senate hearing room.
    Poetic justice that the warming of c. 1977-88 ended just as the co-conspirators pronounced man-made global warming a problem of cataclysmic, biblical proportions.
    God obviously has a sense of humor.

    • I seem to recall that every global climate conference has experienced unseasonably cold weather …. so you may be right and ‘somebody up there’ is trying to tell us something …….

  6. RSS Update:
    The November anomaly came out at 0.246, a drop of 0.028 from October. The average is now 0.253 so it still ranks in 7th place after 11 months.
    The flat part increases to 18 years and 2 months. It missed being 18 years and 3 months by the smallest of margins with a positive slope of 6.30663e-07 per year.

  7. Oh can I get back to you in say, oh, maybe 2 years with an answer?
    It sounds as though there has been no warming for a couple of years, meanwhile CO2 has increased. Sounds like time to shut down coal plants by implementing new regulations to protect the ozone.

  8. Quoted in the para immediately above “Section 1”:
    “At the moment, only the satellite data have flat periods of longer than a year.”
    Yet I recall seeing a post some months ago giving the details of results from the NOAA’s new set of land stations in the USA – all specially sited to ensure that they were properly away from ‘odd’ influences and not likely to be moved in the next umpteen years. This showed not only no warming, but a very slight decrease over the 10 years since the full network commenced operating.
    Contradiction?

    • I have never quoted NOAA since WFT does not cover it. However a few months ago, all global data sets that I cover had flat periods of several years. But that has changed over the last few months. However it may be different for the United States alone.

  9. Here in Australia I’m not looking forward to their ABC and the lazy MSM screaming 2014 hottest year ever etc. Tim Flannery will be interviewed for sure.
    Not according to the satellites though which are the only trustworthy source in my opinion.
    Great post Werner, thank you.

    • Sats are only trustworthy if measuring sea level rise. (Because alarmists always quote sat sea level rise, never tide gauges when talking about trends.) Otherwise they should always be ignored. ;-P

  10. Werner, please rephrase this.
    “In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over.”

    • If Nick Stokes and Dr. McKitrick say the warming is statistically significant for as short a period of only 10 years on RSS rather than 22 and 26 years, respectively, then I will agree the pause is over.

  11. “In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over. What do you think?”
    You can’t really do that, because you can’t get a combined variance that isn’t extremely large. They aren’t the same population. To give just one idea of the problem, as you say, RSS shows zero trend since Sep 1996, with range 1.106C/Cen. But UAH5.6 shows 1.042, with lower bound -0.131. So each says the other is almost outside its CI range.
    On Dr McKitrick’s numbers, what he actually said was:
    “I propose a robust definition for the length of the pause in the warming trend over the closing subsample of surface and lower tropospheric data sets. The length term J_MAX is defined as the maximum duration J for which a valid (HAC-robust) trend confidence interval contains zero for every subsample beginning at J and ending at T – m where m is the shortest duration of interest.”
    You need to figure out what that means before comparing it with conventional significance. HAC-robust is the key. Statistically insignificant as normally defined says the trend could have arisen by chance. HAC-robust means that it could have arisen allowing for heteroskedasticity or some long-term persistence. The more alternative ways you postulate, the longer you can stretch out lack of significance. But I don’t think you’ll see Ross’ definition taken up any time soon.
    On RSS, here is a plot that shows trends going back to past times from October 2014. A zero trend pause starts when the curve first crosses the x-axis. You can see how RSS is an outlier. It is the dark blue curve at the bottom. UAH is the light blue at the top.
    http://www.moyhu.org.s3.amazonaws.com/pics/Oct1118.png
    BTW, you said UAH had zero trend since 2005. My check said 2008.
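    The distinction drawn here between plain OLS significance and autocorrelation-robust intervals can be illustrated with the simplest common correction, the AR(1) "effective sample size" adjustment. This is not Dr. McKitrick's HAC estimator, just a sketch of why allowing for persistence widens a confidence interval; the function names are mine:

```python
import math

def lag1_autocorr(resid):
    """Lag-1 autocorrelation of a residual series."""
    n = len(resid)
    mean = sum(resid) / n
    num = sum((resid[i] - mean) * (resid[i + 1] - mean)
              for i in range(n - 1))
    den = sum((r - mean) ** 2 for r in resid)
    return num / den

def ci_inflation(resid):
    """Factor by which an AR(1) correction widens a naive trend CI,
    via the effective sample size n_eff = n * (1 - r) / (1 + r):
    the standard error grows by sqrt(n / n_eff)."""
    r = lag1_autocorr(resid)
    return math.sqrt((1 + r) / (1 - r))
```

    Persistent residuals (r near 1) can inflate the interval severalfold, which is one reason significance-based pause lengths come out so much longer than zero-slope lengths.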

    • Thank you very much for that, Nick! I am merely the driver of the car here. You are the mechanic.
      However it would be nice if RSS and UAH were closer together.
      As for the times, you are right about the lower time for UAH, but that is for version 5.6. I am using version 5.5 since that is what WFT gives.

    • Apparently you have never studied statistics, or if you have, you weren’t paying attention.
      You can’t have an outlier when there are only two series, ie RSS & UAH.
      I mean, sheesh. To the Nth.

    • Remedial statistics for Nick Stokes I fear. Everything, significant or not, arises
      ‘by chance’ in statistical hypothesis testing.
      More seriously, the idea that Prof McKitrick is using an unconventional definition just because it makes it harder to reject the null hypothesis of stationarity is wrong. It amounts to saying one should prefer less efficient/ precise/robust tests. See Mosher’s erudite (well I didn’t know the philosophy part) comment above. The test is for a stationary time series, not the auxiliary (ancillary?) hypotheses of homoskedasticity or AR(1). It is clearly not right to reject stationarity because a time series fails the test through failing one of these incidental hypotheses incorporated into the null distribution in order to get a test statistic. The HAC test makes rejection for spurious reasons less likely, however galling that may be to climate folk.

  12. “What do you think?”
    I think future generations of scientists will be laughing at all this navel gazing . If one looks at the Holocene temperature record the top of each semi-periodic variation lasts for a hundred years at least, why should our optimum be different?

  13. Permit me to ask a simple qualifying question.
    At what point in history were we able to start measuring global temperature to 100ths of a degree?

    • Good point! Think of it as comparing two teams. One scores 16 goals in 20 games and the other scores 24 goals in 20 games. The average for the first team is 0.8 goals per game and the other is 1.2 goals per game. No team can score a tenth of a goal, but numbers can be compared to see which is the highest scoring team.

      • No, they can’t. With only two teams, you have the higher, not the highest.
        The more I read of you Team advocates, the more I laugh.

      • So if you have three data sets no two the same, it would seem that one of the sets is higher than the other two.
        But you have no basis for declaring that set the highest.
        No matter how many sets you have, you still can’t say that there is no other set that is the highest set.
        Same principle as inability to prove a theory correct. Only need one repeatable contrary result to disprove.
        But you can say that one set is the highest of the currently known sets. That is true even if there are currently only two known sets.
        I’m tempted to make some learned comment in Maori; but it seems that English is the preferred language of this blog.

      • Werner, my question was a leading one. Most of the temperature references in the data above are actually stated in 1,000ths of a degree. I just don’t see how we can convert low-resolution historic temperature readings into that fine a number without major intervention.
        Do any global reporting stations actually report temperature to the 1,000th of a degree?
        Example question: how much warmer was 1998 than 1934, before adjustment, in 1,000ths of a degree?

      • I use the numbers they provide. For all recent numbers, we could probably say +/- 0.1. And for numbers a hundred years ago, we could probably say at least +/- 0.5. On top of that, there were few thermometers around the globe a hundred years ago. But some people would like to turn the world upside down based on these numbers.

      • 0.1 is not 0.001, which is what is displayed in the post’s temperature references. Perhaps I am off base in not understanding how we can represent thousandths or hundredths of a degree without actually measuring to that precision. This year, as an example, may be the warmest ever by 0.02 of a degree, and yet we don’t actually measure to that level.
        Thanks for your reply and continued good work. You too JTF!

      • This year, as an example, may be the warmest ever by .02 of a degree, and yet we don’t actually measure to that level.
        Suppose you measure the temperature of 50 cities to the nearest degree and find that all are 24 C to the nearest whole number. Then a year later, 49 cities are at 24 C but one is 25 C. Then the average in the first year was 24, but 24.02 in the second year. Of course it should be rounded to 24, but they do not do this. So just take their extra decimals with a chunk of salt.
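Werner’s 50-city example can be sketched in a few lines of Python; the city readings are the hypothetical whole-degree numbers above, not real data:

```python
# Hypothetical data from the comment above: 50 cities measured only to
# the nearest whole degree can still yield an average quoted to two
# decimal places.
year1 = [24] * 50               # all 50 cities read 24 C
year2 = [24] * 49 + [25]        # a year later, one city reads 25 C

avg1 = sum(year1) / len(year1)  # 24.0
avg2 = sum(year2) / len(year2)  # 24.02

print(avg1, avg2)               # 24.0 24.02
print(round(avg2 - avg1, 2))    # 0.02 -- finer than any single reading
```

As the comment says, the honest move is to round the result back to the measurement precision; the extra decimals are an artifact of averaging, not added accuracy.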

      • Let’s say one team scores 1 goal in a thousand games and the other team scores 3 goals in a thousand games. We can say the second team is higher scoring at 0.003 goals per game.
        But… who cares? The difference is moot; neither team is winning any games due to its offense.
        So the question becomes: when is a statistical difference significant? In the case of the fanciful and fleeting thing called global temperature, even though fantastical claims are made, nobody knows.
        In games, the scores are not recorded differently, there are no competing methods of determining what the final score is, and the games are not recorded in thousandths of a goal to begin with. Unfortunately for climate science, none of this holds. Too many data streams of temperature disagree and change over time, there is no agreement as to the best method of recording temperature, and “since we do not know what the temperatures/scores are, let’s just average to get a number” is not something sports have to deal with.
        In conclusion, climate science has two flaws in this area: the inability to determine significance (doomsday speculation is not helpful here, and neither is presuming that any and every climate anomaly must be due to warming) and the inability to accurately determine global temperature.
        So even before mentioning the poorly performing models, CO2 and man’s 3.5% contribution to the 400 parts per million in the atmosphere, and hypotheses that can never be disproven, one wonders how climate science has gained any credibility at all.

      • man’s 3.5% contribution to the 400 parts per million
        I do not want to debate this here since it is off topic. While I agree with the 3.5% per year, I accept that the cumulative effect is about 40% from 280 ppm in 1750 to 400 ppm now. I know others disagree. But I thought I should mention this for new readers who are not aware of the controversy here.

      • Thanks Werner, that is what I was after. Now we can understand how we arrive at such temperature numbers: through averages of averages and so on, down to thousandths of a degree. 🙂
        Regards, Ed.

    • HadCRUT is reported to one thousandth of a degree, yet most of the data was recorded to the nearest whole degree!

  14. I disagree with your interpretation of the State of the Climate Report.
    You quoted it as : ”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
    And then you gave the above this interpretation: “From the above, it appears that climate scientists do not attach a huge amount of importance to the time for a slope of zero, but rather to the time that the warming is not statistically significant at the 95% level.”
    My interpretation is that they ran a whole bunch of simulations and fewer than 5% of the 15-year periods had a zero slope, where “zero slope” is the best estimate of the trend, without any confidence interval.
    Lord Monckton tested for that condition and found that a zero slope has existed for greater than 15 years, therefore, per the State of the Climate Report there is a “discrepancy with the expected present-day warming rate”.

    • As I said in the update, it is now 18 years and 2 months for RSS. So however you interpret that statement, the models are in trouble and there are over 50 excuses as to why the models are not correct.
      However if your interpretation is correct, why would Nick Stokes and Dr. McKitrick go to the effort they do to come up with their “95%” numbers?

      • Stokes and McKitrick weren’t testing the statement made in the State of the Climate report, so they can use any method they want, on any dataset they want.
        If you want to test the statement of the State of the Climate 2008, then you should test for a 0 degree/decade OLS trend greater than 15 years, using the appropriate time series dataset. RSS and UAH are lower troposphere, not surface temperature.
        I’m not sure what relevance the various excuses have or why you mention them. They don’t have any effect on whether or not there is a discrepancy between models and observations, per the criteria stated by the State of the Climate 2008.

      • With regards to your first sentence, I will let Nick Stokes handle that.
        Due to the adiabatic lapse rate, there really should not be a difference between the warming of the surface and the lower troposphere. And if there is a difference, we need to find out why.
        As for my mentioning the excuses, some people will deny we are even in a pause, so that is my reason for mentioning it.

      • “charliexyz December 2, 2014 at 8:32 pm
        My interpretation is that they they ran a whole bunch of simulations and less than 5% of the 15 year periods had a zero slope, where “zero slope” is the best estimate of the trend, without any confidence interval.”

        I think that is the right interpretation. But there are points to add:
        1. The statement is that any one 15 yr period has less than a 5% chance… Of course, if you keep looking at successive intervals, the chance of getting one such interval goes up. So over, say, 50 years, the chance of seeing such a 15 yr interval is a lot higher.
        2. There is context:
        “Ten of these simulations have a steady long-term rate of warming between 0.15° and 0.25ºC decade–1, close to the expected rate of 0.2ºC decade–1. ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.
        The 10 model simulations (a total of 700 years of simulation) possess 17 nonoverlapping decades with trends in ENSO-adjusted global mean temperature within the uncertainty range of the observed 1999–2008 trend (-0.05° to 0.05°C decade–1).”

        He’s talking about ENSO-adjusted warming. Not about the chance of getting a run of La Ninas.
        You’re right about us not testing that statement. I don’t think absence of significance is a useful criterion for anything much, but it is a test for, given positive observed trend, whether a zero value is unlikely in the inferred distribution. Peterson is talking about observed trends in model calcs.
        And yes, he was specifically talking about surface trends, not troposphere.
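Nick Stokes’ point 1 can be illustrated numerically, under the deliberately crude assumption that successive 15-year windows are independent trials (overlapping windows are correlated, so this is only an upper-bound sketch of how the odds grow):

```python
# If any single 15-yr window has at most a 5% chance of showing a zero
# trend, the chance of finding at least one such window among n candidate
# windows grows quickly -- ASSUMING independent windows, which overlapping
# windows are not. An illustration only, not a proper significance test.

def chance_of_at_least_one(p_single, n_windows):
    """P(at least one hit) = 1 - P(no hits), assuming independence."""
    return 1.0 - (1.0 - p_single) ** n_windows

# A 50-year record contains 36 possible 15-yr start years.
print(round(chance_of_at_least_one(0.05, 1), 3))   # 0.05
print(round(chance_of_at_least_one(0.05, 36), 2))  # 0.84
```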

      • It might be better to use the Santer paper as that is specifically tied to lower tropospheric temperatures.
        The importance of timescale
        B. D. Santer et al (2011)
        “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global‐mean tropospheric temperature.”

      • Thank you! Here is what Richard Courtney had to say about this:
        The Santer statement says that a period of at least 17 years is needed to see an anthropogenic effect. It is a political statement because “at least 17 years” could be any length of time longer than 17 years. It is not a scientific statement because it is not falsifiable.
        However, if the Santer statement is claimed to be a scientific statement then any period longer than 17 years would indicate an anthropogenic effect. So, a 17-year period of no discernible global warming would indicate no anthropogenic global warming.
        In my opinion, Santer made a political statement so it should be answered with a political response: i.e. it should be insisted that he said 17 years of no global warming means no anthropogenic global warming because any anthropogenic effect would have been observed.
        Santer made his petard and he should be hoisted on it.
        Richard

  15. I question the notion of a “pause.” With data sets that begin anytime after 1979, a dominant feature is the 1980s and 1990s warm regime. What is seen as a “pause” in the 2000s may be the shoulder of an oscillation before a protracted cooling trend, as has been predicted by several researchers including the Russian Academy of Sciences.
    Pauses this long are unusual in historical climate trends. Data, both proxy and observed, going back farther in time suggest the present may be a reversal of trend rather than a pause.

  16. While the alarmists ought to have their noses rubbed in the fact that their precious models have been falsified by their own criteria, I wonder if we’re overdoing it. It seems to me that natural variability is much higher than originally thought. At some point, the alarmists are going to have to admit that. The silver lining for them is that this gives them the excuse to move the goal posts, and posit a much longer time period to falsify the models.
    The fact is that temps have been warming since the depths of the LIA. Hundreds of years. Does anyone think that trend has suddenly stopped? I for one doubt it. It just got overwhelmed by natural variability. So if THAT trend is real, and nothing has changed to alter it, the pause will come to an end with or without CO2 increases. When it does, the alarmists will go back to claiming the warming is CO2, and skeptics will go back to claiming that it is natural variability, and around the circle we’ll go again.

    • I wonder if we’re overdoing it.
      You and I may have read things hundreds of times. However there may be many new people each month that may be seeing this for the first time.
      It just got overwhelmed by natural variability.
      That may well be the case. And if it is, it only proves that there is nothing catastrophic about CAGW.

    • I completely agree with your thesis. By making these tortuous arguments in increments of 0.0123944 C/year we obscure the issue. Can I sense 0.0123 C?
      Why aren’t ALL “our” Denier charts in WHOLE DEGREES C! A medical thermometer is only accurate to ±0.2 °C!!!
      The “public” would reject Warmist claims if we were to assert that the change in all temperature in the last quarter century was less than can be measured in any household in America.
      “The change in temperature of the earth in the last 26 years is within the accuracy of the wall thermostats & within 20X the accuracy of your home medical thermometer.”

    • Once the alarmists admit natural variability is a significant factor it is game over. It reduces the potential impact of CO2 to a minor warming. There’s no reason to spend large sums of money to halt a small, generally beneficial warming.

    • It has been warming since the Maunder Minimum in the depths of the Little Ice Age, over 300 years ago. But earth has been in a longer term cooling trend at least since the Minoan Warm Period, more than 3000 years ago, when the East Antarctic Ice Sheet, by far the largest depository of fresh water on the planet, quit retreating.
      No one can say what the future holds. The Holocene, our present balmy interglacial, could end in 300, 3000 or 30,000 years. But there’s little that humans can do to stop the return of the northern hemisphere ice sheets, no matter how much fossil fuel we burn. Maybe we’ll come up with something in whatever time we have left. Or we can just adapt to life in a glacial world, which after all has been the norm for going on three million years, the entire history of our genus Homo. Or over 30 million if you start from when Antarctica glaciated, roughly the history of our superfamily, Hominoidea, the apes.

  17. The post is ponderous, the comments too.
    Am I the only person who is thinking “angels and pin heads”?

    • WUWT offers a variety of posts and authors. Rarely will a given post be liked by everyone. However, if you are in discussions with anyone about any aspect of global warming, this is the place to get the facts.

    • Maybe yes, maybe no; I at least found the post informative, and the comments even more so.
      As for “angels and pin heads” — that was a medieval thought experiment, trying to come to grips with the concept of infinitesimals. Today you might ask how many geometric “points” are contained on a pin head. It was NEVER about the nature of angels, or angelic dance practices, and no mathematician considers either “infinite” or “infinitesimal” a ridiculous, trivial concept.
      (I once attended a “parents’ class” discussing points, lines and planes; we almost came to blows over whether you could or could not “add up” a bunch of points to get a line, like beads on a thread. That class would have made a hilarious Philosophers’ SNL skit.)

  18. Werner,
    I think we’re presently inside an envelope which I’ve defined as one standard deviation of the residual when I regress the log of CO2 against temperature: https://drive.google.com/file/d/0B1C2T0pQeiaSZ05kUXBrNW96alU
    In other words, Le Grande Pause isn’t all that unusual. Now if we were to dip outside of that envelope and stay there for 10 years or so, then I’d be calling Houston to report a problem.
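A minimal sketch of the envelope test described above, assuming an ordinary least-squares fit of temperature against ln(CO2); the CO2 and anomaly pairs below are made up for illustration, not the actual HADCRUT4/GISTemp/UAH/RSS data the comment used:

```python
import math

def ols(x, y):
    """Least-squares fit y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical CO2 (ppmv) and temperature anomaly (C) pairs:
co2  = [290, 300, 310, 320, 340, 360, 380, 400]
temp = [-0.30, -0.25, -0.20, -0.10, 0.00, 0.20, 0.35, 0.45]

lnco2 = [math.log(c) for c in co2]
a, b = ols(lnco2, temp)

residuals = [t - (a * x + b) for x, t in zip(lnco2, temp)]
sigma = math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Is the most recent point still inside the +/- 1 sigma envelope?
print(abs(residuals[-1]) <= sigma)   # True for this toy data
```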

      • Werner Brozek

        But should we be spending billions of dollars in the meantime to stop warming if there is a chance we may be outside that envelope in 10 years?

        How quickly we’ve moved from the science to the politics. Since you apparently don’t put much stock in what the people doing the actual research are saying, I wonder how good an idea you’ve got of where in that envelope we’ll be in a decade. For all you know, we could be outside it on the positive side. [1]
        That aside, I think researching the planet is money well spent either way it goes. Like any science, one never quite knows beforehand what discoveries will be made and what use will come of it.
        As for mitigation policy, I think that the cure cannot be worse than the disease and that wrecking the economy to stop emissions dead in their tracks is a Bad Idea. This assumes, of course, that we know how bad the disease will be. Which we don’t, and won’t until we get there. My opinion is that for the US, the top priority mitigation strategy should be to ramp up nuclear fission plants to replace coal. (C)AGW/CC aside, total replacement of coal power with nuclear would save on the order of 30-60k lives per year according to my estimates based on WHO and NIH studies.
        Next would be to ramp geothermal, which to me is a no-brainer. Industry estimates that ~20% of current electricity demand could be met by building geothermal plants in 5 western states.
        I don’t mind solar where it works, not a big fan of wind. We need to drop ethanol fuel production from food crops like a bad habit … because it is a bad habit. For my money, liquid fuels from blue-green algae seem to hold the most promise.
        ——————————
        [1] Given that those 1 (and 2) sigma deviations from the mean over periods of a decade or two haven’t yet wiped us out, they’re more a challenge to CO2 diddit theory than to our well-being. It’s the 5-10 sigma departures from the pre-industrial average for centuries and beyond that present the higher risk.

      • wrecking the economy to stop emissions dead in their tracks is a Bad Idea
        I agree. All nations should put their money where it does the most good. In Alberta, Canada, where I live, it would make much more sense to spend billions on improving insulation for houses rather than spending the same amount on carbon capture, which could reduce the temperature by 1/10000 of a degree in 100 years.

    • Brandon Gates
      Werner Brozek has repeatedly said e.g. here

      As for my mentioning the excuses, some people will deny we are even in a pause, so that is my reason for mentioning it.

      You say

      I think we’re presently inside an envelope which I’ve defined as one standard deviation of the residual when I regress the log of CO2 against temperature:

      Clearly, your “excuse” consists of a complete redefinition of “warming”, and that redefinition assumes “warming” is a function of “the log of CO2”.
      “Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise.
      The only significance of the putative “warming” is that it is predicted by the models and, therefore, provides an assessment of model performance. The models do NOT output “one standard deviation of the residual when [you] regress the log of CO2 against temperature”.

      Richard

      • richardscourtney,

        Clearly, your “excuse” consists of a complete redefinition of “warming”, and that redefinition assumes “warming” is a function of “the log of CO2″.

        Where have you been? ∆F = α ln(C/C0) has been in the literature since the late 19th century.

        The models do NOT output “one standard deviation of the residual when [you] regress the log of CO2 against temperature”.

        Of course they don’t, but so what. GCMs don’t set a straight edge to an arbitrary 20 year chunk of a temperature time series and extrapolate either. The quick and dirty regression I did is based on the most key relationship in the physics and doesn’t suffer from the same sensitivity to choosing endpoints. It allowed me to calculate a standard deviation across ALL the available data, and very clearly shows that the past 20 years is nothing special in terms of departure from the regression prediction.
        I get it that doesn’t make you very happy — hence all the bold text — but try addressing my analysis on its own merits instead of what some quote-mined statement in a 2008 BAMS report says, eh? The great thing about independently investigating claims is … the independence.

      • Brandon Gates:
        Your reply to my refutation of your silly redefinition of “warming” is even more misguided than your post which I refuted.
        I wrote the salient points in bold so they were clearly recognisable (n.b. not for the ridiculous reason you suggest) and those points were this

        “Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise.
        The only significance of the putative “warming” is that it is predicted by the models and, therefore, provides an assessment of model performance. The models do NOT output “one standard deviation of the residual when [you] regress the log of CO2 against temperature”.

        Your reply says

        Of course they don’t, but so what. GCMs don’t set a straight edge to an arbitrary 20 year chunk of a temperature time series and extrapolate either. The quick and dirty regression I did is based on the most key relationship in the physics and doesn’t suffer from the same sensitivity to choosing endpoints. It allowed me to calculate a standard deviation across ALL the available data, and very clearly shows that the past 20 years is nothing special in terms of departure from the regression prediction.

        That is plain daft. The modelers and the IPCC consider “warming” to be as I stated; e.g. see here.
        And you try to pretend that your invention of a redefinition of “warming”

        has been in the literature since the late 19th century

        I don’t believe you because I am not aware of any such definition of “warming” in the accepted literature: please provide a citation.
        Any data can be processed to show anything. You have processed the temperature data in association with other data (i.e. “the log of CO2”) and conclude the warming

        very clearly shows that the past 20 years is nothing special in terms of departure from the regression prediction

        and your response to my refuting that nonsense is to demand that I

        try addressing my analysis on its own merits instead of what some quote-mined statement in a 2008 BAMS report says

        OK. I will do that.
        Your so-called “analysis” consists solely of unsubstantiated and illogical rubbish which you have clearly constructed as an excuse for the failure of the climate models.
        There is no need to thank me for taking the trouble to fulfill your demand because I am always willing to provide information of use to onlookers when I can.
        Richard

      • richardscourtney,

        That is plain daft. The modelers and the IPCC consider “warming” to be as I stated; e.g. see here.

        I read the link, which is AR4 section 10.7.1, “Climate Change Commitment to Year 2300 Based on AOGCMs”. I find nothing like your statement: “Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise. Perhaps you can quote the specific text which you find incompatible with “my definition” of warming.

        I don’t believe you because I am not aware of any such definition of “warming” in the accepted literature: please provide a citation.

        http://www.ams.org/notices/201010/rtx101001278p.pdf
        In 1896 Swedish scientist Svante August Arrhenius (1859–1927), 1903 Nobel Prize winner in chemistry, was aware that atmospheric concentrations of CO2 (and other gases) had an effect on ground level temperatures; and he formulated a “greenhouse law for CO2”, [1]. Were Arrhenius alive, the motivations for his study and the precise values of physical constants used in his models might change, but his greenhouse law remains intact today. From a reference published about 102 years after [1], namely, page 2718 of [14], we see Arrhenius’s greenhouse law for CO2 stated as: (Greenhouse Law for CO2) ∆F = α ln(C/C0) where C is CO2 concentration measured in parts per million by volume (ppmv); C0 denotes a baseline or unperturbed concentration of CO2, and ∆F is the radiative forcing, measured in Watts per square meter, W/m^2. The Intergovernmental Panel on Climate Change (IPCC) assigns to the constant α the value 6.3; [14] assigns the value 5.35. Radiative forcing is directly related to a corresponding (global average) temperature, by definition radiative forcing is the change in the balance between radiation coming into the atmosphere and radiation going out. A positive radiative forcing tends on average to warm the surface of the Earth, and negative forcing tends on average to cool the surface.(We will not go into the details of the quantitative relationship between radiative forcing and global average temperature.) Qualitatively his CO2 thesis, which Arrhenius was the first to articulate, says: increasing emissions of CO2 leads to global warming. Arrhenius predicted that doubling CO2 concentrations would result in a global average temperature rise of 5 to 6 deg C. In 2007 the IPCC calculated a 2 to 4.5 deg C rise. This is fairly good agreement given that more than a century of technology separates the two sets of numbers.
        Ref. [1] may be found here: http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf Svante August Arrhenius, On the influence of carbonic acid in the air upon the temperature of the ground, Philosophical Magazine 41 (1896), 237–76. Page 267 (p. 17 of the .pdf), top of the right-hand column: Thus if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.
        In the regression I used to produce my chart, I used a slightly different formulation: F = α ln(C), where F is instantaneous forcing in W/m^2 and C is current CO2 concentration in ppmv. I habitually use 5.35 as the value for α since it is the more conservative of published estimates and also is the one I find most often cited. Of course I’m regressing this relationship against observed temperature, not radiative flux. The regression calculation found the best fit multiplying the theoretical flux amount by 0.52, implying this relationship: ∆T = 2.78 ln(C/C0).
        Typically equilibrium climate response to doubled CO2 is defined without taking the natural log. When I do the same calculation I come up with: ∆T = 2.41 C/C0. The canonical value is 3.0°C, with a range of 1.5 to 4.5°C. You’re likely asking why the 0.6°C discrepancy. The partial answer is that the planet has not yet reached equilibrium. My regression is only picking up the transient response on the way to equilibrium. The balance of the discrepancy is likely due to the myriad of other inputs to the system which I made no attempt to account for in this model.
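The arithmetic in the two paragraphs above can be checked directly; α = 5.35 and the fitted scaling of 0.52 are the values stated in the comment, and everything else follows from the logarithmic form:

```python
import math

ALPHA = 5.35   # W/m^2 per ln(C/C0), the coefficient cited above
FIT   = 0.52   # the comment's regression scaling from forcing to temperature

def forcing(c, c0):
    """Arrhenius-form forcing: dF = ALPHA * ln(C/C0), in W/m^2."""
    return ALPHA * math.log(c / c0)

def transient_dt(c, c0):
    """The comment's fitted transient response: dT = 0.52 * dF."""
    return FIT * forcing(c, c0)

# Implied coefficient: 0.52 * 5.35 = 2.78, i.e. dT = 2.78 ln(C/C0):
print(round(FIT * ALPHA, 2))             # 2.78

# Forcing from pre-industrial 280 ppmv to ~400 ppmv today:
print(round(forcing(400, 280), 2))       # 1.91 W/m^2

# Transient response to doubled CO2 under this fit:
print(round(transient_dt(560, 280), 2))  # 1.93 C
```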

        Any data can be processed to show anything.

        Tell that to Werner here, who has done nothing more than draw some trendlines on a narrowly selected range of observations, leaving it at that, wholly uncoupled from any theoretical underpinning.

        Your so-called “analysis” consists solely of unsubstantiated and illogical rubbish which you have clearly constructed as an excuse for the failure of the climate models.

        My substantiation begins with empirical studies begun in 1896 which even then provided a correct prediction of the approximate magnitude and exact direction of the effect. My very simple model uses that same relationship and the output is in general agreement with contemporary primary literature. You may wish to actually become familiar with the core first principles of climatology before blustering on about “unsubstantiated and illogical rubbish”.

      • Brandon Gates
        You pretend that you have reading difficulties in your attempt to defend your ridiculous redefinition of “warming”.
        You say

        I read the link, which is AR4 section 10.7.1, “Climate Change Commitment to Year 2300 Based on AOGCMs”. I find nothing like your statement: “Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise. Perhaps you can quote the specific text which you find incompatible with “my definition” of warming.

        Of course I can. And I am offended by your implication that I am as incapable of reading as you claim to be. The link is – in full –
        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html
        and it says e.g.

        The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

        That IPCC statement not only supports my accurate statement of the definition of “warming” used by climate science, it also shows that the model prediction of the “rate of warming averaged over the first two decades of the 21st century” was 100% wrong. It does not mention the 95% confidence practice applied by climate science because there was no need. But it does mention temperature change and rate of temperature change over time which your redefinition does not.
        You then laughably cite Arrhenius’ hypothesis of the radiative Greenhouse Effect (GHE) as being a definition of “warming”. No. The radiative GHE is one possible cause of “warming”. I strongly suggest that before making silly assertions about climate change(s) you at the very least need to learn the difference between an effect and its possible causes. WUWT is a science site and you can expect to be chastised for posting such schoolboy errors.
        I refuse to obey your demand that I tell Werner that any data can be processed to show anything. He knows it, so he rightly refuses to process the data but shows what the unadulterated data indicates. Furthermore, he admirably requests a ‘warmist’, Nick Stokes, to critique his presentations of what the data shows.
        Contrast the proper behaviour of Werner to your alteration of the data to obtain an apparent indication of what you would prefer the data did indicate.
        Richard

      • richardscourtney,

        You pretend that you have reading difficulties in your attempt to defend your ridiculous redefinition of “warming”.

        Just making it up as we go, aren’t we.

        [IPCC AR4] The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

        Nothing about confidence intervals in that statement. I do notice that every time “warming” is mentioned, it’s followed by a temperature greater than zero. Perhaps my reading difficulties aren’t so pretend after all.

        It does not mention the 95% confidence practice applied by climate science because there was no need.

        Ye Olde Goalpost Movement, right on cue.

        But it does mention temperature change and rate of temperature change over time which your redefinition does not.

        No redefinition. I’ve already cited Arrhenius, but I guess that didn’t sink in. Well, how about something more recent, say 1992 from the IPCC FAR: http://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_02.pdf
        Table 2.2 Expressions used to derive radiative forcing for past trends and future scenarios of greenhouse gas concentrations […] Carbon Dioxide: ∆F = 6.3 ln(C/C0), where C is CO2 in ppmv, for C < 1000 ppmv
        My eyes must be fooling me again.

        I strongly suggest that before making silly assertions about climate change(s) you at very least need to learn the difference between an effect and its possible causes. WUWT is a science site and you can expect to be chastised for posting such schoolboy errors.

        Well I’ve noticed that when I wear a jacket on a cold day I stay warmer than I would not having it on. Maybe I’m just imagining it.

        Contrast the proper behaviour of Werner to your alteration of the data to obtain an apparent indication of what you would prefer the data did indicate.

        I used four of the same exact datasets Werner did, HADCRUT4, GISTemp LOTI, UAH Global LT and RSS. Only difference is that I used all of the data available back to 1850 because you see, one way to get data to tell a desired story is to only use the part of it that supports one’s argument. Being that WUWT is a science site, I figured that cherry-picking is something I probably wouldn’t get away with.

      • Brandon Gates
        I started to read your twaddle which is your latest – and long-winded – attempt to excuse your daft assertions, but I could not be bothered to read much of that nonsense because there are useful things I need to do.
        If you think you have convinced anybody of anything by your spouting your ignorant and ill-informed errors then be happy in that thought because it is no more daft than your assertions about “warming”.
        I am content that I have pointed out to those who are interested (probably none) that you claim to not know the difference between an effect and its possible cause(s) and you claim an inability to understand what you say you read. Hence, I have helped any who may wish to assess your assertions to recognise how and why your assertions are extremely wrong.
        Richard

    • While I was still teaching physics, I would tell the students to answer a question by thinking of the most extreme example they could. So I will try that here. If the CI is +/- 1.106 at 95%, then it would be wider at 99%; for example, it may be +/- 1.5 at 99%. Conversely, if you are asking about a CI at the 0% level, the interval shrinks to nothing: if the slope is given as 1.000, there is no chance the real slope is exactly 1.000000000 rather than somewhere between 0.999999999 and 1.000000001.
      Does that make sense? Should I be wrong, then Nick can correct me.
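      A quick numerical check of that guess, assuming the trend uncertainty is roughly normal and backing the standard error out of the quoted ±1.106 at 95%:

```python
from statistics import NormalDist

# Implied standard error from the quoted 95% interval (±1.106):
se = 1.106 / NormalDist().inv_cdf(0.975)

# Half-width of the interval at two confidence levels:
for level in (0.95, 0.99):
    crit = NormalDist().inv_cdf(0.5 + level / 2)
    print(f"{level:.0%} CI: ±{crit * se:.3f} °C/century")
```

      This reproduces ±1.106 at 95% and gives about ±1.45 at 99%, in line with the ±1.5 guess above.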

  19. Stop and start dates may well be diverting but I have not heard a compelling case as to the cause of the LIA or the Roman warm period.
    We all seem to be taking the patient’s temperature but failing to diagnose the illness.

  20. Just curious, as a scientist: you have 0.0123944 C/year in the first section; what is the accuracy of the temperature measurement, and is this number within that realm? Not nitpicking; it’s just that when I see temps expressed to this many decimal places I wonder how they achieved this level of accuracy.

    • Yes, 7 decimal places.
      Hence my skepticism.
      Do they still teach the first principle of measurement – significant figures?

    • That number was taken right from WFT and I agree that we do not know its accuracy to that extent. That is why I gave the later number as 0.025 along with the 0.0247888.
      It is shocking how bad our numbers really are! Yet billion dollar decisions are made based on these numbers.
      For example, RSS had October 2014 in 8th place at 0.272, while October 1998 was first at 0.461. But with UAH version 5.6, October 1998 was way lower at 0.291 and October 2014 was higher at 0.367. The two datasets disagree on the 1998-versus-2014 gap by 0.189 + 0.076 = 0.265!
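      Spelling out the arithmetic behind that 0.265 figure (anomalies as quoted above):

```python
# October anomalies (°C) quoted above
rss_1998, rss_2014 = 0.461, 0.272
uah_1998, uah_2014 = 0.291, 0.367

gap_rss = rss_1998 - rss_2014   # RSS: 1998 ahead by 0.189
gap_uah = uah_1998 - uah_2014   # UAH: 2014 ahead, so -0.076

# The two datasets disagree on the 1998-vs-2014 gap by:
print(round(gap_rss - gap_uah, 3))  # 0.265
```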

      • I agree with you there Werner. The precision in that number is immaterial, the accuracy is rubbish.
        I have a long standing grudge with computer models. Unless basic mathematical principles are coded into the program, all these machines do is compound errors. Super computers just compound these errors more, and quicker.
        Since the most sophisticated thermometers we have can barely measure to 2 decimal places of a degree, the fantasy that we can reliably measure any better than that is childishly destructive.

      • Werner, as an engineer I would cut the precision down to 0.03ºC; there is no way on this earth we can measure meaningful temperature to such accuracy! IMHO, that is.

      • Let me beat a dead horse! Plot all data on the range of a home thermostat: 10–30 C!
        If we say it enough, the public will begin to understand in a few years! WE ARE PLAYING ON THEIR HOME TURF, and we needn’t be.

  21. What do I think? I think Obama doesn’t care anything about the trend and intends to do all he can in the time remaining in his fast fading political career to shutter as much American industry as his pen allows.

  22. I think that I don’t care whether it gets warmer or cooler as that proves nothing about CO2. It’s quite easy to prove that CO2 is not a danger in regards to temperatures and though the long pause in temperatures has been useful in making people question Al Gore and other con artists, it might not last. I would rather focus all my energy on educating people on why CO2 can’t, under any circumstances, ever be responsible for dangerous rises in temperature, than spend my time going “look, it’s still cold!” As there is a 33.3333% chance it might get hotter again! (33.3333% it will get colder, 33.3333% it will remain the same)

    • I agree.
      We don’t even need a statistical analysis.
      We know that CO2 increased linearly since 1958. Not so for temperature. This is known.
      As far as the temperature graphs/analysis presented here they are not informative to convince Al Gore or anybody else.
      What we need is simple:
      Plot the data points, run a linear regression analysis and plot the line, give the p value and r squared value.
      Easy to do:
      http://blog.minitab.com/blog/adventures-in-statistics/how-to-interpret-a-regression-model-with-low-r-squared-and-low-p-values
      After looking at the results adjustment for autocorrelation can be undertaken if warranted.
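      That recipe takes only a few lines. A sketch with synthetic data standing in for an anomaly series (`scipy.stats.linregress` is one common way to get the slope, p value and r squared):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for an anomaly record: a 0.01 °C/yr trend
# plus noise. Real data would be loaded here instead.
rng = np.random.default_rng(0)
years = np.arange(1958, 2015)
temps = 0.01 * (years - 1958) + rng.normal(0.0, 0.1, years.size)

fit = stats.linregress(years, temps)
print(f"slope   = {fit.slope:.4f} °C/yr")
print(f"p-value = {fit.pvalue:.3g}")
print(f"r^2     = {fit.rvalue ** 2:.3f}")
```

      Adjusting for autocorrelation, as suggested above, would widen the uncertainties; this sketch ignores that.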

    • Problem is most people cannot relate to that kind of discussion. However, they can relate to the concept of temperature not going up (especially if it is going down). For that reason alone this topic is probably the most important one for winning the minds of the common man.

  23. Your graphical representation of the confidence intervals for trends is incorrect. You have the three lines (the regression line and the upper and lower CIs for trend) meeting at the start of the regression line. In fact, they should meet in the middle of the date range, like this:
    http://www.woodfortrees.org/plot/rss/last:312/plot/rss/last:312/trend/plot/rss/last:312/trend/detrend:0.3222544/offset:0.161127/plot/rss/last:312/trend/detrend:-0.3222544/offset:-.161127
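    The underlying point: lines drawn at the upper and lower trend limits pivot about the centroid of the data (mean date, mean anomaly), so they cross in the middle of the date range, not at its start. A small sketch with assumed numbers:

```python
import numpy as np

x = np.arange(1988.0, 2015.0)   # assumed 27-year span of dates
ci = 1.106 / 100.0              # ±CI converted to °C/year
xbar = x.mean()                 # centre of the date range (2001.0)

upper = (0.0 + ci) * (x - xbar) # central trend assumed 0.0 °C/yr
lower = (0.0 - ci) * (x - xbar)

# The CI lines coincide at the mean date and fan out from it:
print(upper[13] == lower[13] == 0.0)   # True: x[13] is the mean date
print(upper[0] == lower[0])            # False: they differ at the start
```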

  24. With all the extra ice for the last 2 years it is hard to see the land/sea adjusted data as correct. We cannot change the thermometers, so surely at some time the thermometers must move down in accord with the satellite data. The C and W style rigging does allow polar data to be rigged up and is out of satellite range at times. Is this where the major discrepancy is?
    The near El Niño meant a warmer year in the tropics, but did it also mean the warm water did not go polewards?

    • According to Bob Tisdale, it was the north east part of the Pacific that had an extremely high anomaly lately. But for some reason, this warm water here did not affect the satellite data as it did the surface data. Perhaps less evaporation occurred due to lower absolute temperatures so there was less condensation higher up.

  25. An appeal again:
    Does anyone have a graph of the actual global temperature (not the anomaly, so around 14.5 C) for, say, the past 100 years…IN DEGREES C? Many thanks all.

  26. If you wish to verify all of the latest anomalies, go to the following:
    Thanks for including the latest links (-:

  27. Sometimes I have just got to laugh out loud when I read the latest from the statistical torture chamber! Nothing personal against anyone who likes to join the farce; we all have to get our jollies somewhere. I am sure that you are just like me, Mr Brozek, you were there when the temperatures were being taken. Obviously not during the stone age when the cavemen used to rush outside and dip their finger into an elk carcase then note down the temperature to the nearest thou and paint it on the cave in Woad. Or even when the Roman soldiers used to stop on the way to slaughter a few more Gauls (and what is not to like about that!) and carried their Stevenson screen up the slope and then wrote the results on a slate to the nearest thou. No, I was there in the 60’s when recording data was done on a thermometer to the nearest half degree. I operated a mobile maritime weather station with a mark 1 eyeball and a mark 2 plastic bucket for sea temperature. I used to send the Cadet up to the monkey island to read the temps in a Stevenson screen bolted a few feet above a red painted steel deck. Other ships had green paint, others bitumen tar and rubber. We threw a bucket over and hauled up a bit of seawater. Sometimes to the main deck, sometimes to the bridge wing, sometimes in very rough weather a phone call was made to the ER to get the intake sea temperature. There were no Argo buoys, no satellites, no digital readouts, precious few aircraft overhead and almost no ships south of 60 or north of 60 degrees. All this was done so that we were not going to get caught out by undetected hurricanes and storms. The data was resolved on a slide rule and graphed on a chart with a calligraphy pen then, if we were lucky, faxed back to us as a coherent whole.
    I am very sorry if I refuse to get excited about computer programs that forecast the end of the world by torturing data til it screams. Get back to me in a thousand years when we have enough reliable data to even BEGIN to draw conclusions from it.

  28. Sorry for a bit OT and perhaps a dumb question. I have seen the graph at top and ones like it with 1998 way up there hundreds of times. I’m assuming from other data sets as well.
    What I can’t reconcile are the recent headlines by NOAA that 2014 may be a record warm year. Looking at the above chart, it is not even close. Specifically, the press releases have pointed out October 2014 surpassing 1998 by 0.04 C.
    I just don’t see how they are even close.

    • The above is for RSS only. However UAH also shows 1998 way up there and on both satellite data sets, there is no way that either will come in first or even second. But the surface data sets do indeed show a record to this point. See row 22 of the table where I give the present rankings after 10 months. RSS and UAH version 5.5 are in 7th place, but Hadcrut4.3, GISS and Hadsst3 are all in first place. Hadcrut4.3 and GISS could still end up as first, second or third, but Hadsst3 is guaranteed to set a new record in 2014.

  29. At the end of the day it’s rather pointless. The Chicken Littles will simply find some other crisis to blame Carbon (CO2) for; they always do. Because everything evil under the sun is a direct result of Carbon emissions, you see. If you made a list of all the things it has been blamed for, it would be as large as Manhattan island (which will soon be underwater BTW).
    And even if every one of these prophecies turns out to be incorrect (which is likely), the Chicken Littles will still think it is better to have done something. Because you can’t tilt at windmills until you build them first.

    • There is a big problem with their logic. It is one thing to say that warming will happen due to the greenhouse effect. So if warming occurs, oceans will expand, etc. But if warming is not occurring, then by what mechanism does extra CO2 cause more tornadoes for example?

  30. Good article Werner well written!
    As an aside, are there any Brits out there who were listening/watching the breakfast news prog at around 7am this morning? I was munching thru my cereal & slurping my tea, when I heard something about Climate Change being “partly” caused by Human activity! Was I hearing things? Did anyone else hear it? Is this the BBC hedging its bets or something? Is there a subtle sea change going on & I have missed it?

    • I wasn’t paying perfect attention to Roger Harrabin but I thought he was talking about 2014 being the hottest year since the start of the CET centuries ago and that that was “partly” caused by human activity.
      That is not quite the same.

      • Thanks, that clears that up. I wasn’t paying much attention to RH’s spiel, which is why I was rather surprised! It seems from the posts here that NOAA, The Wet Office (UK), & the BBC are bigging up 2014 as the hottest evvaaa!!!

  31. OT but moderator please make this exception and accommodate me.
    Jean Beliveau, beloved Captain of the legendary Montreal Canadiens died this week at 83. Beliveau was a great hockey player and a true gentlemen, respected by all.
    As a kid growing up in small town Quebec, I lived for les Habs, and look back on their golden years with wonder and gratitude. Can you imagine being a young hockey fan and having your team win ten Stanley Cups, including five in a row? Only the New York Yankees of the same era had as great a winning record!
    Our local Wolf Cubs and Boy Scouts held a father-and-son dinner, and Jean Beliveau and Gump Worsley (goalie for the New York Rangers and later for the Canadiens) were our guests of honor. Beliveau had recently joined the Canadiens and was an immediate star (we did not use the word “superstar” in those days).
    Worsley, who lived in our town, arrived on time but Beliveau was very late, so we started to eat our dinners. At one point, I had a sudden thought and excused myself – I was going to find Jean Beliveau. The school hallways were dark and lit only by red exit lights, and I went toward the Principal’s office in the oldest part of the school, turned a corner and there in the darkness stood a tall man, who asked me “Where’s da gym?” I said “I’ll take you there”, and entered our gym beside Jean Beliveau, as the crowd erupted into cheers and applause.
    As I grow older, I am much less interested in sports. Ironically, when I go to my gym I watch more and more sports and less and less news, because I find the news saddens me – the news is really the bad news – it gathers up all the misery and inhumanity of the world and brings it into your living room. In sports, on the other hand, everybody lives to play another day.
    Strive to be kind to one another.
    Best wishes to all, Allan

  32. The good Lord’s slope is zero because that is his goal. His point is not that the slope is zero; his point is that you have to go back more than N years of data before a non-zero slope appears, and N differs among the various datasets.

  33. From the article:
    There are different ways of deciding whether or not we are in a pause… In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over.
    This is ridiculous, IMHO. People on both sides of the debate are splitting hairs over hundredths of a degree. If global warming were a problem, we would know it.
    For some needed perspective:
    http://suyts.files.wordpress.com/2013/02/image266.png

    • An odd idea of perspective! Arbitrary scale from 0 to 120 (Fahrenheit I assume?).
      Actual global average temperatures on this planet have not strayed more than about 25F above current levels and 12F below current levels within the last 500 million years.
      The difference between full on glaciation and the mildest interglacial period is only about 10F. So what on earth does a 120F range have to do with anything?
      Why not show a range from absolute zero to the surface temperature of the sun?

      • Nigel Harris,
        The scale is not important. What matters is that by using a y-axis scaled in whole degrees, there is no observable change in T. The original argument claimed runaway global warming and predicted it would accelerate. If runaway warming were occurring, it would be easily visible in that chart.
        But the original conjecture was wrong. Everything said subsequently is backing and filling: trying to convince people that there is a problem, when there isn’t. It’s just crying “Wolf!!” The public is getting jaded. Can you blame them?
        Also, there are other charts showing the same thing. All we are observing are normal step change rises from the depths of the LIA. There is nothing either unprecedented or unusual happening. Everything we see now has happened before, and prior to human CO2 emissions being a factor.
        ==============================
        Dave in Canmore,
        Yes, that’s what it shows: no trend. I worked in a metrology lab for thirty years, calibrating weather instruments including every kind of thermometer, both stick and electronic: PRT, thermocouple, RTD, etc. Anyone who tells you that you can accurately measure tenths and hundredths of a degree C/F without using *very* expensive instruments simply does not know what they’re talking about. Changes of ±0.1ºC are bandied about as if that is reality. It isn’t. Almost all state-of-the-art thermometers have larger error bands. In most cases, much larger.
        I prefer using degrees C or F. If a problem is brewing, those will show it just as fast as any electronic thermometer. The problem for warmists is that they can’t show any kind of a problem. Everything currently observed has happened before, repeatedly, and to a much greater degree.
        For all practical purposes, global warming has stopped, for many years now. It may stay the same, or resume, or cooling may begin. We don’t know. The only thing that has not stopped is the constant climate alarmism, by people who have made consistently wrong predictions from the very beginning.

      • When I look at that chart I see zero chance in hell of the predicted disasters of CAGW manifesting. It is certainly non threatening, although I am a bit curious, as I thought we had warmed 3 to 4 F since the little ice age, and I do not see it in the F thermometer chart. (I say the further from an ice age the better, within reason)

    • dbstealey, I can still see some wiggles in your plot. Convert to Kelvins to really flatten that sucker out.

      • Werner,
        Especially when playing visual games with the y-axis. If I want to exaggerate a warming trend by printing out a plot on legal size paper with the y-axis going lengthwise it will have dramatic visual impact. But the significance of the change is due to the physics of the actual phenomena, not the big scary units I choose. Not whether I express the figures as absolute values on that scale or as anomalies. The planet does not read charts. It is not fooled by appeals to (in)credulity by use of small or large numbers.
        One way to cut through all these stupid and scientifically useless rhetorical games with smallish temperature values vs. 10^infinity energy statistics is to strip units out of it entirely and think in ratios or percentages. My favorite example is to note that from bottom to top of the temperature cycle from glacial to interglacial is ~12 K. Since 1880, global temperatures have changed on the order of 0.8-0.9 K, near enough to one to call that 1/12th of a full glacial cycle. The dreaded 2 K anomaly is 1/6th of same. Considering that bottom to top of the natural cycle takes on the order of 20,000 years and we’ve just experienced 1/12th of that in 135 years … well you can do the math, it’s 150 times the maximum average observed rate over the past million years when the planet is left to its own devices.
        No shenanigans with charts, no hard choices with units since temperature and energy have a one to one relationship. I don’t care what planet you’re from, 1/12th a movement in 0.007th the time is nowhere remotely close to insignificant.

      • But the significance of the change is due to the physics of the actual phenomena, not the big scary units I choose.
        I agree! And the people who really need to know this are those who insist on giving steep rises to ocean heat content by showing X times 10^22 Joules on the y axis. Then when you convert to degrees C, you find the ocean went up by only about 0.1 C in the last 60 years. So if you wish to multiply both numbers by 12, you get 1.2 C in 720 years. Sorry, but it does not alarm me if the ocean goes from 3.0 C to 4.2 C in 720 years. For all intents and purposes, the ocean is an infinite heat sink for minor warming of the air.

      • Werner,

        And the people that really need to know this is those who insist on giving steep rises to ocean heat content by showing X times 10^22 Joules on the y axis.

        Well gee, Joules are the SI standard unit of energy. Poor science to use appropriate units, yes indeed.

        Then when you convert to degrees C, you find it is about 0.1 C that the ocean went up in the last 60 years.

        And now we’re back to playing games with small numbers. Reminds me of this rather silly (read: dishonest) chart from Bob Tisdale: https://bobtisdale.files.wordpress.com/2014/11/figure-2-tempering-effect-of-ocean-on-global-warming.png
        10^22 Joules and °C on the same axis!!! Switch to 10^24 Joules and voila, you can actually derive some meaning from the plot, plus the y-axis numbers aren’t quite as big and scary either.
        Totally, utterly non-scientific tripe. The relative slopes don’t mean diddly squat if you’re just arbitrarily scaling stuff for purposes of visual presentation, especially since there’s a direct linear relationship between temperature and energy for crying out loud.

        For all intents and purposes, the ocean is an infinite heat sink for minor warming of the air.

        Over a 100 ky glacial cycle, it works out that surface temps change by a factor of about 5 times greater than the deep ocean. The lag is about 10k years for each major reversal in surface temps. But that’s deep ocean now. The surface — where all the ice hangs out — is far more responsive. Effective heat sink, yes. Infinite, not so much. All depends on one’s own personal expiration date, I suppose.

      • And now we’re back to playing games with small numbers. Reminds me of this rather silly (read: dishonest) chart from Bob Tisdale:
        I do not agree that this is dishonest at all. Suppose you went swimming in the ocean and you found it rather chilly. And then you were told that 10^23 joules had been added to the water since last year. Of course joules are the SI unit for energy, but would that mean anything to you? You might get the impression the oceans were about to boil. But if you were then told the temperature went up by 1 C, that would be much more meaningful. As well, you might not even be able to detect that difference.
        I believe that we will either have an ice age or nuclear fusion long before our world has over heated.

      • Werner,

        Of course joules are the SI unit for energy, but would that mean anything to you?

        Yes. What I didn’t get from high school I got in my freshman year at college. It’s not the researchers’ fault that some people forgot their standard education as soon as they finished their exams.
        There’s another way to slice through all this b/s, and that’s to use Watts per sq. meter. A per unit area calculation makes things easily comparable because it strips out the vast differences in total mass and relative heat capacity allowing apples to apples comparisons instead of donkeys to elephants.
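        A back-of-envelope example of that per-unit-area idea; every number here (the decadal OHC change, the ocean area) is an assumed round figure for illustration:

```python
OCEAN_AREA_M2 = 3.6e14      # ~70% of Earth's surface, rough value
SECONDS_PER_YEAR = 3.156e7

dQ = 10e22                  # assumed Joules gained over a decade
years = 10.0

# Spread the energy change over the ocean surface and the time:
flux = dQ / (years * SECONDS_PER_YEAR * OCEAN_AREA_M2)
print(f"{flux:.2f} W/m^2")  # ~0.9 W/m^2
```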

      • My point was that when I get up in the morning and need to decide whether to wear long johns or not, I check the TV for the temperature. I am not interested in how many joules or W/m2 the air over my city gained or lost since it was -5 C last night.
        The problem is not that I could not make the calculations to convert joules to a change in C using Q = mcΔT, but that would not be nearly as convenient as seeing the temperature.
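        For what it’s worth, that Q = mcΔT conversion sketched with round numbers; the ocean mass and specific heat are textbook approximations, and the 10^23 J is the hypothetical figure from the exchange above:

```python
MASS_OCEAN_KG = 1.4e21      # whole ocean, approximate
C_SEAWATER = 3990.0         # specific heat, J/(kg*K), approximate

Q = 1e23                    # hypothetical Joules added
dT = Q / (MASS_OCEAN_KG * C_SEAWATER)   # delta T = Q / (m c)
print(f"delta T ≈ {dT:.3f} °C")
```

        Spread over the whole ocean, even 10^23 J moves the average by only a couple of hundredths of a degree, which is the point about the choice of units.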

      • Werner,

        My point was that when I get up in the morning and need to decide whether to wear long johns or not, I check the TV for the temperature. I am not interested in how may joules or W/m2 the air over my city gained or lost since it was -5 C last night.

        I agree, and for the vast majority of us (including me) that does hold true. My point is twofold:
        1) When we go skinny dipping in the ocean, we don’t dive to 2,000 m.
        2) Averaged temperature across a 2 km thick layer of sea water cited in exclusion of all else obscures what’s going on at the top most layer where we … and things like ice … live.
        Pile on to (2): plotting °C and 10^22 J down to 2,000 m on the same axis is worse than meaningless when the obvious intent is to hide the incline of °C. I could only laugh at the sheer audacity of it.

        The problem is not that I could not make the calculations to convert joules to change in C using Q = mct, but that would not be nearly as convenient as seeing the temperature.

        Again I agree, but my main points are not about that, but rather the scientific, physical, relevance of a given choice of units. When possible I like to think in Watts per sq. meter, but of course that only makes sense at boundary layers. OHC in Joules is the most direct way to that calculation. When talking °C, well, yes of course ∆T is an impressively small number down to 2,000 m. Not so small higher up the water column: https://drive.google.com/file/d/0B1C2T0pQeiaSdmJUcmpJQkVCWVE/view?usp=sharing
        The rate plot at the bottom is fun, isn’t it.

      • Thank you! I still prefer temperature, and the differences among all the layers are fine as you have shown. And while the top 100 m is warming fastest, I see nothing alarming about it. And should the rate of warming at the top increase, it will just go to the lower layers faster and the warming at the top would be dampened.

      • Werner Brozek,

        And while the top 100 m is warming fastest, I see nothing alarming about it.

        Emotional reaction isn’t always a choice. Opinion often is. Truth be told, I don’t worry much about estimates of the worst effects because I’ll be dead. Aside from that, I don’t think we’re in danger of extincting ourselves by CO2. Personally I think it’s more likely that Pakistan will nuke India over Kashmir, or some other similar scenario.

        And should the rate of warming at the top increase, it will just go to the lower layers faster and the warming at the top would be dampened.

        The rate plot shows all three depth layers with an accelerating warming trend across the entire record. [1] Yes, of course the cooler depths dampen the warming at the surface; the scientifically relevant question at this point is: how much, and what will the energy which remains at the surface do in the future? And where, as in which surface grid? Three curves on a plot of global averages doesn’t get remotely close to conveying that sort of info. Does your “I see nothing to be alarmed about” statement contain the barest hint of research or calculation into local surface effects?
        ———————————
        [1] The surface rates are the outlier here, showing deceleration. All three trendlines are incredibly sensitive to start and endpoint, but especially at the surface. If I knock out the first two years of the record, the surface acceleration trend goes significantly more positive than the three depth curves, which I would of course, expect. This is why I’m dubious of trend analysis as a primary means of investigation and prediction.

      • O.K. Let us take any emotion out of it. A slope line was not shown, but the top 100 m was at 0.10 in 1978 and 0.33 in 2013, which is an increase of 0.23 in 35 years. This is way less than 1.0 C in 100 years. And an extra 1.0 C of warmer ocean will only warm the air by 1.0 C as well. And if an extra 2 C above 1750 is supposed to be bad, we have a long time before we reach that point. I am sure technology advances over the next 100 years will allow us to cope with what needs to happen.
        Furthermore, I do not believe that an extra 2 C will be that bad. Where I live in Canada, I would still spend a lot of money keeping warm and none keeping cool.

      • Werner,

        A slope line was not shown, but the top 100 m were at 0.10 in 1978 and 0.33 in 2013, which is an increase of 0.23 in 35 years.

        Your eyeballs don’t deceive. 0.241 °C change, rate 0.007 °C/year.

        This is way less than 1.0 C in 100 years.

        0.7 °C/century, 30% shy of a full degree, is significantly less. But you assume the rate will stay constant. I did throw linear trendlines on the bottom graph, and the 100 m curve shows an acceleration of 0.000357 °C/year^2. The predicted rate from that regression in 2014 is 0.0135 °C/year, and the predicted temp is 0.35 °C (+0.02 of the actual).
        Now if I take my turn to assume, at constant acceleration the 100 m temp works out to 2.84 °C by 2100. That’s the anomaly above the 1955-1964 baseline average, not from 1750. The rate of change in 2100 would be 0.0441 °C/year.
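        Those figures are consistent with a constant-acceleration extrapolation from the quoted 2014 values; a sketch (the 2014 anchor point is my assumption about what was actually used):

```python
t0_temp = 0.35       # °C anomaly in 2014, from the regression above
t0_rate = 0.0135     # °C/year in 2014
accel = 0.000357     # °C/year^2, assumed constant to 2100

dt = 2100 - 2014
# Kinematic-style extrapolation: T = T0 + v*t + a*t^2/2
temp_2100 = t0_temp + t0_rate * dt + 0.5 * accel * dt ** 2
rate_2100 = t0_rate + accel * dt

print(f"2100 anomaly: {temp_2100:.2f} °C")    # ≈ 2.8 °C
print(f"2100 rate:    {rate_2100:.4f} °C/yr") # ≈ 0.044 °C/yr
```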

        I am sure technology advances over the next 100 years will allow us to cope with what needs to happen.

        Oh probably. I’ve said before we’re tenacious and creative. I don’t think we’ll extinct ourselves. But I think you’re getting ahead of things here; the data show an accelerating rate of temperature change, not a constant one.

        Furthermore, I do not believe that an extra 2 C will be that bad. Where I live in Canada, I would still spend a lot of money keeping warm and none keeping cool.

        [chortle] You could always take the reverse of markx’s philosophy to heart right now and move to Florida: http://wattsupwiththat.com/2014/12/05/friday-funny-over-a-centurys-worth-of-failed-eco-climate-quotes-and-disinformation/#comment-1807203

      • Thank you for that! But who knows if the rate will accelerate until 2100? Climate seems to go in 60-year cycles in addition to other cycles. And even if it does continue to accelerate, people will have to adapt to whatever circumstances they find themselves in.
        The worst thing our Alberta government can do now is to spend 2 billion on carbon capture to possibly shave off 1/10000 of a degree by 2100.

  34. The RSS data also shows a (slightly) negative trend from Jan 1979 to Sep 1989 or 10 years, 9 months. That means that all of the warming is confined to a period of less than seven years.

  35. Sigh, a linear fit to a nonlinear process yet again. If you look at the satellite record from 1980 to now, you will see that it is a step function with a flat region prior to 1998 and then a step up to the regime we are in now. Both of the steps have fluctuations that are due to El Niño/La Niña. At 1998 there is a huge El Niño (+ something else?) and then the new plateau. Even Trenberth admits this, according to Bob Tisdale. The simple-minded linear increase due to CO2 predicted by the models cannot produce this kind of behavior; it can only come out of a nonlinear process, aka chaos.
    Please stop with the silly fits of straight lines to nonlinear processes. And while I’m on a rant, please stop with the smoothing, it throws away data.

      • Unfortunately the tools you are using are making things worse.
        Look at the plot of CO2 data from 1958 to now from Hawaii. Do you need any statistical analysis to see that there is a linear increase in CO2 concentration from 1958 to now?
        Nobody needs a statistical analysis for this. Just a pair of eyes and a brain.
        Now, while CO2 and temperature anomalies were both increasing, modelers were doing fine.
        They claimed that CO2 was the cause of the increase, regardless of the fact that while the CO2 increase was linear, the temperature anomalies were not increasing in a linear fashion. But still, both were increasing.
        Now, however, this is no longer the case. While CO2 is still increasing we now have a flat region for temperature anomalies over a number of years. The modelers were doing fine with short flat regions, they could ignore such.
        They are now in trouble. The longer the current flat region will stay and the longer the increase in CO2 will continue, the more difficult it will be for them to claim that CO2 is responsible. They now need to adjust their “models” to fit the current data. I don’t know how they will do this. But if they can’t fit historical data, they can’t predict!

      • The temperature also did not increase from 1958 to around 1977. Nor from about 1944 to 1958, even though CO2 was rising then, too, just not recorded at Mauna Loa. There were just two decades in the middle, ~1977-98 (or ’96), when rising CO2 happened to coincide with apparently rising T, as it no longer is.

  36. An awful lot of discussion (not just this post) about such trivial trends. It should be plainly obvious that the trends for the last 10-18 yrs, whatever, are so small in the sat data as to be insignificant. I don’t count land-stations — way too problematic.

    • The most significant trend over the last 25 years has been the unprecedented pace of increase in institutionalized mendacity and the official acceptability of deceit as a means to policy making.
      It’s so bad now that offenders are not disgraced in the slightest for being caught Grubering.
      Dishonesty has become an official badge of honor.

      • Werner Brozek says:
        This would be further proof that CO2 is not a major player… When will many heads of state realize how insignificant the warming is?
        At current CO2 concentrations, changes in temperature due to CO2 are too small to show up in the data. Even a 25% rise in CO2 would not be enough to show a measurable increase in T. That is why there are no comparable charts showing that changes in CO2 cause subsequent changes in T.
        Most of the observed changes in CO2 are caused by changes in T. I am willing to be convinced otherwise, but it will require the same kind of data that I posted here.
        ==============================
        Steve Oregon says:
        The most significant trend over the last 25 years has been the unprecedented pace of increase in institutionalized mendacity and the official acceptability of deceit as a means to policy making.
        It’s so bad now that offenders are not disgraced in the slightest for being caught Grubering.
        Dishonesty has become an official badge of honor.

        Repeated for effect.

  37. My head is spinning with all the graphs and numbers. But one thing is clear. Every Damn “study” I’ve seen about the “catastrophic” impact of “the global warming” over the past 20 years is Horse spit. If a study says “cumulative effect” it is probably also useless because the weasel word quotient is pretty high, but if the impact has been during the last couple of decades……
    The biggest thing I can’t reconcile in my pea brain is this: if we pick a pristine station, well sited, with records far back into the 19th century, in no case does it show what the composites claim. All single sites are rejected unless sliced, diced, and fully homogenized. So how come we accept a single site for CO2 – especially one so “typical” of the rest of the world’s landmass? This warming stuff remains turtles all the way down, and the floggers are the same snake oil salesmen we once would have ridden out of town on a rail.

  38. Seeing as we have the USCRN, with all class 1 locations and a very well-thought-out setup, could we use those? I know that is only land and only the USA, but given the differences between all the other terrestrial data sets, could we use it to calibrate their accuracy? Heck, could we use it to check UAH/RSS readings over the area that the CRN covers?
    I will be very interested in how RSS and UAH fix their differences.

  39. By the way, the monthly posting on WUWT of Roy Spencer’s UAH global lower tropospheric temperature update seems to have been missing the past two months. It was +0.39 in October and +0.33 in November, in case anyone is interested.

  40. I am thinking that the pause will soon be known as the plateau and that we are on the back end of that plateau. Once this El Nino is over the cooling will be evident, even on the molested datasets, and they will need to adjust the last decade of temperatures down to hide the cooling as long as they can.

  41. Statistics-newbie question… what is the definition of “Statistically Significant”? Is it 1 standard deviation above a flatline, 2 standard deviations above a flatline, or what?

    • Here is the “or what”, with clear and easy-to-understand examples given at this site:
      http://blog.minitab.com/blog/adventures-in-statistics/how-to-interpret-a-regression-model-with-low-r-squared-and-low-p-values
      Here in this post, they are plotting temperature anomaly data vs year. OK.
      So, they want to know if there is a trend. OK.
      Maybe the trend is upward (warming); this could suggest that CO2 may be the cause, since CO2 has been increasing during these years.
      Maybe the trend is downward (cooling), obviously against CO2.
      Maybe there is no trend, obviously against CO2 also.
      So, the simple start is to use linear regression analysis. You have two examples from Minitab cited above; take a look at them and you will easily see how to interpret such data.
      The trend is exactly the same in both examples, but obviously there is much more variation in one example than in the other.
      Look at how they measured “how good the fit is”. They used two values: the p value and the R squared value.
      If the p value is <0.05, statisticians will declare “statistical significance”. However, in regression analysis, the R squared value is really what you want. This value will be between 0 and 100% (as in the example above, although other statisticians usually give it as between 0 and 1 instead of 0 and 100%). Obviously, if it is 100% (or 1) you have a perfect fit. So look at the R squared values for the two examples at Minitab and you will easily see that when you have large variations, even if the p value is <0.05, the R squared value decreases rapidly. This indicates that the single factor you selected to plot against is not the only variable contributing to (or causing) the trend. Obviously, if there is no upward or downward trend then there is no causation.
      Simple linear regression is just a beginning, but at least it should be done properly as a starting point, not only to indicate statistical significance but also to give you assurance that the modeling is working, or to show what other things you need to look at. A simple plot of temperature vs CO2 would be the thing to do!
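As a quick illustration of the p value and R squared described above, `scipy.stats.linregress` reports both. The numbers below are invented for the sketch, not a real anomaly series:

```python
import numpy as np
from scipy import stats

# Hypothetical anomaly series (made-up numbers, not a real data set):
# a weak trend buried in noise.
rng = np.random.default_rng(1)
years = np.arange(1996, 2015)
anomalies = 0.002 * (years - 1996) + rng.normal(0, 0.1, years.size)

res = stats.linregress(years, anomalies)
print(f"slope: {res.slope * 100:.3f} C/century")
print(f"p value: {res.pvalue:.3f}")         # < 0.05 -> "statistical significance"
print(f"R squared: {res.rvalue ** 2:.3f}")  # fraction of variance explained
```

With noisy data like this, the p value can be small while R squared stays low, which is exactly the distinction the Minitab examples make.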

      • To Brandon Gates: Very nice, and yes, R squared is what is needed.
        I will keep this for sure. Glad I revisited this post this morning.
        Thank you.

      • To Brandon Gates:
        Saved your graphs. Worked fine.
        Since you gave me something, here is something in return, in case you do not have a copy of the first paper published on CO2 and temperature. A classic.
        Here is a copy of it:
        http://onlinelibrary.wiley.com/doi/10.1002/qj.49706427503/pdf
        Old style science writing. I love Fig. 2 and the predictions in Table VI.
        I am in agreement on page 14 “The conclusion…….” We can still take in some CO2.
        There is an interesting discussion after the References section of the paper.
        Thanks again.
        rd50

      • RD50,
        You’re welcome for the graphs. I did not have a copy of Callendar’s paper. I agree with you, it is a joy to read and quite prescient … and not just from the perspective of the science. First sentence after the abstract:

        Few of those familiar with the natural heat exchanges of the atmosphere, which go into the making of our climates and weather, would be prepared to admit that the activities of man could have any influence upon phenomena of so vast a scale.

        If he only knew …

      • RD50, PS;
        I just finished reading the Discussion section. While they are properly skeptical questions and rebuttals — to be expected in the face of such novel research covering a very large scope — they are eerily familiar. One would hope that after nearly 80 years such basic cautions and objections would have been handled to the satisfaction of all. Instead, in some quarters they are hashed and rehashed as if they’d never been asked at all. This paper is an absolute goldmine for perspective. Again my thanks for referring me to it.

    • For a regression, it is usually the probability that the data could be explained by a line of zero slope. To be significant, a probability of 0.05 is the usual cutoff, and that’s about 2 S.E. (not S.D.). The 0.05 is negotiable, depending on how expensive the test is, whether people are going to die, etc.

    • Climate science says something like: warming is statistically significant if there is a 95% chance that warming is actually happening. This is slightly less than the 95.45% that two sigma represents.
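A rough sketch of where a confidence interval like the quoted “-1.106 to 1.106 °C/century” comes from: the fitted slope plus or minus about two standard errors, scaled up to a century. This uses synthetic, trendless monthly data, and it skips the autocorrelation correction that serious trend calculators such as Nick Stokes’ apply, so it is only the shape of the calculation:

```python
import numpy as np
from scipy import stats

# Synthetic, trendless monthly anomalies spanning Sep 1996 - Oct 2014
# (218 months); the values are invented noise, not RSS data.
rng = np.random.default_rng(2)
months = np.arange(218)
temps = rng.normal(0, 0.12, months.size)

res = stats.linregress(months, temps)
per_century = 12 * 100  # months per century
low = (res.slope - 2 * res.stderr) * per_century
high = (res.slope + 2 * res.stderr) * per_century
print(f"approx. 95% CI: {low:.3f} to {high:.3f} C/century")
```

If zero lies inside that interval, the trend is “not statistically significant” in the sense used in the post.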

  42. I am not a linear trend person; I normally deal with two samples at a time. Really simple stats.
    That means I use Student’s t or the Wilcoxon rank test for most of my stuff. I prefer the rank test because it makes no assumptions, although you get the same answer with the t test … usually.
    Anyway, if you run a Wilcoxon rank test on the first 10 months of this year’s NOAA series, 2014 is higher than any other year, although not always significantly. For example, 2010 has a prob of about 0.25, and 1998, 2003 and 2005 come in at a prob of about 0.1. But other years, like 1995, 1999, or 2003, have probs <0.01, which is highly significant. (I didn’t run all the years).
    If you like, you can run 2014 against a run of years to lower the S.E., and let’s include a warm year for a challenge: 2014 against 2009, 2010 and 2011 gives a prob of 0.02. Pretty significant. Or run 2013 and 2014 against 2003 and 2004 (hot and cold in each pair): that is about 0.07, so not 0.05 but not bad.
    So I can’t really argue with the L.R. people because I am not into L.R.s, but this non-significance business doesn’t pass the smell test.
    (And if I were into L.R., I would use the Kolmogorov-Smirnov test…)
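For anyone who wants to try the rank test described above, here is a minimal sketch using `scipy.stats.ranksums` on two made-up years of monthly anomalies. The values are illustrative only, not the NOAA numbers the comment used:

```python
from scipy import stats

# Illustrative monthly anomalies for two hypothetical years
# (invented values, not the NOAA series).
year_a = [0.65, 0.52, 0.71, 0.75, 0.74, 0.72, 0.70, 0.75, 0.72, 0.74]
year_b = [0.55, 0.60, 0.54, 0.58, 0.56, 0.61, 0.52, 0.57, 0.59, 0.56]

# Two-sided Wilcoxon rank-sum test: are the two samples drawn from
# distributions with the same location?
stat, p = stats.ranksums(year_a, year_b)
print(f"p = {p:.4f}")  # a small p means the two years' ranks differ clearly
```

Because the test uses only ranks, it makes no normality assumption, which is the reason the commenter prefers it to the t test.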

  43. Werner – First, I really don’t like all those straight lines in your first graph. You have sufficient resolution to show the actual temperature trend instead of speculative guesses. Secondly, I prefer the UAH satellites to RSS because RSS shows cooling in the twenty-first century and UAH does not. I happen to think that this cooling is an artifact of their new data handling procedure. UAH did not monkey with data handling and shows a straight horizontal line for the hiatus. And a straight horizontal line fits both satellite data sets for the eighties and nineties, as I shall show. But since we are talking of a 26-year interval, you should include the data for that interval by going back to 1979, when satellites came online. As it is, your first segment is shortened and shows only two of the five El Nino peaks visible in the satellite record before the beginning of the super El Nino. All five are needed for the analysis. I suggest you use UAH for that. Now let’s forget about all the trends you have seen and just do the analysis needed. The first thing is to make the actual temperature trend visible. Use a transparent red marker wide enough to cover the bulk of the noise covering the temperature curve. The noise is caused by cloudiness variations and hence has an approximate mean amplitude, with occasional outliers you can ignore. That transparent red band is the best possible way to define global air temperature, but it is not the global mean. The global mean can easily be defined for the segment in the eighties and nineties that shows ENSO oscillations. There are five El Nino peaks there, with La Nina valleys in between. The ENSO amplitude in the eighties and nineties is approximately 0.5 degrees Celsius. The same period shown in ground-based data has an amplitude of about 0.3 degrees Celsius, showing the difference in resolution between the two measurement techniques.
Once you have the red band drawn in, put a yellow dot at the halfway mark between an El Nino peak and its neighboring La Nina valley. These dots mark the locations of global mean temperature. Connecting them shows what happens to global mean temperature. Trying to do it by computer is not a substitute because of systematic errors in computer-generated curves. See figure 15 in my book “What Warming?” for an example. Doing this on the left side of the graph will give a horizontal straight line from 1979 to early 1997. It tells us that throughout this period of ENSO oscillations global mean temperature did not change. This is proof that El Ninos have nothing to do with global warming. But if not El Ninos, then what? This regular succession of ENSO oscillations is cut short by the super El Nino of 1998. It rises and falls quickly, and on both sides of it there is a La Nina depression, as there should be. By analogy with the eighties and nineties, there should be another El Nino rising in 1999 when the super El Nino has ended. It looks that way, but the temperature keeps rising until it is a third of a degree above that of the previous ENSO oscillation in the eighties and nineties. And what is more, the temperature stays at that level for the next seven years instead of coming down as expected. So what happened to ENSO? Well, it does show signs of life when the 2008 La Nina arrives. This really confused Trenberth, who had no idea why there was cooling when he expected warming. And as we expect, an El Nino is not far behind that La Nina and appears in 2010. The problem is, all these people hoping for the end of the hiatus were expecting another El Nino in 2014 or 2015 to save them but have gotten nothing yet. They are dreaming that an El Nino will cause global warming, but as I pointed out, El Ninos have nothing to do with warming. They are all paired with La Ninas, and the average of the two, not the El Nino peak itself, determines global mean temperature.
The overall picture is thus a continuing hiatus/pause, regardless of what an El Nino may or may not do. It is highly likely that the hiatus we are in was instigated by the huge amount of warm water carried across the ocean by the super El Nino. That super El Nino and its consequences are not well understood, but all the billions of research money they get from Uncle Sam are not available for silly things like trying to understand climate. The super El Nino was followed by that mysterious step warming that raised global temperature by a third of a degree Celsius and then stopped. This is actually the only warming the world has seen since 1979, and it is guaranteed not to be anthropogenic. As I pointed out, there was also another hiatus in the eighties and nineties, and the two don’t line up because of this step warming in between. If you take a 26-year segment of temperature history, it is comprised of two horizontal segments, one preceding the arrival of the super El Nino and one following its departure, separated by a temperature rise of a third of a degree at the beginning of the century. Because of this it is impossible to join them into a single curve. Oh, and one more thing. You have not heard of the hiatus of the eighties and nineties because all three ground-based temperature sources (GISS, NCDC, and HadCRUT) are faking a warming there that does not exist. It used to be called the “late twentieth century warming” and claims were made that it must be human caused because no one knew why it was there. I proved this fakery (see Figure 24) when I wrote my book. I even put a warning about it in the preface, but nothing happened. Their cooperation is proven by the fact that their data were computer processed by an identical procedure that left its footprints on publicly available temperature curves. To me, that is scientific fraud.
They are still brazenly raising the slope in the twenty-first century graph, with the absurd result that in their temperature curves the 2010 El Nino peak is now higher than the 1998 super El Nino. My advice is to not use any ground-based temperature curves if satellite data are available.

    • My advice is to not use any ground-based temperature curves if satellite data are available.
      Thank you for these thoughts. I believe Bob Tisdale would agree with much of it. I am not in a position to judge between RSS and UAH. You like UAH and Lord Monckton likes RSS. I give the statistics for both. And by giving the ground based data as well, the glaring discrepancies become apparent.

  44. Maybe we should do an additional graph showing that there hasn’t been any net warming since the MWP 1200 years ago.

  45. UAH version 5.5 update
    The November anomaly came in at 0.212, a small drop from 0.242 in October. The average after 11 months stays at 0.201 and the ranking stays at 7th. However the time for a slope that is not positive decreased from 9 years and 10 months to 6 years and 6 months.
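The “time for a slope that is not positive” quoted above can be computed by scanning for the earliest start month from which the least-squares trend to the present is non-positive. A minimal sketch on synthetic data (the function name and the series are my own inventions, not the UAH calculation itself):

```python
import numpy as np

# Sketch: length of the longest span ending at the latest month whose
# least-squares trend is <= 0 (synthetic data, not UAH anomalies).
def months_of_non_positive_slope(anoms):
    t = np.arange(len(anoms))
    for start in range(len(anoms) - 2):
        slope = np.polyfit(t[start:], anoms[start:], 1)[0]
        if slope <= 0:
            return len(anoms) - start  # months from this start to the end
    return 0

rng = np.random.default_rng(3)
series = np.concatenate([np.linspace(0.0, 0.3, 120),        # warming regime
                         0.3 + rng.normal(0, 0.05, 78)])    # flat "pause"
print(months_of_non_positive_slope(series) / 12, "years")
```

Earlier warming shortens the span, which is why the quoted figure can jump around as new months arrive.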

  46. Werner, I am wondering if you can tell me where GISS would rank this year if they were to use the 1980-2010 baseline?
    I don’t have the time to find out; if someone could tell me the answer I would be grateful.

    • In general, a different base line does not change the relative ranking. It just changes all of the anomalies either up or down. To give you an analogy, if you measure your height and that of your brother relative to the sidewalk outside, you would get certain values. But if you measured them relative to the second floor of a house, the numbers would be different, however that would not change who is taller.
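The height analogy can be checked in a few lines: changing the baseline subtracts the same constant from every anomaly, so the ordering of years is unchanged. The values below are toy numbers, not actual GISS anomalies:

```python
# A baseline change subtracts one constant from every anomaly, so the
# ranking of years is preserved. Toy values, not actual GISS data:
anoms_old_base = {"2010": 0.66, "2014": 0.68, "1998": 0.61}
offset = 0.12  # hypothetical difference between two baseline periods

anoms_new_base = {year: a - offset for year, a in anoms_old_base.items()}

def ranking(anoms):
    # years sorted warmest-first
    return sorted(anoms, key=anoms.get, reverse=True)

print(ranking(anoms_old_base))                            # ['2014', '2010', '1998']
print(ranking(anoms_old_base) == ranking(anoms_new_base)) # True
```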

  47. The figure is irrelevant, as what we should be looking at is a best-fit curve from the highest-grade noise filters available. Unfortunately, most of the really good ones are classified, so you cannot even publish the results, let alone the method. But even the crudest one, a rolling five-year average, shows a clearly cyclic waveform with subsidiary cycles superimposed, as well as sudden random spikes and troughs.
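The crudest filter the comment mentions, a rolling five-year average, is a one-line convolution. Here is a sketch on a synthetic monthly series with an ~11-year cycle plus noise (all values invented for illustration):

```python
import numpy as np

# Synthetic monthly series: an ~11-year (132-month) cycle plus noise.
rng = np.random.default_rng(4)
t = np.arange(420)
series = 0.2 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 0.15, t.size)

# Centered rolling five-year (60-month) mean via convolution.
window = 60
smoothed = np.convolve(series, np.ones(window) / window, mode="valid")

# The slow cycle survives; the month-to-month noise is largely removed.
print(smoothed.std() < series.std())
```

Note the trade-off the original commenter in 35 complains about: the smoothing that reveals the cycle also discards the high-frequency information.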

  48. From here, I found the following interesting and relevant to some points raised here.
    http://wattsupwiththat.com/2014/12/03/onward-marches-the-great-pause/
    Steven Goddard writes: “The graph compares UAH, RSS and GISS US temperatures with the actual measured US HCN stations. UAH and GISS both have a huge warming bias, while RSS is close to the measured daily temperature data. The small difference between RSS and HCN is probably because my HCN calculations are not gridded. My conclusion is that RSS is the only credible data set, and all the others have a spurious warming bias.”

  49. “In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over. What do you think?”
    I think you’re severely confused. <15 yrs? How about 1 yr? 0.05 yrs.? 3 yrs? Howcome statisticians can't talk good?

    • You are correct and this point was brought up earlier. I should have said between 10 and 15 years or something like that since a strong El Nino can cause significant warming over the course of a single year and it would not mean much.

Comments are closed.