On the Difference Between Lord Monckton's 18 Years for RSS and Dr. McKitrick's 26 Years (Now Includes October Data)

Guest Post by Werner Brozek, edited by Just The Facts:

WoodForTrees.org – Paul Clark – Click the pic to view at source

To make this discussion easy, I will make the following assumptions. Dr. McKitrick’s data went to April 2014; however, I will assume his data continue to October 2014. I will assume the lower error bar is zero for exactly 26 years into the past. I will also assume the slope since September 1996 is exactly 0, as per the following from Nick Stokes’ site:

Temperature Anomaly trend:

Sep 1996 to Oct 2014

Rate: 0.000°C/Century;

CI from -1.106 to 1.106

First of all, I will discuss Lord Monckton’s slope of zero for a period slightly longer than 18 years, shown by the flat turquoise line above that starts in September 1996. Another way of saying this is that, when we include error bars, there is a 50% chance that cooling occurred during this time and a 50% chance that warming occurred.

According to my interpretation of the numbers from Nick Stokes’ site, there is a 95% chance that the real slope for this period of over 18 years lies between -1.106 and +1.106 degrees C/century. The two sloping lines from September 1996 show this range. This implies there is only a small chance (about 2.5%) of cooling at more than 1.106 C/century, and the same small chance of warming at more than 1.106 C/century.
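To illustrate how such a trend and confidence interval are computed, here is a minimal sketch using ordinary least squares on a synthetic, trendless series of monthly anomalies. This is a simplification: Nick Stokes’ trend viewer adjusts the confidence interval for autocorrelation, which widens it considerably, so the numbers produced here are not directly comparable to his.

```python
import numpy as np

def trend_ci(anomalies, z=1.96):
    """Naive OLS trend of monthly anomalies in deg C per century,
    with a 95% confidence interval (no autocorrelation correction)."""
    y = np.asarray(anomalies, dtype=float)
    x = np.arange(len(y), dtype=float)          # time in months
    xc = x - x.mean()
    slope = np.sum(xc * y) / np.sum(xc * xc)    # deg C per month
    resid = y - y.mean() - slope * xc
    # standard error of the slope estimate
    se = np.sqrt(np.sum(resid ** 2) / (len(y) - 2) / np.sum(xc ** 2))
    per_century = 12 * 100                       # months -> century
    return (slope * per_century,
            (slope - z * se) * per_century,
            (slope + z * se) * per_century)

# A synthetic trendless series about as long as Sep 1996 - Oct 2014
rng = np.random.default_rng(0)
rate, lo, hi = trend_ci(rng.normal(0.0, 0.1, 218))
```

With uncorrelated noise the interval comes out much narrower than the ±1.106 above; real temperature anomalies are strongly autocorrelated, which is why the properly adjusted interval is so wide.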

Before I discuss Dr. McKitrick’s 26 years, I would like to offer this quote from Peterson et al., 2009: State of the Climate in 2008, Bulletin of the American Meteorological Society.

“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

From the above, it appears that climate scientists do not attach a huge amount of importance to the time for a slope of zero, but rather to the time that the warming is not statistically significant at the 95% level.

What Dr. McKitrick has found is that for RSS, the warming is not statistically significant at the 95% level for 26 years. In other words, if WoodForTrees.org (WFT) gives a warming rate of X C/year over that period, the error bars are also +/- X C/year, so the lower bound is exactly zero.

According to WFT, there has been warming over the last 26 years at the rate of 0.0123944 C/year. This means we can be 95% sure the real warming rate is 0.0123944 +/- 0.0123944 C/year. Adding and subtracting gives, at the 95% level, a range between zero and 0.0247888 C/year (0.025 C/year to two significant digits). These two bounds are indicated on the graph above starting at November 1988. Since the lower bound is zero and therefore not positive, it is reasonable to say the warming since November 1988 is not statistically significant, at least according to RSS.
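The arithmetic above can be checked directly; the rate is the WFT figure quoted in the text, and the borderline-significance condition is that the error bar equals the rate itself.

```python
rate = 0.0123944       # C/year warming over 26 years, from WFT
margin = rate          # borderline significance: the error bar equals the rate
lower = rate - margin  # 0.0, so a zero trend cannot be ruled out
upper = rate + margin  # 0.0247888, i.e. 0.025 C/year to two significant digits
```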

Analogous to the case with no warming, there is a small chance that the warming over 26 years is larger than 0.025 C/year. However there is the same small chance that there has been cooling over the last 26 years according to Dr. McKitrick’s calculations using the RSS data.

In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on some data sets. At the moment, only the satellite data have flat periods of longer than a year. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data are available on WoodForTrees.org (WFT). All of the data on WFT are also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.

1. For GISS, the slope is not flat for any period that is worth mentioning.

2. For Hadcrut4, the slope is not flat for any period that is worth mentioning. Note that WFT has not updated Hadcrut4 since July and it is only Hadcrut4.2 that is shown.

3. For Hadsst3, the slope is not flat for any period that is worth mentioning.

4. For UAH, the slope is flat since January 2005 or 9 years, 10 months. (goes to October using version 5.5)

5. For RSS, the slope is flat since October 1, 1996 or 18 years, 1 month (goes to October 31).
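The start-month search described above can be sketched as follows. The series here is synthetic, chosen only so that the trend turns non-positive partway through; it stands in for a real anomaly record.

```python
import numpy as np

def flat_since(anomalies, min_len=12):
    """Earliest index from which the OLS slope to the present is
    at least slightly negative; None if no such period of min_len."""
    y = np.asarray(anomalies, dtype=float)
    for start in range(len(y) - min_len):
        slope = np.polyfit(np.arange(len(y) - start), y[start:], 1)[0]
        if slope < 0:
            return start
    return None

# Synthetic anomalies: a 12-month rise, then a slight 24-month decline
y = list(np.linspace(0.0, 0.55, 12)) + list(0.5 - 0.001 * np.arange(24))
idx = flat_since(y)   # the trend is non-positive from this month onward
```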

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line at the top indicates that CO2 has steadily increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the left axis shows only the temperature anomaly.

The actual numbers are meaningless since the two slopes are essentially zero. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the two sets.

Section 2

For this analysis, data were retrieved from Nick Stokes’ Trendviewer, available on his website at http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 14 and almost 22 years according to Nick’s criteria. CI stands for the confidence limits at the 95% level.

Dr. Ross McKitrick has also examined these periods and has slightly different numbers for the three data sets that he analyzed. I will also give his times.

The details for several sets are below.

For UAH: Since June 1996: CI from -0.037 to 2.244

(Dr. McKitrick says the warming is not significant for 16 years on UAH.)

For RSS: Since December 1992: CI from -0.018 to 1.774

(Dr. McKitrick says the warming is not significant for 26 years on RSS.)

For Hadcrut4.3: Since April 1997: CI from -0.010 to 1.154

(Dr. McKitrick said the warming was not significant for 19 years on Hadcrut4.2 going to April. Hadcrut4.3 would be slightly shorter; however, I do not know what difference it would make to the nearest year.)

For Hadsst3: Since December 1994: CI from -0.007 to 1.723

For GISS: Since February 2000: CI from -0.043 to 1.336

Note that all of the above times, regardless of the source, with the exception of GISS, are longer than the 15 years which NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.

Section 3

This section shows data about 2014 and other information in the form of a table. The five data sources are listed along the top and repeated at intervals so they remain visible throughout. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the columns are the following rows:

1. 13ra: This is the final ranking for 2013 on each data set.

2. 13a: Here I give the average anomaly for 2013.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not yet include records set so far in 2014 such as Hadsst3 in June, etc.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0. Periods of under a year are not counted and are shown as “0”.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. sy/m: This is the period, in years and months, corresponding to row 8. Depending on when the update was last done, the months may be off by one month.

10. McK: These are Dr. Ross McKitrick’s number of years for three of the data sets.

11. Jan: This is the January 2014 anomaly for that particular data set.

12. Feb: This is the February 2014 anomaly for that particular data set, etc.

21. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.

22. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 50 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.

Source  UAH    RSS    Had4   Sst3   GISS
1.13ra  7th    10th   9th    6th    7th
2.13a   0.197  0.218  0.492  0.376  0.59
3.year  1998   1998   2010   1998   2010
4.ano   0.419  0.55   0.555  0.416  0.66
5.mon   Apr98  Apr98  Jan07  Jul98  Jan07
6.ano   0.662  0.857  0.835  0.526  0.92
7.y/m   9/10   18/1   0      0      0
8.sig   Jun96  Dec92  Apr97  Dec94  Feb00
9.sy/m  18/5   21/11  17/7   19/11  14/9
10.McK  16     26     19     –      –
Source  UAH    RSS    Had4   Sst3   GISS
11.Jan  0.236  0.261  0.508  0.342  0.68
12.Feb  0.127  0.161  0.305  0.314  0.43
13.Mar  0.137  0.213  0.548  0.347  0.70
14.Apr  0.184  0.251  0.658  0.478  0.71
15.May  0.275  0.286  0.596  0.477  0.78
16.Jun  0.279  0.346  0.620  0.563  0.61
17.Jul  0.221  0.351  0.543  0.551  0.52
18.Aug  0.117  0.193  0.669  0.644  0.69
19.Sep  0.186  0.206  0.593  0.574  0.76
20.Oct  0.243  0.272  0.613  0.529  0.76
Source  UAH    RSS    Had4   Sst3   GISS
21.ave  0.201  0.254  0.565  0.482  0.66
22.rnk  7th    7th    1st    1st    1st
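Row 21 can be reproduced from rows 11–20; as a worked example, here is the year-to-date average for RSS, using the monthly anomalies from the table.

```python
# RSS monthly anomalies for Jan-Oct 2014 (rows 11-20 of the table)
rss_2014 = [0.261, 0.161, 0.213, 0.251, 0.286,
            0.346, 0.351, 0.193, 0.206, 0.272]
ave = sum(rss_2014) / len(rss_2014)   # row 21 for RSS: 0.254
```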

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 5.5 was used since that is what WFT uses.

http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.3.0.0.monthly_ns_avg.txt

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2014 in the form of a graph, see the WFT graph below. Note that Hadcrut4 is the old version that has been discontinued. WFT does not show Hadcrut4.3 yet.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.

Appendix

In this part, we summarize the data for each set separately.

RSS

The slope is flat since October 1, 1996 or 18 years, 1 month. (goes to October 31)

For RSS: There is no statistically significant warming since December 1992: CI from -0.018 to 1.774.

The RSS average anomaly so far for 2014 is 0.254. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.

UAH

The slope is flat since January 2005 or 9 years, 10 months. (goes to October using version 5.5 according to WFT)

For UAH: There is no statistically significant warming since June 1996: CI from -0.037 to 2.244. (This is using version 5.6 according to Nick’s program.)

The UAH average anomaly so far for 2014 is 0.201. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.

Hadcrut4.3

The slope is not flat for any period that is worth mentioning.

For Hadcrut4: There is no statistically significant warming since April 1997: CI from -0.010 to 1.154.

The Hadcrut4 average anomaly so far for 2014 is 0.565. This would rank it as 1st place if it stayed this way. 2010 was the warmest at 0.555. The highest ever monthly anomaly was in January of 2007 when it reached 0.835. The anomaly in 2013 was 0.492 and it is ranked 9th.

HADSST3

For HADSST3, the slope is not flat for any period that is worth mentioning. For HADSST3: There is no statistically significant warming since December 1994: CI from -0.007 to 1.723. The HADSST3 average anomaly so far for 2014 is 0.482. A new record is guaranteed. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. This is also prior to 2014. The anomaly in 2013 was 0.376 and it is ranked 6th.

GISS

The slope is not flat for any period that is worth mentioning.

For GISS: There is no statistically significant warming since February 2000: CI from -0.043 to 1.336.

The GISS average anomaly so far for 2014 is 0.66(4). This would rank it as first place if it stayed this way. 2010 was the warmest previously at 0.66(1). The highest ever monthly anomaly was in January of 2007 when it reached 0.92. The anomaly in 2013 was 0.59 and it is ranked 7th.

Conclusion

There are different ways of deciding whether or not we are in a pause. We could say that if we have a flat slope for X number of years, we are in a pause. Or we could say that if the warming is not statistically significant for over 15 years, we are in a pause. Or we could say that as long as the satellite data sets do not break the 1998 record, we are still in a pause.

In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over. What do you think?


198 Comments
Evan Jones
Editor
December 2, 2014 6:54 pm

I like using the “Big 4” average. I think all four are higher trend than the actual surface trend (2 being satellites and doing LT right, and 2 surface metrics doing it just plain wrong). It’s what we have.
That takes us back to 2001. This also has the salubrious effect of removing the 1998 El Nino and the severe 1999–2000 La Nina that followed. Anywhere between 1998 and 2000 risks being a cherrypick. So one must be careful.

Werner Brozek
Reply to  Evan Jones
December 2, 2014 7:12 pm

I assume you are talking about WTI. It was discontinued in May when we had the last Hadcrut3 reading, so it would not be accurate at 2001 if we had Hadcrut4 and if it were updated to October.

Evan Jones
Editor
Reply to  Werner Brozek
December 2, 2014 8:25 pm

WTI also has haddy4, though. So I use those and eyeball it.

Werner Brozek
Reply to  Werner Brozek
December 2, 2014 8:37 pm

Actually that is not much help either! It has Hadcrut4.2 which stopped in July as my last graphic shows. And as we discussed in the last two posts, Hadcrut4.3 is warmer in recent years than Hadcrut4.2. As well the last three months have been hotter than average.
In addition, WFT uses version 5.5 for UAH which is cooler than version 5.6.

David
Reply to  Evan Jones
December 3, 2014 4:14 am

Not sure it’s a good idea to mix satellite data with site data, the methodology being so different. At some point, if satellite data keeps telling a different story, we will have to decide which set better describes reality (instead of hiding its impact in an average).

Werner Brozek
Reply to  David
December 3, 2014 6:43 am

I agree. However it would be nice if UAH and RSS were not so far apart since 1998. One goes down about as much as the other goes up.

norah4you
December 2, 2014 6:59 pm

Contrary to the article writer above, I find it hard to find any significant warming whatsoever using the models and methods used. The models are worse than anything I have seen during the 43 years since my system programmer exam; they lack the most needed parameters (I myself used 43 parameters back in 1993 to establish sea water levels from peak Stone Age up to 1000 AD). The so-called “models” show their authors lack all the basic Theories of Science one ought to expect anyone calling him/herself a scientist to have learnt!
Thus the only valid comment is the one we used back in the 70’s: bad input -> bad output

Werner Brozek
Reply to  norah4you
December 2, 2014 7:21 pm

On rows 9 and 10 of the table, I give the times from Nick Stokes and Dr. McKitrick. Both agree that the times are longer than 15 years for the three sets both analyzed.

norah4you
Reply to  Werner Brozek
December 2, 2014 7:32 pm

Fallacy: argumentum ab auctoritate. Fallacies in argumentation as well as false assumptions. Consensus is a political term with no connection whatsoever to Theories of Science.
Science axioms:
In Theories of Science it is never possible to prove a thesis right, only to falsify it.

Reply to  norah4you
December 3, 2014 12:35 am

“The Duhem–Quine thesis (also called the Duhem–Quine problem, after Pierre Duhem and Willard Van Orman Quine) is that it is impossible to test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses). The hypothesis in question is by itself incapable of making predictions. Instead, deriving predictions from the hypothesis typically requires background assumptions that several other hypotheses are correct; for example, that an experiment worked as designed or that previous scientific knowledge was accurate. For instance, as evidence against the idea that the Earth is in motion, some people noted that birds did not get thrown off into the sky whenever they let go of a tree branch. Later theories of physics and astronomy could account for this fact while also positing a moving Earth.
Although a bundle of hypotheses (i.e. a hypothesis and its background assumptions) as a whole can be tested against the empirical world and be falsified if it fails the test, the Duhem–Quine thesis says it is impossible to isolate a single hypothesis in the bundle. One solution to the dilemma thus facing scientists is that when we have rational reasons to accept the background assumptions as true (e.g. scientific theories via evidence) we will have rational—albeit nonconclusive—reasons for thinking that the theory tested is probably wrong if the empirical test fails.”

norah4you
Reply to  Steven Mosher
December 3, 2014 1:27 am

I have problems with that – that’s as far from Theories of Science as can be.
Empirical evidence is valid and models aren’t, no matter who presents them.
That problem also exists in the 2nd String Theory in one single point (see my blog article regarding string theory at Norah4you). It doesn’t matter if all but one part shows “green light” – as long as there is one single part of a thesis that doesn’t, the whole thesis falls to pieces.

Reply to  Steven Mosher
December 3, 2014 10:01 am

It’s basically the same idea as Occam’s razor – it would be rational to sacrifice the single hypothesis whose repudiation will require us to make the lowest number of changes to our overall world view. However, people become attached to different aspects of their world views, and thus the arguing begins.

Duster
Reply to  Steven Mosher
December 3, 2014 11:11 am

The short form in which this is commonly expressed in considering a laboratory result is “all other things being equal.” You must simplify or complicate an explanation as necessary to eliminate false-to-fact results. The old physics quiz joke that begins “assume a frictionless, spherical …” also addresses this problem in a tongue-in-cheek manner.
Assuming all other things are equal works well in laboratories, but is far less satisfactory in accounting for field results. The willful disregard of the truism that reality is always more complex than a lab environment can be regarded as a character flaw. It is a flaw very clearly expressed in the Climategate emails. A good example is Trenberth’s complaint that the data must be wrong. Elegant theory v. messy reality, he chose theory and blamed reality.

December 2, 2014 7:02 pm

How about the “fact” that the 1930s were warmer than today before the adjustments and homogenizations?
I think that Willis’s Buoy microcosm illustration shows how the temperature records have been altered to show global warming. Don’t know how they did it or why they did it, but I think the temperatures have been adjusted. I like this GISS graph – very alarming???:
http://suyts.files.wordpress.com/2013/02/image_thumb265.png?w=636&h=294

Werner Brozek
Reply to  J. Philip Peterson
December 2, 2014 7:31 pm

This sort of thing was also discussed in the post here:
http://wattsupwiththat.com/2014/11/05/hadcrut4-adjustments-discovering-missing-data-or-reinterpreting-existing-data-now-includes-september-data/
Then there is also the question as to whether this was a North American phenomenon or a global one.

Bob Boder
Reply to  Werner Brozek
December 3, 2014 9:29 am

What always amazes me is that the historical data gets adjusted and manipulated and shows a trend, but the recent data that anyone can look at and interpret shows little or no trend. Funny how observation controls the environment; maybe the environment is a quantum event.

The Ghost Of Big Jim Cooley
Reply to  J. Philip Peterson
December 3, 2014 1:18 am

PLEA TO OTHERS HERE: Is it possible to get this graph in degrees C? Can anyone run it and present it as a pic? It would be a very powerful tool to use in countering ‘global warming’.

Bair Polaire
Reply to  The Ghost Of Big Jim Cooley
December 3, 2014 3:05 am

Central England Temperatures in degC and Global CO2 Emissions from 1659 to 2009
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7c87805970b-pi

Reply to  The Ghost Of Big Jim Cooley
December 3, 2014 3:30 am

Brilliant.
I’ve been asking for this graph on the Guardian website and all I ever get is Mauna Loa vs HADCrut.
The fact that this graph isn’t in AR5 is telling in itself.
After all, the UNFCCC says that it believed warming was due to man’s emissions, and the IPCC was set up to find the evidence. Graphing that is what the IPCC was meant to be doing.

Reply to  J. Philip Peterson
December 3, 2014 11:39 am

Even if the 1930s were not warmer than today, it makes little difference to people trying to survive life’s everyday real problems, primarily because they don’t fear minor warming.
My guess is that the alarmists’ guess about the change in temperature is so insignificant anyway, and so well within legitimate error bars and peak-to-peak noise, that there is no rational evidence which demands action.
You’d have to be a certified loon to allow so much harm because the temperature might go up 1°- 3°C at some magical time in the future.
Today, we have the resources to fix nearly every problem in the world and what do we do with our well earned wealth?
The problem with pushing CAGW is that nobody cares enough to do it because:
a. They don’t believe it themselves as demonstrated by their activities.
b. Everybody knows alarmists have failed to predict anything correctly.
c. It’s too late to save us.
d. The solutions of buying more toys which are not fit for purpose, Cap&Trade and raising taxes will do virtually nothing to affect earth’s temperature.
I’m not convinced yet either.
IPCC, please send more fear mongering propaganda.

markl
December 2, 2014 7:03 pm

“In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over.” Did you mean ‘more’ (not “less”) than 15 years in the next to the last sentence?

Werner Brozek
Reply to  markl
December 2, 2014 7:35 pm

I did mean less as stated. So a rapid warming over 10 years means a lot. But if the same rise happened over 30 years, it would not mean much.

Reply to  Werner Brozek
December 3, 2014 10:07 am

Then you could have said: A temperature increase so steep that it becomes significant in 15 years or less. GUIDE your readers, don’t force them to parse your arguments backwards.

Louis
Reply to  Werner Brozek
December 3, 2014 10:24 am

Saying “less than 15 years” is ambiguous. It left me scratching my head. One is less than 15. So does that mean that if UAH and RSS show statistically significant warming for only 1 year (perhaps during an El Nino), you would declare the pause over? Perhaps you should specify a range, such as 10-15 years.

Werner Brozek
Reply to  Werner Brozek
December 3, 2014 11:51 am

Excellent point! I should have said more than 10 but less than 15. Here is what Nick’s site said for a certain period for RSS:
Temperature Anomaly trend
Nov 1996 to Aug 1998 
Rate: 23.474°C/Century;
CI from 18.299 to 28.650;

milodonharlani
December 2, 2014 7:06 pm

Twenty six years takes us back to the conspiracy between Jim “Venus Express” Hansen & CO Senator Tim “Populution” Wirth to turn off the air conditioning in the Senate hearing room.
Poetic justice that the warming of c. 1977-88 ended just as the co-conspirators pronounced man-made global warming a problem of cataclysmic, biblical proportions.
God obviously has a sense of humor.

Old England
Reply to  milodonharlani
December 3, 2014 1:06 am

I seem to recall that every global climate conference has experienced unseasonably cold weather …. so you may be right and ‘somebody up there’ is trying to tell us something …….

ozspeaksup
Reply to  Old England
December 3, 2014 5:27 am

may they freeze in peru then!
😉

Werner Brozek
Reply to  Old England
December 3, 2014 6:54 am

And did you know that the United States has not had a category 3 or higher hurricane strike the east coast since WUWT started? Should we call it the “Watts Effect”?

Richard G
Reply to  Old England
December 3, 2014 9:43 am

I prefer the Watts Up With That Effect. As in Watt happened to global warming or Watt happened to all the Atlantic hurricanes. Every time we go to a global warming conference we freeze our ass off, Watts up with that.

Catherine Ronconi
Reply to  Old England
December 3, 2014 9:57 am

So WUWT is responsible for global cooling since 2006?

Richard G
Reply to  Old England
December 3, 2014 12:01 pm

No, I just think it’s poetic justice for the CAGW alarmists.

Werner Brozek
December 2, 2014 7:06 pm

RSS Update:
The November anomaly came out at 0.246, a drop of 0.028 from October. The average is now 0.253 so it still ranks in 7th place after 11 months.
The flat part increases to 18 years and 2 months. It missed being 18 years and 3 months by the smallest of margins with a positive slope of 6.30663e-07 per year.

JBP
December 2, 2014 7:08 pm

Oh can I get back to you in say, oh, maybe 2 years with an answer?
It sounds as though there has been no warming for a couple of years, meanwhile CO2 has increased. Sounds like time to shut down coal plants by implementing new regulations to protect the ozone.

Werner Brozek
Reply to  JBP
December 2, 2014 11:29 pm

What I find really ironic is that they want strict rules for mercury, but ban incandescent bulbs while allowing bulbs with mercury.

BFL
Reply to  Werner Brozek
December 3, 2014 1:04 pm

The rules did not ban “specialty” bulbs. One can still buy (for example) 100 watt bulbs in that class:
https://www.1000bulbs.com/category/100-watt-standard-shape-light-bulbs/

Reply to  Werner Brozek
December 3, 2014 1:32 pm

The incandescent ban was just the compact fluorescent manufacturers getting a return on their investments before LEDs send them to oblivion. GE got a great return for their lobbying efforts.

Dudley Horscroft
December 2, 2014 7:08 pm

Quoted in the paragraph immediately above “Section 1”:
“At the moment, only the satellite data have flat periods of longer than a year.”
Yet I recall seeing a post some months ago giving the details of results from NOAA’s new set of land stations in the USA – all specially sited to ensure that they were properly away from ‘odd’ influences and not likely to be moved in the next umpteen years. This showed not only no warming, but a very slight decrease over the 10 years since the full network commenced operating.
Contradiction?

Werner Brozek
Reply to  Dudley Horscroft
December 2, 2014 7:48 pm

I have never quoted NOAA since WFT does not cover it. However a few months ago, all global data sets that I cover had flat periods of several years. But that has changed over the last few months. However it may be different for the United States alone.

Robin.W.
December 2, 2014 7:17 pm

Here in Australia I’m not looking forward to their ABC and the lazy MSM screaming “2014 hottest year ever” etc. Tim Flannery will be interviewed for sure.
Not according to the satellites though, which are the only trustworthy source in my opinion.
Great post Werner, thank you.

Reply to  Robin.W.
December 2, 2014 7:40 pm

Sats are only trustworthy if measuring sea level rise. (Because alarmists always quote sat sea level rise, never tide gauges when talking about trends.) Otherwise they should always be ignored. ;-P

Steve Oregon
December 2, 2014 7:35 pm

Werner, please rephrase this.
“In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over.”

Werner Brozek
Reply to  Steve Oregon
December 2, 2014 7:55 pm

If Nick Stokes and Dr. McKitrick were to say the warming is statistically significant over a period as short as 10 years on RSS, rather than insignificant for 22 and 26 years respectively, then I would agree the pause is over.

December 2, 2014 7:36 pm

This post should get an award for the worst choice of a graphic in a WUWT article.

December 2, 2014 7:37 pm

“In my opinion, a combination of UAH and RSS needs to show statistically significant warming for less than 15 years before I am comfortable with declaring the pause over. What do you think?”
You can’t really do that, because you can’t get a combined variance that isn’t extremely large. They aren’t the same population. To give just one idea of the problem, as you say, RSS shows zero trend since Sep 1996, with range ±1.106 C/century. But UAH5.6 shows 1.042, with lower bound -0.131. So each says the other is almost outside its CI range.
On Dr McKitrick’s numbers, what he actually said was:
“I propose a robust definition for the length of the pause in the warming trend over the closing subsample of surface and lower tropospheric data sets. The length term J_MAX is defined as the maximum duration J for which a valid (HAC-robust) trend confidence interval contains zero for every subsample beginning at J and ending at T – m, where m is the shortest duration of interest.”
You need to figure out what that means before comparing it with conventional significance. HAC-robust is the key. Statistically insignificant as normally defined says the result could have arisen by chance. HAC-robust means that it could have arisen allowing for heteroskedasticity or some long-term persistence. The more alternative ways you postulate, the longer you can stretch out lack of significance. But I don’t think you’ll see Ross’ definition taken up any time soon.
On RSS, here is a plot that shows trends going back to past times from October 2014. A zero trend pause starts when the curve first crosses the x-axis. You can see how RSS is an outlier. It is the dark blue curve at the bottom. UAH is the light blue at the top.
http://www.moyhu.org.s3.amazonaws.com/pics/Oct1118.png
BTW, you said UAH had zero trend since 2005. My check said 2008.
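[Editor’s note: as a rough illustration of the trend-plus-CI numbers traded in this exchange, here is a minimal Python sketch of an ordinary-least-squares trend with a naive 95% confidence half-width. The data are synthetic stand-ins, not the actual RSS series, and no autocorrelation correction is applied; the real calculations on Nick Stokes’ site adjust for serial correlation, which widens the interval considerably.]

```python
import numpy as np

def trend_ci(anoms, z=1.96):
    """OLS trend of a monthly anomaly series, in deg C per century,
    with a naive 95% confidence half-width.  No autocorrelation
    correction is applied here; real analyses adjust for serial
    correlation, which widens the interval considerably."""
    t = np.arange(len(anoms)) / 12.0          # time in years
    slope, intercept = np.polyfit(t, anoms, 1)
    resid = anoms - (slope * t + intercept)
    sxx = np.sum((t - t.mean()) ** 2)
    se = np.sqrt(np.sum(resid ** 2) / (len(t) - 2) / sxx)
    return slope * 100.0, z * se * 100.0      # convert per-year to per-century

# Made-up flat series standing in for "zero trend since Sep 1996"
rng = np.random.default_rng(0)
anoms = 0.25 + rng.normal(0.0, 0.1, 218)      # 218 months, true trend zero
trend, half_width = trend_ci(anoms)
print(f"trend = {trend:+.3f} +/- {half_width:.3f} C/century")
```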

Werner Brozek
Reply to  Nick Stokes
December 2, 2014 8:17 pm

Thank you very much for that, Nick! I am merely the driver of the car here. You are the mechanic.
However it would be nice if RSS and UAH were closer together.
As for the times, you are right about the lower time for UAH, but that is for version 5.6. I am using version 5.5 since that is what WFT gives.

Catherine Ronconi
Reply to  Nick Stokes
December 2, 2014 8:48 pm

Apparently you have never studied statistics, or if you have, you weren’t paying attention.
You can’t have an outlier when there are only two series, ie RSS & UAH.
I mean, sheesh. To the Nth.

basicstats
Reply to  Nick Stokes
December 3, 2014 3:45 am

Remedial statistics for Nick Stokes, I fear. Everything, significant or not, arises ‘by chance’ in statistical hypothesis testing.
More seriously, the idea that Prof McKitrick is using an unconventional definition just because it makes it harder to reject the null hypothesis of stationarity is wrong. It amounts to saying one should prefer less efficient/precise/robust tests. See Mosher’s erudite (well, I didn’t know the philosophy part) comment above. The test is for a stationary time series, not the auxiliary (ancillary?) hypotheses of homoskedasticity or AR(1). It is clearly not right to reject stationarity because a time series fails the test through failing one of these incidental hypotheses incorporated into the null distribution in order to get a test statistic. The HAC test makes rejection for spurious reasons less likely, however galling that may be to climate folk.

anna v
December 2, 2014 8:16 pm

“What do you think?”
I think future generations of scientists will be laughing at all this navel gazing. If one looks at the Holocene temperature record, the top of each semi-periodic variation lasts at least a hundred years; why should our optimum be different?

ossqss
December 2, 2014 8:31 pm

Permit me to ask a simple qualifying question.
At what point in history were we able to start measuring global temp to hundredths of a degree?

Werner Brozek
Reply to  ossqss
December 2, 2014 8:46 pm

Good point! Think of it as comparing two teams. One scores 16 goals in 20 games and the other scores 24 goals in 20 games. The average for the first team is 0.8 goals per game and the other is 1.2 goals per game. No team can score a tenth of a goal, but numbers can be compared to see which is the highest scoring team.

Catherine Ronconi
Reply to  Werner Brozek
December 2, 2014 8:49 pm

No, they can’t. With only two teams, you have the higher, not the highest.
The more I read of you Team advocates, the more I laugh.

Werner Brozek
Reply to  Werner Brozek
December 2, 2014 9:16 pm

With only two teams, you have the higher, not the highest.
Sorry! It has been 50 years since I learned that in high school, and it slipped my mind.

Catherine Ronconi
Reply to  Werner Brozek
December 2, 2014 9:20 pm

As long as the important things haven’t gone missing, no matter.

george e. smith
Reply to  Werner Brozek
December 2, 2014 10:41 pm

So if you have three data sets no two the same, it would seem that one of the sets is higher than the other two.
But you have no basis for declaring that set the highest.
No matter how many sets you have, you still can’t say that there is no other set that is the highest set.
Same principle as inability to prove a theory correct. Only need one repeatable contrary result to disprove.
But you can say that one set is the highest of the currently known sets. That is true even if there are currently only two known sets.
I’m tempted to make some learned comment in Maori; but it seems that English is the preferred language of this blog.

ossqss
Reply to  Werner Brozek
December 3, 2014 7:04 am

Werner, my question was a leading one. Most of the temperature references in the data above are actually stated in thousandths of a degree. I just don’t see how we can convert low-resolution historic temp readings into that fine a number without major intervention.
Do any global reporting stations actually report temperature to the thousandth of a degree?
Example question: how much warmer was 1998 than 1934, before adjustment, in thousandths of a degree?

Werner Brozek
Reply to  Werner Brozek
December 3, 2014 8:53 am

I use the numbers they provide. For all recent numbers, we could probably say +/- 0.1. And for numbers a hundred years ago, we could probably say at least +/- 0.5. On top of that, there were few thermometers around the globe a hundred years ago. But some people would like to turn the world upside down based on these numbers.

ossqss
Reply to  Werner Brozek
December 3, 2014 9:42 am

.1 is not .001, as is displayed in the post’s temperature references. Perhaps I am off base in not understanding how we represent thousandths or hundredths of a degree without actually measuring it. This year, as an example, may be the warmest ever by .02 of a degree, and yet we don’t actually measure to that level.
Thanks for your reply and continued good work. You too JTF!

Werner Brozek
Reply to  Werner Brozek
December 3, 2014 11:31 am

This year, as an example, may be the warmest ever by .02 of a degree, and yet we don’t actually measure to that level.
Suppose you measure the temperature of 50 cities to the nearest degree and find that all are 24 C to the nearest whole number. Then a year later, 49 cities are at 24 C but one is 25 C. Then the average in the first year was 24, but 24.02 in the second year. Of course it should be rounded to 24, but they do not do this. So just take their extra decimals with a chunk of salt.
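[Editor’s note: Werner’s 50-city arithmetic can be checked directly; a minimal sketch:]

```python
# Werner's worked example: readings limited to whole degrees can still
# produce an average quoted to two extra decimals.
year1 = [24] * 50              # 50 cities, all read 24 C
year2 = [24] * 49 + [25]       # a year later, one city reads 25 C
avg1 = sum(year1) / len(year1)
avg2 = sum(year2) / len(year2)
print(avg1, avg2)              # 24.0 24.02
```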

Alx
Reply to  Werner Brozek
December 3, 2014 11:45 am

Let’s say one team scores 1 goal in a thousand games and the other team scores 3 goals in a thousand games. We can say the second team is higher scoring, at .003 goals per game.
But…who cares, the difference is moot, neither team is winning any games due to their offense.
So the question becomes: when is a statistical difference significant? In the case of the fanciful and fleeting thing called global temperature, even though fantastical claims are made, nobody knows.
In games, the scores are not recorded differently, there are no competing methods of determining what the final score is, and games are not recorded in thousandths of a goal to begin with. Unfortunately for climate science this is not the case. Too many data streams of temperature disagree and change over time, there is no agreement as to the best method of recording temperature, and “since we do not know what the temperatures/scores are, let’s just average to get a number” is not something sports have to deal with.
In conclusion, climate science has two flaws in this area: not being able to determine significance (doomsday speculation is not helpful here, and neither is presuming any and every climate anomaly must be due to warming), and not being able to accurately determine global temperature.
So even before mentioning the poorly performing models, CO2 and man’s 3.5% contribution to the 400 parts per million in the atmosphere, and hypotheses that can never be disproven, one wonders how climate science has gained any credibility at all.

Werner Brozek
Reply to  Werner Brozek
December 3, 2014 12:00 pm

man’s 3.5% contribution to the 400 parts per million
I do not want to debate this here since it is off topic. While I agree with the 3.5% per year, I accept that the cumulative effect is about 40% from 280 ppm in 1750 to 400 ppm now. I know others disagree. But I thought I should mention this for new readers who are not aware of the controversy here.

ossqss
Reply to  Werner Brozek
December 3, 2014 6:01 pm

Thanks, Werner, that is what I was after. Now we can understand how we arrive at such temperature numbers: through averages of averages, down to thousandths of a degree. 🙂
Regards, Ed.

Reply to  ossqss
December 3, 2014 7:32 pm

HadCRUT is recorded to one thousandth of a degree, yet most data was recorded to the nearest whole degree!

charliexyz
December 2, 2014 8:32 pm

I disagree with your interpretation of the State of the Climate Report.
You quoted it as : ”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
And then you gave the above this interpretation: “From the above, it appears that climate scientists do not attach a huge amount of importance to the time for a slope of zero, but rather to the time that the warming is not statistically significant at the 95% level.”
My interpretation is that they ran a whole bunch of simulations and fewer than 5% of the 15-year periods had a zero slope, where “zero slope” is the best estimate of the trend, without any confidence interval.
Lord Monckton tested for that condition and found that a zero slope has existed for greater than 15 years, therefore, per the State of the Climate Report there is a “discrepancy with the expected present-day warming rate”.

Werner Brozek
Reply to  charliexyz
December 2, 2014 8:59 pm

As I said in the update, it is now 18 years and 2 months for RSS. So however you interpret that statement, the models are in trouble and there are over 50 excuses as to why the models are not correct.
However if your interpretation is correct, why would Nick Stokes and Dr. McKitrick go to the effort they do to come up with their “95%” numbers?

Charliexyz
Reply to  Werner Brozek
December 2, 2014 9:47 pm

Stokes and McKitrick weren’t testing the statement made in the State of the Climate report, so they can use any method they want, on any dataset they want.
If you want to test the statement of the State of the Climate 2008, then you should test for a 0 degree/decade OLS trend over periods greater than 15 years, using the appropriate time series dataset. RSS and UAH are lower troposphere, not surface temperature.
I’m not sure what relevance the various excuses have or why you mention them. They don’t have any effect on whether or not there is a discrepancy between models and observations, per the criteria stated by the State of the Climate 2008.

Werner Brozek
Reply to  Werner Brozek
December 2, 2014 10:27 pm

With regards to your first sentence, I will let Nick Stokes handle that.
Due to the adiabatic lapse rate, there really should not be a difference between the warming of the surface and the lower troposphere. And if there is a difference, we need to find out why.
As for my mentioning the excuses, some people will deny we are even in a pause, so that is my reason for mentioning it.

Reply to  Werner Brozek
December 3, 2014 2:00 am

“charliexyz December 2, 2014 at 8:32 pm
My interpretation is that they they ran a whole bunch of simulations and less than 5% of the 15 year periods had a zero slope, where “zero slope” is the best estimate of the trend, without any confidence interval.”

I think that is the right interpretation. But there are points to add:
1. The statement is that any one 15-yr period has less than a 5% chance… Of course, if you keep looking at successive intervals, the chance of getting one such interval goes up. So over, say, 50 years, the chance of such a 15-yr interval is a lot higher.
2. There is context:
“Ten of these simulations have a steady long-term rate of warming between 0.15° and 0.25ºC decade–1, close to the expected rate of 0.2ºC decade–1. ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.
The 10 model simulations (a total of 700 years of simulation) possess 17 nonoverlapping decades with trends in ENSO-adjusted global mean temperature within the uncertainty range of the observed 1999–2008 trend (-0.05° to 0.05°C decade–1).”

He’s talking about ENSO-adjusted warming. Not about the chance of getting a run of La Ninas.
You’re right about us not testing that statement. I don’t think absence of significance is a useful criterion for anything much, but it is a test for, given positive observed trend, whether a zero value is unlikely in the inferred distribution. Peterson is talking about observed trends in model calcs.
And yes, he was specifically talking about surface trends, not troposphere.
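[Editor’s note: Nick’s first point above, that scanning many overlapping intervals raises the odds of finding one flat one, can be illustrated with a toy Monte Carlo. The trend (0.02 C/yr), noise level (sd 0.2 C), and iid-noise assumption below are invented purely for illustration; real temperature series are autocorrelated, so these probabilities are not climate estimates.]

```python
import numpy as np

# Toy Monte Carlo: even if one *fixed* 15-yr window rarely shows a
# zero-or-negative OLS trend, the chance that *some* 15-yr window in a
# 50-yr record does is much higher.  All numbers here are invented.
rng = np.random.default_rng(42)
years, window, nsim = 50, 15, 1000
t = np.arange(years)

single = anywhere = 0
for _ in range(nsim):
    y = 0.02 * t + rng.normal(0.0, 0.2, years)   # trend + iid noise
    slopes = [np.polyfit(t[i:i + window], y[i:i + window], 1)[0]
              for i in range(years - window + 1)]
    single += slopes[0] <= 0      # only the first, fixed window
    anywhere += min(slopes) <= 0  # any window in the record
print(f"P(flat, one fixed 15-yr window) ~ {single / nsim:.3f}")
print(f"P(flat, some 15-yr window)      ~ {anywhere / nsim:.3f}")
```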

Richard M
Reply to  Werner Brozek
December 3, 2014 7:49 am

It might be better to use the Santer paper as that is specifically tied to lower tropospheric temperatures.
The importance of timescale
B. D. Santer et al (2011)
“Our results show that temperature records of at least 17 years in length are required for identifying human effects on global‐mean tropospheric temperature.”

Werner Brozek
Reply to  Werner Brozek
December 3, 2014 9:02 am

Thank you! Here is what Richard Courtney had to say about this:
The Santer statement says that a period of at least 17 years is needed to see an anthropogenic effect. It is a political statement because “at least 17 years” could be any length of time longer than 17 years. It is not a scientific statement because it is not falsifiable.
However, if the Santer statement is claimed to be a scientific statement then any period longer than 17 years would indicate an anthropogenic effect. So, a 17-year period of no discernible global warming would indicate no anthropogenic global warming.
In my opinion, Santer made a political statement so it should be answered with a political response: i.e. it should be insisted that he said 17 years of no global warming means no anthropogenic global warming because any anthropogenic effect would have been observed.
Santer made his petard and he should be hoisted on it.
Richard

PieterF
December 2, 2014 8:50 pm

I question the notion of a “pause.” With data sets that begin anytime after 1979, a dominant feature is the 1980s and 1990s warm regime. What is seen as a “pause” in the 2000s may be the shoulder of an oscillation before a protracted cooling trend, as has been predicted by several researchers including the Russian Academy of Sciences.
Pauses this long are unusual in historical climate trends. Data, both proxy and observed, going back farther in time suggest the present may be a reversal of trend rather than a pause.

Werner Brozek
Reply to  PieterF
December 2, 2014 9:05 pm

You are right. We may even be in a decline, at least according to RSS. See:
http://www.woodfortrees.org/plot/rss/from:1996.65/plot/rss/from:1996.65/to:2005/trend/plot/rss/from:2005/trend

Catherine Ronconi
Reply to  Werner Brozek
December 2, 2014 9:22 pm

Indeed we are. It’s quite amusing to watch the gymnastics which the Team goes through to try to get around that inconvenient fact.

Richard M
Reply to  Werner Brozek
December 3, 2014 7:52 am

I suspect that when the current El Nino passes we will see significant cooling. Solar cycle 24 will be on the decline as will the AMO. The PDO will return to negative values. It’s only a question of whether that happens in 2015 or 2016.

December 2, 2014 9:10 pm

While the alarmists ought to have their noses rubbed in the fact that their precious models have been falsified by their own criteria, I wonder if we’re overdoing it. It seems to me that natural variability is much higher than originally thought. At some point, the alarmists are going to have to admit that. The silver lining for them is that this gives them the excuse to move the goalposts and posit a much longer time period to falsify the models.
The fact is that temps have been warming since the depths of the LIA. Hundreds of years. Does anyone think that trend has suddenly stopped? I for one doubt it. It just got overwhelmed by natural variability. So if THAT trend is real, and nothing has changed to alter it, the pause will come to an end with or without CO2 increases. When it does, the alarmists will go back to claiming the warming is CO2, and skeptics will go back to claiming that it is natural variability, and around the circle we’ll go again.

Werner Brozek
Reply to  davidmhoffer
December 2, 2014 9:26 pm

I wonder if we’re overdoing it.
You and I may have read things hundreds of times. However there may be many new people each month that may be seeing this for the first time.
It just got overwhelmed by natural variability.
That may well be the case. And if it is, it only proves that there is nothing catastrophic about CAGW.

GogogoStopSTOP
Reply to  Werner Brozek
December 3, 2014 7:54 am

Werner Brozek: My sincere thanks for your efforts on the Front Lines of Reality. But, relative to the immediate comments here, could you redo your charts in 1 C increments to show the scale that’s being discussed by the Warmist arguments, Hadley, IPCC, etc?
My ancient thermostat goes from 10 – 30 C… so plot the above data on that scale and ask the “public” if they see a trend. This should be a requirement of every post here.

GogogoStopSTOP
Reply to  davidmhoffer
December 3, 2014 7:44 am

I completely agree with your thesis. By making these tortuous arguments in increments of 0.0123944 C/year, we obscure the issue. Can I sense 0.0123 C?
Why aren’t ALL “our” Denier charts in WHOLE DEGREES C? A medical thermometer is only accurate to ±0.2 °C!
The “public” would reject Warmist claims if we were to assert that the change in all temperature in the last quarter century is less than can be measured in any household in America.
“The change in temperature of the earth in the last 26 years is within the accuracy of wall thermostats and within 20× the accuracy of your home medical thermometer.”

Richard M
Reply to  davidmhoffer
December 3, 2014 7:59 am

Once the alarmists admit natural variability is a significant factor it is game over. It reduces the potential impact of CO2 to a minor warming. There’s no reason to spend large sums of money to halt a small, generally beneficial warming.

Catherine Ronconi
Reply to  davidmhoffer
December 3, 2014 9:29 am

It has been warming since the Maunder Minimum in the depths of the Little Ice Age, over 300 years ago. But earth has been in a longer term cooling trend at least since the Minoan Warm Period, more than 3000 years ago, when the East Antarctic Ice Sheet, by far the largest depository of fresh water on the planet, quit retreating.
No one can say what the future holds. The Holocene, our present balmy interglacial, could end in 300, 3000 or 30,000 years. But there’s little that humans can do to stop the return of the northern hemisphere ice sheets, no matter how much fossil fuel we burn. Maybe we’ll come up with something in whatever time we have left. Or we can just adapt to life in a glacial world, which after all has been the norm for going on three million years, the entire history of our genus Homo. Or over 30 million if you start from when Antarctica glaciated, roughly the history of our superfamily, Hominoidea, the apes.

RACookPE1978
Editor
December 2, 2014 9:18 pm

Noted.

LevelGaze
December 2, 2014 10:33 pm

The post is ponderous, the comments too.
Am I the only person who is thinking “angels and pin heads”?

Werner Brozek
Reply to  LevelGaze
December 2, 2014 11:39 pm

WUWT offers a variety of posts and authors. Rarely will a given post be liked by everyone. However, if you are in discussions with anyone about any aspect of global warming, this is the place to get the facts.

mellyrn
Reply to  LevelGaze
December 3, 2014 6:32 am

Maybe yes, maybe no; I at least found the post informative, and the comments even more so.
As for “angels and pin heads” — that was a medieval thought experiment, trying to come to grips with the concept of infinitesimals. Today you might ask how many geometric “points” are contained on a pin head. It was NEVER about the nature of angels, or angelic dance practices, and no mathematician considers either “infinite” or “infinitesimal” a ridiculous, trivial concept.
(I once attended a “parents’ class” discussing points, lines and planes; we almost came to blows over whether you could or could not “add up” a bunch of points to get a line, like beads on a thread. That class would have made a hilarious Philosophers’ SNL skit.)

Brandon Gates
December 2, 2014 10:38 pm

Werner,
I think we’re presently inside an envelope which I’ve defined as one standard deviation of the residual when I regress the log of CO2 against temperature: https://drive.google.com/file/d/0B1C2T0pQeiaSZ05kUXBrNW96alU
In other words, Le Grande Pause isn’t all that unusual. Now if we were to dip outside of that envelope and stay there for 10 years or so, then I’d be calling Houston to report a problem.

Werner Brozek
Reply to  Brandon Gates
December 2, 2014 11:06 pm

Fair enough. But should we be spending billions of dollars in the meantime to stop warming if there is a chance we may be outside that envelope in 10 years?

Brandon Gates
Reply to  Werner Brozek
December 3, 2014 11:41 pm

Werner Brozek

But should we be spending billions of dollars in the meantime to stop warming if there is a chance we may be outside that envelope in 10 years?

How quickly we’ve moved from the science to the politics. Since you apparently don’t put much stock in what the people doing the actual research are saying, I wonder how good an idea you’ve got of where in that envelope we’ll be in a decade. For all you know, we could be outside it on the positive side. [1]
That aside, I think researching the planet is money well spent either way it goes. Like any science, one never quite knows beforehand what discoveries will be made and what use will come of it.
As for mitigation policy, I think that the cure cannot be worse than the disease and that wrecking the economy to stop emissions dead in their tracks is a Bad Idea. This assumes, of course, that we know how bad the disease will be. Which we don’t, and won’t until we get there. My opinion is that for the US, the top priority mitigation strategy should be to ramp up nuclear fission plants to replace coal. (C)AGW/CC aside, total replacement of coal power with nuclear would save on the order of 30-60k lives per year according to my estimates based on WHO and NIH studies.
Next would be to ramp geothermal, which to me is a no-brainer. Industry estimates that ~20% of current electricity demand could be met by building geothermal plants in 5 western states.
I don’t mind solar where it works, not a big fan of wind. We need to drop ethanol fuel production from food crops like a bad habit … because it is a bad habit. For my money, liquid fuels from blue-green algae seem to hold the most promise.
——————————
[1] Given that those 1 (and 2) sigma deviations from the mean over periods of a decade or two haven’t yet wiped us out, they’re more a challenge to CO2 diddit theory than to our well-being. It’s the 5-10 sigma departure from the pre-industrial average for centuries and beyond that present the higher risk.

Werner Brozek
Reply to  Werner Brozek
December 4, 2014 8:21 am

wrecking the economy to stop emissions dead in their tracks is a Bad Idea
I agree. All nations should put their money where it does the most good. In Alberta, Canada, where I live, it would make much more sense to spend billions on improving insulation for houses rather than spending the same amount on carbon capture, which might reduce the temperature by 1/10000 of a degree in 100 years.

richardscourtney
Reply to  Brandon Gates
December 2, 2014 11:24 pm

Brandon Gates
Werner Brozek has repeatedly said e.g. here

As for my mentioning the excuses, some people will deny we are even in a pause, so that is my reason for mentioning it.

You say

I think we’re presently inside an envelope which I’ve defined as one standard deviation of the residual when I regress the log of CO2 against temperature:

Clearly, your “excuse” consists of a complete redefinition of “warming”, and that redefinition assumes “warming” is a function of “the log of CO2”.
“Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise.
The only significance of the putative “warming” is that it is predicted by the models and, therefore, provides an assessment of model performance. The models do NOT output “one standard deviation of the residual when [you] regress the log of CO2 against temperature”.

Richard

Brandon Gates
Reply to  richardscourtney
December 4, 2014 12:06 am

richardscourtney,

Clearly, your “excuse” consists of a complete redefinition of “warming”, and that redefinition assumes “warming” is a function of “the log of CO2”.

Where have you been? ΔF = α ln(CO2/CO2_0) has been in the literature since the late 19th century.

The models do NOT output “one standard deviation of the residual when [you] regress the log of CO2 against temperature”.

Of course they don’t, but so what. GCMs don’t set a straight edge to an arbitrary 20 year chunk of a temperature time series and extrapolate either. The quick and dirty regression I did is based on the most key relationship in the physics and doesn’t suffer from the same sensitivity to choosing endpoints. It allowed me to calculate a standard deviation across ALL the available data, and very clearly shows that the past 20 years is nothing special in terms of departure from the regression prediction.
I get it that doesn’t make you very happy — hence all the bold text — but try addressing my analysis on its own merits instead of what some quote-mined statement in a 2008 BAMS report says, eh? The great thing about independently investigating claims is … the independence.

richardscourtney
Reply to  richardscourtney
December 5, 2014 11:23 am

Brandon Gates:
Your reply to my refutation of your silly redefinition of “warming” is even more misguided than your post which I refuted.
I wrote the salient points in bold so they were clearly recognisable (n.b. not for the ridiculous reason you suggest), and those points were these:

“Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise.
The only significance of the putative “warming” is that it is predicted by the models and, therefore, provides an assessment of model performance. The models do NOT output “one standard deviation of the residual when [you] regress the log of CO2 against temperature”.

Your reply says

Of course they don’t, but so what. GCMs don’t set a straight edge to an arbitrary 20 year chunk of a temperature time series and extrapolate either. The quick and dirty regression I did is based on the most key relationship in the physics and doesn’t suffer from the same sensitivity to choosing endpoints. It allowed me to calculate a standard deviation across ALL the available data, and very clearly shows that the past 20 years is nothing special in terms of departure from the regression prediction.

That is plain daft. The modelers and the IPCC consider “warming” to be as I stated; e.g. see here.
And you try to pretend that your invention of a redefinition of “warming”

has been in the literature since the late 19th century

I don’t believe you because I am not aware of any such definition of “warming” in the accepted literature: please provide a citation.
Any data can be processed to show anything. You have processed the temperature data in association with other data (i.e. “the log of CO2”) and conclude the warming

very clearly shows that the past 20 years is nothing special in terms of departure from the regression prediction

and your response to my refuting that nonsense is to demand that I

try addressing my analysis on its own merits instead of what some quote-mined statement in a 2008 BAMS report says

OK. I will do that.
Your so-called “analysis” consists solely of unsubstantiated and illogical rubbish which you have clearly constructed as an excuse for the failure of the climate models.
There is no need to thank me for taking the trouble to fulfill your demand because I am always willing to provide information of use to onlookers when I can.
Richard

Brandon Gates
Reply to  richardscourtney
December 5, 2014 2:26 pm

richardscourtney,

That is plain daft. The modelers and the IPCC consider “warming” to be as I stated; e.g. see here.

I read the link, which is AR4 section 10.7.1, “Climate Change Commitment to Year 2300 Based on AOGCMs”. I find nothing like your statement: “Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise. Perhaps you can quote the specific text which you find incompatible with “my definition” of warming.

I don’t believe you because I am not aware of any such definition of “warming” in the accepted literature: please provide a citation.

http://www.ams.org/notices/201010/rtx101001278p.pdf
In 1896 Swedish scientist Svante August Arrhenius (1859–1927), 1903 Nobel Prize winner in chemistry, was aware that atmospheric concentrations of CO2 (and other gases) had an effect on ground level temperatures; and he formulated a “greenhouse law for CO2”, [1]. Were Arrhenius alive, the motivations for his study and the precise values of physical constants used in his models might change, but his greenhouse law remains intact today. From a reference published about 102 years after [1], namely, page 2718 of [14], we see Arrhenius’s greenhouse law for CO2 stated as: (Greenhouse Law for CO2) ∆F = α ln(C/C0), where C is CO2 concentration measured in parts per million by volume (ppmv); C0 denotes a baseline or unperturbed concentration of CO2, and ∆F is the radiative forcing, measured in Watts per square meter, W/m^2. The Intergovernmental Panel on Climate Change (IPCC) assigns to the constant α the value 6.3; [14] assigns the value 5.35. Radiative forcing is directly related to a corresponding (global average) temperature; by definition, radiative forcing is the change in the balance between radiation coming into the atmosphere and radiation going out. A positive radiative forcing tends on average to warm the surface of the Earth, and negative forcing tends on average to cool the surface. (We will not go into the details of the quantitative relationship between radiative forcing and global average temperature.) Qualitatively his CO2 thesis, which Arrhenius was the first to articulate, says: increasing emissions of CO2 leads to global warming. Arrhenius predicted that doubling CO2 concentrations would result in a global average temperature rise of 5 to 6 deg C. In 2007 the IPCC calculated a 2 to 4.5 deg C rise. This is fairly good agreement given that more than a century of technology separates the two sets of numbers.
Ref. [1] may be found here: http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf Svante August Arrhenius, On the influence of carbonic acid in the air upon the temperature of the ground, Philosophical Magazine 41 (1896), 237–76. Page 267 (p. 17 of the .pdf), top of the right-hand column: Thus if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.
In the regression I used to produce my chart, I used a slightly different formulation: F = α ln(C), where F is instantaneous forcing in W/m^2 and C is current CO2 concentration in ppmv. I habitually use 5.35 as the value for α since it is the more conservative of published estimates and also is the one I find most often cited. Of course I’m regressing this relationship against observed temperature, not radiative flux. The regression calculation found the best fit multiplying the theoretical flux amount by 0.52, implying this relationship: ∆T = 2.78 ln(C/C0).
The equilibrium climate response to doubled CO2 is typically stated directly, without the natural log. When I do the same calculation I come up with ∆T ≈ 2.41°C for doubled CO2 (C/C0 = 2). The canonical value is 3.0°C, with a range of 1.5 to 4.5°C. You’re likely asking about the roughly 0.6°C discrepancy. The partial answer is that the planet has not yet reached equilibrium: my regression is only picking up the transient response on the way to equilibrium. The balance of the discrepancy is likely due to the myriad other inputs to the system which I made no attempt to account for in this model.
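The regression described above can be sketched as follows. This is a hypothetical illustration using synthetic data (the 0.52 scale factor is baked into the fake series so the fit should recover it); it is not the commenter’s actual code or the real temperature record:

```python
import numpy as np

# Fit a single scale factor b so that T ~ b * (5.35 * ln C); the implied
# temperature relationship is then dT = (b * 5.35) * ln(C/C0).
rng = np.random.default_rng(0)
C = np.linspace(315, 400, 60)                  # CO2 in ppmv, Mauna-Loa-like range
T = 0.52 * 5.35 * np.log(C) + rng.normal(0.0, 0.05, C.size)  # synthetic anomalies

F = 5.35 * np.log(C)                           # theoretical forcing, W/m^2
b = np.linalg.lstsq(F[:, None], T, rcond=None)[0][0]
print(round(b * 5.35, 2))                      # effective coefficient, ~2.78
```

With noisy real-world data the recovered coefficient would of course carry an uncertainty range rather than a point value.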

Any data can be processed to show anything.

Tell that to Werner here, who has done nothing more than draw some trendlines on a narrowly selected range of observations and leave it at that, wholly uncoupled from any theoretical underpinning.

Your so-called “analysis” consists solely of unsubstantiated and illogical rubbish which you have clearly constructed as an excuse for the failure of the climate models.

My substantiation begins with empirical studies dating to 1896, which even then provided a correct prediction of the approximate magnitude and exact direction of the effect. My very simple model uses that same relationship, and its output is in general agreement with contemporary primary literature. You may wish to actually become familiar with the core first principles of climatology before blustering on about “unsubstantiated and illogical rubbish”.

richardscourtney
Reply to  richardscourtney
December 5, 2014 3:20 pm

Brandon Gates
You pretend that you have reading difficulties in your attempt to defend your ridiculous redefinition of “warming”.
You say

I read the link, which is AR4 section 10.7.1, “Climate Change Commitment to Year 2300 Based on AOGCMs”. I find nothing like your statement: “Warming” as projected by climate modelers consists of an observed rise in linear trend of temperature which is discernible with 95% confidence as being different from zero rise. Perhaps you can quote the specific text which you find incompatible with “my definition” of warming.

Of course I can. And I am offended by your implication that I am as incapable of reading as you claim to be. The link is – in full –
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html
and it says e.g.

The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

That IPCC statement not only supports my accurate statement of the definition of “warming” used by climate science, it also shows that the model prediction of the “rate of warming averaged over the first two decades of the 21st century” was 100% wrong. It does not mention the 95% confidence practice applied by climate science because there was no need. But it does mention temperature change and rate of temperature change over time which your redefinition does not.
You then laughably cite Arrhenius’ hypothesis of the radiative Greenhouse Effect (GHE) as being a definition of “warming”. No. The radiative GHE is one possible cause of “warming”. I strongly suggest that before making silly assertions about climate change(s) you at the very least learn the difference between an effect and its possible causes. WUWT is a science site and you can expect to be chastised for posting such schoolboy errors.
I refuse to obey your demand that I tell Werner that any data can be processed to show anything. He knows it, so he rightly refuses to process the data but shows what the unadulterated data indicates. Furthermore, he admirably requests a ‘warmist’, Nick Stokes, to critique his presentations of what the data shows.
Contrast the proper behaviour of Werner to your alteration of the data to obtain an apparent indication of what you would prefer the data did indicate.
Richard

Brandon Gates
Reply to  richardscourtney
December 5, 2014 9:12 pm

richardscourtney,

You pretend that you have reading difficulties in your attempt to defend your ridiculous redefinition of “warming”.

Just making it up as we go, aren’t we.

[IPCC AR4] The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

Nothing about confidence intervals in that statement. I do notice that every time “warming” is mentioned, it’s followed by a temperature greater than zero. Perhaps my reading difficulties aren’t so pretend after all.

It does not mention the 95% confidence practice applied by climate science because there was no need.

Ye Olde Goalpost Movement, right on cue.

But it does mention temperature change and rate of temperature change over time which your redefinition does not.

No redefinition. I’ve already cited Arrhenius, but I guess that didn’t sink in. Well, how about something more recent, say 1992 from the IPCC FAR: http://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_02.pdf
Table 2.2: Expressions used to derive radiative forcing for past trends and future scenarios of greenhouse gas concentrations […] Carbon Dioxide: ∆F = 6.3 ln(C/C0), where C is CO2 in ppmv, for C < 1000 ppmv
My eyes must be fooling me again.

I strongly suggest that before making silly assertions about climate change(s) you at very least need to learn the difference between an effect and its possible causes. WUWT is a science site and you can expect to be chastised for posting such schoolboy errors.

Well I’ve noticed that when I wear a jacket on a cold day I stay warmer than I would not having it on. Maybe I’m just imagining it.

Contrast the proper behaviour of Werner to your alteration of the data to obtain an apparent indication of what you would prefer the data did indicate.

I used four of the same exact datasets Werner did, HADCRUT4, GISTemp LOTI, UAH Global LT and RSS. Only difference is that I used all of the data available back to 1850 because you see, one way to get data to tell a desired story is to only use the part of it that supports one’s argument. Being that WUWT is a science site, I figured that cherry-picking is something I probably wouldn’t get away with.

richardscourtney
Reply to  richardscourtney
December 6, 2014 12:18 am

Brandon Gates
I started to read your twaddle which is your latest – and long-winded – attempt to excuse your daft assertions, but I could not be bothered to read much of that nonsense because there are useful things I need to do.
If you think you have convinced anybody of anything by your spouting your ignorant and ill-informed errors then be happy in that thought because it is no more daft than your assertions about “warming”.
I am content that I have pointed out to those who are interested (probably none) that you claim to not know the difference between an effect and its possible cause(s) and you claim an inability to understand what you say you read. Hence, I have helped any who may wish to assess your assertions to recognise how and why your assertions are extremely wrong.
Richard

December 2, 2014 10:39 pm

What is the no confidence interval?

Werner Brozek
Reply to  M Simon
December 2, 2014 10:55 pm

While I was still teaching physics, I would tell the students to think of the most extreme example they could to answer a question, so I will try that here. If the CI is +/- 1.106 at 95%, then it would be wider at 99%; for example, it might be +/- 1.5 at 99%. So if you are asking about a CI of 0%, I would say a 0% confidence interval has zero width: if the slope is given as 1.000, there is no chance the real slope is exactly 1.000000000 rather than, say, up at 1.000000001 or down at 0.999999999.
Does that make sense? Should I be wrong, then Nick can correct me.
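The confidence-interval idea can be made concrete with a naive ordinary-least-squares slope interval. This is a hypothetical sketch on made-up data; it ignores the autocorrelation that Nick Stokes’ trend calculator accounts for, which is why his quoted CIs for real monthly anomalies are considerably wider than a naive OLS interval:

```python
import numpy as np

def trend_with_ci(t, y, z=1.96):
    """OLS slope plus an approximate 95% CI (normal approximation),
    in units of y per unit of t."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = t.size
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid @ resid / (n - 2) / np.sum((t - t.mean()) ** 2))
    return slope, slope - z * se, slope + z * se

# Hypothetical ~18 years of trendless monthly anomalies:
rng = np.random.default_rng(1)
months = np.arange(218) / 12.0                 # time in years
anoms = rng.normal(0.0, 0.1, months.size)      # synthetic anomalies, deg C
slope, lo, hi = trend_with_ci(months, anoms)
print(f"slope {slope:+.4f} C/yr, 95% CI [{lo:+.4f}, {hi:+.4f}]")
```

When the interval straddles zero, the trend is not statistically distinguishable from zero at that confidence level — which is the point being argued about in the head post.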

Farmer Gez
December 2, 2014 10:42 pm

Stop and start dates may well be diverting, but I have not heard a compelling case as to the cause of the LIA or the Roman warm period.
We all seem to be taking the patient’s temperature but failing to diagnose the illness.

Werner Brozek
Reply to  Farmer Gez
December 2, 2014 11:08 pm

We know it was not CO2 anyway!

hanson807
December 2, 2014 10:46 pm

Just curious, as a scientist: in the first section you have 0.0123944 C/year. What is the accuracy of the temperature measurement, and is this figure within that realm? Not nitpicking; it’s just that when I see temps expressed to this many decimal places I wonder how that level of accuracy was achieved.

LevelGaze
Reply to  hanson807
December 2, 2014 10:52 pm

Yes, 7 decimal places.
Hence my skepticism.
Do they still teach the first principle of measurement – significant figures?

Werner Brozek
Reply to  hanson807
December 2, 2014 11:24 pm

That number was taken straight from WFT, and I agree that we do not know its accuracy to that extent. That is why I gave the later number as 0.025 along with the 0.0247888.
It is shocking how bad our numbers really are, yet billion-dollar decisions are made based on them.
For example, RSS had October 2014 in 8th place at 0.272, while October 1998 was first at 0.461. But with UAH version 5.6, October 1998 was far lower at 0.291 and October 2014 was higher at 0.367. That is a relative difference of 0.189 + 0.076 = 0.265!
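The cross-dataset arithmetic above, written out (the anomaly values are taken from the comment itself):

```python
# RSS and UAH (v5.6) monthly anomalies, deg C, as quoted in the comment.
rss = {"oct1998": 0.461, "oct2014": 0.272}
uah = {"oct1998": 0.291, "oct2014": 0.367}

rss_gap = rss["oct1998"] - rss["oct2014"]   # 1998 warmer by 0.189 in RSS
uah_gap = uah["oct2014"] - uah["oct1998"]   # 2014 warmer by 0.076 in UAH

# The two datasets disagree on which month was warmer, and by this much:
print(round(rss_gap + uah_gap, 3))          # 0.265
```

Two instruments nominally measuring the same quantity disagreeing by ~0.265°C is the point being made: the spread between datasets dwarfs the seven-decimal precision of any single quoted trend.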

LevelGaze
Reply to  Werner Brozek
December 3, 2014 2:29 am

I agree with you there, Werner. The precision in that number is immaterial; the accuracy is rubbish.
I have a long-standing grudge against computer models. Unless basic mathematical principles are coded into the program, all these machines do is compound errors. Supercomputers just compound them more, and more quickly.
Since the most sophisticated thermometers we have can barely measure to 2 decimal places of a degree, the fantasy that we can reliably measure any better than that is childishly destructive.

Alan the Brit
Reply to  Werner Brozek
December 3, 2014 2:54 am

Werner, as an engineer I would reduce the stated accuracy to 0.03°C; there is no way on this earth we can measure meaningful temperature to finer accuracy than that! IMHO, that is.

GogogoStopSTOP
Reply to  Werner Brozek
December 3, 2014 8:00 am

Let me beat a dead horse! Plot all the data on the range of a home thermostat: 10–30°C!
If we say it enough, the public will begin to understand in a few years! WE ARE PLAYING ON THEIR HOME TURF, and we needn’t be.

dp
December 2, 2014 11:21 pm

What do I think? I think Obama doesn’t care anything about the trend and intends to do all he can in the time remaining in his fast fading political career to shutter as much American industry as his pen allows.

richardscourtney
December 2, 2014 11:27 pm

Werner Brozek
Thank you for another fine – and important – article.
Richard

Werner Brozek
Reply to  richardscourtney
December 2, 2014 11:50 pm

Thank you! I wish you the health to enjoy Christmas!

climatereason
Editor
Reply to  richardscourtney
December 3, 2014 1:34 am

Richard
Good to see you posting. I thought you might be interested that in a Christmas catalogue that fell out of my Sunday Papers was an entry for Hero’s Steam Engine which we have previously talked about.
http://www.curiousminds.co.uk/heros-steam-turbine
This is a 2000 year old design so, like climate, there is nothing new under the sun.
tonyb
