How Imminent is the RSS Pause? (Now Includes January and February Data)

Guest Post by Werner Brozek, Extended Comments from Barry and Edited by Just The Facts

UAH (University of Alabama in Huntsville) and RSS (Remote Sensing Systems) are two major satellite groups that provide monthly climate anomalies. From January 1998 to January 2016, the slope was slightly negative, a period which many have referred to as a “pause”, although some prefer other names. Since a huge anomaly spike in February 2016, due to a very strong El Niño, the so-called pause is gone.

Last month, Barry wrote about several things that must happen for the pause to return for UAH, which I excerpted in an article titled How Imminent is the UAH Pause? (Now Includes Some January Data). This month, Barry has written about what must happen for the pause to return for RSS, and has also provided additional information with respect to the UAH pause.

Barry’s comments follow:

This RSS analysis uses the RSSv3 TLT global data set. The following plot shows the full record, with 12-month averages included for visual reference:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Ordinary least squares linear regression trends, in degrees Celsius per decade, from January 1998 to:

Feb 2016: 0.019 °C/decade
Mar 2016: 0.028 °C/decade
Apr 2016: 0.035 °C/decade
May 2016: 0.038 °C/decade
Jun 2016: 0.041 °C/decade
Jul 2016: 0.043 °C/decade
Aug 2016: 0.045 °C/decade
Sep 2016: 0.049 °C/decade
Oct 2016: 0.049 °C/decade (marginally higher than Sep at four decimal places)
Nov 2016: 0.050 °C/decade
Dec 2016: 0.048 °C/decade
Jan 2017: 0.052 °C/decade
Feb 2017: 0.053 °C/decade
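The month-by-month figures above are expanding-window trends: each entry refits the regression over the whole series from January 1998 through the month shown. A minimal sketch of that calculation, using made-up data since the RSS series itself is not reproduced here:

```python
import numpy as np

def expanding_trends(y, start_len):
    """OLS slope (per decade) of y[:n] for each n from start_len to
    len(y): one refitted trend per added month, as in the list above."""
    t = np.arange(len(y)) / 120.0   # monthly data; time axis in decades
    return [np.polyfit(t[:n], y[:n], 1)[0] for n in range(start_len, len(y) + 1)]

# Illustrative anomalies: 230 months stands in for Jan 1998 - Feb 2017,
# and 218 months for Jan 1998 - Feb 2016 (not real RSS data).
rng = np.random.default_rng(42)
y = 0.04 * np.arange(230) / 120 + rng.normal(0.0, 0.1, 230)
trends = expanding_trends(y, 218)
print(len(trends))  # 13 trends: Feb 2016 through Feb 2017
```

Each later month that sits above the fitted line pulls the refitted slope up, which is why the values rise even though every month was cooler than February 2016.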

Unlike UAHv6, there is one month (Dec 2016) that slightly lowered the then-current warming trend. I’ve plotted the monthly data along with the trend fitted through Nov 2016, and you can see that the Dec 2016 anomaly sits below the trend line. That is why December lowered the trend slightly:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Otherwise, every other month after the peak warm month of Feb 2016 increased the trend, even though they were all cooler than February. The trend rose because subsequent months were warmer than the trend itself, except December 2016. For the ‘pause’ from 1998 to resume next month, the March anomaly would have to be -3.6C. For the pause to resume by December 2017, the annual average anomaly for 2017 would have to be -0.02C. The last time an annual temperature anomaly was this cool or cooler in the RSSv3 TLT dataset was 1993 (-0.118C). However, January and February 2017 have been 0.41 and 0.44 respectively, so for the pause to resume by December, the average of the next 10 months would have to be -0.12C. The last time this happened was in 1992 (-0.19C).

For a pause to resume by 2020 (Dec 2019), the three year averaged anomaly 2017 to 2019 for RSS would have to be -0.04C. The last time a 3 year average was that cool or cooler was 1992 through 1994 (-0.09). For the pause to resume by 2020, we’d need to see temps of the next three years similar to those of the early 1990s. Check the graph above to see what that looks like.
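The “what would it take” figures in the last two paragraphs can be computed directly: because an OLS slope is a linear function of any single future value (or any block of equal future values), one can solve for the constant future anomaly that zeroes the trend by evaluating the slope at two trial values and interpolating. A sketch under that approach, with an illustrative series rather than the actual RSS record:

```python
import numpy as np

def required_future_anomaly(y, k):
    """Constant anomaly over the next k months that would make the OLS
    slope of the extended series exactly zero. The slope is a linear
    function of that constant, so two trial evaluations pin it down."""
    t = np.arange(len(y) + k)

    def slope(c):
        return np.polyfit(t, np.append(y, np.full(k, c)), 1)[0]

    s0, s1 = slope(0.0), slope(1.0)
    return -s0 / (s1 - s0)

# Illustrative series with a slight warming trend (not the real RSS record)
rng = np.random.default_rng(0)
y = 0.003 * np.arange(230) + rng.normal(0.0, 0.1, 230)

x1 = required_future_anomaly(y, 1)    # next month alone
x34 = required_future_anomaly(y, 34)  # a constant level over many months
```

With k = 1 this reproduces the “March would have to be -3.6C” style of figure; with k set to the number of months remaining through December 2019, it gives the constant anomaly needed for the trend to flatten by 2020.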

The section below provides some additional updates for UAH.

Next month’s anomaly would have to be lower than 0.2C to reduce the trend slightly. To get a flat or negative trend since 1998, the March anomaly would have to be -3.8C. The decimal point is in the correct place!

For the 1998 trend to return to flat or negative values by the end of this year, the annual average anomaly for 2017 would have to be -0.16C. We already have two months of data, averaging around 0.5C warmer than that, so what would the average anomaly for the rest of 2017 (Mar-Dec) have to be to produce a flat or negative trend since 1998? -0.26C.
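As a check on the arithmetic (the -0.16C target and the two observed months are the figures quoted in this post; the rest is simple averaging):

```python
# Required Mar-Dec average so that the 2017 annual mean hits the target.
jan, feb = 0.30, 0.35        # UAH anomalies for Jan and Feb 2017
target_annual = -0.16        # annual mean needed for a flat 1998 trend
required_rest = (12 * target_annual - (jan + feb)) / 10
print(round(required_rest, 3))  # -0.257, i.e. about -0.26 for Mar-Dec
```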

The most recent year the annual average anomaly was that cool was 1985, when the annual average was -0.35C. With 2017 predicted to be an El Niño or ENSO-neutral year, the chances of a flat trend by December are very slim. As I expect some warming with increasing atmospheric CO2 (however much one may argue about the magnitude), I think it is unlikely we will see a year as cold as 1985, barring a volcanic eruption of greater magnitude than the 1991 Pinatubo eruption. Consequently, I think it is unlikely the ‘pause’ will return at all if 1998 is used as the start date.

In comments last month Werner asked how cool the annual anomalies would have to be to get a flat trend if there were a succession of cool years. For the trend since 1998 to go flat by 2020 (December 2019) the annual average temperature anomaly for the three years Jan 2017 to Dec 2019 would have to be: 0.05C

When did we last have 3 consecutive years as cool or cooler than that?

2007 to 2009: 0.05C. However, January and February 2017, being 0.30 and 0.35C respectively, would raise the three-year average to 0.065C if the rest of the months through 2019 were 0.05C. So we have to go further back in time to find a cooler three-year average. The most recent is 1994 to 1996: 0.0C.
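The three-year arithmetic works the same way:

```python
# Three-year (36-month) average if Jan and Feb 2017 are as observed and
# every remaining month through Dec 2019 comes in at 0.05.
jan, feb = 0.30, 0.35
avg_3yr = (jan + feb + 34 * 0.05) / 36
print(round(avg_3yr, 3))  # 0.065
```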

Those predicting imminent cooling from lower solar ebb or ocean-atmosphere oscillations may expect to see annual temperatures like the early 1990s sometime soon. I am less confident of that. Time will tell.

————-

Written by Barry

In the sections below, we will present you with the latest facts. The information will be presented in two sections and an appendix. The first section will show for how long there has been no statistically significant warming on several data sets. The second section will show how 2017 compares with 2016, the warmest year so far, and the warmest months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

For this analysis, data was retrieved from Nick Stokes’ Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.

The details for several sets are below.

For UAH6.0: Since December 1993: Cl from -0.009 to 1.776. This is 23 years and 3 months.

For RSS: Since October 1994: Cl from -0.006 to 1.768. This is 22 years and 5 months.

For Hadcrut4.5: The warming is statistically significant for all periods above four years.

For Hadsst3: Since May 1997: Cl from -0.031 to 2.083. This is 19 years and 9 months.

For GISS: The warming is statistically significant for all periods above four years.
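The figures above come from Nick Stokes’ Trendviewer, which applies an autocorrelation-adjusted significance test. As a rough illustration of the idea only (not his exact method), a naive OLS trend with a 95% confidence interval can be computed like this:

```python
import numpy as np

def trend_with_ci(y):
    """OLS trend per decade of monthly anomalies, with a naive 95% CI.
    Nick's Trendviewer adjusts for autocorrelation, which widens the
    interval; this plain-OLS version only sketches the idea."""
    n = len(y)
    t = np.arange(n) / 120.0                      # time in decades
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    stderr = np.sqrt((resid @ resid) / (n - 2) / np.sum((t - t.mean()) ** 2))
    half = 1.96 * stderr
    return slope, slope - half, slope + half

# Illustrative monthly anomalies (not a real data set)
rng = np.random.default_rng(7)
y = 0.04 * np.arange(280) / 120 + rng.normal(0.0, 0.2, 280)
slope, lo, hi = trend_with_ci(y)
# If lo < 0 < hi, a zero trend cannot be ruled out at the 95% level.
```

Because the autocorrelation adjustment widens the interval, the plain version above will declare significance sooner than Trendviewer does.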

Section 2

This section shows data about 2017 and other information in the form of a table. The five data sources are listed along the top of the table and repeated at the bottom, so the column headings remain visible throughout. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the rows are the following:

1. 16ra: This is the final ranking for 2016 on each data set. On all data sets, 2016 set a new record. How statistically significant the records were was covered in an earlier post here: https://wattsupwiththat.com/2017/01/26/warmest-ten-years-on-record-now-includes-all-december-data/

2. 16a: Here I give the average anomaly for 2016.

3. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.

4. ano: This is the anomaly of the month just above.

5. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

6. sy/m: This is the number of years and months corresponding to row 5.

7. Jan: This is the January 2017 anomaly for that particular data set.

8. Feb: This is the February 2017 anomaly for that particular data set if available.

9. ave: This is the average anomaly of all months available so far, shown only for data sets with at least two months of data.

10. rnk: This is the 2017 rank for each particular data set, assuming the average of the anomalies stays that way all year. Of course it won’t, but think of it as an update 5 minutes into a game.
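Row 10’s projection is just the year-to-date mean dropped into the list of past annual means. A toy version of that logic (the annual means below are rounded illustrations; only 2016’s 0.503 is quoted in this post):

```python
# Rank the 2017 year-to-date average against selected past annual means
# (UAH-like illustrative values, not the full record).
annual_means = {1998: 0.48, 2010: 0.33, 2015: 0.26, 2016: 0.503}
ytd_2017 = (0.299 + 0.348) / 2   # UAH Jan and Feb 2017
rank = 1 + sum(v > ytd_2017 for v in annual_means.values())
print(rank)  # 4 -- consistent with the "4th" shown for UAH
```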

Source   UAH     RSS     Had4    Sst3    GISS
1.16ra   1st     1st     1st     1st     1st
2.16a    0.503   0.574   0.773   0.613   0.98
3.mon    Feb16   Feb16   Feb16   Jan16   Feb16
4.ano    0.829   0.996   1.070   0.732   1.30
5.sig    Dec93   Oct94   n/a     May97   n/a
6.sy/m   23/3    22/5    n/a     19/9    n/a
7.Jan    0.299   0.409   0.741   0.488   0.92
8.Feb    0.348   0.440   n/a     n/a     n/a
9.ave    0.324   0.425   n/a     n/a     n/a
10.rnk   4th     4th     3rd     3rd     2nd
Source   UAH     RSS     Had4    Sst3    GISS

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 6.0beta5 was used.

http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltglhmam_6.0.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.5.0.0.monthly_ns_avg.txt

For Hadsst3, see: https://crudata.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2016 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2016. This makes it easy to compare January 2016 with the latest anomaly.

The thick double line is the WTI (WoodForTrees Temperature Index), which shows the average of RSS, UAH, HadCRUT4.5 and GISS.

Appendix

This appendix summarizes the data for each set separately.

UAH6.0beta5

For UAH: There is no statistically significant warming since December 1993: Cl from -0.009 to 1.776. (This is using version 6.0 according to Nick’s program.)

The UAH average anomaly so far is 0.324. This would rank in fourth place if it stayed this way. 2016 was the warmest year at 0.503. The highest ever monthly anomaly was in February of 2016 when it reached 0.829.

RSS

For RSS: There is no statistically significant warming since October 1994: Cl from -0.006 to 1.768.

The RSS average anomaly so far is 0.425. This would rank in fourth place if it stayed this way. 2016 was the warmest year at 0.574. The highest ever monthly anomaly was in February of 2016 when it reached 0.996.

Hadcrut4.5

For Hadcrut4.5: The warming is significant for all periods above four years.

The Hadcrut4.5 average anomaly for 2016 was 0.773. This set a new record. The highest ever monthly anomaly was in February of 2016 when it reached 1.070. The January anomaly was 0.741 which would rank 2017 in third place if it stayed this way.

Hadsst3

For Hadsst3: There is no statistically significant warming since May 1997: Cl from -0.031 to 2.083.

The Hadsst3 January anomaly is 0.488. This would rank third if it stayed this way. The highest ever monthly anomaly was in January of 2016 when it reached 0.732.

GISS

For GISS: The warming is significant for all periods above four years.

The GISS average anomaly for 2016 was 0.98. This set a new record. The highest ever monthly anomaly was in February of 2016 when it reached 1.30. The January anomaly was 0.92 which would rank 2017 in second place if it stayed this way.

Conclusion

Do you think RSS will ever have a pause of over 18 years again? Why or why not?


273 Comments
March 14, 2017 11:06 am

Pretty cool…

george e. smith
Reply to  John_QPublic (@John_QPublic)
March 14, 2017 11:32 am

Who ” Barry ” ??

G

Reply to  george e. smith
March 14, 2017 11:52 am

Who ” Barry ” ??

Can you keep a secret? So can I! ☺

JJM Gommers
Reply to  george e. smith
March 15, 2017 12:32 am

Barry Lyndon.!

barry
Reply to  george e. smith
March 15, 2017 4:38 pm

I post on Dr Roy Spencer’s blog and occasionally here. I’m nobody special.

Geoff
Reply to  John_QPublic (@John_QPublic)
March 14, 2017 1:58 pm

What we need is a thermometer stuck up Gaia’s rear end. Accuracy from any mouth is unobtainable.

Mike Bromley the wannabe Kurd
Reply to  Geoff
March 14, 2017 9:26 pm

Even 19 years is an abysmally short period of time to draw earth history inferences from.

seaice1
March 14, 2017 11:09 am

The best bet is in 18 years’ time to have a pause going back to the recent El Nino.

MarkW
Reply to  seaice1
March 14, 2017 12:07 pm

In other words, no global warming. Just a flat trend punctuated by El Ninos.

Reply to  MarkW
March 14, 2017 12:29 pm

In other words, no global warming. Just a flat trend punctuated by El Ninos.

That is one way of looking at it. Or how about:
No global warming, just Arctic warming.

Stephen Richards
Reply to  MarkW
March 15, 2017 1:51 am

Werner Brozek March 14, 2017 at 12:29 pm

Spot on. All of the ANOMALOUS warming has been in the arctic. My old physics mentor would have thrown me out if I produced anomalous reading of 0.05 with no error margins over thirty years but Werner is merely producing the figures for us to read, eh Werner?

seaice1
Reply to  MarkW
March 15, 2017 6:55 am

MarkW. You are wrong. A flat trend punctuated with El Nino’s (Los Ninos?) would have a pause going back to the start of the flat trend. If there were no warming the pause would quickly go back to before the 1998 El Nino.

If you want I will talk you through it.

Stan
Reply to  MarkW
March 15, 2017 5:50 pm

No global warming, just Arctic warming produced by pretend thermometers and homogenisation pulled out of the arses of climate scientists.

Reply to  MarkW
March 15, 2017 10:16 pm

seaice1 writes

If there were no warming the pause would quickly go back to before the 1998 El Nino.

And if this represents another step increase like we saw around the turn of the millennium then we might expect another period of little to no warming at the new slightly warmer level.

GCMs don’t show that behaviour.

Dick Burkel
March 14, 2017 11:17 am

I cannot understand how anybody can justify calculating these numbers to 3 or more decimal places. I understand that satellite measurements have less uncertainty than the majority of surface measurements, but three decimal places? And then commenting about 4 decimal places!

george e. smith
Reply to  Dick Burkel
March 14, 2017 11:35 am

Are you talking about four decimal places; or four significant digits ??

Some things are quoted to 18 decimal places; or even 43.

G

ColA
Reply to  george e. smith
March 14, 2017 2:13 pm

I’m an old Engineer; I tell my Junior Engineers they can use all the digits the computer has, BUT never talk to me with more than 3 significant figures! (The rest are just hairs on a gnat’s bum!!)

On a lighter note! (digits added for emphasis!!) 🙂

You are trying to argue the amount of warming from CO2 at about 0.2134665587749984536213255 W/m2 but the overall is actually 0.61230452346789465231 w/m2 ?? wait a minute where did the other 0.41315464976456543623643623612361 W/m2 come from – oh, maybe it’s just caused from the uncertainties of the estimates which are

….. wait for it …….

+/- 17.012455765698892514123324656542554565 W/m2

Yep you read it correct +/- 17.012455765698892514123324656542554565 W/m2 – so the uncertainties of the measurements give you a range of answers -16.21465484156486411586316181445654136 to 17.613221654694368136126123694194695426312398451236 W/m2 !!!!

That’s 10 times (to 20 significant figures!!) what you are trying to measure, which clearly makes your numbers USELESS! (to 20 significant figures!!)

……………. hmmmmm, no wonder IPCC are 95% confident!!

And our idiot politicians accept this pigs swill!!

http://notrickszone.com/2017/03/13/uncertainties-errors-in-radiative-forcing-estimates-10-100-times-larger-than-entire-radiative-effect-of-increasing-co2/#sthash.1QuorkfI.dpbs

Reply to  Dick Burkel
March 14, 2017 11:46 am

but three decimal places?

I use the numbers they provide. It is probably safe to assume all numbers are +/- 0.1. See my recent post here that discusses this to some extent:
https://wattsupwiththat.com/2017/01/26/warmest-ten-years-on-record-now-includes-all-december-data/

george e. smith
Reply to  Werner Brozek
March 14, 2017 12:28 pm

Not complaining, just clarifying.

Sometimes it is OK to give more digits, than justified by the experimental accuracy. For example I have seen values stated and the error number is given to three digits.

That simply reflects that the experimental apparatus has the ability to RESOLVE such differences; but not necessarily with that calibration accuracy.

My handbook of Physics gives the uncertainty of the value of G to three significant digits, although the value is not known as accurately as that LSD; it can be measured to that resolution.

G

Thank Barry for the information.

richard verney
Reply to  Werner Brozek
March 14, 2017 1:14 pm

Let us be realistic; land based thermometer anomaly data sets probably nearer +/- 0.6 degC

Reply to  Werner Brozek
March 14, 2017 2:19 pm

richard verney March 14, 2017 at 1:14 pm
“Let us be realistic; land based thermometer anomaly data sets probably nearer +/- 0.6 degC”

Yes, for a single Thermometer, but if you use thousands of stations and calculate the average, you will get more decimal places.

A single station swings up and down quite a lot during the year in terms of degrees. If you take a global average, the variation is only tenths of a degree.

commieBob
Reply to  Werner Brozek
March 14, 2017 3:20 pm

Johannes Herbst March 14, 2017 at 2:19 pm

… Yes, for a single Thermometer, but if you use thousands of stations and calculate the average, you will get more decimal places.

That is based on the errors being properly random. That is a dangerous assumption because the climate is not Linear Time Invariant (LTI).

Reply to  Werner Brozek
March 14, 2017 3:24 pm

Johannes Herbst, “if you use thousands of stations and calculate the average, you will get more decimal places.” Only if the measurement error is random.

However, all surface stations, except the new aspirated CRN sensors, have significant systematic measurement errors that do not average away. These errors arise from uncompensated environmental variables, especially solar irradiance and wind speed effects.

I have published on this, open access pdf here (1 MB) and a post the subject on WUWT here.

Richard Verney is quite right. The surface air temperature record is certainly no more accurate than ±0.5 C and likely much worse.

The RSS and UAH satellite records, by the way, are likely no more accurate than about ±0.3 C.

Rick C PE
Reply to  Werner Brozek
March 14, 2017 4:32 pm

Having taught courses in Measurement Uncertainty for over twenty years in the independent testing industry, I can say that the abuse of significant digits is a very common issue. I taught that you must first do a legitimate MU analysis (typically at 95% confidence) of your data then round your MU estimate to two significant digits. Then report your result to the same level of precision that is indicated by the MU. e.g. if your data result is 1.7523624 and the MU is +/- 0.02573, the result should be reported as 1.752 +/- 0.026. I also try to emphasize that measurement uncertainty itself is only an estimate. There are almost always some unrecognized sources of uncertainty we just can’t account for. We all have a tendency to think our measurements are better than they really are.
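The reporting rule Rick describes can be sketched mechanically (the helper name here is mine, not from any standard library):

```python
import math

def report(value, mu):
    """Round the measurement uncertainty to two significant figures,
    then report the value to the same decimal precision -- the rule
    described in the comment above."""
    digits = 1 - math.floor(math.log10(abs(mu)))  # 2 sig figs of mu
    return f"{round(value, digits)} +/- {round(mu, digits)}"

print(report(1.7523624, 0.02573))  # 1.752 +/- 0.026
```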

Frank
Reply to  Werner Brozek
March 14, 2017 6:03 pm

Pat Frank wrote: “However, all surface stations, except the new aspirated CRN sensors, have significant systematic measurement errors that do not average away. These errors arise from uncompensated environmental variables, especially solar irradiance and wind speed effects.”

So what? We are interested in temperature CHANGE, not absolute temperature. That is why we usually work with temperature anomalies, not raw temperatures.

If problems at a station cause that station to read +2 degC high on calm sunny days, this doesn’t cause a problem after temperature anomalies have been calculated, as long as the number of calm sunny days remains constant over decades. A biased trend is created when problems at the station are fixed and the station no longer reads +2 degC high on calm sunny days. The correction would introduce a cold bias into the station trend.

This also explains why the surface stations project didn’t find a major bias in the trend at poorly- vs properly-sited stations. It is a changing bias, like a GRADUALLY increasing UHI bias, that creates problems with the long-term trend. If New York City had a large UHI bias in 1900, the presence of a similar bias today won’t produce a biased trend. On the other hand, there were only about 100,000 people in Los Angeles in 1900, and about 100 times as many in its metropolitan area today. That could create a warming bias in the trend.

richard verney
Reply to  Werner Brozek
March 15, 2017 1:41 am

Yes, for a single Thermometer, but if you use thousands of stations and calculate the average, you will get more decimal places.

People making this claim (ie., improved accuracy due to the law of large numbers) overlook that the land based thermometer record is not a true and proper time series.

There are so many problems with the principle underpinning this data set, it is difficult to know where to start, but the fundamental problem is that the constituent components are constantly in flux as stations are added, or drop out, or there are significant station moves, and the changes in these constituent components is far from random. Then you have the problem of how infilling is performed, and homogenisation etc.

If one wanted to know how temperature anomalies have changed over time from, say, 1880 to date, one would identify the stations that were used and reported data in 1880, and use only those stations (and no others) that have a continuous, uninterrupted record and are still reporting data today. If there have been significant station moves, the station would be thrown out. If there has been a change in environmental conditions, a change of equipment, or a change in TOB (time of observation), some adjustments would need to be made, and each adjustment carries with it a margin of error. If one wanted to know how temperatures have changed from, say, 1940, one would compile a new data set adopting a similar approach, i.e., using only the 1940 stations, and no others.

In 1880, there were only about 500 stations, and of those, only about 20 in the Southern Hemisphere. In 1960 the number of stations peaked at about 5,900, and since then it has dropped back to about 1400. See:

http://notrickszone.com/wp-content/uploads/2017/02/NOAA-Data-Manipulation-Station-Removal-Small.jpg

But not only does that variation result in significant problems; the very composition has continually changed, such that there has over time been a significant variation in the proportion of rural to urban stations, in the ratio of high latitude to mid latitude stations, in the ratio of airport stations to non-airport stations, etc. The pattern of change is anything but random. Even with airport stations, not only has the ratio dramatically changed over time, from about 20% of station data in 1920 to approximately 50% of station data today, but so too has the nature of airports. Many airports in, say, the 1930s were small and may even have had a grass runway, but those airports now are very different beasts. Consider:

http://notrickszone.com/wp-content/uploads/2017/02/NOAA-Data-Manipulation-Urban-Bias-Airport-Temperature.jpg

All of these changes in the composition of what is said to be a time series have caused fundamental issues with the data set. The changes are not at all random, and errors are not cancelled out but rather exacerbate each other. The land based thermometer record is worthless; it is not fit for scientific purpose. The sampling and spatial coverage in the SH is a joke, and realistically the only worthwhile data is that of the NH, but that data has been bastardised beyond repair and needs reworking from the ground up. Prior to ARGO, we have no worthwhile data on sea temps.

Stephen Richards
Reply to  Werner Brozek
March 15, 2017 1:53 am

My tutor said one decimal place rounded to most accurate margin. 1.05 measured accurately becomes 1.1

Reply to  Werner Brozek
March 15, 2017 9:28 am

Frank, the systematic measurement error is hour-by-hour, day-by-day variable, and non-random. It’s not removed by taking anomalies.

The error is caused by uncontrolled environmental variables, and is in both well-sited and poorly sited stations, which is why comparisons remain consistent.

Either the climate scientists involved know nothing whatever about measurement error, or they’re ignoring it on purpose. But in either case, it’s known to be present, it’s not taken into account, and it corrupts the entire surface temperature record.

george e. smith
Reply to  Werner Brozek
March 15, 2017 9:53 am

It doesn’t matter how many stations you have or how many thermometers; each one only measures the Temperature somewhere in its vicinity.

Combining all of those different temperatures to get one single “Temperature” gives you a result that NOBODY ever observed ANYWHERE at ANY time.

The result of applying whatever Statistics text book algorithms you want to use; has ONLY STATISTICAL significance.

It has no relevance to ANYTHING in the physical world; only in the pretentious Academic world of Statisticians.

NOTHING physical in the entire universe pays any heed or is even aware of ANY statistical machinations performed by some totally fictitious numerical Origami algorithm published in some text book.

Well; I guess Stat maths academics do.

If you draw a straight line through ANY global Temperatures graph covering any period of history or geological time scale; at the calculated AVERAGE Temperature for all of the numbers on that graph, it will be immediately obvious, that the Temperature is virtually NEVER at that average value.

Well of course it has to be that value some times, because by definition, the average value must be somewhere within the graphed range, and since any real physical variable is a continuous function, then the value must cross the average line at, at least one point.

But clearly the time spent at the average value is damn near zero.

So the average value is about as unlikely a value for that continuous variable to have, as any other one might choose.

The universe, and the climate ONLY respond to real physical variables. Neither pays ANY attention to some contrived number derived from an algorithm. That number is of significance ONLY to statisticians.

G

Reply to  Werner Brozek
March 15, 2017 10:02 am

So the average value is about as unlikely a value for that continuous variable to have, as any other one might choose.

While that is true, it is very useful to have average values. For example if I were planning a vacation and did not know where to go, but found out that one place averages 28 C at that time and the other averages -5 C at that time, I know where I want to be. Of course I would not have a right to expect either place to be at its latest 30 year average.

whiten
Reply to  Werner Brozek
March 15, 2017 11:20 am

Werner Brozek
March 14, 2017 at 11:46 am

Guest Post by Werner Brozek, Extended Comments from Barry and Edited by Just The Facts.
How Imminent is the RSS Pause? (Now Includes January and February Data)..
———-
Werner…what pause?….there is no pause in the RSS data….
How can you actually consider to evaluate and conclude about the pause and it’s progress by relying in data sets that do not even show or have a pause, is beyond me….

You can not conclude that the pause has ended or when the pause will be resurrected according to your numbers and your maths when all based on a data set that does not even have a pause in the first place….

If you ask me, you do fail with your method in the same way in principle as the Karlization, the latest Berkeley team temp reconstruction or any other brand new fresh method that is trying to deal with the pause……

If you can’t see the principal problem, you can’t understand it…

cheers

Reply to  Werner Brozek
March 15, 2017 1:10 pm

Werner…what pause?….there is no pause in the RSS data….

I know. Hence my title asks about when it will resume. Did you not see all of Lord Monckton’s blog posts from before February 2016 talking about the length of the pause? Here is the last post with a pause length:
https://wattsupwiththat.com/2016/02/06/the-pause-hangs-on-by-its-fingernails/

whiten
Reply to  Werner Brozek
March 15, 2017 1:55 pm

Werner Brozek
March 15, 2017 at 1:10 pm

I know. Hence my title asks about when it will resume.
————–
Brozek, I think you entirely missed my point, for some weird reason……

Brozek, when I stated “there is no pause in the RSS data, I meant the whole data set not only the latest part………there is no pause in the whole RSS data set to be found, just some allegations about it….not the such pause that you trying a deal with.
that data set can not be relied on to for evaluation about the pause because does not have one in the first place to consider….

Please follow my comment below at Nick Stokes……maybe it helps and assist you in understanding my point…..

My point may be wrong, but I think you have failed to understand it, or so it seems…..

cheers

Reply to  Werner Brozek
March 15, 2017 2:21 pm

there is no pause in the whole RSS data set to be found, just some allegations about it

Did you read Lord Monckton’s post I referenced above? For now, the pause is gone, but it used to be there.

whiten
Reply to  Werner Brozek
March 15, 2017 2:50 pm

Werner Brozek
March 15, 2017 at 2:21 pm

I think this is your own quote from the above article:

“UAH (University of Alabama in Huntsville) and RSS (Remote Sensing Systems) are two major satellite groups that provide monthly climate anomalies. From January 1998 to January 2016, the slope was slightly negative, a period which many have referred to as a “pause”, although some prefer other names.”
———-

A slope slightly negative is not the “pause” in question, because the one in question is as per other data sets which will not be affected much by a short natural variability, like an El Nino or such…..
So according to the data sets that you rely on there was not a pause either before or after 2016, so how can you use the data of such data sets to evaluate the IPCC pause is still beyond me.

Yes, you can refer for comparison but the data of these two data sets can not be used as per evaluating the pause, these data sets have actually no any pause as you your self state, regardless what Lord Monckton or others may say…..there is not any actual pause established in these data, even where there is no warming shown in these data up to the 2016…….
These data is not stable enough to be taken in account for what you attempting to…

cheers

Reply to  Werner Brozek
March 15, 2017 3:19 pm

A slope slightly negative is not the “pause” in question because the one in question is as per other data sets

Did you note the title? It talks about RSS.
As for other data sets, they had pauses long ago and before adjustments got rid of them. For example, see this post of mine from four years ago:
https://wattsupwiththat.com/2013/03/05/has-global-warming-stalled-now-includes-january-data/

whiten
Reply to  Werner Brozek
March 15, 2017 4:29 pm

Oh sorry Werner,,,,,,silly me.

We have been talking past each other because I failed to realize that your blog post was about your own “pause” and that of Lord Monckton, according to your own definition based on the satellite data.

I do apologize for the misunderstanding…..

Probably I should apologize to Nick too…….gosh, everyone these days has got their own brand new definitions and their own brand new methods about these things…getting harder and harder to follow.

bye Brozek.

cheers

barry
Reply to  Werner Brozek
March 15, 2017 6:34 pm

whiten,

Of course, once you include the CI in the analysis, the ‘pause’ becomes quite debatable in the first place. But Werner and I based our analyses for the most part on the metrics given by Lord Monckton, which are fairly popular in the debate (particularly among skeptics) – ie, no confidence intervals, only the mean trend. I tend to agree with your view, which could be explored by anyone in a different post.

Frank
Reply to  Werner Brozek
March 15, 2017 10:17 pm

Pat Frank wrote: “the systematic measurement error is hour-by-hour, day-by-day variable, and non-random. It’s not removed by taking anomalies”.

Of course. However, when you subtract one anomaly from another (or do a linear regression), a constant bias cancels.

If I accidentally recorded temperature in degK instead of degC, I would be making an enormous, but constant, measurement error. However, when I calculate the trend from this data, a warming rate of 1 degC/century is exactly the same as 1 K/century! Now, if I added 273 degrees to readings made in degC and my successor added 273.15 degrees, that CHANGING bias will show up in the trend.

The only biases that are important to the TREND are ones that change with time.
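Frank’s point is easy to demonstrate numerically. The sketch below uses synthetic data (not any actual temperature record) to show that a constant offset leaves an OLS trend untouched, while a bias that drifts over time shows up in the trend:

```python
import numpy as np

# Synthetic illustration: a constant measurement bias does not change an
# OLS trend, but a bias that changes over time does.
rng = np.random.default_rng(0)
t = np.arange(120) / 120.0                         # 10 years of monthly data, in decades
temps_c = 0.1 * t + rng.normal(0.0, 0.1, t.size)   # "true" series, 0.1 C/decade plus noise

slope_c = np.polyfit(t, temps_c, 1)[0]                   # trend of the raw series
slope_k = np.polyfit(t, temps_c + 273.15, 1)[0]          # constant bias (C recorded as K)
slope_drift = np.polyfit(t, temps_c + 0.05 * t, 1)[0]    # bias drifting at 0.05 C/decade

# slope_k equals slope_c (the 273.15 cancels);
# slope_drift equals slope_c plus the 0.05 C/decade drift.
```

Because the least-squares slope is linear in the data, the constant term cancels exactly and the drifting term adds its own rate to the fitted trend.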

Reply to  Werner Brozek
March 16, 2017 9:08 am

Systematic measurement error is not a constant bias, Frank.

Reply to  Werner Brozek
March 16, 2017 9:17 am

Pat Frank, the problem with “systemic error(s)” is that when they are discovered and corrected for, people complain that the data is “adjusted” and clamor for the raw data. So, you’re screwed both ways.

Reply to  Werner Brozek
March 16, 2017 12:01 pm

I tend to agree with your view, which could be explored by anyone in a different post.

Did you have anything in particular in mind? Barry, with well over 200 comments in each of our last two posts, WUWT folks obviously greatly appreciate your insights. If you wish to be my guest and come up with something once the March satellite anomalies are in, please let me know.
A thought that occurred to me was to ask what RSS and UAH need to show for the rest of 2017 in order to have a 20 year “pause” of say 0.02/decade from January 1998 to December 2017. I could title it:
“Implications of a New Pause Definition (Now Includes February and March Data)”

Clyde Spencer
Reply to  Dick Burkel
March 14, 2017 12:08 pm

DB,
I completely agree with you. I’ve started writing something on this issue of precision. Hopefully, it will soon see the light of day.

Reply to  Clyde Spencer
March 15, 2017 5:51 am

Clyde, I am warming to that idea and hope it becomes a hot issue :). Don’t keep us in the dark!

Bindidon
Reply to  Dick Burkel
March 14, 2017 2:23 pm

Dick Burkel on March 14, 2017 at 11:17 am

I understand that satellite measurements have less uncertainty than the majority of surface measurements…

This, Dick Burkel, is wrong.

Here are some 2σ trends, in °C / decade, for the satellite era (1979 – 2016), calculated by Dr Kevin Cowtan (http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html):

RSS4.0 TTT: 0.180 ± 0.060
RSS3.3 TLT: 0.135 ± 0.061
UAH 6.0 TLT: 0.123 ± 0.062

BEST: 0.181 ± 0.037
GISSTEMP: 0.173 ± 0.040
HadCRUT: 0.172 ± 0.038
NOAA: 0.163 ± 0.036

The differences are tiny of course! But they all point in the same direction.

Michael Jankowski
Reply to  Bindidon
March 14, 2017 6:38 pm

I don’t think he meant “uncertainty” in the statistical sense.

Nevertheless, uncertainty in trend is not the same as uncertainty in measurement.

Bindidon
Reply to  Bindidon
March 15, 2017 7:14 am

Michael Jankowski on March 14, 2017 at 6:38 pm

Nevertheless, uncertainty in trend is not the same as uncertainty in measurement.

You are right!

I’m anything but a fan of Skeptical Science. But Kevin Cowtan published an interesting post there last year:

https://skepticalscience.com/surface_temperature_or_satellite_brightness.html

barry
Reply to  Dick Burkel
March 15, 2017 4:45 pm

Hi Dick. I took my cue for anomaly reporting from many places, especially Lord Monckton. I agree that 3 decimal places is a bit much. Most of my figures are to 2 decimal places, like most other analyses.

I didn’t include confidence intervals. That, too, is because Lord Monckton didn’t. Doing so produces a quite different view of the data (and the ‘pause’), but I was interested in examining the possibility of a resumption of the pause according to the metrics that are popular in skeptical discussions (ie, without confidence intervals). This context is not universal among skeptics, but prevalent enough to prompt a simple analysis like the one above.

Ross King
March 14, 2017 11:26 am

Ross King, MBA, P.Eng. (ret’d) [trimmed, by another PE, for prudence sake]

“The older I get, the better I was….”

March 14, 2017 11:27 am

I’ve got a semi-automatically updating site http://isthereaglobalwarmingpause.com that shows the longest periods without “statistically significant” warming in various datasets, using the methodology and data from Skeptical Science’s trend calculator (based on Foster & Rahmstorf 2011). That analysis suggests that GISTEMP and NOAA land+ocean are the only datasets without a >10 year pause.

Reply to  ilmastotiede
March 14, 2017 12:10 pm

That analysis suggests that GISTEMP and NOAA land+ocean are the only datasets without a >10 year pause.

HadCRUT4
Land/ocean
2017 Jan
16 years
0.135 ± 0.136 °C/decade

That is interesting! Nick’s site gives:

Temperature Anomaly trend
Jul 2013 to Jan 2017 
Rate: 8.161°C/Century;
CI from -0.526 to 16.847;

For 16 years, Nick’s site gives

Temperature Anomaly trend
Jan 2001 to Jan 2017 
Rate: 1.357°C/Century;
CI from 0.481 to 2.233;

I knew there were some differences, but I had no idea they were that large! Hopefully Barry or Nick will weigh in.

Nick Stokes
Reply to  Werner Brozek
March 14, 2017 1:36 pm

Werner,
“I knew there were some differences, but I had no idea they were that large!”
The trends aren’t different – the 16-year HadCRUT trend is the same. The uncertainty range is about 50% wider. That’s a consequence of using Tamino’s ARMA(1,0,1) noise model instead of the more orthodox AR(1) that I use. I compared the methods here, explaining why I think Tamino’s may be overly responsive to short-term lags.

But the link shows the fallacy of using edges of confidence intervals as a definition of pause, as Frank explains downthread. The trends that are alleged to show the “pause” are actually quite strongly positive. Berkeley, for example, is 2°C per century. Now that is equal to the warming predicted. It isn’t a deviation from prediction, it’s a very good confirmation. It’s just a matter of defining how you feel about uncertainty. And as Frank says, failing to reject the null hypothesis doesn’t assert anything. It just means there is a 2.5% probability that you would have observed that even though the trend was really zero or negative (the other 2.5% is the chance that the trend was very much higher).

In summary, the observed trends in this “pause” are highly positive. There is a very high likelihood that repetition, if that were possible, would produce equally high trends. There is a very small chance that repetition (with different weather) would produce a negative trend. This does not justify the use of the term pause in anyone’s language.
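For readers who want to experiment with the autocorrelation adjustment Nick mentions, here is a minimal sketch of the conventional AR(1) correction (my own illustration using the standard effective-sample-size inflation, not Nick’s or Tamino’s actual code):

```python
import numpy as np

def trend_with_ar1_ci(y, months_per_unit=120):
    """OLS trend of a monthly series with a 95% CI widened for lag-1
    autocorrelation of the residuals (the conventional AR(1) adjustment).
    Returns (trend, lower, upper), scaled to units per decade by default."""
    n = y.size
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    # naive standard error of the slope, then inflate for autocorrelation
    se = np.sqrt(resid.dot(resid) / (n - 2) / ((t - t.mean()) ** 2).sum())
    se_adj = se * np.sqrt((1 + r1) / (1 - r1))
    half = 1.96 * se_adj
    return (slope * months_per_unit,
            (slope - half) * months_per_unit,
            (slope + half) * months_per_unit)
```

An ARMA(1,0,1) noise model, as used by the Skeptical Science calculator, typically widens the interval further when short-term lags matter, which is consistent with the roughly 50% wider range quoted above.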

Chimp
Reply to  Werner Brozek
March 14, 2017 1:53 pm

Nick,

What part of “no statistically significant warming” don’t you get? GISS and HadCRU are works of anti-science fantasy, whose corrupt perpetrators should be prosecuted.

I agree with you however that the term “pause” is unjustified, but for a different reason, since there are no grounds to expect whatever warming did occur from c. 1977 to the ’90s will be followed by more warming after the current flat interval.

Extrapolating the down trend in the late 17th century would have had us in a new glaciation by now. There is no more reason to extrapolate the warming trend since c. 1690 indefinitely. The long-term trend (for the past more than 3000 years) remains down.

Reply to  Werner Brozek
March 14, 2017 2:13 pm

The uncertainty range is about 50% wider.

Very true! I was thinking of the huge differences in time lengths.
Of course the pause, as Lord Monckton defined it, does not exist for any significant length of time now.

Nick Stokes
Reply to  Werner Brozek
March 14, 2017 2:56 pm

Chimp
“What part of “no statistically significant warming” don’t you get?”
I don’t think you know what it means. The first thing to focus on is the warming. Every one of the plots in the link shows strong warming, comparable with what was predicted. The stat sig stuff says that, despite those observations, there is a very small chance that such trends might have arisen by chance in a world that wasn’t warming. Turning the logic around (which isn’t quite right), it says that there is a very small chance (2.5%) that despite appearances, there was a pause in climate, and it just happened by chance that we had a run of warming weather. That is very different to saying that there was a pause. There wasn’t.

Sun Spot
Reply to  Werner Brozek
March 14, 2017 4:52 pm

Nick where is the modelled warming acceleration?

Michael Jankowski
Reply to  Werner Brozek
March 14, 2017 6:43 pm

No statistically significant warming = “strong warming?”

Do you really want to keep making a fool out of yourself?

Reply to  Werner Brozek
March 14, 2017 7:22 pm

No statistically significant warming = “strong warming?”
Do you really want to keep making a fool out of yourself?

Who are you addressing? The words

No statistically significant warming

appear 8 other times. They have a certain meaning to people in climate science such as Phil Jones. It does not necessarily mean strong warming, but rather that there is a small likelihood that the warming could be zero.

whiten
Reply to  Werner Brozek
March 15, 2017 1:33 pm

Nick Stokes
March 14, 2017 at 1:36 pm

The trends that are alleged to show the “pause” are actually quite strongly positive.
—————
Yes Nick, quasi the truth…
It was accepted as a sacrifice, since the beginning of the adjustments that produced such trends, to prevent us from pulling out the guns and shooting at each other any time there was an El Nino, a La Nina, a strong volcanic eruption, weird sunspot activity, or any such strong short-term natural variability affecting the temps and such trends…
Such trends are a product of adjustments that filter out and considerably reduce the effect of short-term natural variability… and after passing the 15-year mark they became mature enough, according to the godfather of such adjustments and of the pause, the very Mr. P. Jones himself…

Such trends were immune (up to 2013–14), and hopefully will remain so in the future, to the impact of short-term natural variability, Nick.
Strange that, regardless of the positive biases introduced by such adjustments, such trends still show not warming but a pause… isn’t it strange, Nick!? Upsetting to some, yes, but still true about the no warming, and very strange indeed, because the pause happens to be a warmer representation of the actual reality, due to the positive biases.

The RSS data set is not “immune” to short-term variability impact, like in the case of an El Nino, and is therefore unable to show a steady trend like the pause… and we are just pulling the guns over it…
The RSS data is a product of adjustments that do not offer stability against the impact of short-term natural variation on the temps…

hopefully this makes some sense to you…

cheers

Bindidon
Reply to  Werner Brozek
March 15, 2017 2:53 pm

whiten on March 15, 2017 at 1:33 pm

The RSS data is a product of adjustments that do not offer stability against the impact of short-term natural variation on the temps… hopefully this makes some sense to you…

I simply love people like you trying to teach a science man like Nick Stokes. Sounds delicious, especially with all these frenchy looking punctuation marks.

Let me teach you something in turn, whiten, with a little hint of what science people obtain when subtracting natural variability from a temperature time series like RSS3.3 TLT:

http://fs5.directupload.net/images/170219/dfeergne.jpg

Do you see the residuals?

P.S. I don’t care about what they come from. This comment is just about showing them.
Ni plus ni moins, n’est-ce pas?

whiten
Reply to  Werner Brozek
March 15, 2017 4:49 pm

Bindidon
March 15, 2017 at 2:53 pm

Let me teach you something in turn, whiten, with a little hint to what science people obtain when subtracting natural variability from a temperature time series like RSS3.3 TLT:
—————

Thanks for the reply Bindidon.

Trust me I really like to learn from the science people…..even when I have to accept that I may be a bit slow at it..

So let me ask: Why do the science people have to subtract natural variability from the RSS3.3 TLT temperature time series!?

Reply to  Werner Brozek
March 15, 2017 10:34 pm

Nick writes

The trends that are alleged to show the “pause” are actually quite strongly positive.

You’ve said: “Results – The coefficients are calculated using the R arima() command with the CSS optimisation. The period is from 1980 to June 2013.”

Are you saying there is no pause when looking back to 1980? How does 1980 come into it?

Nick Stokes
Reply to  Werner Brozek
March 15, 2017 11:14 pm

TTTM,
“How does 1980 come into it?”
That isn’t a study about the pause. It is looking at various ways of dealing with autocorrelation, the effect on trend, and the standard error. I chose the period since 1980 since some of the data were satellite sets.

Reply to  Werner Brozek
March 15, 2017 11:20 pm

Nick writes

But the link shows the fallacy of using edges of confidence intervals as a definition of pause, as Frank explains downthread. The trends that are alleged to show the “pause” are actually quite strongly positive.

Then what are you saying? This seems pretty clear.

Bindidon
Reply to  Werner Brozek
March 16, 2017 2:20 am

whiten on March 15, 2017 at 4:49 pm

Thank you whiten for the answer to my somewhat sarcastic remark, I was very tired yesterday evening (9 hours difference to WUWT time).

I apologize for having forgotten to give you the link to the paper:
https://dspace.mit.edu/openaccess-disseminate/1721.1/89054

You can read there all you need. In short: the team tried to discover the residual global warming behind natural sources because they suspect it to be of anthropogenic nature.

Some think it’s CO2-based: I have no idea about that. It is by far too complex for me to participate in the discussion in a meaningful way.

March 14, 2017 11:38 am

For those of you who wish to follow the latest ENSO numbers, you can do so at this site:
http://www.tropicaltidbits.com/analysis/ocean/nino34.png

It gets updated every six hours so if you are reading this after several hours, simply click on the graphic for the latest update.

Steve Fraser
Reply to  Werner Brozek
March 14, 2017 12:01 pm

Werner, do you have an idea as to why the 3.4 got zig-zaggy in Feb? In the past few days it has settled down.

Reply to  Steve Fraser
March 14, 2017 12:24 pm

Werner, do you have an idea as to why the 3.4 got zig-zaggy in Feb? In the past few days it has settled down.

If you look at this map, things can vary greatly over very short distances. Then keep in mind that things are constantly shifting by a small amount every day.
http://cci-reanalyzer.org/wx/DailySummary/#T2_anom

(I have no idea why it jumped back to February on me!)
I will try again and hopefully it is up to date this time:
http://www.tropicaltidbits.com/analysis/ocean/nino34.png

Bill Illis
Reply to  Steve Fraser
March 14, 2017 4:35 pm

The weekly Nino 3.4 Index values show nothing like these numbers.

Back down to -0.2C in the week of March 8, 2017. +0.3C the week of Feb 22.

http://www.cpc.ncep.noaa.gov/data/indices/wksst8110.for

Reply to  Steve Fraser
March 14, 2017 5:12 pm

Back down to -0.2C in the week of March 8, 2017.

I would say the week of March 7 to 14 at -0.1 is certainly in the ball park of -0.2.

Bindidon
Reply to  Steve Fraser
March 15, 2017 4:46 pm

Bill Illis on March 14, 2017 at 4:35 pm

The weekly Nino 3.4 Index values show nothing like these numbers.
Back down to -0.2C in the week of March 8, 2017. +0.3C the week of Feb 22.

22FEB2017  28.5  2.3  27.3  0.7  27.1  0.3  28.0  -0.1
01MAR2017  28.5  2.2  27.1  0.4  26.9  0.0  28.1  -0.1
08MAR2017  28.5  2.1  27.4  0.4  26.8  -0.2  27.8  -0.3

The graph shows these numbers for 3+4. Maybe you look again…

March 14, 2017 11:39 am

HadSST3 Update:
HadSST3 for February came in at 0.524, which is a slight increase from the January value of 0.484. This gives an average of 0.504 and it would still rank in third place if it stayed that way.

troe
March 14, 2017 11:43 am

The pause, there it is.

Frederik Michiels
March 14, 2017 12:06 pm

The answer to the question of whether there will be another 18-year-long pause is simple: it depends on what ENSO will do.

During the pause, ENSO was in a pretty neutral state: the La Ninas and El Ninos were balanced and cancelled each other out. This last El Nino actually ended that balance (and, as a result, ended the pause).

What ENSO will do in the future is unknown, but a strong back-to-back La Nina could even make the pause return, while a new strong El Nino would give a new step up.

I find this graph really tells the whole story:
http://s33.postimg.org/e3fkz0m0v/ENSO_1871.png

You can see very well how El Nino correlates with the rise of 1900–1945, then the dip until 1975, and the rise until 1998, followed by “the pause”.
The last strong El Nino did “end the pause”, though.

Reply to  Frederik Michiels
March 14, 2017 12:36 pm

All huge upward spikes were followed by large La Ninas eventually. I guess we just have to be patient.

richard verney
Reply to  Werner Brozek
March 14, 2017 1:18 pm

Yes. But remember that the satellite data set is less sensitive to La Nina than it is to El Nino. Possibly because with El Nino there is more convection.

Reply to  Werner Brozek
March 14, 2017 2:18 pm

Yes. But remember that the satellite data set is less sensitive to La Nina than it is to El Nino.

I guess in that case we will need La Ninas that are three times longer than the preceding El Nino to equal things out.

george e. smith
Reply to  Frederik Michiels
March 14, 2017 12:47 pm

So absolutely nothing has happened since 1871.

Downtown Sunnyvale changes three or four times that much in just 24 hours.

And the whole planet hasn’t changed more than 12 degrees C in 650 million years.

G

Bindidon
Reply to  Frederik Michiels
March 16, 2017 4:06 am

Frederik Michiels on March 14, 2017 at 12:06 pm

The answer to the question of whether there will be another 18-year-long pause is simple: it depends on what ENSO will do.

Maybe! But when writing

i find this graph really telling the whole story:

I’m afraid you are really wrong. You can’t establish that by simply eyeballing a chart; a true comparison of ENSO with a temperature record is necessary, in order to see how the long-term running means of the two records really behave:
http://fs5.directupload.net/images/170316/d6hexoea.jpg
As you can see, the two 120 month running means show some similarity until 1990 but then begin to strongly diverge: while ENSO is on the decline, HadCRUT is increasing.

P.S. I didn’t add the Sun Spot Number record to the graph presented here; but be sure it behaves quite similarly to ENSO…

RWturner
March 14, 2017 12:12 pm

I just asked my magic 8-ball and it said wait until November 2018.

Reply to  RWturner
March 14, 2017 1:22 pm

It is decidedly so.

R. Shearer
Reply to  Menicholas
March 14, 2017 3:42 pm

Don’t count on it.

Reply to  Menicholas
March 14, 2017 7:25 pm

But all signs point to yes.

Frank
March 14, 2017 12:22 pm

What is a “Pause”? As I have noted before, the absence of a statistically significant warming trend does not constitute evidence that there has been no warming or that a “Pause” is underway. It simply means that you failed to reject the null hypothesis that the warming trend is zero or below*.

Over the last 40+ years, statistically significant warming has been observed. For shorter periods, the data is too noisy to draw any conclusion. Ambiguity is ambiguity, not something from which you can draw a meaningful conclusion. Especially when you cherry-pick short periods. The CENTRAL ESTIMATE is the best information we have. The confidence interval tells us how much faith to place in that central estimate.

For a scientist to claim that a Pause in warming is underway, you need to define what a Pause is. For example, a Pause* could be defined as a warming rate of 0.05 K/decade or less, about 20% of what the IPCC has projected. Or less than 0.025 K/decade (10%) or 0.10 K/decade (40%). Then you need to show that the confidence interval for the trend is less than this value. (Hint: You will fail.)

* What does it mean to fail to reject the null hypothesis? To paraphrase William Briggs’ YouTube video:
a) Assuming your statistical model (linear? linear AR1?) is correct…
b) And assuming that your temperature data is representative of typical unforced variability
c) And assuming the experiment could be repeated hundreds of time
d) Fewer than 5% of these repeated experiments would be expected to show a trend of zero or less.

Since you fail to reject the null hypothesis, more than 5% are expected to show a warming trend of zero or less. That doesn’t mean 100% (or 50% or 25% or 12.5%) will be less than zero.

Reply to  Frank
March 14, 2017 12:48 pm

What is a “Pause”?

I am using the word “pause” in the same way Lord Monckton has always used it. Namely, it is the period of time from the latest month going backwards to the furthest point for which the slope is still at least slightly negative. (It is virtually impossible for WFT to have a slope of exactly 0 to 7 decimal places.)
However, in saying this, it is of course reasonable to accept other definitions, but those are not how I am using it in the title.
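That definition is easy to make operational. Here is a sketch of the search (my own illustration, not Lord Monckton’s actual code; WFT uses the same ordinary least squares slope):

```python
import numpy as np

def pause_length_months(anoms):
    """Longest stretch, counting back from the latest month, over which the
    OLS slope is zero or negative -- the 'pause' as defined above.
    Returns 0 if even the shortest recent window trends upward."""
    anoms = np.asarray(anoms, dtype=float)
    n = anoms.size
    best = 0
    for start in range(n - 2, -1, -1):       # windows of 2..n months ending now
        seg = anoms[start:]
        slope = np.polyfit(np.arange(seg.size), seg, 1)[0]
        if slope <= 0:
            best = seg.size                  # farthest back so far with a flat/negative trend
    return best
```

Note that the end point is always the latest month; only the start month is searched, which is why windows with a positive slope part-way back do not stop the search.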

george e. smith
Reply to  Werner Brozek
March 14, 2017 12:57 pm

Werner, he uses “statistically different from zero” (or not). Up or down doesn’t matter.

(works for me)

G

John@EF
Reply to  Werner Brozek
March 14, 2017 1:07 pm

“I am using the word “pause” in the same way Lord Monckton has always used it”

In other words, the very definition of a Cherry Pick.

Reply to  Werner Brozek
March 14, 2017 1:08 pm

Werner, he uses “statistically different from zero” (or not).

In that case, from my section 1:
On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. CI stands for the confidence interval at the 95% level.
For UAH6.0: Since December 1993: CI from -0.009 to 1.776
This is 23 years and 3 months.
For RSS: Since October 1994: CI from -0.006 to 1.768
This is 22 years and 5 months.
For HadSST3: Since May 1997: CI from -0.031 to 2.083
This is 19 years and 9 months.

Here is a recent exchange I had with Nick Stokes:
Werner Brozek
 
January 26, 2017 at 11:53 am

“Would Phil Jones use either your numbers or those of SkS to determine if warming over 15 years was statistically significant or not?”

Nick Stokes
 
January 26, 2017 at 12:19 pm

“My usage there is the same as SkS and is conventional – 95% chance of being within CI. I think from memory that this is what PJ was using too.”

Reply to  Werner Brozek
March 14, 2017 2:27 pm

In other words, the very definition of a Cherry Pick.

I do not agree! If you were to ask me between which two months I could get a negative slope for 18 years, I could cherry pick two points on RSS and UAH to give me those two months. But if we have to start from the latest month, we cannot get a negative slope that is longer than 2 years. You simply cannot “cherry pick” your way out of that fact!

seaice1
Reply to  Werner Brozek
March 14, 2017 3:40 pm

Using that definition the pause is about 8 months.

Reply to  Werner Brozek
March 14, 2017 3:47 pm

Using that definition the pause is about 8 months.

Anomalies have been going down for the last 13 months, so the pause is more or less twice that time.

Reply to  Werner Brozek
March 15, 2017 6:30 am

Monckton’s definition of “pause” was, the farthest back you could go and find a slope that wasn’t positive.

Within Monckton’s “pause” there were always times of both positive and negative slope. Well, actually, there were one or two brief periods of negative slope, corresponding to times when the early endpoint was very close to big spikes in temperature due to El Ninos; it was positive almost all the time, because the Earth is warming.

Monckton’s method relied on the beginning of his “pause” being very close to the 1998 super El Nino, and wouldn’t ever have worked without it. Remove the El Nino spikes, and the “pause” vanishes completely. It is an artifact of the 1998 El Nino. Without the El Nino spikes, Monckton’s method is unimpressive.

Lest that seem somehow unfair, one can remove the La Nina dips as well, and the warming trend is even more obvious. El Nino and La Nina don’t contribute anything to the overall trend, because they simply move around the heat that is already here. They neither add heat nor remove it. What they do is cause large noise fluctuations that can be used to mask the overall trend through statistical manipulations such as Monckton used. Various researchers such as Tamino have used a variety of methods for eliminating the noise that ENSO causes in the data record.

We know Monckton was not displaying any actual physical event (i.e., a “pause”) because the starting point of his “pause” kept shifting. If there had been some actual change in climate at some actual moment in time–a change that created a “pause”–then the start point wouldn’t keep shifting. A past event doesn’t alter the date of its occurrence once it has happened. Monckton’s “pause” does not describe any real physical event or process.

Monckton merely highlighted the effects of statistical noise and of the chaotic timing of the fluctuation that is ENSO. El Ninos and La Ninas are semi-random variations up and down around the trendline. The trend is unmistakably up, and has been for decades.
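The ENSO-removal approach described above can be sketched as a multiple regression: fit the anomalies against time and an ENSO index together, so the time coefficient is the trend with ENSO variability regressed out. (Tamino’s actual analysis also includes volcanic and solar terms; everything below is synthetic, for illustration only.)

```python
import numpy as np

# Synthetic illustration of regressing ENSO out of a temperature series.
rng = np.random.default_rng(2)
n = 240
t = np.arange(n) / 120.0                   # time in decades
enso = rng.normal(0.0, 1.0, n)             # stand-in for an ENSO index (e.g. MEI)
temps = 0.15 * t + 0.1 * enso + rng.normal(0.0, 0.05, n)

# Design matrix: intercept, time, ENSO index
X = np.column_stack([np.ones(n), t, enso])
coefs, *_ = np.linalg.lstsq(X, temps, rcond=None)
trend_enso_removed = coefs[1]              # recovers the built-in 0.15 C/decade
enso_coef = coefs[2]                       # recovers the built-in ENSO sensitivity
```

Because ENSO moves heat around rather than adding or removing it, its fitted coefficient absorbs the spikes and dips while the time coefficient captures the underlying trend.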

Reply to  Werner Brozek
March 15, 2017 7:09 am

Monckton’s method relied on the beginning of his “pause” being very close to the 1998 super El Nino, and wouldn’t ever have worked without it.

At the same time we had an 18 year pause before the 1998 El Nino, we also had a 15 year pause after the 1998 El Nino. A while ago, I wrote a blog post about that very thing here:
https://wattsupwiththat.com/2015/04/09/rss-shows-no-warming-for-15-years-now-includes-february-data/

Reply to  Werner Brozek
March 15, 2017 7:19 am

Monckton’s “pause” does not describe any real physical event or process.

Then why did the brightest minds in climate science scramble to find dozens of possible excuses for the pause? See:
https://wattsupwiththat.com/2014/09/11/list-of-excuses-for-the-pause-in-global-warming-is-now-up-to-52/

george e. smith
Reply to  Werner Brozek
March 15, 2017 10:03 am

Lord Monckton’s “pause” requires ONLY a trend slope NOT STATISTICALLY DIFFERENT FROM ZERO. It does not specify a non positive slope.

And naturally the statistically significant slope changes with every change in the trial length, as prescribed by the stat math laws.

ONLY the ending point (NOW) is chosen. The starting point is eventually determined by following the algorithm; it is NOT cherry picked by M of B.

The only reason that NOW is chosen as the starting point is that we do not yet have the number for TOMORROW; well, actually Christopher goes in monthly increments.

If you don’t read his eminently easy to follow instructions; you won’t get his result.

G

Reply to  Werner Brozek
March 15, 2017 10:20 am

Lord Monckton’s “pause” requires ONLY a trend slope NOT STATISTICALLY DIFFERENT FROM ZERO. It does not specify a non positive slope.

You are mistaken here.

From Lord Monckton’s post here:
https://wattsupwiththat.com/2016/02/06/the-pause-hangs-on-by-its-fingernails/

The hiatus period of 18 years 8 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. 

george e. smith
Reply to  Frank
March 14, 2017 12:51 pm

Actually the raw data is by far the best estimate you have.

The value spends almost ZERO time at the value of the “central estimate”.

So it will be almost impossible to ever observe or measure a value equal to the central estimate. Nobody has ever caught any real value right when it is at its average value.

It is pure fiction.

G

george e. smith
Reply to  Frank
March 14, 2017 12:55 pm

Lord M of B defined his algorithm to detect if the trend was statistically different from ZERO or not.

Using just the common rules of statistical mathematics, as defined in the text books. He didn’t write those text books, just uses them like everyone else does.

G

Nick Stokes
Reply to  george e. smith
March 14, 2017 2:32 pm

“Lord M of B defined his algorithm to detect if the trend was statistically different from ZERO or not.”
I think you should give a quote or cite. I don’t think it is true.

seaice1
Reply to  george e. smith
March 14, 2017 3:41 pm

I thought the pause was defined as how far back from today you could go and not get a positive slope.

Reply to  george e. smith
March 14, 2017 3:50 pm

I thought the pause was defined as how far back from today you could go and not get a positive slope.

Yes, that is the definition Lord Monckton and I have been using.

Michael of Oz
Reply to  george e. smith
March 14, 2017 5:15 pm

Least squares linear regression, I think it has been called by Mr Monckton.

barry
Reply to  george e. smith
March 15, 2017 4:52 pm

Lord M of B defined his algorithm to detect if the trend was statistically different from ZERO or not.

News to me. I second the request for a reference.

Reply to  Frank
March 15, 2017 11:28 pm

Frank writes

Over the last 40+ years, statistically significant warming has been observed. For shorter periods, the data is too noisy to draw any conclusion. Ambiguity is ambiguity, not something from which you can draw a meaningful conclusion. Especially when you cherry-pick short periods.

Perhaps you misunderstand the pause?

There is only one non-cherry-picked date in all of climate science and that is “today”. The pause was looking backwards from today to see how far one could look before a warming trend appeared. It turned out to be nearly 20 years at one point (although after the latest El Nino that’s probably changed). That isn’t exactly a short period.

MarkY
March 14, 2017 12:30 pm

The entire discussion is fascinating.
Since 1998, the “average temperature anomalies” are within a single degree centigrade, with error bars probably wider, and we’re discussing trends?
I must not be smart enough to see the point.

Reply to  MarkY
March 14, 2017 12:56 pm

I must not be smart enough to see the point.

If I understood you properly, you are in good company! Here is a quote from Richard S. Lindzen:
“It is not about whether CO2 is increasing: it clearly is. It is not about whether the increase in CO2, by itself, will lead to some warming: it should. The debate is simply over the matter of how much warming the increase in CO2 can lead to, and the connection of such warming to the innumerable claimed catastrophes. The evidence is that the increase in CO2 will lead to very little warming, and that the connection of this minimal warming (or even significant warming) to the purported catastrophes is also minimal.”

Reply to  Werner Brozek
March 14, 2017 1:19 pm

I found this sentence to be out of place in such a statistical article: “As I expect some warming with atmospheric CO2 increase”. Why “some”, and why mention CO2 at all? A genuine expectation (opinion) might have done better to use water increase as a reason, or better still, stick to statistical knowledge rather than theoretical supposition.

barry
Reply to  Werner Brozek
March 15, 2017 4:55 pm

You are free to delete the sentence from your mind and focus on the rest if you like. As Anthony Watts and Roy Spencer agree that, all else being equal, increased CO2 should cause some warming, I didn’t imagine the sentence would be too controversial.

barry
Reply to  MarkY
March 15, 2017 6:40 pm

As no one bothered with the error bars much while speaking of the ‘pause’, Werner and I assessed on that convention. Werner has posted links to a WUWT article by Lord Monckton that sets the context on the ‘pause’. If you don’t like the method, take it up with him.

John@EF
Reply to  barry
March 16, 2017 9:54 am

I think MoB may have tried the “not statistically significant” angle once but, as you know, that just doesn’t have the same level of politically useful sizzle. Since useful sizzle is what MoB seeks, he stopped.

Reply to  barry
March 16, 2017 10:05 am

I think MoB may have tried the “not statistically significant” angle once but, as you know, that just doesn’t have the same level of politically useful sizzle. Since useful sizzle is what MoB seeks, he stopped.

I do know that years ago, after I had reported that the satellites showed no statistically significant warming for 23 years, he quoted that and gave me credit for the number, even though I said that I had used the Skeptical Science program at the time.
As for “sizzle”, it is much easier to explain “no warming” than “no statistically significant warming at the 95% level”.

March 14, 2017 12:32 pm

There was a flat average temperature trend for over a decade as CO2 rose.

That’s more evidence that CO2 is not the “climate controller.”

The use of average temperature data to three decimal places in this article is bad math, and bad science … even if you believe claims of a +/- 0.1 degree margin of error (I assume surface measurements have a +/- 0.5 degree margin of error until I see proof that I’m wrong).

The inaccuracy of the source data (with so much wild guess infilling and insufficient global coverage) means anyone using the data to three decimal places is being silly … however I know the website owner accepts three decimal places, and I just wonder why?

The warmunists are brainwashing children in school.

The deniers are seeking pauses with mathematical mass-turbation to three decimal places.

How does that refute them?

It’s a flat trend.

Enjoy it for now — after a few years of “adjustments” it will be gone!

It doesn’t matter if there is a slight rise or slight decline in the average temperature — it is still an unexpected and unpredicted flat trend stretching over a decade … and perhaps not over yet.

Climate blog for non-scientists
http://www.elOnionBloggle.Blogspot.com

Frank
March 14, 2017 12:49 pm

What is a Pause? Continued. Some consensus scientists defined a Pause as any period where the Central Estimate for Warming was zero or less. They didn’t include a confidence interval in this definition. Then they looked for periods in AOGCMs where the warming rate was zero or less.

About 25% of five-year periods had a central estimate for the warming trend of zero or less. About 6% of 10-year periods had a central estimate for the warming trend of zero or less. Less than 1% of 20-year periods had a central estimate for the warming trend of zero or less. Elsewhere I read that 15 year trends less than zero occurred less than 5% of the time.

This allows us to approach the question of whether the recent “Pause” was statistically meaningful. Five-year pauses are common. 10-year pauses are much less common. Before Karl (2015), there was a 15-year period with a central estimate for the warming trend of zero or less. The adjustments in Karl (2015) shortened that period.

george e. smith
Reply to  Frank
March 14, 2017 1:00 pm

Frank you can always make up your own statistical algorithm.

We made up all the ones we have, so if you don’t like any of those, make up your own.

G

Frank
Reply to  george e. smith
March 14, 2017 11:21 pm

I’m asking whether your definitions have any useful meaning. The situation that faces us is that a forcing that takes many decades to grow is causing an unknown amount of warming on a planet whose global temperature fluctuates chaotically.

I can prove it isn’t warming by defining warming as the temperature trend last autumn. Or last night! Clearly I’m telling you about temperature trends that aren’t useful for understanding global warming. Neither are you.

Reply to  Frank
March 14, 2017 1:20 pm

Some consensus scientists defined a Pause as any period where the Central Estimate for Warming was zero or less. They didn’t include a confidence interval in this definition.

According to my understanding of Dr. McKitrick, you cannot give a probability for a slope of 0. You can only give a probability of warming or cooling over a certain time period. Having said this, a slope of 0 automatically means that you have a 50% probability of warming and a 50% probability of cooling.

Elsewhere I read that 15 year trends less than zero occurred less than 5% of the time.

We had trends of pauses of over 18 years for both RSS and UAH until February 2016.
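Werner’s 50/50 point can be sketched in code. This is my own illustration, not anything from Dr. McKitrick, and `p_warming` is a name I’ve made up: it treats the estimated slope as normally distributed about the true trend, whereas a careful version would use a t distribution and correct for autocorrelation:

```python
import math

def p_warming(slope, stderr):
    """Rough probability that the underlying trend is positive, treating
    the slope estimate as normally distributed about the true trend.
    (A simplification: a t distribution and an autocorrelation
    correction would widen the uncertainty in practice.)"""
    return 0.5 * (1.0 + math.erf(slope / (stderr * math.sqrt(2.0))))

# An estimated slope of exactly zero splits the probability 50/50:
print(p_warming(0.0, 0.1))  # prints 0.5
```

A slope well clear of zero relative to its standard error pushes the probability toward 1 or 0, which is all the "probability of warming or cooling" framing amounts to.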

Frank
Reply to  Werner Brozek
March 15, 2017 10:36 pm

Werner: I explained that you could choose any sensible range around 0 K/century as your definition of a Pause (neither warming nor cooling). You might argue that anything within +/-0.3 K/century of 0 K/century is effectively a Pause when the IPCC is predicting 10-fold more warming. I wouldn’t complain if you chose +/-0.6 K/century, 1/5th as much warming as the IPCC projects. You could even lump cooling and Pause together and say that less than +0.6 K/century is either a Pause or cooling. However, when you are done, the entire 95% confidence interval needs to meet your criteria.

Reply to  Werner Brozek
March 15, 2017 10:54 pm

Werner: I explained that you could choose any sensible range around 0 K/century as your definition of a Pause

I am not disagreeing with the different possibilities you present. I just choose to use Lord Monckton’s definition.

Reply to  Frank
March 16, 2017 3:50 am

Frank writes

The adjustments in Karl (2015) shortened that period.

The adjustments in Karl (2015) are very unconvincing to me. They are based on buoy data that is biased heavily to the coastlines around the US and some other coastlines around the world, with very few open-ocean measurements, especially in the Southern Hemisphere: nothing in the Indian Ocean or Southern Ocean, and only a very few buoys in the Arctic Ocean. It’s not representative of the earth’s oceans, and yet from that they adjusted SSTs by 0.05C per decade.

It was a paper that set out to find a warming trend and surprise, surprise they found it.

http://www.ndbc.noaa.gov/

barry
Reply to  Frank
March 17, 2017 6:19 am

Frank, a ‘pause’ suggests something going on before it. A pause from what? From warming, in this case. Put another way, it’s a change in the trend – from warming to not warming.

“Statistically meaningful”: A statistically meaningful change in linear trend occurs when the confidence intervals (uncertainty estimates) don’t overlap for the 2 periods. This was never the case for the 1998 ‘pause,’ but is the case for the mid-century flattish trend.

For example, UAHv6.0

1979 – 2017 trend + 95% confidence interval:
0.12 (+/- 0.06) C/decade
Range of uncertainty: 0.06 to 0.18 C/decade

1979 – 1998:
0.09 (+/- 0.16) C/decade
Range of uncertainty: -0.07 to 0.25 C/decade

1998 – 2017:
0.05 (+/- 0.18) C/decade
Range of uncertainty: -0.13 to 0.23 C/decade

The confidence intervals for each trend not only overlap each other, they all also overlap with the mean trends for each. The trends are therefore not statistically distinct.

An analogy is the debate about whether 2016 was the warmest year in the record. Many people pointed out that although 2016 was numerically highest rank, the uncertainty made it statistically indistinct from the top 2 or 3 years. So it is with trends in the last 30 years or so, short and long. They are not statistically different from each other. No statistically significant change in trend.

Whereas,

HadCRUt4

1900 – 2017:
0.08 (+/- 0.01) C/decade
Range of uncertainty: 0.07 to 0.09 C/decade

1900 – 1940:
0.09 (+/- 0.04) C/decade
Range of uncertainty: 0.05 to 0.13 C/decade

1940 – 1970:
-0.02 (+/- 0.05) C/decade
Range of uncertainty: -0.07 to 0.03 C/decade

The uncertainty range for the period 1940-1970 does not overlap with that of the previous warming period. It does not overlap with the uncertainty range for the whole record.

This is good statistical evidence of a change in trend: a pause.

Statistically significant changes in trend are hard to find with periods of less than 25 years or so in the instrumental record of global temperature.
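The overlap test described above can be reproduced in a few lines. This is a sketch rather than the exact method behind the quoted figures (real trend uncertainties also need an autocorrelation correction, which is omitted here); `ols_trend_ci` and `distinct` are names I’ve made up:

```python
import numpy as np

def ols_trend_ci(y, t_crit=1.96):
    """Return (slope, 95% half-width) per time step for a series, using
    the standard OLS slope standard error. Autocorrelation in real
    temperature data would widen this interval further."""
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (len(y) - 2) / ((x - x.mean()) ** 2).sum())
    return slope, t_crit * se

def distinct(a, b):
    """True if two (slope, half-width) intervals do NOT overlap, i.e.
    the trends are statistically distinct in the sense used above."""
    (m1, h1), (m2, h2) = a, b
    return (m1 + h1) < (m2 - h2) or (m2 + h2) < (m1 - h1)

# HadCRUt4 1900-1940 vs 1940-1970, using the figures quoted above:
print(distinct((0.09, 0.04), (-0.02, 0.05)))  # prints True
# UAHv6.0 1979-1998 vs 1998-2017:
print(distinct((0.09, 0.16), (0.05, 0.18)))   # prints False
```

Plugging in the numbers from the comment gives the same verdicts: the mid-century flattening is statistically distinct from the preceding warming, while the 1998 ‘pause’ trend is not distinct from the 1979-1998 trend.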

mark
March 14, 2017 12:51 pm

There is no pause in global warming.
Fake news.

Reply to  mark
March 14, 2017 1:27 pm

We sure did get a lot of explanations for something which never occurred.

Reply to  Menicholas
March 14, 2017 2:32 pm

We sure did get a lot of explanations for something which never occurred.

You can still find out where the pause used to be in the satellite data, but not the surface data sets that have been Karlized.

barry
Reply to  Menicholas
March 15, 2017 4:57 pm

The surface data sets haven’t been Karlised. The non-Karlised data is still available for use under the headings that don’t include his name at the various trend apps on the web. You can also find his version at most of those places.

barry
Reply to  Menicholas
March 16, 2017 5:42 pm

You can still find out where the pause used to be in the satellite data, but not the surface data sets that have been Karlized.

IPCC AR5 had the 1998 to 2012 mean trend at 0.05C /decade. HadCRUt4 still has that trend for the period. I think, but am not sure, that the IPCC tends to use HadCRUt4 for its general global temp trend figures.

The surface data sets tended to run warmer than satellite from 1998, which is why, presumably, the satellite data sets were the reference for pause talk. UAHv5.6 also ran warmer than RSSv3.3, closer to the surface data sets from 1998. RSS was the data set du jour for pause talk (which is why Monckton used it) until UAHv6 came along.

JohnWho
March 14, 2017 1:04 pm

First, I feel it is important to realize that so many recognize that there was a recent “pause” and pre-satellite data doesn’t eliminate the probability that there have been many pauses in either long term cooling or warming.

“Do you think RSS will ever have a pause of over 18 years again? Why or why not?”

Assuming RSS, or subsequent similar systems, remains operational long enough, there is no reason to assume that it will not record either shorter or longer pauses in either cooling or warming. Our relatively short (not even 40 years) data is too short to make long term assumptions. We won’t have hundreds of years of satellite data for, er, hundreds of years.

Reply to  JohnWho
March 14, 2017 1:26 pm

Our relatively short (not even 40 years) data is too short to make long term assumptions.

But keep in mind the above by Frank:

Elsewhere I read that 15 year trends less than zero occurred less than 5% of the time.

And we had 18 years and 9 months at one point.

Reply to  Werner Brozek
March 14, 2017 7:32 pm

And all while CO2 was rising more rapidly than ever.
Surely a relevant part of the story.

Frank
Reply to  Werner Brozek
March 14, 2017 11:08 pm

We have a significant difference between the warming at the surface and warming of the lower troposphere as a whole. As I understand it, either our understanding of lapse rate is wrong or one of the two records of warming (surface or satellite) is wrong. Modelers want to believe the satellite trend is wrong. If the satellite record is correct, models have been invalidated. After Karl (2015), the longest “pause” in surface warming is about 10 years, which is somewhat consistent with models.

bitchilly
Reply to  Werner Brozek
March 15, 2017 5:04 am

The element I find farcical is that climate science is basically saying the world will never see the global temperature drop again. History would suggest that is nonsense.

Frank
Reply to  Werner Brozek
March 15, 2017 10:58 pm

Werner: What you say here is somewhat reasonable. However, a zero or less trend without confidence interval is not being used to demonstrate absence of warming; it is being used to determine whether the observed Pause invalidates climate models. We have hundreds of years of climate model output to analyze for Pauses in the midst of AGW, but only a few decades of observations when GHGs are rising quickly (more than 1 ppm/yr).

If we observed a 15-year Pause, I would conclude that models EITHER produce too much warming OR too little unforced variability (noise).

ENSO is unforced variability. Between 1975 and 1995, we experienced an unusually large number of and stronger El Ninos, and strong warming (central estimate). Between 2001 and 2013, we experienced relatively more La Ninas and no warming (central estimate). A climate model that was incapable of producing realistically strong El Ninos and La Ninas might have the overall 1975-2013 trend correct, but it would be unlikely to produce a 13-year Pause.

ENSO causes changes in UAH/RSS twice as big as at the surface, so getting unforced variability correct matters even more for the satellite records. Pauses in the UAH/RSS record aren’t interchangeable with those at the surface.

Reply to  Werner Brozek
March 15, 2017 11:11 pm

Pauses in the UAH/RSS record aren’t interchangeable with those at the surface.

Fair enough! There are many legitimate ways of looking at things. Each has advantages and disadvantages. Lord Monckton and I choose to use a very simple straightforward way that is easily explained and that serves our purposes.

barry
Reply to  JohnWho
March 15, 2017 4:59 pm

The element I find farcical is that climate science is basically saying the world will never see the global temperature drop again

I don’t think this is accurate. Year-to-year variation is certainly expected to bring warm years followed by cool years, but as you haven’t specified a time-frame it’s not clear what you mean.

MarkY
March 14, 2017 1:07 pm

Thank you Werner. I just would like to know when I can buy that northern England land for grape growing, or buy the beef herd to graze on Greenland again.
This whole exercise is in response to bad (junk) science. We are arguing the wrong things. We have accepted the challenge.
It was once commonplace for a “gentleman” to avoid the challenge of a mann (intentional sic) who was not a gentleman. One did not fight an inferior. One did not even consider the challenge from an inferior.
The basic premise is flawed. That has been shown time and time again. It has been shown by statistics, by graphs, by physics. It has even been shown by engineers (who, by their own reckoning are the final arbiters of good science).
I believe it’s time for “our side” to shut up.
It’s time for the warmistas to put up.
The science is indeed settled. Man has not shown the ability to predict long range climate… nor even long-range weather.

Reply to  MarkY
March 14, 2017 1:37 pm

I believe it’s time for “our side” to shut up.

Not yet! We in Canada do not have a Trump!

Reply to  MarkY
March 15, 2017 5:35 am

MarkY March 14, 2017 at 1:07 pm
Thank you Werner. I just would like to know when I can buy that northern England land for grape growing,

You could buy it now if you wished, you’d be a bit late though.
The one I’ve linked to below has been in business since 1985.
http://www.leventhorpevineyard.co.uk

Robber
March 14, 2017 1:47 pm

Be very afraid my friends. A warming trend of 0.053/decade, or 0.0053/year.
It gets me hot just thinking about this catastrophic global warming. /sarc

John M. Ware
Reply to  Robber
March 14, 2017 4:36 pm

I would like to know how much the temperature needs to change to be detectable by our senses. My guess is that if you have two neighboring rooms a degree apart (F or C), most people would not be able to tell which was warmer. The idea that one can actually feel a difference of a twentieth of a degree is laughable.

I drove to the library and then the grocery store today. When I left the house, it was 32 degrees F, and the trees were all silvery from the ice storm that dumped 1.82″ of rain into my gauge. From here to the library is about three miles; about halfway there I noticed far less ice on the trees, and at the library the ice was gone. The library is at the center of town, with the big US highway and the business district; the temp felt the same, but it must have been a degree or two warmer. By the time I got home, the ice on the trees, while mostly still there, had lessened detectably; after another two hours, it was gone. Again, the temperature at home still felt the same. Yet, it must have been at least a degree or two higher at the library; but I couldn’t feel the difference, even though, from the evidence of the ice, I knew it was there. How about that?

Reply to  John M. Ware
March 14, 2017 7:33 pm

Most household thermostats will only respond to temp changes of about two degrees I think.

wyzelli
Reply to  Robber
March 14, 2017 10:03 pm

I think according to the atmospheric lapse rate (~6.5C/km) the air around your head should be around 0.011C cooler than the air around your feet … 😀
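For what it’s worth, the figure checks out as simple arithmetic, assuming a typical environmental lapse rate of about 6.5 C/km and roughly 1.7 m from feet to head:

```python
lapse_rate_c_per_km = 6.5           # typical environmental lapse rate
height_m = 1.7                      # feet-to-head distance, roughly
delta_t = lapse_rate_c_per_km / 1000.0 * height_m
print(round(delta_t, 3))            # prints 0.011
```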

Richard M
March 14, 2017 1:48 pm

I’ll repeat my own definition of a valid trend. Use only ENSO neutral months preferably with a 3-4 month lag. When I looked at this, the trend was less than .01 C/decade since 1997. I think the pause is ongoing as far as a real warming signal is concerned.

Given that AMO driven Arctic warming will push this up during winter months, it is amazing the trend is this low. We will see come June-September where we really are. That is when the Arctic has the smallest impact.

I contemplated an idea a while back that the millennial cycle (i.e. the cause of the Minoan, Roman, Medieval and modern warm periods) is due to the relative timing of the AMO and PDO. The thought is that when the two line up so that the +AMO follows directly after the +PDO, warming occurs, and just the opposite when the AMO leads the PDO.

Reply to  Richard M
March 14, 2017 1:58 pm

When I looked at this, the trend was less than .01 C/decade since 1997.

But even if you include ENSO and get 0.053 /decade since 1998, that is still nothing to be concerned about.

March 14, 2017 1:52 pm

Werner, I don’t understand why you discuss pauses beginning in January 1998.
As I recall, Monckton’s method was to start in the present and look backward.

The very lowest slope from before 1999 is actually from December 1997. However, the chances are good that if we ever get a negative slope from December 1997, the slope will also be negative for several months around that time, naturally also including from January 1998, which is a convenient date to work from for now.
And yes Monckton did what you said.

What does Monckton’s method say now?

RSS: The negative string goes back to May 2015, unchanged from last month.

UAH: The negative string goes back to August 2015. This is also unchanged from last month.

Talking about pauses this short just invites ridicule, which is why I do not normally mention it.
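The backward-looking method can be expressed as a short search. This is a sketch under the assumption that the criterion is simply "OLS slope ≤ 0 through the latest month"; `pause_start` is a made-up name, and the toy series below stands in for the real RSS monthly anomalies:

```python
import numpy as np

def pause_start(anoms):
    """Monckton-style search: scan candidate start months from the most
    recent backwards and return the earliest index whose OLS trend
    through the end of the series is still <= 0 -- i.e. the longest
    'pause' ending in the present. Returns None if none exists."""
    n = len(anoms)
    earliest = None
    for start in range(n - 2, -1, -1):
        x = np.arange(n - start, dtype=float)
        slope = np.polyfit(x, anoms[start:], 1)[0]
        if slope <= 0:
            earliest = start
    return earliest

# Toy series: a step up followed by gentle cooling
series = np.array([0.0, 0.1, 0.2, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30, 0.29])
print(pause_start(series))  # prints 3
```

Note the start date is an output of the search, not an input, which is the sense in which "today" is the only non-cherry-picked date.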

Reply to  Werner Brozek
March 14, 2017 2:41 pm

As your article discusses alternative scenarios, what would the figures need to do this year to get say a 10 year pause?

Perhaps Barry will weigh in here. But my sense is that, due to the way the numbers will work out, if you ever do get a 10 year pause, you would be hard pressed not to also get both a 16 year and a 20 year pause at the same time.

JohnWho
Reply to  Werner Brozek
March 14, 2017 6:28 pm

Yes, short “pauses” are as ridiculous as the many short cooling periods we’ve had during this overall warming since the end of the LIA.

barry
Reply to  Werner Brozek
March 15, 2017 5:04 pm

As your article discusses alternative scenarios, what would the figures need to do this year to get say a 10 year pause?

I’d guess you’d need even cooler temps in the coming months. Starting in 2007 and ending Dec 2016, the 10-yr trend is 0.17 C/decade, much higher than since Jan 1998.

If you’re interested in a more exacting reply I’ll do the calcs.

Reply to  Werner Brozek
March 15, 2017 10:27 pm

As your article discusses alternative scenarios, what would the figures need to do this year to get say a 10 year pause?
I’d guess you’d need even cooler temps in the coming months. Starting in 2007 and ending Dec 2016, the 10-yr trend is 0.17 C/decade, much higher than since Jan 1998.
If you’re interested in a more exacting reply I’ll do the calcs.

Due to the huge La Nina in 2008, a 9 year pause from January 2009 to December 2017 might be closer to being in reach.

Bob Weber
March 14, 2017 1:55 pm

“Those predicting imminent cooling from lower solar ebb or ocean-atmosphere oscillations may expect to see annual temperatures like the early 1990s sometime soon. I am less confident of that. ”

Funny that you should say that, while at the same time apparently not recognizing that the three instances of three-year cooling periods you cited were during three solar minimums!

SC22 solar minimum
“We have 2 months data already, at around 0.5C warmer than that, so what would the average temperature anomaly for the rest of 2017 have to be to get a flat/negative trend since 1998? -0.26C (Mar-Dec)

The most recent year the annual average anomaly was that cool was in 1985. The annual average then was -0.35C. ”

SC23 solar minimum
“The last time a 3 year average was that cool or cooler was 1992 through 1994 (-0.09)” [w/Pinatubo]
“So we have to go further back in time to get a cooler 3-year average. Most recent is: 1994 to 1996: 0.0C”

SC24 solar minimum
“When did we last have 3 consecutive years as cool or cooler than that?
2007 to 2009: 0.05C”

So, what is your reason for betting against this solar minimum producing a temp drop-off?

Will it be a roller coaster to the minimum, like it was during SC20, or a big drop-off right at the end of the cycle as in 2007/8?

http://climate4you.com/images/SunspotsMonthlySIDC%20and%20HadSST3%20GlobalMonthlyTempSince1960%20WithSunspotPeriodNumber.gif

barry
Reply to  Bob Weber
March 17, 2017 7:29 am

So, what is your reason for betting against this solar minimum producing a temp drop-off?

It may produce a drop-off, but it is unlikely to reach the cool temps of the early 90s. All but one of the 3-yr average temps at solar minimum in your graph were warmer than the one before, and they have been sequentially warmer since 1975-77.

March 14, 2017 1:58 pm

“Do you think RSS will ever have a pause of over 18 years again?”

Likely not

“Why or why not?”

Climate is generally not stable very long. It is usually either in a cooling or a warming trend. I believe the probability that it remains flat for that long a period is low. More likely climate will change to either a warming or cooling regime.

Nick Stokes
March 14, 2017 2:01 pm

“I’ve plotted monthly data and the trend to Nov 2016, and you can see the Dec 2016 anomaly is below the trend line.”
That’s the basic arithmetic. New readings above the line pull it up; below they pull it down. You can think of it as a seesaw, with the pivot at about 0.26°C. It was balanced in Jan 2016 at that level. Everything since has been pulling it up, and the cumulative effect is shown by the area under the curve relative to the 0.26 base. To get the pause back, you need an equivalent area below 0.26. Slightly less, because the more recent points have more leverage. But that is a minor effect in a 20 year period.

So December was enough to slightly pull back the present high trend. But it would pull the other way if the trend did drop to zero.
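The seesaw behaviour Nick describes can be verified directly: append one new point above, on, or below the extrapolated trend line and refit. A toy demonstration with synthetic numbers (not RSS data):

```python
import numpy as np

x = np.arange(24, dtype=float)
y = 0.01 * x + 0.2            # a perfectly linear 24-month record
base_slope = np.polyfit(x, y, 1)[0]

def slope_with_new_point(offset):
    """Refit with one extra month whose anomaly sits `offset` above (+)
    or below (-) the extrapolated trend line."""
    x2 = np.append(x, 24.0)
    y2 = np.append(y, 0.01 * 24.0 + 0.2 + offset)
    return np.polyfit(x2, y2, 1)[0]

print(slope_with_new_point(+0.1) > base_slope)  # above the line pulls it up: True
print(slope_with_new_point(-0.1) < base_slope)  # below pulls it down: True
```

Nick’s leverage point corresponds to the (x − x̄) weighting in the OLS slope formula: the same departure from the line moves the trend more the further it lands from the record’s midpoint, which is why recent months matter slightly more over a long period.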

barry
Reply to  Nick Stokes
March 15, 2017 5:06 pm

Yes. I wanted to explain how the trend gets pulled about for readers who know even less than I.

edwardt
March 14, 2017 2:25 pm

Something to watch for is the potential divergence between the troposphere and the AMO. Seems like the heat from this El Nino discharged to space without having the usual accompanying impact on the AMO. Rather unusual … thinking it’s time for the AMO to head negative.

http://woodfortrees.org/plot/esrl-amo/from:1977/plot/uah5/from:1977/offset:-0.0

bitchilly
Reply to  edwardt
March 15, 2017 5:10 am

Peak AMO? Surface temps at a level where additional atmospheric warming is not possible?

Richard M
Reply to  edwardt
March 15, 2017 7:35 am

The AMO turned positive around 1995. For a full 30 year warm phase we would have to wait until 2025 or so for it to go negative. Now, Pinatubo could have masked a slightly earlier start date but I seriously doubt we will see it go strongly negative for at least another 7-8 years and maybe a little longer.

Keep in mind that we have reached the peak so we are now on the downward half of the positive phase. The problem is the effect on the Arctic sea ice is cumulative which means we won’t necessarily see any Arctic cooling for several more years. Of course, with the coming solar minimum we could see other factors come into play.

Amber
March 14, 2017 2:40 pm

Always found it fascinating that the comedian Bill Nye could so confidently bet $ on warming when changes are in fractions of one degree and statistically within a band of error. The guy is obviously a genius, or maybe he just knew NASA was a safe bet when producing temperature averages for the entire earth and Obama was making his global warming fear-mongering claims.
Don’t hear Bill Nye betting now . Gee I wonder why ?

1 2 3