Satellite Records and Slopes Since 1998 Are Not Statistically Significant (Now Includes November and December Data)

Guest Post by Werner Brozek, Edited by Just The Facts

[Graph: WoodForTrees.org – Paul Clark]

As can be seen from the above graphic, the slope is positive from January 1998 to December 2016. However, with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998. The high and low slope lines reflect the margin of error at the 95% confidence limits. If my math is correct, there is about a 30% chance that cooling has taken place since 1998 and about a 70% chance that warming has taken place. The 95% confidence limits for both UAH6.0beta5 and RSS are very similar. Here are the relevant numbers from Nick Stokes’ Trendviewer site for both UAH and RSS:

For RSS:

Temperature Anomaly trend

Jan 1998 to Dec 2016

Rate: 0.450°C/Century;

CI from -0.750 to 1.649;

t-statistic 0.735;

Temp range 0.230°C to 0.315°C

For UAH:

Temperature Anomaly trend

Jan 1998 to Dec 2016

Rate: 0.476°C/Century;

CI from -0.813 to 1.765;

t-statistic 0.724;

Temp range 0.113°C to 0.203°C
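
As a sanity check on the 30/70 split, here is a minimal Python sketch (my own illustration, not Trendviewer's code) that converts a trend t-statistic into a one-sided probability that the underlying slope is negative. The degrees-of-freedom value is an assumption: Nick's calculator adjusts for autocorrelation, so the effective value is far below the raw 226 (228 months minus 2), and the exact percentage moves with it.

```python
from scipy import stats

def prob_cooling(t_stat, dof):
    """One-sided probability that the true slope is negative,
    given a trend's t-statistic and effective degrees of freedom."""
    return stats.t.cdf(-t_stat, df=dof)

# t-statistics quoted above, Jan 1998 to Dec 2016;
# dof = 30 is a placeholder for an autocorrelation-reduced sample size
for name, t in (("RSS", 0.735), ("UAH", 0.724)):
    for dof in (226, 30):
        print(f"{name}, dof={dof}: P(cooling) ~ {prob_cooling(t, dof):.0%}")
```

This simple conversion gives values in the low 20s of percent rather than exactly 30%; either way, the qualitative conclusion is the same: warming since 1998 is far from significant at the 95% level.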

If you wish to see where warming first becomes statistically significant, see Section 1 below. In addition to the slopes showing statistically insignificant warming, the new records for 2016 over 1998 are also statistically insignificant for both satellite data sets.

In 2016, RSS beat 1998 by 0.573 – 0.550 = 0.023 or by 0.02 to the nearest 1/100 of a degree. Since this is less than the error margin of 0.1 C, we can say that 2016 and 1998 are statistically tied for first place. However there is still over a 50% chance that 2016 did indeed set a record, but the probability for that is far less than 95% that climate science requires so the 2016 record is statistically insignificant.

If anyone has an exact percentage here, please let us know. However, it should be around a 60% chance that a record was indeed set for RSS. In 2016, UAH6.0beta5 beat 1998 by 0.505 – 0.484 = 0.021, or also by 0.02 to the nearest 1/100 of a degree. What was said above for RSS applies here as well. My predictions after the June data came in were therefore not correct, as I expected 2016 to come in under 1998.
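
One hedged way to put a number on that "around 60%": treat each annual mean as carrying an independent, roughly normal 95% margin of error of ±0.1 C (the figure used above), so the difference between two years has a standard deviation of √2 × 0.1/1.96 ≈ 0.072 C. A sketch under those assumptions:

```python
import math
from scipy import stats

def prob_record(new, old, margin95=0.1):
    """P(the new year really was warmer), assuming each annual anomaly
    carries an independent 95% margin of error of +/- margin95 deg C."""
    sigma_each = margin95 / 1.96            # one year's standard error
    sigma_diff = math.sqrt(2) * sigma_each  # std deviation of the difference
    return stats.norm.cdf((new - old) / sigma_diff)

print(f"RSS: {prob_record(0.573, 0.550):.0%}")  # ~62%
print(f"UAH: {prob_record(0.505, 0.484):.0%}")  # ~61%
```

That lands close to the ~60% suggested above. If the ±0.1 C is instead read as the margin on the difference itself, the probabilities rise to roughly 67% and 66%.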

What about GISS, HadSST3 and HadCRUT4.5? The December numbers are not in yet, but GISS will set a statistically significant record for 2016 over its previous record of 2015, since the new average will be more than 0.1 above the 2015 mark. HadSST3 will set a new record in 2016, but only by a few hundredths of a degree, so it will not be statistically significant. HadCRUT4.5 is still up in the air. The present average after 11 months is 0.790. The 2015 average was 0.760. As a result, December needs to come in at 0.438 to tie 2015. The November anomaly was 0.524, so only a further drop of 0.086 is required. This cannot be ruled out, especially since Nick’s site shows December 0.089 lower than November.
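
The required December figure is straightforward averaging; here is a quick check with the rounded numbers quoted above (the 0.438 in the text presumably comes from unrounded monthly data):

```python
# December anomaly needed for HadCRUT4.5's 2016 average to tie 2015
avg_jan_to_nov = 0.790   # 11-month mean so far (rounded)
avg_2015       = 0.760   # full-year 2015 mean

dec_needed = avg_2015 * 12 - avg_jan_to_nov * 11
print(f"December must come in at {dec_needed:.3f} C or lower to tie 2015")
print(f"Required drop from November (0.524): {0.524 - dec_needed:.3f} C")
```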

Also worth noting is that UAH dropped by 0.209 from November to December and RSS dropped by 0.162. Whatever happens with HadCRUT4.5, 2016 and 2015 will be in a statistical tie, with a possible difference in the thousandths of a degree. The difference will be more important from a psychological perspective than a scientific one, as it will be well within the margin of error.

In the sections below, we will present you with the latest facts. The information will be presented in two sections and an appendix. The first section will show for how long there has been no statistically significant warming on several data sets. The second section will show how 2016 compares with 2015 and with the warmest years and months on record. For three of the data sets, 2015 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data. Only the satellite data go to December.

Section 1

For this analysis, data was retrieved from Nick Stokes’ Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. CI stands for the confidence interval at the 95% level.

The details for several sets are below.

For UAH6.0: Since November 1993: CI from -0.009 to 1.784

This is 23 years and 2 months.

For RSS: Since July 1994: CI from -0.005 to 1.768. This is 22 years and 6 months.

For Hadcrut4.5: The warming is statistically significant for all periods above four years.

For Hadsst3: Since March 1997: CI from -0.003 to 2.102. This is 19 years and 9 months.

For GISS: The warming is statistically significant for all periods above three years.
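
The scan behind these dates can be sketched as follows. This is plain OLS with a textbook 95% test, not Nick's exact criteria (his calculator also adjusts for autocorrelation, which widens the intervals), so it will report somewhat different start months. Here anoms would be a monthly anomaly list read from one of the data links in Section 2.

```python
import numpy as np
from scipy import stats

def first_non_significant(anoms, start_year=1979):
    """Earliest start month from which the OLS trend to the end of the
    series is not significant at 95% (lower confidence limit below 0)."""
    n = len(anoms)
    for s in range(n - 36):                    # require at least 3 years of data
        y = np.asarray(anoms[s:])
        fit = stats.linregress(np.arange(len(y)), y)
        t95 = stats.t.ppf(0.975, len(y) - 2)   # two-sided 95% critical value
        if fit.slope - t95 * fit.stderr < 0:   # zero slope not ruled out
            yr, mo = divmod(s, 12)
            return f"{start_year + yr}-{mo + 1:02d}"
    return None
```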

Section 2

This section shows data about 2016 and other information in the form of a table. The five data sources are listed along the top and repeated partway down and at the bottom, so they remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the columns are the following rows:

1. 15ra: This is the final ranking for 2015 on each data set.

2. 15a: Here I give the average anomaly for 2015.

3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly prior to 2016. The months are identified by the first three letters of the month and the last two numbers of the year.

6. ano: This is the anomaly of the month just above.

7. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

8. sy/m: This is the number of years and months elapsed since the month in row 7.

9. Jan: This is the January 2016 anomaly for that particular data set.

10. Feb: This is the February 2016 anomaly for that particular data set, etc.

21. ave: This is the average anomaly of all months to date.

22. rnk: This is the rank that each particular data set would have for 2016, without regard to error bars and assuming no changes to the current average anomaly. Think of it as an update 55 minutes into a game. However, the satellite data are complete for the year.

Source    UAH      RSS      Had4     Sst3     GISS
1.15ra    3rd      3rd      1st      1st      1st
2.15a     0.261    0.381    0.760    0.592    0.86
3.year    1998     1998     2015     2015     2015
4.ano     0.484    0.550    0.760    0.592    0.86
5.mon     Apr98    Apr98    Dec15    Sep15    Dec15
6.ano     0.743    0.857    1.024    0.725    1.11
7.sig     Nov93    Jul94             Mar97
8.sy/m    23/2     22/6              19/9
Source    UAH      RSS      Had4     Sst3     GISS
9.Jan     0.539    0.681    0.906    0.732    1.15
10.Feb    0.831    0.994    1.068    0.611    1.33
11.Mar    0.732    0.871    1.069    0.690    1.29
12.Apr    0.713    0.784    0.915    0.654    1.08
13.May    0.544    0.542    0.688    0.595    0.93
14.Jun    0.337    0.485    0.731    0.622    0.75
15.Jul    0.388    0.491    0.728    0.670    0.83
16.Aug    0.434    0.471    0.770    0.654    0.98
17.Sep    0.440    0.581    0.710    0.606    0.90
18.Oct    0.407    0.355    0.586    0.601    0.88
19.Nov    0.452    0.391    0.524    0.488    0.95
20.Dec    0.243    0.229
21.ave    0.505    0.573    0.790    0.629    1.01
22.rnk    1st      1st      1st      1st      1st
Source    UAH      RSS      Had4     Sst3     GISS
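
Row 21 follows directly from the monthly values above; the "ave" entry is just the mean of the listed months. A quick check, using the table's rounded values (GISS and Sst3 omitted for brevity):

```python
# Reproduce row "21.ave" from the monthly anomalies in the table
months_2016 = {
    "UAH":  [0.539, 0.831, 0.732, 0.713, 0.544, 0.337,
             0.388, 0.434, 0.440, 0.407, 0.452, 0.243],
    "RSS":  [0.681, 0.994, 0.871, 0.784, 0.542, 0.485,
             0.491, 0.471, 0.581, 0.355, 0.391, 0.229],
    "Had4": [0.906, 1.068, 1.069, 0.915, 0.688, 0.731,
             0.728, 0.770, 0.710, 0.586, 0.524],          # Jan-Nov only
}
for name, vals in months_2016.items():
    print(f"{name}: {sum(vals) / len(vals):.3f}")   # 0.505, 0.573, 0.790
```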

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 6.0beta5 was used.

http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltglhmam_6.0.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.5.0.0.monthly_ns_avg.txt

For Hadsst3, see: https://crudata.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2016 in the form of a graph, see the WFT graph below.

[Graph: WoodForTrees.org – Paul Clark]

As you can see, all lines have been offset so they all start at the same place in January 2016. This makes it easy to compare January 2016 with the latest anomaly.

The thick double line is the WTI, which shows the average of RSS, UAH6.0beta5, HadCRUT4.5 and GISS.

Appendix

In this part, we are summarizing data for each set separately.

UAH6.0beta5

For UAH: There is no statistically significant warming since November 1993: CI from -0.009 to 1.784. (This is using version 6.0 according to Nick’s program.)

The UAH average anomaly for 2016 is 0.505. This sets a new record. 1998 was previously the warmest at 0.484. Prior to 2016, the highest ever monthly anomaly was in April of 1998 when it reached 0.743. The average anomaly in 2015 was 0.261 and it was ranked third but will now be in fourth place.

RSS

Presently, for RSS: There is no statistically significant warming since July 1994: CI from -0.005 to 1.768.

The RSS average anomaly for 2016 is 0.573. This sets a new record. 1998 was previously the warmest at 0.550. Prior to 2016, the highest ever monthly anomaly was in April of 1998 when it reached 0.857. The average anomaly in 2015 was 0.381 and it was ranked third but will now be in fourth place.

Hadcrut4.5

For Hadcrut4.5: The warming is significant for all periods above four years.

The Hadcrut4.5 average anomaly so far is 0.790. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in December of 2015 when it reached 1.024. The average anomaly in 2015 was 0.760 and this set a new record.

Hadsst3

For Hadsst3: There is no statistically significant warming since March 1997: CI from -0.003 to 2.102.

The Hadsst3 average anomaly so far for 2016 is 0.629. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in September of 2015 when it reached 0.725. The average anomaly in 2015 was 0.592 and this set a new record.

GISS

For GISS: The warming is significant for all periods above three years.

The GISS average anomaly so far for 2016 is 1.01. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in December of 2015 when it reached 1.11. The average anomaly in 2015 was 0.86 and it set a new record.

Conclusion

Does it seem odd that only GISS will probably set a statistically significant record in 2016?

Comments
January 12, 2017 7:17 am

I am 95% certain that my body temperature has not increased by one degree F.
I have no clue whether it has increased by 0.5 F., if this gives you a clue about how much this worries me. (^_^)

Reply to  Robert Kernodle
January 12, 2017 7:23 am

I am 95% certain that my body temperature has not increased by one degree F.

From: http://www.webmd.com/first-aid/body-temperature
 “Also, your normal temperature changes by as much as 1°F (0.6°C) during the day, depending on how active you are and the time of day.” 

Reply to  Werner Brozek
January 12, 2017 11:48 am

Thanks, Werner B, but I’m still pretty confident.

Menicholas
Reply to  Werner Brozek
January 12, 2017 2:09 pm

It is normal for the body temp to fall even more during sleep…generally about two degrees.
http://www.sleepdex.org/thermoregulation.htm

Darrell Demick
Reply to  Robert Kernodle
January 12, 2017 1:03 pm

Aaaaaaand, Gentlemen, ….. , 98.6 F or 37 C is a statistical average for body temperature. Heck, even at the microscopic scale of a human being (microscopic compared to the atmosphere and all of the external factors – think up to and including galaxy – that can impact the climate of the atmosphere) we have to work in, at best, statistical averages.
Yup, need billions (perhaps trillions) more to fine tune that simulation model so that we can carry on the fight for CAGW.
Arrrrrrgh!!!!!!!!
Agree, no concern with respect to body temperature. And agree, physical exertion will alter body temperature.

Michael Jankowski
Reply to  Robert Kernodle
January 12, 2017 5:01 pm

At what body temperature does blood boil?

Reply to  Michael Jankowski
January 12, 2017 5:12 pm

At what body temperature does blood boil?

Figuratively or literally? Since blood is mostly water, it would be close to 100 C at 1 atm.

erastvandoren
Reply to  Michael Jankowski
January 13, 2017 9:20 am

If you run a mile, your temperature goes up by 2 degrees. My model shows you will be boiling if you run 10 miles (due to positive feedbacks, not just after 56 miles as a linear extrapolation would suggest). Send me a check, so I can figure out how to save your blood from boiling!

Macha
Reply to  Robert Kernodle
January 13, 2017 4:17 am

So southern hemisphere only warmer in winter for decades..
But not summer. [chart]
And even then, only at night. [chart]
Hence…nothing to do with CO2…of any origin.

January 12, 2017 7:40 am

It may interest you to know that according to Nick’s site mentioned above, the average anomaly for the first 10 days in January 2017 is lower than any monthly average since August 2015. Of course, things can easily change before the end of the month.

AndyG55
Reply to  Werner Brozek
January 12, 2017 6:21 pm

In RSS, a zero trend exists from 1887 to just before the 2015/16 El Nino. (Green)
The transient of that El Nino (Blue) has now decayed to just below that trend line. [chart]
UAH’s zero trend line was slightly lower, so it remains above, but should drop below its 20-year zero trend in either January or February.

AndyG55
Reply to  AndyG55
January 12, 2017 6:22 pm

typo…. First date is obviously 1997… doh !

CheshireRed
January 12, 2017 7:45 am

Do these GISS figures seem odd? Hmmm. I know what I suspect but maybe try asking Gavin! Good luck with that, Werner. 🙂

Reply to  CheshireRed
January 12, 2017 8:22 am

It appears that the main reason that GISS is so much higher for 2016 than 2015 as compared to Hadcrut4.5 and even the Japan data set is due to much warmer polar temperatures in 2016, which GISS allegedly covers much better. In my opinion, the GISS numbers are more proof for polar warming than global warming.

Matt Bergin
Reply to  Werner Brozek
January 12, 2017 8:43 am

Werner since there only seems to be three thermometers in the arctic that GISS use I don’t think there results mean anything. That is a lot of area for only three measurement sites.

Matt Bergin
Reply to  Werner Brozek
January 12, 2017 8:45 am

Sorry in the last message there should read their

Reply to  Werner Brozek
January 12, 2017 10:23 am

Werner since there only seems to be three thermometers in the arctic that GISS use

I do not know the number of thermometers, but the satellite data show lots of polar warming recently. Check out NoPol and SoPol here:
http://www.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta5.txt

Michael Jankowski
Reply to  Werner Brozek
January 12, 2017 5:02 pm

Werner, Mosh will tell you that the South Pole should be cooling for decades due to global warming.

Reply to  Werner Brozek
January 12, 2017 5:14 pm

Werner, Mosh will tell you that the South Pole should be cooling for decades due to global warming.

Is that because hot air rises?

catweazle666
Reply to  Werner Brozek
January 14, 2017 12:32 pm

“That is a lot of area for only three measurement sites.”
But lots of scope for infilling – er – kriging – er – making stuff up?

observa
Reply to  Werner Brozek
January 17, 2017 5:32 am

“I do not know the number of thermometers, but the satellite data show lots of polar warming recently.”
Just splice them together silly. Don’t you know anything about climate science yet?

Joe Crawford
Reply to  CheshireRed
January 13, 2017 9:03 am

“Is that because hot air rises?”
Of course it does, Werner. Just like the Shenandoah River in Virginia runs uphill.

Reply to  Joe Crawford
January 13, 2017 9:26 am

And the Nile ☺

JasG
January 12, 2017 8:08 am

I’m confident that 100% (or 0%) of this non-warming was caused by CO2. Cue the pause deniers…

January 12, 2017 8:12 am

All those numbers are irrelevant. The only number that matters right now is what is the temperature. 0.2 C above normal. Without explaining where or how the heat comes and goes, a trend is meaningless. According to AGW theory the heat gets retained. There is nowhere for the heat to go. If the temperature was 1.0 C, how has global temperature fallen 0.8 C in such a short period of time?
If the warming falls to 0.00 C or lower, it proves that a combination of external or internal factors that have not been accounted for is behind the dramatic shifts in the retained atmospheric heat. I submit that it already has, on at least 3 different occasions since 1998.
CO2 as a control knob of temperature? It isn’t. What, the temperatures are going to have to be adjusted again? The thermometers just aren’t recording the temperature properly? According to me, the CO2 ppm/v increase for this year should decrease along with the temperature. Let’s see if that happens. They will probably adjust that too. It’s a difficult thing to explain: if last year (2016) the increase was 8.0 ppm/v, then if it’s cooler in 2017, I expect that the increase will be less than 8.0 ppm/v (or whatever it was) for this year. Unless of course the IPCC and associates proclaim an overall reduction in the production of CO2…. as if that will actually happen. Oh, what the heck, it doesn’t matter, it’ll be an 8.2 ppm/v increase. Somewhat like 2005: the CO2 increase stood for years at 2.52 ppm/v, and then by magic, 3.10 ppm/v. That 0.58 ppm/v looks like a small number until you realize how much anthropogenic CO2 is needed to achieve it (again, if the increase is actually the result of anthropogenic CO2). It also helps them explain where the unaccounted-for CO2 went. Well, it had to have ended up in the atmosphere; we can’t say we don’t know. The amount previously (before the magic numbers) unaccounted for is enormous. There are currently still huge amounts missing in the numbers, but I’m sure NOAA is working tirelessly to correct that. (sarc at NOAA)

richard verney
Reply to  rishrac
January 12, 2017 8:19 am

It is quite clear from the satellite record that natural variability is large. On a short-term basis, it is more than 1 degC. This is no doubt one reason why the signal from CO2, if any at all, cannot be isolated and eked out from natural variability and the inherent deficiencies/shortcomings (including sensitivity and errors) of our best measuring equipment.

Ross King
Reply to  rishrac
January 12, 2017 8:36 am

Rishrac says above:
“The only number that matters right now is what is the temperature. 0.2 C above normal.”
Please define “normal”.

Reply to  Ross King
January 12, 2017 8:49 am

“The only number that matters right now is what is the temperature. 0.2 C above normal.”
Please define “normal”.

The present anomaly for December was 0.229 above its December average with respect to the baseline years that RSS chooses to use. Of course it is not “normal” in the same sense that we have a “normal” body temperature when we do not have a fever.

Reply to  Ross King
January 12, 2017 1:48 pm

On the left hand side of the vertical axis are numbers. It’s the one designated as zero that I’m referencing. The temperature up or down could be in absolute terms; it doesn’t matter, as the global temperature has fallen.
Below are attempts to explain short term variations by “the heat hiding in the ocean”. Where did the oceans warm up 0.8 C? It should be more than 0.8 C, shouldn’t it? While the global temperature measures the entire earth, oceans only comprise 7/10ths. Ok, there are variations in temperature. So some areas are the same, some cooler, and using CAGW’s line that it is the average, where is the ocean water that is that much warmer? We just have el nino after el nino, on and on? From one peak el nino to the next was 18 years. That’s not the premise of AGW. I’m pretty sure that when the “heat was hiding in the ocean”, they went looking for it and didn’t find it. It’s like the tropical hotspot. Doesn’t exist. CO2’s signal is background noise at best.
I do think that there has been a slight warming trend. Who knows? I also think it isn’t due to CO2. However there is a strong possibility that nothing has happened. If the zero reference point is moved up 2 tenths, there has been no warming at all. The adjustments that NASA/NOAA have made could well cover 2/10ths. We could just as well be in a LIA and still get a similar looking graph. The central point is, where is the heat now? On the one hand AGW says it took 100 years to get 1.0 C of warming, and on the other 0.8 C of cooling in 6 to 7 months. How can the atmosphere lose that much heat? And considering the large increase in atmospheric CO2. Aren’t we familiar with the lab experiments detailing how CO2 retains heat? It gave up the heat because…… ?? The nature of CO2 changed?
Minor variations in global temperature are understandable; large ones like this are not.

Reply to  Ross King
January 12, 2017 3:27 pm

On the left hand side of the vertical axis are numbers. It’s the one designated as zero that I’m referencing.

That (0) is the average value of their baseline, which runs from 1979 to the end of 1998.

Where did the oceans warm up 0.8 C? It should be more than 0.8 C, shouldn’t it?

I am not sure what you are referring to here, but during an El Nino, the Nino 3.4 region warms up by up to 3 C, and this affects the whole ocean after a while.

How can the atmosphere lose that much heat?

When less of the ocean surface is warm, the atmosphere feels it. This can be done by winds causing warm water to pile up in one place or a 1 or 2% increase in cloud cover.

If the zero reference point is moved up 2 tenths, there has been no warming at all.

Different reference periods are one cause of different anomalies.

Reply to  rishrac
January 12, 2017 8:43 am

If the temperature was 1.0 C, how has global temperature fallen 0.8 C in such a short period of time?

Heat that was deep in the western Pacific Ocean, with little area exposed to the atmosphere, was spread out with a much larger area exposed to the air. So the temperature spiked. As this large area lost heat, temperatures dropped.
As an analogy, suppose a mile-long cylindrical piece of iron at 1000 C is stuck in the ground in the middle of a city with only 1 square metre exposed to the air. What would happen if this mile-long iron at 1000 C were pulled out and placed in a horizontal position? The temperature in the city would briefly spike until the bar cooled off due to natural processes.

john harmsworth
Reply to  Werner Brozek
January 12, 2017 11:02 am

Maybe, but if this heat was deep in the Pacific it must have been there for a long time. What we are really talking about is that Pacific surface water was replaced by water from below that was a degree or so warmer. So it had to have been a relic of an earlier warm period.
I have no problem understanding how this could be, but extensive knowledge of the Earth’s reservoirs of heat does not exist. In substitution we have flaky suppositions of tree ring differentials of less than a millimeter. It is pretend science for the benefit of the practitioners and politicians.

Reply to  Werner Brozek
January 12, 2017 12:24 pm

Maybe, but if this heat was deep in the Pacific it must have been there for a long time.

Bob Tisdale is the expert here. But my understanding is that it is not deep; rather, it extends down from the surface perhaps a few hundred metres and is about 3 C above normal. It is due to winds blowing warmer water west, causing a slight piling up of that warmer water. Then, when the winds die down over a period of a few years, the warmer water sloshes back to the middle of the Pacific Ocean, greatly increasing the area of warm water exposed to air and thereby warming the air.

Reply to  Werner Brozek
January 12, 2017 7:16 pm

Did you do the math on this? That’s an amazing feat, that a relatively small area of the Pacific can pull down so much energy in such a short amount of time. We aren’t talking about the entire Pacific ocean. That’d be an oceanic vortex in such a relatively small area. I suppose by spring we should see another el nino? Surely we should see the depth and volume of water warming at an accelerated pace.
Let’s picture it this way. It’s 70 F today, a cold front moves through and drops the temperature to 40 F. The system that was here is displaced. That’s weather. The dropping of temperature from 1.0 C to 0.2 C is climate. The warm air didn’t get displaced. The energy only has a couple of ways to go: it either gets absorbed, released, or a combination of the two. … in any event, that is not what AGW says. Short term? Do you see the squiggly lines where for a few years the temperature was building before the El Nino? Then on the other side, now, a sharp drop off. All that energy is gone. The temperature of the water and air are relative to one another. A sharp differential is shear.
A 0.8 C drop is a very large amount of energy that went somewhere. It should be obvious, and not conjecture, where that energy is. Just stating that it gets absorbed by a part of the Pacific Ocean: is that so, or does the water warm where it is due to other factors? How do you filter out the one from the other when the ocean acts like a weather system?

Reply to  Werner Brozek
January 12, 2017 7:34 pm

Did you do the math on this?

Ask Bob Tisdale when he has his next article. Keep in mind that water has a much larger heat capacity than air. So a relatively small amount of water can heat a large amount of air.
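
To give a feel for the scale, here is a rough back-of-the-envelope calculation (my own, using standard textbook values, so treat it as order-of-magnitude only):

```python
# How much must the upper ocean cool to warm the whole atmosphere by 1 K?
c_air   = 1005.0      # J/(kg*K), specific heat of air
m_air   = 5.1e18      # kg, total mass of the atmosphere
c_water = 4186.0      # J/(kg*K), specific heat of water
m_ocean = 3.6e14 * 100 * 1000.0   # kg: ocean area (m^2) x 100 m depth x density

energy_for_1K_of_air = c_air * m_air
ocean_cooling = energy_for_1K_of_air / (c_water * m_ocean)
print(f"Cooling the top 100 m of ocean by ~{ocean_cooling:.3f} K "
      "warms the entire atmosphere by 1 K")   # ~0.034 K
```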

Reply to  Werner Brozek
January 12, 2017 7:54 pm

And we can calculate it too. As I stated, CAGW went looking for the heat in the ocean and it wasn’t there. You remember the phrase “the heat’s hiding in the ocean”?
The two reasons I think there has been a slight warming trend are that I remember the 1970s and that I am convinced that CO2 follows temperature. Otherwise, there is a very strong probability that nothing has changed except weather patterns, and not the climate.

Reply to  Werner Brozek
January 13, 2017 2:22 am

The sudden atmospheric warming can be simply explained by the fact that hot water stored some hundred metres deep in the West Pacific is suddenly released to the surface of the whole Pacific, which spans one third of the circumference of the globe – a really big area!
The heat in the West Pacific comes from the trade winds: with no clouds, a big part of the entire Pacific is permanently exposed to the sun. Without the winds, the warmed-up and piled-up waters flush back east toward the American continent.
Water can store 1000 times more heat than air, so the atmosphere is heated up quickly. When cold water is again pushed from America to the West Pacific, the heat release stops, and the atmosphere quickly radiates its heat towards space.

Bill Illis
Reply to  Werner Brozek
January 13, 2017 7:34 am

The CERES satellite says the energy (originally stored in the Western Warm Pool area next to Indonesia, in the top 200 metres, from let’s say 2014 to early 2016, but then circulated underneath back to the East in the undercurrent to surface at the Galapagos Islands and form the super-El Nino) …
… was emitted back to space, more-or-less 3 months after the super-El Nino peaked.
The CERES long-wave emissions from Earth to space peaked in February 2016 (a pretty good spike there). [chart]
Very similar to the spike that the ERBE satellite recorded for the 1997-98 super-El Nino.
http://www.met.reading.ac.uk/~sgs02rpa/PAPERS/wielickietal2002_supp_fig.jpg
Energy stored in the western warm pool (you can say from the Sun, because that was the original source) eventually gets circulated to a spot where it is given back to the atmosphere; warming occurs temporarily, but then the energy gets emitted back to space, temperatures return to normal, and the system awaits the next temporary energy build-up. The history of the ENSO regions says this happens repeatedly, as do the periods when the energy is drained down and there are La Ninas. This has probably been happening since the Pacific became a wide, deep ocean about 400 million years ago.

Hivemind
Reply to  rishrac
January 12, 2017 3:17 pm

“The only number that matters right now is what is the temperature. 0.2 C above normal.”
I disagree. The only number that matters right now is how much grant money can be extracted from the taxpaying suckers.

bit chilly
Reply to  rishrac
January 12, 2017 3:29 pm

spot on rishrac. there is a very good reason anomalies are used instead of absolute temps. the so-called positive anomalies are within the variation to be expected just about everywhere the temperature is recorded, though no one seems to be too bothered about this issue.

richard verney
January 12, 2017 8:12 am

Werner
So it is arguable that, according to the satellite data, there is still a pause in the warming, and it goes back to 1998. At any rate, the possibility of this cannot be ruled out.
You suggest that the first 10 days of January are showing cooling. Obviously, we do not know whether this will continue or not for the remainder of the month. But speculating that January will continue where December left off, and that January will show a decline in the anomaly, at what level of anomaly will it be before the trend line has no positive slope?
Of course, that level of anomaly might not be reached until later this year (if indeed it is reached, which no doubt will depend upon whether the ENSO neutral conditions change towards La Nina conditions).

Steve Case
Reply to  richard verney
January 12, 2017 8:44 am

So it is arguable that, according to the satellite data, there is still a pause …
Depends on what you look at. If you let the Warmunists define the metric and only look at average temperatures, then it isn’t so obvious. Taking the average loses a lot of information; after all, the average of 1 and 99 is 50, and the average of 49 and 51 is also 50. But if you look at summertime maximum temperatures, it becomes a whole lot more apparent that what is being claimed might not be exactly so.
http://oi63.tinypic.com/156fl8y.jpg

Toneb
Reply to  Steve Case
January 12, 2017 9:49 am

No, the best metric to pick out the AGW signal is minimum temperature… [chart]

Bartemis
Reply to  Steve Case
January 12, 2017 11:03 am

So, Toneb, the takeaway message is that temperatures are not actually getting higher, but the spread between low and high is decreasing, which augurs more benign weather, yes?

MarkW
Reply to  Steve Case
January 12, 2017 11:49 am

You are using the warming prior to 1920 to create that slope.
However the warming prior to 1920 cannot have been caused by CO2, since CO2 levels weren’t rising at the time.
In short, you have disproven your own hypothesis.

Dave fir
Reply to  Steve Case
January 12, 2017 11:59 am

Increases in minimum temperatures simply reflect UHI effect. Get a grip, data mongers.

Steve Case
Reply to  Steve Case
January 12, 2017 12:16 pm

Toneb January 12, 2017 at 9:49 am
No, the best metric to pick the AGW signal is minimum temperature….

When Johnny Carson said, “Wow! Was it ever hot today, a real scorcher!” and the audience chimed in, “How hot was it?”, they weren’t asking how warm it was at 2:00 AM. The summertime minimum temperature sheds no light on whether or not the warming is a catastrophe. After all, it’s the extreme weather that we are told is the problem. Cooler summers and warmer winters don’t constitute extreme weather. The Warmunistas are going to have to start telling us about extreme mildness.

Toneb
Reply to  Steve Case
January 12, 2017 12:45 pm

“Increases in minimum temperatures simply reflect UHI effect. Get a grip, data mongers.”
Nope.
Ask the BEST crew and former sceptic Richard Muller.
http://berkeleyearth.org/faq/
“Is the urban heat island (UHI) effect real?
The Urban Heat Island effect is real. Berkeley’s analysis focused on the question of whether this effect biases the global land average. Our UHI paper analyzing this indicates that the urban heat island effect on our global estimate of land temperatures is indistinguishable from zero.”

Toneb
Reply to  Steve Case
January 12, 2017 12:51 pm

“You are using the warming prior to 1920 to create that slope.
However the warming prior to 1920 cannot have been caused by CO2, since CO2 levels weren’t rising at the time.
In short, you have disproven your own hypothesis.”
Not at all.
It was the only graph of US min temps I could find.
You DO need to do an OLS trend by eye from around 1970, when GHG forcing outweighed aerosol content.
Or 1930 if you prefer, to compare with the trend drawn on the max graph just above.
It would not be negative.

MarkW
Reply to  Steve Case
January 12, 2017 1:11 pm

1) BEST has been thoroughly refuted.
2) Muller was never a skeptic.
Two lies in one sentence. You are improving, my padawan.

MarkW
Reply to  Steve Case
January 12, 2017 1:11 pm

Toneb: If the only graph that you can find disproves your point, then your point is mighty weak to begin with.

Bartemis
Reply to  Steve Case
January 12, 2017 1:25 pm

Toneb @ January 12, 2017 at 12:51 pm
“You DO need to do an OLS trend by eye from around 1970, when GHG forcing outweighed aerosol content. Or 1930 if you prefer, to compare with the trend drawn on the max graph just above.”
Merely manifestations of the ~60 year cycle:
http://i1136.photobucket.com/albums/n488/Bartemis/temps_zpsf4ek0h4a.jpg

Menicholas
Reply to  Steve Case
January 12, 2017 2:25 pm

Funny thing: Outside of conversations about global warming with alarmists, I hardly ever get to use the word “HOGWASH!”

clipe
Reply to  Steve Case
January 12, 2017 4:21 pm

and former sceptic Richard Muller

https://www.technologyreview.com/s/403256/global-warming-bombshell/
by Richard Muller October 15, 2004

If you are concerned about global warming (as I am) and think that human-created carbon dioxide may contribute (as I do), then you still should agree that we are much better off having broken the hockey stick. Misinformation can do real harm, because it distorts predictions. Suppose, for example, that future measurements in the years 2005-2015 show a clear and distinct global cooling trend. (It could happen.) If we mistakenly took the hockey stick seriously–that is, if we believed that natural fluctuations in climate are small–then we might conclude (mistakenly) that the cooling could not be just a random fluctuation on top of a long-term warming trend, since according to the hockey stick, such fluctuations are negligible. And that might lead in turn to the mistaken conclusion that global warming predictions are a lot of hooey. If, on the other hand, we reject the hockey stick, and recognize that natural fluctuations can be large, then we will not be misled by a few years of random cooling.
A phony hockey stick is more dangerous than a broken one–if we know it is broken. It is our responsibility as scientists to look at the data in an unbiased way, and draw whatever conclusions follow. When we discover a mistake, we admit it, learn from it, and perhaps discover once again the value of caution.

AndyG55
Reply to  Steve Case
January 12, 2017 6:25 pm

“and former sceptic Richard Muller”
roflmao.. you are an inebriated LIAR, toneb
Muller never was a skeptic.

clipe
Reply to  Steve Case
January 12, 2017 6:38 pm

Misinformation can do real harm.

Hello CNN/Buzzfeed.

Devil
Reply to  Steve Case
January 13, 2017 3:41 am

Muller was a skeptic but no denier. See the difference!

Bill Illis
Reply to  Steve Case
January 13, 2017 1:54 pm

Here is the earliest US temperature graph that there is. The NCDC completely wiped these records out of everywhere, but they missed one place: the Wayback Machine.
http://web.archive.org/web/20000817130955/http://www.ncdc.noaa.gov/ol/climate/research/1999/ann/usthcnann_pg.gif
And the data are here:
http://web.archive.org/web/20000817172708/http://www.ncdc.noaa.gov/ol/climate/research/1999/ann/usthcnann.txt
And one of the earliest global temperature records from the NCDC; quite a bit of this has been adjusted out by now.
http://web.archive.org/web/20000815214002/http://www.ncdc.noaa.gov/ol/climate/research/1997/globet3.txt

clipe
Reply to  Steve Case
January 13, 2017 7:58 pm

Muller was a skeptic but no denier. See the difference!

“See the difference!”
Did you confuse an exclamation point with a question mark?
Reading what Muller said, no. Muller is a believer.
Does Toneb know the difference?

Reply to  richard verney
January 12, 2017 9:02 am

So it is arguable that, according to the satellite data, there is still a pause in the warming, and it goes back to 1998. At any rate, the possibility of this cannot be ruled out.

True, but Lord Monckton and I use the assumption that there has to be a 50% chance of cooling rather than a 30% chance of cooling before we calculate any pause.

at what level of anomaly will it be before the trend line has no positive slope?

It is not only the anomaly that is important but the length of time at that anomaly. Right now, at 0.229, RSS is at the zero line. If it drops 0.1 below this, it will take x months to have a zero slope. But if it drops 0.2 below this, it will take x/2 months to have a zero slope, etc.

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 3:38 pm

To put numbers on that – the mean of RSS from mid 1997 to Feb 2016 was m=0.26°C. The sum of degrees exceeding that, to end 2016 was 2.6. So (approx) it would take 26 months at 0.16C (m-0.1) or 13 months at 0.06C. Dec was 0.23C.
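
A minimal sketch of that arithmetic (the values are the ones quoted in the comment above; the method is the approximate mean-restoring one described, so treat the output as a rough estimate):

```python
# Months of below-mean anomalies needed to restore a zero trend (a "pause")
m = 0.26        # mean RSS anomaly, mid-1997 to Feb 2016 (deg C)
excess = 2.6    # degree-months accumulated above m through Dec 2016

for future in (0.16, 0.06):           # i.e. m - 0.1 and m - 0.2
    months = excess / (m - future)    # months needed to cancel the excess
    print(f"At a steady {future:.2f} C: ~{months:.0f} months")
```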

Richard M
Reply to  Werner Brozek
January 12, 2017 4:11 pm

You simply cannot end a trend with a super El Nino. That is meaningless. If you want to look at a trend the best you can do is stop right before the El Nino kicked in. Then you can look at the El Nino itself to see if it shows any behavior outside the ordinary. If you do this you get something like the following graph.
http://www.woodfortrees.org/plot/rss/from:1997/to/plot/rss/from:1997/to:2014.75/trend/plot/rss/from:2016.15/to/trend/plot/rss/from:2014.75/to:2016.15/trend
The pause is still active using this slightly different view. I never liked Monckton’s pause because it was always destined to end whenever we hit a strong El Nino. Now, you could go back to Monckton’s method whenever the current La Nina ends, but that might not be for 2 more years. Better to just ignore ENSO altogether. Has anyone tried to chart only months with ENSO-neutral conditions? This would seem to solve the problem.

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 5:00 pm

Richard M,
“You simply cannot end a trend with a super El Nino. That is meaningless. If you want to look at a trend the best you can do is stop right before the El Nino kicked in.”
So you show a plot starting with a super El Nino, but with the one at the end carefully excluded!

TRM
Reply to  Werner Brozek
January 12, 2017 6:33 pm

Nick, he also included the follow-up La Nina and stated that using Monckton’s method would be valid after the current La Nina is finished. I too would like to see the trend for ENSO-neutral months.

Richard M
Reply to  Werner Brozek
January 12, 2017 8:35 pm

Nick, a trend starting with an El Nino almost always picks up a La Nina immediately after. These balance out as far as most trends go. So, yes, starting with an El Nino-La Nina pair is perfectly valid. Ending with an El Nino does not include the balancing La Nina which is clearly nonsense. I find it amusing you would even ask such a silly question.

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 10:38 pm

Richard M
“Ending with an El Nino does not include the balancing La Nina which is clearly nonsense.”
Sounds like you’re making up these rules as you go along. Lots of special pleading. But how long do we have to wait for that La Nina? Seems like it might have come and gone.

Richard M
Reply to  Werner Brozek
January 13, 2017 11:31 am

Nick, it’s called common sense and is the only way to understand what is really going on if you want to continue to use trends across ENSO active years. We just had two years of El Nino conditions so it will likely take two more years to see what happens. I still think a better way is to remove all the noise and just look at trends in the neutral years.

Richard Barraclough
Reply to  Werner Brozek
January 14, 2017 1:57 pm

Richard M January 12, 2017 at 4:11 pm says “The pause is still active using this slightly different view”
That certainly is a different view. You stop measuring it just before it vanished and then say “Look! It never went away”

Mark from the Midwest
January 12, 2017 8:15 am

Simple question from a guy in telecom: The use of the t-distribution assumes a normal distribution. Are temperatures really distributed normally? Second, do we have good historical data that would tell us that the variance since 1998 is in line with the true population variance, or is everyone just winging it?
Thanks to whoever answers

Mark from the Midwest
Reply to  Mark from the Midwest
January 12, 2017 8:55 am

wrong question, should be: are the anomalies distributed normally?

paqyfelyc
Reply to  Mark from the Midwest
January 12, 2017 9:26 am

fair enough, although, since anomalies are just (real temperature – temporal average temperature) … the answer stays the same.
No.
No.

Mark from the Midwest
Reply to  Mark from the Midwest
January 12, 2017 10:42 am

OK, so now I don’t have a whole lot of confidence in the confidence interval, per se. Based on the statistical analysis one would hedge the bet to reflect a 70-30 up-down ratio, but based on the notions about the impact of the last El Nino I’d be willing to go all-in on the notion that the trend line slope moves toward zero.

Bellman
Reply to  Mark from the Midwest
January 12, 2017 5:50 pm

If you’re talking about the trend the question should be, are the residuals distributed normally?

Reply to  Mark from the Midwest
January 12, 2017 9:05 am

Nick Stokes will be available in a few hours to answer all questions based on his numbers.

commieBob
Reply to  Mark from the Midwest
January 12, 2017 10:01 am

The telecom problem of extracting a signal from signal-plus-noise is trivial compared with extracting a supposed trend from temperature data. Given the number of periodic and quasi-periodic signals, there is zero chance that the ‘noise’ in the temperature data is gaussian.

Nick Stokes
Reply to  Mark from the Midwest
January 12, 2017 10:01 am

” or is everyone just winging it?”
Yes, when it comes to estimating uncertainty. It’s uncertain. You don’t know that the residuals in a trend are iid normal; in fact, they aren’t. More details here. There is a lot of debate, for example, about how to treat the autocorrelation of residuals. In the end, you get someone’s estimate of uncertainty. You can be more or less uncertain if you wish.

Dave Fair
Reply to  Nick Stokes
January 12, 2017 12:06 pm

IPCC climate models admit to no uncertainty.

Nick Stokes
Reply to  Nick Stokes
January 12, 2017 5:55 pm

With GCMs there is nothing to be uncertain of. You run a model, and that is what it said. As I said in another comment, uncertainty is about what would happen if you had done things differently. With a model that is easy: you just do things differently and see. Run it again. That is what they do.

Bartemis
Reply to  Mark from the Midwest
January 12, 2017 11:04 am

And, more importantly, are they independent?

Reply to  Mark from the Midwest
January 12, 2017 11:54 am

Mark from the Midwest January 12, 2017 at 8:15 am

Simple question from a guy in telecom: The use of the t-distribution assumes a normal distribution. Are temperatures really distributed normally?

Generally, no. They typically are high Hurst exponent data. As a result, the confidence intervals are much wider. See my post A Way To Calculate Effective N.
Alternatively, climate datasets can also be modeled successfully as an ARMA process, typically with a high AR value and a negative MA value.
FOR EXAMPLE: the UAH MSU lower troposphere temperature dataset has a Hurst exponent of 0.7, and is modeled as an ARMA process with AR = 0.93 and MA = -0.31.
Short answer is that many, many uncertainties are underestimated in climate science due to bad statistics.
w.
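
To see what that kind of autocorrelation does to a naive trend test, here is a small simulation sketch (my own illustration, using the AR and MA values quoted in the comment above, not code from the comment): generate trendless ARMA(1,1) noise and count how often plain OLS declares a "significant" trend at the 95% level.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.arima_process import ArmaProcess

# Trendless ARMA(1,1) noise with AR = 0.93, MA = -0.31 (values quoted above).
# Note statsmodels' sign convention: coefficient arrays start with 1 and
# the AR coefficients are negated.
np.random.seed(0)
proc = ArmaProcess(ar=[1, -0.93], ma=[1, -0.31])
n, trials, false_hits = 228, 1000, 0       # 228 months = Jan 1998 to Dec 2016

for _ in range(trials):
    y = proc.generate_sample(nsample=n)
    fit = stats.linregress(np.arange(n), y)
    if abs(fit.slope) > stats.t.ppf(0.975, n - 2) * fit.stderr:
        false_hits += 1                    # naive OLS calls this "significant"

print(f"False 95% 'significant' trends: {false_hits / trials:.0%} (nominal: 5%)")
```

With persistence this strong, the false-alarm rate comes out many times the nominal 5%, which is exactly the "uncertainties are underestimated" point.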

urederra
January 12, 2017 8:15 am

So, which temperature data set is the correct one?

richard verney
Reply to  urederra
January 12, 2017 8:21 am

None of them. They all have issues. They are at best a guide.

Reply to  urederra
January 12, 2017 9:18 am

So, which temperature data set is the correct one?

Only GISS will show a statistically significant record this year. But in its defense, GISS had 2015 as its previous warmest year, whereas the satellites had 1998 as their warmest year. And the satellites are warmer in 2016 relative to 2015 by a much larger margin than GISS will be. But then again, satellites respond hugely to a strong El Nino.
Perhaps the best answer I can give is to look at the WTI line on the graph in my article that combines and averages 4 data sets. However, I believe it uses UAH5.6 and not UAH6.0.

Dave Fair
Reply to  Werner Brozek
January 12, 2017 12:14 pm

Satellite and radiosonde TRENDS destroy IPCC climate model “projections.” They, additionally, disprove surface temperature estimates. Nitpick the details all you want, but satellite trends show the way.

urederra
Reply to  Werner Brozek
January 12, 2017 12:45 pm

Thanks for the reply.
I have 2 clocks in my kitchen. One hangs on the wall, the other is in the microwave oven. Had I got only one, I would know the time, but since I have two, they drift apart. Now one is two minutes ahead.
I can calculate the mean, but my best option is to synchronize them again with the Windows time server, which is my reference time.
The problem here is that there is no reference point or reference data set, and people use the data set that fits better with their narrative.

GregK
Reply to  Werner Brozek
January 12, 2017 5:06 pm

urederra said….
“Thanks for the reply.
I have 2 clocks in my kitchen. One hangs on the wall, the other is in the microwave oven. Had I got only one, I would know the time, but since I have two, they drift apart. Now one is two minutes ahead”.
But what would be the point of having two clocks if they showed the same time?

observa
Reply to  Werner Brozek
January 17, 2017 6:14 am

“But what would be the point of having two clocks if they showed the same time?”
So that if they don’t show the same time you know you’ve got the wrong time and can adjust them to be the same again so then you know for sure you’ve got the right time. It’s called climatology and only climatologists have the right clocks.

Another Doug
January 12, 2017 8:17 am

Whatever happened to the concept of significant digits? Lots of pointless claims and counterclaims could be avoided by eliminating the second and third digits after the decimal. Maybe even the first.

richard verney
Reply to  Another Doug
January 12, 2017 8:24 am

None of these data sets should present results to more than a tenth of a degree. Even then, one would wish to see a proper and reasonable error bar set out.

firetoice2014
Reply to  richard verney
January 12, 2017 8:54 am

Clearly, one accurate temperature data set would be more valuable than four temperature estimate sets, which is what we have now. GISS, NCEI, HadCRUT and Japan are NOT data sets.

Dave Fair
Reply to  richard verney
January 12, 2017 12:24 pm

Prior to the satellite era, no measuring device was up to supporting claims of measuring climate change parameters, much less fundamentally changing our society, economy or energy systems.
ARGO is only a decade old.
Give it a few years, then come back with your save-the-world schemes.

Matt Bergin
Reply to  Another Doug
January 12, 2017 9:02 am

If we are talking the original thermometers you should ignore “everything” after the decimal. Since they were graduated in single degrees.

Reply to  Matt Bergin
January 12, 2017 9:31 am

If we are talking the original thermometers you should ignore “everything” after the decimal. Since they were graduated in single degrees.

In that case, my whole table would be either zeros or ones. How useful would that be?

Tom Dayton
Reply to  Matt Bergin
January 12, 2017 9:49 am

Matt Bergin: Learn about the Law of Large Numbers, please.

Reply to  Matt Bergin
January 12, 2017 10:05 am

“In that case, my whole table would be either zeros or ones. How useful would that be?”
Very useful, Werner. It would give the perspective of an endangered professional – a qualified metrologist.

Paul Penrose
Reply to  Matt Bergin
January 12, 2017 11:48 am

Tom Dayton,
The Law of Large Numbers assumes the error distribution of the data is normal. Since this is not the case with these data sets, you can’t remove the “noise” by averaging.

MarkW
Reply to  Matt Bergin
January 12, 2017 11:52 am

To add to Paul’s point, the law of large numbers also assumes that the equipment is either the same or identical, and that the circumstances surrounding each measurement are also the same.
Neither is true when it comes to temperature measurements.

Menicholas
Reply to  Matt Bergin
January 12, 2017 2:37 pm

Yes, the law of large numbers is “a theorem that describes the results of performing the same experiment a large number of times”.
Measuring temps on different days in widely separated locations with different equipment is not performing the same experiment a large number of times.
Not even close.

Menicholas
Reply to  Matt Bergin
January 12, 2017 2:41 pm

Also, the law of large numbers does not erase the distinction between precision and accuracy.

Michael Kelly
Reply to  Matt Bergin
January 12, 2017 4:25 pm

The most elegant statement of how to present numerical results I have seen is that one writes down the first known digit(s), followed by the first uncertain digit. I work in a government regulatory agency, and battled for 5 years over the misuse of a mantissa containing a lead digit and three decimal digits. The number in question is a risk, and it is generated by a Monte Carlo analysis which has never (and can never, even in principle) been shown to produce accurate results. Seldom are more than 300 Monte Carlo runs done in this particular calculation, which may converge on a first digit. But each digit after that requires 100 times more calculations than the one before, so there should be no discussion of anything but the first digit. The really ridiculous part, though, is that the risk number emerging from this Monte Carlo is then multiplied by another probability, whose value can range from 0.3 to 0.7 depending on which expert one asks.
Going back to the initial statement, then, one could not write down any known digits, and would thus be constrained to the first unknown one – which might itself be wrong by a factor of 2.33. Yet analysts wasted an enormous amount of government and industry time and effort (in the millions of dollars) expressing “concern” that a number was coming out as 2.978 rather than 2.971. None of them went to school in the precalculator era (I used a slide rule in my freshman and sophomore engineering classes).
I managed to get the rule changed to use a one digit mantissa, rounded from the nearest decimal fraction. It’s still ultra conservative, and it is going to reduce cost by millions of dollars a year.
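
The "100 times more calculations per digit" remark follows from Monte Carlo's 1/√N convergence: the standard error of the mean shrinks tenfold (one more reliable digit) only when the number of runs grows a hundredfold. A generic demonstration (my own, not the regulatory model in question):

```python
import numpy as np

# Standard error of a Monte Carlo mean scales as 1/sqrt(N):
# each extra reliable digit (10x less error) costs ~100x more runs.
rng = np.random.default_rng(1)
for n in (300, 30_000, 3_000_000):
    draws = rng.random(n)                  # stand-in for any simulated risk metric
    se = draws.std(ddof=1) / np.sqrt(n)    # standard error of the mean
    print(f"N = {n:>9,}: mean = {draws.mean():.5f} +/- {se:.5f}")
```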

Dave Fair
Reply to  Michael Kelly
January 12, 2017 5:15 pm

I bought the first HP35 on campus ($395 in 1973!), even on a lean student budget. It is the only way I passed vector analysis. Graduated as Outstanding Senior Engineer, top of my class in 1974, having upgraded to an HP45 by then.
I still have both of them (and my last sliderule), and use an HP12C for everyday math when away from the PC. Screw anybody who doesn’t understand Reverse Polish Notation or how to use a sliderule.
I pre-paid my student GI Bill and current VA healthcare in Vietnam. Charlie Skeptic was born in the mud, the blood and the beer. Everybody got a piece of him.
All of you, dance around the numbers all you want. Nothing has changed in over 10,000 years, unless, maybe, it is slightly cooler. Trends are until they aren’t.
IPCC climate models are bunk. CAGW is religion.

Reply to  Another Doug
January 12, 2017 9:26 am

Whatever happened to the concept of significant digits?

As a retired physics teacher, I know exactly what you mean. Think of me as a humble reporter just reporting what the learned men are saying. I report what they are “saying” without trying to tidy up their “language” first.
But in my defense, my whole introduction and Section 1 dealt with uncertainties.

Another Doug
Reply to  Werner Brozek
January 12, 2017 9:50 am

To be clear, I’m not criticizing your reporting or analysis. Just think it’s silly that (somewhat arbitrary) global averages are reported to thousandths of a degree.

gnomish
Reply to  Werner Brozek
January 12, 2017 10:21 pm

what happened was that computers now do the number crunching and they do not find it tedious.
so you can go ahead and round your data input as much as you want, but then the computer is going to do the calculations with as many bits as the ALU uses regardless. and it will mercilessly inflate your one-digit decimal fractions into infinity and beyond!
then it’s going to output numbers that you have to explicitly format if you want them rounded down – but when the computer prints the stuff it’s more tedious to spend time at the desk editing – that’s what grad students and secretaries are for.

Carbon BIgfoot
Reply to  Another Doug
January 12, 2017 4:25 pm

OMG, we finally have a mathematical student. I remember when a Township Engineer (Civil) wanted me to calculate a retention pond size to four decimal places when the soil coefficient was 0.75!! He was highly insulted when I pissed myself laughing. You Climate Wonks never learned Algebra I.

John Peter
January 12, 2017 8:19 am

“Does it seem odd that only GISS will probably set a statistically significant record in 2016?”
Maybe you won’t have to answer that question in a few months time.

Reply to  John Peter
January 12, 2017 9:34 am

Maybe you won’t have to answer that question in a few months time.

Are you suggesting that GISS is part of the swamp that Trump will drain?

Bartemis
Reply to  Werner Brozek
January 12, 2017 11:08 am

I would say, temperatures are still declining from the El Nino, which is still masking the underlying trend. We will have a much better picture in the next year.

Menicholas
Reply to  Werner Brozek
January 12, 2017 2:48 pm

“Are you suggesting that GISS is part of the swamp that Trump will drain?”
My hope is that a soup to nuts audit of procedures and practices will be ordered and, additionally, transparency will be implemented and enforced regarding such practices as adjustments, infilling, homogenization, and numerous other highly questionable and dubious methods employed by GISS and other participants in the CAGW cabal.
And someone at the top should have to ‘splain to all of us peons exactly how it is that all of the adjustments add up to a perfect match of the CO2 concentration chart!


Alba
January 12, 2017 8:38 am

Edited by Just the Correct Punctuation.
“As can be seen from the above graphic, the slope is positive from January 1998 to December 2016, however with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998.”
This should either be:
“As can be seen from the above graphic, the slope is positive from January 1998 to December 2016. However, with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998.”
Or
“As can be seen from the above graphic, the slope is positive from January 1998 to December 2016; however, with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998.”
“If anyone has an exact percentage here, please let us know, however it should be around a 60% chance that a record was indeed set for RSS.”
This should either be:
“If anyone has an exact percentage here, please let us know. However, it should be around a 60% chance that a record was indeed set for RSS.”
Or
“If anyone has an exact percentage here, please let us know; however, it should be around a 60% chance that a record was indeed set for RSS.”
This has it correct in the original:
“Since this is less than the error margin of 0.1 C, we can say that 2016 and 1998 are statistically tied for first place. However there is still over a 50% chance that 2016 did indeed set a record, but the probability for that is far less than 95% that climate science requires so the 2016 record is statistically insignificant.”
If it’s important to get the science and maths right, it’s also important to get grammar and punctuation correct.
Incidentally, an issue is something over which there is disagreement. Thus whether or not the world is getting hotter too quickly is an issue.
A problem is something which has to be sorted. If your central heating is not working on a very cold day you have a problem, not an issue. (The issue might be the cause of your problem.)
So can we please go back to the sensible days when we called a problem a problem and we called an issue an issue?

Menicholas
Reply to  Alba
January 12, 2017 2:59 pm

I tried not to let it happen, but I could not help it…my eyeballs just rolled clean out of their sockets.

O R
January 12, 2017 8:50 am

UAH 5.6 TLT is significant from 1998- 2016, according to the Moyhu trend calculator:
Rate: 1.425°C/Century;
CI from 0.220 to 2.631;
t-statistic 2.317
This does not reflect well on the AMSU diurnal drift correction and satellite selection in UAH v6.
UAH 5.6 relies on non-drifting AMSU satellites and should be a good benchmark for validation of trends in v6.

Reply to  O R
January 12, 2017 9:43 am

UAH 5.6 TLT is significant

If Dr. Spencer feels that 6.0beta5 is the best for now, I am in no position to argue.

O R
Reply to  Werner Brozek
January 12, 2017 2:50 pm

The only “validation” of AMSU-era trends that I have seen over at Spencer’s, was that v6 agreed with RSS TLT 3.3. But that’s history, v3.3 was proven faulty long ago.
Actually, RSS tested the UAH 5.6 concept when they developed their new product. I think it was called MIN_DRIFT. It corroborated the finally chosen method, but had certain limitations (no nondrifting MSU-satellites).

AndyG55
Reply to  Werner Brozek
January 12, 2017 6:29 pm

UAH 6 also agrees with the only pristine surface data set in the world, USCRN
So don’t let the FARCE that is the GISS surface data fool you too much.

Richard M
Reply to  O R
January 12, 2017 4:24 pm

Ending a trend with a super El Nino is meaningless. You know it and I know it. Quit producing nonsense.

Reply to  Richard M
January 12, 2017 5:23 pm

Ending a trend with a super El Nino is meaningless. You know it and I know it. Quit producing nonsense.

If you also start with one, it balances out.

Richard M
Reply to  Richard M
January 12, 2017 8:37 pm

No Werner, it doesn’t balance out as I explained above. You MUST include EL Nino – La Nina pairs for a valid trend or completely eliminate all months that are not ENSO neutral.

David grawrock
January 12, 2017 8:56 am

Does anyone else note the correlation between the WFT graph and that graph floating around showing the Democrats’ loss of positions under Obamacare? I’m sure there must be some causation we can find. Since it comes from trees, possibly the amount of hot air generated by politicians relates to tree growth, which is inverse to their ability to stay in office?

paqyfelyc
January 12, 2017 9:12 am

The 1998-2016 trend is useless, unless you can explain why January 1998 is a relevant beginning date, as opposed to, say, June 1997 or October 1999.
I can bet the words “cherrypicked” and “El nino” are coming …
“Section 1” is far more interesting.
Seems to me a very large discrepancy, that Hadcrut4.5 and GISS report a significant warming, while UAH, RSS and Hadsst3 do not.

Javier
Reply to  paqyfelyc
January 12, 2017 9:48 am

I agree. Actually if you are ending right after a strong El Niño, then you could make a point that starting right after a previous strong El Niño would make some sense, but obviously then you get a rising trend.
Alternatively you could go from peak El Niño to peak El Niño. You still get contamination from each El Niño not being exactly the same, but you could argue that the effect of that is likely to be smaller than choosing any other starting point.
Best approach if you want to discuss rate of warming would be to represent rate of warming, and not temperatures.

Reply to  paqyfelyc
January 12, 2017 9:51 am

The 1998-2016 trend is useless, unless you can explain why January 1998 is a relevant beginning date, as opposed to, say, June 1997 or October 1999.

Starting at the beginning of one very strong El Nino and ending with the end of an equally strong later El Nino is as fair as you can get. It is better than going from El Nino to La Nina or vice versa.
(Hadcrut4.5 will not show a statistically significant record.)

Richard M
Reply to  Werner Brozek
January 12, 2017 4:33 pm

Absolutely not. El Nino and La Nina are not random events. La Nina typically comes right after an El Nino. Hence if you start right before an El Nino you capture both the El Nino and the La Nina at the start of the trend. These tend to balance out. If you stop right after an El Nino you miss out on the balancing effect of the La Nina.
The only valid way is to start before one of the pairs and end after one of the pairs. Nothing else is valid. Or, as I keep saying, build a graph with only ENSO neutral months. Throw out all the El Nino and La Nina months. That is, treat them as outliers.

Reply to  paqyfelyc
January 12, 2017 9:54 am

Seems to me a very large discrepancy, that Hadcrut4.5 and GISS report a significant warming, while UAH, RSS and Hadsst3 do not.

That is true! And both have seen many “adjustments”.

Nick Stokes
Reply to  paqyfelyc
January 12, 2017 10:07 am

“Seems to me a very large discrepancy, that Hadcrut4.5 and GISS report a significant warming, while UAH, RSS and Hadsst3 do not.”
UAH, RSS and Hadsst3 are measuring different places. But for more confusion, UAH 6 is not significant but UAH 5.6 is. Remember though that there is a continuum of uncertainties, and a 95% cut-off is arbitrary.
People often assign the wrong significance to significance.

Bartemis
Reply to  Nick Stokes
January 12, 2017 11:15 am

But for more confusion, UAH 6 is apparently not significant but UAH 5.6 apparently is.
FIFY. If they are both apparently significant or not significant, then they may reflect reality. If one is and one isn’t, then at least one of them is wrong.
GISS is not even a serious contender – too many tenuous “adjustments”.

Nick Stokes
Reply to  Nick Stokes
January 12, 2017 11:33 am

‘too many tenuous “adjustments”.’
The cumulative adjustments to GISS are small compared to the adjustment made in going from UAH 5.6 to 6.

Dave Fair
Reply to  Nick Stokes
January 12, 2017 12:43 pm

Do any of you understand “significant?”
Today’s climate does not differ materially from any of those experienced over the Holocene. Until it does, it’s all mental masturbation, number mongering and sophisticated-sounding speculation about the past and future. Statistics cannot describe the unknown. Unhinged numbers are not reflections of reality.
Anybody that understands the climate and its drivers, stand up. Everyone else, shoot them. They are liars and charlatans.

Bartemis
Reply to  Nick Stokes
January 12, 2017 1:28 pm

“The cumulative adjustments to GISS are small compared to the adjustment made in going from UAH 5.6 to 6.”
Not really. And, adjustments to UAH have gone both ways. “Adjustments” to GISS are predominantly in one direction. It will be taught in years to come as a particularly egregious exemplar of confirmation bias.

Nick Stokes
Reply to  Nick Stokes
January 12, 2017 2:44 pm

Bartemis,
“Not really. And, adjustments to UAH have gone both ways. “Adjustments” to GISS are predominantly in one direction.”
Here is the graph since 1979 (duration of UAH), all data set to the UAH anomaly base of 1981-2010, with 12 month running mean on monthly data. The GISS versions are as taken from the Wayback machine: [image]
Here is the difference graph. It’s pretty much downhill for UAH. Not much pattern for GISS. [image]

Bartemis
Reply to  Nick Stokes
January 12, 2017 3:36 pm

The difference between UAH versions is basically a bias shift after 2004:
http://woodfortrees.org/plot/uah6/from:2000/plot/uah5/to:2004/plot/uah5/from:2004/offset:-0.08
GISS has been systematically “adjusted” to make the past colder and the present warmer.

Nick Stokes
Reply to  Nick Stokes
January 12, 2017 4:36 pm

Bartemis,
“The difference between UAH versions is basically a bias shift after 2004”
Your WFT plots really need at least 12-month running mean so one can see what is happening. Here is WFT for 5.6 and 6 with that smoothing and trends shown. There is a particularly large trend difference (downward) since 1997. [image]

Bartemis
Reply to  Nick Stokes
January 12, 2017 6:00 pm

What is it with the “trends” crowd? They think a least squares linear regression is some kind of holy thing.
You’re drawing trends of noise, and drawing conclusions. Stop it.

Bartemis
Reply to  Nick Stokes
January 12, 2017 6:03 pm

Besides which, you did it wrong. Your trend differences are just the smoothing of the step change I highlighted.
http://woodfortrees.org/plot/uah6/from:2000/mean:12/plot/uah5/to:2004/mean:12/plot/uah5/from:2004/offset:-0.08/mean:12

AndyG55
Reply to  Nick Stokes
January 12, 2017 6:32 pm

UAH5.6 to UAH 6…. KNOWN justifiable adjustments
GISS…. unjustified scam driven adjustments to cover a failed hypothesis and agenda
There is a HUGE difference, Nick…
And I’m pretty sure you KNOW that, if your pay packet didn’t depend on you NOT knowing

Reply to  Nick Stokes
January 12, 2017 7:16 pm

Bartemis January 12, 2017 at 3:36 pm
The difference between UAH versions is basically a bias shift after 2004:
http://woodfortrees.org/plot/uah6/from:2000/plot/uah5/to:2004/plot/uah5/from:2004/offset:-0.08

The difference between the two versions of UAH is much more than that. Version 6 is a different product; unlike RSS, they didn’t change the name. Version 6 represents the troposphere with a maximum weighting at 4 km, as opposed to version 5.6, which has a maximum weighting at 2 km. The reason for this was to use a more robust method for calculating the temperature which avoids the interference with the surface present in version 5.6. RSS made a similar change and produced a new product, TTT, which has a very similar weighting to UAH version 5.6. Like UAH, I suspect that they will drop the TLT product, since the surface interference is inherent to it (RSS always recognized this by not covering high-altitude cold regions: Antarctica, the Himalayas and Greenland). The new method allows wider latitude coverage because it doesn’t have the same deficiencies near the poles that the previous method did.

Bartemis
Reply to  Nick Stokes
January 13, 2017 8:25 am

Thank you for the info, Phil. The practical effect of it on the data is an apparent constant displacement after 2004. But, it is nice to know the reason, and that it is a good reason for expecting the later product to be more accurate.

Toneb
Reply to  Nick Stokes
January 13, 2017 10:54 am

“There is a particularly large trend difference (downward) since 1997.”
Yes Nick ….. since the new AMSU on NOAA-15 took over…
http://postmyimage.com/img2/792_UAHRatpacvalidation2.png

Reply to  Nick Stokes
January 14, 2017 6:10 am

Toneb said, January 13, 2017 at 10:54 am:

Yes Nick ….. since the new AMSU on NOAA-15 took over…

Nice try, Toneb, but no. The change in the difference trend evidently starts a few years before the MSU->AMSU transition. In fact, it starts in 1995-1996: [image]
NOAA on the RATPAC-A dataset:
https://www.ncdc.noaa.gov/data-access/weather-balloon/radiosonde-atmospheric-temperature-products-accessing-climate
“RATPAC-A contains adjusted global, hemispheric, tropical, and extratropical mean temperature anomalies. From 1958 through 1995, the bases of the data are spatial averages of the Lanzante et al. (2003a,b; hereafter LKS) adjusted 87-station temperature data. After 1995, the Integrated Global Radiosonde Archive (IGRA) station data form the basis of the RATPAC-A data. NOAA scientists used the so-called “first difference method” to combine the IGRA data. This method is intended to reduce the influence of inhomogeneities resulting from changes in instrumentation, observing practice, or station location.”
A bit more in depth:
ftp://ftp.ncdc.noaa.gov/pub/data/images/Ratpac-datasource.docx
“The scientific team derived the LKS data from data in the Comprehensive Aerological Reference Dataset (CARDS) obtained from NCDC (…). A team of three climate scientists adjusted monthly means for 87 carefully selected stations using a multifactor expert analysis, without use of satellite data as references and with minimal use of neighbor station comparisons. The team visually examined time series of temperatures at multiple levels, night-day temperature differences, temperatures predicted from regression relationships, and temperatures at other nearby stations. They also considered metadata, statistical change points, the Southern Oscillation Index and the dates of major volcanic eruptions. Using these indicators, they identified artificial change points and remedied them by either adjusting the time series at each affected level or, if adjustment was not feasible, by deleting data. Examinations were made of the adjustments for reasonableness. RATPAC uses the “LIBCON” version of the LKS adjusted data, which includes the most complete set of adjustments and uses the preferred adjustment method. The LKS data consist of monthly temperatures for 16 atmospheric levels from the surface to 10 mb, from 1948 to 1997.
(…)
IGRA
To remedy various recently identified problems in the CARDS database, NCDC has undertaken a wholesale revision of the CARDS quality-control procedures (Durre et al. 2005). Using the resulting IGRA dataset, rather than the CARDS dataset, extended the station data past 1997. The global mean time series from the (CARDS-based) unadjusted LKS and IGRA differ most notably before 1965.
Combining LKS and IGRA
Because of the careful scrutiny used by the LKS team to create the adjusted LKS data, LKS is likely to be more reliable than a dataset derived by applying the first difference method to the IGRA data before 1995. The RATPAC team therefore uses LKS instead of IGRA before 1995 to reap the substantial benefits of the LKS homogeneity adjustments.
However, because of the differences between the datasets before 1965, view RATPAC data from that period cautiously.”

Sorry, but this elaborate adjusting and combining procedure does not lend much credence to the long-term (climate-gauging) accuracy of the RATPAC-A series …

O R
Reply to  Nick Stokes
January 15, 2017 1:44 am

Kristian, nice try, but worth nothing, because RATPAC agrees with other radiosonde datasets before and after 1996. You may even compare UAH with unadjusted radiosonde data, but the “AMSU break” is still there. Further, the AMSU era started in the summer of 1998, but there is an MSU/AMSU overlap until summer 2001 (in UAH 6). The year 2000 is in the middle of this transition, hence a proper start point for the “AMSU era”. Starting this era in 1998, 2000 or 2001 doesn’t matter; the trend break in UAH 6 vs radiosondes is similar.

Reply to  Nick Stokes
January 15, 2017 3:12 pm

O R said, January 15, 2017 at 1:44 am:

You may even compare UAH with unadjusted radiosonde data, but the “AMSU break” is still there. Further, the AMSU era started in the summer of 1998, but there is an MSU/AMSU overlap until summer 2001 (in UAH 6). The year 2000 is in the middle of this transition, hence a proper start point for the “AMSU era”. Starting this era in 1998, 2000 or 2001 doesn’t matter; the trend break in UAH 6 vs radiosondes is similar.

There’s no “AMSU break”, Olof. The ‘breaks’ occur earlier and later: [image]

(…) Ratpac agree with other radiosonde datasets before and after 1996.

Yes, it’s adjusted to agree, Olof. However, it very clearly does NOT agree with ANY satellite OR surface dataset. And THAT’S where the problem lies. The radiosonde datasets do NOT represent “Troposphere Truth” as you seem to think. They’re a jumbled MESS. Their trend is way too low from 1979 to 2001, and way too high from 1996 to 2016. [images]

O R
Reply to  Nick Stokes
January 16, 2017 6:59 am

Kristian, you obviously have a lot to learn.
Avoid all fuss about things that are not statistically significant. There is no significant difference between radiosonde, surface or satellite data in 1979-1999.
Surface and troposphere data are not the same thing. Satellite and radiosonde data can be comparable if you check possible effects of weighting functions, and geographical subsampling.
The period of 1979 to 1999 is problematic due to two major volcanic eruptions. You can’t expect that surface and troposphere agree during this period.
Radiosonde data are far more reliable after year 2000 than before that. There has been technical development, metadata is far better, and adjustments due to inhomogeneities are small.
Radiosonde datasets rely on millions of factory-calibrated single-use instruments that are sent aloft. Satellite data rely on a handful of instruments, where you can’t reliably check the calibration once they are launched. Actually, the diverging trends in the AMSU era rely on one single type of instrument used for the AMSU channel 5, which differs from that of the other AMSU channels.
The trend in AMSU 5 should be right between those of AMSU 4 and 6, but it is actually near half that rate. Strange, isn’t it? The coolest sensor in the troposphere that doesn’t agree with anything else ;-)
ftp://ftp.star.nesdis.noaa.gov/pub/smcd/emb/mscat/data/AMSU/AMSU_v2.0/AMSUA_only_Monthly_Layer_Temperature/AMSU_L3_Inter-Bias_vs_Merged_Trend_Ch4-8.jpg
If you want to compare data do it by difference charts… If we look at the following chart again:
http://postmyimage.com/img2/792_UAHRatpacvalidation2.png
The blue graph is the fairest comparison; here are my fifty cents:
The drop doesn’t start in 1996; it starts in 1998 when the AMSU is introduced.
The drop accelerates in 2001 when the last MSU disappears and the rogue NOAA-15 runs alone. UAH drops like a rock until the non-drifting Aqua and other satellites chime in near 2005.
When Aqua is discontinued after 2010, the drop continues….

Reply to  Nick Stokes
January 17, 2017 4:52 pm

O R said, January 16, 2017 at 6:59 am:

Kristian, you have obviously a lot to learn.

Ah, the condescending route. The one taken by a man who’s got nothing of factual, objective (actually scientific) value in his argumentation.

There is no significant difference between radiosonde, surface or satellite data in 1979-1999.
Surface and troposphere data are not the same thing. Satellite and radiosonde data can be comparable if you check possible effects of weighting functions, and geographical subsampling.
The period of 1979 to1999 is problematic due to two major volcanic eruptions. You cant expect that surface and troposphere agree during this period.

LOL! Stop it with your apologetic nonsense. UAH was forced to increase its pre-2001 trend and did. The UAH team is actually fairly unique in its adjustment history in that its record includes a roughly even distribution of UP and DOWN adjustments over time. Normally such a distribution is what would be expected, but it is something we hardly see anywhere in “Climate Science” today; here, rather, the adjustments of relevant observed climate parameters quite consistently have a strongly lopsided tendency (more like 9:1), steadily moving the long-term trend of the parameter in one particular direction, and pretty much always in the direction of MODEL predictions/expectations. Which is naturally highly suspicious in and of itself …
UAH used to agree very well with the radiosondes up until 2001. Then an obvious need for an upward correction was found and adjusted for. By the UAH team. The people compiling the radiosonde datasets, however, apparently never got the memo, and thus never made the same necessary adjustments. And so, RATPAC, as our case in point, is still in a slump post 1995, making its 1979-2001 trend way too low, still to this day: [image]
The situation post 1995 is even much worse, though. You say:

Radiosonde data are far more reliable after year 2000 than before that, There has been technical development, metadata is far better, and adjustments due to inhomogeneities are small.

You just don’t get it, do you Olof? This isn’t about the actual radiosonde data from the individual stations. It’s about how it’s ASSEMBLED (and adjusted) into what is called a climate quality dataset.
The post 1995 progression of the RATPAC data is simply hopelessly out of touch with reality. It doesn’t fit at all with any other relevant physical parameter: the surface data, the satellite data, or the OLR at the ToA data from CERES.
And the reason why is how the original data – riddled with inhomogeneities – is adjusted and compiled into a global, long-term series. It is simply a subjectively constructed dataset. Anyone can see this.
It’s as if you think the people doing the adjustments to obtain the final radiosonde datasets somehow magically have the “Troposphere Truth” hardwired in them and so simply cannot be wrong, even though their end product disagrees strongly with both the satellite AND the surface datasets, while the people making the satellite datasets are pure dimwits who don’t know how to correct for anything the satellites and their instruments do (or aren’t even aware that such a need exists), and therefore somehow don’t …
UAHv6 gl vs. gl OLR at the ToA (CERES EBAF, directly based on CERES SSF1deg and SYN1deg data): [image]
This is such a tight fit it is hard to claim it a coincidence. Why? Because we know the physical relationship between tropospheric temperatures and OLR at the ToA: the latter is (principally) a direct radiative effect of the former. We see the short-term cloud/humidity perturbations to this relationship during strong ENSO events, but beside these, the two parameters follow each other basically in lockstep over time. We see the same thing pre 2000, between UAH (and RSS) and (this time) ERBS Ed3_Rev1: [image]

Reply to  Nick Stokes
January 17, 2017 4:56 pm

[images]
This tight correlation is no coincidence, Olof.
Meanwhile, the RATPAC-A trend Jan’79-Jun’01 is +0.085 K/decade, the GISTEMP LOTI trend during that same interval is +0.135 K/decade, almost 60% higher, but all that spectacularly reverses when the RATPAC-A dataset suddenly sports a +0.242 K/decade trend from Jan’99 to Dec’15 (almost three times higher than the 79-01 trend!), a warming rate 45% higher than the GISTEMP LOTI trend during the same time, rising at a mere +0.166 K/decade. (And we know the GISTEMP data itself is already rising way too fast post 1997…)
Why anyone, based on these un-physical divergences, would trust the RATPAC-A dataset is beyond me. Self-inflicted ideological, dogmatic blindness is the only explanation as far as I can see.
Once again, Olof: There is no AMSU break. The breaks are before and after 1998-2000. It’s right here, right in front of you (graph from Tamino). All you need to do is look: [image]
You come off as a clown trying to argue against (and/or ignore, and/or deny) this plot, and the other plots that I’ve shown you.

Dodgy Geezer
January 12, 2017 9:21 am

We should not, repeat NOT, be discussing ‘warming’ as if any warming proved AGW!
It is not enough for there to be warming – there has to be warming in compliance with the model predictions. In fact, there has to be ‘unusual’ warming – and any warming which matches the warming rate seen before 1950 should be presumed to be natural…

AGW is not Science
Reply to  Dodgy Geezer
January 12, 2017 12:24 pm

Agreed 100%. I’m sick of the unspoken presumption that “Any warming = caused by CO2.” BS! There is still no empirical evidence that CO2 causes temperature to rise. And the HYPOTHETICAL CO2 effect on temperature requires that “all other things” remain “equal,” which of course is not reality.
According to the Earth’s climate history, there is either NO relationship between CO2 and temperature (geologic time scales, hundreds of millions of years) or, on shorter time scales where a correlation DOES exist (tens of thousands of years, or less), CO2 FOLLOWS temperature. There are also striking examples of REVERSE CORRELATION which simply would not exist if CO2 had any significant effect on temperature. It therefore holds that CO2 most certainly does NOT “drive” temperature. Not unless PROVEN otherwise. The Null Hypothesis (climate change is NATURAL) stands.
Temperature trend flat, up, or down, it doesn’t mean a damn thing – CO2 is NOT the driver, unless someone can PROVE otherwise.

Darrell Demick
Reply to  AGW is not Science
January 12, 2017 1:12 pm

Very well said and accurately stated, AGW. My compliments!

Chris Schoneveld
Reply to  Dodgy Geezer
January 12, 2017 12:42 pm

+10

Richard M
Reply to  Dodgy Geezer
January 12, 2017 4:37 pm

It’s not the warming we are looking for; it is the lack of warming. In science this is called falsification. Good experiments look to falsify theories, and if they fail, then it strengthens the theory (still not proof). However, all it takes is one failed experiment and the theory is toast.
AGW has already been falsified according to Santer et al 2012 at 95% confidence. All we are doing now is looking to up that confidence level.

Nick Stokes
Reply to  Richard M
January 12, 2017 5:58 pm

“AGW has already been falsified according to Santer et al 2012 at 95% confidence.”
Nonsense.

Richard M
Reply to  Richard M
January 12, 2017 8:43 pm

Sure Nick, according to you anything is nonsense that doesn’t support AGW. However, the facts differ from your opinion. Santer found 95% of climate model runs show a warming trend within 17 years. We’re now over 20 years without any statistically significant warming. We are way past the 17 years specified by Santer and probably 99% of climate model outputs. And, it is only going to get worse as La Nina conditions could very well hang around for a couple of years.
Your denial is noted.

Nick Stokes
Reply to  Richard M
January 12, 2017 9:48 pm

Richard M
“However, the facts differ from your opinion. Santer found 95% of climate model runs show a warming trend within 17 years.”
Quote please. You are garbling the facts. What he said in para 30 was quite the contrary. 95% of control (unforced) runs were less than observed UAH/RSS (my bold):

On timescales longer than 17 years, the average trends in RSS and UAH near-global TLT data consistently exceed 95% of the unforced trends in the CMIP3 control runs (Figure 6d), clearly indicating that the observed multi-decadal warming of the lower troposphere is too large to be explained by model estimates of natural internal variability.

And he didn’t speak anywhere of requiring “statistically significant” trends. The idea that you can deduce something useful from a failure of statistical significance is a local WUWT fantasy.

Richard M
Reply to  Richard M
January 13, 2017 12:20 pm

You’re playing a game, Nick. I was basing my statement on the abstract.
“Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”
Maybe I assumed too much that this would be based on statistical significance. This clearly has nothing to do with the control runs you mentioned. Of course, if you look past 2011 I suspect most of the observational trends up to the El Nino would be equal to or less than the control runs.
So, the question is what do these words in the abstract mean? If it means what it seems to say, then any trend over 17 years that shows no warming implies there are no human effects. That sounds an awful lot like falsification to me.

Nick Stokes
Reply to  Richard M
January 13, 2017 12:51 pm

“Your playing a game, Nick. I was basing my statement on the abstract.”
That statement says nothing about 95% of runs show…, as you quoted. The other does. But there is no substitute for proper referencing and quoting. Then readers know what you are referring to.
The statement in the abstract is based on the quote I gave. It’s where the 17 years comes from. And it simply says, it’s no use looking for human effects in records of less than 17 years. What it does not say:
1. You’ll find effects in records of 17 years + 1 day
2. You can try any test you can dream up after 17 years, and if it fails, AGW is falsified.
It certainly doesn’t say that 17 years without statistically significant warming would disprove something. That is a local nonsense. Lack of SS never proved anything. I gave below the example of UAH V6 from 1979 to 1998. 19 years of no SS warming. But the trend was 0.982 C/Cen. Not too far below what was expected. And you can get quite long periods showing trends exactly as expected, but not SS. That is obviously not disproving what was expected.

Richard M
Reply to  Richard M
January 13, 2017 6:58 pm

Nick, it is climate models that are setting the limit of 17 years. Hence, trends of 17+ years are demonstrating the climate models do not correctly describe reality. It is this claim that is falsified. Now, that does not disprove AGW but it certainly places limits on it.

January 12, 2017 9:37 am

I say
I must be one of the 30% who say it is cooling
my long term result is in agreement with RSS /UAH
about +0.012K/ yr warming since 40 years ago
however, according to my results we are cooling at least -0.01K/yr since 2000

Nick Stokes
January 12, 2017 9:47 am

” we cannot be 95% certain that warming has in fact taken place since January 1998″
As I have said before on occasions, this is quite a wrong understanding of trend uncertainty. There is high confidence that the warming in fact took place. The uncertainty refers to the statistical modelling of weather variability. It says that there is at least a 5% chance that the weather could have turned out differently, with negative trend. But we now know how the weather turned out. The uncertainty is about the weather that might have been, not the weather that was.
The way to think about stated uncertainties is that they represent the range of results that could have been obtained if things had been done differently. And so the question is, which “things”. This concept is made explicit in the HADCRUT ensemble approach, where they do 100 repeated runs, looking at each stage in which an estimated number is used, and choosing other estimates from a distribution. Then the actual spread of results gives the uncertainty. Brohan et al 2006 lists some of the things that are varied.
The underlying concept is sampling error. Suppose you conduct a poll, asking 1000 people if they will vote for A or B. You find 52% for A. The uncertainty comes from, what if you had asked different people? For temperature, I’ll list three sources of error important in various ways:
1. Measurement error. This is what many people think uncertainties refer to, but it usually isn’t. Measurement errors become insignificant because of the huge number of data that are averaged. Measurement error estimates what could happen if you had used different observers or instruments to make the same observation, same time, same place.
2. Location uncertainty. This is dominant for global annual and monthly averages. You measured in sampled locations – what if the sample changed? What if you had measured in different places around the earth? Same time, different places.
3. Trend uncertainty, what we are talking about above. You get a trend from a statistical model, in which the residuals are assumed to come from a random distribution, representing unpredictable aspects (weather). The trend uncertainty is calculated on the basis of, what if you sampled differently from that distribution? Had different weather? This is important for deciding if your trend is something that might happen again in the future. If it is a rare event, maybe. But it is not a test of whether it really happened. We know how the weather turned out.
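A rough sketch of how such a trend CI is computed, assuming synthetic monthly anomalies and plain OLS with white-noise residuals (illustrative only, not the code behind any particular calculator; real ones also correct for autocorrelation, as discussed further down):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(228) / 12.0                          # 19 years of months
anoms = 0.005 * t + rng.normal(0, 0.15, t.size)    # 0.5 C/century + noise

# OLS intercept and slope
X = np.column_stack([np.ones_like(t), t])
beta, *_ = np.linalg.lstsq(X, anoms, rcond=None)
resid = anoms - X @ beta
s2 = resid @ resid / (t.size - 2)                  # residual variance
se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))     # slope standard error

# The ~95% CI is the spread of slopes that other draws of the "weather
# noise" would have produced -- the weather that might have been.
lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
print(f"trend {beta[1]*100:.2f} C/century, CI [{lo*100:.2f}, {hi*100:.2f}]")
```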

Reply to  Nick Stokes
January 12, 2017 10:00 am

Nick
I would like you to respond to my comment here
https://wattsupwiththat.com/2017/01/08/ocean-acifidication-failure-uea-prof-complains-government-failed-to-shut-down-press-freedom/#comment-2395424
isn’t it amazing that it is the sh.t in the world that stimulates life?
enjoy!

Reply to  Nick Stokes
January 12, 2017 10:12 am

Thank you Nick!
I am sure you are aware of the interview with Phil Jones in 2010. Here is one question and answer:
“B – Do you agree that from 1995 to the present there has been no statistically-significant global warming?
Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.”
To put these numbers into perspective, 0.12 C/decade is almost 3 times the 0.0450 C/decade that RSS showed from 1998 to 2016. The periods are of comparable length: 19 years versus 15 years.
My question to you is this:
If Phil Jones were asked the following, what would you think he would say:
Do you agree that from 1998 to 2016 there has been no statistically-significant global warming using RSS?

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 10:21 am

Werner,
I don’t see the point of that response at all. Jones is just saying that in a certain circumstance you could calculate a significance, and it was close to 95%. That can happen. I doubt if Jones would comment on RSS V3.3 (particularly given their “use with caution” advisory), but if he did, he would probably cite the same sorts of results that I calculate.
I think the point Jones was making was that the trend, although marginally significant, happened, and it wasn’t small. You just can’t (yet) rule out the possibility that it could have been a quirk of weather. A little more time needed for that.

Reply to  Werner Brozek
January 12, 2017 10:49 am

I doubt if Jones would comment on RSS V3.3 (particularly given their “use with caution” advisory)

Perhaps I should have said UAH6.0beta5. But WFT has not updated it since October at its new URL.
But if asked about the correctness of my title, I believe he would agree that the warming on UAH6.0beta5 is not statistically significant from 1998 to 2016 at the 95% level.
Another way of looking at the 95% certainty is to use the uncertainty in the yearly averages. Using Dr. Spencer’s 0.1, if we assume 1998, 1999 and 2000 were 0.1 C higher than given, and that 2016, 2015 and 2014 were 0.1 C lower than given, the slope from 1998 to 2016 would be negative. True?
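A quick check of that thought experiment (a sketch; the shift changes an OLS slope by a fixed amount that does not depend on the anomaly values themselves):

```python
import numpy as np

years = np.arange(1998, 2017, dtype=float)   # 19 yearly anomalies
delta = np.zeros(19)
delta[:3], delta[-3:] = +0.1, -0.1           # 1998-2000 up, 2014-2016 down

# An OLS slope responds linearly, so the change from the shift alone is:
x = years - years.mean()
slope_change = (x @ delta) / (x @ x)           # about -0.0084 C/yr
print(f"{slope_change * 100:+.2f} C/century")  # about -0.84 C/century
```

Since the 1998-2016 trends under discussion are well under 1 C/century in magnitude (e.g. the 0.0450 C/decade RSS figure cited above), a shift of about -0.84 C/century is indeed enough to turn the slope negative.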

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 11:54 am

“Using Dr. Spencer’s 0.1, if we assume 1998, 1999 and 2000 were 0.1 C higher than given, and that 2016, 2015 and 2014 were 0.1 C lower than given, the slope from 1998 to 2016 would be negative. True?”
I guess so. But it’s a complete misuse of the notion of CI. You can maybe say that there is a 5% chance that 1998 was 0.1 higher than given. But the chance that both 1998 and 1999 were 0.1 C higher is much smaller – as independent events, something like 0.25%. The chance of that whole scenario is vanishingly small.
I’ve never understood why Jones’ 1995 quote gets repeated. It’s just a simple statement of calculation, like you or I do routinely. It doesn’t mean much, and I doubt Jones would have raised it himself, but since he was asked…
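As a sketch of that multiplication, taking the 5% per-year figure at face value and assuming independence:

```python
p = 0.05              # assumed chance a single year is 0.1 C off, one way
print(f"{p**2:.2%}")  # two years together: 0.25%
print(f"{p**6:.1e}")  # all six shifts in Werner's scenario: 1.6e-08
```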

Reply to  Werner Brozek
January 12, 2017 12:39 pm

as independent events, something like 0.25%

Good point!

I’ve never understood why Jones’ 1995 quote gets repeated.

Having long periods of statistically insignificant warming, or even a pause, suggests that global warming is not happening at catastrophic levels that need to be stopped at all costs.

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 2:31 pm

“Having long periods of statistically insignificant warming, or even a pause, suggests that global warming is not happening…”
No, it doesn’t, as I keep saying to no apparent effect. If you want to know about warming, look at the trend that happened. “Statistically insignificant” means you can’t be sure through the fog of noise that it will continue. It doesn’t suggest it isn’t happening. Not having glasses can make you uncertain of what is happening; it doesn’t suggest that there is nothing.

Reply to  Werner Brozek
January 12, 2017 3:02 pm

Werner Brozek January 12, 2017 at 10:12 am
Thank you Nick!
I am sure you are aware of the interview with Phil Jones in 2010. Here is one question and answer:
“B – Do you agree that from 1995 to the present there has been no statistically-significant global warming
Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.”

Yes, the loaded question was chosen carefully to produce the answer ‘Yes’; if 1994 had been chosen, the positive trend was significant. If you choose a short enough period you can guarantee non-significance, and that’s what was done. Jones pointed this out in his answer.
In a BBC interview a year later Jones commented that the HadCRUT warming trend since 1995 was now statistically significant:
“Basically what’s changed is one more year [of data]. That period 1995-2009 was just 15 years – and because of the uncertainty in estimating trends over short periods, an extra year has made that trend significant at the 95% level which is the traditional threshold that statisticians have used for many years.
“It just shows the difficulty of achieving significance with a short time series, and that’s why longer series – 20 or 30 years – would be a much better way of estimating trends and getting significance on a consistent basis.”

Reply to  Werner Brozek
January 12, 2017 3:44 pm

“Statistically insignificant” means you can’t be sure through the fog of noise that it will continue.

And if we are not sure it will continue, should we spend billions to stop it? But even if we know it will continue, can we know it will be catastrophic before we run out of oil?

Hivemind
Reply to  Werner Brozek
January 12, 2017 3:45 pm

Nick, you’re dead wrong on this one: “Statistically insignificant” means you can’t be sure through the fog of noise that it will continue. It doesn’t suggest it isn’t happening.”
It really does mean that you can’t be sure that it is happening.
A second point: statistically significant simply means you can see the signal through the noise. It doesn’t mean that what you see means something. Warmer is good. I used to live near Chicago. I can assure you that colder is bad. If there were a statistically significant cooling signal, that really would mean something.

Nick Stokes
Reply to  Werner Brozek
January 12, 2017 6:05 pm

“And if we are not sure it will continue, should we spend billions to stop it?”
Well, we are not sure that we’ll be attacked, but we spend billions on defence forces. But in this case there are good physical reasons to expect warming. Observation shows warming as expected, but maybe you can’t quite rule out that it is due to weather variation (provided you choose the wobbliest dataset). That doesn’t give reason to doubt the physical reasons.

Bellman
Reply to  Werner Brozek
January 12, 2017 6:12 pm

“I’ve never understood why Jones’ 1995 quote gets repeated. It’s just a simple statement of calculation, like you or I do routinely. It doesn’t mean much, and I doubt Jones would have raised it himself, but since he was asked…”
I suspect the question was very leading. The start point of 1995 was chosen to be the earliest point where the warming was statistically insignificant. But if you choose a period for that reason the significance test is invalid, and certainly misleading.
In any event, the main purpose of the question was to get the headline “scientist admits no significant warming in last 15 years”, knowing that most people won’t understand the difference between statistical significance and the everyday meaning of the word.
By the way, all data sets apart from UAH 6 and RSS 3.3 now show significant warming since 1995. That’s using the 2 sigma values from
https://skepticalscience.com/trend.php

Reply to  Werner Brozek
January 12, 2017 7:22 pm

By the way, all data sets apart from UAH 6 and RSS 3.3 now show significant warming since 1995. That’s using the 2 sigma values from
https://skepticalscience.com/trend.php

RSS was statistically significant from about November 1992 which is pretty close to Nick Stokes’ July 1994. But I only saw UAH5.6 and not UAH6. Or did I miss it somehow? And UAH5.6 did show significant warming since 1995. You need to see Nick’s to see UAH6.0beta5.

Bellman
Reply to  Werner Brozek
January 13, 2017 4:32 am

“But I only saw UAH5.6 and not UAH6. Or did I miss it somehow? And UAH5.6 did show significant warming since 1995. You need to see Nick’s to see UAH6.0beta5.”
I assumed UAH 6 is not significant since 1995 because RSS 3.3 is not significant and UAH 6 is very similar to RSS 3.3. I used the Skeptical Science trend calculator, rather than Nick Stokes’, simply because it has larger confidence intervals and so requires a higher standard of significance.

Reply to  Werner Brozek
January 13, 2017 5:02 am

I used the Skeptical Science trend calculator, rather than Nick Stokes’, simply because it has larger confidence intervals and so requires a higher standard of significance.

There may be other slight differences, but Nick uses 95% and Skeptical Science uses 2 sigma, which is 95.4%.
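Both thresholds can be checked against the normal CDF (a minimal sketch):

```python
from math import erf, sqrt

# Two-sided coverage of a +/- k-sigma band: P(|Z| < k) = erf(k / sqrt(2))
for k in (1.96, 2.0):
    print(k, f"{erf(k / sqrt(2)):.4f}")   # 0.9500 and 0.9545
```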

Bob boder
Reply to  Werner Brozek
January 13, 2017 5:02 am

Nick
We spend billions on defense so people don’t attack us, not because we think someone might attack; we have evidence that they will if they can: Pearl Harbor, the War of 1812, 9/11.
Statistically significant means exactly what it states. You’re trying to change the meaning with your typical bunch of BS, but the truth is right in the words themselves.
You know that there is no C in AGW, but you blather on because your wallet tells you to.

Toneb
Reply to  Werner Brozek
January 13, 2017 10:59 am

““And if we are not sure it will continue, should we spend billions to stop it?”
Well, we are not sure that we’ll be attacked, but we spend billions on defence forces.”
We are not sure that our homes will be burned down, or flooded, or burgled.
Yet we get home insurance.
Well the sensible do.
Now why would that be?
Because the result would be far worse were we not.
No matter how small the risk.

Bob boder
Reply to  Werner Brozek
January 13, 2017 12:06 pm

Tonedeaf
Do you have anti space alien invasion insurance? Oh wait maybe I am asking the wrong guy.

Bellman
Reply to  Werner Brozek
January 15, 2017 5:50 am

“There may be other slight differences, but Nick uses 95% and Skeptical Science uses 2 sigma which is 95.4%.”
I don’t think that’s the main reason for the difference between the two. For example taking RSS 3.3 from 1979, Skeptical Science gives a 2 sigma value of 1.70 C/century, so sigma = 0.85.
For the same period Nick Stokes gives 0.633 C/century, with a 95% confidence interval of -0.48 to 1.746, which would imply sigma = 0.57.

Reply to  Werner Brozek
January 15, 2017 8:03 am

For the same period Nick Stokes gives 0.633 C/century

They are not that far off.
Temperature Anomaly trend
Jan 1979 to Dec 2016 
Rate: 1.350°C/Century;
CI from 0.953 to 1.747;
t-statistic 6.667;
Temp range -0.134°C to 0.378°C

Nick Stokes
Reply to  Werner Brozek
January 15, 2017 11:20 am

“I don’t think that’s the main reason for the difference between the two.”
The 95.4% difference is minor. The main reason is that, following Tamino, SkS uses an ARMA(1,1) model for autocorrelation. I use the more conventional AR(1), which generally gives narrower CIs. I look at the pros and cons here.
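A sketch of the standard AR(1) correction referred to here (illustrative, not the code behind either calculator): the lag-1 autocorrelation r of the residuals shrinks the effective sample size and inflates the naive trend standard error by roughly sqrt((1+r)/(1-r)); a model estimating more persistence, such as ARMA(1,1), widens the interval further.

```python
import numpy as np

def ar1_inflation(resid):
    """Factor by which lag-1 autocorrelation widens a naive trend CI."""
    r = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return np.sqrt((1 + r) / (1 - r)), r

# Synthetic AR(1) residuals with r ~ 0.6, typical of monthly anomalies
rng = np.random.default_rng(1)
e = rng.normal(size=240)
resid = np.empty_like(e)
resid[0] = e[0]
for i in range(1, e.size):
    resid[i] = 0.6 * resid[i - 1] + e[i]

factor, r = ar1_inflation(resid)
print(f"r1 = {r:.2f}, naive CI widens by about x{factor:.2f}")  # ~x2
```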

Bartemis
Reply to  Nick Stokes
January 12, 2017 11:21 am

4. Autocorrelation – if your statistical dependencies are wrong, your brackets are probably wrong, too. And, we know that these data are not derived from an underlying process that is a trend extending to infinity with i.i.d. measurement noise on top. How far removed from that model we are over the given timeline has not been resolved.

Caligula Jones
January 12, 2017 9:48 am

Time for a drive-by Griff cherry-picking in 3…2…1…

Walter Sobchak
January 12, 2017 9:53 am

“Does it seem odd that only GISS will probably set a statistically significant record in 2016”
No, because figures don’t lie; liars figure.

Resourceguy
January 12, 2017 10:25 am

Yes, it does seem odd. At what point do we start defining the outlier?

Martin A
January 12, 2017 10:37 am

I have never understood how statistical analysis can be applied where you don’t have a statistical model of what is happening.

Toneb
January 12, 2017 10:46 am

RSS v3.3 TLT is no longer supported.
RSS v4.0 TTT has taken over the “surface” record. [image]

Dave Fair
Reply to  Toneb
January 12, 2017 12:54 pm

Wadda yuck!
1980 through 1990’s, nothing happened.
2000 through 2015, nothing happened.
CO2 caused it all!

Toneb
Reply to  Dave Fair
January 13, 2017 3:36 am

No, natural variation causes the, err – variability.
CO2 causes the long-term trend.
Do try to fathom that the GMST will never rise monotonically.
Whatever the driver.
Because of ….
Natural variability in climate.
Chief among them the PDO/ENSO.

Dave Fair
Reply to  Toneb
January 13, 2017 5:44 pm

20 years, nothing. Jump. 15 years, nothing. Jump.
What long term trend?

Bob boder
Reply to  Dave Fair
January 13, 2017 5:06 am

Tonedeaf
Says you, does that hold for the reset of geological history. Natural variation is not a zero line it is variation.

Bob boder
Reply to  Dave Fair
January 13, 2017 5:07 am

Rest not reset
Sorry

Bartemis
Reply to  Dave Fair
January 13, 2017 8:38 am

If “natural variability” is large enough to cancel out the trend, it was large enough to cause the trend in the first place. There is no evidence that CO2 has anything to do with it, only rationalization based on extrapolation of an effect that holds in a controlled laboratory setting, but for which there is no assurance whatsoever that it holds for the complex, feedback-regulated system of the Earth’s climate.
You cannot just assume CO2 has an impact. You must demonstrate it, uniquely and convincingly.

Reply to  Dave Fair
January 13, 2017 6:50 pm

20 years, nothing. Jump. 15 years, nothing. Jump.
What long term trend?

That is one way of looking at it! And it is certainly hard to blame a steadily increasing CO2 for it.

ferdberple
January 12, 2017 10:48 am

Werner Brozek
GISS may well not be showing any significant change in temperature, because temperature is not normally distributed. Your 95% significance threshold is based on the assumption that temperature is normally distributed, which has narrow tails.
Is this realistic? Isn’t it more likely that temperature is not normally distributed? Rather, that temperature follows a fractal distribution, also known as 1/f noise? For example: atmospheric flows, fluid flows, population growth, stock market indices, heart beat patterns, etc. (Mandelbrot, 1975).
This distinction is important because fractal distributions have much fatter tails than the normal distribution. For example: for normally distributed data, the probability of a measurement lying more than 10 sigma from the mean is on the order of 10⁻²³. However, we observe 10 sigma events every few months in stock prices.
http://users.math.yale.edu/public_html/People/frame/Fractals/RandFrac/StandardDeviation/StandardDeviation.html
https://arxiv.org/ftp/arxiv/papers/1002/1002.3230.pdf
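The thinness of Gaussian tails is easy to verify (a minimal sketch):

```python
from math import erfc, sqrt

# For a normal distribution, P(|Z| > k sigma) = erfc(k / sqrt(2))
for k in (3, 5, 10):
    print(k, f"{erfc(k / sqrt(2)):.2e}")
# 3: 2.70e-03, 5: 5.73e-07, 10: 1.52e-23 -- so routinely observing
# 10-sigma moves, as in stock prices, rules out Gaussian tails.
```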

Reply to  ferdberple
January 12, 2017 10:59 am

Your underlying assumption of 95% significance

My understanding is that climate scientists came up with the 95% to determine what is statistically significant and what is not. I am just applying their standards.

Reply to  Werner Brozek
January 12, 2017 6:57 pm

That’s a scientific standard, nothing specific to climate science.

Bartemis
Reply to  Werner Brozek
January 13, 2017 8:54 am

“That’s a scientific standard, nothing specific to climate science.”
It is a very specific scientific standard, under the requirement that the multi-variate distribution of the time series is that of identically distributed Gaussian variates with independent samples.
Those requirements hold, at least approximately, for a great many natural and artificial phenomena due to A) the central limit theorem, and B) the tendency of systems to vary over a wide bandwidth.
But, they do not hold for all. They particularly do not hold for climate data, which has long term and cyclical correlations. These claims of “significance” do not tell you anything you can’t see better with your own eyes just looking at the data.

ferdberple
January 12, 2017 10:50 am

Recent research has shown that some aspects of climate variability are best described by a “long memory” or “power-law” model. Such a model fits a temporal spectrum to a single power-law function, which thereby accumulates more power at lower frequencies than an AR1 fit. Power-law behavior has been observed in globally and hemispherically averaged surface air temperature (Bloomfield 1992; Gil-Alana 2005), station surface air temperature (Pelletier 1997), geopotential height at 500 hPa (Tsonis et al. 1999), temperature paleoclimate proxies (Pelletier 1997; Huybers and Curry 2006), and many other studies (Vyushin and Kushner, 2009).
https://arxiv.org/ftp/arxiv/papers/1002/1002.3230.pdf
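A sketch of the spectral contrast the quoted passage describes, with synthetic series (the exponent and AR coefficient are illustrative choices, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096

# AR(1) with r = 0.6
e = rng.normal(size=n)
ar1 = np.empty(n)
ar1[0] = e[0]
for i in range(1, n):
    ar1[i] = 0.6 * ar1[i - 1] + e[i]

# Power-law ("long memory") noise via spectral shaping: amplitude ~ f^(-b/2)
b = 0.8
f = np.fft.rfftfreq(n)[1:]
amp = (rng.normal(size=f.size) + 1j * rng.normal(size=f.size)) * f**(-b / 2)
power_law = np.fft.irfft(np.concatenate(([0], amp)), n)

# The power-law series piles up far more variance at low frequencies
for name, series in (("AR(1)", ar1), ("power-law", power_law)):
    p = np.abs(np.fft.rfft(series))**2
    print(name, "low/high power ratio:", round(p[1:9].mean() / p[-8:].mean()))
```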

Bartemis
Reply to  ferdberple
January 13, 2017 8:56 am

The model should be at least AR(2), to model the ~60-year quasi-cycle in the data.
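A sketch of why an AR(2) can carry a quasi-cycle while an AR(1) cannot: complex characteristic roots give a damped oscillation with pseudo-period 2*pi/theta (coefficients below are illustrative, chosen for a ~60-step cycle):

```python
import numpy as np

m, T = 0.95, 60.0                       # root modulus and target period
theta = 2 * np.pi / T
a1, a2 = 2 * m * np.cos(theta), -m**2   # ~1.89 and ~-0.90

rng = np.random.default_rng(3)
x = np.zeros(600)
for t in range(2, x.size):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
# x now carries a damped ~60-step quasi-cycle; an AR(1) fit to it would
# misstate the persistence and hence the trend uncertainty.
```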

Schrodinger's Cat
January 12, 2017 10:56 am

It seems to me that arguing over whether the warming (or lack of it) is statistically significant or not is like two bald men fighting over a comb. One thing is clear, the global temperature trend since 1998 is not consistent with the warming guaranteed by climatologists who claim that the climate is controlled by carbon dioxide.
The debate should be about the credibility of AGW, not whether the temperature for December was a hundredth of a degree this way or that.
Let’s face it, it matters not whether the pause continues or whether we had slight warming, maybe even less than the long term trend. There is no AGW signal. That should be the conclusion at this juncture.

Reply to  Schrodinger's Cat
January 14, 2017 5:43 pm

+1

Nick Stokes
January 12, 2017 11:03 am

“Does it seem odd that only GISS will probably set a statistically significant record in 2016?”
A more interesting fact is that almost all indices will set records. Land, sea, global surface, troposphere. But the situation with GISS is that it had a relatively small rise in 2015. NOAA and HADCRUT rose much more. It’s as if they responded earlier to El Nino. All three rose by about the same amount in total since 2014.

Bartemis
Reply to  Nick Stokes
January 12, 2017 11:23 am

“A more interesting fact is that almost all indices will set records.”
Not really. There was a big El Nino. Meh.

Toneb
Reply to  Bartemis
January 12, 2017 12:15 pm

There was a bigger one in ’98 but this one peaked at ~ 0.4C above that.

Bartemis
Reply to  Bartemis
January 12, 2017 1:20 pm

Noise. Signal variability. Meh. It was also a much narrower spike. You are just trying to convince yourself of something that is not in evidence.

Richard M
Reply to  Bartemis
January 12, 2017 4:45 pm

Sorry ToneB, but the 1998 El Nino did not occur at the peak of the AMO. All the differences between the two sets of data are easily explained by the AMO. I realize you aren’t interested in the truth. You are simply here to push your bias.

Reply to  Bartemis
January 12, 2017 6:56 pm

It is interesting to note that 2015 saw the largest jump in CO2, at 3.03 ppm. The previous record was 1998, at 2.93 ppm. Does this suggest that the 2015/16 El Nino was slightly stronger than the 1997/98 one? Last year came in at 2.77, which is the 3rd highest in the Mauna Loa record.

Reply to  Bartemis
January 12, 2017 7:43 pm

It is interesting to note that 2015 saw the largest jump in CO2, at 3.03 ppm. The previous record was 1998, at 2.93 ppm. Does this suggest that the 2015/16 El Nino was slightly stronger than the 1997/98 one?

I think it has more to do with China and India emitting much more CO2.

Reply to  Werner Brozek
January 12, 2017 8:10 pm

Nobody must be reading what I write. Over and over I’ve gone on about NOAA changing the record in 2005 from 2.52 to 3.10. That’s above even what you are saying, if it is truly the number you say it is. The CO2 levels, in addition to following temperature, also follow the solar cycle peak to peak. That’s why I’m upset about NOAA changing the data. Do you know how much more CO2 per year we are producing now as opposed to 1998? I thought the count this year (2016) would have been at least 4 or 5. If it’s 3.01, that’s unbelievable. Really unbelievable. That means the CO2 sinks are accelerating. Or the natural release of CO2 is diminishing, by a lot.

Toneb
Reply to  Bartemis
January 13, 2017 3:32 am

Bartemis:
Apples v apples – GMST’s are ~ 0.4C higher than 18 years ago.
And that you say …..
“You are just trying to convince yourself of something that is not in evidence.”
It’s even on a trop sat temp series.
That RSS is now disowned here, along with GISS of course.
Makes the above just typically “down the rabbit-hole”.

Toneb
Reply to  Bartemis
January 13, 2017 4:23 am

Richard:
“Sorry ToneB, but the 1998 El Nino did not occur at the peak of the AMO. All the differences between the two sets of data are easily explained by the AMO. I realize you aren’t interested in the truth. You are simply here to push your bias.”
No.
I am here to correct the bias of denizens.
To deny some of the ignorance on display.
Like, every, literally every, climate related science head post on here is introduced as “from the dept of …..”. Or “Claim …..”
The bias comes from the ideological standpoint, projected onto the science.
Just like it is impossible to get every forecast right, it is equally impossible to get every one wrong, my friend.
Here they are all wrong.
And you think WUWT has no bias?
The AMO is it?
Well, it wasn’t much different in ’16 than in ’98, though it was riding a curious spike in ’98. [image]
The PDO/ENSO has far more effect
Yet temps didn’t dip during the -ve phase through the “pause”…..
http://2.bp.blogspot.com/-Fkg790Q3b8o/VMRGN17t2oI/AAAAAAAAHwo/GTCVnmku248/s1600/GISTempPDO.gif

Bartemis
Reply to  Bartemis
January 13, 2017 6:30 am

rishrac @ January 12, 2017 at 8:10 pm
“That means the co2 sinks are accelerating. Or the natural release of co2 is diminishing, by a lot.”
Accelerating sink activity is a kluge used to explain the apparent change in the relationship between emissions and concentration. It is apparent, but it is not real, because emissions do not drive concentration. The rate of change of concentration is simply tracking temperature.
http://woodfortrees.org/plot/esrl-co2/derivative/mean:12/from:1979/plot/rss/offset:0.6/scale:0.22
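A sketch of the comparison behind that WFT link, where the function and variable names are hypothetical and the demo uses synthetic series (not real ESRL or RSS data), constructed so that the CO2 derivative tracks temperature:

```python
import numpy as np

def running_mean(x, k=12):
    return np.convolve(x, np.ones(k) / k, mode="valid")

def corr_dco2_temp(co2, temp):
    """Correlate smoothed month-on-month CO2 change with smoothed temp."""
    dco2 = np.diff(co2)                    # month-on-month change, ppm
    a, b = running_mean(dco2), running_mean(temp[1:])
    return np.corrcoef(a, b)[0, 1]

# Synthetic demo: build dCO2/dt to track temperature, then recover it
rng = np.random.default_rng(4)
temp = np.cumsum(rng.normal(0, 0.02, 480)) + rng.normal(0, 0.1, 480)
co2 = 340 + np.cumsum(0.1 + 0.22 * temp + rng.normal(0, 0.05, 480))
print(f"corr(smoothed dCO2/dt, smoothed T) = {corr_dco2_temp(co2, temp):.2f}")
```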
Toneb @ January 13, 2017 at 3:32 am
“It’s even on a trop sat temp series.
That RSS is now disowned here, along with GISS of course.”

Not this RSS:
http://woodfortrees.org/plot/rss
Perhaps you are talking about some “adjusted” product that is not carried on WFT yet.
“GMST’s are ~ 0.4C higher than 18 years ago.”
El Ninos are variable, not like a standard candle in astronomy. And, temperatures as of now are comparable to the whole of the past two decades, and falling fast.

Toneb
Reply to  Bartemis
January 13, 2017 10:34 am

Bartemis:
WFT interactive still has RSS v3.3 TLT
Of which RSS say ….
“The lower tropospheric (TLT) temperatures have not yet been updated at this time and remain V3.3. The V3.3 TLT data suffer from the same problems with the adjustment for drifting measurement times that led us to update the TMT dataset. V3.3 TLT data should be used with caution.”
Therefore, as I said above, this is the equivalent. [image]

Bartemis
Reply to  Bartemis
January 13, 2017 11:18 am

Well, isn’t that convenient. Too bad. RSS was doing a good job. I guess the pressure was too great.
It still does not change the fact that you are basing your conclusion on a needle sharp spike that would be virtually eliminated with a little more smoothing than is already done. And, the influence of El Nino hasn’t faded entirely yet. We will see what happens in the year to come.

Schrodinger's Cat
January 12, 2017 11:26 am

This seems to me like two bald men fighting over a comb. It matters not whether the pause continues or not or whether December’s temperature is a hundredth of a degree higher or lower. It probably does matter to the record keepers, but not to the climate change debate.
The relationship between temperature and carbon dioxide is broken at this juncture. That is the conclusion. The temperature may be flat or increasing slightly since 1998, but it is less of an increase than the long term warming and very much less than the warming promised by AGW. That is the conclusion.

Nick Stokes
Reply to  Schrodinger's Cat
January 12, 2017 11:38 am

” very much less than the warming promised by AGW”
No, it’s quite close. Here is the comparison of CMIP 5 averages with recent surface temperature observations:

Reply to  Nick Stokes
January 12, 2017 12:00 pm

Yeah, maybe it’s quite close … no, but:

Toneb
Reply to  Nick Stokes
January 12, 2017 12:25 pm

No.
Current observations are well within the FAR range of uncertainty.
And actually above SAR.
http://blogs.reading.ac.uk/climate-lab-book/files/2016/02/WGI_AR5_Fig1-4_UPDATE.jpg

Reg Nelson
Reply to  Nick Stokes
January 12, 2017 12:31 pm

CMIP 5 was published in 2014, so 90% of that graph is meaningless and incredibly misleading.

Schrodinger's Cat
Reply to  Nick Stokes
January 12, 2017 12:42 pm

Is this an ensemble of different models with different assumptions and different initialisation values?

Toneb
Reply to  Nick Stokes
January 12, 2017 12:58 pm

Eh?
I was responding to the graph posted by Robert.
It has a trend drawn from 1990.

Dave Fair
Reply to  Nick Stokes
January 12, 2017 1:01 pm

Does nobody understand that the climate of the 21st Century has not responded as predicted by IPCC climate models? They even had the actual numbers through 2005, and still got it wrong.
Quit arguing minutiae, and attack the liars where they live.

Toneb
Reply to  Nick Stokes
January 12, 2017 1:02 pm

Showing the range of estimates from FAR to AR4 – which was published in 2007.

Nick Stokes
Reply to  Nick Stokes
January 12, 2017 2:18 pm

Robert Kermodle,
“Yeah, maybe it’s quite close no, but”
That’s MofB’s trick graph where he rings in troposphere data instead of the surface measures they were predicting.

Bill Illis
Reply to  Nick Stokes
January 14, 2017 6:25 am

Just noting that we just had a super-El Nino which kind of makes it silly to run a 12 month running mean and compare that to global warming projections.
NCDC-NOAA was 0.32C in November on your chart above, well below all of the AR5 forecasts produced just 3 years ago (as in they had all the data up to 2013). NCDC-NOAA is also going lower in the months ahead to about 0.1C on your above chart.
Hadcrut4.5 was 0.29C in November on your chart and will also be going lower in the months ahead.
So, it took a lot of work to put that together, and it kept some people “believing” for a period of time right now. But what happens when you have to face the music again in the near future about the models being so far off (even ones produced just a few years ago which had all the historical data to work with)?
The solution cannot be to adjust the temperature data once again, because even that is not working. It’s still way below, even though they just added 0.1C to the numbers over the last year.
WHEN is it face the music time?

AGW is not Science
Reply to  Schrodinger's Cat
January 12, 2017 12:33 pm

Not to mention the elephant in the room – they have no evidence that CO2 is the CAUSE OF the minuscule amount of warming, regardless of how close or far apart the models are from reality. Push THAT discussion to its conclusion, and invariably it will end up at “they can’t otherwise account for it,” which is the classic AGW argument based on climate ignorance.

Bartemis
Reply to  AGW is not Science
January 13, 2017 9:04 am

Minuscule is right. We’re getting wrapped around the axle here over a 1 degC rise per century, when the ASHRAE standard for the temperature differential between your head and your feet is a whopping 3 degC!

Kenneth N. Shonk
Reply to  AGW is not Science
January 19, 2017 2:47 am

Yes, there is a mechanism to account for warming other than CO2, and that is “ozone depletion”, which can be anthropogenic or natural. Ozone depletion allows additional UVB to reach the lower troposphere and earth’s surface and produce additional warming of both. Since oxygen photodissociation and the creation of ozone in the stratosphere normally absorb most of the UVB, less absorption of UVB in the stratosphere produces stratospheric cooling (this has been observed) but surface and tropospheric warming.
From 1970 to 1998, ozone depletion was anthropogenic, due to man-made and released chlorofluorocarbon gases. This is well known. Ozone depletion was at its maximum in 1998 and was superimposed on the El Nino event, so warming was maximal. Lucky for the AGW crowd, the Bárðarbunga volcano in Iceland began erupting in October 2014 and continued through March 2015. The eruption was effusive, not explosive, so it produced no significant aerosols or particulates to produce cooling, but lots of gases. Effusive eruptions release HF, HCl and HBr (halogen gases) that still reached the stratosphere within a few months and produced ozone depletion. This had occurred by February 2015, resulting in warming through 2015 into 2016 (so the AGW crowd now think they have been saved from the “hiatus” – they will be disappointed!).
Ozone depletion probably peaked in mid-2016 and is now reversing. The peak in ozone depletion corresponded to the El Nino related warming, which is why 2015-2016 were near or at record temperatures. The reversal in the ozone depletion trend will now correspond to the La Nina cooling trend, so the downward temperature trend for 2017-2018 should be steep, as the trend for November and December 2016 suggests. If these correlations with exogenous, random events are correct, calculating trendlines, correlation coefficients, error ranges and probabilities has no real-world relevance.
Now the “ozone depletion” theory is not my idea. The significance of ozone depletion to mean global temperatures is Dr. (PhD) Peter Langdon Ward’s idea. A full explanation can be found at his website: https://www.WhyClimateChanges.com. My minuscule contribution is suggesting that ozone depletion peaks have accidentally corresponded with El Nino events, exacerbating warming, and the following:
The Davos Conspiracy (of January, 2017)
Davos elites meet and greet with alarm to decry with derision
the possible decrepitation of their “global warming” delusion
but it’s just a billionaire’s Juke and Jive dance to distract us
while they slither into the pocketbooks of each dumb cuss.
CO2 doesn’t cause climate change as Al Gore’s preachin’,
his religious obfuscation of the truth of “ozone depletion”
from the impact of CFC’s and effusive volcanic gas emissions,
a marvelous dance between oxygen’s photodisassociation
from UVB radiation and ozone creation and destruction.
Check out https://WhyClimateChanges.com for a lesson,
and you will conclude Davos is a conspiracy of high treason
worthy of a racketeering and corrupt practices conviction.
MHPublishing, Copyright 2017 – distribute freely with attribution.
The ozone depletion idea is controversial because it means current calculations of radiant energy are incorrect, and it leads to the conclusion that the current physics paradigm of visualizing light as packets with wave-particle duality is an artificial construct with no basis in reality. It works for me because it can explain the recent temperature records, the historical anecdotal climate record, and the geologic record. As a geologist, the geologic record is all that really counts to me, and the CO2 AGW theory just doesn’t cut it. The rocks tell the story and volcanoes rule. It also means humans can affect the earth’s climate. Just start up CFC production again if you want to swim on a Greenland beach without freezing, or put a cork in Iceland’s volcanoes if you don’t.

Reply to  Schrodinger's Cat
January 12, 2017 12:50 pm

It matters not whether the pause continues or not

For two scientists communicating with each other, I agree. But if you want to mention it to your neighbor at a coffee shop, “no warming” is much easier to understand than “no statistically significant warming at the 95% level over 20 years”. His glazed eyes will soon be looking for the door.

Paul Penrose
January 12, 2017 11:53 am

Werner,
Where did you get the 0.1C and 0.2C margins of error you quoted?

Reply to  Paul Penrose
January 12, 2017 1:01 pm

Where did you get the 0.1C and 0.2C margins of error you quoted?

From here:
http://www.drroyspencer.com/2017/01/global-satellites-2016-not-statistically-warmer-than-1998/

We estimate that 2016 would have had to be 0.10 C warmer than 1998 to be significantly different at the 95% confidence level.

But where did I say 0.2C? Are you referring to the latest RSS anomaly of 0.229 C that was rounded to 0.2 C?

MarkW
Reply to  Werner Brozek
January 12, 2017 1:16 pm

You made a comment about the pause returning in half the time if the temp dropped 0.2C vs 0.1C below the mean.
I’m guessing he’s referring to that.

Reply to  Werner Brozek
January 12, 2017 1:44 pm

You made a comment about the pause returning in half the time if the temp dropped 0.2C vs 0.1C below the mean.
I’m guessing he’s referring to that.

Thank you! If that was meant, see my comment at:
https://wattsupwiththat.com/2017/01/12/satellite-records-and-slopes-since-1998-are-not-statistically-significant-now-includes-november-and-december-data/comment-page-1/#comment-2395807

January 12, 2017 12:38 pm

Werner, as I’ve told you before to justify this statement:
“On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.”
you need to be performing a one-tailed test; is that what you did?
When you do a two-tailed test, you’re treating the 2.5% chance that the trend is above 1.784 (for UAH6.0) as if it were not warming, which is clearly nonsense!

Nick Stokes
Reply to  Phil.
January 12, 2017 2:54 pm

Phil.,
He’s using results from here. There is a 95% probability of being within the CIs (t limit 1.96), so yes, a 2.5% chance of being beyond each extreme. So when Werner says “no significant warming” I think he means that zero trend is within those 95% CIs about the observed trend.

Reply to  Nick Stokes
January 12, 2017 3:07 pm

Exactly. I pointed this out to him before: if he wants to say ‘warming’ he has to change his limit; I think it’s ~1.65 rather than 1.96.
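
For readers who want to see where Phil.’s ~1.65 and Nick’s 1.96 come from, here is a minimal Python sketch; the degrees of freedom are illustrative, since the real value depends on the number of months and on any autocorrelation adjustment:

    # One-tailed vs. two-tailed 95% critical t values (Phil.'s point).
    from scipy import stats

    df = 200  # illustrative; roughly the number of months, less adjustments

    two_tailed = stats.t.ppf(0.975, df)  # 2.5% in each tail -> ~1.97
    one_tailed = stats.t.ppf(0.95, df)   # 5% in one tail    -> ~1.65

    print(f"two-tailed 95% critical t: {two_tailed:.2f}")
    print(f"one-tailed 95% critical t: {one_tailed:.2f}")

With the one-tailed test, a trend counts as “warming” at 95% once its t-statistic exceeds about 1.65 rather than 1.96, so significance arrives somewhat earlier.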

January 12, 2017 1:40 pm

When will the pause return?
No one knows when, or if, the pause will ever return. However, certain conditions must be met: the area below the zero line after December 2016 must equal the area above the zero line from February 2016 to November 2016. The average from February to November on RSS was 0.60, and the zero line, which is where RSS is at present, is 0.23. That leaves a difference of 0.37 over the 10 months above the zero line: 0.37 x 10 = 3.7 degree-months.
So if the RSS anomaly drops by 0.10 from 0.23 to 0.13 and stays there, it would take 37 months for the pause to return.
If the anomaly drops by 0.2 from 0.23 to 0.03 and stays there for 19 months, the pause will return.
If the anomaly drops by 0.3 from 0.23 to -0.070 and stays there for 12 months, the pause will return.
If RSS makes adjustments, the pause will never return! ☹
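
For anyone who wants to check this arithmetic, a minimal Python sketch using the figures above (0.60 average, 0.23 zero line, 10 months):

    # Werner's "area balancing" estimate of when the pause could return.
    excess = (0.60 - 0.23) * 10      # degree-months above the zero line, Feb-Nov

    for drop in (0.10, 0.20, 0.30):  # assumed steady drop below the 0.23 line
        months = excess / drop
        print(f"drop of {drop:.2f} C -> pause returns in ~{months:.1f} months")

This gives 37.0, 18.5 and 12.3 months, matching the rounded figures above.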

Menicholas
January 12, 2017 2:03 pm

“If my math is correct, there is about a 30% chance that cooling has taken place since 1998 and about a 70% chance that warming has taken place.”
Golly, I am pretty sure every single person who is not an active skeptic has clean forgotten to mention anything about confidence levels or uncertainty ranges!
That goes for every MSM news source too.
Must be just an innocent oversight, huh?
I mean, anyone who has taken and passed even one single college level science class knows all about uncertainty, what it is and what it means, not to mention how important it is, so, it must just be an innocent oversight…right?
By every one of them, every single time they mention anything about it, for years on end…just an oversight.

Bellman
January 12, 2017 6:34 pm

One thing that I’m not sure about with all this talk of significant warming, is how much it matters that there are multiple data sets.
To take the question of whether 2016 was warmer than 1998: if we only have UAH 6.0 and that’s showing 2016 as 0.02 C warmer than 1998, then given the amount of uncertainty there might be a 40% (or whatever) chance that 1998 was warmer. But if RSS 3.3 is also showing 2016 as being 0.02 C warmer, then that must increase the confidence that 2016 really was warmer. If both data sets were completely independent then the chances of 1998 being warmer would drop to 16%, but as they are not independent I guess the real odds of 1998 being warmer would be somewhere between 16% and 40%.
And that’s only looking at the two data sets showing the smallest difference between the two years.
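
Bellman’s 16% is just the product rule; a one-line check (the 40% is his assumed per-data-set probability, not a computed value):

    # If each of two fully independent data sets left a 40% chance that
    # 1998 was really warmer, the joint chance would be the product.
    p_single = 0.40
    p_both = p_single ** 2  # 0.16, i.e. 16%
    print(f"{p_both:.0%}")

Since UAH and RSS share satellite instruments and much of their methodology, they are far from independent, which is why the true figure should sit somewhere between 16% and 40%.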

Reply to  Bellman
January 12, 2017 7:05 pm

The slight difference in CO2 gain for the year(s) involved may be a clue: 2015 = 3.03 ppm, 1998 = 2.93 ppm. The year 1998 held the record for the greatest yearly gain on the Mauna Loa site. Now 2015 is 1st, with 2016 3rd at 2.77 ppm.

January 12, 2017 7:23 pm

The probability that UAH LT 2016 is the warmest year is 61%. See the comments on Roy’s blog, or my blog.
The UAH press release says 2016 is “warmest.”
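
A back-of-envelope way to land in this ballpark (not necessarily how the 61% was computed): treat the 0.10 C 95% threshold quoted from Dr. Spencer above as roughly 1.96 sigma of a Gaussian error on the 2016-minus-1998 difference, then ask how likely the observed 0.02 C margin is to be a true record:

    # Rough record probability from the quoted margin of error.
    from scipy.stats import norm

    sigma = 0.10 / 1.96   # assumed std. dev. of the year-to-year difference
    observed = 0.02       # 2016 minus 1998
    print(f"{norm.cdf(observed / sigma):.0%}")  # ~65%

Reading the 0.10 C threshold one-tailed (1.645 sigma) instead gives about 63%, so either way the answer lands in the low-to-mid 60s, consistent with the 61% quoted.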

Reply to  David Appell (@davidappell)
January 14, 2017 7:31 pm

Thank you! So I was in the ballpark.
I must have missed your comment earlier. Was your post in moderation for a long time?

ossqss
January 12, 2017 8:17 pm

Thank you Werner and JTF, and Nick too!
Good info sharing once again.
I find it interesting we debate hundredths of a degree to leverage change in policies. Frankly, we need warmer Temps to feed the populations and provide a place to house them. CO2 could be our saviour, until the logarithmic relationship to temperature comes into play more so. Just sayin, careful what you wish for……

Frank
January 12, 2017 9:31 pm

Lack of statistically significant warming doesn’t mean that it hasn’t been warming!
Let’s compare the warming trend for UAH6.0 for the first half of the record, the last half of the record and the full record:
1/79-10/16: 0.883 K/century (95% ci of 0.411 to 1.256 K/century). “statistically significant”
1/79-1/98: 0.283 K/century (95% ci of -0.695 to 1.263 K/century). “statistically insignificant”
1/98-10/16: 0.611 K/century (95% ci of -0.803 to +2.024 K/century). “statistically insignificant”
Interesting. Two periods with insignificant warming add up to one combined period with significant warming. (:)) So what does a lack of statistically significant warming prove? Nothing. It just means variability/noise can make it difficult to prove the existence of warming over relatively short periods.
Notice that the warming trend for both shorter periods is lower than for the full period and larger during the so-called “Pause”. Ouch.

Reply to  Frank
January 12, 2017 10:04 pm

Interesting. Two periods with insignificant warming add up one combined period with significant warming.

An interesting point! But are your numbers right? For the full period for UAH6.0beta5, I get:
Temperature Anomaly trend
Jan 1979 to Dec 2016 
Rate: 1.230°C/Century;
CI from 0.815 to 1.646;
t-statistic 5.803;
Temp range -0.209°C to 0.257°C
The rate of 1.23 C/century is not too high, but your number of 0.883 K/century is even less! Should we even be concerned about that?

Frank
Reply to  Werner Brozek
January 13, 2017 1:35 am

Werner: Nick is correct. I selected UAH6mt rather than UAH6.LT. Thanks for catching my mistake. Nick’s numbers below are correct. They support essentially the same point: absence of statistically significant warming does not prove the absence of warming. Natural variability obscures warming over periods of one or two decades.
I forget which record I was working with, but I found a dividing point where the full period slope and both half period slopes were very similar, but only the full period was significant.

Nick Stokes
Reply to  Werner Brozek
January 13, 2017 1:45 am

Frank and Werner,
” but only the full period was significant.”
I think that is to be expected. If you throw four heads in a row that is not significant (1/16). If it happens again, that isn’t significant on its own. But eight in a row is (1/256).
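
In numbers, with 5% as the usual significance cut-off:

    # Nick's coin-flip illustration: each run of four heads alone is
    # unremarkable; the combined run of eight is not.
    p_four = 0.5 ** 4   # 1/16  = 0.0625 -> not significant at 5%
    p_eight = 0.5 ** 8  # 1/256 ~ 0.0039 -> significant
    print(p_four, p_eight)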

Bellman
Reply to  Werner Brozek
January 13, 2017 4:51 am

“I think that is to be expected. If you throw four heads in a row that is not significant (1/16). If it happens again, that isn’t significant on its own. But eight in a row is (1/256).”
Yes. This is a point that I don’t think everyone using the term “statistically significant” realizes. Whether something is significant or not depends both on the strength of the signal over the noise, and the size of the sample.
Say you were testing a drug – you give it to 50 people and find it did significantly better than a control group given a placebo. Now if you split the 50 people into two groups of 25 each, it may well be the case that neither group shows a significant improvement over the control, not because the results are worse but simply because 25 is a smaller sample than 50. It would be absurd to point to the sample of 25 and claim that this meant the drug stopped working on those 25.
But this is what happens with the temperature trends. There is a statistically significant warming since the start of the satellite era, but by looking at the last 20 years or so that warming becomes insignificant. In part this might be because the trend was smaller, but it might also be because the sample size is less. Just saying the rise was insignificant since 1998 tells us little.
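
Bellman’s drug-trial analogy is easy to reproduce; here is a toy simulation with a made-up effect size and noise level:

    # Same data, same effect: significant in the full sample, often not
    # in either half, purely because of sample size.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effect, n = 0.5, 50  # assumed effect size and group size

    treated = rng.normal(effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)

    for label, sl in [("full n=50", slice(None)),
                      ("half A n=25", slice(0, 25)),
                      ("half B n=25", slice(25, 50))]:
        t_stat, p_val = stats.ttest_ind(treated[sl], control[sl])
        print(f"{label}: p = {p_val:.3f}")

With these settings the full sample usually clears p < 0.05 while the two halves usually do not, even though nothing about the “drug” has changed.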

Reply to  Werner Brozek
January 13, 2017 5:17 am

Just saying the rise was insignificant since 1998 tells us little.

To a certain extent, you are right. Keep in mind that all the numbers in Section 1 show the point from which the warming trend’s confidence interval includes zero; one or more months longer and it does not include zero any more.
I talked with a warmist years ago regarding the Phil Jones interview, and at that time he said that 8 years means nothing, but that 15 years should be taken more seriously.

Frank
Reply to  Werner Brozek
January 13, 2017 10:26 am

Nick wrote: “I think that is to be expected. If you throw four heads in a row that is not significant (1/16). If it happens again, that isn’t significant on its own. But eight in a row is (1/256).”
However, many people think multi-year climate change is deterministic, not chaotic. They think in terms of cause and effect, not coin-flipping.
According to climate models, in any five-year period with today’s growing forcing, there is a 25% chance the temperature has fallen. Your coin-flipping analogy is valid. So there shouldn’t be too many 10-year periods in your post-1979 trend viewer with cooling, but the 95% confidence interval certainly should include 0 warming. I personally think observations show that models produce too little unforced variability and/or too much warming, but that is hard to prove.

Nick Stokes
January 13, 2017 12:27 am

Werner, Frank
“C:\mine\blog\bloggers\Reader\WUWT\_WUWT.html”
I think Frank has given figures for UAH6.6 MT, which also makes the point. For LT I get

  Period        Trend  CI low  CI high   (all in °C/century)
1/98->12/16     0.476  -0.813    1.765
1/79->1/98      0.982  -0.044    2.088
1/79->12/16     1.230   0.815    1.646

Also two insignificant parts, with a significant whole. And the trend for the first part is really quite high.
“Should we even be concerned about that?”
No. Planes will be OK. Surface temperatures are our issue.
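
For anyone who wants to reproduce numbers of this kind, here is a minimal sketch of the underlying calculation: ordinary least squares on monthly anomalies, with the trend scaled to °C/century. This naive version ignores autocorrelation, so its intervals come out narrower than the Trendviewer’s, which adjusts for serially correlated residuals; the anomaly series below is made up purely for illustration:

    # Naive trend and 95% CI from a monthly anomaly series.
    import numpy as np
    from scipy import stats

    def trend_ci(anoms):
        t = np.arange(len(anoms)) / 12.0 / 100.0  # time in centuries
        res = stats.linregress(t, anoms)
        half = 1.96 * res.stderr
        return res.slope, res.slope - half, res.slope + half

    rng = np.random.default_rng(1)  # fake stand-in for real UAH/RSS data
    fake = 0.012 * np.arange(456) / 12 + rng.normal(0, 0.15, 456)
    slope, lo, hi = trend_ci(fake)
    print(f"{slope:.3f} C/century, 95% CI [{lo:.3f}, {hi:.3f}]")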

Nick Stokes
Reply to  Nick Stokes
January 13, 2017 12:29 am

Sorry I pasted the wrong thing here. I’m responding to Werner
“But are your numbers right? “

Reply to  Nick Stokes
January 13, 2017 5:18 am

Thank you!

January 13, 2017 2:25 am

The El Nino heat spike coming quickly back to normal can be explained in a simple way.
The sudden atmospheric warming can be explained simply by the fact that hot water stored some hundred meters deep in the West Pacific is suddenly released to the surface of the whole Pacific, which makes up one third of the circumference of the globe – a really big area!
The heat in the West Pacific comes from the trade winds, having no clouds and exposing a big part of the entire Pacific permanently to the sun. Without the winds, the warmed-up and piled-up waters flush back toward the east and the American continent.
Water can store about 1000 times more heat than air, so the atmosphere is heated up quickly. When cold water is again pushed from America to the West Pacific, the heat release stops. And the atmosphere quickly radiates its surplus heat toward space.

Bob boder
Reply to  Johannes S. Herbst
January 13, 2017 5:21 am

But co2 should be slowing the quick radiation to space so the heat should be retained in the atmosphere longer. Is it?

Bob boder
Reply to  Bob boder
January 13, 2017 5:43 am

Per Nick, Tonedeaf, griff and others, we have not yet reached equilibrium, so why doesn’t this new heat stay in the atmosphere and just get us closer to equilibrium? The CO2 molecules should be absorbing all this LWR and then transferring it to other molecules through collisions, and it should stay trapped. The rate of heat transfer to space should stay the same.

Bob boder
Reply to  Bob boder
January 13, 2017 6:01 am

Unless this heat is somehow magically being returned to the oceans, there is nothing in CAGW theory that should allow it to quickly radiate to space. But if you look at H2O and CO2 as radiative coolers of the atmosphere, then it makes sense how this heat escapes. I have said it a thousand times: the oceans warm and heat the atmosphere, not the other way around. CO2 is not the magic blanket that causes the oceans to warm.

Frank
Reply to  Bob boder
January 13, 2017 8:41 am

Bob wrote: “Unless this heat is somehow magically being returned to the oceans, there is nothing in CAGW theory that should allow it to quickly radiate to space.”
The idea that GHGs permanently trap heat in the atmosphere was created by alarmists for the simple-minded. GHGs both absorb and emit thermal infrared radiation. As with almost all materials (N2 and O2 being important exceptions in our atmosphere), emission increases with temperature. You can use the S-B eqn to show that a blackbody near 255 K, for example, emits about 3.8 W/m2 more for every 1 degK it warms. So the ~0.5 K increase in temperature during strong El Ninos could potentially emit an additional ~2 W/m2 of LWR to space. Given that current forcing from anthropogenic GHGs is only about 2.5 W/m2, it is crazy to deny that the temperature drops after an El Nino because the heat is escaping to space.
The temperature falls every winter because GHGs emit radiation to space. It falls every night for the same reason.
You are correct in saying that we haven’t reached a new equilibrium temperature in response to the current forcing of about 2.5 W/m2, because heat is flowing into the deep ocean – about 0.5 W/m2 according to ARGO. The atmosphere “doesn’t know” about heat flux into the deep ocean; following the laws of physics, it radiates more heat toward space and the surface when it is warmer during El Ninos.
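
Frank’s ~3.8 W/m2 per kelvin is just the derivative of the Stefan-Boltzmann law, dP/dT = 4*sigma*T^3; a quick check:

    # Extra blackbody emission per degree of warming near 255 K.
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
    T = 255.0        # effective emission temperature, K

    dP_dT = 4 * sigma * T**3
    print(f"per 1 K of warming: {dP_dT:.2f} W/m^2")        # ~3.76
    print(f"for a 0.5 K spike:  {0.5 * dP_dT:.2f} W/m^2")  # ~1.88

which reproduces the ~3.8 and ~2 W/m2 figures above.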

Reply to  Frank
January 13, 2017 9:08 am

That’s not the information for the energy budget of the earth. The net retained was greater than the outgoing. Via the satellite back then, when co2 levels were 370 ppm, the net retained was 2/3 of the incoming. After years of constant increases in co2, one would expect to have more, not less, retained energy. Show me in the energy budget where there was a net spike in energy released. That would also be the end for AGW as well. I’ve even argued that when there is more evaporation, more heat is released. The answer to that was the energy budget: that the latent heat was retained rather than released.
I can’t tell if you are a skeptic or a warmist, but the argument supports a skeptic’s view.
In essence, what I’m overall saying is that after 20 years the global temperature is currently up only 0.2 C; that is a demarcation point. Water does have a higher heat capacity; where is the heat being stored? If you ignore the trends, the question to ask is: in the next 12 months, will the global temperature rise or fall? If it rises, is it due to co2? If it falls below the long-term average of what was considered equilibrium, then climate cannot possibly be related to co2. The water would be warmer than the atmosphere at that point and would be giving up heat rather than absorbing it. Isn’t that the rationale behind the Arctic ice melting?

Bob boder
Reply to  Bob boder
January 13, 2017 10:06 am

Frank
The myth is that there is such a thing as equilibrium when it comes to global temperature, and that warming of the oceans has anything to do with CO2, when the opposite is the truth.

Frank
Reply to  Bob boder
January 13, 2017 10:10 am

Rishrac wrote: “That’s not the information for the energy budget of the earth. The net retained was greater than the outgoing. Via the satellite back then, when co2 levels were 370 ppm, the net retained was 2/3 of the incoming.”
Satellites are incapable of measuring the difference between incoming and outgoing radiation with enough accuracy to detect a radiative imbalance of a few W/m2. (They don’t cover the full wavelength range of incoming and outgoing energy with a linear response, and 1 W/m2 is only a 0.4% change in the post-albedo 240 W/m2 entering the planet.) If you believed their raw output, the Earth would be warming much faster than it is. However, satellites can detect a CHANGE of a few tenths of a W/m2 in both SWR and LWR.
Since most (roughly) of the energy entering our planet ends up in the ocean, we deployed the ARGO buoys to measure our current imbalance. The current best estimate is 0.5 W/m2 (and it is currently being used to correct some versions of the satellite data, CERES-EBAF).
rishrac: “Water does have a higher heat capacity; where is the heat being stored?”
ARGO tells us that 0.5 W/m2 of heat (from a radiative imbalance) is accumulating in the deep ocean. The top 100 meters of ocean warms and cools with ENSO (and seasons), but the accumulation of heat below is fairly steady. Unfortunately, when that little heat flux is spread over 2000 m of ocean, the temperature rise in a decade averages only 0.025 K (assuming my calcs are correct). Does ARGO have this level of accuracy? It was designed to be this accurate. There are 3000 of them reporting every 10 days. Sample buoys are removed and tested for biases every year. The picture is evolving slowly. Willis reported at WUWT that all oceans are not warming at the same rate. (I don’t have much faith in pre-ARGO data; it is limited and required large corrections.)
http://climate4you.com/images/ArgoWorldOceanSince200401%2065N-65S.gif
http://climate4you.com
rishrac comments: “I can’t tell if you are a skeptic or a warmist, but the argument supports a skeptic’s view.”
Does it make a difference? The important question is whether I have provided Bob with the correct reason why the temperature falls after El Ninos – and after summers and daylight hours. And accurate information to you. I’m a skeptic – about both the IPCC and skeptics. I’d like to understand what is true, no matter where that leads. So, if I’ve got something wrong, let me know.
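
Frank’s 2000 m figure is easy to sanity-check; a quick sketch, using textbook round numbers for seawater (whether the 0.5 W/m2 is taken per unit of global or of ocean area shifts the answer a little):

    # Decadal warming of a 2000 m ocean column absorbing 0.5 W/m^2.
    flux = 0.5                     # W/m^2, assumed radiative imbalance
    depth = 2000.0                 # m of ocean absorbing the heat
    rho, c_p = 1025.0, 3990.0      # seawater density kg/m^3, heat capacity J/(kg K)
    seconds = 10 * 365.25 * 86400  # one decade

    dT = flux * seconds / (rho * c_p * depth)
    print(f"{dT:.3f} K per decade")  # ~0.019

which lands in the neighbourhood of Frank’s 0.025 K per decade, the difference coming down to the assumed constants.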