Satellite Records and Slopes Since 1998 are Not Statistically Significant. (Now Includes November and December Data)

Guest Post by Werner Brozek, Edited by Just The Facts

WoodForTrees.org – Paul Clark – Click the pic to view at source

As can be seen from the above graphic, the slope is positive from January 1998 to December 2016. However, with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998. The high and low slope lines reflect the margin of error at the 95% confidence limits. If my math is correct, there is about a 30% chance that cooling has taken place since 1998 and about a 70% chance that warming has taken place. The 95% confidence limits for both UAH6.0beta5 and RSS are very similar. Here are the relevant numbers from Nick Stokes’ Trendviewer site for both RSS and UAH:

For RSS:
Temperature Anomaly trend
Jan 1998 to Dec 2016
Rate: 0.450°C/Century;
CI from -0.750 to 1.649;
t-statistic 0.735;
Temp range 0.230°C to 0.315°C

For UAH:
Temperature Anomaly trend
Jan 1998 to Dec 2016
Rate: 0.476°C/Century;
CI from -0.813 to 1.765;
t-statistic 0.724;
Temp range 0.113°C to 0.203°C
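These trend and confidence-interval figures can be approximated with an ordinary least squares fit. Nick Stokes’ Trendviewer widens its intervals for autocorrelation, so the minimal sketch below (plain OLS with a Quenouille/AR(1) effective-sample-size correction) is only an approximation of his method, not his code, and its intervals will roughly rather than exactly match the numbers above.

```python
# Rough sketch, not Nick Stokes' code: OLS trend on monthly anomalies with a
# 95% CI widened for lag-1 autocorrelation (Quenouille effective sample size).
import numpy as np
from scipy import stats

def trend_with_ci(anoms):
    """anoms: 1-D array of monthly temperature anomalies in deg C."""
    anoms = np.asarray(anoms, dtype=float)
    t = np.arange(len(anoms)) / 12.0                 # time in years
    slope, intercept, r, p, se = stats.linregress(t, anoms)

    resid = anoms - (intercept + slope * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]    # lag-1 autocorrelation
    n_eff = len(anoms) * (1 - r1) / (1 + r1)         # effective sample size
    se_adj = se * np.sqrt(len(anoms) / max(n_eff, 3))

    tcrit = stats.t.ppf(0.975, df=max(n_eff - 2, 1))
    half = tcrit * se_adj
    return slope * 100, (slope - half) * 100, (slope + half) * 100  # deg C/century
```

A negative lower limit, as in both data sets above, is exactly the “cannot rule out zero or cooling” situation described in the text.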

If you wish to see where warming first becomes statistically significant, see Section 1 below. In addition to the slopes showing statistically insignificant warming, the new records for 2016 over 1998 are also statistically insignificant for both satellite data sets.
In 2016, RSS beat 1998 by 0.573 – 0.550 = 0.023, or by 0.02 to the nearest 1/100 of a degree. Since this is less than the error margin of 0.1 C, we can say that 2016 and 1998 are statistically tied for first place. However, there is still over a 50% chance that 2016 did indeed set a record, but the probability is far less than the 95% that climate science requires, so the 2016 record is statistically insignificant.

If anyone has an exact percentage here, please let us know; however, it should be around a 60% chance that a record was indeed set for RSS. In 2016, UAH6.0beta5 beat 1998 by 0.505 – 0.484 = 0.021, or also by 0.02 to the nearest 1/100 of a degree. What was said above for RSS applies here as well. My predictions after the June data came in were therefore not correct, as I expected 2016 to come in below 1998.
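As a rough check of the “around a 60% chance” estimate: if each annual mean carries a 95% margin of about ±0.1 C (a standard error near 0.05 C) and the two years are treated as independent, the chance that 2016 genuinely exceeded 1998 comes out near 60 to 65%. A minimal sketch; both assumptions are simplifications:

```python
# Back-of-envelope check of the "around 60%" figure for RSS. Assumes each
# annual mean is normal with a 95% margin of +/-0.1 C and that the two
# years are independent -- both are simplifying assumptions.
from scipy.stats import norm

sigma_year = 0.1 / 1.96                  # ~0.051 C per annual mean
diff = 0.573 - 0.550                     # RSS: 2016 minus 1998
sigma_diff = (2 * sigma_year ** 2) ** 0.5
print(norm.cdf(diff / sigma_diff))       # ~0.62: well short of 95%
```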

What about GISS, HadSST3 and HadCRUT4.5? The December numbers are not in yet, but GISS will set a statistically significant record for 2016 over its previous record of 2015, since the new average will be more than 0.1 above the 2015 mark. HadSST3 will set a new record in 2016, but only by a few hundredths of a degree, so it will not be statistically significant. HadCRUT4.5 is still up in the air. The present average after 11 months is 0.790. The 2015 average was 0.760. As a result, December needs to come in at 0.438 to tie 2015. The November anomaly was 0.524, so only a further drop of 0.086 is required. This cannot be ruled out, especially since Nick’s site shows December 0.089 lower than November.
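The 0.438 figure follows from requiring the 12-month mean to equal the 2015 mean; with the rounded averages quoted above the same arithmetic gives about 0.43, and the extra digit comes from the unrounded monthly values. A quick check:

```python
# December value needed for the HadCRUT4.5 2016 mean to tie 2015.
# With the rounded averages quoted in the text this gives ~0.43; the
# 0.438 in the text comes from unrounded monthly data.
avg_2015 = 0.760          # HadCRUT4.5 annual mean for 2015
avg_jan_nov = 0.790       # Jan-Nov 2016 mean
dec_needed = 12 * avg_2015 - 11 * avg_jan_nov
print(round(dec_needed, 3))   # ~0.43; November 2016 was 0.524
```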

Also worth noting is that UAH dropped by 0.209 from November to December and RSS dropped by 0.162. Whatever happens with HadCRUT4.5, 2016 and 2015 will be in a statistical tie, with a possible difference in the thousandths of a degree. The difference will be more important from a psychological perspective than a scientific one, as it will be well within the margin of error.

In the sections below, we will present you with the latest facts. The information will be presented in two sections and an appendix. The first section will show for how long there has been no statistically significant warming on several data sets. The second section will show how 2016 compares with 2015 and with the warmest years and months on record. For three of the data sets, 2015 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data. Only the satellite data go to December.

Section 1

For this analysis, data were retrieved from Nick Stokes’ Trendviewer, available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. CI stands for the confidence interval at the 95% level.

The details for several sets are below.

For UAH6.0: Since November 1993: CI from -0.009 to 1.784. This is 23 years and 2 months.
For RSS: Since July 1994: CI from -0.005 to 1.768. This is 22 years and 6 months.
For Hadcrut4.5: The warming is statistically significant for all periods above four years.
For Hadsst3: Since March 1997: CI from -0.003 to 2.102. This is 19 years and 9 months.
For GISS: The warming is statistically significant for all periods above three years.
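Section 1 amounts to a search over start months: the earliest month from which the trend’s lower 95% limit is still negative. Below is a sketch of that search, reusing the hypothetical trend_with_ci() helper sketched earlier, so again only an approximation of Nick’s criteria rather than his actual code.

```python
# Sketch of the Section 1 search: return the earliest start month whose
# trend lower 95% limit is negative, i.e. a zero slope cannot be ruled out.
# Reuses the hypothetical trend_with_ci() helper from the earlier sketch.
def earliest_non_significant(anoms, start_labels):
    """start_labels[i] names the month at which anoms[i:] begins."""
    for i, label in enumerate(start_labels[:-24]):   # need a couple of years
        rate, lo, hi = trend_with_ci(anoms[i:])
        if lo < 0:
            return label, rate, (lo, hi)
    return None        # warming significant for every start month tried
```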

Section 2

This section shows data about 2016 and other information in the form of a table. The table lists the five data sources along the top; the header row is repeated partway down and at the bottom so that it remains visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the columns are the following:
1. 15ra: This is the final ranking for 2015 on each data set.
2. 15a: Here I give the average anomaly for 2015.
3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly prior to 2016. The months are identified by the first three letters of the month and the last two numbers of the year.
6. ano: This is the anomaly of the month just above.
7. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
8. sy/m: This is the years and months for row 7.
9. Jan: This is the January 2016 anomaly for that particular data set.
10. Feb: This is the February 2016 anomaly for that particular data set, etc.
21. ave: This is the average anomaly of all months to date.
22. rnk: This is the rank that each particular data set would have for 2016, without regard to error bars and assuming no changes to the current average anomaly. Think of it as an update 55 minutes into a game. However, the satellite data are complete for the year.

Source UAH RSS Had4 Sst3 GISS
1.15ra 3rd 3rd 1st 1st 1st
2.15a 0.261 0.381 0.760 0.592 0.86
3.year 1998 1998 2015 2015 2015
4.ano 0.484 0.550 0.760 0.592 0.86
5.mon Apr98 Apr98 Dec15 Sep15 Dec15
6.ano 0.743 0.857 1.024 0.725 1.11
7.sig Nov93 Jul94 Mar97
8.sy/m 23/2 22/6 19/9
Source UAH RSS Had4 Sst3 GISS
9.Jan 0.539 0.681 0.906 0.732 1.15
10.Feb 0.831 0.994 1.068 0.611 1.33
11.Mar 0.732 0.871 1.069 0.690 1.29
12.Apr 0.713 0.784 0.915 0.654 1.08
13.May 0.544 0.542 0.688 0.595 0.93
14.Jun 0.337 0.485 0.731 0.622 0.75
15.Jul 0.388 0.491 0.728 0.670 0.83
16.Aug 0.434 0.471 0.770 0.654 0.98
17.Sep 0.440 0.581 0.710 0.606 0.90
18.Oct 0.407 0.355 0.586 0.601 0.88
19.Nov 0.452 0.391 0.524 0.488 0.95
20.Dec 0.243 0.229
21.ave 0.505 0.573 0.790 0.629 1.01
22.rnk 1st 1st 1st 1st 1st
Source UAH RSS Had4 Sst3 GISS
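Rows 21 (ave) and 22 (rnk) can be reproduced directly from the monthly rows. Here is a sketch using the RSS column as the example; the dictionary of earlier annual means is deliberately partial and for illustration only.

```python
# How rows 21 (ave) and 22 (rnk) are derived, using the RSS column.
rss_2016 = [0.681, 0.994, 0.871, 0.784, 0.542, 0.485,
            0.491, 0.471, 0.581, 0.355, 0.391, 0.229]
ave = sum(rss_2016) / len(rss_2016)              # 0.573 -> row 21

# Previous annual means from rows 2 and 4 (partial list, illustration only).
previous_annual_means = {1998: 0.550, 2015: 0.381}
rnk = 1 + sum(v > ave for v in previous_annual_means.values())
print(round(ave, 3), rnk)                        # 0.573, rank 1 (no error bars)
```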

If you wish to verify all of the latest anomalies, go to the following:
For UAH, version 6.0beta5 was used.
http://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltglhmam_6.0.txt
For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.5.0.0.monthly_ns_avg.txt
For Hadsst3, see: https://crudata.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat
For GISS, see:
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2016 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2016. This makes it easy to compare January 2016 with the latest anomaly.
The thick double line is the WTI, which shows the average of RSS, UAH6.0beta5, HadCRUT4.5 and GISS.
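The offsetting is simply a subtraction of each series’ January 2016 value, and the WTI-style line is a plain average of the offset series. A minimal sketch using the two complete satellite columns from the table (the actual WTI averages four data sets):

```python
# Sketch of the offsetting in the WFT graph: shift each series so it starts
# at zero in January 2016, then average the offset series. Only the two
# complete satellite columns are used here; the real WTI averages four sets.
import pandas as pd

months = pd.date_range("2016-01", periods=12, freq="MS")
df = pd.DataFrame({
    "UAH": [0.539, 0.831, 0.732, 0.713, 0.544, 0.337,
            0.388, 0.434, 0.440, 0.407, 0.452, 0.243],
    "RSS": [0.681, 0.994, 0.871, 0.784, 0.542, 0.485,
            0.491, 0.471, 0.581, 0.355, 0.391, 0.229]}, index=months)

offset = df - df.iloc[0]           # every line now starts at 0 in Jan 2016
composite = offset.mean(axis=1)    # WTI-style average of the offset series
```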

Appendix

In this part, we are summarizing data for each set separately.

UAH6.0beta5

For UAH: There is no statistically significant warming since November 1993: CI from -0.009 to 1.784. (This is using version 6.0 according to Nick’s program.)
The UAH average anomaly for 2016 is 0.505. This sets a new record. 1998 was previously the warmest at 0.484. Prior to 2016, the highest ever monthly anomaly was in April of 1998 when it reached 0.743. The average anomaly in 2015 was 0.261 and it was ranked third but will now be in fourth place.

RSS

Presently, for RSS: There is no statistically significant warming since July 1994: CI from -0.005 to 1.768.
The RSS average anomaly for 2016 is 0.573. This sets a new record. 1998 was previously the warmest at 0.550. Prior to 2016, the highest ever monthly anomaly was in April of 1998 when it reached 0.857. The average anomaly in 2015 was 0.381 and it was ranked third but will now be in fourth place.

Hadcrut4.5

For Hadcrut4.5: The warming is significant for all periods above four years.
The Hadcrut4.5 average anomaly so far is 0.790. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in December of 2015 when it reached 1.024. The average anomaly in 2015 was 0.760 and this set a new record.

Hadsst3

For Hadsst3: There is no statistically significant warming since March 1997: CI from -0.003 to 2.102.
The Hadsst3 average anomaly so far for 2016 is 0.629. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in September of 2015 when it reached 0.725. The average anomaly in 2015 was 0.592 and this set a new record.

GISS

For GISS: The warming is significant for all periods above three years.
The GISS average anomaly so far for 2016 is 1.01. This would set a record if it stayed this way. Prior to 2016, the highest ever monthly anomaly was in December of 2015 when it reached 1.11. The average anomaly in 2015 was 0.86 and it set a new record.

Conclusion

Does it seem odd that only GISS will probably set a statistically significant record in 2016?


287 thoughts on “Satellite Records and Slopes Since 1998 are Not Statistically Significant. (Now Includes November and December Data)”

  1. I am 95% certain that my body temperature has not increased by one degree F.

    I have no clue whether it has increased by 0.5 F., if this gives you a clue about how much this worries me. (^_^)

  2. It may interest you to know that according to Nick’s site mentioned above, the average anomaly for the first 10 days in January 2017 is lower than any monthly average since August 2015. Of course, things can easily change before the end of the month.

    • In RSS, a zero trend exists from 1997 to just before the 2015/16 El Nino. (Green)

      The transient of that El Nino (Blue) has now decayed to just below that trend line.

      The UAH zero trend line was slightly lower, so it remains above, but it should drop below its 20-year zero trend in either January or February.

  3. Do these GISS figures seem odd? Hmmm. I know what I suspect but maybe try asking Gavin! Good luck with that, Werner. :-)

  4. All those numbers are irrelevant. The only number that matters right now is what is the temperature. 0.2 C above normal. Without explaining where or how the heat comes and goes, a trend is meaningless. According to AGW theory the heat gets retained. There is nowhere for the heat to go. If the temperature was 1.0 C, how has global temperature fallen 0.8 C in such a short period of time?
    If the warming falls to 0.00 C or lower, it proves that a combination of external or internal factors that have not been accounted for is behind the dramatic shifts in the retained atmospheric heat. I submit that it already has, on at least 3 different occasions since 1998.
    Co2 as a control knob of temperature, it isn’t. What? The temperatures are going to have to be adjusted again? The thermostats just aren’t recording the temperature properly? According to me, the co2 ppm/v increase for this year should decrease along with the temperature. Let’s see if that happens. They will probably adjust that too. It’s a difficult thing to explain: if last year (2016) the increase was 8.0 ppm/v, then if it’s cooler in 2017, I expect that the increase will be less than 8.0 ppm/v (or whatever it was) for this year. Unless of course the IPCC and associates proclaim an overall reduction in the production of co2…. as if that will actually happen. Oh, what the heck, doesn’t matter, it’ll be an 8.2 ppm/v increase. Somewhat like 2005: the co2 level stood for years at a 2.52 ppm/v increase. And then by magic, 3.10 ppm/v. A rise of 0.58 ppm/v looks like a small number until you realize how much anthropogenic co2 is needed to achieve that number (again, if the increase is actually the result of anthropogenic co2). It also helps them explain where the unaccounted co2 went. Well, it had to have ended up in the atmosphere, we can’t say we don’t know. The amount previously (before the magic numbers) unaccounted for is enormous. There are currently still huge amounts missing in the numbers, but I’m sure NOAA is working tirelessly to correct that. (sarc at NOAA)

    • It is quite clear from the satellite record that natural variability is large. On a short term basis, it is more than 1 degC. This is no doubt one reason why the signal from CO2, if any at all, cannot be isolated and eked out from the variability of natural variation and the inherent deficiencies/shortcomings (including sensitivity and errors) of our best measuring equipment.

    • Rishrac says above:
      “The only number that matters right now is what is the temperature. 0.2 C above normal.”
      Please define “normal”.

      • “The only number that matters right now is what is the temperature. 0.2 C above normal.”
        Please define “normal”.

        The present anomaly for December was 0.229 above its December average with respect to the base line years that RSS chooses to use. Of course it is not normal in the same way we have a “normal” body temperature when we do not have a fever.

      • On the left hand side of the vertical are numbers. It’s the one designated as zero. Which is what I’m referencing. The temperature up or down could be in absolute terms. Doesn’t matter as the global temperature has fallen.
        Below are attempts to explain short term variations on “the heat hiding in the ocean”. Where did the oceans warm up 0.8 C? It should be more than 0.8 C, shouldn’t it? While the global temperature measured the entire earth, oceans only comprise 7/10ths. Ok, there are variations in temperature. So some areas are the same, some cooler, and using CAGW’s line that it is the average, where is the ocean water that much warmer? We just have el nino after el nino, on and on? From one peak el nino to the next was 18 years. That’s not the premise of AGW. I’m pretty sure that when the “heat was hiding in the ocean”, they went looking for it and didn’t find it. It’s like the tropical hotspot. Doesn’t exist. Co2’s signal is background noise at best.
        I do think that there has been a slight warming trend. Who knows? I also think it isn’t due to co2. However there is a strong possibility that nothing has happened. If the zero reference point is moved up 2 tenths, there has been no warming at all. The adjustments that NASA/NOAA have made could well cover 2/10ths. We could just as well be in a LIA and still get a similar looking graph. The central point is, where is the heat now? On the one hand AGW says it took 100 years to get 1.0 C of warming, and on the other 0.8 C of cooling in 6 to 7 months. How can the atmosphere lose that much heat? And considering the large increase in atmospheric co2, aren’t we familiar with the lab experiments detailing how co2 retains heat? It gave up the heat because…… ?? The nature of co2 changed?
        Minor variations in global temperature are understandable; large ones like this are not.

      • On the left hand side of the vertical are numbers. It’s the one designated as zero. Which is what I’m referencing.

        That, (0), is the average value of their baseline anomaly which is from 1979 to the end of 1998.

        Where did the oceans warm up 0.8 C ? It should be more than 0.8 C shouldn’t it.

        I am not sure what you are referring to here, but during an El Nino, the Niño 3.4 region warms up by up to 3 C and this has effects on the whole ocean after a while.

        How can the atmosphere lose that much heat?

        When less of the ocean surface is warm, the atmosphere feels it. This can be done by winds causing warm water to pile up in one place or a 1 or 2% increase in cloud cover.

        If the zero reference point is moved up 2 tenths, there has been no warming at all.

        Different reference periods are one cause of different anomalies.

    • If the temperature was 1.0 C, how has global temperature fallen 0.8 C in such a short period of time?

      Heat that was deep in the western Pacific Ocean with little area exposed to the atmosphere was spread out with a much larger area exposed to the air. So the temperature spiked. As this large area lost heat, temperatures dropped.
      As an analogy, suppose a mile-long cylindrical piece of iron at 1000 C sits in the middle of a city, but that the iron is stuck in the ground with only 1 square metre exposed to the air. What would happen if this mile-long iron at 1000 C is pulled out and placed in a horizontal position? The temperature in the city would briefly spike until the bar cooled off due to natural processes.

      • Maybe, but if this heat was deep in the Pacific it must have been there for a long time. What we are really talking about is that Pacific surface water was replaced by water from below that was a degree or so warmer. So it had to have been a relic of an earlier warm period.
        I have no problem understanding how this could be, but extensive knowledge of the Earth’s reservoirs of heat does not exist. In substitution we have flaky suppositions of tree ring differentials of less than a millimeter. It is pretend science for the benefit of the practitioners and politicians.

      • Maybe, but if this heat was deep in the Pacific it must have been there for a long time.

        Bob Tisdale is the expert here. But my understanding is that it is not deep, but extending down from the surface perhaps a few hundred metres and about 3 C above normal. It is due to wind blowing hotter water west causing a slight elevation of hotter water. Then when winds die down over a period of a few years, the hotter water sloshes back to the middle of the Pacific Ocean greatly increasing the area of warmer water exposed to air and thereby warming the air.

      • Did you do the math on this ? That’s an amazing feat that a relatively small area of the Pacific can pull down so much energy in such a short amount of time. We aren’t talking about the entire pacific ocean. That’d be an oceanic vortex in such a relatively small area. I suppose by spring we should see another el nino ? Surely we should see the depth and volume of water warming at an accelerated pace.
        Let’s picture it this way. It’s 70 F today, a cold front moves through and drops the temperature to 40 F. The system that was here is displaced. That’s weather. The dropping of temperature from 1.0 C to 0.2 C is climate. The warm air didn’t get displaced. The energy only has a couple of ways to go. It either gets absorbed, released, or a combination of the two. … in any event, that is not what AGW says. Short term? Do you see the squiggly lines where for a few years the temperature was building before the El Nino? Then on the other side, now, a sharp drop off. All that energy is gone. The temperature of the water and air are relative to one another. A sharp differential is shear.
        0.8 C drop is a very large amount of energy that went somewhere. It should be obvious and not conjecture of where that energy is. Just stating that it gets absorbed by a part of the Pacific Ocean, is that so, or being where it is warms up due to other factors. .? How do you filter out the one from the other when the ocean acts like a weather system ?

      • Did you do the math on this ? 

        Ask Bob Tisdale when he has his next article. Keep in mind that water has a much larger heat capacity than air. So a relatively small amount of water can heat a large amount of air.
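        For a feel for the sizes involved, here is an order-of-magnitude sketch of that heat-capacity point, using round textbook numbers rather than figures for any particular event:

```python
# Order-of-magnitude illustration that a thin layer of ocean holds far more
# heat than the whole air column above it; round textbook numbers only.
c_water, c_air = 4000.0, 1000.0      # specific heats, J/(kg K), approximate
rho_water = 1025.0                   # seawater density, kg/m^3
air_column = 1.0e4                   # kg of air above each m^2 of surface

layer_mass = 100.0 * rho_water                    # top 100 m of ocean per m^2
heat_ratio = (layer_mass * c_water) / (air_column * c_air)
print(round(heat_ratio))             # ~41: cooling that layer by 1 K releases
                                     # enough heat to warm the air column ~40 K
```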

      • And we can calculate it too. As I stated, CAGW went looking for the heat in the ocean and it wasn’t there. You remember the phrase ” the heats hiding in the ocean ” ?
        The two reasons I think there has been a slight warming trend is because I remember the 1970s and that I am convinced that co2 follows temperature . Otherwise, there is a very strong probability that nothing has changed except weather patterns, and not the climate.

      • The sudden atmospheric warming can be simply explained by the fact that hot water stored some hundred meters deep in the West Pacific is suddenly released to the surface of the whole Pacific, which makes up one third of the circumference of the globe – a really big area!

        The heat in the West Pacific comes from the trade winds, having no clouds and exposing a big part of the entire Pacific permanently to the sun. Without the winds, the warmed-up and piled-up waters flush back east toward the American continent.

        Water can store 1000 times more heat than air, so the atmosphere is heated up quickly. When cold water again is pushed from America to the West Pacific, the heat release is stopped. And the atmosphere quickly radiates its heat towards space.

      • The CERES satellite says the energy (originally stored in the Western Warm Pool area next to Indonesia in the top 200 metres from let’s say 2014 to early 2016 but the circulated underneath back to the East in the undercurrent to surface at the Galapagos Islands and formed the super-El Nino) …

        … was emitted back to space, more-or-less 3 months after the super-El Nino peaked.

        The CERES Long-Wave emissions from Earth to space peaked in February 2016 (a pretty good spike there)

        Very similar to the spike that the ERBE satellite recorded for the 1997-98 super-El Nino.

        Energy stored in the western warm pool (you can say from the Sun, because that was the original source) eventually gets circulated to a spot where it gives it back to the atmosphere; warming occurs temporarily, but then the energy gets emitted back to space, temperatures go back to normal, awaiting the next temporary energy build-up. The history of the ENSO regions says this happens repeatedly, and so do the periods when the energy is drained down and there are La Ninas. Probably happening like this since the Pacific became a wide deep ocean about 400 million years ago.

    • “The only number that matters right now is what is the temperature. 0.2 C above normal.”

      I disagree. The only number that matters right now is how much grant money can be extracted from the taxpaying suckers.

    • spot on rishrac . there is a very good reason anomalies are used instead of absolute temps . the so called positive anomalies are within the variation to be expected just about everywhere the temperature is recorded , no one seems to be too bothered about this issue though.

  5. Werner

    So it is arguable that, according to the satellite data, there is still a pause in the warming, and it goes back to 1998. At any rate, the possibility of this cannot be ruled out.

    You suggest that the first 10 days of January are showing cooling. Obviously, we do not know whether this will continue or not for the remainder of the month. But speculating that January will continue where December left off, and that January will show a decline in the anomaly, at what level of anomaly will it be before the trend line has no positive slope?

    Of course, that level of anomaly might not be reached until later this year (if indeed it is reached, which no doubt will depend upon whether the ENSO neutral conditions change towards La Nina conditions).

    • So it is arguable that, according to the satellite data, there is still a pause …

      Depends on what you look at. If you let the Warmunists define the metric and only look at average temperatures then it isn’t so obvious. Taking the average loses a lot of information, after all, the average of 1 and 99 is 50 and the average of 49 and 51 is also 50. But if you look at Summer time Maximum Temperatures it becomes a whole lot more apparent that what is being claimed might not be exactly so.

      • So, Toneb, the takeaway message is that temperatures are not actually getting higher, but the spread between low and high is decreasing, which augurs more benign weather, yes?

      • You are using the warming prior to 1920 to create that slope.
        However the warming prior to 1920 cannot have been caused by CO2, since CO2 levels weren’t rising at the time.
        In short, you have disproven your own hypothesis.

      • Toneb January 12, 2017 at 9:49 am
        No, the best metric to pick the AGW signal is minimum temperature….

        When Johnny Carson said, “Wow! Was it ever hot today, a real scorcher!” and the audience chimed in, “How hot was it?”, they weren’t asking how warm it was at 2:00 AM. The summertime minimum temperature sheds no light on whether or not the warming is a catastrophe. After all, it’s the extreme weather that we are told is the problem. Cooler summers and warmer winters don’t constitute extreme weather. The Warmunistas are going to have to start telling us about extreme mildness.

      • “Increases in minimum temperatures simply reflect UHI effect. Get a grip, data mongers.”

        Nope.
        Ask the BEST crew and former sceptic Richard Muller.

        http://berkeleyearth.org/faq/
        “Is the urban heat island (UHI) effect real?
        The Urban Heat Island effect is real. Berkeley’s analysis focused on the question of whether this effect biases the global land average. Our UHI paper analyzing this indicates that the urban heat island effect on our global estimate of land temperatures is indistinguishable from zero.”

      • “You are using the warming prior to 1920 to create that slope.
        However the warming prior to 1920 cannot have been caused by CO2, since CO2 levels weren’t rising at the time.
        In short, you have disproven your own hypothesis.”

        Not at all.
        It was the only graph of US min temps I could find.
        You DO need to do an OLS trend by eye from around 1970, when GHG forcing outweighed aerosol content.
        Or 1930 if you prefer, to compare with the trend drawn on the max graph just above.
        It would not be negative.

      • 1) BEST has been thoroughly refuted.
        2) Muller was never a skeptic.
        Two lies in one sentence. You are improving, padawan.

      • Toneb: If the only graph that you can find disproves your point, then your point is mighty weak to begin with.

      • Toneb @ January 12, 2017 at 12:51 pm

        “You DO need to do a OLS trend by eye from around 1970 when GHG forcing outweighed aerosols content. Or 1930 if you prefer, to compare with the trend drawn on the max graph just above.

        Merely manifestations of the ~60 year cycle:

      • Funny thing: Outside of conversations about global warming with alarmists, I hardly ever get to use the word “HOGWASH!”

      • and former sceptic Richard Muller

        https://www.technologyreview.com/s/403256/global-warming-bombshell/

        by Richard Muller October 15, 2004

        If you are concerned about global warming (as I am) and think that human-created carbon dioxide may contribute (as I do), then you still should agree that we are much better off having broken the hockey stick. Misinformation can do real harm, because it distorts predictions. Suppose, for example, that future measurements in the years 2005-2015 show a clear and distinct global cooling trend. (It could happen.) If we mistakenly took the hockey stick seriously–that is, if we believed that natural fluctuations in climate are small–then we might conclude (mistakenly) that the cooling could not be just a random fluctuation on top of a long-term warming trend, since according to the hockey stick, such fluctuations are negligible. And that might lead in turn to the mistaken conclusion that global warming predictions are a lot of hooey. If, on the other hand, we reject the hockey stick, and recognize that natural fluctuations can be large, then we will not be misled by a few years of random cooling.

        A phony hockey stick is more dangerous than a broken one–if we know it is broken. It is our responsibility as scientists to look at the data in an unbiased way, and draw whatever conclusions follow. When we discover a mistake, we admit it, learn from it, and perhaps discover once again the value of caution.

      • “and former sceptic Richard Muller”

        roflmao.. you are an inebriated LIAR, toneb

        Muller never was a skeptic.

      • Here is the earliest US temperature graph that there is. The NCDC completely wiped these records out of everywhere, but they missed one: the Wayback Machine.

        And the data: here

        http://web.archive.org/web/20000817172708/http://www.ncdc.noaa.gov/ol/climate/research/1999/ann/usthcnann.txt

        One of the earliest global temperature records from the NCDC. Adjusted out quite a bit of this now.

        http://web.archive.org/web/20000815214002/http://www.ncdc.noaa.gov/ol/climate/research/1997/globet3.txt

      • Muller was a skeptic but no denier. See the difference!

        “See the difference!”

        Did you confuse an exclamation point with a question mark?

        Reading what Muller said, no. Muller is a believer.

        Does Toneb know the difference?

    • So it is arguable that, according to the satellite data, there is still a pause in the warming, and it goes back to 1998. At any rate, the possibility of this cannot be ruled out.

      True, but Lord Monckton and I use the assumption that there has to be a 50% chance of cooling rather than a 30% chance of cooling before we calculate any pause.

      at what level of anomaly will it be before the trend line has no positive slope?

      It is not only the anomaly that is important but the length of time at that anomaly. Right now, at 0.229, RSS is at the zero line. If it drops 0.1 below this, it will take x months to have a zero slope. But if it drops 0.2 below this, it will take x/2 months to have a zero slope, etc.

      • To put numbers on that – the mean of RSS from mid 1997 to Feb 2016 was m=0.26°C. The sum of degrees exceeding that, to end 2016 was 2.6. So (approx) it would take 26 months at 0.16C (m-0.1) or 13 months at 0.06C. Dec was 0.23C.
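        Nick’s figures are simple degree-month bookkeeping: the accumulated excess above the 1997–2016 mean has to be cancelled by months running below it. A minimal sketch:

```python
# Degree-month bookkeeping behind the 26-month / 13-month figures.
excess_degree_months = 2.6      # sum of (anomaly - 0.26 C mean) to end 2016
for shortfall in (0.1, 0.2):    # how far future months sit below the mean
    print(shortfall, excess_degree_months / shortfall, "months")
# 0.1 C below the mean (0.16 C) -> 26 months; 0.2 C below (0.06 C) -> 13 months
```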

      • You simply cannot end a trend with a super El Nino. That is meaningless. If you want to look at a trend the best you can do is stop right before the El Nino kicked in. Then you can look at the El Nino itself to see if it shows any behavior outside the ordinary. If you do this you get something like the following graph.

        http://www.woodfortrees.org/plot/rss/from:1997/to/plot/rss/from:1997/to:2014.75/trend/plot/rss/from:2016.15/to/trend/plot/rss/from:2014.75/to:2016.15/trend

        The pause is still active using this slightly different view. I never liked Monckton’s pause because it was always destined to end whenever we hit a strong El Nino. Now, you could go back to Monckton’s method whenever the current La Nina ends, but that might not be for 2 more years. Better to just ignore ENSO altogether. Has anyone tried to chart only months with ENSO neutral conditions? This would seem to solve the problem.
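        One hedged way to act on the “ENSO-neutral months only” suggestion, assuming a monthly ONI series aligned with the anomaly series; the ±0.5 C threshold is the usual NOAA convention for “neutral”, and any lag between ONI and the temperature response is ignored in this sketch:

```python
# Sketch: fit a trend only to months classified as ENSO-neutral, assuming an
# ONI (Oceanic Nino Index) series aligned month-for-month with the anomalies.
import numpy as np

def neutral_trend(anoms, oni, threshold=0.5):
    anoms, oni = np.asarray(anoms, float), np.asarray(oni, float)
    t = np.arange(len(anoms)) / 12.0
    keep = np.abs(oni) < threshold        # drop El Nino and La Nina months
    slope = np.polyfit(t[keep], anoms[keep], 1)[0]
    return slope * 100.0                  # deg C per century
```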

      • Richard M,
        “You simply cannot end a trend with a super El Nino. That is meaningless. If you want to look at a trend the best you can do is stop right before the El Nino kicked in.”
        So you show a plot starting with a super El Nino, but with the one at the end carefully excluded!

      • Nick, he also included the followup la-nina and stated that using Monckton’s method would be valid after the current la-nina is finished. I too would like to see the trend for ENSO neutral months.

      • Nick, a trend that starts with an El Nino almost always includes the La Nina that comes immediately after. These balance out as far as most trends go. So, yes, starting with an El Nino–La Nina pair is perfectly valid. Ending with an El Nino does not include the balancing La Nina, which is clearly nonsense. I find it amusing you would even ask such a silly question.

      • Richard M
        “Ending with an El Nino does not include the balancing La Nina which is clearly nonsense.”
        Sounds like you’re making up these rules as you go along. Lots of special pleading. But how long do we have to wait for that La Nina? Seems like it might have come and gone.

      • Nick, it’s called common sense and is the only way to understand what is really going on if you want to continue to use trends across ENSO active years. We just had two years of El Nino conditions so it will likely take two more years to see what happens. I still think a better way is to remove all the noise and just look at trends in the neutral years.

      • Richard M January 12, 2017 at 4:11 pm says “The pause is still active using this slightly different view”

        That certainly is a different view. You stop measuring it just before it vanished and then say “Look! It never went away”

  6. Simple question from a guy in telecom: The use of the t-distribution assumes a normal distribution. Are temperatures really distributed normally? Second, do we have good historical data that would tell us that the variance since 1998 is in line with the true population variance, or is everyone just winging it?

    Thanks to whoever answers

      • fair enough, although, since anomalies are just (real temperature – temporal average temperature) … the answer stays the same.
        No.

      • OK, so now I don’t have a whole lot of confidence in the confidence interval, per se. Based on the statistical analysis one would hedge the bet to reflect a 70-30 up-down ratio, but based on the notions about the impact of the last El Nino I’d be willing to go all-in on the notion that the trend line slope moves toward zero.

    • The telecom problem of extracting a signal from signal-plus-noise is trivial compared with extracting a supposed trend from temperature data. Given the number of periodic and quasi-periodic signals, there is zero chance that the ‘noise’ in the temperature data is gaussian.

    • ” or is everyone just winging it?”
      Yes, when it comes to estimating uncertainty. It’s uncertain. You don’t know that the residuals in a trend are iid normal, in fact they aren’t. More details here. There is a lot of debate, for example, about how to treat the autocorrelation of residuals. In the end, you get someone’s estimate of uncertainty. You can be more or less uncertain if you wish.

      • With GCMs there is nothing to be uncertain of. You run a model, and that is what it said. As I said in another comment, uncertainty is about what would happen if you had done things differently. With a model that is easy. You just do things differently and see. Run it again. That is what they do.

    • Mark from the Midwest January 12, 2017 at 8:15 am

      Simple question from a guy in telecom: The use of the t-distribution assumes a normal distribution. Are temperatures really distributed normally?

      Generally, no. They typically are high Hurst exponent data. As a result, the confidence intervals are much wider. See my post A Way To Calculate Effective N.

      Alternatively, climate datasets can also be modeled successfully as an ARMA process, typically with a high AR value and a negative MA value.

      FOR EXAMPLE: the UAH MSU lower troposphere temperature dataset has a Hurst exponent of 0.7, and is modeled as an ARMA process with AR = 0.93 and MA = -0.31.

      Short answer is that many, many uncertainties are underestimated in climate science due to bad statistics.

      w.
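      A short sketch of how one might check the kind of persistence Willis describes, assuming a monthly anomaly array; values near AR ≈ 0.9 and MA ≈ -0.3 would echo the UAH figures he quotes:

```python
# Sketch: fit an ARMA(1,1) to a monthly anomaly series and report the AR and
# MA coefficients, as a check on the persistence Willis describes.
from statsmodels.tsa.arima.model import ARIMA

def arma_persistence(anoms):
    result = ARIMA(anoms, order=(1, 0, 1)).fit()   # ARMA(1,1) plus a constant
    return result.arparams[0], result.maparams[0]
```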

    • So, which temperature data set is the correct one?

      Only GISS will show a statistically significant record this year. But in its defense, GISS had 2015 as its previous warmest year whereas the satellites had 1998 as their warmest year. And relative to 2015, the satellites are much warmer in 2016 than GISS will be. But then again, satellites respond hugely to a strong El Nino.
      Perhaps the best answer I can give is to look at the WTI line on the graph in my article that combines and averages 4 data sets. However I believe it uses UAH5.6 and not UAH6.0.

      • Satellite and radiosonde TRENDS destroy IPCC climate model “projections.” They, additionally, disprove surface temperature estimates. Nitpick the details all you want, but satellite trends show the way.

      • Thanks for the reply.

        I have 2 clocks in my kitchen. One hung on the wall, the other in the microwave oven. Had I got only one, I would know the time, but since I have two, they drift apart. Now one is two minutes ahead.

        I can calculate the mean, but my best option is to synchronize them again with Windows time server, which is my reference time.

        The problem here is that there is no reference point or reference data set and people use the data set that fits better with their narrative.

      • uredarra said….

        “Thanks for the reply.

        I have 2 clocks in my kitchen. One hung on the wall, the other in the microwave oven. Had I got only one, I would know the time, but since I have two, they drift apart. Now one is two minutes ahead”.

        But what would be the point of having two clocks if they showed the same time?

      • “But what would be the point of having two clocks if they showed the same time?”

        So that if they don’t show the same time you know you’ve got the wrong time and can adjust them to be the same again so then you know for sure you’ve got the right time. It’s called climatology and only climatologists have the right clocks.

  7. Whatever happened to the concept of significant digits? Lots of pointless claims and counterclaims could be avoided by eliminating the second and third digits after the decimal. Maybe even the first.

    • None of these data sets should present results to more than a tenth of a degree. Even then, one would wish to see a proper and reasonable error bar set out.

      • Clearly, one accurate temperature data set would be more valuable than four temperature estimate sets, which is what we have now. GISS, NCEI, HadCRUT and Japan are NOT data sets.

      • Prior to the satellite era, no measuring device was up to supporting claims of measuring climate change parameters, much less fundamentally changing our society, economy or energy systems.

        ARGO is only a decade old.

        Give it a few years, then come back with your save-the-world schemes.

    • If we are talking about the original thermometers, you should ignore “everything” after the decimal, since they were graduated in single degrees.

      • If we are talking about the original thermometers, you should ignore “everything” after the decimal, since they were graduated in single degrees.

        In that case, my whole table would be either zeros or ones. How useful would that be?

      • “In that case, my whole table would be either zeros or ones. How useful would that be?”

        Very useful, Werner. It would give the perspective of an endangered professional – a qualified metrologist.

      • Tom Dayton,
        The Law Of Large Numbers assumes the error distribution of the data is normal. Since this is not the case with these data sets, you can’t remove the “noise” by averaging.

      • To add to Paul’s point. The law of large numbers also assumes that the equipment is either the same or identical, and the circumstances surrounding each measurement are also the same.
        Neither is true when it comes to temperature measurements.

      • Yes, the law of large numbers is “a theorem that describes the results of performing the same experiment a large number of times”.
        Measuring temps on different days in widely separated locations with different equipment is not performing the same experiment a large number of times.
        Not even close

      • The most elegant statement of how to present numerical results I have seen is that one writes down the first known digit(s), followed by the first uncertain digit. I work in a government regulatory agency, and battled for 5 years over the misuse of a mantissa containing a lead digit and three decimal digits. The number in question is a risk, and it is generated by a Monte Carlo analysis which has never (and can never, even in principle) been shown to produce accurate results. Seldom are more than 300 Monte Carlo runs done in this particular calculation, which may converge on a first digit. But each digit after that requires 100 times more calculations than the one before, so there should be no discussion of anything but the first digit. The really ridiculous part, though, is that the risk number emerging from this Monte Carlo is then multiplied by another probability, whose value can range from 0.3 to 0.7 depending on which expert one asks.

        Going back to the initial statement, then, one could not write down any known digits, and would thus be constrained to the first unknown one – which might itself be wrong by a factor of 2.33. Yet analysts wasted an enormous amount of government and industry time and effort (in the millions of dollars) expressing “concern” that a number was coming out as 2.978 rather than 2.971. None of them went to school in the precalculator era (I used a slide rule in my freshman and sophomore engineering classes).

        I managed to get the rule changed to use a one digit mantissa, rounded from the nearest decimal fraction. It’s still ultra conservative, and it is going to reduce cost by millions of dollars a year.

      • I bought the first HP35 on campus ($395 in 1973!), even on a lean student budget. It is the only way I passed vector analysis. Graduated as Outstanding Senior Engineer, top of my class in 1974, having upgraded to an HP45 by then.

        I still have both of them (and my last sliderule), and use an HP12C for everyday math when away from the PC. Screw anybody who doesn’t understand Reverse Polish Notation or how to use a sliderule.

        I pre-paid my student GI Bill and current VA healthcare in Vietnam. Charlie Skeptic was born in the mud, the blood and the beer. Everybody got a piece of him.

        All of you, dance around the numbers all you want. Nothing has changed in over 10,000 years, unless, maybe, it is slightly cooler. Trends are until they aren’t.

        IPCC climate models are bunk. CAGW is religion.

    • Whatever happened to the concept of significant digits?

      As a retired physics teacher, I know exactly what you mean. Think of me as a humble reporter just reporting what the learned men are saying. I report what they are “saying” without trying to tidy up their “language” first.
      But in my defense, my whole introduction and Section 1 dealt with uncertainties.

      • To be clear, I’m not criticizing your reporting or analysis. Just think it’s silly that (somewhat arbitrary) global averages are reported to thousandths of a degree.

      • what happened was that computers now do the number crunching and they do not find it tedious.
        so you can go ahead and round your data input as much as you want but then the computer is going to do the calculations with as many bits as the alu uses regardless. and it will mercilessly inflate your one digit decimal integer fractions into infinity and beyond!
        then it’s going to output numbers that you have to explicitly format if you want them rounded down – but when the computer prints the stuff it’s more tedious to spend time at desk editing – that’s what grad students and secretaries are for.

    • OMG we finally have a mathematical student. I remember when a Township Engineer (Civil) wanted me to calculate a retention pond size to four decimal places when the soil coefficient was 0.75!! He was highly insulted when I pissed myself laughing. You Climate Wonks never learned Algebra I.

  8. “Does it seem odd that only GISS will probably set a statistically significant record in 2016?”
    Maybe you won’t have to answer that question in a few months time.

      • I would say, temperatures are still declining from the El Nino, which is still masking the underlying trend. We will have a much better picture in the next year.

      • “Are you suggesting that GISS is part of the swamp that Trump will drain?”
        My hope is that a soup to nuts audit of procedures and practices will be ordered and, additionally, transparency will be implemented and enforced regarding such practices as adjustments, infilling, homogenization, and numerous other highly questionable and dubious methods employed by GISS and other participants in the CAGW cabal.
        And someone at the top should have to ‘splain to all of us peons exactly how it is that all of the adjustments add up to a perfect match of the CO2 concentration chart!

  9. Edited by Just the Correct Punctuation.
    “As can be seen from the above graphic, the slope is positive from January 1998 to December 2016, however with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998.”
    This should either be:
    “As can be seen from the above graphic, the slope is positive from January 1998 to December 2016. However, with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998.”
    Or
    “As can be seen from the above graphic, the slope is positive from January 1998 to December 2016; however, with the error bars, we cannot be 95% certain that warming has in fact taken place since January 1998.”

    “If anyone has an exact percentage here, please let us know, however it should be around a 60% chance that a record was indeed set for RSS.”
    This should either be:
    “If anyone has an exact percentage here, please let us know. However, it should be around a 60% chance that a record was indeed set for RSS.”
    Or
    “If anyone has an exact percentage here, please let us know; however, it should be around a 60% chance that a record was indeed set for RSS.”

    This has it correct in the original:
    “Since this is less than the error margin of 0.1 C, we can say that 2016 and 1998 are statistically tied for first place. However there is still over a 50% chance that 2016 did indeed set a record, but the probability for that is far less than 95% that climate science requires so the 2016 record is statistically insignificant.”

    If it’s important to get the science and maths right, it’s also important to get grammar and punctuation correct.

    Incidentally, an issue is something over which there is disagreement. Thus whether or not the world is getting hotter too quickly is an issue.
    A problem is something which has to be sorted. If your central heating is not working on a very cold day you have a problem, not an issue. (The issue might be the cause of your problem.)
    So can we please go back to the sensible days when we called a problem a problem and we called an issue an issue.

    • I tried not to let it happen, but I could not help it…my eyeballs just rolled clean out of their sockets.

  10. UAH 5.6 TLT is significant from 1998- 2016, according to the Moyhu trend calculator:
    Rate: 1.425°C/Century;
    CI from 0.220 to 2.631;
    t-statistic 2.317

    This does not lend good credentials to the AMSU diurnal drift correction and satellite picks in UAH v6.
    UAH 5.6 relies on non-drifting AMSU satellites and should be a good benchmark for validation of trends in v6.

      • The only “validation” of AMSU-era trends that I have seen over at Spencer’s, was that v6 agreed with RSS TLT 3.3. But that’s history, v3.3 was proven faulty long ago.
        Actually, RSS tested the UAH 5.6 concept when they developed their new product. I think it was called MIN_DRIFT. It corroborated the finally chosen method, but had certain limitations (no nondrifting MSU-satellites).

      • UAH 6 also agrees with the only pristine surface data set in the world, USCRN

        So don’t let the FARCE that is the GISS surface data fool you too much.

    • Ending a trend with a super El Nino is meaningless. You know it and I know it. Quit producing nonsense.

      • No Werner, it doesn’t balance out as I explained above. You MUST include EL Nino – La Nina pairs for a valid trend or completely eliminate all months that are not ENSO neutral.

  11. Does anyone else note the correlation between the WFT graph and that graph floating around showing the democrats loss of positions under Obamacare? I’m sure there must be some causation we can find, since it comes from trees possibly it’s the amount of hot air generated by politicians relates to tree growth which is inverse of their ability to stay in office?

  12. The 1998-2016 trend is useless, unless you can explain why January 1998 is a relevant beginning date, as opposed to, say, June 1997 or October 1999.
    I can bet the words “cherrypicked” and “El nino” are coming …

    “Section 1” is far more interesting.
    Seems to me a very large discrepancy, that Hadcrut4.5 and GISS report a significant warming, while UAH, RSS and Hadsst3 do not.

    • I agree. Actually if you are ending right after a strong El Niño, then you could make a point that starting right after a previous strong El Niño would make some sense, but obviously then you get a rising trend.

      Alternatively you could go from peak El Niño to peak El Niño. You still get contamination from each El Niño not being exactly the same, but you could argue that the effect of that is likely to be smaller than choosing any other starting point.

      Best approach if you want to discuss rate of warming would be to represent rate of warming, and not temperatures.

      The 1998-2016 trend is useless, unless you can explain why January 1998 is a relevant beginning date, as opposed to, say, June 1997 or October 1999.

      Starting at the beginning of one very strong El Nino and ending with the end of an equally strong later El Nino is as fair as you can get. It is better than going from El Nino to La Nina or vice versa.

      (Hadcrut4.5 will not show a statistically significant record.)

      • Absolutely not. El Nino and La Nina are not random events. La Nina typically comes right after an El Nino. Hence if you start right before El Nino you capture both the El Nino and La Nina at the start of the trend. These tend to balance out. If you stop right after an El Nino you miss out on the balancing effect of the La Nina.

        The only valid way is to start before one of the pairs and end after one the pairs. Nothing else is valid. Or, as I keep saying, build a graph with only ENSO neutral months. Throw out all the El Nino and La Nina months. That is, treat them as outliers.

    • Seems to me a very large discrepancy, that Hadcrut4.5 and GISS report a significant warming, while UAH, RSS and Hadsst3 do not.

      That is true! And both have seen many “adjustments”.

    • “Seems to me a very large discrepancy, that Hadcrut4.5 and GISS report a significant warming, while UAH, RSS and Hadsst3 do not.”
      UAH, RSS and Hadsst3 are measuring different places. But for more confusion, UAH 6 is not significant but UAH 5.6 is. Remember though that there is a continuum of uncertainties, and a 95% cut-off is arbitrary.

      People often assign the wrong significance to significance.

      • But for more confusion, UAH 6 is apparently not significant but UAH 5.6 apparently is.

        FIFY. If they are both apparently significant or not significant, then they may reflect reality. If one is and one isn’t, then at least one of them is wrong.

        GISS is not even a serious contender – too many tenuous “adjustments”.

      • Do any of you understand “significant?”

        Today’s climate does not differ materially from any of those experienced over the Holocene. Until it does, it’s all mental masturbation, number mongering and sophisticated-sounding speculation about the past and future. Statistics cannot describe the unknown. Unhinged numbers are not reflections of the reality.

        Anybody that understands the climate and its drivers, stand up. Everyone else, shoot them. They are liars and charlatans.

      • “The cumulative adjustments to GISS are small compared to the adjustment made in going from UAH 5.6 to 6.”

        Not really. And, adjustments to UAH have gone both ways. “Adjustments” to GISS are predominantly in one direction. It will be taught in years to come as a particularly egregious exemplar of confirmation bias.

      • Bartemis,
        “Not really. And, adjustments to UAH have gone both ways. “Adjustments” to GISS are predominantly in one direction.”

        Here is the graph since 1979 (duration of UAH), all data set to the UAH anomaly base of 1981-2010, with 12 month running mean on monthly data. The GISS versions are as taken from the Wayback machine:

        Here is the difference graph. It’s pretty much downhill for UAH. Not much pattern for GISS

      • Bartemis,
        “The difference between UAH versions is basically a bias shift after 2004”
        Your WFT plots really need at least 12-month running mean so one can see what is happening. Here is WFT for 5.6 and 6 with that smoothing and trends shown. There is a particularly large trend difference (downward) since 1997.
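        A sketch of the comparison Nick describes, assuming the two versions are available as monthly pandas Series on a date index: difference them, put the difference on a common baseline, and apply a 12-month running mean:

```python
# Sketch of comparing two UAH versions: difference, re-baseline, and smooth
# with a 12-month running mean. uah56 and uah60 are assumed to be aligned
# monthly pandas Series indexed by date.
import pandas as pd

def smoothed_difference(uah56: pd.Series, uah60: pd.Series) -> pd.Series:
    diff = uah60 - uah56                          # version 6 minus version 5.6
    diff = diff - diff.loc["1981":"2010"].mean()  # common 1981-2010 baseline
    return diff.rolling(12, center=True).mean()
```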

      • What is it with the “trends” crowd? They think a least squares linear regression is some kind of holy thing.

        You’re drawing trends of noise, and drawing conclusions. Stop it.

      • UAH5.6 to UAH 6…. KNOWN justifiable adjustments

        GISS…. unjustified scam driven adjustments to cover a failed hypothesis and agenda

        There is a HUGE difference, Nick…

        And I’m pretty sure you KNOW that, if your pay packet didn’t depend on you NOT knowing

      • Bartemis January 12, 2017 at 3:36 pm
        The difference between UAH versions is basically a bias shift after 2004:

        http://woodfortrees.org/plot/uah6/from:2000/plot/uah5/to:2004/plot/uah5/from:2004/offset:-0.08

        The difference between the two versions of UAH is much more than that. Version 6 is a different product, unlike RSS they didn’t change the name. Version 6 represents the troposphere with a maximum weighting at 4km as opposed to version 5.6 which has a maximum weighting at 2km. The reason for this was to use a more robust method for calculating the temperature which avoids the interference with the surface present in version 5.6. RSS did a similar change and produced a new product TTT which has a very similar weighting to UAH version 5.6, like UAH I suspect that they will drop the TLT product since the surface interference is inherent to it (RSS always recognized this by not covering high altitude cold regions: Antarctica, Himalayas and Greenland). The new method allows wider latitude coverage because it doesn’t have the same deficiencies near the poles that the previous method did.

      • Thank you for the info, Phil. The practical effect of it on the data is an apparent constant displacement after 2004. But, it is nice to know the reason, and that it is a good reason for expecting the later product to be more accurate.

      • “There is a particularly large trend difference (downward) since 1997.”

        Yes Nick ….. since the new AMSU on NOAA-15 took over…

      • Toneb said, January 13, 2017 at 10:54 am:

        Yes Nick ….. since the new AMSU on NOAA5 took over…

        Nice try, Toneb, but no. The change in the difference trend evidently starts a few years before the MSU->AMSU transition. In fact, it starts in 1995-1996:

        NOAA on the RATPAC-A dataset:
        https://www.ncdc.noaa.gov/data-access/weather-balloon/radiosonde-atmospheric-temperature-products-accessing-climate
        “RATPAC-A contains adjusted global, hemispheric, tropical, and extratropical mean temperature anomalies. From 1958 through 1995, the bases of the data are spatial averages of the Lanzante et al. (2003a,b; hereafter LKS) adjusted 87-station temperature data. After 1995, the Integrated Global Radiosonde Archive (IGRA) station data form the basis of the RATPAC-A data. NOAA scientists used the so-called “first difference method” to combine the IGRA data. This method is intended to reduce the influence of inhomogeneities resulting from changes in instrumentation, observing practice, or station location.”

        A bit more in depth:
        ftp://ftp.ncdc.noaa.gov/pub/data/images/Ratpac-datasource.docx
        “The scientific team derived the LKS data from data in the Comprehensive Aerological Reference Dataset (CARDS) obtained from NCDC (…). A team of three climate scientists adjusted monthly means for 87 carefully selected stations using a multifactor expert analysis, without use of satellite data as references and with minimal use of neighbor station comparisons. The team visually examined time series of temperatures at multiple levels, night-day temperature differences, temperatures predicted from regression relationships, and temperatures at other nearby stations. They also considered metadata, statistical change points, the Southern Oscillation Index and the dates of major volcanic eruptions. Using these indicators, they identified artificial change points and remedied them by either adjusting the time series at each affected level or, if adjustment was not feasible, by deleting data. Examinations were made of the adjustments for reasonableness. RATPAC uses the “LIBCON” version of the LKS adjusted data, which includes the most complete set of adjustments and uses the preferred adjustment method. The LKS data consist of monthly temperatures for 16 atmospheric levels from the surface to 10 mb, from 1948 to 1997.
        (…)

        IGRA
        To remedy various recently identified problems in the CARDS database, NCDC has undertaken a wholesale revision of the CARDS quality-control procedures (Durre et al. 2005). Using the resulting IGRA dataset, rather than the CARDS dataset, extended the station data past 1997. The global mean time series from the (CARDS-based) unadjusted LKS and IGRA differ most notably before 1965.

        Combining LKS and IGRA
        Because of the careful scrutiny used by the LKS team to create the adjusted LKS data, LKS is likely to be more reliable than a dataset derived by applying the first difference method to the IGRA data before 1995. The RATPAC team therefore uses LKS instead of IGRA before 1995 to reap the substantial benefits of the LKS homogeneity adjustments.

        However, because of the differences between the datasets before 1965, view RATPAC data from that period cautiously.”

        Sorry, but this elaborate adjusting and combining procedure does not lend much credence to the long-term (climate-gauging) accuracy of the RATPAC-A series …

      • Kristian, nice try, but it’s worth nothing, because RATPAC agrees with other radiosonde datasets before and after 1996. You may even compare UAH with unadjusted radiosonde data, but the “AMSU break” is still there. Further, the AMSU era started in the summer of 1998, but there is an MSU/AMSU overlap until summer 2001 (in UAH 6). The year 2000 is in the middle of this transition, hence a proper start point for the “AMSU era”. Starting this era in 1998, 2000 or 2001 doesn’t matter; the trend break in UAH 6 vs radiosondes is similar.

      • O R said, January 15, 2017 at 1:44 am:

        You may even compare UAH with unadjusted radiosonde data, but the “AMSU break” is still there. Further, the AMSU era started in the summer of 1998, but there is an MSU/AMSU overlap until summer 2001 (in UAH 6). The year 2000 is in the middle of this transition, hence a proper start point for the “AMSU era”. Starting this era in 1998, 2000 or 2001 doesn’t matter; the trend break in UAH 6 vs radiosondes is similar.

        There’s no “AMSU break”, Olof. The ‘breaks’ occur earlier and later:

        (…) Ratpac agree with other radiosonde datasets before and after 1996.

        Yes, it’s adjusted to agree, Olof. However, it very clearly does NOT agree with ANY satellite OR surface dataset. And THAT’S where the problem lies. The radiosonde datasets do NOT represent “Troposphere Truth” as you seem to think. They’re a jumbled MESS. Their trend is way too low from 1979 to 2001, and way too high from 1996 to 2016.



      • Kristian, you obviously have a lot to learn.
        Avoid all fuss about things that are not statistically significant. There is no significant difference between radiosonde, surface or satellite data in 1979-1999.
        Surface and troposphere data are not the same thing. Satellite and radiosonde data can be comparable if you check possible effects of weighting functions and geographical subsampling.
        The period of 1979 to 1999 is problematic due to two major volcanic eruptions. You can’t expect that surface and troposphere agree during this period.
        Radiosonde data are far more reliable after year 2000 than before that. There has been technical development, metadata is far better, and adjustments due to inhomogeneities are small.

        Radiosonde datasets rely on millions of factory-calibrated single-use instruments that are sent aloft. Satellite data rely on a handful of instruments, where you can’t reliably check the calibration once they are launched. Actually, the diverging trends in the AMSU era rely on one single type of instrument used for AMSU channel 5, which differs from that of the other AMSU channels.
        The trend in AMSU 5 should be right between those of AMSU 4 and 6, but it is actually near half that rate. Strange, isn’t it? The coolest sensor in the troposphere that doesn’t agree with anything else… ;-)
        ftp://ftp.star.nesdis.noaa.gov/pub/smcd/emb/mscat/data/AMSU/AMSU_v2.0/AMSUA_only_Monthly_Layer_Temperature/AMSU_L3_Inter-Bias_vs_Merged_Trend_Ch4-8.jpg

        If you want to compare data, do it with difference charts… If we look at the following chart again:

        The blue graph is the fairest comparison; here are my fifty cents…
        The drop doesn’t start in 1996; it starts in 1998 when the AMSU is introduced.
        The drop accelerates in 2001 when the last MSU disappears and the rogue NOAA-15 runs alone. UAH drops like a rock until the non-drifting Aqua and other satellites chime in near 2005.
        When Aqua is discontinued after 2010, the drop continues…

      • O R said, January 16, 2017 at 6:59 am:

        Kristian, you obviously have a lot to learn.

        Ah, the condescending route. The one taken by a man who’s got nothing of factual, objective (actually scientific) value in his argumentation.

        There is no significant difference between radiosonde, surface or satellite data in 1979-1999.
        Surface and troposphere data are not the same thing. Satellite and radiosonde data can be comparable if you check possible effects of weighting functions and geographical subsampling.
        The period of 1979 to 1999 is problematic due to two major volcanic eruptions. You can’t expect that surface and troposphere agree during this period.

        LOL! Stop it with your apologetic nonsense. UAH were forced to increase their pre-2001 trend and did. The UAH team is actually pretty unique in their adjustment history, in that their record includes a fairly even distribution of UP and DOWN adjustments over time. Normally such a distribution is what would be expected, but it is something we hardly see anywhere in “Climate Science” today. Here, rather, the adjustments of relevant observed climate parameters quite consistently have a strong lopsided tendency (something like 9:1), steadily moving the long-term trend of the parameter in one particular direction, and then pretty much always in the direction of MODEL predictions/expectations. Which is naturally highly suspicious in and of itself …

        UAH used to agree very well with the radiosondes up until 2001. Then an obvious need for an upward correction was found and adjusted for. By the UAH team. The people compiling the radiosonde datasets, however, apparently never got the memo, and thus never made the same necessary adjustments. And so, RATPAC, as our case in point, is still in a slump post 1995, making its 1979-2001 trend way too low, still to this day:

        The situation post 1995 is even much worse, though. You say:

        Radiosonde data are far more reliable after year 2000 than before that. There has been technical development, metadata is far better, and adjustments due to inhomogeneities are small.

        You just don’t get it, do you Olof? This isn’t about the actual radiosonde data from the individual stations. It’s about how it’s ASSEMBLED (and adjusted) into what is called a climate quality dataset.

        The post-1995 progression of the RATPAC data is simply hopelessly out of touch with reality. It doesn’t fit at all with any other relevant physical parameter: the surface data, the satellite data, or the OLR at the ToA data from CERES.

        And the reason why is how the original data – riddled with inhomogeneities – is adjusted and compiled into a global, long-term series. It is simply a subjectively constructed dataset. Anyone can see this.

        It’s as if you think the people doing the adjustments to obtain the final radiosonde datasets somehow magically have the “Troposphere Truth” hardwired into them and so simply cannot be wrong, even though their end product disagrees strongly with both the satellite AND the surface datasets, while the people making the satellite datasets are pure dimwits who don’t know how to correct for anything the satellites and their instruments do (and/or aren’t even aware that such a need exists), and therefore somehow don’t …

        UAHv6 gl vs. gl OLR at the ToA (CERES EBAF, directly based on CERES SSF1deg and SYN1deg data):

        This is such a tight fit it is hard to claim it a coincidence. Why? Because we know the physical relationship between tropospheric temperatures and OLR at the ToA: the latter is (principally) a direct radiative effect of the former. We see the short-term cloud/humidity perturbations to this relationship during strong ENSO events, but beside these, the two parameters follow each other basically in lockstep over time. We see the same thing pre 2000, between UAH (and RSS) and (this time) ERBS Ed3_Rev1:


      • This tight correlation is no coincidence, Olof.

        Meanwhile, the RATPAC-A trend for Jan ’79 to Jun ’01 is +0.085 K/decade, while the GISTEMP LOTI trend over that same interval is +0.135 K/decade, almost 60% higher. All that spectacularly reverses when the RATPAC-A dataset suddenly sports a +0.242 K/decade trend from Jan ’99 to Dec ’15 (almost three times its own 79-01 trend!), a warming rate 45% higher than the GISTEMP LOTI trend over the same span, which rises at a mere +0.166 K/decade. (And we know the GISTEMP data itself is already rising way too fast post 1997…)

        Why anyone, based on these un-physical divergences, would trust the RATPAC-A dataset is beyond me. Self-inflicted ideological, dogmatic blindness is the only explanation as far as I can see.

        Once again, Olof: There is no AMSU break. The breaks are before and after 1998-2000. It’s right here, right in front of you (graph from Tamino). All you need to do is look:

        You come off as a clown trying to argue against (and/or ignore, and/or deny) this plot, and the other plots that I’ve shown you.

  13. We should not, repeat NOT, be discussing ‘warming’ as if any warming proved AGW!

    It is not enough for there to be warming – there has to be warming in compliance with the model predictions. In fact, there has to be ‘unusual’ warming – and any warming which matches the warming rate seen before 1950 should be presumed to be natural…

    • Agreed 100%. I’m sick of the unspoken presumption that “Any warming = caused by CO2.” BS! There is still no empirical evidence that CO2 causes temperature to rise. And the HYPOTHETICAL CO2 effect on temperature requires that “all other things” remain “equal,” which of course is not reality.

      According to the Earth’s climate history, there is either NO relationship between CO2 and temperature (geologic time scales, hundreds of millions of years) or, on shorter time scales where a correlation DOES exist (tens of thousands of years, or less), CO2 FOLLOWS temperature. There are also striking examples of REVERSE CORRELATION which simply would not exist if CO2 had any significant effect on temperature. It therefore holds that CO2 most certainly does NOT “drive” temperature. Not unless PROVEN otherwise. The Null Hypothesis (climate change is NATURAL) stands.

      Temperature trend flat, up, or down, it doesn’t mean a damn thing – CO2 is NOT the driver, unless someone can PROVE otherwise.

    • It’s not the warming we are looking for; it is the lack of warming. In science this is called falsification. Good experiments try to falsify theories, and if they fail, the theory is strengthened (still not proof). However, all it takes is one failed experiment and the theory is toast.

      AGW has already been falsified according to Santer et al 2012 at 95% confidence. All we are doing now is looking to up that confidence level.

      • Sure Nick, according to you anything that doesn’t support AGW is nonsense. However, the facts differ from your opinion. Santer found 95% of climate model runs show a warming trend within 17 years. We’re now over 20 years without any statistically significant warming. We are way past the 17 years specified by Santer and probably past 99% of climate model outputs. And it is only going to get worse, as La Nina conditions could very well hang around for a couple of years.

        Your denial is noted.

      • Richard M
        “However, the facts differ from your opinion. Santer found 95% of climate model runs show a warming trend within 17 years.”
        Quote please. You are garbling the facts. What he said in para 30 was quite the contrary. 95% of control (unforced) runs were less than observed UAH/RSS (my bold):

        On timescales longer than 17 years, the average trends in RSS and UAH near-global TLT data consistently exceed 95% of the unforced trends in the CMIP3 control runs (Figure 6d), clearly indicating that the observed multi-decadal warming of the lower troposphere is too large to be explained by model estimates of natural internal variability.

        And he didn’t speak anywhere of requiring “statistically significant” trends. The idea that you can deduce something useful from a failure of statistical significance is a local WUWT fantasy.

      • You’re playing a game, Nick. I was basing my statement on the abstract.

        “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”

        Maybe I assumed too much that this would be based on statistical significance. This clearly has nothing to do with the control runs you mentioned. Of course, if you look past 2011 I suspect most of the observational trends up to the El Nino would be equal to or less than the control runs.

        So, the question is what do these words in the abstract mean? If it means what it seems to say, then any trend over 17 years that shows no warming implies there are no human effects. That sounds an awful lot like falsification to me.

        “You’re playing a game, Nick. I was basing my statement on the abstract.”
        That statement says nothing about “95% of runs show…”, as you quoted. The other one does. But there is no substitute for proper referencing and quoting. Then readers know what you are referring to.

        The statement in the abstract is based on the quote I gave. It’s where the 17 years comes from. And it simply says, it’s no use looking for human effects in records of less than 17 years. What it does not say:
        1. You’ll find effects in records of 17 years + 1 day
        2. You can try any test you can dream up after 17 years, and if it fails, AGW is falsified.

        It certainly doesn’t say that 17 years without statistically significant warming would disprove something. That is a local nonsense. Lack of SS never proved anything. I gave below the example of UAH V6 from 1979 to 1998. 19 years of no SS warming. But the trend was 0.982 C/Cen. Not too far below what was expected. And you can get quite long periods showing trends exactly as expected, but not SS. That is obviously not disproving what was expected.

      • Nick, it is climate models that are setting the limit of 17 years. Hence, trends of 17+ years are demonstrating the climate models do not correctly describe reality. It is this claim that is falsified. Now, that does not disprove AGW but it certainly places limits on it.

  14. I say
    I must be one of the 30% who say it is cooling

    my long term result is in agreement with RSS /UAH

    about +0.012K/ yr warming since 40 years ago

    however, according to my results we have been cooling by at least 0.01 K/yr since 2000

  15. ” we cannot be 95% certain that warming has in fact taken place since January 1998″
    As I have said before on occasions, this is quite a wrong understanding of trend uncertainty. There is high confidence that the warming in fact took place. The uncertainty refers to the statistical modelling of weather variability. It says that there is at least a 5% chance that the weather could have turned out differently, with negative trend. But we now know how the weather turned out. The uncertainty is about the weather that might have been, not the weather that was.

    The way to think about stated uncertainties is that they represent the range of results that could have been obtained if things had been done differently. And so the question is, which “things”. This concept is made explicit in the HADCRUT ensemble approach, where they do 100 repeated runs, looking at each stage in which an estimated number is used, and choosing other estimates from a distribution. Then the actual spread of results gives the uncertainty. Brohan et al 2006 lists some of the things that are varied.

    The underlying concept is sampling error. Suppose you conduct a poll, asking 1000 people if they will vote for A or B. You find 52% for A. The uncertainty comes from, what if you had asked different people? For temperature, I’ll list three sources of error important in various ways:

    1. Measurement error. This is what many people think uncertainties refer to, but it usually isn’t. Measurement errors become insignificant because of the huge number of data points that are averaged. Measurement error estimates what could happen if you had used different observers or instruments to make the same observation, same time, same place.

    2. Location uncertainty. This is dominant for global annual and monthly averages. You measured in sampled locations – what if the sample changed, and you had measured in different places around the earth? Same time, different places.

    3. Trend uncertainty, what we are talking about above. You get a trend from a statistical model, in which the residuals are assumed to come from a random distribution representing unpredictable aspects (weather). The trend uncertainty is calculated on the basis of: what if you sampled differently from that distribution? Had different weather? This is important for deciding if your trend is something that might happen again in the future. If it is a rare event, maybe. But it is not a test of whether it really happened. We know how the weather turned out.
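
    As a concrete (if crude) illustration of point 3, here is a minimal Python sketch. Everything in it is made up for illustration: a fixed underlying trend, plain white noise (real monthly anomalies are autocorrelated, which would widen the spread considerably), and 10,000 alternative "weather histories". The spread of the fitted slopes is what a trend confidence interval is describing.

    import numpy as np

    rng = np.random.default_rng(0)
    months = np.arange(228)                  # 19 years of monthly anomalies, e.g. Jan 1998 - Dec 2016
    true_trend = 0.45 / 1200.0               # an assumed 0.45 C/century signal, expressed per month
    noise_sd = 0.15                          # assumed white-noise scatter of the monthly anomalies

    trends = []
    for _ in range(10000):                   # 10,000 alternative "weather histories"
        y = true_trend * months + rng.normal(0.0, noise_sd, months.size)
        trends.append(np.polyfit(months, y, 1)[0] * 1200.0)   # fitted slope, back in C/century

    trends = np.array(trends)
    lo, hi = np.percentile(trends, [2.5, 97.5])
    print("spread of fitted trends: %.2f to %.2f C/century" % (lo, hi))
    print("fraction of realizations with a negative fitted trend: %.3f" % (trends < 0).mean())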

    • Thank you Nick!

      I am sure you are aware of the interview with Phil Jones in 2010. Here is one question and answer:
      B – Do you agree that from 1995 to the present there has been no statistically-significant global warming
      Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.”

      To put these numbers into perspective, 0.12 C/decade is almost 3 times the 0.0450 C/decade that RSS showed from 1998 to 2016. The periods are of comparable length: 19 years versus 15 years.
      My question to you is this:
      If Phil Jones were asked the following, what would you think he would say:
      Do you agree that from 1998 to 2016 there has been no statistically-significant global warming using RSS?

      • Werner,
        I don’t see the point of that response at all. Jones is just saying that in a certain circumstance you could calculate a significance, and it was close to 95%. That can happen. I doubt if Jones would comment on RSS V3.3 (particularly given their “use with caution” advisory), but if he did, he would probably cite the same sorts of results that I calculate.

        I think the point Jones was making was that the trend, although marginally significant, happened, and it wasn’t small. You just can’t (yet) rule out the possibility that it could have been a quirk of weather. A little more time needed for that.

      • I doubt if Jones would comment on RSS V3.3 (particularly given their “use with caution” advisory)

        Perhaps I should have said UAH6.0beta5. But WFT has not updated it since October with its new URL.
        But if asked about the correctness of my title, I believe he would agree that the warming on UAH6.0beta5 is not statistically significant from 1998 to 2016 at the 95% level.

        Another way of looking at the 95% certainty is to use the uncertainty in the yearly averages. Using Dr. Spencer’s 0.1, if we assume 1998, 1999 and 2000 were 0.1 C higher than given, and that 2016, 2015 and 2014 were 0.1 C lower than given, the slope from 1998 to 2016 would be negative. True?

        “Using Dr. Spencer’s 0.1, if we assume 1998, 1999 and 2000 were 0.1 C higher than given, and that 2016, 2015 and 2014 were 0.1 C lower than given, the slope from 1998 to 2016 would be negative. True?”
        I guess so. But it’s a complete misuse of the notion of CI. You can maybe say that there is a 5% chance that 1998 was 0.1 higher than given. But the chance that both 1998 and 1999 were 0.1C higher is much smaller – as independent events, something like 0.0025%. The chance of that whole scenario is vanishingly small.

        I’ve never understood why Jones’ 1995 quote gets repeated. It’s just a simple statement of calculation, like you or I do routinely. It doesn’t mean much, and I doubt Jones would have raised it himself, but since he was asked…

      • as independent events, something like 0.0025%

        Good point!

        I’ve never understood why Jones’ 1995 quote gets repeated.

        Having long periods of statistically insignificant warming, or even a pause, suggests that global warming is not happening at catastrophic levels that need to be stopped at all costs.

      • “Having long periods of statistically insignificant warming or even a pause suggests that global warming is not happening…”
        No, it doesn’t, as I keep saying to no apparent effect. If you want to know about warming, look at the trend that happened. “Statistically insignificant” means you can’t be sure through the fog of noise that it will continue. It doesn’t suggest it isn’t happening. Not having glasses can make you uncertain of what is happening; it doesn’t suggest that there is nothing.

      • Werner Brozek January 12, 2017 at 10:12 am
        Thank you Nick!

        I am sure you are aware of the interview with Phil Jones in 2010. Here is one question and answer:
        “B – Do you agree that from 1995 to the present there has been no statistically-significant global warming
        Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.”

        Yes, the loaded question was chosen carefully to produce the answer ‘Yes’; if 1994 had been chosen, the positive trend would have been significant. If you choose a short enough period you can guarantee non-significance, and that is what was done; Jones pointed this out in his answer.
        In a BBC interview a year later Jones commented that the HadCRUT warming trend since 1995 was now statistically significant:

        “Basically what’s changed is one more year [of data]. That period 1995-2009 was just 15 years – and because of the uncertainty in estimating trends over short periods, an extra year has made that trend significant at the 95% level which is the traditional threshold that statisticians have used for many years.

        “It just shows the difficulty of achieving significance with a short time series, and that’s why longer series – 20 or 30 years – would be a much better way of estimating trends and getting significance on a consistent basis.”

      • “Statistically insignificant” means you can’t be sure through the fog of noise that it will continue.

        And if we are not sure it will continue, should we spend billions to stop it? But even if we know it will continue, can we know it will be catastrophic before we run out of oil?

      • Nick, you’re dead wrong on this one: “‘Statistically insignificant’ means you can’t be sure through the fog of noise that it will continue. It doesn’t suggest it isn’t happening.”

        It really does mean that you can’t be sure that it is happening.

        A second point: statistically significant simply means you can see the signal through the noise. It doesn’t mean that what you see means something. Warmer is good. I used to live near Chicago. I can assure you that colder is bad. If there were a statistically significant cooling signal, that really would mean something.

      • “And if we are not sure it will continue, should we spend billions to stop it?”
        Well, we are not sure that we’ll be attacked, but we spend billions on defence forces. But in this case there are good physical reasons to expect warming. Observation shows warming as expected, but maybe you can’t quite rule out that it is due to weather variation (provided you choose the wobbliest dataset). That doesn’t give reason to doubt the physical reasons.

      • “I’ve never understood why Jones’ 1995 quote gets repeated. It’s just a simple statement of calculation, like you or I do routinely. It doesn’t mean much, and I doubt Jones would have raised it himself, but since he was asked…”

        I suspect the question was very leading. The start point of 1995 was chosen to be the earliest point where the warming was statistically insignificant. But if you choose a period for that reason the significance test is invalid, and certainly misleading.

        In any event, the main purpose of the question was to get the headline “scientist admits no significant warming in last 15 years”, knowing that most people won’t understand the difference between statistical significance and the everyday meaning of the word.

        By the way, all data sets apart from UAH 6 and RSS 3.3 now show significant warming since 1995. That’s using the 2 sigma values from

        https://skepticalscience.com/trend.php

      • By the way, all data sets apart from UAH 6 and RSS 3.3 now show significant warming since 1995. That’s using the 2 sigma values from
        https://skepticalscience.com/trend.php

        RSS was statistically significant from about November 1992 which is pretty close to Nick Stokes’ July 1994. But I only saw UAH5.6 and not UAH6. Or did I miss it somehow? And UAH5.6 did show significant warming since 1995. You need to see Nick’s to see UAH6.0beta5.

      • “But I only saw UAH5.6 and not UAH6. Or did I miss it somehow? And UAH5.6 did show significant warming since 1995. You need to see Nick’s to see UAH6.0beta5.”

        I assumed UAH 6 is not significant since 1995 because RSS 3.3 is not significant and UAH 6 is very similar to RSS 3.3. I used the Skeptical Science trend calculator, rather than Nick Stokes’, simply because it has larger confidence intervals and so requires a higher standard of significance.

        I used the Skeptical Science trend calculator, rather than Nick Stokes’, simply because it has larger confidence intervals and so requires a higher standard of significance.

        There may be other slight differences, but Nick uses 95% and Skeptical Science uses 2 sigma which is 95.4%.

      • Nick

        We spend billions on defense so people don’t attack us, not because we think someone might attack; we have evidence that they will if they can: Pearl Harbor, the War of 1812, 9/11.

        Statistically significant means exactly what it states. You’re trying to change the meaning with your typical bunch of BS, but the truth is right in the words themselves.

        You know that there is no C in AGW, but you blather on because your wallet tells you to.

      • ““And if we are not sure it will continue, should we spend billions to stop it?”
        Well, we are not sure that we’ll be attacked, but we spend billions on defence forces.”

        We are not sure that our homes will be burned down, or flooded, or burgled.
        Yet we get home insurance.
        Well the sensible do.
        Now why would that be?
        Because the result would be far worse were we not.
        No matter how small the risk.

      • “There may be other slight differences, but Nick uses 95% and Skeptical Science uses 2 sigma which is 95.4%.”

        I don’t think that’s the main reason for the difference between the two. For example taking RSS 3.3 from 1979, Skeptical Science gives a 2 sigma value of 1.70 C/century, so sigma = 0.85.

        For the same period Nick Stokes gives 0.633 C/century, with a 95% confidence interval of -0.48 to 1.746, which would imply sigma = 0.57.
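
        As a rough check of that arithmetic, one sigma can be backed out of either reporting format like this (a tiny Python sketch using the numbers quoted above):

        # one sigma from a reported 2-sigma value (Skeptical Science style)
        sks_two_sigma = 1.70                      # C/century, RSS 3.3 from 1979
        print(sks_two_sigma / 2.0)                # ~0.85

        # one sigma from a reported 95% confidence interval (Nick Stokes style)
        ci_low, ci_high = -0.48, 1.746            # C/century, same period
        print((ci_high - ci_low) / (2.0 * 1.96))  # ~0.57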

      • For the same period Nick Stokes gives 0.633 C/century

        They are not that far off.
        Temperature Anomaly trend
        Jan 1979 to Dec 2016 
        Rate: 1.350°C/Century;
        CI from 0.953 to 1.747;
        t-statistic 6.667;
        Temp range -0.134°C to 0.378°C

      • “I don’t think that’s the main reason for the difference between the two.”
        The 95.4% difference is minor. The main reason is that, following Tamino, SkS uses an ARMA(1,1) model for autocorrelation. I use the more conventional AR(1), which generally gives narrower CIs. I look at the pros and cons here.
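
        For anyone curious what an AR(1) correction actually does, here is a rough Python sketch (not the code behind either calculator; a simplified version of the usual Santer-style adjustment): fit an ordinary least squares trend, estimate the lag-1 autocorrelation of the residuals, and shrink the effective sample size accordingly, which widens the confidence interval relative to assuming independent residuals.

        import numpy as np

        def trend_with_ar1_ci(y, per_century=1200.0):
            """OLS trend of a monthly series with a rough AR(1)-adjusted 95% CI, in C/century."""
            t = np.arange(y.size, dtype=float)
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation of residuals
            n_eff = y.size * (1.0 - r1) / (1.0 + r1)             # reduced effective sample size
            se = resid.std(ddof=2) / (t.std() * np.sqrt(n_eff))  # inflated slope standard error
            half = 1.96 * se * per_century
            return slope * per_century, slope * per_century - half, slope * per_century + half

        An ARMA(1,1) model fitted to the residuals generally implies an even smaller effective sample size, which is why the SkS intervals come out wider.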

    • 4. Autocorrelation – if your statistical dependencies are wrong, your brackets are probably wrong, too. And, we know that these data are not derived from an underlying process that is a trend extending to infinity with i.i.d. measurement noise on top. How far removed from that model we are over the given timeline has not been resolved.

  16. “Does it seem odd that only GISS will probably set a statistically significant record in 2016”

    No, because figures don’t lie; Liars figure.

  17. I have never understood how statistical analysis can be applied where you don’t have a statistical model of what is happening.

    • Wadda yuck!

      1980 through 1990’s, nothing happened.

      2000 through 2015, nothing happened.

      CO2 caused it all!

      • No, natural variation causes the, err – variability.
        CO2 cause the long-term trend.
        Do try to fathom that the GMST will never rise monotonically.
        Whatever the driver.

        Because of ….
        Natural variability in climate.
        Chief among them the PDO/ENSO.

      • Tonedeaf

        Says you. Does that hold for the rest of geological history? Natural variation is not a zero line; it is variation.

      • If “natural variability” is large enough to cancel out the trend, it was large enough to cause the trend in the first place. There is no evidence that CO2 has anything to do with it, only rationalization based on extrapolation of an effect that holds in a controlled laboratory setting, but for which there is no assurance whatsoever that it holds for a complex feedback regulated system of the Earth’s climate.

        You cannot just assume CO2 has an impact. You must demonstrate it, uniquely and convincingly.

      • 20 years, nothing. Jump. 15 years, nothing. Jump.
        What long term trend?

        That is one way of looking at it! And it is certainly hard to blame a steadily increasing CO2 for it.

  18. Werner Brozek

    GISS may well not be showing any significant change in temperature, because temperature is not normally distributed. Your underlying assumption of 95% significance is based on the assumption that temperature is normally distributed, which has narrow tails.

    Is this realistic? Isn’t it more likely that temperature is not normally distributed? Rather, that temperature follows a fractal distribution, also known as 1/f noise? For example: atmospheric flows, fluid flows, population growth, stock market indices, heartbeat patterns, etc. (Mandelbrot, 1975).

    This distinction is important because fractal distributions have much fatter tails than the normal distribution. For example, for normally distributed data, the probability of a measurement lying more than 10 sigma from the mean is about 10^-24. However, we observe 10 sigma events every few months in stock prices.

    http://users.math.yale.edu/public_html/People/frame/Fractals/RandFrac/StandardDeviation/StandardDeviation.html
    https://arxiv.org/ftp/arxiv/papers/1002/1002.3230.pdf
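
    The 10 sigma figure is easy to check under the normal assumption (a quick sketch using scipy), which is exactly the point: such events "should" be vanishingly rare if the tails really were Gaussian.

    from scipy.stats import norm

    # one-sided probability of exceeding 10 sigma under a normal distribution
    print(norm.sf(10))        # ~7.6e-24, i.e. of order 10^-24
    # two-sided probability of lying more than 10 sigma from the mean
    print(2 * norm.sf(10))    # ~1.5e-23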

    • Your underlying assumption of 95% significance

      My understanding is that climate scientists came up with the 95% to determine what is statistically significant and what is not. I am just applying their standards.

      • “That’s a scientific standard, nothing specific to climate science.”

        It is a very specific scientific standard, under the requirement that the multi-variate distribution of the time series is that of identically distributed Gaussian variates with independent samples.

        Those requirements hold, at least approximately, for a great many natural and artificial phenomena due to A) the central limit theorem, and B) the tendency of systems to vary over a wide bandwidth.

        But, they do not hold for all. They particularly do not hold for climate data, which has long term and cyclical correlations. These claims of “significance” do not tell you anything you can’t see better with your own eyes just looking at the data.

  19. Recent research has shown that some aspects of climate variability are best described by a “long memory” or “power-law” model. Such a model fits a temporal spectrum to a single power-law function, which thereby accumulates more power at lower frequencies than an AR1 fit. Power-law behavior has been observed in globally and hemispherically averaged surface air temperature (Bloomfield 1992; Gil-Alana 2005), station surface air temperature (Pelletier 1997), geopotential height at 500 hPa (Tsonis et al. 1999), temperature paleoclimate proxies (Pelletier 1997; Huybers and Curry 2006), and many other studies (Vyushin and Kushner, 2009).
    https://arxiv.org/ftp/arxiv/papers/1002/1002.3230.pdf

  20. It seems to me that arguing over whether the warming (or lack of it) is statistically significant or not is like two bald men fighting over a comb. One thing is clear, the global temperature trend since 1998 is not consistent with the warming guaranteed by climatologists who claim that the climate is controlled by carbon dioxide.

    The debate should be about the credibility of AGW, not whether the temperature for December was a hundredth of a degree this way or that.

    Let’s face it, it matters not whether the pause continues or whether we had slight warming, maybe even less than the long term trend. There is no AGW signal. That should be the conclusion at this juncture.

  21. “Does it seem odd that only GISS will probably set a statistically significant record in 2016?”
    A more interesting fact is that almost all indices will set records. Land, sea, global surface, troposphere. But the situation with GISS is that it had a relatively small rise in 2015. NOAA and HADCRUT rose much more. It’s as if they responded earlier to El Nino. All three rose by about the same amount in total since 2014.

    • “A more interesting fact is that almost all indices will set records.”

      Not really. There was a big El Nino. Meh.

      • Noise. Signal variability. Meh. It was also a much narrower spike. You are just trying to convince yourself of something that is not in evidence.

      • Sorry ToneB, but the 1998 El Nino did not occur at the peak of the AMO. All the differences between the two sets of data are easily explained by the AMO. I realize you aren’t interested in the truth. You are simply here to push your bias.

        It is interesting to note that 2015 saw the largest jump in CO2, at 3.03 ppm. The previous record was 1998, at 2.93 ppm. Does this suggest that the 2015/16 El Nino was slightly stronger than the one in 1997/98? Last year came in at 2.77, which is the 3rd highest in the Mauna Loa record.

        It is interesting to note that 2015 saw the largest jump in CO2, at 3.03 ppm. The previous record was 1998, at 2.93 ppm. Does this suggest that the 2015/16 El Nino was slightly stronger than the one in 1997/98?

        I think it has more to do with China and India emitting much more CO2.

        Nobody must read what I write. Over and over I’ve gone on about NOAA changing the record in 2005 from 2.52 to 3.10. That’s above even what you are saying, if it is truly the number you say it is. The CO2 levels, in addition to following temperature, also follow the solar cycle peak to peak. That’s why I’m upset about NOAA changing the data. Do you know how much more CO2 per year we are producing now as opposed to 1998? I thought the count this year (2016) would have been at least 4 or 5. If it’s 3.01, that’s unbelievable. Really unbelievable. That means the CO2 sinks are accelerating. Or the natural release of CO2 is diminishing, by a lot.

      • Bartemis:
        Apples v apples – GMSTs are ~0.4 C higher than 18 years ago.
        And that you say …..
        “You are just trying to convince yourself of something that is not in evidence.”

        It’s even there on a tropospheric satellite temperature series.
        That RSS is now disowned here, along with GISS of course.
        That makes the above just typically “down the rabbit-hole”.

      • Richard:
        “Sorry ToneB, but the 1998 El Nino did not occur at the peak of the AMO. All the differences between the two sets of data are easily explained by the AMO. I realize you aren’t interested in the truth. You are simply here to push your bias.”

        No.
        I am here to correct the bias of denizens.
        To deny some of the ignorance on display.
        Like, every, literally every, climate related science head post on here is introduced as “from the dept of …..”. Or “Claim …..”
        The bias comes from the ideological standpoint, projected onto the science.

        Just as it is impossible to get every forecast right, it is equally impossible to get every one wrong, my friend.
        Here they are all wrong.
        And you think WUWT has no bias?

        The AMO is it?
        Well, it wasn’t much different in ’16 than in ’98, though it was riding a curious spike in ’98.

        The PDO/ENSO has far more effect.
        Yet temps didn’t dip during the -ve phase through the “pause”…

      • rishrac @ January 12, 2017 at 8:10 pm

        “That means the co2 sinks are accelerating. Or the natural release of co2 is diminishing, by a lot.”

        Accelerating sink activity is a kluge used to explain the apparent change in the relationship between emissions and concentration. It is apparent, but it is not real, because emissions do not drive concentration. The rate of change of concentration is simply tracking temperature.

        http://woodfortrees.org/plot/esrl-co2/derivative/mean:12/from:1979/plot/rss/offset:0.6/scale:0.22
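
        For anyone who wants to reproduce that WFT recipe offline, here is a minimal sketch. The file names are hypothetical placeholders; it assumes you have saved monthly Mauna Loa CO2 (ppm) and RSS anomalies (C) as two-column CSVs of date and value.

        import pandas as pd

        # hypothetical local copies of the two monthly series used in the WFT plot
        co2 = pd.read_csv("esrl_co2_monthly.csv", names=["date", "ppm"], parse_dates=["date"])
        rss = pd.read_csv("rss_tlt_monthly.csv", names=["date", "anom"], parse_dates=["date"])

        # month-to-month change in CO2, smoothed with a 12-month running mean ("derivative/mean:12")
        dco2 = co2["ppm"].diff().rolling(12, center=True).mean()

        # the WFT overlay rescales the temperature series instead: offset 0.6, then scale 0.22
        rss_scaled = (rss["anom"] + 0.6) * 0.22

        # align the two curves on date and compare them numerically
        merged = pd.DataFrame({"date": co2["date"], "dCO2": dco2}).merge(
            pd.DataFrame({"date": rss["date"], "T_scaled": rss_scaled}), on="date").dropna()
        print(merged[["dCO2", "T_scaled"]].corr())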

        Toneb @ January 13, 2017 at 3:32 am

        “It’s even on a trop sat temp series.
        That RSS is now disowned here, along with GISS of course.”

        Not this RSS:

        http://woodfortrees.org/plot/rss

        Perhaps you are talking about some “adjusted” product that is not carried on WFT yet.

        “GMST’s are ~ 0.4C higher than 18 years ago.”

        El Ninos are variable, not like a standard candle in astronomy. And, temperatures as of now are comparable to the whole of the past two decades, and falling fast.

      • Bartemis:

        WFT interactive still has RSS v3.3 TLT

        Of which RSS say ….

        “The lower tropospheric (TLT) temperatures have not yet been updated at this time and remain V3.3. The V3.3 TLT data suffer from the same problems with the adjustment for drifting measurement times that led us to update the TMT dataset. V3.3 TLT data should be used with caution.”

        Therefore, as I said above, this is the equivalent.

      • Well, isn’t that convenient. Too bad. RSS was doing a good job. I guess the pressure was too great.

        It still does not change the fact that you are basing your conclusion on a needle sharp spike that would be virtually eliminated with a little more smoothing than is already done. And, the influence of El Nino hasn’t faded entirely yet. We will see what happens in the year to come.

  22. This seems to me like two bald men fighting over a comb. It matters not whether the pause continues or not or whether December’s temperature is a hundredth of a degree higher or lower. It probably does matter to the record keepers, but not to the climate change debate.

    The relationship between temperature and carbon dioxide is broken at this juncture. That is the conclusion. The temperature may be flat or increasing slightly since 1998, but it is less of an increase than the long term warming and very much less than the warming promised by AGW. That is the conclusion.

    • ” very much less than the warming promised by AGW”
      No, it’s quite close. Here is the comparison of CMIP 5 averages with recent surface temperature observations

      • Is this an ensemble of different models with different assumptions and different initialisation values?

      • Does nobody understand that the climate of the 21st Century has not responded as predicted by IPCC climate models? They even had the actual numbers through 2005, and still got it wrong.

        Quit arguing minutia, and attack the liars where they live.

      • Robert Kermodle,
        “Yeah, maybe it’s quite close no, but”
        That’s MofB’s trick graph where he rings in troposphere data instead of the surface measures they were predicting.

      • Just noting that we just had a super-El Nino which kind of makes it silly to run a 12 month running mean and compare that to global warming projections.

        NCDC-NOAA was 0.32C in November on your chart above, well below all of the AR5 forecasts produced just 3 years ago (as in they had all the data up to 2013). NCDC-NOAA is also going lower in the months ahead to about 0.1C on your above chart.

        Hadcrut4.5 was 0.29C in November on your chart and will also be going lower in the months ahead.

        So, it took a lot of work to put that together, and it kept some people “believing” for a while. But what happens when you have to face the music again in the near future about the models being so far off (even the ones produced just a few years ago, which had all the historical data to work with)?

        The solution cannot be to adjust the temperature data once again, because even that is not working. It’s still way below, even though they just added 0.1 C to the numbers over the last year.

        WHEN is it face the music time?

    • Not to mention the elephant in the room – they have no evidence that CO2 is the CAUSE OF the minuscule amount of warming, regardless of how close or far apart the models are from reality. Push THAT discussion to its conclusion, and invariably it will end up at “they can’t otherwise account for it,” which is the classic AGW argument based on climate ignorance.

      • Minuscule is right. We’re getting wrapped around the axle here over a 1 degC rise per century, when the ASHRAE standard for the temperature differential between your head and your feet is a whopping 3 degC!

        Yes, there is a mechanism other than CO2 that can account for warming, and that is ozone depletion, which can be anthropogenic or natural. Ozone depletion allows additional UVB to reach the lower troposphere and the earth’s surface and produce additional warming of the troposphere and surface. Since oxygen photodissociation and the creation of ozone in the stratosphere normally absorb most of the UVB, less absorption of UVB in the stratosphere produces stratospheric cooling (this has been observed) but surface and tropospheric warming.

        From 1970 to 1998 ozone depletion was anthropogenic, due to human-made and released chlorofluorocarbon gases. This is well known. Ozone depletion was at its maximum in 1998 and was superimposed on the El Nino event, so warming was at a maximum. Lucky for the AGW crowd, the Bardarbunga volcano in Iceland began erupting in October 2014 and continued through March 2015. The eruption was effusive, not explosive, so it produced no significant aerosols or particulates to produce cooling, but lots of gases. Effusive eruptions release HF, HCl, and HBr (halogen gases) that still reached the stratosphere within a few months and produced ozone depletion. This occurred by February 2015, resulting in warming through 2015 into 2016 (so the AGW crowd now think they have been saved from the “hiatus” – they will be disappointed!).

        Ozone depletion probably peaked in mid-2016 and is now reversing. The peak in ozone depletion corresponded to the El Nino related warming, which is why 2015-2016 were near or at record temperatures. The reversal in the ozone depletion trend will now correspond to the La Nina cooling trend, so the downward temperature trend for 2017-2018 should be steep, as the trend for November and December 2016 suggests. If these correlations with exogenous, random events are correct, calculating trendlines, correlation coefficients, error ranges, and probabilities has no real-world relevance.

        Now, the “ozone depletion” theory is not my idea. The significance of ozone depletion to mean global temperatures is Dr. (PhD) Peter Langdon Ward’s idea. A full explanation can be found at his website: https://www.WhyClimateChanges.com. My minuscule contribution is suggesting that ozone depletion peaks have accidentally corresponded with El Nino events, exacerbating warming, and the following:

        The Davos Conspiracy (of January, 2017)

        Davos elites meet and greet with alarm to decry with derision
        the possible decrepitation of their “global warming” delusion
        but it’s just a billionaire’s Juke and Jive dance to distract us
        while they slither into the pocketbooks of each dumb cuss.
        CO2 doesn’t cause climate change as Al Gore’s preachin’,
        his religious obfuscation of the truth of “ozone depletion”
        from the impact of CFC’s and effusive volcanic gas emissions,
        a marvelous dance between oxygen’s photodisassociation
        from UVB radiation and ozone creation and destruction.
        Check out https://WhyClimateChanges.com for a lesson,
        and you will conclude Davos is a conspiracy of high treason
        worthy of a racketeering and corrupt practices conviction.
        MHPublishing, Copyright 2017 – distribute freely with attribution.

        The ozone depletion idea is controversial because it means current calculations of radiant energy are incorrect, and it leads to the conclusion that the current physics paradigm of visualizing light as packets with wave-particle duality is an artificial construct with no basis in reality. It works for me because it can explain the recent temperature records, the historical anecdotal climate record, and the geologic record. As a geologist, the geologic record is all that really counts to me, and the CO2 AGW theory just doesn’t cut it. The rocks tell the story and volcanoes rule. It also means humans can affect the earth’s climate. Just start up CFC production again if you want to swim on a Greenland beach without freezing, or put a cork in Iceland’s volcanoes if you don’t.

    • It matters not whether the pause continues or not

      For two scientists communicating with each other, I agree. But if you want to mention it to your neighbor at a coffee shop, “no warming” is much easier to understand than “no statistically significant warming at the 95% level over 20 years”. His glazed eyes will soon be looking for the door.

  23. Werner, as I’ve told you before, to justify this statement:
    “On several different data sets, there has been no statistically significant warming for between 0 and 23 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.”
    you need to be performing a one-tailed test, is that what you did?
    When you do a two-tailed test you’re treating the 2.5% chance that the warming is above 1.784 (for UAH6.0) as if it were not warming, which is clearly nonsense!

    • Phil.,
        He’s using results from here. There is a 95% probability of being within the CIs (t limit 1.96), so yes, a 2.5% chance of being beyond each extreme. So when Werner says “no significant warming” I think he means that zero trend is within those 95% CIs about the observed trend.

        Exactly. I pointed this out to him before: if he wants to say ‘warming’ he has to change his limit; I think it’s ~1.65 rather than 1.96.

  24. When will the pause return?
    No one knows when or if the pause will ever return. However, certain conditions must be met: namely, the area below the zero line after December 2016 must equal the area above the zero line from February 2016 to November 2016. The average from February to November on RSS was 0.60. The zero line, which is where RSS is at present, is 0.23. That leaves a difference of 0.37 over a period of 10 months above the zero line: 0.37 x 10 = 3.7.
    So if the RSS anomaly drops by 0.10 from 0.23 to 0.13 and stays there, it would take 37 months for the pause to return.
    If the anomaly drops by 0.2 from 0.23 to 0.03 and stays there for 19 months, the pause will return.
    If the anomaly drops by 0.3 from 0.23 to -0.070 and stays there for 12 months, the pause will return.
    If RSS makes adjustments, the pause will never return! ☹
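
    The bookkeeping above is simple enough to script; a small Python sketch of the same arithmetic (the figures in the comment round the resulting month counts to 37, 19 and 12):

    # "degree-months" accumulated above the old pause level that must be cancelled out
    surplus = (0.60 - 0.23) * 10          # 0.37 C above the line for 10 months = 3.7

    for drop in (0.10, 0.20, 0.30):       # assumed sustained drops below the 0.23 line
        print("drop of %.2f C -> %.1f months below the line" % (drop, surplus / drop))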

  25. “If my math is correct, there is about a 30% chance that cooling has taken place since 1998 and about a 70% chance that warming has taken place.”
    Golly, I am pretty sure every single person who is not an active skeptic has clean forgotten to mention anything about confidence levels or uncertainty ranges!
    That goes for every MSM news source too.
    Must be just an innocent oversight, huh?
    I mean, anyone who has taken and passed even one single college level science class knows all about uncertainty, what it is and what it means, not to mention how important it is, so, it must just be an innocent oversight…right?
    By every one of them, every single time they mention anything about it, for years on end…just an oversight.

  26. One thing that I’m not sure about with all this talk of significant warming, is how much it matters that there are multiple data sets.

    To take the question of whether 2016 was warmer than 1998: if we only have UAH 6.0 and that’s showing 2016 as 0.02 C warmer than 1998, then given the amount of uncertainty there might be a 40% (or whatever) chance that 1998 was warmer. But if RSS 3.3 is also showing 2016 as being 0.02 C warmer, then that must increase the confidence that 2016 really was warmer. If both data sets were completely independent, then the chances of 1998 being warmer would drop to 16%, but as they are not independent I guess the real odds of 1998 being warmer would be somewhere between 16% and 40%.

    And that’s only looking at the two data sets showing the smallest difference between the two years.
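
    In code, the two limiting cases look like this (the 40% per-dataset figure is the illustrative number from above, not something computed):

    p_1998_warmer = 0.40          # illustrative chance, per dataset, that 1998 was really warmer

    # fully independent datasets: the chances multiply
    print(p_1998_warmer ** 2)     # 0.16, i.e. 16%

    # fully dependent datasets: the second record adds no new information
    print(p_1998_warmer)          # still 40%

    With partially dependent records the true figure sits somewhere between those two values, as noted above.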

    • The slight difference in CO2 gain for the years involved may be a clue: 2015 = 3.03 ppm, 1998 = 2.93 ppm. The year 1998 held the record as the greatest yearly gain on the Mauna Loa site. Now 2015 is 1st, with 2016 3rd at 2.77 ppm.

  27. Thank you Werner and JTF, and Nick too!

    Good info sharing once again.

    I find it interesting that we debate hundredths of a degree to leverage changes in policy. Frankly, we need warmer temps to feed the populations and provide a place to house them. CO2 could be our saviour, until the logarithmic relationship to temperature comes into play more. Just sayin’, careful what you wish for…

  28. Lack of statistically significant warming doesn’t mean that it hasn’t been warming!

    Let’s compare the warming trend for UAH6.0 for the first half of the record, the last half of the record and the full record:

    1/79-10/16: 0.883 K/century (95% ci of 0.411 to 1.256 K/century). “statistically significant”
    1/79-1/98: 0.283 K/century (95% ci of -0.695 to 1.263 K/century). “statistically insignificant”
    1/98-10/16: 0.611 K/century (95% ci of -0.803 to +2.024 K/century). “statistically insignificant”

    Interesting. Two periods with insignificant warming add up to one combined period with significant warming. :) So what does a lack of statistically significant warming prove? Nothing. It just means variability/noise can make it difficult to prove the existence of warming over relatively short periods.

    Notice that the warming trend for both shorter periods is lower than for the full period and larger during the so-called “Pause”. Ouch.

    • Interesting. Two periods with insignificant warming add up one combined period with significant warming.

      An interesting point! But are your numbers right? For the full period for UAH6.0beta5, I get:
      Temperature Anomaly trend
      Jan 1979 to Dec 2016 
      Rate: 1.230°C/Century;
      CI from 0.815 to 1.646;
      t-statistic 5.803;
      Temp range -0.209°C to 0.257°C

      The rate of 1.23 C/century is not too high, but your number of 0.883 K/century is even less! Should we even be concerned about that?

        Werner: Nick is correct. I selected UAH6 MT rather than UAH6 LT. Thanks for catching my mistake. Nick’s numbers below are correct. They support essentially the same point: absence of statistically significant warming does not prove the absence of warming. Natural variability obscures warming over periods of one or two decades.

        I forget which record I was working with, but I found a dividing point where the full period slope and both half period slopes were very similar, but only the full period was significant.

      • Frank and Werner,
        ” but only the full period was significant.”
        I think that is to be expected. If you throw four heads in a row that is not significant (1/16). If it happens again, that isn’t significant on its own. But eight in a row is (1/256).

      • “I think that is to be expected. If you throw four heads in a row that is not significant (1/16). If it happens again, that isn’t significant on its own. But eight in a row is (1/256).”

        Yes. This is a point that I don’t think everyone using the term “statistically significant” realizes. Whether something is significant or not depends both on the strength of the signal over the noise, and the size of the sample.

        Say you were testing a drug – you give it to 50 people and find it did significantly better than a control group given a placebo. Now if you split the 50 people into two groups of 25 each, it may well be the case that neither group shows a significant improvement over the control, not because the results are worse but simply because 25 is a smaller sample than 50. It would be absurd to point to the sample of 25 and claim that this meant the drug stopped working on those 25.

        But this is what happens with the temperature trends. There is a statistically significant warming since the start of the satellite era, but by looking at the last 20 years or so that warming becomes insignificant. In part this might be because the trend was smaller, but it might also be because the sample size is less. Just saying the rise was insignificant since 1998 tells us little.
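
        A toy version of the drug example, with made-up response rates, shows the sample-size effect directly (binomtest is scipy’s exact binomial test; the 50% baseline stands in for the placebo group):

        from scipy.stats import binomtest

        # 34 of 50 patients respond, where chance alone would give 50%
        print(binomtest(34, 50, p=0.5).pvalue)   # ~0.02, significant at the 5% level

        # the same data split into two groups of 25, with 17 responders in each
        print(binomtest(17, 25, p=0.5).pvalue)   # ~0.11, not significant on its own

        Neither half "shows" the effect on its own, yet nothing about the drug changed; only the sample size did, which is exactly the point about splitting the temperature record at 1998.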

      • Just saying the rise was insignificant since 1998 tells us little.

        To a certain extent, you are right. Keep in mind that all the numbers in Section 1 show the start date from which the warming trend’s confidence interval includes zero; make the period one or more months longer and the interval no longer includes zero.
        I talked with a warmist years ago about the Phil Jones interview, and at that time he said that 8 years means nothing, but that 15 years should be taken more seriously.

      • Nick wrote: “I think that is to be expected. If you throw four heads in a row that is not significant (1/16). If it happens again, that isn’t significant on its own. But eight in a row is (1/256).”

        However, many people think multi-year climate change is deterministic, not chaotic. They think in terms of cause and effect, not coin-flipping.

        According to climate models, in any five-year period with today’s growing forcing, there is a 25% chance the temperature has fallen. Your coin-flipping analogy is valid. So there shouldn’t be too many 10-year periods in your post-1979 trend viewer with cooling, but the 95% confidence interval certainly should include 0 warming. I personally think observations show that models produce too little unforced variability and/or too much warming, but that is hard to prove.
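
        That 25% figure is easy to sanity-check with a toy simulation. The numbers below are assumptions chosen for illustration (a 0.02 K/yr forced trend and about 0.09 K of independent year-to-year noise), not model output:

          import numpy as np

          rng = np.random.default_rng(1)
          TREND, SIGMA, N_YEARS, N_TRIALS = 0.02, 0.09, 5, 20000   # assumed values
          t = np.arange(N_YEARS)
          negative = 0
          for _ in range(N_TRIALS):
              y = TREND * t + rng.normal(0.0, SIGMA, N_YEARS)      # trend plus noise
              slope = np.polyfit(t, y, 1)[0]                       # fitted 5-year trend
              negative += slope < 0
          print(f"fraction of 5-year stretches with a negative fitted trend: {negative / N_TRIALS:.2f}")

        With those choices the fraction comes out in the neighbourhood of one quarter, consistent with the figure quoted from the models; real-world autocorrelation would push it around somewhat.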

  29. Werner, Frank
    I think Frank has given figures for UAH6.0 MT, which also make the point. For LT I get:

      Period          Trend (°C/century)   95% CI low   95% CI high
    1/98->12/16            0.476             -0.813         1.765
    1/79->1/98             0.982             -0.044         2.088
    1/79->12/16            1.230              0.815         1.646
    

    Also two insignificant parts with a significant whole. And the trend for the first part is really quite high.

    “Should we even be concerned about that?”
    No. Planes will be OK. Surface temperatures are our issue.

  30. The El Nino heat spike coming quickly back to normal can be explained in a simple way.

    The sudden atmospheric warming can be explained simply by the fact that hot water stored some hundred meters deep in the western Pacific is suddenly released to the surface of the whole Pacific, which spans about one third of the circumference of the globe – a really big area!

    The heat in the western Pacific builds up under the trade winds, which keep skies clear and expose a big part of the entire Pacific permanently to the sun. Without the winds, the warmed-up and piled-up waters flow back eastward toward the American continent.

    Water can store about 1000 times more heat than air, so the atmosphere is heated up quickly. When cold water is again pushed from America toward the western Pacific, the heat release stops. And the atmosphere quickly radiates its surplus heat towards space.

    • But CO2 should be slowing the quick radiation to space, so the heat should be retained in the atmosphere longer. Is it?

      • Per Nick, Tonedeaf, griff and others we have not yet reached equilibrium, so why doesn’t this new heat stay in the atmosphere and just get us closer to equilibrium? The CO2 molecules should be absorbing all this LWR and then transferring it to other molecules through collisions, and it should stay trapped. The rate of heat transfer to space should stay the same.

      • Unless this heat is somehow magically being returned to the oceans, there is nothing in CAGW theory that should allow it to quickly radiate to space. But if you look at H2O and CO2 as radiative coolers of the atmosphere, then it makes sense how this heat escapes. I have said it a thousand times: the oceans warm and heat the atmosphere, not the other way around. CO2 is not the magic blanket that causes the oceans to warm.

      • Bob wrote: “Unless this heat is somehow magically being returned to the oceans, there is nothing in CAGW theory that should allow it to quickly radiate to space.”

        The idea the GHGs permanently trap heat in the atmosphere was created by alarmists for the simple-minded. GHGs both absorb and emit thermal infrared radiation. As with almost all materials (N2 and O2 being important exceptions in our atmosphere), emission increases with temperature. You can use the S-B eqn to show that a blackbody near 255 K, for example, emits about 3.8 W/m2 more for every 1 degK it warms. So the 0.5 K increase in temperature during strong El Ninos could potentially send an additional 2 W/m2 of LWR to space. Given that the current forcing from anthropogenic GHGs is only about 2.5 W/m2, it is crazy to claim that the extra heat cannot quickly escape to space and that the temperature therefore shouldn’t drop after an El Nino.
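
        Those numbers are easy to reproduce from the Stefan-Boltzmann law alone; the 255 K effective emission temperature and the 0.5 K El Nino warming are simply the values used above:

          SIGMA_SB = 5.670e-8                       # W m^-2 K^-4, Stefan-Boltzmann constant

          def planck_response(T):
              """Extra blackbody emission per degree of warming: d(sigma*T^4)/dT = 4*sigma*T^3."""
              return 4.0 * SIGMA_SB * T**3

          print(f"{planck_response(255.0):.1f} W/m2 per K of warming")          # about 3.8
          print(f"{0.5 * planck_response(255.0):.1f} W/m2 for a 0.5 K spike")   # about 1.9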

        The temperature falls every winter because GHGs emit radiation to space. It falls every night for the same reason.

        You are correct in saying that we haven’t reached a new equilibrium temperature in response to the current forcing of about 2.5 W/m2 because the heat is flowing into the deep ocean – about 0.5 W/m2 according to ARGO. The atmosphere “doesn’t know” about heat flux into the deep ocean; following the laws of physics, it radiates more heat towards space and the surface when it is warmer during El Ninos.

      • That’s not the information for the energy budget of the earth. The net retained was greater than the outgoing. Via the satellite back then, when CO2 levels were 370 ppm, the net retained was 2/3 of the incoming. After years of constant increases in CO2, one would expect to have more, not less, retained energy. Show me in the energy budget where there was a net spike in energy released. That would be the end for AGW as well. I’ve even argued that when there is more evaporation, more heat is released. The answer to that was the energy budget: that the latent heat was retained rather than released.
        I can’t tell if you are a skeptic or a warmist, but the argument supports a skeptic’s view.
        In essence, what I’m overall saying is that after 20 years the global temperature is currently up only 0.2 C; that is a demarcation point. Water does have a higher heat capacity; where is the heat being stored? If you ignore the trends, the question to ask is: in the next 12 months will the global temperature rise or fall? If it rises, is it due to CO2? If it falls below the long-term average of what was considered equilibrium, then climate cannot possibly be related to CO2. The water would be warmer than the atmosphere at that point and would be giving up heat rather than absorbing it. Isn’t that the rationale behind the Arctic ice melting?

      • Frank

        The myth is that there is such a thing as equilibrium when it comes to global temperature, and that warming of the oceans has anything to do with CO2, when the opposite is the truth.

      • Rishrac wrote: “That’s not the information for the energy budget of the earth. The net retained was greater than the outgoing. Via the satellite back then, when CO2 levels were 370 ppm, the net retained was 2/3 of the incoming.”

        Satellites are incapable of measuring the difference between incoming and outgoing radiation with enough accuracy to detect a radiative imbalance of a few W/m2. (They don’t cover the full wavelength range of incoming and outgoing energy with a linear response. 1 W/m2 is only a 0.4% change in the post-albedo 240 W/m2 entering the planet.) If you believed their raw output, the Earth would be warming much faster than it is. However, satellites can detect a CHANGE of a few tenths of a W/m2 in both SWR and LWR.

        Since most of the excess energy accumulating in the climate system ends up in the ocean, we deployed the ARGO buoys to measure our current imbalance. The current best estimate is 0.5 W/m2 (and it is currently being used to correct some versions of the satellite data, CERES EBAF).

        rishrac: “Water does have a higher heat capacity; where is the heat being stored?”

        ARGO tells us that 0.5 W/m2 of heat (from a radiative imbalance) is accumulating in the deep ocean. The top 100 meters of ocean warms and cools with ENSO (and the seasons), but the accumulation of heat below is fairly steady. Unfortunately, when that little heat flux is spread over 2000 m of ocean, the temperature rise in a decade averages only 0.025 K (assuming my calcs are correct). Does ARGO have this level of accuracy? It was designed to be this accurate. There are 3000 of them reporting every 10 days. Sample buoys are removed and tested for biases every year. The picture is evolving slowly. Willis reported at WUWT that all oceans are not warming at the same rate. (I don’t have much faith in pre-ARGO data; it is limited and required large corrections.)
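
        The arithmetic behind that figure can be checked in a few lines; the seawater density and heat capacity below are rounded values, and the answer depends a little on whether the flux is referenced to the ocean surface or averaged over the whole globe:

          SECONDS_PER_YEAR = 3.156e7
          RHO_SEAWATER = 1025.0                     # kg/m^3, rounded
          CP_SEAWATER = 3990.0                      # J/(kg K), rounded

          def warming(flux_w_m2, depth_m, years):
              """Temperature rise if a steady flux is mixed evenly through depth_m of ocean."""
              energy = flux_w_m2 * years * SECONDS_PER_YEAR           # J per m^2
              heat_capacity = RHO_SEAWATER * CP_SEAWATER * depth_m    # J per K per m^2
              return energy / heat_capacity

          print(f"{warming(0.5, 2000.0, 10.0):.3f} K per decade")     # roughly 0.02 K

        That lands near 0.02 K per decade, the same ballpark as the 0.025 K quoted above.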


        http://climate4you.com

        rishrac comments: “I can’t tell if you are a skeptic or a warmist, but the argument supports a skeptic’s view.”

        Does it make a difference? The important question is whether I have provided Bob with the correct reason why the temperature falls after El Ninos – and summers and daylight hours. And accurate information to you. I’m a skeptic – about both the IPCC and skeptics. I’d like to understand what is true, no matter where that leads. So, if I’ve got something wrong, let me know.

      • Bob Boder:
        “Per Nick, Tonedeaf, griff and others we have not yet reached equilibrium, so why doesn’t this new heat stay in the atmosphere and just get us closer to equilibrium? The CO2 molecules should be absorbing all this LWR and then transferring it to other molecules through collisions, and it should stay trapped. The rate of heat transfer to space should stay the same.”

        Because ~93% of TSI goes into the oceans … which then release it to the atmosphere.
        OHC shows us that it is storing it long-term – if it were not, then there would be just natural variation in it, NOT a steep climb for the last ~100 years (NB: bar temporary coolings due to volcanic aerosol injection into the stratosphere).

        The equilibrium will only come once the atmosphere is heated enough by the oceans to attain a temp high enough to “get through” the GHG “fog” and exit to space.
        At that point incoming will equal outgoing and both ocean and atmos temps will stabilise.

        “CO2 is not the magic blanket that causes the oceans to warm.”

        It’s not “magic”, just basic radiative physics.
        GHG molecules attenuate LWIR’s ability to exit to space. That allows more TSI to be absorbed as the “insulation” takes place. Just like you or I do when we put on thick clothing. Our bodies are still radiating (sun still shining) but our heat is kept in (GHG “blanket” thicker). Energy in > energy out = warming.

        Frank:
        “The idea the GHGs permanently trap heat in the atmosphere was created by alarmists for the simple-minded.”

        It wasn’t “created” – it was discovered by repeated experimentation and observation, and verified by applied theory, going all the way back to Tyndall and Arrhenius from 1859. (Were there “alarmists” then?)

        http://history.aip.org/climate/co2.htm

      • Tonedeaf

        I have never purchased a blanket that is a heat radiator; I like staying warm.

        The oceans will never reach equilibrium because they warm and cool over hundreds, thousands and tens of thousands of years, and CO2 has nothing to do with it. Your statements are more proof of the failure of understanding of the theory you try to defend.

      • “The equilibrium will only come once the atmosphere is heated enough by the oceans to attain a temp high enough to “get through” the GHG “fog” and exit to space.”

        If your cartoon version of the physics were accurate, it would never “get through” the “GHG fog” to any significant extent. Rather, the mode of the temperature distribution would move higher, and the increased radiation exiting beyond the IR band would balance things out.

        But, that is not the only avenue for getting around the GHG filter. E.g., increasing convection lofts heat higher in the atmosphere, from where it is freer to radiate. And, there are other ameliorating feedback effects, such as cloud cover in response to increasing evaporation.

        It is by no means a given that increasing CO2 concentration will significantly increase the temperature of the Earth, and indeed, it cannot in the aggregate response. Higher temperatures increase atmospheric CO2. If CO2 appreciably increased temperatures in turn, that would comprise a positive feedback loop that could not be stabilized even by T^4 radiation, and we would have reached a tipping point eons ago.

      • “Tonedeaf

        I have never purchased a blanket that is a heat radiator; I like staying warm.

        The oceans will never reach equilibrium because they warm and cool over hundreds, thousands and tens of thousands of years, and CO2 has nothing to do with it. Your statements are more proof of the failure of understanding of the theory you try to defend.”

        Bob Boredwith

        “They don’t warm and cool over ….”.
        In order to do that there has to be an imbalance.

        Otherwise if they absorbed more (and OHC rose) – then the atmosphere would cool.
        Has to, as if solar energy in = LWIR energy out – then if more went into the oceans the atmosphere would receive less back from them and cool.
        If oceans absorbed less then the atmosphere would warm.

        However.
        For the last ~ 100 years the oceans have been warming.
        And the atmosphere has as well, especially since around 1970, when aerosols no longer neutralized the +ve forcing of GHG’s.

        Yours is the failure, my friend.
        You don’t defend an outlier opinion (and it is one, outside of this rabbit-hole) by hand-waving … “Your statements are more proof of the failure of understanding of the theory you try to defend.”

        If you do not follow scientific reasoning then try to use common sense.

      • Tonedeaf

        Hand waving?
        You’re trying to teach me that a warming ocean causes the atmosphere to warm. I have been saying that since the first post I made here years ago. The hand waving is when you and your buddies say that CO2 is the cause of the oceans warming, which is and has been all along ridiculous: first you say CO2 warms the atmosphere, but can’t prove it; then you wave your hand and now it’s the oceans warming the atmosphere, with CO2 now warming the oceans, but you can’t prove that either.
        The oceans warm and cool and always have, and they can do it with or without an “imbalance”. The oceans store energy and release it over very long time scales; the heat being released now could have accumulated thousands of years ago. The variability in cloud cover can dwarf any change in “forcing” that CO2 causes. Your arguments have just continued to prove my point.

      • Toneb January 13, 2017 at 10:36 am

        What is the percentage change of the ocean heat content? That is just another fake chart.

        The Ocean heat content has not risen 100%.

        To do that, the temperature would have had to double.

        But the temperature of the 0-2000 metre ocean has risen from about 279.15 K (6.0 C) to 279.23 K (6.08 C), or just 0.03%.

        So your math is off by 99.97%. That’s as bad as it gets. When your quote of 100% is off by 99.97%, that is BAD.

        Quit posting fake charts.

        If you want to properly say what is really happening, it is that the 0-2000 metre ocean is absorbing about 0.5 W/m2 while global warming theory predicted it would absorb 1.4 W/m2. And the 0.5 W/m2 should be compared to the current GHG forcing of 2.3 W/m2. Where is the MISSING ENERGY? In 100 years, at current trends, the 0-2000 metre ocean will increase in temperature from its current 6.08 C to 6.28 C. NOTHING will happen based on this flimsy change.

      • “Tonedeaf
        Hand waving?
        You’re trying to teach me that a warming ocean causes the atmosphere to warm. I have been saying that since the first post I made here years ago. The hand waving is when you and your buddies say that CO2 is the cause of the oceans warming, which is and has been all along ridiculous: first you say CO2 warms the atmosphere, but can’t prove it; then you wave your hand and now it’s the oceans warming the atmosphere, with CO2 now warming the oceans, but you can’t prove that either.
        The oceans warm and cool and always have, and they can do it with or without an “imbalance”. The oceans store energy and release it over very long time scales; the heat being released now could have accumulated thousands of years ago. The variability in cloud cover can dwarf any change in “forcing” that CO2 causes. Your arguments have just continued to prove my point.”

        Bob Boredwith:

        You can Sky-Dragon slay all you want.
        Be my guest – it advances the idea that certain “sceptics” are irrational.

        “the heat being released now could have accumulated thousands of years ago.”

        Eh?
        With what mechanism?

        How does heat stored thousands of years ago magically reappear?
        Somehow heat the ocean through its entire depth.
        It’s plainly coming in at the surface, so it has to have come from above.

        Stored heat would have to be at the bottom and move up from there.
        I think you will find that colder waters lie at depth.

        If you don’t understand basic thermodynamics, don’t pretend you do and talk absolute b****ks.

        Simply stated.
        The oceans are warming.
        The oceans heat the atmosphere.
        Something is doing it.
        It is not the oceans heating themselves.
        If the heat were coming from the oceans entirely to heat the atmosphere then the oceans would cool. (Under radiative balance.)

        It is not the sun.
        Not energy coming in.
        It MUST be slowing of energy going out.

        You can do your “with one bound he was free” logic as Sky-dragon slayers do, but in the world outside of your rabbit-hole it is empirical science.

        CO2 is a GHG. Its forcing at 400 ppm is known and corresponds to observations by spectroscopic analysis.

        Tata

  31. A couple of decades? It’s too short a time to determine any anthropogenic warming signal. Look at 50 years. Or a hundred is better.

    Or, don’t bother.
    Any trend that is so shallow that it takes a century to be identified is too small to be practically significant.
    We will adapt to it via technology and infrastructure upgrades.

  32. Werner – This entire article is abject nonsense, even apart from those three ridiculous lines on your first graph. Your problem is that you don’t understand what to do with El Ninos and La Ninas that block your way. There is an El Nino at year 2010 and a La Nina at year 2008. Together they block the normal path that a temperature graph would take in the absence of that ENSO segment. Since ENSO is not part of the background warming ignore it completely and draw a straight line from 2002 to 2012. That is your real temperature curve for this segment of “warming”, because warming it is not. It has a negative slope of minus 0.15 degrees Celsius per decade, indicating cooling. Before you start drawing any lines, though, take a magic marker and go over the entire temperature record with it in semi-transparent pink. That makes the temperature more visible, especially if the original data were jagged. The correct temperature line now starts near the top of the warming curve and joins the rise to the 2015/16 El Nino at 2012, after crossing the ENSO parts at 2008 and 2010. Negative means cooling, and you had no idea that this is what was there all along. There is no excuse for ignoring the actual temperature. You should always make sure you know what the temperature curve tells you. Do not follow in the footsteps of those pseudo-scientists from IPCC who invent warming that does not exist.

    • Since ENSO is not part of the background warming ignore it completely and draw a straight line from 2002 to 2012.

      My reports go to the latest month for which records exist.

      Do not follow in the footsteps of those pseudo-scientists from IPCC who invent warming that does not exist.

      ALL data sets show warming so far in 2016. Granted, adjustments do occur, but at least some warming in 2016 is real and not invented. As for the root causes of the warming, that is a different subject.

    • Arno: The surface of the ocean is much warmer than the bulk of the ocean below. A small reduction in the upwelling of cold deep water (and downwelling of warm surface water) can raise surface temperature substantially. This overturning of the ocean involves fluid flow and is therefore an inherently chaotic process. Changes in upwelling and downwelling are an important aspect of ENSO. Chaotic fluctuations in the redistribution of heat within our environment (the surface, atmosphere and ocean) are responsible for the rapid 0.2 degC changes in temperature that occur within one or two months and the occasional 0.5 degC spike during El Ninos. (A radiative imbalance of +1 W/m2 provides enough energy in ONE YEAR to raise the temperature of the mixed layer of the ocean and atmosphere by 0.2 degC, so the rapid changes in temperature over a few months aren’t caused by a change in radiative heat transport to and from the planet – our satellites would detect such changes. They are caused by changes in where heat is stored within the planet.)

      Therefore, it makes good sense to use linear regression to smooth out the large amount of noise due to energy redistribution, so that we can accurately see how much heat (if any) is accumulating due to the expected slowdown in radiative cooling to space from rising GHGs.

      CO2 is rising at a current rate of 2 ppm/yr, or about 0.5% per year. If a doubling of CO2 slows down radiative cooling to space by about 4 W/m2, then at the current rate of increase we expect an increase in heat retained of roughly 0.2 to 0.3 W/m2/decade. For a variety of reasons, including poorly understood feedbacks, this retained energy is expected to be producing a warming of about 0.2 degC/decade. With monthly changes of 0.2 degC and El Nino spikes of 0.5 degC due to internal redistribution of heat, we aren’t going to be able to see slow long-term changes in temperature by measuring from one El Nino spike to another. Not all El Ninos are created equal.
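
      A rough check of that decadal figure, using the common simplified logarithmic forcing expression with the 4 W/m2-per-doubling value from the paragraph above and an assumed starting concentration of about 400 ppm:

        import math

        F_2X = 4.0          # W/m^2 per CO2 doubling, as used above
        C0 = 400.0          # ppm, assumed starting concentration
        GROWTH = 2.0        # ppm per year

        def forcing_change(years):
            """Delta F = (F_2X / ln 2) * ln(C / C0) for CO2 rising linearly at GROWTH ppm/yr."""
            c = C0 + GROWTH * years
            return (F_2X / math.log(2.0)) * math.log(c / C0)

        print(f"{forcing_change(10.0):.2f} W/m2 over one decade")   # about 0.28

      That works out to roughly 0.3 W/m2 per decade, consistent with the 0.2 to 0.3 range above.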

      • Maybe you don’t understand, Frank. What matters is the energy budget: how much is incoming and how much is outgoing. The fact that there is a release of heat during an El Nino is irrelevant to whether the oceans are absorbing energy or not; that release only happens when the difference between the water temperature and the air is great enough. According to AGW, the energy imbalance has been about 2/3 retained, 1/3 emitted. The obvious implication of more CO2 is that it moved the retained share above the 2/3 level, and the 1/3 emitted should have fallen toward 1/4. That didn’t happen. AGW went looking for the heat and didn’t find it. And do you know how that was confirmed? SLR. If the ocean had warmed in response to AGW, the rise would have been significant. I did the math, and it would have been. The numbers are too large. Much discussion went on about the energy it takes to raise the temperature of ice to the same temperature as water.

        AGW wasn’t kidding about both poles melting by 2013. And the math they produced (which I believe is wrong) backed that assertion up. Going on about how much is going where now is useless in terms of the energy budget. They backed the assertions up with the satellite data, the S-B equations, the TSI, and the properties of CO2. If I didn’t think they were wrong, I’d have to agree. Moreover, if the changes had come about as a result of these things, I would have changed my position. Just the opposite has happened. I am more convinced than ever that C/AGW isn’t just mistaken; it looks like they’ve committed fraud in order to effect a political agenda.

        Is AGW going to engage in revisionist history by saying, “we never said it was 343 W/m^2 incoming, 240 W/m^2 retained, and 103 W/m^2 …”? If it’s not in the oceans, and it didn’t get released (and if it did, how did that happen?), where is it?

      • rishrac: I think we may agree on some things, especially what you call the energy budget and what I call the radiative [im]balance at the TOA. Above Arno was saying that we should focus on temperature change from El Nino peak to peak, not the long term trend – which is less than the IPCC forecast.

        There are lots of dubious claims being made by both sides. I don’t want to waste my time discussing history, except to say that it has provided ample justification for skepticism.

        Your discussion of 2/3 and 1/3 could be refined. It is an intrinsic property of almost all materials to emit more radiation when they warm. If we treat the Earth as a blackbody at 255 K or as a graybody (e = 0.61) at 288 K, we expect the planet to emit about 3.8 or 3.3 W/m2 more radiation to space for every 1 K of surface warming. The fundamental question is what fraction of that increase in radiative cooling actually escapes to space (despite feedbacks). The IPCC’s models say only 25-40%, or 0.7-1.5 W/m2/K. If we say that the planet has warmed 0.5 K during the satellite era, then that would be 0.35-0.75 W/m2. For an ECS of 1.6 K (Lewis and Curry), the increase in radiative cooling would be about 2.0 W/m2/K, so the increase would be about 1.0 W/m2. Changes this small, measured over a third of a century with changing technology, are problematic.

        When considering the increase in radiative cooling with temperature, we also need to account for changes in SWR reflected by clouds and the surface. So the numbers I cited above are for the total change in TOA OLR and reflected SWR.

        The best way to see how our planet really behaves is to look at its response to the 3.5 K of seasonal warming that occurs every year because of the lower heat capacity of the NH. In the LWR channel, radiative cooling appears to increase at about 2.3 W/m2/K, and that value is very linear and reproducible. That is consistent with an ECS of about 1.6 K. The SWR response is not as linear and is biased by differences between the hemispheres. Global warming is not seasonal warming.

        http://www.pnas.org/content/110/19/7568.full.pdf

        No matter what the right answer is, the above paper shows that climate models do a lousy (and mutually inconsistent) job of modeling the LWR and SWR changes that actually occur during seasonal warming.
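
        For readers who want the arithmetic linking those feedback numbers to a climate sensitivity, the shortcut is that ECS is roughly the doubling forcing divided by the net radiative response per degree. A sketch with an assumed round 3.7 W/m2 doubling forcing:

          F_2X = 3.7                    # W/m^2 per CO2 doubling, assumed round value

          def ecs(net_response_w_m2_per_k):
              """Equilibrium warming per doubling implied by a given net response to warming."""
              return F_2X / net_response_w_m2_per_k

          print(f"2.3 W/m2/K (seasonal LWR value above) -> ECS about {ecs(2.3):.1f} K")
          print(f"1.5 W/m2/K                            -> ECS about {ecs(1.5):.1f} K")
          print(f"0.7 W/m2/K                            -> ECS about {ecs(0.7):.1f} K")

        Which is why a 2.3 W/m2/K response corresponds to an ECS near 1.6 K, while the 0.7-1.5 W/m2/K model range corresponds to roughly 2.5-5 K.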

      • I’ve been down the rabbit hole on the laws of thermodynamics. How AGW can spin things is truly amazing. I had to back off from the minutiae and define it this way for AGW and the energy budget: 1) if incoming and outgoing are balanced, not much is going to happen and AGW is not valid; 2) if outgoing is less than incoming, then by how much, and what was predicted? If it meets the target, then AGW is valid. Even if there is some warming, that doesn’t mean CO2 is the cause. Scientifically it has to meet the target numbers within the time parameters that were ascribed; it is supposed to be based on actual science. 3) obviously, if outgoing is greater than incoming, we have cooling, and we are all praying that CO2 can be our salvation. AGW is not valid under this condition.
        AGW has failed. There isn’t any other way around it. As I answered Toneb, the current spike in temperature from the last El Nino should be the bottom of what global temperature should be under AGW, not the top and falling. Additionally, there is no way of knowing where the heat is other than escaping, which they aren’t admitting to. It isn’t in the oceans, as we would see a dramatic increase in sea level rise from thermal expansion. Heat escaping under AGW negates the positive feedback.

  33. Without a good statistical model with all the proper correlations, all this talk of “statistical significance” is just so much arguing over how many angels can dance on the head of a pin.

    The El Nino has yet to work itself out of the data. In the coming year, we will get a much better view of whether the pause is continuing, or transitioning into a warming or a cooling era. My money is on the last. We shall see…

  34. For average temperature compilations, the margins of error are too large to apply statistical analyses to tiny temperature anomalies and then report results with two and three decimal places.

    That is bad math and bad science.

    This article does just that — so what we really have here is mathematical mass-turbation by someone who loves playing with numbers and ignoring reasonable data margins of error.

    The proper way to work with very rough estimates like these is to stand far back from a chart and observe the general trend (and you must do that before the goobermint bureaucrats “adjust” the data into a new trend!)

    What appears from a long distance away from a chart is a flat trend between the 1998 and 2015/2016 El Nino temperature peaks.

    That could be a meaningful trend … or 50 years from now it could appear to be a meaningless random variation, cause unknown, superimposed on a mild long-term warming trend that started in 1850.

    You don’t need any math to state a general observation from a chart.

    But if you want to use math, the observation changes to either a slightly rising trend, a flat trend, or a slightly declining average temperature trend when INCLUDING reasonable margins of error (more than +/- 0.1 degree C., in my opinion).

    For the surface measurements, based on the history of those data, you know in advance the initial numbers are no good and will be repeatedly “adjusted” until they are “right”!

    It makes no sense to apply statistics to surface data you know will be repeatedly “adjusted”,
    have a claimed margin of error of +/- 0.1 degrees C., which is false precision (complete nonsense),
    and are compiled by smarmy left-wing scientists who can’t be trusted.

    The same people who have predicted a coming global warming catastrophe, have shown their bias in the past by adjusting raw data to show more warming, and better match their long-term predictions.

    How can we trust the temperature actuals if the people compiling the actuals are the same people who make the temperature predictions … and of course they want their predictions to be right ?

    • by someone who loves playing with numbers and ignoring reasonable data margins of error

      Please take this up with Nick Stokes.

      a claimed margin of error of +/- 0.1 degrees C., which is false precision (complete nonsense)

      Please take this up with Dr. Spencer.

    • “How can we trust the temperature actuals if the people compiling the actuals are the same people who make the temperature predictions … and of course they want their predictions to be right ?”

      Simple.
      By not invoking a conspiracy just because you do not like the outcome of the “predictions”.

      They do not “want their predictions to be right”.
      You ignore the fact that they are scientists and therefore investigate things for a living. To study and learn.
      The “predictions” (1.5 to 4.5 C per 2x CO2) are a work in progress, with field investigation and then modelling as the chief tools for learning.

      No one here or elsewhere has found fault such that any “adjustments” materially affect the global warming numbers.
      Oh, and I don’t know about you, but I personally don’t get any satisfaction from cheating whilst playing patience.
      Oh, and again, neither did the Exxon scientists.

      • “No one here or elsewhere has found fault such that any “adjustments” materially affect the global warming numbers.”

        Utter rubbish.

        Stop making stuff up.

      • You really need help, Toneb. Every skeptic on here has complained about the changing numbers. In fact, it has been the subject of several articles on this site. It has been a contentious issue. This isn’t the only site where it has been discussed and complained about. ….. The irony is that even with the adjustments in AGW’s favor, the theory has still failed. AGW should have died in 2015 when the paper appeared showing that CO2 followed temperature for the last 60 years. But wait! NOAA improved the numbers so that no such relationship existed, so no need to complain. Are you out of your mind?
        Perhaps you also remember when AGW also started changing the ending and starting dates of global temperatures when temperature wasn’t going their way. They only got a year or two out of that. And nobody said anything about throwing away the original documents after the numbers had been altered?
        But then you’re right, nobody who is a true believer in the holy religion of AGW is going to complain. And those are the only people who matter to you.

      • Leftist politicians want a “crisis” to respond to with more government.

        Leftists tend to be a very consistent group, like a flock of parrots, on every subject.

        Consider their recent “Russians elected Trump” squawk squawk “Russians elected Trump” squawk squawk “Russians elected Trump” squawk squawk

        Not one shred of evidence has been presented to the public to prove that allegation, yet Dumbocrats will repeat it as a “fact” for the rest of their lives.

        And Al Gore was cheated in Florida too.

        Leftists like you cannot be reasoned out of their beliefs, because they were never “reasoned into them” in the first place.

        The false climate change “crisis” has worked well for the leftists — a lie repeated enough times can seem to be the truth to many people.

        An invisible crisis — the best kind !

        Claiming a climate change catastrophe is in progress … when anyone with sense would think the current climate was wonderful.

        Claiming the ability to predict the climate 100 years into the future … even after 40 years of grossly inaccurate predictions … 100-year predictions so bad they usually look foolish after just ten years of actuals are collected.

        Fools like you, Toneb, demonize CO2 when there is no evidence in 4.5 billion years of climate history that CO2 ever controlled the average temperature, or that CO2 levels much higher than today ever caused runaway warming.

        The “scientists” on the government payroll got there, under Obama certainly, because they believe in the “crisis”.

        Since Earth is always cooling or warming, it is always possible to extrapolate the recent average temperature trend and claim a climate crisis, either a warming or cooling ‘crisis”, is on the way.

        And that “crisis” will arrive, we are ALWAYS told, in the future … even after predicting it for 40 years, the “crisis” is still off in the future.

        The “adjustments” have consistently caused more global warming, either by cooling the past, or warming the present.

        Much, if not most, of the claimed warming since 1880 is from “adjustments”.

        Much, if not most, of the surface measurements are infilling (wild guesses).

        And, instead of the number of surface thermometers going up over time for better coverage, the number of thermometers used in the global average have declined significantly since the 1960s.

        Toneb, not only are you a fool for believing that CO2 is evil, you are also a liar.

        In your comment you falsely accused me of claiming there is a conspiracy
        ” … just because you do not like the outcome of the “predictions”.

        I never claimed there was a “conspiracy”.

        I only point out that leftists have a very consistent belief about a coming climate crisis (wrong, but consistent), and they are the only people hired by governments headed by leftists to “study” the climate … and when people have a belief, they are subject to confirmation bias.

        Not once in writing about climate change have I ever said I don’t “like” the outcome of the predictions — what I consistently say is I don’t like predictions, for two very good reasons:

        (1) Predictions about the future tend to be wrong, and
        (2) Predictions about the average temperature in the future have been consistently wrong — for about 97% of the simulations — in the past 40 years

        I complain about predictions from skeptics just as much as I complain about predictions from mindless CO2 is evil believers like you.

        The entire climate change crisis claim is based on predictions.

        No one knows what causes climate change but almost everyone makes predictions!

        That’s idiotic.

        The coming global warming crisis predictions could have been made in 1940, at the beginning of the age of manmade CO2.

        They would have been wrong.

        The average temperature declined from 1940 to 1975, and there was a flat trend between the 1998 and 2015/2016 El Nino peaks — both significant periods with no warming trend while CO2 rose significantly.

        We skeptics automatically know what we are dealing with from the bad science (predicting the future climate is not really science at all), the “hide the decline”, the phony Mann Hockey Stick Chart, and especially leftist cowards like you who avoid debate by launching character attacks … which is exactly what you did to me in your comment on January 15, 2017 at 2:22 pm:

        (1) You falsely accused me of claiming there was a conspiracy, and falsely claimed I did that because I didn’t “like” the predictions:

        “By not invoking a conspiracy just because you do not like the outcome of the “predictions”.”

        (2) You falsely summarized the skeptics who write articles posted here, and those who comment on articles at this website:

        ” No one here or elsewhere has found fault such that any “adjustments” materially affect the global warming numbers.”

        (3) You falsely attacked … I’m not sure who — maybe Exxon — with your concluding, and completely incoherent, sentences:

        “Oh, and I don’t know about you, but I personally don’t get any satisfaction from cheating whilst playing patience. Oh, and again, neither did the Exxon scientists.”

        You have to be a leftist, because your comment was one character attack after another — you attacked me, skeptics who get articles posted here, skeptics who make comments about the articles here, and I suppose you were attacking Exxon and any scientists on their payrolls.

        Hey, you forgot to attack Walmart and McDonald’s!

        Nice job with the character attacks — very concise — although incoherent at the end of the comment.

        I suppose your belief is that everyone who observes the current wonderful climate is wrong, and you and your scary predictions of a coming global warming catastrophe are right?

        Is that what you are trying to tell everyone by commenting here?

        Please explain your own climate beliefs in simple English — we all need to laugh.

        My own beliefs are simple and posted under my real name:
        (1) NO ONE CAN PREDICT THE FUTURE CLIMATE, and
        (2) THERE IS NO EVIDENCE CO2 CONTROLS THE AVERAGE TEMPERATURE OF OUR PLANET

        To a leftist like you, I suppose my modest skepticism makes me a “radical”,
        and perhaps I should be imprisoned for being a “science denier”?

        Toneb, what is your real name, and what do you do for a living?
        I assume you’re not involved with science, but am still curious.
        (My real name is on all my posts, and I’m retired, since the end of 2004,
        from the product development organization of a manufacturing company.)

  35. Frank wrote “The idea the GHGs permanently trap heat in the atmosphere was created by alarmists for the simple-minded.”

    Toneb replied: It wasn’t “created” – it was discovered by repeated experimentation and observation, and verified by applied theory, going all the way back to Tyndall and Arrhenius from 1859. (Were there “alarmists” then?)

    Frank responds: Tyndall, Arrhenius, and other early scientists discussed radiative forcing by GHGs from a surface energy balance perspective. We now know that this approach is grossly flawed. Radiative imbalance at the TOA is the thing that is important. Fortunately, we didn’t have a bunch of environmental fanatics running around back then inflating fears, and we were able to use fossil fuels to dramatically improve our standard of living.

    In the 1960s, Keeling demonstrated that about 50% of CO2 from burning fossil fuels was accumulating in the atmosphere; and Manabe and Wetherald discovered how heat flux through the atmosphere was controlled by a combination of radiation and convection. That shifted our attention from surface energy balance to TOA energy balance and the role GHGs played in that. Despite billions of dollars spent, we aren’t much closer to understanding how much warming that will cause – even the IPCC says the 70% ci for ECS is 1.5-4.5 K/doubling.

    GHGs do not “trap” heat by absorbing thermal infrared. They both absorb and emit thermal infrared. To a first approximation, doubling CO2 doubles both absorption AND emission. Radiative forcing is the small difference between the two increases, created by the temperature gradient in our atmosphere. It is accurate to say that GHGs slow radiative cooling to space, but they don’t trap anything.

  36. Frank:
    “Frank responds: Tyndall, Arrhenius, and other early scientists discussed radiative forcing by GHGs from a surface energy balance perspective. We now know that this approach is grossly flawed. Radiative imbalance at the TOA is the thing that is important. Fortunately, we didn’t have a bunch of environmental fanatics running around back then inflating fears, and we were able to use fossil fuels to dramatically improve our standard of living.”

    No, they observed by experiment the degree of attenuation of LWIR as a function of CO2 concentration.
    This can then be applied via the Beer-Lambert equation to the path length of the atmosphere, which is why CO2 cannot be “saturated” at such small concentrations.
    More can always be added, such that the effective height of net emission to space keeps rising. At present this is around 7 km, which corresponds to an atmospheric temp of -18 C: the Earth’s emission temperature as seen from space, and the temperature it would have with no GHGs.
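
    A toy illustration of the Beer-Lambert point; the optical depths below are placeholders chosen only to show the behaviour, not measured values for CO2:

      import math

      def transmitted_fraction(tau):
          """Beer-Lambert: fraction passing straight through a layer of optical depth tau."""
          return math.exp(-tau)

      # tau scales with concentration times path length, so even a trace gas can reach
      # tau well above 1 over a full atmospheric column in its absorption bands.
      for tau in (0.1, 1.0, 5.0, 10.0):
          print(f"tau = {tau:5.1f}  ->  fraction transmitted directly: {transmitted_fraction(tau):.4f}")

    Once tau is large in a band, radiation in that band escapes only from levels high (and cold) enough for the overlying optical depth to drop back toward 1, which is the “effective emission height” idea above.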

    “GHGs do not “trap” heat by absorbing thermal infrared. They both absorb and emit thermal infrared. To a first approximation, doubling CO2 doubles both absorption AND emission. Radiative forcing is the small difference between the two increases, created by the temperature gradient in our atmosphere. It is accurate to say that GHGs slow radiative cooling to space, but they don’t trap anything.”

    GHGs don’t “trap” heat literally, as in it never getting out.
    There is a delay, during which more SW comes in, to the tune of around 150 W/m2.
    They emit at a higher level in the troposphere, as stated above, and therefore do so at a colder temp and so less efficiently.

    And thanks for indicating where your “ideas” come from.
    Not science.
    But ideological motivation.

    • Toneb, with a net retention of 240 W/m^2 over the last 20 years, would you care to tell us what the earth’s temperature should be? That was when CO2 levels were a frightening 370 ppmv. If those numbers were right, even without any more CO2 produced from that time on (which didn’t happen; CO2 kept being produced in record amounts), I can assure you that 1) there would have been severe sea level rise from thermal expansion of the oceans (that, by the way, is why the heat isn’t hiding in the oceans; we aren’t talking about a couple of millimeters here), 2) both poles would have been completely melted by 2013, and Al Gore wasn’t joking about that; I agree that if the math were right that would have happened, and 3) that last spike from the El Nino would be the bottom of the current global temperature.
      Your picture is wrong and as a result your analysis is wrong. I have no ideological motivation. If AGW were correct, I would support it. I think most people here would too. However, over time I have become very convinced that AGW is not only wrong, but fraudulent as well. No other science would allow the kind of manipulation that occurs in AGW. I can assure you that those who once believed in AGW and have become skeptics aren’t going back unless there is convincing evidence. If anything, the evidence is that AGW is dead.

  37. The changeover from a warming period to a pause in warming began in about 2002, not 1998. The spikes or dips in the temperature record caused by short-term climate events like El Nino or La Nina should be ignored during the study of long-range temperature change, as they can serve as confusing distractions. If a horizontal line is drawn from the temperature reading for 2002 forward to the present (ignoring the El Nino reading of 2016), the pause in warming shown by the temperature record becomes evident.
