# Huge Divergence Between Latest UAH & HadCRUT4 Temperature Datasets (Now Includes April Data)

Guest Post by Werner Brozek, Edited by Just The Facts:

Some of you may have wondered why the title and the above plot compare different data sets. The reasons are that GISS and HadCRUT4.3 are very similar, and UAH6.0 is now very similar to RSS. However, WFT carries neither the latest UAH nor the latest HadCRUT. So when you plot UAH on WFT, you are actually plotting version 5.5, not even 5.6. Version 5.5 has not been updated since December 2014, and HadCRUT4.2 has not been updated since July 2014, so those slopes are off by a large amount, because HadCRUT had record levels over the last year. GISS and RSS, however, are up to date on WFT.

The times of 58 months and 162 months were chosen to give a symmetric picture of shorter and longer terms where the slopes diverge between the satellites and ground based data sets.

In the next four paragraphs, I will give information for HadCRUT4.2 in July 2014, HadCRUT4.3 in April 2015, UAH5.6 in March 2015 and UAH6.0 in April 2015. The information will be

1. For how long the slope is flat;

2. Since when the warming is not statistically significant according to Nick Stokes’s calculation;

3. Since when the warming is not statistically significant according to Dr. McKitrick’s calculation where applicable;

4. The previous hot record year; and

5. Where each data set would rank after the given number of months.

For HadCRUT4.2 in July 2014, the slope was flat for 13 years and 6 months. There was no statistically significant warming since November 1996 according to Nick Stokes. Dr. McKitrick said there was no statistically significant warming for 19 years. The previous record warm year was 2010. As of July 2014, HadCRUT4.2 was on pace to be the third warmest year on record.

For HadCRUT4.3 in April 2015, the slope is not negative for any period worth mentioning. There is no statistically significant warming since June 2000 according to Nick Stokes. The previous record warm year was 2014. As of April, HadCRUT4.3 is on pace to set a new record. Note that on all criteria, HadCRUT4.3 is warmer than HadCRUT4.2.

For UAH5.6 as of March 2015, the slope was flat for an even 6 years. There was no statistically significant warming since August 1996 according to Nick Stokes. Dr. McKitrick said there was no statistically significant warming for 19 years; however, that figure was as of about April 2014. The previous record warm year was 1998. As of March 2015, UAH5.6 was on pace to be the third warmest year on record.

For UAH6.0 as of April 2015, the slope is negative for 18 years and 4 months. There is no statistically significant warming since October 1992 according to Nick Stokes. The previous record warm year was 1998 as well. As of April, UAH6.0 is on pace to be the 8th warmest year. Note that unlike the HadCRUT comparison, UAH6.0 is colder than UAH5.6.

A year ago, Dr. McKitrick used HadCRUT4.2 and UAH5.6 to come up with times of no statistically significant warming for each of these data sets. In the meantime, HadCRUT4.2 has been replaced by HadCRUT4.3, which has been setting hot records over the past year, while UAH5.6 has been replaced by UAH6.0, which is much cooler than UAH5.6. As a result, his times are no longer valid for these two data sets, so I will no longer give them.

For RSS, Dr. McKitrick had a time of 26 years. Nick Stokes’s start date was November 1992 last April and is January 1993 at present, so there is very little change in the starting point; however, we are now a year later. Therefore I would predict that if Dr. McKitrick ran the numbers again, he would get a time of 27 years without statistically significant warming.

For UAH5.6, Dr. McKitrick had a time of 16 years. However, Nick Stokes’s new time for UAH6.0 is from October 1992. Since this is three months earlier than the RSS time, I would predict that if Dr. McKitrick ran the numbers again, he would also get a time of 27 years without statistically significant warming for the new UAH6.0.

For Hadcrut4.2, Dr. McKitrick had a time of 19 years. At that time, Nick Stokes had a start date of October 1996. However, Nick Stokes’s new start date for Hadcrut4.3 is June 2000. It would be reasonable to assume that Dr. McKitrick would get 15 years if he did the calculation today.

In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on some data sets. At the moment, only the satellite data have flat periods of longer than a year. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2015 so far compares with 2014 and the warmest years and months on record so far. For three of the data sets, 2014 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data are available on WoodForTrees.com (WFT). All of the data on WFT are also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative on at least one calculation. So if the slope from September is +4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
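For readers who would like to check this themselves, below is a minimal Python sketch of that scan, assuming the monthly anomalies have already been loaded into an array. It is not the code behind WFT, just the same idea.

```python
import numpy as np

def longest_flat_start(anom):
    """Return the earliest start index from which the OLS slope of the
    monthly anomaly series, taken through to the last month, is negative.
    This is the 'furthest month in the past' described above."""
    y = np.asarray(anom, dtype=float)
    for start in range(len(y) - 12):          # require at least a year of data
        t = np.arange(len(y) - start)
        slope = np.polyfit(t, y[start:], 1)[0]
        if slope < 0.0:
            return start
    return None                               # no flat period of a year or more

# Example: if the series starts in January 1979, index 216 would be January 1997.
```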

1. For GISS, the slope is not flat for any period that is worth mentioning.

2. For Hadcrut4, the slope is not flat for any period that is worth mentioning. Note that WFT has not updated Hadcrut4 since July 2014 and it is only Hadcrut4.2 that is shown.

3. For Hadsst3, the slope is not flat for any period that is worth mentioning.

4. For UAH, the slope is flat since January 1997 or 18 years and 4 months. (goes to April using version 6.0)

5. For RSS, the slope is flat since December 1996 or 18 years and 5 months. (goes to April)

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line at the top indicates that CO2 has steadily increased over this period.

When two things are plotted as I have done, the left scale only shows the temperature anomaly.

The actual numbers are meaningless since the two slopes are essentially zero. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the two sets.
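For anyone who wants to see ln(CO2) plotted against an anomaly series anyway, the sketch below shows one way to do it outside WFT. The arrays are made-up placeholders standing in for real data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder series standing in for real data; load your own anomaly and CO2 values.
months = np.arange(220)                       # roughly December 1996 to April 2015
anomaly = 0.25 + 0.1 * np.sin(months / 7.0)   # made-up, flat-ish anomaly series
co2 = 363.0 + 2.0 * months / 12.0             # made-up CO2 rise in ppm

fig, ax1 = plt.subplots()
ax1.plot(months, anomaly, color="tab:red")
ax1.set_ylabel("anomaly (deg C)")

ax2 = ax1.twinx()                             # second axis, since the units differ
ax2.plot(months, np.log(co2), color="tab:green")
ax2.set_ylabel("ln(CO2 / ppm)")
plt.show()
```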

Section 2

For this analysis, data was retrieved from Nick Stokes’s Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set.  In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 14 and 22 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.

The details for several sets are below.

For UAH6.0: Since October 1992: Cl from -0.042 to 1.759

This is 22 years and 7 months.

For RSS: Since January 1993: Cl from -0.023 to 1.682

This is 22 years and 4 months.

For Hadcrut4.3: Since June 2000: Cl from -0.015 to 1.387

This is 14 years and 10 months.

For Hadsst3: Since June 1995: Cl from -0.013 to 1.706

This is 19 years and 11 months.

For GISS: Since November 2000: Cl from -0.041 to 1.354

This is 14 years and 5 months.
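Below is a minimal sketch of how such a confidence interval can be computed, assuming an ordinary least-squares trend with a lag-1 autocorrelation (Quenouille) adjustment to the effective sample size. Nick Stokes’s Trendviewer has its own implementation, so treat this only as an approximation of the idea; the Cl figures above appear to be in degrees per century.

```python
import numpy as np

def trend_ci_95(anom, months_per_year=12):
    """OLS trend of a monthly anomaly series with a 95% confidence interval
    widened for lag-1 autocorrelation of the residuals.
    Returns (trend, lower, upper) in degrees per century."""
    y = np.asarray(anom, dtype=float)
    t = np.arange(len(y)) / (months_per_year * 100.0)   # time in centuries
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]       # lag-1 autocorrelation
    n_eff = len(y) * (1.0 - r1) / (1.0 + r1)            # effective sample size
    sigma2 = np.sum(resid**2) / (n_eff - 2.0)
    se = np.sqrt(sigma2 / np.sum((t - t.mean())**2))
    return slope, slope - 1.96 * se, slope + 1.96 * se

# If the lower bound is below zero, warming from that start month is not
# statistically significant at the 95% level by this criterion.
```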

Section 3

This section shows the 2015 data to date and other information in the form of a table. The table shows the five data sources along the top. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the column are the following:

1. 14ra: This is the final ranking for 2014 on each data set.

2. 14a: Here I give the average anomaly for 2014.

3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2014 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0. Periods of under a year are not counted and are shown as “0”.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.

10. Jan: This is the January 2015 anomaly for that particular data set.

11. Feb: This is the February 2015 anomaly for that particular data set, etc.

14. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.

15. rnk: This is the rank that each particular data set would have for 2015 without regards to error bars and assuming no changes. Think of it as an update 20 minutes into a game.

| Row | UAH | RSS | Hadcrut4 | Hadsst3 | GISS |
|---|---|---|---|---|---|
| 1. 14ra | 6th | 6th | 1st | 1st | 1st |
| 2. 14a | 0.170 | 0.255 | 0.564 | 0.479 | 0.68 |
| 3. year | 1998 | 1998 | 2014 | 2014 | 2014 |
| 4. ano | 0.483 | 0.55 | 0.564 | 0.479 | 0.68 |
| 5. mon | Apr98 | Apr98 | Jan07 | Aug14 | Jan07 |
| 6. ano | 0.742 | 0.857 | 0.835 | 0.644 | 0.93 |
| 7. y/m | 18/4 | 18/5 | 0 | 0 | 0 |
| 8. sig | Oct92 | Jan93 | Jun00 | Jun95 | Nov00 |
| 9. sy/m | 22/7 | 22/4 | 14/10 | 19/11 | 14/5 |
| 10. Jan | 0.261 | 0.367 | 0.690 | 0.440 | 0.75 |
| 11. Feb | 0.157 | 0.327 | 0.660 | 0.406 | 0.80 |
| 12. Mar | 0.139 | 0.255 | 0.680 | 0.424 | 0.84 |
| 13. Apr | 0.065 | 0.174 | 0.655 | 0.557 | 0.71 |
| 14. ave | 0.156 | 0.281 | 0.671 | 0.457 | 0.78 |
| 15. rnk | 8th | 6th | 1st | 2nd | 1st |
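For anyone reproducing rows 14 and 15, here is a small sketch (not my actual spreadsheet) of the arithmetic: average the monthly anomalies reported so far, then see where that average would fall among the annual means of earlier years.

```python
def year_to_date_ave_and_rank(monthly_so_far, past_annual_means):
    """monthly_so_far: anomalies for the months of 2015 available so far.
    past_annual_means: annual mean anomalies of earlier years (the full record
    is needed for a real ranking; an empty list simply returns rank 1)."""
    ave = sum(monthly_so_far) / len(monthly_so_far)
    rank = 1 + sum(1 for v in past_annual_means if v > ave)
    return ave, rank

# UAH6.0, January to April 2015, from the table above:
print(year_to_date_ave_and_rank([0.261, 0.157, 0.139, 0.065], []))  # ave 0.1555, i.e. 0.156
```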

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 6.0 was used. Note that WFT uses version 5.5; however, this version was last updated in December 2014 and it looks like it will no longer be provided.

http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta2

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2014 in the form of a graph, see the WFT graph below. Note that Hadcrut4 is the old version that has been discontinued. WFT does not show Hadcrut4.3 yet. As well, only UAH version 5.5 is shown which stopped in December. WFT does not show version 6.0 yet.

As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.

Appendix

In this part, we are summarizing data for each set separately.

RSS

The slope is flat since December 1996 or 18 years and 5 months. (goes to April)

For RSS: There is no statistically significant warming since January 1993: Cl from -0.023 to 1.682.

The RSS average anomaly so far for 2015 is 0.281. This would rank it as 6th place. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2014 was 0.255 and it was ranked 6th.

UAH6.0

The slope is flat since January 1997 or 18 years and 4 months. (goes to April using version 6.0)

For UAH: There is no statistically significant warming since October 1992: Cl from -0.042 to 1.759. (This is using version 6.0 according to Nick’s program.)

The UAH average anomaly so far for 2015 is 0.156. This would rank it as 8th place. 1998 was the warmest at 0.483. The highest ever monthly anomaly was in April of 1998 when it reached 0.742. The anomaly in 2014 was 0.170 and it was ranked 6th.

Hadcrut4.3

The slope is not flat for any period that is worth mentioning.

For Hadcrut4: There is no statistically significant warming since June 2000: Cl from -0.015 to 1.387.

The Hadcrut4 average anomaly so far for 2015 is 0.671. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.835. The anomaly in 2014 was 0.564 and this set a new record.

Hadsst3

The slope is not flat for any period that is worth mentioning.

For Hadsst3: There is no statistically significant warming since June 1995: Cl from -0.013 to 1.706.

The Hadsst3 average anomaly so far for 2015 is 0.457. This would rank 2nd if it stayed this way. The highest ever monthly anomaly was in August of 2014 when it reached 0.644. The anomaly in 2014 was 0.479 and this set a new record.

GISS

The slope is not flat for any period that is worth mentioning.

For GISS: There is no statistically significant warming since November 2000: Cl from -0.041 to 1.354.

The GISS average anomaly so far for 2015 is 0.78. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2014 was 0.68 and it set a new record.

Conclusion

Why are the new satellite and ground data sets going in opposite directions? Is there any reason that you can think of where both could simultaneously be correct? Lubos Motl has an interesting article in which it looks as if satellites can “see” things the ground data sets miss. Do you think there could be something to this for at least a partial explanation?

RSS May: With the May anomaly at 0.310, the 5 month average is 0.287 and RSS would remain in 6th place if it stayed this way. The time for a slope of zero increases to 18 years and 6 months from December 1996 to May 2015.

WFT Update: A few days ago, WFT was updated and now shows HadCRUT4.3 to date.

The UAH series on WFT has also been updated, but it now shows UAH5.6 and not UAH6.0, so it cannot be used to verify the 18 year and 4 month pause for UAH6.0.

James
June 9, 2015 6:23 am

ugg. Any way to turn off the auto-play video ads? Obnoxious.

Hugh
June 9, 2015 6:46 am

Adblock in Firefox could do?

June 9, 2015 7:02 am

Yes – go to firefox or mozilla to install (forgot which one)

James
June 9, 2015 8:19 am

Thanks for the tips. I am using Safari on an iPad, and that seems to complicate things…more research I guess!

Robert Westfall
June 9, 2015 8:26 am

Ghostery in Firefox is also good.

TRM
June 9, 2015 9:59 am

select “Click to run”

Jl
June 9, 2015 5:49 pm

Just hit the mute button and keep scrolling down.

Physics Major
June 10, 2015 10:31 am

The latest version of Firefox has “reader view” that will display the post with all of the extraneous page clutter stripped away. Unfortunately, you can’t see the comments in reader view, but it does make reading the text easier.

June 12, 2015 6:03 pm

My ad blocker software suppresses them.

It doesn't add up...
June 9, 2015 6:25 am

Also relevant to this discussion is the following observation:
http://realclimatescience.com/2015/06/data-tampering-on-the-other-side-of-the-pond/
As I commented there, there’s a publicly available full explanation for the changes out there somewhere, isn’t there?
Seems like HADCRUT 4 has been Karled.

Werner Brozek
June 9, 2015 7:09 am
It doesn't add up...
Reply to  Werner Brozek
June 9, 2015 10:22 am

Has anyone taken a good look at the UHI around those added Chinese stations? It’s not like there hasn’t been major economic development in the country, after all…

george e. smith
Reply to  Werner Brozek
June 12, 2015 2:33 pm

There’s a whole lot of “no statistically significants” in that there above Werner, so it begs the question:
Over the time frames discussed above would there be ANY statistically significant difference between the log of the atmospheric CO2; say from Mauna Loa, and the best straight line linear fit to the same data ??
In other words; why would anybody want to plot something that couldn’t possibly show anything that is statistically significant ?
I venture that statistical significance, is significant only to persons who play the game of statistics.
It isn’t of any significance to the physical universe, which is not even aware that statistical mathematics even exists.
G

Werner Brozek
Reply to  Werner Brozek
June 12, 2015 3:29 pm

Over the time frames discussed above would there be ANY statistically significant difference between the log of the atmospheric CO2; say from Mauna Loa, and the best straight line linear fit to the same data ??

I have never seen this discussed anywhere. It would be something for Nick Stokes to comment on. I just sent him an email asking if he would like to respond.

Nick Stokes
Reply to  Werner Brozek
June 14, 2015 3:57 pm

I’m not sure whether the difference would be statistically significant. The CO2 curve is not well modelled by a random process. The deviations are mainly cyclic.
But in fact the difference between log and linear over the period in question is not great. The arithmetic mean (linear) of 280 and 400 is 340. The geometric mean (log) is 334.7.
“I venture that statistical significance, is significant only to persons who play the game of statistics.”
Statistical significance matters if you are trying to deduce something from observations. But often you aren’t. Exponential rise of CO2 is a reasonable descriptor; no principle really hangs on it. The log dependence of forcing on CO2 is a matter of radiative calculation.
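For reference, here is a quick numerical check of the means quoted above (the arithmetic mean is the midpoint on a linear CO2 scale, the geometric mean is the midpoint on a logarithmic one):

```python
import math

print((280 + 400) / 2)        # arithmetic mean: 340.0
print(math.sqrt(280 * 400))   # geometric mean: 334.66..., i.e. about 334.7
```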

harrytwinotter
June 9, 2015 6:25 am

Why are satellite and surface datasets different? Because they are measurements of different things.
From memory Cowtan and Way came up with a method for converting one to the other, at least in part.

Hugh
June 9, 2015 6:55 am

Could you elaborate why temperature of lower troposphere has a hiatus, pause or plateau, which would not coincide (killing of money?) with surface pause?

harrytwinotter
June 9, 2015 7:00 am

Hugh,
I am not sure I understand your question – are you asking me to explain WHY they are different. I do not know, if I did I would try to write a paper about it.
Which periods from the UAH and RSS datasets do you think contain a “pause”. That is checkable.

MarkW
June 9, 2015 8:25 am

Harry believes that the surface is warming, but that above the surface it’s not.
That’s why the cooked data from HADCRUT shows warming, but the satellite data doesn’t.

Werner Brozek
June 9, 2015 7:31 am

Why are satellite and surface datasets different? Because they are measurements of different things.

Yes, they do measure different things. But unless the adiabatic lapse rate is changing, there should not be huge differences over the long run.

Reply to  Werner Brozek
June 9, 2015 4:14 pm

Perhaps a better way of saying it is that they measure the “same thing” but take the measurements at different places.
The satellite measurements are most certainly a more uniformly distributed measurement of the atmosphere than are the ground station measurements, which are predominantly clustered around people and cities.

sodk
Reply to  Werner Brozek
June 16, 2015 6:48 am

“there should not be huge differences over the long run”
What???
Lower troposphere should have about 2/3 of the surface warming rate. The difference should be huge over the long run.

Werner Brozek
June 9, 2015 7:41 am

From memory Cowtan and Way came up with a method for converting one to the other, at least in part.

I could be wrong here, but I do not think they converted one to the other. I think they used satellite data where there was no surface data available and figured out what the Hadcrut data would have been if it were available. But now that I look at it this way, I suppose it is a conversion of sorts. The interesting thing is that Cowtan and Way must have had a formula such as a constant adiabatic lapse rate to even attempt this conversion. This just reinforces the idea that the two data sets should not be diverging in my opinion.

harrytwinotter
Reply to  Werner Brozek
June 9, 2015 8:21 am

Werner Brozek.
I have never seen a scientific study saying the satellite and surface temperatures should track each-other. If there is such a study someone can post a citation.
The UAH and RSS estimates do not always track each other, either. I do know the methods they use to allow for unsampled regions and data contamination differ.
I do believe if base lines are aligned correctly and a longer trend is computed then satellite and surface temperatures track each other better.

MarkW
Reply to  Werner Brozek
June 9, 2015 8:28 am

Harry, you don’t need a study, just a knowledge of basic physics.
Unless the air column follows the adiabatic lapse rate, it will become unstable. If the air at the surface is warming faster than the air at altitude, then the atmosphere becomes unstable and quickly overturns. This is why you get afternoon thunderstorms in many places.

Werner Brozek
Reply to  Werner Brozek
June 9, 2015 8:55 am

If there is such a study someone can post a citation. The UAH and RSS estimates do not always track each other, either.

See Paul Homewood’s comment below:
“For those who argue that surface and satellites don’t necessarily follow each other, remember what the Met Office had to say in 2013, when discussing their HADCRUT sets:
Changes in temperature observed in surface data records are corroborated by records of temperatures in the troposphere recorded by satellites.”
As for the tracking, that was true before UAH6.0. Now they more or less agree.

harrytwinotter
Reply to  Werner Brozek
June 9, 2015 8:56 pm

Werner Brozek.
“Changes in temperature observed in surface data records are corroborated by records of temperatures in the troposphere recorded by satellites”
Well yes, as I said above if you align the baselines correctly and take longer trendlines they do match better. But on shorter time scales, they don’t – that is pretty obvious.
I still need a citation to the scientific literature. The stand-alone quote is ambiguous, context is required.

harrytwinotter
Reply to  Werner Brozek
June 9, 2015 9:07 pm

Werner Brozek.
The only reference I can find says this:
“Changes in temperature observed in surface data records are corroborated by measurements of temperatures below the surface of the ocean, by records of temperatures in the troposphere recorded by satellites and weather balloons, in independent records of air temperatures measured over the oceans and by records of sea-surface temperatures measured by satellites.”
In context he means the changes are “corroborated”. He doesn’t say they follow each other. So I am calling a straw man argument on this one.

rgbatduke
Reply to  Werner Brozek
June 10, 2015 5:52 am

I could be wrong here, but I do not think they converted one to the other. I think they used satellite data where there was no surface data available and figured out what the Hadcrut data would have been if it were available. But now that I look at it this way, I suppose it is a conversion of sorts. The interesting thing is that Cowtan and Way must have had a formula such as a constant adiabatic lapse rate to even attempt this conversion. This just reinforces the idea that the two data sets should not be diverging in my opinion.

The two data sets should not be diverging, period, unless everything we understand about atmospheric thermal dynamics is wrong. That is, I will add my “opinion” to Werner’s and point out that it is based on simple atmospheric physics taught in any relevant textbook.
This does not mean that the cannot and are not systematically differing; it just means that the growing difference is strong evidence of bias in the computation of the surface record. This bias is not, really surprising, given that every new version of HadCRUT and GISS has had the overall effect of cooling the past and/or warming the present! This is as unlikely as flipping a coin (at this point) ten or twelve times each, and having it come up heads every time for both products. In fact, if one formulates the null hypothesis “the global surface temperature anomaly corrections are unbiased”, the p-value of this hypothesis is less than 0.01, let alone 0.05. If one considers both of the major products collectively, it is less than 0.001. IMO, there is absolutely no question that GISS and HadCRUT, at least, are at this point hopelessly corrupted.
One way in which they are corrupted with the well-known Urban Heat Island effect, wherein urban data or data from poorly sited weather stations shows local warming that does not accurately reflect the spatial average surface temperature in the surrounding countryside. This effect is substantial, and clearly visible if you visit e.g. Weather Underground and look at the temperature distributions from personal weather stations in an area that includes both in-town and rural PWSs. The city temperatures (and sometimes a few isolated PWSs) show a consistent temperature 1 to 2 C higher than the surrounding country temperatures. Airport temperatures often have this problem as well, as the temperatures they report come from stations that are deliberately sited right next to large asphalt runways, as they are primarily used by pilots and air traffic controllers to help planes land safely, and only secondarily are the temperatures they report almost invariably used as “the official temperature” of their location. Anthony has done a fair bit of systematic work on this, and it is a serious problem corrupting all of the major ground surface temperature anomalies.
The problem with the UHI is that it continues to systematically increase independent of what the climate is doing. Urban centers continue to grow, more shopping centers continue to be built, more roadway is laid down, more vehicle exhaust and household furnace exhaust and water vapor from watering lawns bumps greenhouse gases in a poorly-mixed blanket over the city and suburbs proper, and their perimeter extends, increasing the distance between the poorly sited official weather stations and the nearest actual unbiased countryside.
HadCRUT does not correct in any way for UHI. If it did, the correction would be the more or less uniform subtraction of a trend proportional to global population across the entire dataset. This correction, of course, would be a cooling correction, not a warming correction, and while it is impossible to tell how large it is without working through the unknown details of how HadCRUT is computed and from what data (and without using e.g. the PWS field to build a topological correction field, as UHI corrupts even well-sited official stations compared to the lower troposphere temperatures that are a much better estimator of the true areal average) IMO it would knock at least 0.3 C off of 2015 relative to 1850, and would knock off around 0.1 C off of 2015 relative to 1980 (as the number of corrupted stations and the magnitude of the error is not linear — it is heavily loaded in the recent past as population increases exponentially and global wealth reflected in “urbanization” has outpaced the population).
GISS is even worse. They do correct for UHI, but somehow, after they got through with UHI the correction ended up being neutral to negative. That’s right, UHI, which is the urban heat island effect, something that has to strictly cool present temperatures relative to past ones in unbiased estimation of global temperatures ended up warming them instead. Learning that left me speechless, and in awe of the team that did it. I want them to do my taxes for me. I’ll end up with the government owing me money.
However, in science, this leaves both GISS and HadCRUT (and any of the other temperature estimates that play similar games) with a serious, serious problem. Sure, they can get headlines out of rewriting the present and erasing the hiatus/pause. They might please their political masters and allow them to convince a skeptical (and sensible!) public that we need to spend hundreds of billions of dollars a year to unilaterally eliminate the emission of carbon dioxide, escalating to a trillion a year, sustained, if we decide that we have to “help” the rest of the world do the same. They might get the warm fuzzies themselves from the belief that their scientific mendacity serves the higher purpose of “saving the planet”. But science itself is indifferent to their human wishes or needs! A continuing divergence between any major temperature index and RSS/UAH is inconceivable and simple proof that the major temperature indices are corrupt.
RSS and UAH are directly and regularly confirmed by balloon soundings and, over time, each other. They are not unconstrained or unchecked. They are generally accepted as accurate representations of LTT’s (and the atmospheric temperature profile in general).
The question remains as to how accurate/precise they are. RSS uses a sophisticated Monte Carlo process to assess error bounds, and eyeballing it suggests that it is likely to be accurate to 0.1-0.2 C month to month (similar to error claims for HadCRUT4) but much more accurate than this when smoothed over months or years to estimate a trend as the error is generally expected to be unbiased. Again this ought to be true for HadCRUT4, but all this ends up meaning is that a trend difference is a serious problem in the consistency of the two estimators given that they must be linked by the ALR and the precision is adequate even month by month to make it well over 95% certain that they are not, not monthly and not on average.
If they grow any more, I would predict that the current mutter about the anomaly between the anomalies will grow to an absolute roar, and will not go away until the anomaly anomaly is resolved. The resolution process — if the gods are good to us — will involve a serious appraisal of the actual series of “corrections” to HadCRUT and GISSTEMP, reveal to the public eye that they have somehow always been warming ones, reveal the fact that UHI is ignored or computed to be negative, and with any luck find definitive evidence of specific thumbs placed on these important scales. HadCRUT5 might — just might — end up being corrected down by the ~0.3 C that has probably been added to it or erroneously computed in it over time.
rgb

M Courtney
Reply to  Werner Brozek
June 10, 2015 6:00 am

rgbatduke, brilliant post. Worthy of being made a main article, if….
It is true that GISS increases its warming trend to counter the increasing Urban Heat Island effect.
That cannot possibly be true. It makes no sense.
Are you sure of that?

Reply to  Werner Brozek
June 10, 2015 6:46 am

Good comments by rgb – thank you.
Also thanks to Werner Brozek, Richard Courtney and others – a worthwhile discussion.
For the record, in my January 2008 paper at http://icecap.us/images/uploads/CO2vsTMacRaeFig5b.xls
I used UAH5.2 and HadCrut3.
In Fig. 1 you can see an apparent warming bias in HadCrut3 vs. UAH5.2 of about 0.2C in three decades, or about 0.06-0.07C per decade.
A critic might suggest that warming bias is closer to 0.1C in three decades… whatever…

richardscourtney
Reply to  Werner Brozek
June 10, 2015 6:55 am

M Courtney
Yes, rgb is right about UHI adjustments.
But there are worse problems than that.
Temperature is an intensive property so it cannot be averaged according to the laws of physics. But temperature is averaged according to the laws of climate science, and those laws are problematic.
1.
There is no agreed definition of global average surface temperature anomaly (GASTA).
2.
Each team that produces values of GASTA uses its own definition of GASTA.
3.
Each team that produces values of GASTA alters its definition of GASTA most months and each such alteration changes its indications of past GASTA values.
4.
Whatever definition of GASTA is or could be adopted, there is no possibility of a calibration reference for measurements of GASTA.
I commend you to read this item especially its Appendix B.
Richard

Werner Brozek
Reply to  Werner Brozek
June 10, 2015 6:59 am

Thank you very much rgb!
I also agree that this should be elevated to be a main article.
(If it is, this typo in the second paragraph should be fixed: “This does not mean that the cannot and are not systematically differing”
they cannot)

rgbatduke
Reply to  Werner Brozek
June 10, 2015 10:02 am

That cannot possibly be true. It makes no sense.
Are you sure of that?

No. However, I’ve read a number of articles that are at least reasonably convincing that suggest that it is, some of them on WUWT or linked sites. One such article actually broke down the GISS UHI correction by station and showed that roughly a third of the UHI corrections decreased station temperatures, a third increased them, and a bunch in the middle showed no change. Sadly, I failed to link it and don’t remember who did it (but I have little reason to doubt what they stated). Indeed, articles openly published on the UHI effect by warmist sites directly state that it is the lack of any substantive change to the global anomaly upon applying the GISS correction that proves that UHI is a non-issue.
Other sites (again, referencing articles on WUWT in several cases by people who I believe to be both generally honest and reasonably competent) disagree:
http://www.greenworldtrust.org.uk/Science/Scientific/UHI.htm
or more recently:
http://wattsupwiththat.com/2014/03/14/record-daily-temperatures-and-uhi-in-the-usa/
Personally, I find it very suspicious that a correction that almost certainly should show some cooling effect ends up neutral when applied by GISS, and AFAIK HadCRUT just ignores UHI, and both warmist and skeptical sites seem to agree that the GISS correction is small, essentially neutral. NCDC disagrees, but claims that their (sometimes large) homogenization correction accounts for it and that it doesn’t explain CONUS warming over the last century. The WUWT article linked above (which admittedly focuses on the southeast) disagrees. So does the article on rural vs urban county warming in California. North Carolina (where I live) has been temperature neutral for over 100 years, with no significant trend.
That’s the odd thing about this “Global” warming. It’s not terribly global, even over a century of data, when you look at unadjusted data or data maintained by people who don’t use a model to turn it into a global product.
So no, I’m not certain, and yes, it is a shame because I’d love to be able to trust a global temperature record so that one could compare model predictions or forecasts to an objective standard. However, direct quotes of e.g. Phil Jones in climategate letters and Hansen’s obvious personal bias make it pretty clear that HadCRUT and GISS — their personal babies — have been built and maintained not by objective scientists but by strongly politically partisan scientists who have proven directly obstructive (in the case of Jones) when people interested in checking their data and methods attempted to do so. Yet simple red flags such as the obvious bias in corrections to the major temperature products over many version changes go ignored. If we are to believe them, all of the measurements made before 1950 or so caused daily temperatures to be overestimated, all of the measurements made after 1950 including contemporary ones (go figure!) cause them to be underestimated. Obviously, 1950 was a very good year for buying thermometers. Wish I had me one of them vintage 1950 mercury babies… then I’d know the temperature outside…
This is, to put it simply, extremely implausible. In particular it is difficult to understand how temperatures over the last three decades could possibly require additional upwards correction — from the 50’s or 60’s on (post world war II) weather reporting and weather stations have been built using proper equipment and paid, trained personnel and in much of that, electronic reporting that could not reasonably have a high or low systematic bias such as time of day.
The latest paper that is being skewered is simply the latest, clumsiest version of this. A major world meeting (G7) is going on to try to get agreement among the major world powers on global warming and the need to basically halt modern civilization in its tracks to prevent it at any or all human cost and at exactly the right moment a new paper appears that claims that the fact that there has been no statistically significant warming for at least 15 years (acknowledged even in AR5, which is not known for its objectivity on the issue) if not 18 to 20 years — basically the entire shelf life of the Assessment Reports themselves — is incorrect, that there really has been warming, and that the source of this warming is the underreporting of intake water temperatures on ships!
I’m still working on that one. Intake water drawn in by a ship has not been warmed by the engine in any ship that is under way. The water behind the ship might conceivably be warmed by the engine — if you sampled it at the precise spot where the cooling water was returned to the ocean and within seconds of doing so. After a few seconds, we have a simple mixing problem. Assuming cooling water with an exit temperature of 373 K (an overestimate) and ocean water at 273 K (an obvious underestimate) to get a 0.1 C temperature change in the latter we have to mix each liter of hot exhaust water with 1000 liters of ocean water. 1000 liters happens to be 1 cubic meter of ocean water. No matter how I mentally juggle the mixing of water in the wake of a major boat underway or during the laminar flow of water up from a forward intake to even a badly located temperature sensor (one e.g. sitting right on top of the engine itself) I don’t see much chance of a major warming relative to the true intake temperature (which is true SST). I absolutely don’t see how engine exhaust heat returned as water that is immediately mixed by the wake behind the boat is going to affect intake water temperatures in front of the wake, or for that matter behind the boat after the boat has travelled as little as a single meter. I’d want to see direct evidence, and I do mean direct, experiments performed on an ensemble of randomly selected actual ships underway on randomly selected parts of the ocean with reliable instruments in the sea in front of the intake valves compared to the intake temperature, before I’d even imagine such a correction to be globally accurate.
Worse, this paper argues that ARGO buoys are not measuring SST correctly, but corrected ship intake measurements do! Really? So we spent a gazillion dollars on ARGO to get systematically biased results? My recollection of ARGO is that it was motivated largely because ship-based data is highly unreliable and becomes more unreliable the further you go into the past until it is based on sampling buckets of water drawn up by hand using ordinary thermometers held up in the wind to take a reading by indifferent skippers along 19th century trade routes (giving us no samples at all in most of the ocean).
So what are the odds that this paper is a) apolitical; b) objective; c) timed by pure chance? I’d have to say “zero”. And we haven’t even gotten to the next actual climate summit. Expect yet another “adjustment” of the present and its nasty old inaccurate thermometers just in time to be able to claim that the hiatus is over.
The only problem is, these adjustments are going to serve no useful purpose but to increase the already glaring gap between surface temperatures and RSS/UAH (and the latest creates a new, bigger gap between it and even far-from-objective HadCRUT and GISSTemp, to add to their gap with RSS). At some point the scientists involved will have no choice but to address this gap, because there are simply too many ways to directly measure and challenge the individual temperatures at grid points to build the averages.
Also, the public dislikes being lied to or “fooled”. If the latest paper is torn to shreds in short order even by those that believe in global warming — as I think is not unlikely, as they are going head to head with all of the people with an interest in ARGO, so there is actual money and academic reputation on the line across a broad patch of scientists here who have used ARGO data and Hadley data at face value, all now being challenged — it is not at all unlikely that it will backfire.
I can hardly wait for September. Will we see HadCRUT4 and GISS magically sprout a 0.2 C jump in August (with RSS remaining stubbornly nearly unchanged)? Will ENSO save the day for warmists everywhere and convince China and India to remain energy poor for the rest of eternity, convince the US congress to commit economic hari kiri? Will past-peak solar activity start to have an impact through as-yet unverified means? Stay tuned, folks, and get out that popcorn!
rgb

rgbatduke
Reply to  Werner Brozek
June 10, 2015 10:32 am

It’s hard to keep up with this mini-thread, but:

1.
There is no agreed definition of global average surface temperature anomaly (GASTA).
2.
Each team that produces values of GASTA uses its own definition of GASTA.
3.
Each team that produces values of GASTA alters its definition of GASTA most months and each such alteration changes its indications of past GASTA values.
4.
Whatever definition of GASTA is or could be adopted, there is no possibility of a calibration reference for measurements of GASTA.

Sadly, Richard, I just can’t get started properly on GASTA itself, but I feel your pain. The assertion that is implicit in GASTA is that we know the anomaly as a function of position and time worldwide all the way back into the indefinite past and across multiple changes of equipment and local environment to a precision ten times the precision we know the actual average temperature produced by the exact same thermometers. Thermometers, as you point out, are forced to maintain a constant reference. It would be embarrassing if a thermometer maker sold thermometers that showed water boiling at 110 C or water freezing at -5 C, so nearly all secular thermometers are very likely to be quality controlled across those points.
Then we get to a miracle of modern statistics. The assertion is basically that we know:
$\Delta T_i = \frac{1}{M}\sum_{j} T_{ij} - \frac{1}{NM}\sum_{i,j} T_{ij}$ more accurately than we know $\frac{1}{M}\sum_{j} T_{ij}$.
In words, this means that we know the local temperature anomaly at some site computed as the difference between its recorded temperature and a reference temperature computed using some base period. Somehow this number, averaged over the entire surface of the planet, is more accurate than the direct estimate of the average temperature. I’m still trying to figure that one out. I’d very much like somebody to show me an independent, identically distributed set of “random” data, generated numerically from a common distribution, where any similarly defined anomaly in the data is known more accurately than the simple average of the data. Ordinarily I’d say that the sums are linear, the error in the reference temperature has to be added to the overall error obtained by any reaveraging of the monthly local data, and this difference will always be less accurate than the simple average.
But I could be mistaken about this. Stats is hard, and I’m not expert on every single aspect of it. I just have never seen a theorem supporting this in any stats textbook I’ve looked at, and it makes no sense when I try to derive it myself. But perhaps there is a set of assumptions that can justify it. I’d just like to learn what they are supposed to be.
In the meantime, we know global average surface temperature today no more accurately than 1 C (according to NASA) but we know the anomaly in the year 1850 to within around 0.3 C (according to HadCRUT4), which is also only around twice as imprecise as our knowledge of the anomaly today (according to HadCRUT4).
I have to say that I am very, very tempted to just state that this has to be incorrect in some really humongous, unavoidable way. I can see no conceivable way that our knowledge of the global anomaly in 1850 could possibly be only twice as imprecise as our knowledge today. If a Ph.D. student made such an assertion, I’d bitch-slap them and tell them to go try again, or be prepared to have their data and stats gone over with a fine-toothed comb. But it just sits there, on Hadley’s website, right there in their data file containing HadCRUT4, unchallenged.
Madness. Possibly mine, of course. But I’m willing to be taught and corrected, if anybody can direct me to a place where this sort of thing is derived as being valid.
rgb
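As a purely illustrative aside, here is a small synthetic-data sketch of the quantities in the reconstructed formula above. It only demonstrates the definitions and makes no claim about which quantity is known more precisely; all the numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 360                                             # hypothetical station count and months
climatology = rng.normal(15.0, 8.0, N)                     # each station's own mean level
T = climatology[:, None] + rng.normal(0.0, 1.0, (N, M))    # T_ij

station_mean = T.mean(axis=1)                              # (1/M) * sum_j T_ij
grand_mean = T.mean()                                      # (1/(N*M)) * sum_ij T_ij
delta_T = station_mean - grand_mean                        # Delta T_i as written above

anomaly = T - station_mean[:, None]                        # the usual per-station monthly anomaly
global_anomaly = anomaly.mean(axis=0)                      # global mean anomaly, month by month
print(delta_T[:3])
print(global_anomaly[:3])
```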

george e. smith
Reply to  Werner Brozek
June 12, 2015 2:36 pm

Converting one to the other means making up numbers that were never a part of any observation anywhere by anybody. Sorta like scotch mist!
g

Reg Nelson
June 9, 2015 7:43 am

The main purpose of Cowtan and Way was to refute the pause. They did this by infilling data at the poles, where there are few records, with what they thought the data should be. And voila! Pesky pause removed.

Werner Brozek
Reply to  Reg Nelson
June 9, 2015 7:57 am

I am curious what their analysis would show with the new UAH6.0 that is very close to RSS now. Would the pause still be there?

harrytwinotter
Reply to  Reg Nelson
June 9, 2015 8:27 am

Reg Nelson,
I don’t see any problem with a study that comes up with an estimate for unsampled regions. From memory that is why HadCRUT4 and GISTEMP differ as well.
I keep saying the IPCC never referred to the slowdown as a ‘pause’, they called it a hiatus. But I find the new found trust in the various global average temperature increase encouraging, far better than outright denial.

MarkW
Reply to  Reg Nelson
June 9, 2015 8:32 am

There is always a problem with estimating missing data.
Among other things it dramatically increases your error bars.
The other is when the estimation is inherently invalid, such as taking land stations and stretching them across an ice covered ocean.

firetoice2014
Reply to  Reg Nelson
June 9, 2015 8:36 am

Data that do not exist cannot be created from “whole cloth” and “infilled”. No measurement=no data.

Werner Brozek
Reply to  Reg Nelson
June 9, 2015 9:01 am

But I find the new found trust in the various global average temperature increase encouraging

It seems to me that most people here trust the satellites that show no temperature increase.

Reply to  Reg Nelson
June 9, 2015 9:23 am

Werner Brozek,
Correct. Satellite data is the most accurate.
That said, it should be noted that no dataset agrees completely with any others. There are reasons for those slight discrepancies. In the past, RSS and UAH have diverged by a tenth of a degree or so. But UAH 6 is almost identical now with RSS.
What is more important than a specific temperature point is the trend. For the past 18 ½ years there has been no trend in global T. Global warming has stopped.
It may resume. Or not. Or the planet could cool. But readers should remember that the change in T over the past century is only about 0.7ºC. Geologically speaking, that is nothing. That is as close to no change as you can find in the temperature record.
Just prior to our current Holocene, temperatures changed by TENS of degrees — and within only a decade or two. That is scary; 0.7ºC is not. In fact, it is hard to find a century in the temperature record that has been as benign as the past century. Current global temperatures could hardly be any better.
Climate alarmists are desperately looking for something to show that there is a problem. So far, they have failed.

Werner Brozek
Reply to  Reg Nelson
June 9, 2015 9:46 am

That said, it should be noted that no dataset agrees completely with any others.

True. For one thing, UAH goes to 85 S but if I am not mistaken, RSS only goes to 70 S. However there is good agreement despite this. Presumably 70 S to 85 S is not much different than the rest of the earth.

The Ghost Of Big Jim Cooley
Reply to  Reg Nelson
June 9, 2015 12:57 pm

Werner, this is my problem with some contributors here. I am deeply suspicious of satellite data, as it doesn’t directly record temperature. Surface temperature datasets are dubious for known reasons, but I am uneasy about satellite data being taken as ‘gospel’ on here by authors of articles, and contributors to those posts. Believing what you want to because it reinforces your belief is dangerous to your objectivity. We need 1,000 (worldwide) set of pylons, 100 metres up (away from influence) containing a Stevenson screen – and direct temperatures taken without any adjustments, run by solar cells, and the info beamed across the internet 24 hours a day. Imagine, no more fiddling, and no more arguing!

Werner Brozek
Reply to  Reg Nelson
June 9, 2015 1:23 pm

I am deeply suspicious of satellite data, as it doesn’t directly record temperature.

In the same way, does a mercury thermometer really record temperature or does it record the expansion of mercury in a glass tube that has certain markings on it? Different physical attributes can easily be quantified to represent temperature differences. I do not share this suspicion of yours although the mercury thermometer seems more straightforward to me. Nothing here is perfect, but we do the best we can with what we have.

Anto
Reply to  Reg Nelson
June 9, 2015 7:12 pm

TGOBJC,
What’s important for the purposes of the pause argument is not whether satellites measure the absolute temperature accurately, but whether or not they measure the changes in temperature from their previous measurements accurately. I would argue that they do so sufficiently accurately to establish that the rate of change for the past 18 years has been near enough to nil.
Additionally, any adjustments are made in a transparent manner and for transparent reasons, based on sound calculations. Compare that with the vast problems of ground and sea-based temperature measurements (calibration, siting, site changes, missing data) and the subsequent opaque and in many cases, downright questionable adjustments (TOBA, site changes, in-filling missing data, homogenising “nearby” stations, etc.). The result is such a vastly altered record as to massively reduce the confidence in even the comparability of previous readings with present readings, such that you may as well be comparing early 20th century apples with 21st century pork sausages.

Reply to  Reg Nelson
June 10, 2015 7:15 pm

TGOBJC, as I recall, the satellite readings of the lower troposphere (LT) are verified by balloon measurements of the LT, at least within measurement uncertainty.

george e. smith
Reply to  Reg Nelson
June 12, 2015 2:52 pm

If you buy one of the new 4K ultra-high definition television sets (from almost any vendor) you face the problem that there is virtually no interesting 4K software to show you.
Nobody broadcast the recent French Open Tennis event (way to go Serena) in 4K TV broadcasts; so HD is about all you can get (I don’t even pay to get any of that on my 26 inch TV).
BUT !! every one of those fancy sets can “upconvert” or “back fill” faux hi res interpolations between the HD data. Maybe they use Dr. Roy Spencer’s third order polynomial “just for amusement” algorithm.
Bottom line is that those 4K up conversions from HD create astonishingly good looking TV images, which are very pleasant to watch.
But of course they are quite fake pictures. Well the Louvre is full of “fake pictures”.
No the paintings aren’t forgeries; but they are some artist’s representation of a false reality, which many of us find quite wonderful in many ways.
I dare say, that all of these up conversions and backfilling of real data with false rubbish also creates beautiful pictures that some find pleasant to view.
But beautiful or not; they are still fake pictures, and really don’t add anything different from “an artist’s impression” of reality.
So we should not be taken in by fake 4K climate data; it isn’t an image of reality.
g

Alx
June 9, 2015 8:36 am

Well they are measuring different things and in different ways as well.
So if that’s the case, “global temperature” seems to be a mythical creature like a Unicorn, something that does not exist in reality or in any empirical sense.
Taking temperature readings from the center, four corners, and various elevations in my home and factoring out elevation, distance to stove, where most people congregate, heating elements, sunny days which heat upper elevations, etc. to determine an annual “Home Temperature” cannot be considered a temperature reading. It is a concept of “Home Temperature” based on measurements, which in the case of homes is useless and I wonder in terms of the earth if it is equally useless. Yeah I know if “Global Temperature” increases by x amount everything bad that can possibly go bad in the world will go bad. Unfortunately there is about as much evidence for that as evidence for the Rapture.
Regardless, the problem is the term “temperature” suggests an empirical measurement (like heating an oven to 350 degrees) when “Global Temperature” is not an empirical measurement at all. It is a stew of measurements, calculations and theory, whose ingredients and seasoning are based entirely on the whims of the cook.

harrytwinotter
June 9, 2015 8:43 am

Alx,
I am surprised you know the difference between a hot day and a cold day, considering you do not trust thermometers.

MarkW
June 9, 2015 10:31 am

Harry, did you take lessons in missing the point, or are you just naturally talented.
He said nothing about not trusting thermometers, he said that taking a handful of readings does not equate to knowing the “temperature” of the whole house.

June 9, 2015 2:10 pm

Surface station average temperature is the average of Tmin and Tmax, which is, in all honesty, temporally extremely sparse: only 2 measurements per day, where the satellite takes measurements on 16 orbits per day. Surface station temperatures are also both spatially sparse and irregular, resulting in the need for infilling and its problems; satellite measurements, while not perfect, are at least regular and produce something much closer to what most people would think of when they think of an average.
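A toy illustration of that sampling point, using a made-up, deliberately skewed diurnal cycle: the (Tmin+Tmax)/2 estimate and the densely sampled daily mean need not agree.

```python
import numpy as np

hours = np.linspace(0, 24, 24 * 60, endpoint=False)       # one sample per minute
w = 2 * np.pi * (hours - 9) / 24
diurnal = 15 + 5 * np.sin(w) + 2 * np.cos(2 * w)           # skewed, two-harmonic diurnal cycle

true_mean = diurnal.mean()                                 # dense, satellite-like sampling
min_max_mean = (diurnal.min() + diurnal.max()) / 2         # classic Tmin/Tmax station average
print(round(true_mean, 2), round(min_max_mean, 2))         # 15.0 vs roughly 13.3
```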

george e. smith
June 12, 2015 3:06 pm

The global Temperature is 288 K or 15 deg. C or 59 deg. F Kevin Trenberth et al says so, and it never changes any time or anywhere.
That should be compared to the climate Temperature (local phenomenon) from various places on earth where the reading can be from about -94 deg. C and about + 60 deg. C , and due to an argument by Galileo Galilei , there are an infinite number of places all over the earth, where you will be able to find ANY Temperature in that entire range.
No that does not work; you cannot have two different Temperatures for any single place at any single time, only one Temperature per instance please.
But given that for the real world, a one deg. C change in 150 years, at some place that nobody knows where it was measured, is not something to even mention; let alone worry about.
g

Werner Brozek
June 12, 2015 3:38 pm

The global Temperature is 288 K or 15 deg. C or 59 deg. F Kevin Trenberth et al says so, and it never changes any time or anywhere.

That is the average. It actually changes by 3.8 C between January and June. See:
http://theinconvenientskeptic.com/2013/03/misunderstanding-of-the-global-temperature-anomaly/

Jquip
June 9, 2015 8:45 am

Hear, hear! It’s obvious to everyone that the heat is hiding in the deep atmosphere. Due to the very slow circulation it will reappear some 500 to 1000 years from now.

JP
June 9, 2015 9:10 am

Cowtan and Way used a questionable method known as kriging to infill the high latitudes, and voila, the hiatus was gone. It’s kind of the same game GISS plays – most of GISS’s warming occurs where there are no reporting stations – their grid squares use extrapolated data. Remove the estimated data and you remove almost all of the “warming”.

rgbatduke
June 10, 2015 6:08 am

Kriging per se isn’t necessarily questionable — it is just error prone. If you have to krige, it cannot “produce” data, it’s just a way of smoothing over missing regions not unlike a cubic spline or other smooth interpolation. The questionable part is strictly the way it affects precision. Kriged data cannot reduce error estimates as if it is an independent and identically distributed sample from some distribution. It by its nature smooths out any peaks or valleys in the kriged region, and is perfectly happy to make systematic errors.
For example, if one kriged missing data in Antarctica using sea surface temperatures from a ring around the continent, one would make a really huge error, would one not? Because the SSTs are all going to be in the vicinity of 0 C, causing a krige of 0 C across Antarctica in winter, which is a wee bit of a joke. Nor can one place a single weather station at the south pole and krige between it and the surrounding ring — this would produce a smooth temperature field out to the warm boundary and is again a joke — the temperature variation between the sea and the land is enormous and rapid. Similar considerations involve the Arctic — kriging either way across different surface and (especially!) across the arctic circulation and local phenomena that strongly affect local temperatures is going to smooth away reality and replace it with a number, but one that has to have a huge error bar added to it that contributes just as much error to the final error estimate as having no data there at all because you can’t squeeze statistical blood from a stone! Missing information is missing.
rgb
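
The Antarctic example above can be mimicked with the crudest possible stand-in for kriging, plain linear interpolation across an unobserved gap. The numbers below are invented and this is not a real kriging implementation; it is only a sketch of how infilling from a warm boundary erases a cold interior:

```python
import numpy as np

# Hypothetical 1-D transect: ~0 C "sea surface" values at both ends, a cold interior
# that we pretend was never observed.
x = np.arange(11.0)
truth = np.array([0, -8, -20, -32, -40, -45, -40, -32, -20, -8, 0], dtype=float)

observed = truth.copy()
observed[1:-1] = np.nan                      # only the two boundary points are "measured"
known = ~np.isnan(observed)

# Stand-in for kriging: fill the gap from the observed boundary values.
filled = np.interp(x, x[known], observed[known])

print("truth :", truth)
print("filled:", filled)                     # flat 0 C everywhere; the cold interior is gone
print("largest error introduced:", np.abs(filled - truth).max(), "C")
```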

Chris Schoneveld
June 10, 2015 4:01 am

See: John Christy et al., "Differential Trends in Tropical Sea Surface and Atmospheric Temperatures since 1979", Geophysical Research Letters, Vol. 28, January 2001.

george e. smith
Reply to  Chris Schoneveld
June 12, 2015 3:14 pm

“””””…..Missing information is missing.
rgb…..”””””
Well Robert, I Kringe every time I think of Kriging.
I would venture (lacking a handy rigorous proof) that Kriging can only be tolerated, if NONE of the “Kriged” faux data points causes the Kriged signal to suddenly acquire faux signal component frequencies that are above the Nyquist bandwidth limit corresponding to the pre-Kriged sampled data.
As I have (in effect) asserted elsewhere, "Kriged pictures look prettier; but they are still false representations of reality."
g

June 9, 2015 6:28 am

James….use Ad Block Plus

Stevan Makarevich
June 9, 2015 8:52 am

Thank you VERY much! WUWT ads aren't too bad because they allow you to skip them, but I absolutely detest some other sites, such as YouTube and my local news site, because of either having to wait for complete ads to play or the stupid pop-ups – I've yet to see one of these that made me think "Hey – I need to buy that!".
Anyway, I’ve installed Ad Block Plus and have so far had a most enjoyable morning, ad free!

David A
June 9, 2015 6:30 am

UHI, plus data fabrication of surface records is, IMV, causing the divergence. There is significant evidence for this.

patrioticduo
June 9, 2015 6:33 am

Nowhere in any of this is there any sign of an increase in acceleration of warming which is the basic premise of “human produced CO2 will cause a run away global warming catastrophe”. No acceleration of warming means the theory of man made climate destruction is wrong.

June 9, 2015 7:38 am

If accelerated warming occurred without the predicted spike in water vapor, wouldn’t that also disprove the AGW hypothesis?

Randy
Reply to  Jean Parisot
June 9, 2015 11:49 am

You certainly couldn't get to the more alarming end of the claims without the feedbacks that were to cause 2/3 of the warming.

Werner Brozek
June 9, 2015 7:52 am

Nowhere in any of this is there any sign of an increase in acceleration of warming

Even NOAA did not attempt to show an acceleration in warming. I do not agree with their analysis, but they were happy to show no hiatus by comparing 1950 to 1999 with 2000 to 2014. In my opinion, they should have compared 2000 to 2014 with 1975 to 1998.

Walt D.
June 9, 2015 6:34 am

The problem I see with this data analysis is that it ignores seasonality. It would seem that a better approach would be to take the average of the differences 12 months apart. In other words, average [t(i+12) - t(i)].
This way you are comparing January with January and February with February etc.
Most of the parametric statistical tests used are invalid. However, with the suggested method, you can easily do a non-parametric test on the signs of these differences.
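
A minimal sketch of the suggestion above, using a made-up monthly series and a simple sign test (normal approximation) as the non-parametric check on the year-over-year differences:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly series: a weak trend plus a seasonal cycle plus noise.
months = np.arange(240)
series = 0.001 * months + 3.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.1, months.size)

# Difference each month against the same calendar month one year earlier,
# so January is compared with January, February with February, and so on.
yoy = series[12:] - series[:-12]

# Non-parametric sign test on the signs of those differences (normal approximation).
n_pos = int(np.sum(yoy > 0))
n = int(np.sum(yoy != 0))
z = (n_pos - n / 2) / math.sqrt(n / 4)
p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"mean year-over-year change : {yoy.mean():+.4f}")
print(f"positive differences       : {n_pos}/{n}")
print(f"sign-test p-value (approx) : {p_two_sided:.3f}")
```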

Henry Galt
Reply to  Walt D.
June 9, 2015 6:47 am

Good point Walt D. It would be … interesting to see annual ‘anomalies’ if years started in different months. Does 2014 ‘break a record’ if taken from March 1st to March 1st 2015? Shifting the ‘year’ by two months (or more, or less, forward or backwards) would expose the lottery we are basing ‘our’ treasure-shift upon.

Werner Brozek
Reply to  Henry Galt
June 9, 2015 7:23 am

It would be … interesting to see annual ‘anomalies’ if years started in different months.

Yes, shifting to different 12 month intervals can give slightly higher values than the January to December averages. If you want to clearly see this, plot what you want on WFT, then take a mean of 12.
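
A tiny sketch of the point about shifting the "year": the same made-up monthly anomalies give different 12-month averages depending on which month the window starts in (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical monthly anomalies for Jan 2013 - Dec 2015, with a warm stretch
# from mid-2014 through mid-2015 built in.
anoms = 0.40 + rng.normal(0, 0.08, 36)
anoms[18:30] += 0.15

windows = [(12, "Jan 2014 - Dec 2014"),   # the usual calendar year
           (14, "Mar 2014 - Feb 2015"),
           (17, "Jun 2014 - May 2015")]
for start, label in windows:
    print(f"{label}: 12-month mean = {anoms[start:start + 12].mean():.3f}")
```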

harrytwinotter
Reply to  Walt D.
June 9, 2015 6:51 am

Walt D.
I agree, it seems to make sense to annualise the temperature data. The seasons are a pretty obvious cycle.

David A
June 11, 2015 6:05 am

Yet globally all seasons are occurring all the time. However the difference between the global mean of each season is informative. In January the earth receives roughly 90 watts per square meter more insolation than in July, yet the atmosphere cools on average.

george e. smith
June 12, 2015 3:20 pm

If I’m not mistaken, Willis Eschenbach has searched in vain for a “seasonal” signature in “Global Temperature”, and so far found none, despite some quite intriguing methodologies.
Temperature change is not a global phenomenon; it is entirely local.
g

Werner Brozek
Reply to  Walt D.
June 9, 2015 7:17 am

The problem I see with this data analysis is that it ignores seasonality.

By using anomalies, we automatically compare Januaries with Januaries, etc. If we did not use anomalies, then all Januaries would be 3.8 C colder than all Julies on the average.
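
A small sketch of why anomalies remove the seasonal cycle automatically: each month is referenced to the average of the same calendar month over a base period (synthetic absolute temperatures, with the July-minus-January difference set to 3.8 C to match the figure above):

```python
import numpy as np

rng = np.random.default_rng(2)
years, base_years = 30, 10

# Hypothetical global absolute temperatures (C): warmest in July, coldest in January.
seasonal = 14.0 + 1.9 * np.cos(2 * np.pi * (np.arange(12) - 6) / 12)
absolute = np.tile(seasonal, years) + rng.normal(0, 0.1, 12 * years)

# Anomaly = departure from that calendar month's mean over the base period,
# so every January is compared with the average January, and so on.
baseline = absolute[:12 * base_years].reshape(base_years, 12).mean(axis=0)
anomaly = absolute - np.tile(baseline, years)

print("July minus January, absolute temps:", round(seasonal[6] - seasonal[0], 2), "C")
print("July minus January, anomalies     :", round(anomaly[6::12].mean() - anomaly[0::12].mean(), 2), "C")
```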

Gary Pearse
Reply to  Walt D.
June 9, 2015 7:40 am

When you get down to such small fractions of a degree, they are meaningless relative to each other. I grant you, you may discover different seasons are warming or cooling at different rates if the size of the units permits definitive stats. Since we are examining global warming and the wiggles are in the noise, it's best to do it the way it's done here. We already have too much distraction with statistically insignificant data in climate science.

Mary Brown
Reply to  Gary Pearse
June 9, 2015 8:04 am

Yes, when you take a step back, warming rates of .01 deg C a year in the AGW era are insanely small. Yet, somehow, we are being told it will do all of this…
http://whatreallyhappened.com/WRHARTICLES/globalwarming2.html

Werner Brozek
Reply to  Gary Pearse
June 9, 2015 9:06 am

I wonder how many of those bad things are contradictions of each other.

Eliza
June 9, 2015 6:42 am

This is the kind of posting that makes the Team shiver with fright and impending trials LOL. Keep it up. Tony Heller does it every day.

Werner Brozek
June 9, 2015 6:58 am

UAH Update: With the May anomaly of 0.27, the 5 month average is 0.18 and this would rank in 6th place if it stayed this way.
Hadsst3 Update: With the May anomaly of 0.593, the 5 month average is 0.484 and this would set a new record if it stayed this way.

Scottish Sceptic
June 9, 2015 7:08 am

it's what they call "the Paris effect": the last-ditch attempt by the numbskulls to try to con everyone that their modified data "proves" the world is/was/is going/will be going to/certainly will be going to … warm by a fraction of a degree.

Sun Spot
June 9, 2015 7:11 am

How many climate change angels are dancing on the head of the dataset pin (you ask)? The answer is: the cAGW CO2 hypothesis says we should be seeing accelerated catastrophic warming; all observations and all data sets show no such warming rates; so the cAGW hypothesis/theory is dead wrong (those angels aren't dancing).

rbabcock
June 9, 2015 7:20 am

“With the May anomaly of 0.593”
When will people please stop printing temperatures to the thousandth of a degree? Is it supposed to make it look more accurate? Why not release .59325439777 as the absolute temperature anomaly? Now that would make me believe.

MarkW
June 9, 2015 8:38 am

With the land/sea based measuring system, claiming an accuracy of 1C is way too optimistic.

firetoice2014
June 9, 2015 8:44 am

Anomalies reported to resolutions higher than the instrument resolution are a joke. Three decimal place resolution for “adjustments” is hysterical.

June 9, 2015 7:26 am

None of the land data sets are worth a plug nickel. The extent to which they have been manipulated in one form or another renders them absolutely useless. Administrative adjustments, as Dr. Ole Humlum has shown, produce half of the “warming” we have seen.

Reply to  Alan Poirier
June 9, 2015 7:38 am

And maybe ocean temps as well. Attempting to resolve incomplete and varying types of suspicious (can’t think of the right word) data with adjustments seems to be stretching for a result that can be regarded as scientific.

June 9, 2015 8:29 am

“questionable”, I think is the word you were looking for.

menicholas
June 9, 2015 8:50 am

Although for color I like the ” not worth a plug nickel” line.
Then again, they are only worthless from the perspective of doing objective scientific analysis.
For the purpose of political posturing, or for obtaining fat grants, or for attempting to prove a false hypothesis, they seem to have some value.

urederra
Reply to  Alan Poirier
June 9, 2015 8:34 am

I can’t believe they are already working with HadCRUT4.3. Didn’t they move from HadCRUT3 to HadCRUT4 five years ago? Did they already modify the set twice since then? Or three times?

Werner Brozek
June 9, 2015 9:31 am

Did they already modify the set twice since then? Or three times?

There have been several updates. And guess in which direction the latest numbers went? See:

Paul
Reply to  Alan Poirier
June 9, 2015 8:44 am

“…renders them absolutely useless”
Useless? They’re absolutely priceless!
Look at all of the “hottest whatever since whenever” press.

Neil Jordan
June 9, 2015 7:31 am

ClimSci(TM) might be interested in this webinar:
http://www.foresteruniversity.net/webinar-surviving-media-crisis.html
“Surviving a Media Crisis
“At some point we all find ourselves in the hot seat. And if you’re a public figure, that hot seat is likely in front of a media firing squad. Are you prepared to answer the tough questions? Are you ready for the media’s tricks? Do you have a plan in hand to manage your reputation? You should.”

June 9, 2015 7:34 am

It’s pretty obvious why these are diverging.
People whose jobs depend on evidence of temperature increasing are "adjusting" GISS and HADCRUT, particularly GISS. When your temperature readings represent a small fraction of 1% of the earth's surface and you are compelled to interpolate, extrapolate, adjust for UHI, and otherwise torture the data, you can get any answer that Obama wants. Heck, we're only talking variations of around 1C for the whole shebang, so it's easy to fiddle around and shave a tenth or two off here and there. Then all you have to do is withhold raw data and methods and who can challenge the result? Is anyone really surprised?
The Russians have a saying: The future is certain. It’s the past that keeps changing.
It’s far harder to muck with RSS or UAH and keep the tampering off the ‘radar’ so to speak. Also, RSS and particularly UAH are run by dirty dog skeptics (sorry Dr. Roy, you know I love you) so they are continuously under the microscope. Actually, it’s hard to imagine that our Imperious Leader has tolerated RSS and UAH divergence for so long, and one would expect a “purge of the unfaithful” to bring these results more in line with the thermometer measurements.
But hey, color me cynical.

Climate Pete
June 9, 2015 9:02 am

The real difference between the surface temperature data sets and the satellite data sets is that the algorithms for calculating adjustments to the surface temperature data sets and the subsequent trends are publicly available. However, the algorithms for UAH are not released.
Not that you can blame the owners of this data set. The longest available UAH trends at each point in time have had to be raised time and time again as more errors are found in the data set, as a result of errors in the UAH calculation which have had to be identified by independent reviewers the hard way – without having access to the code.
After all, why give those wishing to independently check the UAH results any more assistance than you absolutely need to?

Werner Brozek
Reply to  Climate Pete
June 9, 2015 9:21 am

After all, why give those wishing to independently check the UAH results any more assistance than you absolutely need to?

Have you seen this very extensive article:
If it does not have what you are looking for, ask them for what it is that you need.

Tim Wells
June 9, 2015 10:31 am

And just what is the Ministry of Truth doing while RSS and UAH perpetuate these thought crimes?

Leonard Lane
Reply to  Tim Wells
June 10, 2015 10:57 pm

The Ministry of Truth is too busy re-writing history and public property (temperature data) to notice that RSS and UAH are committing the crime of non-corruption of public property. When they notice (or perhaps can grasp the concepts) then the just will be punished and the data adjusted.

Editor
June 9, 2015 7:46 am

For those who argue that surface and satellites don’t necessarily follow each other, remember what the Met Office had to say in 2013, when discussing their HADCRUT sets:
"Changes in temperature observed in surface data records are corroborated by records of temperatures in the troposphere recorded by satellites"
https://notalotofpeopleknowthat.wordpress.com/2015/01/17/met-office-say-surface-temperatures-should-agree-with-satellites/
Well, except when they’re not!

Werner Brozek
Reply to  Paul Homewood
June 9, 2015 8:12 am

Thank you! At what point will scientists wake up and smell the coffee?

harrytwinotter
Reply to  Paul Homewood
June 9, 2015 9:05 pm

Paul Homewood.
The only reference I can find says this:
“Changes in temperature observed in surface data records are corroborated by measurements of temperatures below the surface of the ocean, by records of temperatures in the troposphere recorded by satellites and weather balloons, in independent records of air temperatures measured over the oceans and by records of sea-surface temperatures measured by satellites.”
In context he means the changes are “corroborated”. He doesn’t say they follow each other. So I am calling a straw man argument on this one.

June 9, 2015 7:47 am

“Why are the new satellite and ground data sets going in opposite directions? ”
Don’t both GISS and HadCRUT get (at least some of) their input from NCDC?

Mary Brown
June 9, 2015 7:48 am

How accurate is the measurement of global temperature? How about global ocean temperatures?
I haven’t seen a good article that addresses this based on real data…the monthly disparities in the different data sets.
Here is one way of looking at it. I took the Wood For Trees Index (WTI) components (RSS, UAH, HADCRUT, GISS), then I computed how much each month's change was different from the mean monthly change.
Here are some results. First, the WTI had a monthly standard deviation of 0.09 deg C. So, that means there is about 5% chance that the monthly temp will change more than 0.18 deg.
UAH was most correlated to the mean each month (84%). Hadcrut was lowest (66%).
Over the last 60 months, the four data sets were an average of 0.06 degrees away from the mean of the four. In other words, the different data sets disagree by an average of 0.06 a month. If they were perfect, they wouldn’t disagree at all. (well, they would a bit because they do measure slightly different things).
So, seems to me that a rough measure of the measurement accuracy is about 0.06 deg for global temp. That’s a ballpark but I think it’s reasonable.
Ocean Temps…
I have read that the ARGO data measures the Ocean Temps with an error of 0.005°C. Considering the analysis above, I hope you are laughing at the absurdity of the ARGO accuracy claims. My gut says ocean error must be at least as high as air, but I could be wrong. But I’m sure it is vastly larger than 0.005 deg C.
This is all back of the envelope. I would love if people could add to this.
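
A sketch of the spread calculation described above, with random placeholder series standing in for the four datasets; the real exercise would require downloading the series and putting them on a common baseline first:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical stand-ins for 60 months of RSS, UAH, HadCRUT and GISS anomalies on a
# common baseline: the same underlying signal plus dataset-specific noise.
signal = 0.3 + 0.1 * np.sin(np.linspace(0, 4 * np.pi, 60))
datasets = {name: signal + rng.normal(0, 0.05, 60) for name in ("RSS", "UAH", "HadCRUT", "GISS")}

stacked = np.vstack(list(datasets.values()))
consensus = stacked.mean(axis=0)                  # monthly mean of the four (a WTI-like index)

# Average absolute departure of the datasets from their common mean.
print(f"average departure from the 4-dataset mean: {np.abs(stacked - consensus).mean():.3f} C")

# Correlation of each dataset's month-to-month changes with the consensus changes.
for name, series in datasets.items():
    r = np.corrcoef(np.diff(series), np.diff(consensus))[0, 1]
    print(f"{name:8s} correlation of monthly changes with the mean: {r:.2f}")
```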

Werner Brozek
Reply to  Mary Brown
June 9, 2015 8:22 am

I took the Wood For Trees Index (WTI)

Are you aware of the fact that the WTI is totally out of date? It uses Hadcrut3 which has not been updated since May 2014. So there is no WTI since then either. And with record high temperatures since then on certain data sets, that makes a huge difference. As well, it used UAH5.5 and of course does not use the new UAH6.0 which would also make a huge difference.

Mary Brown
Reply to  Werner Brozek
June 9, 2015 11:21 am

I maintain my own WTI data set and did the calcs from that

Werner Brozek
Reply to  Werner Brozek
June 9, 2015 12:11 pm

I maintain my own WTI data set and did the calcs from that

Is there anything you want to make available for the rest of us to use? For example, I would really love to plot the NOAA numbers on a WFT type format, but WFT does not have NOAA.

Mary Brown
Reply to  Werner Brozek
June 10, 2015 6:42 am

“Is there anything you want to make available for the rest of us to use? For example, I would really love to plot the NOAA numbers on a WFT type format, but WFT does not have NOAA.”
The stat guys here at work did this all “back of envelope” so the data would need to be double-checked. But there is nothing magical about WTI. It’s just an average of the four main data sets…adjusted to a baseline.
I like to use it because it eliminates the cherry picking and arguing over which is better… sat vs ground. It just uses them all. Although not perfect as the many comments here point out, it is a nice consensus statistical assessment of the relative warming of the planet.

Werner Brozek
Reply to  Werner Brozek
June 10, 2015 7:15 am

The stat guys here at work did this all “back of envelope” so the data would need to be double-checked.

Could you please send me the URL for what you have so I can double check it? Thanks!

Climate Pete
Reply to  Mary Brown
June 9, 2015 9:23 am

The Argo floats are calibrated before they are released. More to the point, the occasional float drifts ashore by chance. Here are the results of retesting the calibration of Argo floats after they have been in the water for a few years.
http://api.ning.com/files/p0ZiZvXgHQgJ3GvAOz4p06LWIb0*vjZa*BuLBDKPYxkvlpeSIaNQMBeEFKGlICP3jdJRtIX1ApcKaelqe7YMMlOVlV*2a559/ArgoRecoveredFloatTestResults.jpg
Note the American float had been in the water for 3 years before checking the calibration. And it was still accurate to three ten-thousandths of a degree C.
Apart from the sensor accuracy and repeatability itself, there is every reason why the Argo float temperature sensor readings should be much more accurate than surface temperature thermometers. The Argo float is at depth, well away from the warming influence of the sun. And it is stationary – drifting with the current at that depth. And lastly the water has intimate contact with the whole of the float assembly and has plenty of time to come to thermal equilibrium.
The complexity with the Argo floats is that there are under 4,000 of them to cover all earth’s oceans, though they do take readings at depths between 0 and 2000m, so you get good vertical coverage down to 2000m.
It just goes to show how wrong gut feel can be.

Reply to  Climate Pete
June 9, 2015 9:59 am
MarkW
Reply to  Climate Pete
June 9, 2015 10:39 am

climate pete:
Argo measures to 0.00001C???? That’s absurd.

MarkW
Reply to  Climate Pete
June 9, 2015 10:42 am

3600 floats to cover 3.6 * 10^8 km^2 of ocean. And you call that good coverage?
Your standards are amazingly flexible.

Mary Brown
Reply to  Climate Pete
June 9, 2015 11:52 am

I agree with much of this and get the fact that measuring the temp of water is in many ways easier than air. But there are a lot of other uncertainties. First, in the error chart above, all the errors for dT and dS and dp are in the same direction. The statistics on overall ARGO errors generally assume a normal shaped distribution of the errors. But the chart above shows that they can all be in the same direction, leading to bias far greater than a bell shaped estimate. Then there is the problem of having just one buoy for every 100,000 sq km and the drifting of measurements.
So getting a representative overall ocean heat content measure is significantly more imprecise than the estimated error rate of a few thermometers.

Billy Liar
Reply to  Climate Pete
June 9, 2015 1:28 pm

I think I'm going to buy an Argo float to use as a benchtop multimeter. It seems that Argo floats can outperform the best 7½ digit multimeter that money can buy by a large margin. The Keysight (formerly Agilent, formerly Hewlett-Packard) 34470A only manages to measure resistance (i.e. temperature, if using a platinum resistance thermometer) to an accuracy of 40 parts per million after an hour warm-up in an environment within 1°C of the calibration temperature, within 24 hours of calibration. After 2 years its accuracy is 170 parts per million after a 1 hour warm-up in an environment within 5°C of the calibration temperature.
To make measurements in a dynamic environment when far away from the calibration temperature makes Argo floats the best thermometers in the galaxy!
In reality, I suspect a lot of self-delusion is going on.

Doonman
Reply to  Climate Pete
June 9, 2015 11:38 pm

My gut feeling reading your chart is that the second float’s serial number is entered incorrectly. Call me superstitious if you like, but I doubt that Japan has released 2871011 argo buoys, so something is amiss.

Science or Fiction
Reply to  Climate Pete
June 13, 2015 6:55 am

I am still curious about the exceptionally low uncertainty you indicate for the individual temperature measurements in each ARGO float. However, you also have an unquantified statement about vertical coverage. I guess it took you a few seconds to write: "so you get good vertical coverage down to 2000m".
Sorry for not being able to debunk your claims at the same tremendous speed as you make them.
However here are some findings on the good vertical coverage:
“[52] In conclusion, our study has shown encouraging agreement, typically to within 0.5°C, between the Argo-based temperature field at 36°N and the corresponding field from a hydrographic section. Furthermore, the model analysis demonstrated that within the subtropical North Atlantic, sampling of the temperature field at the Argo resolution results in a level of uncertainty of around 10–20 Wm−2 at monthly timescales, falling to 7 Wm−2 at seasonal timescales. This is sufficiently small that it should allow investigations of variability in this region, on these timescales.”
On the accuracy of North Atlantic temperature and heat storage fields from Argo
Authors R. E. Hadfield, N. C. Wells, S. A. Josey, J. J-M. Hirschi
First published: 25 January 2007

Science or Fiction
Reply to  Climate Pete
June 14, 2015 12:27 am

1 ARGO float per 100,000 square km performing 1 measurement every ten days.
Even if the accuracy of each ARGO temperature sensor holds its specification of 0.005 °C, I think the uncertainty related to sparse sampling of a vast volume will dominate the uncertainty budget for the determination of the average temperature of selected depths down to 2000 meters.
I think it is misleading to draw attention to the exceptionally low uncertainty of the temperature sensor when it has such a low contribution to total uncertainty.
“Argo floats drift freely at a predetermined parking depth, which is typically 2000 decibars, rising up to the sea surface every ten days by changing their volume and buoyancy. During the ascent they measure temperature, salinity, and pressure with a conductivity-temperature- depth (CTD) sensor module. They send the observed temperature and salinity data, obtained at about 70 sampling depths, to satellites during their stay at the sea surface, and then return to the parking depth. The floats’ battery capacity is equivalent to more than 150 CTD profiles, which determines their lifetime of about four years. The accuracy requirement of the float measurements in Argo is 0.005°C for temperature and 0.01 practical salinity units (psu) for salinity. The temperature requirement is relatively easy to attain, while that for salinity is not easy, due to drift of the conductivity sensor.”
(Ref: Recalibration of temperature and conductivity sensors affixed on Argo floats” JAMSTEC Report of Research and Development, Volume 5, March 2007, 31–39″ Makito Yokota et al)
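
The back-of-envelope coverage arithmetic behind the sparse-sampling concern, using the figures quoted above (float count, 10-day profile cycle, roughly 70 levels per profile; ocean area rounded):

```python
# Rough Argo coverage arithmetic; all figures rounded, for illustration only.
ocean_area_km2 = 3.6e8          # global ocean surface area, roughly 3.6 * 10^8 km^2
floats = 3900                   # "under 4,000" floats
profile_interval_days = 10      # one profile per float every ~10 days
levels_per_profile = 70         # roughly 70 sampling depths per ascent

area_per_float = ocean_area_km2 / floats
profiles_per_month = floats * 30 / profile_interval_days

print(f"ocean area represented by one float : {area_per_float:,.0f} km^2")
print(f"profiles per month, globally        : {profiles_per_month:,.0f}")
print(f"temperature readings per month      : {profiles_per_month * levels_per_profile:,.0f}")
```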

Climate Pete
Reply to  Mary Brown
June 9, 2015 10:46 am

Argo shows heating between 2004 and 2012.
dbstealey said

ARGO shows cooling at most depths:

Argo floats take readings down to 2000m. All of dbstealey's Argo charts show layers from the surface down to particular depths, so the temperature of a higher layer is included in the average for the chart at the next depth down.
So what the charts show is a drop in temperature over 8 years of around 0.06 C for depths down to 20m. By the time you take averages for all layers down to 150m the trend is flat. The average of 0-200m shows an increase of 0.02 C. In other words, deeper layers are warming and are more than offsetting the cooling trend at the surface.
It is not the layer at 200m alone which shows warming. It is the average of all layers from the surface down to 200m which shows warming.
The assertion that ARGO shows cooling at most depths is not right. Although this particular set of charts only goes down to 200m (194.7m), a chart supplied by dbstealey shows that from 0 to 2000m the oceanic trend is 0.02 C / decade.
Although 2004 to 2012 is not quite a decade, the two trends (0-200m and 0-2000m) are pretty close (0.02 C over 8 years versus 0.02 C over 10 years).
The only possible conclusion is that dbstealey’s charts confirm Argo is showing warming around the 0.02 C / decade mark.
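
A worked toy example of the cumulative-layer point: a 0-200m average can show warming even while the top layer cools, provided deeper layers warm slightly. The layer changes below are invented, chosen only to mimic the pattern described above (cooling near the surface, roughly flat by 150m, about +0.02 C for 0-200m):

```python
import numpy as np

# Hypothetical 8-year temperature changes (C) for four equally thick layers.
layers = ["0-50m", "50-100m", "100-150m", "150-200m"]
change = np.array([-0.06, 0.00, 0.05, 0.09])

# With equal-thickness layers, the average from the surface down to the bottom of
# layer i is just the running mean of the layer changes.
for i, name in enumerate(layers):
    print(f"surface down to bottom of {name:9s}: average change {change[:i + 1].mean():+.3f} C")
```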

Reply to  Climate Pete
June 9, 2015 11:03 am

Are you kidding me? 0.02 C/decade. That is supposed to be statistically significant?

Jquip
Reply to  Climate Pete
June 9, 2015 11:50 am

I don't see how your response is valid. Certainly, if it is CO2 that is increasing, and this will keep the radiation near the surface, then we cannot state that the consequence of all this extra surface warming is a surface cooling. The very notion refutes itself.
That we can take an average here is not very germane. For if we state that the average is a valid view, then it is a valid view of averaging historical and present warming. Where historical can reach back up to a millennium, IIRC. But such a moving average isn't relevant to what should be happening near the surface, given our concern is with respect to what happens near the surface in the present.

MarkW
Reply to  Climate Pete
June 9, 2015 1:35 pm

If the so-called warming is more than an order of magnitude less than your error bars, then you have not found a signal. Period.

Science or Fiction
Reply to  Climate Pete
June 9, 2015 2:29 pm

And by which mechanism do you imagine that the energy gets absorbed by CO2 in the atmosphere without warming it, then passes through the upper 150 meters of ocean while cooling it, and then starts heating the deep oceans below 150 meters?

Menicholas
Reply to  Climate Pete
June 9, 2015 4:46 pm

“And by which mechanism do you imagine…”
I think the last word there is on the right track re the thought process of his ilk.
This is the Warmista way…imagine it, then insist it must be true, then warn everyone it will be very very bad, then insist on it, shout down and insult any who disagree, stuff fingers in ears when being contradicted…
No need to get bogged down with rational arguments, evidence based science, or plausible mechanisms when one is a True Believer in the "Science" of Climastrology.

Reply to  Climate Pete
June 9, 2015 7:02 pm

Mary Brown says:
How accurate is the measurement of global temperature? How about global ocean temperatures?
Good questions. The answer is that ARGO’s error bars are far wider than its claimed accuracy, so we don’t have a good answer.
But various other observations show ocean warming of around a quarter of a degree per century. That’s entirely natural, and it is simply a recovery from the Little Ice Age.
At ≈0.23ºC/century, that is entirely beneficial warming. More would be better. Without very sensitive instruments, no one could even tell. It would be like the 0.7ºC global warming (again natural) over the past century; without very sensitive instruments no one could even tell.
That shows how preposterous the “dangerous man-made global warming” (MMGW) scare is. The whole thing is a giant head fake: a scare that exists only in the frightened minds of the global warming crowd, which otherwise makes no difference at all.
There is no problem with MMGW. At all. There is NO problem. MMGW is a complete false alarm. On a list of 10 things to worry about, it isn’t even #11. It is more like #97, and only the first dozen matter.
So relax, Mary. They are trying to scare you with stories of a monster under the bed. But there is no monster, and there never was. The MMGW scare is a self-serving ploy, a hoax to get the public to open their wallets to the climate charlatans.
They can’t even come up with a measurement of MMGW. Could they be any less credible?

Alx
June 9, 2015 7:55 am

Imagine if the trajectories and course plots were determined the same way global temperature was determined. Get a half dozen or more ways to collect data and process it and then look at average, medians, and trends to plot the course to the moon. If this approach was indeed used for the first manned flight to the moon, our poor astronauts would still be traveling in some random direction out in the blackness of space.
Temperature is empirical, determining global temperature should not be as vague and uncertain as determining whether Sandy Koufax or Bob Gibson was the better baseball pitcher, but the point is, it is that vague and uncertain. Apparently what great statesmen and thinkers are forced to do is average Sandy Koufax and Bob Gibson together, create an imaginary best baseball pitcher and then bet their countries economies on the imaginary best baseball pitcher.
The reality is there is no discrete proven definition specifying what global temperature is and no entirely reliable and thorough method for recording it. So we have a bunch of different approaches that sometimes align, sometimes don’t, and are each constantly adjusted suggesting as of the date of their publication they are already immediately wrong.
BTW imagine how much more knowledgeable we would be if NASA devoted its dollars to studying our solar system instead of climate. With government spending on global warming 9 billion dollars annually, NASA would do well to dump GISS and become more useful.

June 9, 2015 8:13 am

22 Billion for 2015 or 2014

June 9, 2015 12:10 pm

There is a bill in Congress to strip NASA of its climatic “responsibilities” and consolidate them in NOAA. But then NASA is also squandering tax dollars on promoting the glories of Islamic science rather than exploring and exploiting space.

robinedwards36
June 9, 2015 7:55 am

I could set about verifying all the linear fits that have been reported here – but I am sure I would get the same results, and I don't use WFT. What I always wonder is why the climate analysis community (both "Us" and "The Establishment") is so keen on using linear models when any graphics software shows that the resemblance of climate data to a linear model is doubtful, to put it mildly. There are certainly data sets and periods, varying between months and a century, where a linear model seems to be adequate or appropriate, but as a universal method for describing climate temporal dynamics I have considerable misgivings.

Werner Brozek
June 9, 2015 8:35 am

but as a universal method for describing climate temporal dynamics I have considerable misgivings

I agree. Lines cannot show the 60 year cycles for example. Lines just happen to be very convenient to compare slopes of various periods.
You say

I don’t use WFT

Nick Stokes’ source here gives the same numbers:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
However Nick also has UAH6.0 and NOAA and other things that WFT does not have, so I value his additional things.

Jquip
June 9, 2015 11:56 am

I’m right there with you. But there are two valid manners of rebutting an argument. The one is to rebut it based on the form of the argument (linear slopes), and the other is to rebut it based on the content of the argument (the slope is going the wrong way.)
The trick to using the latter, even when the form is nonsense, is that whoever is being rebutted must either acknowledge the validity of the outcome, discard the bad form, or hit the bully pulpit with cries to criminalize the people that demonstrated the original argument’s weakness.

Mary Brown
June 9, 2015 8:13 am

There is another type of data set that receives little mention but I find it intriguing. Dr. Ryan Maue of WeatherBell keeps a running average of global temps that are used to initialize the GFS weather forecast models.
http://models.weatherbell.com/climate/cfsr_t2m_2005.png
So, the way I think this works is this… weather models are initialized very carefully several times a day. Data pours in from obs and is error-checked and smoothed in a grid over the entire globe. A global grid of 2m temps is created. This is then used as the starting point for the model run.
This multi-day, global 2m temp grid then becomes an alternative global temp data set.
I haven’t seriously considered the plusses/minuses of the use of this in climate change assessment. But my first instinct is that it would be more trustworthy than a bunch of coop stations with a mish-mash of measuring times and techniques and locations that have been adjusted repeatedly.
Thoughts ?

rgbatduke
Reply to  Mary Brown
June 11, 2015 4:33 am

I don’t know if you are still listening, but this is not quite what they do in GCMs. There are two reasons for this. One is that a global grid of 2 million temperatures sounds like a lot, but it’s not. Remember the atmosphere has depth, and they have to initialize at least to the top of the troposphere, and if they use 1 km thick cells there are 9 or 10 layers. Say 10. Then they have 500 million square kilometers of area to cover. Even if the grid itself has two million cells, that is still cells that contain 250 square km. This isn’t terrible — 16x16x1 km cells (20 million of them assuming they follow the usual practice of slabs 1 km thick) are small enough that they can actually resolve largish individual thunderstorms — but is still orders of magnitude larger than distinct weather features like individual clouds or smaller storms or tornadoes or land features (lakes, individual hills and mountains) that can affect the weather. There is also substantial error in their initial conditions — as you say, they smooth temperatures sampled at a lot fewer than 2 million points to cover vast tracts of the grid where there simply are no thermometers, and even where they have surface thermometers they do not generally have soundings (temperature measurements from e.g. balloons that ride up the air column at a location) so they do not know the temperature in depth. The model initialization has to do things like take the surface temperature guess (from a smoothing model) and guess the temperature profile overhead using things like the adiabatic lapse rate, a comparative handful of soundings, knowledge of the cloudiness or whatever of the cell obtained from satellite or radar (where available) or just plain rules of thumb (all built into a model to initialize the model. Then there is the ocean. Sea surface temperatures matter a great deal, but so do temperatures down to some depth (more for climate than for weather, but when large scale phenomena like hurricanes come along, the heat content of the ocean down to some depth very much plays a role in their development) so they have to model that, and the better models often contain at least one if not more layers down into the dynamic ocean. The Gulf Stream, for example, is a river in the Atlantic that transports heat and salinity and moves around 200 kilometers in a day on the surface, less at depth, which means that fluctuations in surface temperature, fed back or altered by precipitation or cloudiness or wind, move across many cells over the course of a day.
Even with all of the care I describe above and then some, weather models computed at close to the limits of our ability to compute (and get a decent answer faster than nature “computes” it by making it actually happen) track the weather accurately for a comparatively short time — days — before small variations between the heavily modelled, heavily undersampled model initial conditions and the actual initial state of the weather plus errors in the computation due to many things — discrete arithmetic, the finite grid size, errors in the implementation of the climate dynamics at the grid resolution used (which have to be approximated in various ways to “mimic” the neglected internal smaller scaled dynamics that they cannot afford to compute) cause the models to systematically diverge from the actual weather. If they run the model many times with small tweaks of the initial conditions, they have learned empirically that the distribution of final states they obtain can be reasonably compared to the climate for a few days more in an increasingly improbable way, until around a week or ten days out the variation is so great that they are just as well off predicting the weather by using the average weather for a date over the last 100 years and a bit of sense, just as is done in almanacs. In other words, the models, no matter how many times they are run or how carefully they are initialized, produce results with no “lift” over ordinary statistics at around 10 days.
Then here is the interesting point. Climate models are just weather models run in exactly this way, with one exception. Since they know that the model will produce results indistinguishable from ordinary static statistics two weeks in, they don’t bother initializing them all that carefully. The idea is that no matter how then initialize them, after running them out to weeks or months the bundle of trajectories they produce from small perturbations will statistically “converge” at any given time to what is supposed to be the long time statistical average, which is what they are trying to predict. This assumption is itself dubious, as neither the weather nor the climate is stationary and it is most definitely non-Markovian so that the neglected details in the initial state do matter in the evolution of both, and there is also no theorem of which I am aware that states that the average or statistical distribution of a bundle of trajectories generated from a nonlinear chaotic model of this sort will in even the medium run be an accurate representation of the nonstationary statistical distribution of possible future climates. But it’s the only game in town, so they give it a try.
They then run this repurposed, badly initialized weather model out until they think it has had time to become a “sample” for the weather for some stationary initial condition (fixed date, sunlight, atmosphere, etc) and then they vary things like CO_2 systematically over time while integrating and see how the run evolves over future decades. The bundle of future climate trajectories thus generated from many tweaks of initial conditions and sometimes the physical parameters as well is then statistically analyzed, and its mean becomes the central prediction of the model and the variance or envelope of all of the trajectories become confidence intervals of its predictions.
The problem is that they aren’t really confidence intervals because we don’t really have any good reason to think that the integration of the weather ten years into the future at an inadequate grid size, with all of the cumulation of error along the way, is actually a sample from the same statistical distribution that the real weather is being drawn from subject to tiny perturbations in its initial state. The climate integrates itself down to the molecular level, not on a 16×16 km grid, and climate models can’t use that small a grid size and run in less than infinite time, so the highest resolution I’ve heard of is 100×100 km^2 cells (10^4 square km, which is around 50,000 cells, not two million). At this grid size they cannot see individual thunderstorms at all. Indeed, many extremely dynamic features of heat transport in weather have to be modelled by some sort of empirical “mean field” approximation of the internal cell dynamics — “average thunderstormicity” or the like as thunderstorms in particular cause rapid vertical transport of a lot of heat up from the surface and rapid transport of chilled/chilling water down to the surface, among other things. The same is true of snowpack — even small errors in average snowpack coverage make big differences in total heat received in any given winter and this can feed back to kick a model well off of the real climate in a matter of years. So far, it looks like (not unlike the circumstance with weather) climate models can sometimes track the climate for a decade or so before they diverge from it.
They suffer from many other ailments as well — if one examines the actual month to month or year to year variance of the “weather” they predict, it has the wrong amplitude and decay times compared to the actual climate, which is basically saying (via the fluctuation-dissipation theorem) that they have the physics of the open system wrong. The models heavily exaggerate the effect of aerosols and tend to overreact to things like volcanic eruptions that dump aerosols into the atmosphere. The models are tuned to cancel the exaggerated effect of aerosols with an exaggerated feedback on top of CO_2 driven warming to make them “work” to track the climate over a 20 year reference period. Sadly, this 20 year reference period was chosen to be the single strongest warming stretch of the 20th century, ignoring cooling periods and warming periods that preceded it and (probably as a consequence) diverging from the flat-to-slightly cooling period we’ve been in for the last 16 or so years (or more, or less, depending on who you are talking to, but even the IPCC formally recognizes “the pause, the hiatus”, the lack of warming for this interval, in AR5. It is a serious problem for the models and everybody knows it.
The IPCC then takes the results of many GCMs and compounds all errors by superaveraging their results (which has the effect of hiding the fluctuation problem from inquiring eyes), ignoring the fact that some models in particular truly suck in all respects at predicting the climate and that others do much better, because the ones that do better predict less long run warming and that isn’t the message they want to convey to policy makers, and transform its envelope into a completely unjustifiable assertion of “statistical confidence”.
This is a simple bullshit lie. Each model one at a time can have the confidence interval produced by the spread in long-run trajectories produced by the perturbation of its initial conditions compared to the actual trajectory of the climate and turned into a p-value. The p-value is a measure of the probability of the truth of the null hypothesis — “This climate model is a perfect model in that its bundle of trajectories is a representation of the actual distribution of future climates”. This permits the estimation of the probability of getting our particular real climate given this distribution, and if the probability is low, especially if it is very low, we under ordinary circumstances would reject the huge bundle of assumptions tied up in as “the hypothesis” represented by the model itself and call the model “failed”, back to the drawing board.
One cannot do anything with the superaverage of 36 odd non-independent grand average per-model results. To even try to apply statistics to this shotgun blast of assumptions one has to use something called the Bonferroni correction, which basically makes the p-value for failure of individual models in the shotgun blast much, much larger (because they have 36 chances to get it right, which means that even if all 36 are wrong pure chance can — no, probably will — make a bad model come out within a p = 0.05 cutoff as long as the models aren’t too wrong yet.
By this standard, “the set of models in CMIP5” has long since failed. There isn’t the slightest doubt that their collective prediction is statistical horseshit. It remains to be seen if individual models in the collection deserve to be kept in the running as not failed yet, because even applying the Bonferroni correction to the “ensemble” of CMIP5 is not good statistical practice. Each model should really be evaluated on its own merits as one doesn’t expect the “mean” or “distribution” of individual model results to have any meaning in statistics (note that this is NOT like perturbing the initial conditions of ONE model, which is a form of Monte Carlo statistical sampling and is something that has some actual meaning).
Hope this helps.
rgb
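
The comment above invokes the Bonferroni correction; the standard motivation for it is easy to sketch with synthetic numbers. This is a generic illustration of the multiple-comparisons problem (with 36 independent tests, chance alone will usually push at least one past a nominal p = 0.05 threshold), not a rendering of rgb's exact argument or of any CMIP5 analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
n_models, alpha = 36, 0.05

# If each of 36 independent tests produces a p-value that is uniform on [0, 1] under its
# null hypothesis, the chance that at least one falls below 0.05 by chance alone is:
print(f"chance of >= 1 spurious pass at p < 0.05 : {1 - (1 - alpha) ** n_models:.2f}")

# Bonferroni-corrected per-test cutoff needed to hold the family-wise error rate at 0.05.
print(f"Bonferroni per-test cutoff               : {alpha / n_models:.4f}")

# Quick Monte Carlo check of the analytic number above.
trials = rng.uniform(size=(100_000, n_models))
print(f"Monte Carlo estimate of spurious passes  : {(trials.min(axis=1) < alpha).mean():.2f}")
```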

richardscourtney
June 11, 2015 4:54 am

Mods
I ask you to bring the above post by rgb (at June 11, 2015 at 4:33 am) to the attention of our host with a view to the post being a main article.
In my opinion it is the best post on WUWT so far this year.
Richard

rgbatduke
June 11, 2015 5:45 am

Oh, dear. If I thought it was going to be an article, I might have actually done a better job with my paragraphs (turning some of the long ones into maybe two or three:-) and might have closed a few unclosed parentheses.
Which reminds me, I’d better go change my underwear. You never can tell who is going to get to see it…;-)
rgb

Ron Clutz
June 11, 2015 6:45 am

Excellent explanation. Reposted this at Science Matters (I did put in more paragraphs for clarity).
https://rclutz.wordpress.com/2015/06/11/climate-models-explained/

Mary Brown
June 11, 2015 6:48 am

Thanks RGB for the amazingly detailed response. I'm aware of the basics of the weather modelling process since I've been working extensively at statistically post-processing GCM data for decades. You filled in some great info, esp how it relates to the differences between weather and climate GCMs.
So, seems to me a climate model, in a nutshell, does this…you take a weather model, simplify the initial conditions to make computation manageable, then decide on the answer you want to get, install those answers in the parameterizations of unknowns like convection, aerosols, water vapor, etc, then burn lots of coal making electricity to run the model to get the answer you programmed in to start with. That may be a bit cynical, but I can’t see how the process fundamentally differs. Since the predictability of this chaotic system trends asymptotically towards zero in the 1-3 week times frame, all you are left with is the original GIGO inputs, which are guesses, not physics.

Mary Brown
June 11, 2015 6:50 am

But RGB, you didn’t really get to the point of my original post. Does a historical record of the wx model initializations make a valid climate database?
Data for the entire earth is collected, error-checked, and smoothed into a 2m grid several times a day. This is a consistent, objective process and since it is not in the climate debate, it is "untampered".
Seems to me, and apparently Dr. Maue, that the 2m GCM initializations are a valid historical record perhaps on par in accurate representation with satellites or land based measurements.

RACookPE1978
Editor
June 11, 2015 8:30 am

RGBatDuke

Climate models are just weather models run in exactly this way, with one exception. Since they know that the model will produce results indistinguishable from ordinary static statistics two weeks in, they don’t bother initializing them all that carefully. The idea is that no matter how then initialize them, after running them out to weeks or months the bundle of trajectories they produce from small perturbations will statistically “converge” at any given time to what is supposed to be the long time statistical average, which is what they are trying to predict.

Let me add two small additions to this excellent summary of the Global Circulation Models' original intent and fundamental method. When you read the textbook histories of the growth, political support and funding of the NCAR "computer campus" in Boulder by various politicians through the years, their bias in purpose – and thus their almost-certain bias in methodology, programming and error-checking (er, promulgation) – becomes more evident.
The first circulation models were very limited in extent, intent, duration and event – but these models created the need for the original core routines in the boundary value selection, mass and energy transfer between cells, and the first-order differential equations chosen to define those transfers, and for the evaluation of the specific constants used in those differential equations at the boundaries of each cell. Once the core is written, ever-larger areas and ever-longer timeframes are readily extended after computer processing power is purchased by the funding politicians. But the core routines need not be touched, nor do they become more fundamentally accurate, as each model is extended longer into the future over ever-widening areas and into ever-smaller cubes; they are only refined. Many of these "constants" are valid and are properly defined. But are all of these properties correctly and exactly defined across every pressure, temperature, and phase change, as pressures reduce, temperatures increase or decrease, and altitudes, latitudes and durations change over the 85, 285, or 485 years of a run?
The original models were written to study the particulate pollution in valley regions with unusual inversion patterns – most notably “plume” studies across LA basin, Pittsburgh and NY, and the Canadian nickel smelters just across the NY border. Once “running”, the political side of the funding equation and regulatory agencies used those results to both create, study and justify more regulations on these particulate plumes, but also to justify more funding for the “model research” and ANY other environmental study that might “use” the results of the models to justify even more “research”.
The self-feeding, self-funding "enviro-political-funding" triumvirate so accurately predicted by President Eisenhower was born. It expanded into acid rain studies across the northeast, then the US and then Europe. It expanded (via the "circulation" mode of the now regional models) for those who needed to study/regulate the newly-discovered Antarctic Ozone Hole, but only began feeding the CO2 machine after these earlier modeling "successes" by the laboratories, who used each new regulation to create another computer lab, fund another super-computer complex at yet another university, and (obviously) support even more researchers looking for another easy-to-publicize (er, easy-to-publish wow-factor) article for another journal.
The highly visualized, super-computer Finite-Element Analysis industry had arrived. It grew sideways of course immediately into engineering design, real materials and controls analysis, and entertainment – not at all expected by the original programmers by the way! But the basis of circulation models always remained not “weather models” but “regional particulate plume studies” that behaved like weather models into an assumed future.
Today, I understand the GCMs are not actually fed "starting parameters" from the beginning, but rather are run from "near-zero" conditions for several thousands of cycles – until the "weather" they are trying to predict worldwide stabilizes across all cells in the model.
"Forcings" defining the original case (TSI, CO2 levels, particulates, volcano aerosols, etc.) of each model are NOT changed through the standardization runs. (One assumes that each cell is then checked against the real world "weather" (pressure, wind, temperatures, humidity, etc.) at each location of each cell to verify that reality is matched before the "experiment" is then run.) Forcings are then changed (usually in the form of "What if CO2 is increased by this amount beginning at this period?" or "What if volcano aerosols had changed during this period in this way?") and then the model runs are continued from the "standardization" cycles and run forward into the future as far and as fast as funding permits.
But individual boundary values for each original cell are NOT force-fed into the climate models at the beginning of each model run.
The result is then publicized (er, published) by the national ABCNNBCBS doting press with the funding political parties standing by with their press release already written and TV cameras ready.

rgbatduke
June 11, 2015 8:51 am

But RGB, you didn’t really get to the point of my original post. Does a historical record of the wx model initializations make a valid climate database?
Data for the entire earth is collected, error-check, and smoothed into a 2m grid several times a day. This is a consistent, objective process and since it is not in the climate debate, it is “untampered”.
Seems to me, and apparently Dr. Maue, that the 2m GCM initializations are a valid historical record perhaps on par in accurate representation with satellites or land based measurements.

No, I didn’t get the point, sorry. But that is a very interesting question indeed. Presumably one could almost instantly transform them into a global average temperature, further one that is “validated” to some extent by the success of the following weather predictions based on the entire smoothed field. Note well (also) that this would give one a consistent direct estimate of the actual global temperature, not the anomaly! Since I personally am deeply cynical about the whole “anomaly is more accurate than our knowledge of the average temperature” business, I’d be extremely interested in the result. It would also be very easy to subtract and generate an “anomaly” out of the timeseries of actual averages for comparison purposes.
One thing that people likely forget is that climate models produce actual global temperatures, not temperature anomalies! They have little choice! The radiative, conductive, convective, and latent heat transfers are all physics expressed not only in degrees, but in degrees absolute or kelvin. One has no choice whatsoever in their "zero". This is why the fact that the model timesteps aren't stable with respect to energy and have to periodically be renormalized (de facto renormalizing the temperature) is so unnerving. How is this even possible without a renormalization scheme that must be biased, as it presumes a prior knowledge of the actual energy imbalance that one is trying to compute so it can be preserved while an indistinguishable numerical error is removed? In another thread (yesterday or the day before) I discussed this extensively in the context of solving a simple orbital trajectory problem with 1-step Euler and then trying to remove the accumulating energy/angular momentum drift with some step-renormalization scheme. If one didn't know that energy and angular momentum were strictly conserved by real orbits, how could one implement such a scheme?
So the other interesting thing is that these averages could be directly compared not to the "anomalies" of the GCM temperatures with their completely free parameter, the actual average temperature, that is already not in very good agreement across models(!), but to the actual average temperature as determined by the weather model initialization, computed with a reasonably consistent algorithm and, one might hope, with no dog in the race other than a desire to get next week's weather as correct as possible, which will only work if the temperature initialization isn't too far off from reality and maybe lacks a systematic bias. One could hope. At least it should lack a month to month, year to year time-dependent systematic bias, since it may well be that, empirically, weather models perform better if the initialization temperature is slightly biased when they tend to over or under estimate cooling/heating.
Surely somebody has looked at this?
rgb
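
The orbital analogy mentioned above is easy to reproduce: a plain 1-step (forward) Euler integration of a circular orbit steadily drifts in energy, the kind of drift a renormalization scheme would have to recognise and remove. A toy sketch in scaled units (GM = 1), not anything taken from a GCM:

```python
import numpy as np

# Two-body problem in scaled units (GM = 1), starting on a circular orbit of radius 1.
pos = np.array([1.0, 0.0])
vel = np.array([0.0, 1.0])
dt, steps = 0.01, 5000          # roughly eight orbital periods

def specific_energy(p, v):
    # Kinetic plus gravitational potential energy per unit mass; conserved exactly by a real orbit.
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(p)

e_start = specific_energy(pos, vel)
for _ in range(steps):
    acc = -pos / np.linalg.norm(pos) ** 3    # gravitational acceleration toward the origin
    pos = pos + vel * dt                     # forward Euler: no attempt to conserve energy
    vel = vel + acc * dt

print(f"initial specific energy: {e_start:+.4f}")
print(f"final specific energy  : {specific_energy(pos, vel):+.4f}  (drift is pure numerical error)")
```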

Mary Brown
June 11, 2015 9:19 am

“Surely somebody has looked at this?”
rgb
////////////////////
Not that I’ve ever seen before but I’m just a spare time climate wonk. Dr. Maue’s web blog is the only place I’ve ever seen anything like it.
But like you say, weather modellers have a vested interest in getting the initial 2m temp field as correct as possible to make the best possible forecast. This is done objectively many times a day (used to be 4, may be 24 now). So, they are accidentally creating a very detailed and objective record of the climate.
Dr. Maue’s link again to his data…
http://models.weatherbell.com/temperature.php#!prettyPhoto
The stat geeks here at work use his "month to date" global temp anomalies to forecast where the final monthly figures will come in for UAH, RSS, HadCRUT and GISS. Works quite well.
Interestingly, his global temp traces don’t show 1998 as the warmest ever. They have a period 2002-2007 as generally warmest, with cooling until 2011-2012 and a modest rebound warming since, still below 2002-2007 period. Eyeballing, it looks like about 0.35 deg C rise since 1979, which off the top of my head is similar to satellites.

rgbatduke
June 11, 2015 11:27 am

Absolutely awesome site, I bookmarked it. I'd even sign up for it, but $200/year is an insane price – Weather Underground is $5/year. I did look at the temperature product though, and it is a) very fine grained; b) as you say, quite different in structure from all of the temperature series. In particular, it totally removes the ENSO bumps most of the other records have. This all by itself is puzzling, but IMO quite possibly reasonable. The other really, really interesting thing about the record is that it is remarkably flat across the early 80s where HadCRUT and GISS show a huge warming. Indeed, the graph is very nearly trendless. 0.1 C/decade at most, with the odd bump well AFTER the 1998 ENSO that appears to end AT the 2010 ENSO.
It would be enormously interesting to compare this to RSS, but even there RSS has the 1998/1999 bump and the 2010 bump, because those were largely troposphere/atmospheric warming phenomena. This temperature graph appears to really track something else entirely. No wonder Joe Bastardi is skeptical (he seems to be affiliated with this site).
rgb

mpaul
June 9, 2015 8:21 am

As it relates to the pause, why isn’t satellite data the gold standard? This is not a rhetorical question. Is there any reason to doubt that the satellites are far more accurate and precise than other approaches? Can we quantify this?
I’ve heard that the satellite data is accurate to +/- 0.01 C. Surely the chicken-bones-and-tea-leaves-adjusted-by-partisans data is less accurate.
Since the pause has occurred entirely during the satellite era, why would people use less accurate data as authority?

Werner Brozek
June 9, 2015 8:45 am

As it relates to the pause, why isn’t satellite data the gold standard?

That is a good question. In all fairness, we need to acknowledge that we had UAH5.5, UAH5.6 and UAH6.0 during the past year. As you know, only 6.0 closely agrees with RSS.
However now that we have this agreement, this question should be given much more serious consideration.
It used to be said that RSS was an outlier. But not anymore.

firetoice2014
June 9, 2015 9:01 am

The “chicken-bones-and-tea-leaves-adjusted-by-partisans data” are actually not data, but rather estimates of what the data might have been had they been collected timely from properly selected, sited, calibrated, installed and maintained instruments, thus rendering the “chicken-bones-and-tea-leaves” adjustments unnecessary.

Climate Pete
June 9, 2015 10:14 am

If the satellite data is accurate to +/- 0.01 C, how can Christy and Spencer have just amended the most recent temperatures by upwards of 0.1 C?
The problem is that, while the sensors on the satellites are highly accurate, they do not actually measure lower tropospheric temperatures directly. You have to combine readings from different sensors, combine readings from different satellites, and compensate for satellite orbital decay (except for some satellites which have physical thrusters to do this). And the number of satellites and overlaps is now huge.
Satellites in the sky at the same time cross the same point at different times, meaning you have to guess at the real temperature at that point before trying to stitch the two records together. Get any one of the overlaps wrong and you change the long-term trend hugely.
Roy Spencer’s description at http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade/ of what needs to be done to convert the MSU readings on various channels to consistent temperatures includes the following:

Roy Spencer said
All data adjustments required to correct for these changes involve decisions regarding methodology, and different methodologies will lead to somewhat different results. This is the unavoidable situation when dealing with less than perfect data.

We want to emphasize that the land vs. ocean trends are very sensitive to how the difference in atmospheric weighting function height is handled between MSU channel 2 early in the record, and AMSU channel 5 later in the record (starting August, 1998).

In other words, they are applying judgement as to how to do it, which means the result is a long way from being objective. They could pretty much pick the result they wanted, then select a methodology to ensure that is the answer they get.

We have performed some calculations of the sensitivity of the final product to various assumptions in the processing, and find it to be fairly robust. Most importantly, through sensitivity experiments we find it is difficult to obtain a global LT trend substantially greater than +0.114 C/decade without making assumptions that cannot be easily justified.

Great words. Now let’s see the UAH 6.0 code so that everyone can see whether the assumptions made are generally agreed to be justified themselves.
Have a good read of Spencer’s blog entry on the method, then decide for yourself whether you believe the results are repeatable to 0.01 C over the 17 generations of satellites!

Reply to  Climate Pete
June 9, 2015 10:26 am

Climate Pete,
Great words. So maybe you can explain why scientists from across the board — warmists, skeptics, alarmists, the IPCC, etc., etc., all now admit that global warming has stopped.
See, it’s like this, Pete:
Global warming stopped many years ago.
You are trying to force the facts to fit your beliefs, instead of following where the data, the evidence, and the observations lead.
No wonder your conclusions are wrong.

Reply to  Climate Pete
June 9, 2015 10:48 am

I most definitely believe the satellite data and not the agenda-driven data you may care to choose.
In addition, the satellite data is supported by radiosonde data and independent temperature monitoring agencies such as Weatherbell Inc.

Climate Pete
Reply to  Climate Pete
June 9, 2015 11:04 am

The earth continues to warm
This means that more energy is received at the top of the atmosphere from the sun than is emitted into space from the top of the atmosphere. Most of the warming goes into the oceans.
Prior to the early 2000s deep ocean temperatures were measured from instruments lowered overboard from ships and platforms. Starting in 2000 a network of Argo floats has been deployed throughout the world’s oceans. The name was chosen from “Jason and the Argonauts” to be complementary to the Jason ocean altimetry satellite. The Argo floats drift with ocean currents. Every 10 days they descend to 2000m then ascend to the surface, pausing to take temperature and salinity readings. At the surface the readings are transmitted to a satellite, and the cycle repeats.
The following US National Oceanographic Data Center (NODC) chart uses Argo and other sources of ocean temperature readings.
http://e360.yale.edu/images/slideshows/heat_content2000m.png
The chart shows that between 1998 and 2012 the 5-year-smoothed energy content of the oceans from 0-2000m has risen from 5 x 10^22 (5 times 10 to the power 22) to 17 x 10^22 Joules relative to 1979. Since the earth’s area is 510 million square km = 5.1 x 10^14 square metres, and there are 442 million seconds in 14 years, then this rise equates to a rate of heating of 12 x 10^22 / (5.1 x 10^14 x 4.4 x 10^8) = 0.53 W/square metre continuously over the whole of earth’s surface.
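The arithmetic in that paragraph is easy to check directly; a minimal Python sketch using the figures exactly as quoted:

```python
# Check of the quoted ocean heat arithmetic (figures as given above).
delta_Q = 12e22          # J, rise in 0-2000 m ocean heat content, ~1998-2012
earth_area = 5.1e14      # m^2 (510 million square km)
seconds = 4.42e8         # s, about 14 years

print(f"{delta_Q / (earth_area * seconds):.2f} W per square metre")  # ~0.53
```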
There is a myth that global warming stopped in 1998, the year of a large El Nino event. The evidence put forward for this claim is usually the following RSS lower tropospheric temperature graph :
http://wattsupwiththat.files.wordpress.com/2015/04/clip_image002_thumb3.png
There are a number of fallacies associated with the claim. Firstly the temperature data set is cherry-picked – RSS is only one of two satellite temperature data sets, the other satellite data set, UAH (5.6 or 6.0), shows statistically significant warming since 1998 and the average of RSS and UAH shows warming. All the surface temperature data sets also show statistically significant warming since 1998.
Secondly, the use of lower tropospheric (or surface) temperatures is cherry picked out of all the many indications of continued warming which include total earth energy content, surface temperatures and the continuing reduction in upper stratospheric temperatures.
Thirdly, the use of a start date coinciding with the largest recent El Nino event, known to boost surface temperatures, is cherry picked to minimise the warming trend as much as possible.
Further, the use of a simple temperature chart to claim global warming has stopped is misrepresentation. Before drawing conclusions from temperature trends, adjustments should be made for El Nino status, solar output and volcanic aerosols over the period, all of which are random external factors affecting temperatures that are independent of the underlying global warming. If you make those adjustments to control for the external factors, then temperature charts also show significant warming since 1998.
The evidence of ongoing warming is clear, particularly from ocean heat content measurements.

Reply to  Climate Pete
June 9, 2015 11:08 am

CP sez:
“The earth continues to warm”
Wrong.

Werner Brozek
Reply to  Climate Pete
June 9, 2015 11:13 am

No, not all of them

That is then an outlier, in denying even a hiatus. Even though GISS does not show a stop, it does show a hiatus here:
http://www.woodfortrees.org/plot/gistemp/from:1950/plot/gistemp/from:1950/to:2000/trend/plot/gistemp/from:2000/trend/plot/gistemp/from:1975/to:1999/trend

Reply to  Climate Pete
June 9, 2015 11:20 am

Just so everyone is clear on the terminology: “hiatus” and “pause” both mean the same thing: global warming has stopped.
Some folks cannot accept reality. Skeptics are here to help them.
The new Narrative is: “global warming never stopped.” That isn’t true, of course.

Werner Brozek
Reply to  Climate Pete
June 9, 2015 11:21 am

the other satellite data set, UAH (5.6 or 6.0), shows statistically significant warming since 1998

That is not correct. With the March data, 5.6 had no statistically significant warming since August 1996. However with the new 6.0 version, there is no warming at all since January 1997 as this post shows.

Werner Brozek
Reply to  Climate Pete
June 9, 2015 11:28 am

Just so everyone is clear on the terminology: “hiatus” and “pause” both mean the same thing: global warming has stopped.

Just a point of clarification: I am not saying they are right, but the paper: Possible artifacts of data biases in the recent global surface warming hiatus
has these sentences: “The more recent trend was “estimated to be around one-third to one-half of the trend over 1951-2012.” The apparent slowdown was termed a “hiatus,” and inspired a suite of physical explanations for its cause…”

Werner Brozek
Reply to  Climate Pete
June 9, 2015 11:38 am

Thirdly, the use of a start date coinciding with the largest recent El Nino event, known to boost surface temperatures, is cherry picked to minimise the warming trend as much as possible.

See my earlier article here:
The 1998 El Nino is not relevant. The warming stopped before then and it also stopped after that.

Reply to  Climate Pete
June 9, 2015 11:50 am

Climate Pete,
There are a number of fallacies associated with your claims.
Firstly, the temperature data set is not cherry-picked. RSS now agrees very well with UAH. All the surface temperature “data” sets also show no statistically significant warming over various periods, but are being “adjusted” to try to show it; they cannot be adjusted as freely as past “records”, though, because the satellites are watching.
Secondly, the use of lower tropospheric (or surface) temperatures is not cherry picked. It’s what IPCC has used to promote the repeatedly falsified, evidence-free hypothesis of man-made global warming via the GHE.
Thirdly, the start date is not cherry picked, but derived from a linear regression which extends back to before the super El Niño of 1997-98.
You assert that, “Further the use of a simple temperature chart to claim global warming has stopped is misrepresentation”. In that case, the UN’s IPCC misrepresents alleged “global warming” as well. (Which of course it does, in various ways.)
The evidence of ongoing warming is non-existent. In RSS and now maybe UAH (haven’t checked), the planet’s “now trend” is cooling.

Climate Pete
Reply to  Climate Pete
June 9, 2015 12:47 pm

My spreadsheet has data centrally averaged over 12 months and has detailed data up to the end of 2013. It shows a UAH 5.6 trend for 9/1997 to 7/2013 of 0.68 degrees C per century, with a 2 sigma confidence (around 95%) of 0.37 degrees C per century. That means the 95% range is 0.31 to 1.05 degrees C per century.
WFT shows the UAH 5.5 trend from 9/1997 does not change between end dates of mid 2013 and September 2015. So the confidence levels are not going to change much either. If anything the range will be lower.
So UAH 5.6 shows significant warming from 9/1997 to mid 2013 and will do the same to date.
I agree that the UAH 6.0 data set does not show warming any more over this period.
The same spreadsheet shows the surface temperature data sets Cowtan & Way and GISTEMP also show statistically significant warming from 9/1997 to mid 2013.
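For anyone who wants to reproduce this kind of check, here is a minimal Python sketch of a least-squares trend with a naive 2-sigma interval. It is an illustration only: the data below are synthetic, and no autocorrelation correction is applied, so it gives a narrower range than the Nick Stokes or Dr. McKitrick style calculations discussed in the post.

```python
import numpy as np

def trend_with_2sigma(anomalies):
    """OLS trend of a monthly anomaly series, in C per century, with a
    naive 2-sigma interval (no correction for autocorrelation)."""
    y = np.asarray(anomalies, dtype=float)
    t = np.arange(len(y)) / 12.0                  # time in years
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    return 100 * slope, 100 * 2 * se              # per century

# Synthetic example: a 0.7 C/century trend plus 0.1 C monthly noise.
rng = np.random.default_rng(0)
months = np.arange(192)
fake = 0.007 * (months / 12.0) + rng.normal(0, 0.1, months.size)
trend, ci = trend_with_2sigma(fake)
print(f"trend = {trend:.2f} +/- {ci:.2f} C/century (2 sigma)")
```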

mpaul
Reply to  Climate Pete
June 9, 2015 1:02 pm

The earth continues to warm
This means that more energy is received at the top of the atmosphere from the sun than is emitted into space from the top of the atmosphere. Most of the warming goes into the oceans.

OK, I accept this argument from a thermodynamics standpoint. How accurately can we measure TOA energy in and energy out?

Reply to  Climate Pete
June 9, 2015 1:20 pm

Cowtan and Way’s methodology is anti-scientific. Your spreadsheet for some unaccountable reason doesn’t agree with the statistical analysis of people who know what they’re doing.

Billy Liar
Reply to  Climate Pete
June 9, 2015 2:12 pm

I love that ocean heat content graph in joules. It’s how to lie with statistics on steroids. If you convert all those joules locked up in the top 2000m of ocean to temperature it turns out to be an increase of about 50 milli-Kelvins in 50 years or one thousandth of a degree C per year. The sea is insignificantly warmer than it was 50 years ago.
Oooooh! is all that heat going to come out and bite us in the near future?

Science or Fiction
Reply to  Climate Pete
June 9, 2015 3:01 pm

Billy Liar June 9, 2015 at 2:12 pm
“I love that ocean heat content graph in joules. It’s how to lie with statistics on steroids. If you convert all those joules locked up in the top 2000m of ocean to temperature it turns out to be an increase of about 50 milli-Kelvins in 50 years or one thousandth of a degree C per year. The sea is insignificantly warmer than it was 50 years ago.
Oooooh! is all that heat going to come out and bite us in the near future?”
Good point. 🙂 Will it be possible for you to show us the input figures and the calculation?

Menicholas
Reply to  Climate Pete
June 9, 2015 4:50 pm

“They could pretty much pick the result they wanted, then select a methodology to ensure that is the answer they get.”
A method well known to Warmistas. In fact, this seems to be the only way to arrive at a conclusion to a Warmista. No wonder you imagine that others do the same.

Chris Schoneveld
Reply to  Climate Pete
June 10, 2015 4:18 am

Climate Pete,
The catastrophic warming was supposed to be related to the warming of our atmosphere, not to the hardly noticeable warming of the deep oceans. Now that the oceans are taking up all the heat, we no longer have to worry about all the land animals that were feared to go extinct because they couldn’t adapt or migrate fast enough. Also, the fear that humans would suffer or die from more heat waves can now be put to rest. The fish in the sea won’t notice a hundredth of a degree increase in water temperature. So where is the catastrophe? The rate of sea level rise? That also does not seem to change much at all.

rgbatduke
Reply to  Climate Pete
June 10, 2015 6:49 am

Are you serious? Have you even visited the RSS site and looked over their error analysis? As far as I can see, RSS claims a monthly precision using Monte Carlo and comparison with ground soundings of around 0.1 to 0.2 C, comparable to HadCRUT4. As I pointed out above, this means that there is a problem when they systematically diverge by more than their combined error.
This has just happened, and is the point of Werner’s figure above. Note that a systematic divergence doesn’t have to be 0.2 C to be significant because we aren’t comparing single months. At this point it is something like 99.99% probable that one estimator or the other (or both) is in serious error. I gave several very good reasons in a comment above for being deeply suspicious of HadCRUT4 — one of them being a series of corrections that somehow always warm the present relative to the past, over decades at this point, the same criterion you use above to cast doubt on UAH — another its admitted neglect of UHI in its land surface record, where most warming occurs.
For some time now, the trend difference between RSS and both HadCRUT and GISSTEMP has been more than disturbing, it has been statistical proof of serious error. RSS has the substantial advantage of “directly” measuring the temperature of the actual atmosphere, broadly averaged over specific sub-volumes. It automatically gets UHI right by directly measuring its local contribution to global warming of the atmosphere directly above urban centers. Land based thermometers that are poorly sited or sited in urban locations do the exact opposite — they do not average over any part of the surrounding territory and local temperatures can vary by whole degrees C if you move a thermometer twenty meters one way or another and can easily have systematic warming biases. It is not clear how any land based surface thermometer could have a cooling bias, unless it is placed inside a refrigerator, but I’m sure that one can imagine a way (HadCRUT and GISSTEMP both have successfully imagined it and implemented it as they cool past temperature readings with their “corrections” to relatively warm the present compared to the past).
You are entitled to your opinion, of course. My only suggestion is that you open up your mind to two possibilities that seem to elude you. One is that global warming is occurring, some fraction of it is anthropogenic, and that the rate of this warming is neither alarming nor likely to become catastrophic, and overall has so far been entirely beneficial or neutral to civilization as a whole. The second is that there might, actually, be substantial bias, deliberate or not, in reported global temperature estimates. The problem of predicting the climate is a hard problem. There is — seriously — no good reason to think that we have solved it, and direct evidence strongly suggests (some would say proves beyond reasonable doubt) that we have not.
rgb
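The kind of test rgb is describing can be sketched very simply. The per-month uncertainties below are assumptions for illustration, and a real comparison would also have to handle autocorrelation and the natural variability the two records share, so treat this only as the shape of the argument:

```python
import numpy as np

def divergence_z(series_a, series_b, sigma_a=0.15, sigma_b=0.15):
    """Rough z-score for a sustained offset between two anomaly series.

    sigma_a and sigma_b are assumed independent per-month measurement
    uncertainties in degrees C; ignoring autocorrelation overstates the
    significance somewhat, so this is a sketch of the idea only."""
    d = np.asarray(series_a) - np.asarray(series_b)
    sigma_month = np.hypot(sigma_a, sigma_b)
    return d.mean() / (sigma_month / np.sqrt(len(d)))

# Example: a persistent 0.1 C offset over five years of monthly data.
z = divergence_z(np.full(60, 0.1), np.zeros(60))
print(f"z = {z:.1f}")   # well outside what independent 0.15 C errors allow
```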

David A
Reply to  Climate Pete
June 11, 2015 6:33 am

As RGB shows above, the satellite data is reinforced by the raw surface data, which is much closer to the satellite data. Additionally, the tail cannot wag the dog. The oceans will always dominate the land, and the difference between land and ocean trends can only vary slightly.
Furthermore, both satellite data sets show 1998 as being the warmest year ever by A LOT. Attention WERNER, it is inadequate to say the satellites show 2015 as the sixth warmest year. The difference between 1998 and 2014 and 2015 is huge, far beyond any possible error in the satellites. In a race where 2nd through 10th place are separated by a few hundredths of a second, but first place leads by two tenths of a second, it is important to highlight the difference.

Werner Brozek
Reply to  Climate Pete
June 11, 2015 8:28 am

Attention WERNER, it is inadequate to say the satellites show 2015 as the sixth warmest year. The difference between 1998 and 2014 and 2015 is huge, far beyond any possible error in the satellites.

I am well aware of that. I do not believe there is the faintest hope of either satellite data set getting higher than third by the time the year is over. I plan to have more on this in two months, when I will discuss the possibilities of records in 2015.

David A
Reply to  Climate Pete
June 11, 2015 9:41 pm

Thanks Werner. I appreciate your work. If you include the global graphics when comparing 1998 to any other year, it is very clear visually as well.

george e. smith
Reply to  Climate Pete
June 12, 2015 4:19 pm

So Climate Pete, are you ESL disadvantaged or what?
How many times have readers at WUWT been informed (told in no uncertain terms) that the “start date” for the Monckton graph of “no statistically significant Temperature change” is in fact the date of the most recently released RSS measured data.
They use the most recently available for the start date, because next month’s data is not yet available to use; so it will be used next month.
It is the “end date” some 18 1/2 years ago, which is COMPUTED by the Monckton algorithm. It is not CHOSEN by anybody or any process or any thing.
So if you are not able to read Lord Monckton’s description for yourself, get some help on it, and stop repeating the nonsense that anybody cherry picks the 1997 date for any purpose. Planet earth set that time as an apparent end of the earlier period of global warming; not Monckton of Brenchley.

Reply to  Climate Pete
June 12, 2015 5:19 pm

george e. smith,
Climate Pete can’t understand that concept. I’ve tried to explain it to him, to no avail. We start measuring from the latest data point, back to where global warming stopped. Right now, that is 18 ½ years. As time goes on, that number may or may not increase. But if global warming does not resume, soon it will be a full twenty years of temperature stasis. That’s very unusual in the temperature record. But facts are facts.
rgb says:
RSS has the substantial advantage of “directly” measuring the temperature of the actual atmosphere, broadly averaged over specific sub-volumes. It automatically gets UHI right by directly measuring its local contribution to global warming of the atmosphere directly above urban centers. Land based thermometers that are poorly sited or sited in urban locations do the exact opposite — they do not average over any part of the surrounding territory and local temperatures can vary by whole degrees C if you move a thermometer twenty meters one way or another and can easily have systematic warming biases…. RSS is a measurement…
Both RSS and UAH agree closely with thousands of radiosonde balloon measurements. Many $millions are spent annually on those three data collection systems. The people who reject them out of hand do so only because they don’t support their confirmation bias.
It’s amazing but true: Global warming stopped many years ago. The alarmist crowd just cannot accept that fact. So they argue incessantly, nitpicking and cherrypicking all the way. But eventually, Truth is too strong a force, and it will trump their Belief. It’s already happening.

Werner Brozek
Reply to  Climate Pete
June 12, 2015 5:45 pm

How many times have readers at WUWT been informed, (told in no uncertain terms) that the “start date” for the Monckton graph of “no statistically significant Temperature change”

Lord Monckton deals with no change whatsoever, or technically an extremely slight negative slope for 18 years and 6 months now.
The “no statistically significant Temperature change” is probably up to 27 years by now according to Dr. McKitrick. For the difference between the two, see my earlier article at: http://wattsupwiththat.com/2014/12/02/on-the-difference-between-lord-moncktons-18-years-for-rss-and-dr-mckitricks-26-years-now-includes-october-data/

Mary Brown
Reply to  Climate Pete
June 15, 2015 8:45 am

I agree. No way they are within 0.01 deg. From the variability of all the temp data sets, I think a consensus product like Wood For Trees gets us within 0.06 deg. That’s just an estimate, but if it were less than that, then the different measuring methods would yield more consistent results with each other. But they don’t.

MarkW
June 9, 2015 10:44 am

Mostly because they aren’t showing the warming many people believe must be there.

June 9, 2015 11:13 am

MarkW,
correct. This graph is reversed (cold on top; warm on lower part), but it shows reality:

Climate Pete
June 9, 2015 1:14 pm

Surface temperature anomalies averaged over medium periods of time are very consistent over long distance – with significant correlation within distances of up to 1000km. So while a surface thermometer measures temperature at only one point, the change over time is more widely applicable. This does not mean the temperatures are the same more widely.
See Spatial Correlations in Station Temperature Anomalies at http://www.smu.edu/~/media/Site/Dedman/Departments/Statistics/TechReports/TR260.ashx?la=en
As an example, take a location with a thermometer which registered an average temperature for 2000 of 10 degrees C and for 2010 of 10.1 degrees C, giving a trend of 0.1 C per decade. Assume a second location 100 miles away with no thermometer. We do not know its average temperature for 2000 or 2010, but we can say that the trend at this second location is likely to be close to 0.1 C per decade too.
This has clearly been verified with locations which are local to each other which both do have thermometers.
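The distinction Pete is drawing, that absolute temperatures differ between sites while their anomalies can still track each other, is easy to illustrate with synthetic data. This is a sketch only, not the method of the linked paper:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2011)
regional = 0.01 * (years - 2000) + rng.normal(0, 0.05, years.size)  # shared signal

# Two sites ~100 miles apart: very different absolute temperatures,
# but both feel the same regional anomaly plus a little local noise.
site_a = 10.0 + regional + rng.normal(0, 0.02, years.size)
site_b = 14.5 + regional + rng.normal(0, 0.02, years.size)

trend_a = np.polyfit(years, site_a, 1)[0] * 10   # C per decade
trend_b = np.polyfit(years, site_b, 1)[0] * 10
corr = np.corrcoef(site_a - site_a.mean(), site_b - site_b.mean())[0, 1]
print(f"trends: {trend_a:.2f} and {trend_b:.2f} C/decade, anomaly correlation {corr:.2f}")
```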
jquip said

Where the two products differ is where they should excel in relation to one another. Without regard to adjusting the reading from a thermometer to gain a counterfactual temperature that would have existed if the thermometer didn’t — the satellite results do solve the infilling and interpolation problem for terrestrial sets. They provide a naturally integrated temperature dependent profile in their signal that spans all the places that actual thermometers are not.

Although we do not know the absolute temperature readings elsewhere, we do know that the spatial correlation of anomalies (temperature changes) persists up to quite large distances.
Hence the continuous coverage of the satellite temperature data sets is not a good reason for preferring them when working with temperature anomalies (changes over time), which means that the better accuracy of surface temperature data sets becomes a significant advantage for them when compared to the somewhat arbitrary temperature calculations of the satellite data sets.

MarkW
June 9, 2015 1:40 pm

Now you are just making it up as you go, Pete.
I can find many thermometers that are just a few miles apart that have dramatically different temperature profiles over the last few decades.
Your belief that you only need one thermometer every thousand miles or so is so absurd that only someone who is truly desperate could make it.

Jquip
June 9, 2015 7:22 pm

Hence the continuous coverage of the satellite temperature data sets is not a good reason for preferring them when working with temperature anomalies (changes over time), which means that the better accuracy of surface temperature data sets becomes a significant advantage for them when compared to the somewhat arbitrary temperature calculations of the satellite data sets.

Sadly, you have me completely backwards. Strictly because the satellite sets have the naturally integrated coverage they do, they should be the preferred manner for working with anomalies. What one cannot state is that the construction of a mathemagical infilling algorithm is to be preferred over what is naturally radiating from the Earth and recorded in nearly all its EM glory.

RACookPE1978
Editor
June 11, 2015 7:30 am

Climate Pete

Surface temperature anomalies averaged over medium periods of time are very consistent over long distance – with significant correlation within distances of up to 1000km. So while a surface thermometer measures temperature at only one point, the change over time is more widely applicable. This does not mean the temperatures are the same more widely.
See Spatial Correlations in Station Temperature Anomalies at http://www.smu.edu/~/media/Site/Dedman/Departments/Statistics/TechReports/TR260.ashx?la=en
As an example, take a location with a thermometer which registered an average temperature for 2000 of 10 degrees C and for 2010 of 10.1 degrees C, giving a trend of 0.1 C per decade. Assume a second location 100 miles away with no thermometer. We do not know its average temperature for 2000 or 2010, but we can say that the trend at this second location is likely to be close to 0.1 C per decade too.
This has clearly been verified with locations which are local to each other which both do have thermometers.

Just re-read that paper. It says nothing of the sort, and its conclusion expressly required more study to justify any such claim in the future. Undated, the type-written copy cites the “classic” papers on US stations used by Hansen, Jones and Briffa between 1978 and 1988 (and two others, also by Hansen in that time period). Correlation coefficients in the plots at the end are 1/2 to 3/4 (I won’t even quantify them by assigning a one-digit decimal place), and two of the processes used by the original researchers reject negative relationships entirely! Locations were “transferred”, so even the basic longitudes and latitudes were “moved” to squared-off locations, then not-quite-so-great circle distances were calculated: even the distance numbers between stations were fudged.
Hansen, Briffa, and Jones at the time desperately needed as much of the earth’s area as possible to be covered by “red” (rising) temperature anomalies. The extrapolation across 1000 km was his method.
Hansen’s premise has never been duplicated since, nor has it ever been compared against the satellite “average” year-by-year changes.

June 12, 2015 5:35 pm

Mark W says:
I can find many thermometers that are just a few miles apart that have dramatically different temperature profiles over the last few decades.
You don’t even need a separation of a few miles.
At the last dinner Anthony hosted I passed out some calibrated thermometers to the attendees who wanted one. They were not cheap; they each had a certificate of calibration verifying accuracy to within 0.5ºC. That is very accurate for a stick thermometer. (I know something about calibration, having worked in a Metrology lab for more than 30 years. Part of the job was calibrating various kinds of thermometers.)
I purchased twenty-four thermometers. I kept them together, and every one of them showed exactly the same temperature, no matter what it was. Eventually I put a few around my house, outside. Within ten feet there was a variation of 3 – 4 degrees, and sometimes much more. As Mark says to ‘Climate’ Pete:
Your belief that you only need one thermometer every thousand miles or so is so absurd that only someone who is truly desperate could make it.
The only way to get a good idea of global temperatures is to have either lots of data points, or to have a snapshot of the whole globe, like satellites provide. Because there are so few thermometers in land-based measurements (and far fewer in the ocean), that data is highly questionable. That’s why satellite data is the best we have.

Jquip
June 9, 2015 12:14 pm

Is there any reason to doubt that the satellites are far more accurate and precise than other approaches?

Essentially, the objection is that they are not thermometers. And that they require modelling and adjustments to produce a valid temperature regionally or for the globe. This is, of course, undeniably true. This is not a refutation of satellites however, so much as it is a confession that this is also true of terrestrially based thermometer products.
The trick is that the satellites trivially show the spatial variances that are directly related to temperature, without directly measuring the temperature itself. The terrestrial thermometers do directly measure temperature, but are completely blind to any region marginally away from the thermometer itself, even a matter of a mere 20m. But the temperature given — arguably — needs correction, as the reading gained includes the interaction of the thermometer with its environment, rather than giving us the temperature we want, which would be the temperature if the thermometer wasn’t part of the environment.
This sounds silly, but it is a legitimate issue in interpreting results from terrestrial thermometers. Satellites don’t have this particular issue, even though they have others.
Essentially, if we take a ‘principle of indifference’ approach to things, then the criticisms about the temperature accuracy of the satellite products are exactly the same criticisms of the terrestrial records: They all require modelling, guestimations, and matters of taste. Of course, one cannot defame one product for one flaw and then proceed to use another product that has exactly the same flaw. Not without losing any sense of sanity or integrity, of course.
Where the two products differ is where they should excel in relation to one another. Without regard to adjusting the reading from a thermometer to gain a counterfactual temperature that would have existed if the thermometer didn’t — the satellite results do solve the infilling and interpolation problem for terrestrial sets. They provide a naturally integrated temperature dependent profile in their signal that spans all the places that actual thermometers are not.
As to whether or not anyone has used the satellite data to establish the error bounds on the terrestrial infilling and interpolation practices, I haven’t the foggiest. But I assume it has not been done, as otherwise the satellite data would be used for the construction of the infilling and interpolation.
Conversely, what I don’t know about the satellite data sets is whether they are doing the previously mentioned process to convert their signals into temperature. If so, then there is simply no argument against using the satellite data sets. Likewise, there would be no argument for using terrestrial-based data sets as anything other than a calibration for the satellite data.

The Ghost Of Big Jim Cooley
June 9, 2015 1:26 pm

The surface data is fiddled, and the satellite data undergoes considerable manipulation to ascertain a result. Although I am a dyed-in-the-wool climate sceptic, I want to stay objective. I therefore have deep reservations about satellite data, and won’t accept it like many others here do. It’s not about accepting ‘something’, or the ‘best’ of a bad bunch. If they’re all useless, then don’t accept any – like I don’t. It’s like being asked to comment on who is the best politician.

Jquip
June 9, 2015 7:14 pm

No argument from me about your statements. I guess that makes us both filthy empiricists.

Pamela Gray
June 9, 2015 8:22 am

My guess: Surface stations are recording increasing outgoing heat at the source of the pump. Satellites, which are further away from the source, record all the leaky, mixing atmospheric layers and give us the average. In comparing surface sensors with “all layers” sensors we see a divergence. Meaning, we are losing heat somewhere, keeping the entire atmospheric heat content stable even though the source pump is pouring out more heat.
Remember, the greenhouse CO2 premise proposes that the troposphere should be getting hotter and hotter. Satellites say it ain’t happening. This could be why scientists are saying the increasing surface heat is somehow going down, not up, and is recycled into the deep ocean layers. Other scientists are unsure where that heat is escaping to. What is interesting is that there is no runaway re-radiating heat getting trapped in the atmosphere. Once that surface heat gets into all the layers of the atmosphere, it is not increasing there. That seems to me to point to a leak and I would look up before I look down.

Matt Schilling
Reply to  Pamela Gray
June 9, 2015 9:03 am

+1

Reply to  Pamela Gray
June 9, 2015 10:05 am

A couple of years ago, I came across an article posted on NASA, where a satellite showed that the ‘extra heat’ was escaping to space. From memory, the article was dated or posted Dec. 2012. A few months ago, I tried to locate it, but no luck. I have immense frustration with the NASA and NOAA sites.

Berényi Péter
Reply to  Pamela Gray
June 9, 2015 12:46 pm

The only explanation is that the upper troposphere is getting progressively drier, while total precipitable water is of course increasing. Otherwise the trend of the satellite lower tropospheric temperature datasets would be some 20% higher globally than that of the surface datasets, because the moist lapse rate is smaller than the dry one. That is obviously not the case.
BTW, balloon and satellite measurements do indicate that this is what’s happening.
So, while IR atmospheric depth is increasing in some narrow frequency bands due to well mixed GHGs, on average, integrated along the entire thermal spectrum it does just the opposite, because the upper troposphere is becoming ever more transparent in water vapor absorption bands.
It is a powerful negative feedback loop, which makes climate sensitivity to well mixed GHGs small.
At this point in time we can only speculate what the underlying mechanism might be. Perhaps with increasing surface temperature the water cycle is becoming more efficient, leaving more dry air above after precipitating moisture out of it.
Anyway, computational general circulation climate models are not expected to lend an explanation, because their resolution is far too coarse to represent cloud and precipitation processes faithfully and derive them from first principles.

Werner Brozek
Reply to  Berényi Péter
June 9, 2015 1:51 pm

Thank you! If someone were so inclined, they could probably write a whole post on this topic. GISS and HadCRUT would love you if you could totally explain the discrepancy by concluding that both the satellite and the other data sets could still be correct.

rogerknights
Reply to  Berényi Péter
June 9, 2015 2:49 pm

Chiefio has a great new thread on dryness and wetness and UV, here:
https://chiefio.wordpress.com/2015/06/06/its-the-water-and-a-lot-more-vapor/

Werner Brozek
Reply to  Berényi Péter
June 9, 2015 3:04 pm

Thank you! There is a lot here about changing humidity in the stratosphere, but I did not see anything to explain why satellite data for the lower troposphere should diverge from GISS.

Science or Fiction
Reply to  Berényi Péter
June 9, 2015 3:30 pm

Are you expecting computational general circulation climate models to lend an explanation and represent cloud and precipitation processes faithfully if you increase their resolution? Do you think that the models got the basic mechanism right but their resolution is too low? What is the basis for this expectation?

Eliza
June 9, 2015 9:47 am

Wallensworth (posted above): I believe RSS is actually run by warmistas. That’s why it’s so valuable that it correlates with UAH (run by skeptics).

RWturner
June 9, 2015 10:21 am

Shouldn’t the effects of El Nino be apparent on the satellite datasets by now?

JP
June 9, 2015 10:45 am

“Effects”? Do you mean temperature? Yes, it appears May 2015 +0.27 anomaly (an increase of 0.21 from April) is an indication that El Nino is showing up in the satellite data.

MikeB
June 9, 2015 10:55 am
Werner Brozek
June 9, 2015 11:47 am

RSS also went up from 0.175 to 0.310. However it was not enough to prevent an extra month from being added to the pause. The slope just became a bit less negative.
I do not know how the 18 years and 4 months will be affected for UAH. It will either stay at 18 years and 4 months or change by 1 month either way.

David A
Reply to  Werner Brozek
June 11, 2015 6:37 am

Attention WERNER, it is inadequate to say the satellites show 2015 as the sixth warmest year. The difference between 1998 and 2014 and 2015 is huge, far beyond any possible error in the satellites. In a race where 2nd through 10th place are separated by a few hundredths of a second, but first place leads by two tenths of a second, it is important to highlight the difference.

June 9, 2015 11:20 am

https://bobtisdale.wordpress.com/2013/03/11/is-ocean-heat-content-data-all-its-stacked-up-to-be/
This data contradicts the points Climate Pete keeps trying to make. Climate Pete’s observations are very subjective.

Mary Brown
Reply to  Salvatore Del Prete
June 9, 2015 12:29 pm

Ocean heat, even if accepted as claimed, represents a 0.01 deg C or 0.02 deg rise in temp in 11 years. I get that water is dense and holds a lot of heat so spare me the lecture. But are we really going to conclude that catastrophic global warming is occurring based on an ‘estimate’ of 0.02 deg of ocean warming in a decade? Especially when many, many other data sources (satellite, wx models, balloons, sea level) disagree.
I suspect my average body temp increases by more than .02 deg when I take a single sip of morning coffee. It has taken the earth’s oceans 10 years to warm that much. Am I supposed to be afraid of that ? This is crazy.

Richard Barraclough
Reply to  Mary Brown
June 10, 2015 4:15 am

Science or Fiction says on
June 9, 2015 at 3:01 pm
Good point. 🙂 Will it be possible for you to show us the input figures and the calculation?
Yes Mary – you are quite right.
The volume of the oceans is about 1.33 billion cu km
Therefore the mass of the oceans is about 1.33 * 10^21 kgs
To raise the temperature of 1 kg of water by 1 degree C needs 4180 joules
So to raise the oceans by 1 degree, multiply those 2 figures and you will need 5.5 * 10^24 joules
An increase in heat content of 5.5 * 10^22 joules will therefore raise the (average) ocean temperature by 0.01 degrees C
Not too scary
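The same calculation in one runnable piece, using the round figures quoted above:

```python
# Convert an ocean heat content change into an average temperature change.
ocean_volume = 1.33e9 * 1e9       # m^3 (1.33 billion cubic km)
ocean_mass = ocean_volume * 1000  # kg, taking ~1000 kg per cubic metre
heat_capacity = 4180              # J per kg per degree C

joules_per_degree = ocean_mass * heat_capacity      # ~5.5e24 J
print(f"{5.5e22 / joules_per_degree:.3f} C per 5.5e22 J")  # ~0.01 C
```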

It doesn't add up...
Reply to  Mary Brown
June 10, 2015 8:58 am

I like the idea of computing the energy in eV. At 6.24 x 10^18 eV/J it makes the numbers look really scary – or the original number useless for getting a real feel.

rgbatduke
Reply to  Mary Brown
June 12, 2015 1:04 pm

I suspect my average body temp increases by more than .02 deg when I take a single sip of morning coffee. It has taken the earth’s oceans 10 years to warm that much. Am I supposed to be afraid of that ? This is crazy.

No, you are mistaken, and besides, you aren’t saying it right. The thermal energy content of your body is (gracing you with a body mass of a mere 50 kg) on the order of 50×10^3 x 4 x 310 = 62 MJ. The thermal energy of a 1 gram sip of HOT coffee is 1 x 4 x 360 = 1440 J, but of that only 1 x 4 x 50 = 200 J count. Obviously, the sip of coffee does (on average) “warm you up” but by nowhere near a full 0.02 degree. You’d have to go out to a lot more significant digits than that.
So here’s the way it works in climate science. Presenting the warming in degrees C is enormously not alarming, since none of us could detect a change in temperature of 0.02 C with our physiology if our lives depended on it. Presenting the warming as a fraction of the total heat content is not alarming, because one has to put too many damn zeros in front of the first zero, even if one expresses the result as a percent. So instead, let’s use an energy unit that is truly absurd in the context of the total energy content of the ocean, I mean “your body” — joules aren’t scary enough because 1440 isn’t a terribly big number, let’s try ergs, at 10^{-7} J/erg. Now your sip of coffee increased your body’s heat content by 1.44 x 10^10 ergs! Oh No! That’s an enormous number! Let’s all panic! Don’t take another sip, for the love of God, and in fact we’re going to have to ban the manufacture of coffee world wide in case coffee sippers increase their energy content by 5 x 10^10 ergs, which (as everybody knows) is certain to have a catastrophic effect on your physiology, major fever, could cause your cells’ natural metabolism to undergo a runaway warming that leaves you cooked where you stand like a Christmas Goose!
And yes, you said this quite right — you are supposed to be afraid of this, afraid enough to ban the growing or consumption of coffee, afraid enough to spend 400 billion dollars a year (if necessary) to find coffee substitutes that are just like drinking coffee (except for the being hot part), afraid enough to give your government plenipotentiary powers to work out any deal necessary with the United Nations or countries like Colombia that are major coffee exporters, since the worldwide embargo on coffee will obviously impact them even more than a coffee-deprived workforce will impact us.
And consider yourself lucky! If they had used electron volts instead of ergs, the numbers would have looked like the graph above on a suitably chosen scale, because 1 ev = 1.6 x 10^{-19} joules, or 1440 J = 9 x 10^21 eV, just about 10^22 eV. Now that is really scary. Heck, that’s almost Avogadro’s Number of eV, and everybody knows that is a positively huge number. If we don’t ban coffee drinking right now you could end up imbibing 3, 4 even 5 x 10^22 eV of energy in a matter of seconds, and then it will be Too Late.
Now do you understand? Your body sheds heat at an average rate of roughly 100 W, and during that one small sip — if it takes 1 whole second to swallow — your energy imbalance was almost 1340 W! In comparison to 0.5 W/m^2 out of several hundred watts/m^2 total average insolation, if you took a 1 gram sip every hour (1440/3600 ~ 0.5 W) surely you can see that you would without any question die horribly in a matter of hours. In fact, you might as well go stick your head in a microwave oven, especially a microwave oven hooked up to a flashlight battery illuminating a couple of square meters, because that’s well known to be able to melt any amount of arctic ice.
Oh, one final comment. If you choose to present your body’s coffee-sip overheating in eV, be sure to never, ever, discuss probable error or the plausibility of being able to measure the effect against the body’s natural homeostatic, homeothermic temperature regulatory mechanisms without which you would die in short order coffee or not. Because 5 x 10^22 eV plus or minus 10^20 or 10^21 eV doesn’t have the same ring to it…
rgb
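rgb’s point about unit choices is easy to make concrete; a small sketch using his own figures:

```python
# rgb's figures: a 1 g sip of hot coffee carries 1 x 4 x 360 = 1440 J in total,
# of which only 1 x 4 x 50 = 200 J is above body temperature and warms you.
sip_total = 1 * 4 * 360           # J
sip_useful = 1 * 4 * 50           # J
body_heat_capacity = 50e3 * 4     # J per K for a 50 kg body

print(f"body warming:     {sip_useful / body_heat_capacity * 1000:.1f} milli-K")
print(f"same sip in ergs: {sip_total * 1e7:.2e}")         # ~1.4e10
print(f"same sip in eV:   {sip_total / 1.602e-19:.1e}")   # ~9e21
```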

Reply to  Mary Brown
June 12, 2015 1:13 pm

RGB,
Microergs would be even scarier still!

Reply to  Mary Brown
June 12, 2015 1:25 pm

rgbatduke,
Thanks for posting what’s needed to be said for a long time. As usual, your comments make sense.
The alarmist brigade loves to use those big, scary numbers. To the science-challenged, that’s probably an effective tactic. It’s the “Olympic-sized swimming pools” argument (“Ladies and gents, that is the equivalent of X number of Olympic-sized swimming pools! Run for your lives!”)
Context is everything in cases like these. If the oceans continue to warm by 0.23ºC/century (if…), then on net balance it’s probably a win-win: the Northwest passage will eventually be ice-free, reducing ship transit times and fuel use; vast areas like Canaduh, Siberia, etc., will be opened to agriculture, and so on. What’s the downside? Bueller …?
But when they translate the numbers into ergs or joules, they can post really enormous numbers — which mean nothing different, except to the arithmetic challenged.
That isn’t science, it’s propaganda. Unfortunately, it works on some folks. But the rest of us know they’re trying to sell us a pig in a poke. Elmer Gantry would be impressed with the tactic.

John Peter
June 9, 2015 11:22 am

Steven Goddard has an interesting take on the difference between versions 2 and 3
http://realclimatescience.com/2015/06/data-tampering-on-the-other-side-of-the-pond/

June 9, 2015 11:25 am

http://thefederalist.com/2015/06/08/global-warming-the-theory-that-predicts-nothing-and-explains-everything/
Can not be said any better. AGW is agenda driven and not based on true observational data.

knr
June 9, 2015 11:26 am

No matter which way you look at it, what we see here is the reality of just how ‘unsettled’ this self-claimed ‘settled science’ really is.

The Ghost Of Big Jim Cooley
June 9, 2015 1:29 pm

Exactly! We don’t know. Unfortunately, there are people on both sides who think they do.

David A
Reply to  The Ghost Of Big Jim Cooley
June 11, 2015 6:41 am

Ghost, we do know the benefits of increased CO2. We do know the world is not warming like the models predicted. We do know droughts, hurricanes, sea level rise, and all manner of disasters are not increasing. We do know that energy is the lifeblood of every economy.

Randy
June 9, 2015 12:05 pm

When we get into discussions of a fraction of a degree I always remember back in the 80s being told from many official sources that we could NEVER have accurately measured the earth’s temps, even in the modern era. The dataset was ONLY useful for longer term and clear trends. Did this stop being true at some point? Haven’t heard this mentioned in years. All this talk of single hottest years is 100% meaningless from what was being said back then. Also the “pause” is very clear and undeniable.

June 9, 2015 1:36 pm

I have compared the global anomalies of the 1978-2015 trend for GISS and the new UAH. Striking is the fact that GISS invents hot data where there are no observations, like central Greenland and the high Arctic.
UAH:
GISS:

The Ghost Of Big Jim Cooley
Reply to  Hans Erren
June 10, 2015 3:17 am

Hans are you sure about that? I thought there are stations, at least four? Is that wrong?
http://data.giss.nasa.gov/cgi-bin/gistemp/find_station.cgi?dt=1&ds=14&name=&world_map.x=314&world_map.y=17

Reply to  The Ghost Of Big Jim Cooley
June 10, 2015 12:07 pm

Yes, but the stations are on the edge of Greenland; GISS EXTRAPOLATES the data onto the icecap, where UAH reports an OBSERVED negative trend.

Reply to  The Ghost Of Big Jim Cooley
June 10, 2015 12:18 pm

Hans,
Telling point, but there are only three actually on Greenland, with two nearby in Canada.

siamiam
June 9, 2015 3:53 pm

So, Climate Pete says there is still AGW but hidden by random external factors that apparently never existed before. So man is causing GW, but we just can’t see it???? Huh!!!!!

June 9, 2015 3:59 pm

“Why are the new satellite and ground data sets going in opposite directions? Is there any reason that you can think of where both could simultaneously be correct? ”
1. They measure different things.
2. They estimate and interpolate in different ways.
3. Both are constructed from differing instruments.
4. Both are heavily adjusted.
5. They measure at different times.
Could they both be “correct”?
That’s a silly question. They are both incorrect. Both contain error. Both are estimates. They estimate different things in different ways.
Comparing the two is vastly more complex than looking at wiggles on the page. Both wiggles are heavily adjusted, heavily processed, and hard to audit.

Werner Brozek
Reply to  Steven Mosher
June 9, 2015 4:25 pm

Could they both be “correct”?
That’s a silly question. They are both incorrect. Both contain error. Both are estimates. They estimate different things in different ways.
Comparing the two is vastly more complex than looking at wiggles on the page. Both wiggles are heavily adjusted, heavily processed, and hard to audit.

If that is the case, then there seems to be no justification for spending hundreds of billions of dollars to mitigate something we are not sure is even happening. Would you agree?

Doonman
Reply to  Werner Brozek
June 10, 2015 2:53 am

Raises hand uselessly. I live in California.

Reply to  Werner Brozek
June 10, 2015 10:03 am

No.
Your conclusion isn’t supported by what I said.
We cannot measure the rain correctly.
We cannot measure sea level correctly.
We cannot measure hurricane winds correctly.
None of these prevent us from taking action to adapt to floods, drought and storms.

Werner Brozek
Reply to  Werner Brozek
June 10, 2015 11:19 am

none of these prevent us from taking action to adapt to floods drought and storms

We should all take action to mitigate things that can affect us wherever we live. Whether or not we can prove that 15 floods per century may rise to 18 floods per century should really have no effect on how we ought to prepare for example.

Reply to  Werner Brozek
June 10, 2015 8:02 pm

“none of these prevent us from taking action to adapt to floods drought and storms.”
Floods, droughts, storms, earthquakes, volcanoes etc. have all happened in the past.
Catastrophic anthropogenic global warming has never happened. There is currently no evidence that it ever will. There are many predictions of CAGW, but then there have been many predictions of the end of the world, too.
Extraordinary claims require extraordinary evidence.

Science or Fiction
Reply to  Steven Mosher
June 10, 2015 12:48 am

A fundamental principle within measurement, or estimation if you like, is to have a well-defined measurand. I think that temperature data products fail to meet this criterion.
I think that the measurand, the product of the various temperature data products, is not well defined. Even though it is not well defined, it is obvious that it keeps changing. Also, water temperature and air temperature seem to be combined without taking into account differences in mass and heat capacity.
When measurands are not well defined how are you supposed to compare the output from various temperature data products or test the output of climate models?

richardscourtney
Reply to  Science or Fiction
June 10, 2015 1:25 am

Science or Fiction
You say

When measurands are not well defined how are you supposed to compare the output from various temperature data products or test the output of climate models?

Yes. Indeed, it is worse than you say for the following reasons.
1.
There is no agreed definition of global average surface temperature anomaly (GASTA).
2.
Each team that provides a version of GASTA uses a unique definition of GASTA.
3.
Each team that provides a version of GASTA changes its definition of GASTA almost every month with resulting alteration of past data.
4.
There is no possibility of a calibration standard for GASTA whatever definition of GASTA is adopted or could be adopted.
In case you have not seen it, I again link to this and draw especial attention to its Appendix B.
Richard

Science or Fiction
June 10, 2015 11:47 am

Thank you for the link. This makes me think that it is appropriate to draw attention to a quote by Karl R. Popper in his book “The logic of scientific discovery”.
“It is still impossible, for various reasons, that any theoretical system should ever be conclusively falsified. For it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such procedure is possible.”
We could add that the global warming theory is also not well defined. Exactly what is supposed to be warming? By how much?
Exactly the behavior Karl Popper warns about.

beng135
Reply to  Steven Mosher
June 10, 2015 7:14 am

No, Mosh, they don’t measure “different” things. The surface measurements are a tiny subset of the satellites, the sats measuring the bulk of the atmosphere and the ground measuring, well, just above the ground. So the ground measurements are a tiny, 2-dimensional subset of the sats, volume-wise.
Yes, you know that, but the continual pushing of the “they are different” meme is dishonest.

June 10, 2015 10:00 am

The surface stations measure air temperature at 2 meters. They measure TMIN and TMAX once a day.
Satellites measure BRIGHTNESS at the sensor. This measure is not a tmax or tmin measure. A single temperature is INFERRED from the radiance.
“AMSUs are always situated on polar-orbiting satellites in sun-synchronous orbits. This results in their crossing the equator at the same two local solar times every orbit. For example EOS Aqua crosses the equator in daylight heading north (ascending) at 1:30 pm solar time and in darkness heading south (descending) at 1:30 am solar time.
The AMSU instruments scan continuously in a “whisk broom” mode. During about 6 seconds of each 8-second observation cycle, AMSU-A makes 30 observations at 3.3° steps from −48° to +48°. It then makes observations of a warm calibration target and of cold space before it returns to its original position for the start of the next scan. In these 8 seconds the subsatellite point moves about 45 km, so the next scan will be 45 km further along the track. AMSU-B meanwhile makes 3 scans of 90 observations each, with a spacing of 1.1°.
During any given 24-hour period there are approximately 16 orbits. Almost the entire globe is observed in either daylight or nighttime mode, many in both. Polar regions are observed nearly every 100 minutes.”
The brightness is then transformed into an ESTIMATE of temperature kilometers above the surface.
This estimate depends on no less than…
A) idealized atmospheric profiles
B) a radiative transfer model for microwave.
In short, if you have Brightness at the sensor you have to run a physics model to estimate the temperature of the atmosphere that could have produced that brightness at the sensor.
An analogous problem might be this: your vehicle gets hit by a bullet travelling 1000 fps. You know the enemy has a gun with a muzzle velocity of 3000 fps, and you can then figure out how far away he was when he fired. In this case distance is inferred from terminal velocity and the laws of motion. In the case of satellites, temperature is inferred from brightness and the laws of radiative transfer.
It’s those laws, the laws of radiative transfer, that tell us doubling CO2 will add 3.71 W/m² to our system (all else being equal).
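For readers who want to check that figure, it falls out of the commonly quoted simplified forcing expression ΔF = 5.35 ln(C/C₀) W/m² (Myhre et al., 1998); a two-line sketch:

```python
import math

# Simplified CO2 forcing expression (Myhre et al., 1998): delta_F = 5.35 * ln(C / C0)
delta_F = 5.35 * math.log(2.0)                          # forcing for a doubling of CO2
print(f"Forcing for doubled CO2: {delta_F:.2f} W/m^2")  # ~3.71 W/m^2
```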
FYI
“The temperature of the atmosphere at various altitudes as well as sea and land surface temperatures can be inferred from satellite measurements. These measurements can be used to locate weather fronts, monitor the El Niño-Southern Oscillation, determine the strength of tropical cyclones, study urban heat islands and monitor the global climate. Wildfires, volcanos, and industrial hot spots can also be found via thermal imaging from weather satellites.
Weather satellites do not measure temperature directly but measure radiances in various wavelength bands. Since 1978 Microwave sounding units (MSUs) on National Oceanic and Atmospheric Administration polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen, which is proportional to the temperature of broad vertical layers of the atmosphere. ”
“Satellites do not measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature.[1][2] The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have produced differing temperature datasets. Among these are the UAH dataset prepared at the University of Alabama in Huntsville and the RSS dataset prepared by Remote Sensing Systems. The satellite series is not fully homogeneous – it is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for orbital drift and decay. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult”
Uddstrom, Michael J. (1988). “Retrieval of Atmospheric Profiles from Satellite Radiance Data by Typical Shape Function Maximum a Posteriori Simultaneous Retrieval Estimators”.

The Ghost Of Big Jim Cooley
June 10, 2015 11:01 am

As I have said many times in the past week, and on different threads, we really shouldn’t be looking at satellite data. But you have people here (no need for me to name them) who think it is the only dataset to use! I have to smile though; if satellite data showed warming in excess of that of the surface, warmists would be using it! I’m absolutely positive about that – they would be hugging it and saying, ‘See, we told yer’. It’s a nonsense way of ‘recording’ temperature, no matter what it shows. We don’t have a reliable and unadulterated dataset. They’ve all been abused and altered beyond comprehension now. Science is the loser. As I said before, we need to start again, with 100-metre high towers, topped with Stevenson Screens, unadjusted data beamed live to the internet.

Mary Brown
Reply to  The Ghost Of Big Jim Cooley
June 10, 2015 12:47 pm

“As I said before, we need to start again, with 100-metre high towers, topped with Stevenson Screens, unadjusted data beamed live to the internet.”
That won’t solve anything. Then there will be the issue of blending the old data with the new, which will lead to a new round of cooling the past.

June 10, 2015 11:17 am

Mosh & Ghost,
I think you’re both missing the point. Whether any particular data point is accurate is not what matters; what matters is the trend.
Satellites cover (almost) the whole globe. Thus they can show temperature trends more accurately than land-based thermometers (or ARGO for that matter).
The central metric in the man-made global warming debate is the temperature trend. Satellite measurements do not show any real trend. If global T is up or down by a tenth of a degree or two, that is to be expected; global temperatures are never absolutely flat, and naturally they fluctuate.
The elephant in the room, as they say, is that despite a large rise in CO2, global temperatures have not responded as predicted. That means the CO2=AGW conjecture has something seriously wrong with it. What other conclusion could you arrive at?
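To see what “the trend” and the “statistically significant warming” figures quoted throughout this post amount to in practice, here is a minimal sketch of a trend fit on monthly anomalies. The series below is synthetic, and the interval shown assumes white-noise residuals; the real calculations cited in the post (Nick Stokes’s, Dr. McKitrick’s) also correct for autocorrelation, which widens the uncertainty.

```python
import numpy as np

# Minimal sketch: OLS trend on monthly temperature anomalies with a naive
# 2-sigma confidence interval. The anomaly series here is synthetic.

rng = np.random.default_rng(0)
months = np.arange(216)                            # 18 years of monthly data
anoms = 0.0005 * months + rng.normal(0, 0.1, 216)  # tiny trend plus noise (deg C)

years = months / 12.0
slope, intercept = np.polyfit(years, anoms, 1)     # OLS trend in deg C per year

# Naive standard error of the slope (white-noise assumption).
resid = anoms - (slope * years + intercept)
se = np.sqrt(np.sum(resid**2) / (len(years) - 2) / np.sum((years - years.mean())**2))

print(f"Trend: {slope*10:.3f} +/- {2*se*10:.3f} deg C/decade (2-sigma)")
# If the interval includes zero, the warming is not statistically significant
# at roughly the 95% level under these simplified assumptions.
```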

The Ghost Of Big Jim Cooley
June 10, 2015 12:32 pm

I have no problem whatsoever in saying that, quite evidently, CO2’s effect on climate is nothing like we were warned (if at all, even!). I find predictions of doom absurd. As you say (and I have said myself, many times), despite ‘massive forcing’ the global temperature has hardly risen at all, if it has. It has made a mockery of CO2-induced warming. But I come back to satellites being a curious metric. If I said to you that the widths of leeks growing in Norfolk had shown no trend at all in the past 18 years, they would still be a rotten tool to use in climatology. The trend doesn’t matter; satellites are still a rotten tool to use, given the peculiar way ‘temperature’ is arrived at. Trend just doesn’t come into it. dbstealey, you should really desist from welcoming RSS and UAH as a scientific way to indicate temperature and temperature trend. But it’s up to you.