HadCRUT4 is From Venus, GISS is From Mars (Now Includes November Data)

WoodForTrees.org – Paul Clark – Click the pic to view at source

Image Credit: WoodForTrees.org

Guest Post By Werner Brozek, Edited By Just The Facts

With apologies to John Gray, it can be seen from the above graph that the two data sets often either move in opposite directions or differ in other ways. Looking at the first 11 months, there are three months where the jumps are similar, namely May, October and November. However, even for November, where both went in the same direction, there is a small discrepancy with respect to the ranking of their respective November anomalies. As we know, November on GISS at 0.77 was the warmest November ever. However the HadCRUT4 November at 0.596 was the third warmest November ever.

Here are the November 2013 rankings on the following other data sets with respect to all other Novembers: HadCRUT3 (1), Hadsst3 (2), UAH (8), and RSS (13). An earlier post on WUWT asked “Claim: November 2013 is the ‘warmest ever’ – but will the real November 2013 temperature please stand up?”

In my opinion, the best answer would be how WTI ranks this November. (WTI is a combination of Hadcrut3, UAH version 5.5, RSS, and GISS. It could be argued that WTI would be more meaningful with UAH version 5.6 as well as Hadcrut4 data instead of Hadcrut3. However until that change is made, I have to go with what I have.) WTI gives the November 2013 average as 0.212. This would rank it 7th. It is below the following years, from highest to lowest: 2009 (0.296), 2005, 2010, 2001, 2012, and 2004 (0.219).

One thing to keep in mind is that we are talking about anomalies in November. In terms of actual temperatures, the global average temperature varies from 12.0 °C in January to 15.8 °C in July. An excellent explanation of this is given at this site. So regardless of how high the anomaly was in November, the earth was not sizzling hot. The coldest July since 1850 was still way warmer than this November, as far as actual temperatures are concerned.

As well, as davidmhoffer has often noted, it takes far less energy to raise the temperature of cold, dry air by a given amount than to raise hot, moist air by the same amount. “For easy figuring, it takes about 1.8 w/m2 to raise the temperature in the Antarctic from 200K to 201K, or 1 degree. But that same 1.8 w/m2 in the tropics at 303K only raises the temperature by less than 0.3 degrees!”
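Those figures are easy to check. Below is a minimal back-of-envelope sketch (my own illustration, not davidmhoffer’s calculation) using the Stefan-Boltzmann relation F = σT⁴ for a blackbody; real moist air behaves differently, which is part of his point, but the fourth-power dependence alone reproduces the quoted numbers.

```python
# Quick check of the quoted w/m2 figures using the Stefan-Boltzmann law
# F = sigma * T^4 (a simple blackbody approximation only).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def flux(temp_k):
    """Blackbody flux in W/m^2 radiated at temperature temp_k (kelvin)."""
    return SIGMA * temp_k ** 4

# Extra flux needed to go from 200 K to 201 K (Antarctic-like conditions):
extra_flux = flux(201.0) - flux(200.0)
print(round(extra_flux, 2))            # about 1.83 W/m^2

# Temperature rise that the same extra flux produces at a tropical 303 K,
# using the linearised relation dT = dF / (4 * sigma * T^3):
rise_tropics = extra_flux / (4.0 * SIGMA * 303.0 ** 3)
print(round(rise_tropics, 2))          # about 0.29 K
```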

If I am not mistaken, the anomalies are only accurate to about 0.1 degrees. If that is the case, then some months are really pushing the limits. For example, for the month of March, the difference between Hadcrut4 and GISS is 0.195. However for the month of July, there is no difference. In September, the difference is 0.198. While these differences are technically just within the combined error bars, they do not inspire confidence in their accuracy.

To put these numbers into perspective, the warmest year in HadCRUT4 is 2010 where the anomaly was 0.547. Subtracting 0.198 from this gives 0.349. An anomaly of 0.349 would rank only 15th! Would you trust GISS or HadCRUT4 or neither if you had to make trillion dollar decisions?

In the parts below, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2013 to date compares with 2012 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.org (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is +4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
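For readers who want to reproduce this kind of result outside WFT, here is a minimal sketch of the search described above. It assumes the monthly anomalies are already in a Python list, oldest first, and uses an ordinary least-squares fit for the slope; it is an illustration of the procedure, not WFT’s own code, and the `rss_monthly_anomalies` name in the usage comment is hypothetical.

```python
import numpy as np

def longest_flat_period(anomalies):
    """Return the number of most recent months over which the least-squares
    slope is zero or slightly negative, i.e. the furthest month in the past
    from which the trend to the present is not positive.
    `anomalies` is a list of monthly anomalies, oldest first.
    """
    y = np.asarray(anomalies, dtype=float)
    best = 0
    # Try every possible start month, from the most recent backwards.
    for start in range(len(y) - 2, -1, -1):
        segment = y[start:]
        x = np.arange(len(segment))
        slope = np.polyfit(x, segment, 1)[0]   # slope of the degree-1 fit
        if slope <= 0:
            best = len(segment)                # earliest qualifying start so far
    return best

# Example usage, reporting the result in years and months:
# months_flat = longest_flat_period(rss_monthly_anomalies)
# print(months_flat // 12, "years,", months_flat % 12, "months")
```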

On all data sets below, the different times for a slope that is at least very slightly negative range from 8 years and 11 months to 17 years and 3 months.

1. For GISS, the slope is flat since September 2001 or 12 years, 3 months. (goes to November)

2. For Hadcrut3, the slope is flat since June 1997 or 16 years, 6 months. (goes to November)

3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since December 2000 or exactly 13 years. (goes to November)

4. For Hadcrut4, the slope is flat since December 2000 or exactly 13 years. (goes to November)

5. For Hadsst3, the slope is flat since December 2000 or exactly 13 years. (goes to November)

6. For UAH, the slope is flat since January 2005 or 8 years, 11 months. (goes to November using version 5.5)

7. For RSS, the slope is flat since September 1996 or 17 years, 3 months (goes to November). RSS has passed Ben Santer’s 17 years.

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the sloped wiggly line shows how CO2 has increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two quantities are plotted as I have done, the left axis only shows the temperature anomaly.

The actual numbers are meaningless since all slopes are essentially zero and the position of each line is merely a reflection of the base period from which anomalies are taken for each set. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 17 years, the temperatures have been flat for varying periods on various data sets.

The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted.

WoodForTrees.org – Paul Clark – Click the pic to view at source

Section 2

For this analysis, data were retrieved from Nick Stokes’ moyhu.blogspot.com. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 16 and 21 years.

The details for several sets are below.

For UAH: Since January 1996: CI from -0.024 to 2.445

For RSS: Since November 1992: CI from -0.008 to 1.959

For Hadcrut4: Since August 1996: CI from -0.005 to 1.345

For Hadsst3: Since January 1994: CI from -0.029 to 1.697

For GISS: Since June 1997: CI from -0.007 to 1.298
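As a rough illustration of where numbers like these come from, the sketch below fits an ordinary least-squares trend and forms a naive 95% interval from the slope’s standard error. It is only a simplified stand-in: Nick’s criteria account for autocorrelation in the monthly data, which widens the interval, so this version will generally understate the uncertainty.

```python
import numpy as np
from scipy import stats

def trend_with_naive_ci(anomalies, months_per_year=12):
    """Least-squares trend of monthly anomalies, in degrees per century,
    with a naive two-sided 95% confidence interval (no correction for
    autocorrelation, unlike the figures quoted above).
    """
    y = np.asarray(anomalies, dtype=float)
    x = np.arange(len(y)) / (months_per_year * 100.0)   # time in centuries
    fit = stats.linregress(x, y)
    half_width = 1.96 * fit.stderr
    return fit.slope, fit.slope - half_width, fit.slope + half_width

# If the lower bound comes out negative, a zero trend cannot be ruled out
# at the 95% level over the chosen period.
```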

Section 3

This section shows data about 2013 and other information in the form of a table. The table shows the six data sources across the top, and this header row is repeated partway down so the column labels remain visible. The sources are UAH, RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS. Down the column are the following:

1. 12ra: This is the final ranking for 2012 on each data set.

2. 12a: Here I give the average anomaly for 2012.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

9. Jan: This is the January, 2013, anomaly for that particular data set.

10. Feb: This is the February, 2013, anomaly for that particular data set, etc.

21. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months. However if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably due to all months not having the same number of days.

22. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It may not, but think of it as an update 55 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly. A “!” indicates a tie for that rank.

Source   UAH     RSS     Had4    Had3    Sst3    GISS
1. 12ra  9th     11th    9th     10th    9th     9th
2. 12a   0.161   0.192   0.448   0.403   0.346   0.57
3. year  1998    1998    2010    1998    1998    2010
4. ano   0.419   0.55    0.547   0.548   0.416   0.67
5. mon   Apr98   Apr98   Jan07   Feb98   Jul98   Jan07
6. ano   0.66    0.857   0.829   0.756   0.526   0.93
7. y/m   8/11    17/3    13/0    16/6    13/0    12/3
8. sig   Jan96   Nov92   Aug96   —       Jan94   Jun97
Source   UAH     RSS     Had4    Had3    Sst3    GISS
9. Jan   0.504   0.439   0.450   0.392   0.292   0.63
10. Feb  0.175   0.192   0.479   0.425   0.309   0.51
11. Mar  0.183   0.203   0.405   0.387   0.287   0.60
12. Apr  0.103   0.218   0.427   0.401   0.364   0.48
13. May  0.077   0.138   0.498   0.475   0.382   0.56
14. Jun  0.269   0.291   0.457   0.425   0.314   0.60
15. Jul  0.118   0.222   0.520   0.489   0.479   0.52
16. Aug  0.122   0.166   0.528   0.490   0.483   0.61
17. Sep  0.294   0.256   0.532   0.519   0.457   0.73
18. Oct  0.227   0.207   0.478   0.443   0.391   0.60
19. Nov  0.110   0.131   0.596   0.556   0.427   0.77
Source   UAH     RSS     Had4    Had3    Sst3    GISS
21. ave  0.198   0.224   0.486   0.455   0.380   0.60
22. rnk  7th     9th!    8th     6th     6th     6th!
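Rows 21 and 22 are straightforward to reproduce. The sketch below (my own illustration) computes the year-to-date mean and the rank it would take among complete years; ties, marked “!” in the table, are not handled. The RSS monthly values are copied from rows 9–19 above.

```python
def year_to_date_average(monthly_anomalies):
    """Row 21: plain mean of the monthly anomalies reported so far."""
    return sum(monthly_anomalies) / len(monthly_anomalies)

def provisional_rank(ytd_average, annual_averages):
    """Row 22: rank (1 = warmest) the year-to-date average would hold if it
    stood for the whole year; `annual_averages` is the list of complete
    annual averages for every earlier year in the same data set.
    """
    return sum(1 for a in annual_averages if a > ytd_average) + 1

# RSS, January to November 2013, from rows 9-19 of the table:
rss_2013 = [0.439, 0.192, 0.203, 0.218, 0.138, 0.291,
            0.222, 0.166, 0.256, 0.207, 0.131]
print(round(year_to_date_average(rss_2013), 3))   # 0.224, matching row 21
```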

If you wish to verify all of the latest anomalies, go to the following links: UAH (version 5.5 was used since that is what WFT used), RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS.

To see all points since January 2013 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

Note that the satellite data sets often go in the opposite direction to the others. Can you think of any reason for this? As you can see, all lines have been offset so they all start at the same place in January.

Appendix

In this part, we are summarizing data for each set separately.

RSS

The slope is flat since September 1996 or 17 years, 3 months. (goes to November) RSS has passed Ben Santer’s 17 years.

For RSS: There is no statistically significant warming since November 1992: CI from -0.008 to 1.959.

The RSS average anomaly so far for 2013 is 0.224. This would rank as a two way tie for 9th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2012 was 0.192 and it came in 11th.

UAH

The slope is flat since January 2005 or 8 years, 11 months. (goes to November using version 5.5)

For UAH: There is no statistically significant warming since January 1996: CI from -0.024 to 2.445.

The UAH average anomaly so far for 2013 is 0.198. This would rank 7th if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2012 was 0.161 and it came in 9th.

Hadcrut4

The slope is flat since December 2000 or exactly 13 years. (goes to November)

For HadCRUT4: There is no statistically significant warming since August 1996: CI from -0.005 to 1.345.

The Hadcrut4 average anomaly so far for 2013 is 0.486. This would rank 8th if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2012 was 0.448 and it came in 9th.

Hadcrut3

The slope is flat since June 1997 or 16 years, 6 months. (goes to November)

The Hadcrut3 average anomaly so far for 2013 is 0.455. This would rank 6th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2012 was 0.403 and it came in 10th.

Hadsst3

For Hadsst3, the slope is flat since December 2000 or exactly 13 years. (goes to November).

For Hadsst3: There is no statistically significant warming since January 1994: CI from -0.029 to 1.697.

The Hadsst3 average anomaly so far for 2013 is 0.380. This would rank 6th if it stayed this way. 1998 was the warmest at 0.416. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. The anomaly in 2012 was 0.346 and it came in 9th.

GISS

The slope is flat since September 2001 or 12 years, 3 months. (goes to November)

For GISS: There is no statistically significant warming since June 1997: CI from -0.007 to 1.298.

The GISS average anomaly so far for 2013 is 0.60. This would rank as a 3 way tie for 6th place if it stayed this way. 2010 was the warmest at 0.67. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2012 was 0.57 and it came in 9th.

Conclusion

Different data sets can give very different anomalies for any given month. As well, there can be spikes in any given month from some data sets, but not in others. Surface data sets have all kinds of issues such as UHI and poor stations that the satellite data sets do not have. On the other hand, satellites measure slightly different things.

One cannot lose perspective either. While this November was very warm on some data sets, the rankings for 2013 on the six data sets I discuss vary from 6 to 9 after 11 months. Some of these rankings are ties or very close to other years, so a departure in December from the current average could easily change these rankings by a small amount. However a record warm year for 2013 is totally out of reach on all data sets.

Just for the fun of it, if you want to know what it would take to set a record, take the value in row 4 of the table (0.67 for GISS) and subtract the value in row 21 of the table (0.60 for GISS). This gives 0.07 for GISS. Multiply this by 12 and add to the number in row 21 of the table. So GISS would need a December anomaly of 0.07 x 12 + 0.60 = 1.44 to tie a record.
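The same recipe in general form: with eleven months in hand, the December anomaly needed to tie a record works out to twelve times the record annual average minus eleven times the year-to-date average, which is algebraically the same as the row-4/row-21 calculation above. A minimal sketch:

```python
def december_needed_to_tie(record_annual_avg, ytd_avg, months_so_far=11):
    """Anomaly required in the single remaining month for the year's
    12-month average to equal the record annual average.
    """
    return 12 * record_annual_avg - months_so_far * ytd_avg

print(round(december_needed_to_tie(0.67, 0.60), 2))   # 1.44 for GISS, as above
```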

John Tillman

Shouldn’t GISS be from Venus?

Jeef

November and December have been warm in NZ. When the data from the seven suspect surface stations is used to infill most of the South Pacific, I’m not surprised to see the result.

Gunga Din

So the weird weather is caused by a conjunction of Mars and Venus and not CO2.
Makes as much sense.

Great job, I love graphs. I too keep up with the UAH and Hadcrut4 data sets and this is my favorite one to show others how the last 17 yrs have been flat in spite of rising CO2.
http://cosmoscon.files.wordpress.com/2013/11/co2-vs-cru.jpg

John Peter

Can Steve Goddard perhaps explain why GISS may show higher values than HADCRUT?
http://stevengoddard.wordpress.com/2013/12/21/thirteen-years-of-nasa-data-tampering-in-six-seconds/

davidmhoffer

Note that the satellite data sets often go in the opposite direction to the others. Can you think of any reason for this?
My guess is the difference is systemic. The satellite data sets derive their data from almost the exact same satellites. My recollection is there is only one that is different. Since their data is pretty much the same, most of the difference between them (not all) can be attributed to processing approaches. Hence no surprise that they move in tandem with each other.
The land/ocean based sets on the other hand are from weather station and ship/buoy based sst measurements. Again, my understanding being that they use almost the exact same data, it is no surprise that their results move in tandem with one another, and most of the difference (not all) is again processing approach.
Not that anyone can say for certain that one is right and the other wrong, but the land/ocean based temperature sets are subject to everything from siting issues to station moves to station drop out to UHI changes and so on. Plus, at the end of the day, the best that can be said about their coverage is that it sucks. The satellites have pretty good coverage, and while they are affected by drift and other factors, these are increasingly known and understood and corrected for. Their coverage isn’t perfect, but by comparison to land/ocean data, it is excellent in terms of both coverage and consistent data gathering.

Jeff Alberts

Conclusion: A “global temperature” is physically meaningless. Anomalies of same are equally meaningless.

RichardLH

The similarity/differences are even more apparent if one takes just a slightly longer view.
http://www.woodfortrees.org/plot/rss/from:2010/plot/gistemp/from:2010/plot/uah/from:2010/plot/hadcrut4gl/from:2010
Soon one set or the other is going to turn a corner and make for the others. The question is: are they going up or down?

John Peter says:
December 22, 2013 at 2:37 pm
Can Steve Goddard perhaps explain why GISS may show higher values than HADCRUT?
The main reason GISS is higher here is due to a different baseline. As for “adjustments” I am not convinced HadCRUT is less guilty. For one thing, they switched from HadCRUT3 to HadCRUT4. And guess which has the longer time for a flat slope. And not too long after the new and improved HadCRUT4 came out, it needed more adjusting. See:
http://wattsupwiththat.com/2013/05/12/met-office-hadley-centre-and-climatic-research-unit-hadcrut4-and-crutem4-temperature-data-sets-adjustedcorrectedupdated-can-you-guess-the-impact/
A comment I made there was:
Werner Brozek says:
May 13, 2013 at 3:56 pm
dwr54 says:
May 13, 2013 at 8:27 am
I was surprised that people were making allegations of “corruption” against the HadCRUT4 producers
From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree with the new version of Hadcrut4 being higher than the old version in all cases. So starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012. What are the chances that the average anomaly goes up for 16 straight years by pure chance alone if a number of new sites are discovered? Assuming a 50% chance that the anomaly could go either way, the chances of 16 straight years of rises is 1 in 2^16 or 1 in 65,536. Of course this does not prove fraud, but considering that “HadCRUT4 was introduced in March 2012”, it just begs the question why it needed a major overhaul only a year later.
I believe people should not wonder why suspicions are aroused as to whether or not everything is kosher.

Kip Hansen

Tillman ==> Yes….
The silliness of watching a global temperature figure (silly enough on its own) on a MONTHLY basis is beyond the great beyond. Not only does this not possibly qualify as climate, it barely qualifies as anything other than a game played with numbers on big computers.

Scott Scarborough

It should be mentioned that the satellite and ground based temperature data sets SHOULD BE different from one another. The satellites measure the temperature 14,000 feet above the ground whereas the ground measurements record 6 or 7 feet above the ground. If the atmosphere is warming, the satellites should measure a greater anomaly than the ground measurements because of the tropospheric hot spot, but they measure a lesser anomaly, which proves that the tropospheric hot spot does not exist (climate science 101 is invalidated or someone is lying about Global Warming). This point should be emphasized every time the data sets are mentioned together.

Kip Hansen says:
December 22, 2013 at 3:03 pm
The silliness of watching a global temperature figure (silly enough on its own) on a MONTHLY basis is beyond the great beyond.
At one time, the Met Office predicted: “Half of years from 2009-2014 predicted to be hotter than 1998”. If you are curious as to whether 2013 will be one of those years, then you may want to look at the monthly numbers and see what the chances are of that happening. (At this point, there is no chance that HadCRUT3 will beat 1998.) If heads of state were more up to date with what is happening, they might not make stupid blunders.

Rob

The surface “trend” is at variance with satellite derived data from both UAH and RSS. Physically, that is quite impossible. Trouble!

Scott Scarborough says:
December 22, 2013 at 3:29 pm
Just to be clear on this point, the supposed hot spot is 10 to 12 km up and not 14,000 feet.
I agree that on a monthly basis there are reasons for differences between 14,000 feet and 6 feet; however, when taking slopes over 16 years, they should show very similar things. For example, over a 16 year period, you would not expect the 14,000 foot area to show steadily rising temperatures while temperatures are dropping at the 6 foot level over 16 years. The laws of thermodynamics would not allow this.

Steven Mosher

Global temperature is meaningless? Ah yes there was no mwp or lia.

RoHa

For those of us who still have trouble counting on our fingers, is there a quick and easy definition of “not statistically significant”. For example, does it mean “within the error bars”, or “you won’t see your ice-cream melt any faster”?

Rob says:
December 22, 2013 at 4:06 pm
The surface “trend” is at variance with satellite derived data from both UAH and RSS. Physically, that is quite impossible. Trouble!

What is odder still is why RSS is so different from UAH, GISS and HadCRUT4 from 1998. See:
http://www.woodfortrees.org/plot/gistemp/from:1998/trend/plot/hadcrut4gl/from:1998/trend/offset:0.09/plot/rss/from:1998/trend/offset:0.23/plot/uah/from:1998/trend/offset:0.39

Jim G

Please remember that statistical significance is only measuring the probability of the results being what they are due to random error based upon sample size. It says nothing about error due to poor measurements, poor systems, poor transmission, or, in the case of surface temperatures, UHI issues or substitution of data where none exist, or changes in data to purposely affect the results. Calling these ranges “error bars” is therefore misleading as they only reflect one type of error, and with the measurements of “anomalies” being such small increments it is even more misleading, with a tendency towards making mountains out of molehills.

Jeff Alberts

Steven Mosher says:
December 22, 2013 at 4:27 pm
Global temperature is meaningless? Ah yes there was no mwp or lia.

And no modern “global” warming.

RoHa says:
December 22, 2013 at 4:28 pm
For those of us who still have trouble counting on our fingers, is there a quick and easy definition of “not statistically significant”.
Climate science has decided that in order to be statistically significant, you need to be at least 95% certain something is going to happen with respect to climate. So if there are 19 different groups of people measuring global temperature, and if one says there is cooling but 18 say there is warming, then the warming is NOT considered to be statistically significant. But if there are 21 different groups of people measuring global temperature, and if one says there is cooling but 20 say there is warming, then the warming IS considered to be statistically significant.
On the other hand, note this quote:
“The way the SPM deals with uncertainties (e.g. claiming something is 95% certain) is shocking and deeply unscientific. For a scientist, this simple fact is sufficient to throw discredit on the whole summary. The SPM gives the wrong idea that one can quantify precisely our confidence in the [climate] model predictions, which is far from being the case.” This is from:
http://www.climatechangedispatch.com/celebrated-physicist-calls-ipcc-summary-deeply-unscientific.html

Jimbo

After billions upon billions of US Dollars we still can’t measure the Earth’s temperature. Yet we can send a man to the moon. It must be more difficult than we previously thought.

Scott Scarborough

UAH temperature plots, the ones that they produce every month, used to come from satellite measurements that were labeled “14,000 feet.” If they are no longer doing that it would be really odd because they are appending the monthly data to the same plot that they have always shown. It is correct that that is a little low for the tropospheric hot spot – it would be at the bottom of it (the hot spot visually runs from 4km to 16km above the surface). But that level should still warm faster than the surface (whether the surface is defined as 5, 6, or 10 feet above the ground).

Scott Scarborough

UAH measures several different levels but the plot that they publish and update every month and everyone talks about (even in this article) is from 14,000 feet.

Scott Scarborough says:
December 22, 2013 at 5:00 pm
But that level should still warm faster than the surface
Take a look at the following. Depending on whether you have an El Nino or a La Nina, it both warms faster and cools faster. It also shows greater extremes. See:
http://www.woodfortrees.org/plot/rss/from:1980/offset:0.2/plot/hadcrut4gl/from:1980

Wayne Delbeke

As we know, November on GISS at 0.77 was the warmest November ever. However the HadCRUT4 November at 0.596 was the third warmest November ever.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ever? EVER!???? Like in the last 4 billion years. Equates to “unprecedented”. Whenever I read “unprecedented” I know I am reading promotional material not science. “Ever” falls into the same category. Whenever I read worst “ever” or highest “ever” or greatest “ever” I know I am not reading science.

Scott Scarborough

Actually, it looks pretty close to HadCRUT4. In the big picture all of the temperature sets are probably in the ball park. But the divergence of GISS from the satellite sets is of concern. If there were any divergence, it should have been the other way around… the satellites warming faster than the surface. I believe the intent of the satellites was to get a good global warming signal by measuring where the signal should be highest.

Wayne Delbeke says:
December 22, 2013 at 5:49 pm
As we know, November on GISS at 0.77 was the warmest November ever. However the HadCRUT4 November at 0.596 was the third warmest November ever.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ever? EVER!????

Ooops! I should have said:
“As we know, November on GISS at 0.77 was the warmest November ever on GISS since their records started in 1880 and after all adjustments were made. However the HadCRUT4 November at 0.596 was the third warmest November ever on HadCRUT4 since their records started in 1850 and after all adjustments were made. Of course the global November anomalies were warmer during the MWP.”
Is that better?

scarletmacaw

Steven Mosher says:
December 22, 2013 at 4:27 pm
Global temperature is meaningless? Ah yes there was no mwp or lia.

Enough of the snark.
It was warmer in the MWP, and cooler in the LIA. We know this from historical records, and the extent of glaciers.
Averaging temperature anomalies between the poles and equatorial regions, which represent differing amounts of thermal energy because of the T**4 relationship, IS meaningless.

Scott Scarborough says:
December 22, 2013 at 6:04 pm
I believe the intent of the satellites was to get a good global warming signal by measuring where it should be the highest signal.
I would have thought the satellites would be there to get the most accurate signal, whether high or low, since satellites can “see” areas where there are no thermometers.

Lady Life Grows

The strongest and most reliable data in all this howling confusion is the CO2 data from Dr. Keeling at Mauna Loa. We know for certain the magnitude and nature of the annual sine wave, and we know for certain that the trend has been rising continuously since Dr. Keeling first measured it. Well, as certain as anything can be in science: there is always the possibility of human error reading the instruments, or of instrumental drift. Since Dr. Keeling has been very careful about such things, his graph has a very high degree of confidence. Use it as an anchor in the scientific debate.
Dr. Keeling felt that something was wrong. I think he was right. The fossil fuels theory makes immediate sense–we all know we have been burning a lot of that. But those who had access to the numbers calculated that all the fossil fuels only accounted for a fifth of the change. Many, many things rose during the 20th century, and it appears that something else is the main source of the CO2 change.
I believe the answer is chemical-based agriculture, and the increased amount of land under cultivation. These two things damage the soils–causing the death of soil organisms and the release of the vast amount of carbon that had been sequestered in the soil all around the world. The consequence would be reduced fertility–but that consequence has been considerably mitigated by the CO2 availability for plant photosynthesis.
The reason we are still arguing over climate sensitivity and the rest of it is partly government greed for a new tax (which includes university scientist greed for some of that money)–and partly the fact that nearly all of us are utterly disconnected from where our food comes from. One farmer feeds 100 to 200 city dwellers. (They don’t even grow their own food any more). So we don’t see the damage to the soil or connect it to the water shortages from damaged land and idiotic irrigation. So we are prey to goofy garbage like AGW.

davidmhoffer

Steven Mosher says:
December 22, 2013 at 4:27 pm
Global temperature is meaningless? Ah yes there was no mwp or lia.
>>>>>>>>>>>>>>>>>>>>
For the purposes of understanding energy balance of the earth at any given moment, global temperature is not only meaningless, it can actually be entirely misleading. That says nothing about the value of understanding both the MWP and the LIA as both had dramatic effects on human civilization. The real question is are you capable of having an informative discussion regarding either or do you simply intend to further discredit yourself by constantly sniping from the sidelines while injecting nothing of value into the discussion.

Werner Brozek

davidmhoffer says:
December 22, 2013 at 2:45 pm
the land/ocean based temperature sets are subject to everything from siting issues to station moves to station drop out to UHI changes and so on
davidmhoffer says:
December 22, 2013 at 8:37 pm
For the purposes of understanding energy balance of the earth at any given moment, global temperature is not only meaningless, it can actually be entirely misleading.
Since you are well aware of how easy it is to raise the temperature of Siberia at this time of year, would you agree with another blogger (Dr. Norman Page?) that only sea surface temperatures should be used to assess if Earth is warming or cooling? (In case you are wondering, Hadsst3 is flat for 13 years.)

Jim

As a suggested third line to the title of this piece,
“UAH is from earth”.

davidmhoffer

Werner Brozek;
Since you are well aware of how easy it is to raise the temperature of Siberia at this time of year, would you agree with another blogger (Dr. Norman Page?) that only sea surface temperatures should be used to assess if Earth is warming or cooling?
>>>>>>>>>>>>>>>>>
The answer is that I don’t know what the right metric is. In fact, I don’t even know that there IS a metric that is right. Robert G Brown has written several lengthy discussions of same, and I’d direct you to his thoughts on the matter since his expertise and eloquence both outweigh my own. That said, I’d make two observations:
1. The heat capacity of the oceans is on the order of 1200 TIMES that of the atmosphere, and features a fraction of the variability of both atmosphere and land surface. In brief, the oceans are like a large adult dragging a small child named atmosphere through a busy shopping mall. The thrashing and wailing of the child might be what gets our attention, but there is no doubt that this is immaterial to the direction and speed of the child. So yes, I think the oceans are a far better indicator on their own as to general direction that the planet as a whole is taking. But;
2. Temperature as a proxy for energy flux doesn’t work, period. If we want to know if the energy flux is positive or negative, then we have to measure… the energy flux. Take all that temperature data we have, land, sst, satellite, all of it, and convert it to w/m2 first. THEN average it and trend it and calculate anomalies from it.
dmh
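To put a rough number on point 2 (a sketch of my own, reusing the blackbody simplification quoted earlier; the two temperatures are only illustrative): averaging temperatures and averaging the equivalent fluxes do not give the same answer, because flux scales with the fourth power of temperature.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def to_flux(temp_k):
    """Blackbody flux equivalent of a temperature in kelvin."""
    return SIGMA * temp_k ** 4

def to_temp(flux):
    """Temperature whose blackbody flux equals the given value."""
    return (flux / SIGMA) ** 0.25

cold, hot = 200.0, 303.0                       # polar winter vs tropics, in K
print((cold + hot) / 2)                        # mean of the temperatures: 251.5 K
print(round(to_temp((to_flux(cold) + to_flux(hot)) / 2), 1))
# temperature equivalent of the mean flux: about 266 K -- a different answer
```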

Nik Marshall-Blank

According to GISS the RECORD 2013 November anomaly is +77. http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
But according to the web archive http://web.archive.org/web/20101129173733/http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt
a snapshot taken in 2010, November 2005 had an anomaly of +78. You can always claim the latest temperatures are always the highest if you always adjust down previous records.

Peter Ward

AGW-enthusiast climate scientists are obviously concerned about the temperature of the earth. Since we have multiple means of measuring that temperature, which one do they consider to be most accurate, and why?
It seems clear from the graphs above that the anomaly differs by well over 0.5°C between the various temperature records. That’s a massive percentage of the warming they tell us has happened since 1850 (or 1980). I guess that when temperature records differ so widely, it’s hardly surprising that the model outputs are so scattered. GIGO comes to mind.
But it doesn’t feel like science just to average everything together and say “there we were right”. We need explanations for why each record is what it is and why it differs from the others. Then we can determine whether there is any truth in any of the records. Perhaps that’s why that’s not happening.

Werner Brozek

Nik Marshall-Blank says:
December 22, 2013 at 11:04 pm
But according to the web archive
Now I am really confused! Below are two current data sets for GISS, yet both are different. One shows November as 0.77 and in first place, the other as 0.87 and tied for third! But your archive matches neither. What is going on?
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts.txt

Peter Ward says:
December 23, 2013 at 12:21 am
It seems clear from the graphs above that the anomaly differs by well over 0.5°C
If everyone had the same base period from which they compared things, such as with respect to 1981 to 2010, things would not look quite as bad, although there would still be differences.

Truthseeker

As a number of commenters have said, “global” temperatures are meaningless. For those that have a problem with this concept, let me refer you to the following excellent piece of work on the subject …
http://www.l4patterns.com/uploads/local-vs-global.pdf
Happy holidays everyone.

MikeB

wbrozek says:
December 22, 2013 at 4:44 pm
With respect, I think the requirement for a 95% confidence level given in your example is much too high.
In statistics, a 95% confidence level means that there is only a 1 in 20 probability of the result being obtained by chance alone
If results were decided by a coin flip and 19 out of 20 researchers said it was getting warmer then this would represent a much higher confidence level than a mere 95%.
There are 2^20 possible sets of results. The chance of all 20 researchers agreeing on warming (by chance) is 1 in 1048576. The chance of 19 out of 20 saying warming while one says cooling can happen in one of 20 ways (one researcher being the odd one out each time). This gives a probability of 20/1048576, which relates to a confidence level greater than 99.998%
In fact, a 95% confidence level would be achieved by only seven people coin-flipping warming and one choosing cooling. That is, if these things were to be decided on chance alone which they are not – but it may seem like it.
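MikeB’s figures can be verified with a short binomial calculation (a sketch of the same coin-flip model; it uses the “at least this many” convention for the p-value, which differs marginally from counting exactly 19 of 20 but leads to the same conclusions):

```python
from scipy.stats import binom

def coin_flip_confidence(warming_count, total):
    """P-value for seeing at least `warming_count` 'warming' results out of
    `total` fair coin flips, and the corresponding confidence level.
    """
    p_value = binom.sf(warming_count - 1, total, 0.5)   # P(X >= warming_count)
    return p_value, 1.0 - p_value

print(coin_flip_confidence(19, 20))   # p ~ 2e-5, confidence well above 95%
print(coin_flip_confidence(7, 8))     # p ~ 0.035, confidence ~ 96.5% (> 95%)
```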

Gail Combs

Steven Mosher says:
December 22, 2013 at 4:27 pm
Global temperature is meaningless? Ah yes there was no mwp or lia.
>>>>>>>>>>>>>>>>>>>>
Steve is defending the global temperature as measured by GISS, HadCRUT4 or BEST, a project he was involved in.
The problem is that temperature measured by a thermometer does not take into account the energy related to water, so it is actually a terrible measure of the earth’s energy. This is what David M. Hoffer is saying when he says:

“For easy figuring, it takes about 1.8 w/m2 to raise the temperature in the Antarctic from 200K to 201K, or 1 degree. But that same 1.8 w/m2 in the tropics at 303K only raises the temperature by less than 0.3 degrees!” – David M. Hoffer

(Also the amount of energy needed to change an equal volume of matter one degree at 200K and one degree at 300K is not going to be exactly the same. However it is the phase changes of water that are the major factor. link)
Plants such as those used in the Köppen climate classification are a much better measure of the actual climate (not weather) and have a more direct bearing on the climate humans are affected by (FOOD PRODUCTION).
Movements of the Köppen climate boundary in the Midwest USA (link) show the fudging we see in the manipulation by Hansen of GISS in these graphs. In that area at least the 1930’s were as warm or warmer than today despite what the Hansen manipulated USA graphs show.
Please note that according to NOAA the Medieval Warm Period was from the 9th to 13th Centuries. However [w]hat can be considered the first modern thermometer, the mercury thermometer with a standardized scale, was invented by Daniel Gabriel Fahrenheit in 1714… In 1593, Galileo Galilei invented a rudimentary water thermoscope, which for the first time, allowed temperature variations to be measured.
So thermometers were not around to measure the Medieval Warm Period and you are back to the Medieval Warm Period found in 120 proxies Jo Nova discusses “Two major proxy studies, larger than ever, were released in April and June 2012.”
Steve is mixing donuts and turnips in that sentence in an attempt to mislead the naive into believing skeptics are wrong to point out the problems with the idea of a global temperature anomaly.
John Kehr, a Chemical Engineer, goes into the amount of data that is hidden by using Anomaly instead of the actual temperature in Global Temperature and Anomaly Also see his: The Earth’s Energy Balance: Simple Overview

MikeB

Some measure of the accuracy of global temperature estimates can be made by comparing the different data sets which purport to measure the same thing, i.e. mean global temperature anomaly.
After normalising each data set to account for the different baselines we get differences of as much as 0.4 degrees Celsius between individual monthly results for GISS and HADCRUT3 (0.38 deg for HADCRUT4). Since these datasets are all based on surface station measurements covering both land and sea they should therefore be tracking the same thing but, as the article says, they often go in different directions.
An inaccuracy of 0.4 deg.C in individual measurements is quite large given that temperatures have only risen by about 0.8 deg.C since 1880.
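A sketch of the kind of comparison MikeB describes (my own illustration with hypothetical variable names, using the 1981–2010 base period mentioned elsewhere in the thread): shift each series so its mean over a common base period is zero, then look at the largest monthly difference between them.

```python
import numpy as np

def rebaseline(anomalies, years, base_start=1981, base_end=2010):
    """Shift a monthly anomaly series so that its mean over the chosen
    base period is zero. `years` gives the year of each monthly value,
    in the same order as `anomalies`.
    """
    a = np.asarray(anomalies, dtype=float)
    in_base = np.array([(base_start <= y <= base_end) for y in years])
    return a - a[in_base].mean()

def largest_monthly_difference(series_a, series_b, years):
    """Largest absolute monthly difference once both series are on the
    same base period."""
    return np.max(np.abs(rebaseline(series_a, years) - rebaseline(series_b, years)))

# Usage (hypothetical arrays of equal length covering the same months):
# print(largest_monthly_difference(giss_monthly, hadcrut3_monthly, month_years))
```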

KNR

GISS still has the dead hand of Dr Doom on it. Did anyone think that he would not make sure that whoever took over would be of a like mind? And after years of faithful support for ‘the cause’ there is no way they can back down now without losing lots of face and, worse, lots of cash.

Mindert Eiting

Taking the temperature of an object with a thermometer may be called a measurement. Several authors have argued that the global mean temperature is not a measurement on the globe. Why not? Taking the global mean, not as algebraic manipulation but as measurement, violates a basic assumption of measurement, the absence of interaction. If the temperature of an object depended on the kind of thermometer used, we would have an interaction.
If on earth all thermometer values were positively correlated over time, we could take an arbitrary selection of points for establishing a global development. If on some spots temperatures increase and on other spots they decrease, it would depend on the selection of spots whether we would get an increasing or decreasing temperature for the globe. The results are population-dependent. Because we have on earth spots with increasing and decreasing temperatures in the same time interval, a global mean is not a measurement in itself. Suggestion: in calculations of the global average, take medians instead of means. This already suffices for getting drastically different trends.

Ed Reid

It would be very interesting to actually compare the anomalies in the actual datasets with the anomalies in the adjusted temperature records published as HadCRUT4 and GISS. The difference might well be the ~50% of warming purportedly caused by human activity.

Werner Brozek

Truthseeker says:
December 23, 2013 at 3:33 am
Thank you for that. I think the last sentence of the article bears repeating as it proves we never have to worry about any small amount of heat going into the deep ocean.
“What the scientists who actually know about the oceans are telling us, but AGW believers ignoring, is that it could take anywhere between 500 and 1000 years for the huge masses of just above freezing layers rise to the surface and during that time absorb huge amount of heat energy.”