Can GISS And Other Data Sets Set Records in 2014? (Now Includes April Data)

Guest Post by Werner Brozek (Edited By Just The Facts)

GISS and other data sets are poised to set new records in 2014, even without an El Nino. The present GISS record is a two-way tie for first place, with 2005 and 2010 both showing an average anomaly of 0.65. (By the way, in January of this year, it was stated that the 2010 anomaly was 0.67. I do not understand how 2010 lost 0.02 C over the last four months.) The present average over the first four months of 2014 is 0.64, so it is only 0.01 lower. Of greater significance, however, is that the April anomaly was 0.73. If this anomaly were to continue for the rest of the year, the old GISS record would be shattered.
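As a quick illustration of that arithmetic, here is a back-of-the-envelope sketch of my own (not anything from GISS) projecting the annual average if the remaining eight months all matched April:

def projected_annual(avg_to_date, months_so_far, assumed_remaining):
    """Year-end average if every remaining month comes in at one assumed value."""
    return (avg_to_date * months_so_far
            + assumed_remaining * (12 - months_so_far)) / 12

# GISS figures from the text: 0.64 average over January-April, April at 0.73
print(f"{projected_annual(0.64, 4, 0.73):.2f}")  # -> 0.70, well above the 0.65 record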

Below, I will provide the corresponding information for the other five data sets that I am following. All differences will be provided to the nearest 1/100 degree.

The current Hadsst3 average is 0.370, which is only 0.05 below its record of 0.416. And as is the case with GISS, the April anomaly was a huge 0.478, so if this anomaly were to continue for the rest of 2014, Hadsst3 would also set a new record.

The current Hadcrut3 average is 0.455, which is only 0.09 below its record of 0.548. And as is the case with GISS, the April anomaly was a huge 0.592, so if this anomaly were to continue for the rest of 2014, Hadcrut3 would virtually tie its record.

The current Hadcrut4 average is 0.500, which is only 0.05 below its record of 0.547. And as is the case with GISS, the April anomaly was a huge 0.641, so if this anomaly were to continue for the rest of 2014, Hadcrut4 would also set a new record.

The current RSS average is 0.222. This is 0.33 below the 1998 record of 0.550. This record seems safe for this year. Even the April anomaly of 0.251 would not challenge the record if it continued for the rest of the year.

The current UAH average is 0.171. This is 0.25 below the 1998 record of 0.419. This record seems safe for this year. Even the April anomaly of 0.184 would not challenge the record if it continued for the rest of the year. (Note: This applies to version 5.5.)

In the table, I have added a row 15, labelled 15. dif, where I give the above differences between row 4 (the record anomaly) and row 13 (the 2014 average to date). Since no data set ranks first at this point, all the differences have the same sign, indicating that the present record is still higher than the present average in every case.

In the parts below, as in the previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix.

The first section will show for how long there has been no warming on several data sets.

The second section will show for how long there has been no statistically significant warming on several data sets.

The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far.

The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go back to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is +4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest when we say the slope is flat from a certain month.
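For readers who want to reproduce this search on downloaded data rather than through WFT's plotter, here is a minimal sketch of the idea (my own illustration, not WFT's code; the month labels and anomaly series are whatever the reader supplies):

import numpy as np

def earliest_flat_start(labels, anomalies):
    """Return the earliest month label from which the least-squares slope
    to the end of the series is zero or negative, or None if there is none.
    labels: month labels like 'Aug 1996'; anomalies: matching monthly values."""
    y = np.asarray(anomalies, dtype=float)
    t = np.arange(len(y), dtype=float)
    for start in range(len(y) - 2):        # need at least 3 points for a trend
        slope = np.polyfit(t[start:], y[start:], 1)[0]
        if slope <= 0:                     # first hit is the furthest back in time
            return labels[start]
    return None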

On the data sets below, the times for a slope that is at least very slightly negative range from 9 years and 8 months to 17 years and 9 months.

1. For GISS, the slope is flat since November 2001 or 12 years, 6 months. (goes to April)

2. For Hadcrut3, the slope is flat since August 2000 or 13 years, 9 months. (goes to April) The latest spike caused the time to start after the 1998 El Nino.

3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since December 2000 or 13 years, 5 months. (goes to April)

4. For Hadcrut4, the slope is flat since January 2001 or 13 years, 4 months. (goes to April)

5. For Hadsst3, the slope is flat since December 2000 or 13 years, 5 months. (goes to April)

6. For UAH, the slope is flat since September 2004 or 9 years, 8 months. (goes to April using version 5.5)

7. For RSS, the slope is flat since August 1996 or 17 years, 9 months (goes to April).

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line indicates that CO2 has steadily increased over this period.

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the scale on the left applies only to the temperature anomalies.

The actual numbers are meaningless since all slopes are essentially zero. As well, I have offset the series so they are evenly spaced. No numbers are given for CO2. Some have asked that the log of the CO2 concentration be plotted; however, WFT does not offer this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 17 years, the temperatures have been flat for varying periods on the various data sets.

The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted.

WoodForTrees.org – Paul Clark – Click the pic to view at source

Section 2

For this analysis, data was retrieved from Nick Stokes’ Trendviewer page. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 16 and 21 years.

The details for several sets are below.

For UAH: Since February 1996: CI from -0.043 to 2.349

For RSS: Since November 1992: CI from -0.022 to 1.867

For Hadcrut4: Since October 1996: CI from -0.033 to 1.192

For Hadsst3: Since January 1993: CI from -0.016 to 1.813

For GISS: Since August 1997: CI from -0.008 to 1.233
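The numbers above come straight from Nick's page, and his exact criteria are implemented there. Purely to illustrate the general approach, below is a sketch of a least-squares trend whose standard error is widened for lag-1 autocorrelation (a Quenouille-style effective sample size), one common way such confidence intervals are constructed for monthly series; Nick's exact method may differ. The quoted intervals appear to be in degrees C per century, so a per-month slope must be multiplied by 1200 to compare.

import numpy as np

def trend_with_ci(y, width=2.0):
    """Least-squares slope of a monthly series with the standard error inflated
    for lag-1 autocorrelation of the residuals (a Quenouille-style effective
    sample size). Returns (slope, lower, upper) in units per month; multiply
    by 1200 for units per century. A sketch of the general approach only."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = len(y) * (1.0 - r1) / (1.0 + r1)        # effective number of points
    s2 = np.sum(resid**2) / (n_eff - 2.0)           # adjusted residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))
    return slope, slope - width * se, slope + width * se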

Section 3

This section shows data about 2014 and other information in the form of a table. The table lists the six data sources across the top; that source row is repeated partway down so the column headings remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS.

Down the columns are the following rows:

1. 13ra: This is the final ranking for 2013 on each data set.

2. 13a: Here I give the average anomaly for 2013.

3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.

6. ano: This is the anomaly of the month just above.

7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.

8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year. (Hadcrut3 is not covered by Nick’s page, so that entry is blank.)

9. Jan: This is the January 2014 anomaly for that particular data set.

10. Feb: This is the February 2014 anomaly for that particular data set, and so on for rows 11 (Mar) and 12 (Apr).

13. ave: This is the average anomaly of all months to date, taken by adding all numbers and dividing by the number of months. However, if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably because not all months have the same number of days.

14. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It will not, but think of it as an update 15 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.

15. dif: This is row 4 minus row 13. A number of less than 0.10 at this point in time means that a record is possible in 2014 for four of the data sets. Both of the satellite data sets would need a miracle to set a record this year, in my opinion.

Source    UAH     RSS     Had4    Had3    Sst3    GISS
1. 13ra   7th     10th    8th     6th     6th     7th
2. 13a    0.197   0.218   0.486   0.459   0.376   0.59
3. year   1998    1998    2010    1998    1998    2010
4. ano    0.419   0.55    0.547   0.548   0.416   0.65
5. mon    Apr98   Apr98   Jan07   Feb98   Jul98   Jan07
6. ano    0.662   0.857   0.829   0.756   0.526   0.92
7. y/m    9/8     17/9    13/4    13/9    13/5    12/6
8. sig    Feb96   Nov92   Oct96   —       Jan93   Aug97
Source    UAH     RSS     Had4    Had3    Sst3    GISS
9. Jan    0.236   0.262   0.507   0.472   0.342   0.68
10. Feb   0.127   0.161   0.304   0.264   0.314   0.44
11. Mar   0.137   0.214   0.544   0.491   0.347   0.70
12. Apr   0.184   0.251   0.641   0.592   0.478   0.73
Source    UAH     RSS     Had4    Had3    Sst3    GISS
13. ave   0.171   0.222   0.500   0.455   0.370   0.64
14. rnk   10th    9th     5th     7th     7th     3rd
15. dif   0.25    0.33    0.05    0.09    0.05    0.01

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 5.5 was used since that is what WFT used.

http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.2.0.0.monthly_ns_avg.txt

For Hadcrut3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT3-gl.dat

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2013 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2013. This makes it easy to compare January 2013 with the latest anomaly.

Appendix

In this part, we are summarizing data for each set separately.

RSS

The slope is flat since August 1996 or 17 years, 9 months. (goes to April)

For RSS: There is no statistically significant warming since November 1992: CI from -0.022 to 1.867.

The RSS average anomaly so far for 2014 is 0.222. This would rank it as 9th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.

UAH

The slope is flat since September 2004 or 9 years, 8 months. (goes to April using version 5.5 according to WFT)

For UAH: There is no statistically significant warming since February 1996: CI from -0.043 to 2.349. (This is using version 5.6 according to Nick’s program.)

The UAH average anomaly so far for 2014 is 0.171. This would rank it as 10th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.

Hadcrut4

The slope is flat since January 2001 or 13 years, 4 months. (goes to April)

For Hadcrut4: There is no statistically significant warming since October 1996: CI from -0.033 to 1.192.

The Hadcrut4 average anomaly so far for 2014 is 0.500. This would rank it as 5th place if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2013 was 0.486 and it is ranked 8th.

Hadcrut3

The slope is flat since August 2000 or 13 years, 9 months. (goes to April)

The Hadcrut3 average anomaly so far for 2014 is 0.455. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the last time a Hadcrut3 record stood for more than 10 years, as the 1998 record now has. The anomaly in 2013 was 0.459 and it is ranked 6th.

Hadsst3

For Hadsst3, the slope is flat since December 2000 or 13 years and 5 months. (goes to April).

For Hadsst3: There is no statistically significant warming since January 1993: CI from -0.016 to 1.813.

The Hadsst3 average anomaly so far for 2014 is 0.370. This would rank it as 7th place if it stayed this way. 1998 was the warmest at 0.416. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. The anomaly in 2013 was 0.376 and it is ranked 6th.

GISS

The slope is flat since November 2001 or 12 years, 6 months. (goes to April)

For GISS: There is no statistically significant warming since August 1997: CI from -0.008 to 1.233.

The GISS average anomaly so far for 2014 is 0.64. This would rank it as 3rd place if it stayed this way. 2010 and 2005 were the warmest at 0.65. The highest ever monthly anomaly was in January of 2007 when it reached 0.92. The anomaly in 2013 was 0.59 and it is ranked 7th.

Conclusion

Even without an El Nino, it appears likely that records will be set on at least some of the surface data sets, though not on the satellite data sets.

Latitude
May 25, 2014 6:08 pm

(By the way, in January of this year, it was stated that the 2010 anomaly was 0.67. I do not understand how 2010 lost 0.02 C over the last four months.)
http://stevengoddard.wordpress.com/2014/03/27/settled-science-update-at-giss/
http://stevengoddard.wordpress.com/2014/01/15/almost-23-of-giss-warming-is-fake/

Admin
May 25, 2014 6:11 pm

Even without an El Nino, it appears likely that some records will be set on at least some surface data sets, however not on the satellite data sets.
Perhaps the surface temperature datasets will show a drop if Obama succeeds in driving up the price of electricity… 🙂

Scott Basinger
May 25, 2014 6:15 pm

Adjust the past down, adjust the present up. Look, a trend!

May 25, 2014 6:26 pm

Latitude says:
May 25, 2014 at 6:08 pm
Thank you!
I can understand that there could possibly be good reasons why things could be adjusted from a hundred years ago, but why would 2010 be adjusted in the last 4 months? And why would the all time record month of January 2007 go down from 0.94 to 0.92 over the last 4 months?

Bill H
May 25, 2014 6:44 pm

One hundredth of a degree C… What instrument is so carefully calibrated and placed around the world so quickly that we are talking this small a change?

May 25, 2014 6:57 pm

I am all out of patience with these anomalies of 0.001 accuracy.
Pure fiction.
In audio this noise is called what it is, noise.
The temperature data lacks both duration and detail (accurate to +/- 1 or 2 degrees C).
The entrails of a fish would give information as useful as these imaginary changes, and the fish might be edible.
Now supposedly the satellite record is the very best precision measurement we have managed to date; being a mere 30-40 years old, it should be throwing up “record” temperatures regularly.
Where are they?

May 25, 2014 6:59 pm

Bill H says:
May 25, 2014 at 6:44 pm
One hundredth of a degree C… What instrument is so carefully calibrated and placed around the world so quickly that we are talking this small a change?
Good question! We must realize a couple of things. If we average a thousand readings that are only accurate to the nearest degree, we can easily get an average to the nearest 1/1000 degree. And we need to realize that numbers within about 0.1 C could be considered a statistical tie. So with GISS, the top ten range from 0.56 to 0.65. So the top ten could be considered to be statistically tied for first.
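As a toy illustration of the averaging point, under idealized assumptions of many independent readings with independent rounding errors (assumptions several commenters below argue real station data do not satisfy):

import numpy as np

rng = np.random.default_rng(42)
true_temps = rng.uniform(-30.0, 35.0, size=1000)   # 1000 distinct true values
readings = np.round(true_temps)                    # each recorded to nearest degree
# Rounding error is roughly uniform on (-0.5, 0.5), sd ~ 0.29, so the error of
# the mean is ~ 0.29 / sqrt(1000) ~ 0.01 degrees under these assumptions.
print(abs(readings.mean() - true_temps.mean()))    # typically of order 0.01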

ossqss
May 25, 2014 7:09 pm

wbrozek says:
May 25, 2014 at 6:26 pm
Latitude says:
May 25, 2014 at 6:08 pm
Thank you!
I can understand that there could possibly be good reasons why things could be adjusted from a hundred years ago, but why would 2010 be adjusted in the last 4 months? And why would the all time record month of January 2007 go down from 0.94 to 0.92 over the last 4 months?
_________________________________________________________________________________________________
Exactly!
Sand through the hourglass.
Who looks at that 2 years from now?
Nobody.
Why did it happen? Somebody knows…….. Why don’t we?

Jared
May 25, 2014 7:16 pm

It’s simple ideology mathematics taught in Climatology school: artificially adjust temps from 100 years ago way down, then artificially adjust current temps up. Then, after the current hottest year ever sets the record, the process of dropping those temps can begin so that the new current year sets the record as hottest ever. This is how you turn a flat line into an upward slope. Climatology mathematics 101.

Anything is possible
May 25, 2014 7:18 pm

“GISS global temperature estimates should be treated with considerable caution”
http://oprj.net/oprj-archive/climate-science/31/oprj-article-climate-science-31.pdf

SIGINT EX
May 25, 2014 7:19 pm

The real accuracy of the datasets listed is in the tens-of-degrees range (i.e. from -10 to +10), not the tenth nor the (laughable) hundredth of a degree range.
Sorry old boy. Epic fail.

Bill 2
May 25, 2014 7:29 pm

wbrozek says:
May 25, 2014 at 6:26 pm
Thank you!
I can understand that there could possibly be good reasons why things could be adjusted from a hundred years ago, but why would 2010 be adjusted in the last 4 months? And why would the all time record month of January 2007 go down from 0.94 to 0.92 over the last 4 months?

Maybe you should check out the FAQ on the GISS site? Looks like there are a few papers cited that explain the adjustment process. http://data.giss.nasa.gov/gistemp/FAQ.html

Admin
May 25, 2014 7:47 pm

Bill 2 says:
Maybe you should check out the FAQ on the GISS site? Looks like there are a few papers cited that explain the adjustment process. http://data.giss.nasa.gov/gistemp/FAQ.html
Given the magnitude of the adjustment, it’s a real “dog ate my homework” effort, regardless of their excuses.
If you add a hockey stick shaped adjustment to otherwise flat temperatures, don’t be surprised if the result is a hockey stick.
But calling the result of this operation a “measurement” is a bit of a stretch. More like water boarding the data until it confesses.
From the NOAA site – the hockey stick shaped adjustment.
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

NotAGolfer
May 25, 2014 7:49 pm

It’s routine for NOAA to adjust data, and adjustments tend to add in a warming trend. I complained to NASA, a few years ago in an email, about how older temps on their website for my hometown had been adjusted downward, while newer ones were adjusted upward (I had saved a couple of snapshots of the data from their website, comparing one point in time to another) and Reto Ruedy wrote back. Here is the text of his email:
“The station you are referring to is part of USHCN. It is not quite clear to me whether your question concerns the difference between the data that we download from USHCN and the data after our homogenization, or whether it concerns the difference between the data presented by USHCN today as compared to their data from 2007. So I’ll address both instances.
The effect of our homogenization can easily be determined since we present for each station the data before and after that homogenization.
We use the same method to modify all non-rural stations, and averaged over all stations that procedure reduces the global warming trend slightly. For individual stations, the result may well be an increase or a decrease of the long term trend, since the effect is so small that it is easily dwarfed by local natural variations.
What we do essentially is make non-rural stations look like their rural neighbors, which has the same effect as basing our analysis only on rural stations. The adjusted data represent the region rather than the individual station.
We download the NOAA’s adjusted USHCN data; they also make available the unadjusted data; however, many of their adjustments are based on extra information, like documented changes in the location of the thermometer, changes in the way daily/monthly mean temperatures are derived, etc. Not adjusting for such documented changes, i.e. going back to the raw data, preserves artificial inaccuracies in the time series; these may have a substantial impact particularly if they are the result of a regional coordination effort. NOAA, who spent a lot more time and effort to investigate US station data than we did, found that in the US, such systematic changes usually tended to underestimate the warming trend.
See e.g. http://www.ncdc.noaa.gov/oa/climate/research/ushcn and the papers listed at the end of that web site.
Let me know if you have any further questions.
Reto Ruedy”

Sleepalot
May 25, 2014 7:52 pm

I can also explain GISS adjustments – they’re plain old scientific fraud.
Comparisons of individual stations used by GISS to IMO originals: sources given in the comments section.
Stykkisholmur, Iceland
https://www.flickr.com/photos/7360644@N07/11244518635/in/photostream/
Keflavik, Iceland
https://www.flickr.com/photos/7360644@N07/11244423904/in/photostream/
Akureyri, Iceland
https://www.flickr.com/photos/7360644@N07/11244417413/in/photostream/
Teigarhorn, Iceland
https://www.flickr.com/photos/7360644@N07/11244199504/in/photostream/
Holar Hornafirdi, Iceland
https://www.flickr.com/photos/7360644@N07/11243965324/in/photostream/

Ebeni
May 25, 2014 7:53 pm

I am a lay person so excuse me if I am missing something. Is there something in climate science that dispenses with the entire discipline of measurement accuracy and confidence and application of significant figures?

Admin
May 25, 2014 7:55 pm

Bill 2, further to my point, from the GISS site:-
Q. Does GISS deal directly with raw (observed) data?
A. No. GISS has neither the personnel nor the funding to visit weather stations or deal directly with data observations from weather stations. GISS relies on data collected by other organizations, specifically, NOAA/NCDC’s Global Historical Climatology Network (GHCN) v3 adjusted monthly mean data as augmented by Antarctic data collated by UK Scientific Committee on Antarctic Research (SCAR) and also NOAA/NCDC’s Extended Reconstructed Sea Surface Temperature (ERSST) v3b data.

So GISS uses NOAA’s figures, with the incorporated NOAA hockey stick shaped adjustment. Both GISS and NOAA figures are contaminated with this joke size adjustment. Then GISS applies further adjustments…

Admin
May 25, 2014 8:01 pm

Ebeni
… Is there something in climate science that dispenses with the entire discipline of measurement accuracy and confidence and application of significant figures?
Yes – lack of expertise in statistics. Their statistical weaknesses were laid bare by the publication of a hilarious paper written by McIntyre, who is a real statistics expert, which demonstrated that Michael Mann’s hockey stick algorithm produced a hockey stick when it was fed with random data (“red noise”).
http://climateaudit.files.wordpress.com/2009/12/mcintyre-grl-2005.pdf

Richard Mallett
May 25, 2014 8:02 pm

So, if the satellite data cannot be trusted, and GISS cannot be trusted, and NOAA NCDC presumably is too untrustworthy even to be considered, that just leaves HadCRUT4 (trend since 1850 = 0.47 C / century)?

May 25, 2014 8:02 pm

Bill 2 says:
May 25, 2014 at 7:29 pm
Maybe you should check out the FAQ on the GISS site?
It is very understandable why Hadcrut would adjust things for the previous month or even two months because they did not have some numbers from the middle of China in a timely manner. But if GISS needs to adjust things way back every month, something seems very wrong.
And if we assume for argument's sake that GISS is indeed doing the right thing with all of these adjustments for years in the past, then we can only assume the Hadcrut people are not doing it right. Would this be a fair assumption?

michael hart
May 25, 2014 8:10 pm

I can understand that there could possibly be good reasons why things could be adjusted from a hundred years ago, but why would 2010 be adjusted in the last 4 months?

Why do dogs lick their gonads?
Because they can.

John F. Hultquist
May 25, 2014 8:11 pm

Ebeni says:
May 25, 2014 at 7:53 pm
“Is there something in climate science …

Actually, no there isn’t. That is why the term is always in quotes: “climate science”
This keeps everyone aware of the nothingness therein.

May 25, 2014 8:24 pm

Ebeni says:
May 25, 2014 at 7:53 pm
I am a lay person so excuse me if I am missing something. Is there something in climate science that dispenses with the entire discipline of measurement accuracy and confidence and application of significant figures?
Climate science has its own rules regarding the above. So while we know that we cannot know the anomaly to the nearest 1/1000 degree, we give that number and say it is only to +/- 0.1 C or something like that.
Then climate science requires 95% confidence to say whether warming or cooling is occurring. So measurement uncertainties, etc. are presumably dealt with and incorporated in the site found here:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html?Xxdat=%5B0,1,4,48,92%5D

Philip Schaeffer
May 25, 2014 8:25 pm

Sleepalot said:
“I can also explain GISS adjustments – they’re plain old scientific fraud.”
You haven’t explained anything. All you have done is made a statement, and linked a few graphs.

Nick Stokes
May 25, 2014 8:34 pm

Eric Worrall says: May 25, 2014 at 8:01 pm
“Yes – lack of expertise in statistics. Their statistics weaknesses were laid bare, by publication of a hilarious paper written by McIntyre, who is a real statistics expert, which demonstrated that Michael Mann’s hockey stick algorithm produced a hockey stick when it was fed with random data (“red noise”).”

Yes. Here is how it was expertly done.

May 25, 2014 8:36 pm

Richard Mallett says:
May 25, 2014 at 8:02 pm
So, if the satellite data cannot be trusted, and GISS cannot be trusted, and NOAA NCDC presumably is too untrustworthy even to be considered, that just leaves HadCRUT4
You may wish to check out the following. Hadcrut4 barely came out before it needed adjustments. And guess the direction of those adjustments? See:
http://wattsupwiththat.com/2013/05/12/met-office-hadley-centre-and-climatic-research-unit-hadcrut4-and-crutem4-temperature-data-sets-adjustedcorrectedupdated-can-you-guess-the-impact/
Here is a comment I made then:
“From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree with the new version of Hadcrut4 being higher than the old version in all cases. So starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012. What are the chances that the average anomaly goes up for 16 straight years by pure chance alone if a number of new sites are discovered? Assuming a 50% chance that the anomaly could go either way, the chances of 16 straight years of rises is 1 in 2^16 or 1 in 65,536. Of course this does not prove fraud, but considering that “HadCRUT4 was introduced in March 2012”, it just begs the question why it needed a major overhaul only a year later.
I believe people should not wonder why suspicions are aroused as to whether or not everything is kosher.”

Richard Mallett
Reply to  Werner Brozek
May 26, 2014 8:08 am

So if none of the data sets can be trusted, how do we know that the pause is real?

Pamela Gray
May 25, 2014 8:46 pm

When drought is upon us here on the West Coast, I expect hot daytime temps and low nighttime temps. A case in point: California is often the place of record highs and lows within a 24-hour period during drought. El Nino will reverse that with its incoming clouds and rain. The daytime temps will temper a bit and the nighttime temps will warm up.

Nick Stokes
May 25, 2014 8:55 pm

wbrozek says: May 25, 2014 at 6:26 pm
“I can understand that there could possibly be good reasons why things could be adjusted from a hundred years ago, but why would 2010 be adjusted in the last 4 months? And why would the all time record month of January 2007 go down from 0.94 to 0.92 over the last 4 months?”

Here is the GISS log of changes. They record that:
“January 21, 2014: The GISS analysis was repeated this morning based on today’s status of the GHCN data. The changes were well within the margin of error, e.g. the L-OTI mean for 2013 changed from 0.6048+-0.02°C to 0.6065+-0.02°C, a change of less than 0.002°C. However, rounding to 2 digits for the L-OTI table changed the 0.60°C used in some documents prepared last week to 0.61°C. This minuscule change also moved year 2013 from a tie for the 7th place to a tie for the 6th place in the GISS ranking of warmest years, demonstrating how non-robust these rankings are.”
Clearly the change in January reflects a catch-up to past changes in GHCN. GHCN changes as belated data comes in, and also if the adjustment algorithm changes. GISS now relies on the GHCN homogenization; they don’t do their own.

RACookPE1978
Editor
May 25, 2014 9:04 pm

Werner Brozek says:
May 25, 2014 at 8:02 pm

(replying to) Bill 2 says:
May 25, 2014 at 7:29 pm
Maybe you should check out the FAQ on the GISS site?

It is very understandable why Hadcrut would adjust things for the previous month or even two months because they did not have some numbers from the middle of China in a timely manner. But if GISS needs to adjust things way back every month, something seems very wrong.

As I recall, Hansen’s “automatic” routines at NASA-GISS recalculate every month every temperature ever recorded based on the latest (most recent!) night time light index record for that station’s area. This is because Hansen/NASA/GISS was desperate at that time to show as wide a temperature record as possible for as long as possible, while maintaining the “perfect-untarnished-records” for as many different sites as possible.
Thus, he HAD TO create a temperature record of “perfect stations” while appearing to compensate for urban heat islands, while reading old records that admittedly varied in location, were missing records, and were taken at various times of day, by stations of varying quality, and by many, many thousands of individuals, each of whom varied from day to day in how accurately they did (or did not) follow exact procedures. From these competing needs, and with a cooperative “peer-reviewed” paper back in the 1980’s, he developed his 600 km “smoothing” of records (later expanded into a 1200 km “smoothing” of temperature data) because “as long as the trends were the same the anomalies could be treated as the same…”
Worse, if today’s values for ANY station change, miss a digit, or are “blacked out” in the monthly night time light index, then the “smoothing” algorithm, the station-break algorithm, and the station history algorithm re-calculate and re-set ALL previous station daily averages (and ALL previous and current station anomalies!) going all the way back to the first record written down.
Hansen’s magical computer routines, as I understand, have never been changed: Every month the past temperature records ARE CHANGED based on the most recent night time light index based on some 1980-level “standard” for urban, semi-urban, and rural NASA index. I have read of several computer programmers who have attempted to both re-create his program, and others who have attempted to debug (re-read and sort out) what few parts of his program have been released. None were successful. I know of no one who has gotten access to the entire routine, nor anybody (outside of his original FORTRAN punch cards) who can duplicate his methods or results.
I can find no one who can explain why NASA-GISS must recalculate every month temperature records written by hand in 1915.

May 25, 2014 9:15 pm

Nick Stokes says:
May 25, 2014 at 8:55 pm
Thank you.
So on January 20, 2014, 2013 had an anomaly of 0.60 to 2 SD.
On January 22, 2014, 2013 had an anomaly of 0.61 to 2 SD.
Today, 2013 has an anomaly of 0.59 to 2 SD.
In January 2014, 2003 was also 0.60, but today it is 0.59.
GISS is really a pain to keep up to date.

Editor
May 25, 2014 9:20 pm

The comments about GISS’ “dancing data” intrigued me. I’ve been downloading most major world datasets for a few years. GISS is available at http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt I whipped up a quickie bash script to go through my downloaded files and see what GISS has been reporting for 2010. Note that GISS reports a whole number value, i.e. anomaly * 100. Divide by 100 to get the actual number. The following is the output. The columns are…
1) The month for which the data ends. Obviously, it can start no earlier than 201012
2) The sum of the 12 monthly anomalies (January-December).
3) The average anomaly for 2010
Remember to divide columns 2 and 3 by 100
201012 759 63
201101 757 63
201102 757 63
201103 758 63
201104 760 63
201105 757 63
201106 757 63
201107 760 63
201108 760 63
201109 760 63
201110 756 63
201111 760 63
201112 757 63
201201 756 63
201202 752 63
201203 756 63
201204 750 63
201205 757 63
201206 752 63
201207 749 62
201208 755 63
201209 751 63
201210 769 64
201211 767 64
201212 791 66
201301 796 66
201302 795 66
201303 800 67
201304 797 66
201305 798 67
201306 795 66
201307 801 67
201308 799 67
201309 809 67
201310 804 67
201311 800 67
201312 800 67
201401 794 66
201402 806 67
201403 792 66
201404 785 65
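For anyone wanting to repeat this kind of check, here is a rough Python equivalent of the sort of script Walter describes (his bash original is not shown; the snapshot folder name is hypothetical, and the parsing assumes data rows that begin with the year followed by 12 monthly values stored as anomaly * 100):

import glob

def year_months(path, year="2010"):
    """Return the 12 monthly values (anomaly * 100) for one year from a
    downloaded GLB.Ts+dSST.txt snapshot, or None if no complete row is found."""
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts[:1] == [year] and len(parts) >= 13:
                try:
                    return [int(p) for p in parts[1:13]]
                except ValueError:        # missing-data markers such as ****
                    return None
    return None

for path in sorted(glob.glob("giss_snapshots/*.txt")):   # hypothetical folder
    months = year_months(path)
    if months is not None:
        print(path, sum(months), round(sum(months) / 12))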

Editor
May 25, 2014 9:31 pm

Nick Stokes says:
> May 25, 2014 at 8:55 pm
>
> Clearly the change in January reflects a catch-up to past
> changes in GHCN. GHCN changes as belated data comes
> in, and also if the adjustment algorithm changes. GISS now
> relies on the GHCN homogenization; they don’t do their own
I’ve heard of “belated data” but please don’t tell me that there’s “belated data” all the way back to 1880, coming in every month.

Louis Hooffstetter
May 25, 2014 9:32 pm

Werner: two comments:
First: “If we average a thousand readings that are only accurate to the nearest degree, we can easily get an average to the nearest 1/1000 degree.”
Sure, you can take the numbers out to as many decimal places as you want, but the numbers beyond the decimal are meaningless. Significant figures dictate that when your readings are only accurate to the nearest degree, any calculated answer is also only accurate to the nearest degree. Only freshman chemistry students and climastrologists make this mistake.
Second: “…it appears likely that some records will be set on at least some surface data sets, however not on the satellite data sets.”
Exactly! And this is the crux of the issue. Why is there a growing discrepancy between surface data sets and satellite data sets? One of the data sets is wrong, but which one and why? That should be the topic of your next post.

Rick Adkison
May 25, 2014 9:34 pm

Why are the anomalies associated with the satellite measurements significantly smaller than those from the surface measurements? Is there potential upward bias in the surface measurements due to thermometer placement, i.e. the heat island effect?

Nick Stokes
May 25, 2014 9:39 pm

RACookPE1978 says: May 25, 2014 at 9:04 pm
“Hansen’s magical computer routines, as I understand, have never been changed: Every month the past temperature records ARE CHANGED based on the most recent night time light index based on some 1980-level “standard” for urban, semi-urban, and rural NASA index. I have read of several computer programmers who have attempted to both re-create his program, and others who have attempted to debug (re-read and sort out) what few parts of his program have been released. None were successful. I know of no one who has gotten access to the entire routine, nor anybody (outside of his original FORTRAN punch cards) who can duplicate his methods or results.”
Not true. To start at the end, the code is here. And here is a project by British software developers to port GISTEMP to Python. They were able to reproduce the results perfectly.
And it is quite untrue that Hansen’s routines have never been changed. There was a radical change when GHCN V3 came out, and GISS removed their own homogenisation, relying entirely on GHCN.

NikFrimNYC
May 25, 2014 9:40 pm

Ebeni asked: “I am a lay person so excuse me if I am missing something. Is there something in climate science that dispenses with the entire discipline of measurement accuracy and confidence and application of significant figures?”
The rabbit hole is much deeper than mere statistical sloppiness now that the latest hockey stick was wholly fabricated, statistics not even being needed, just spurious data drop-off at the end after some input data was re-dated to afford that as a pure artifact blade that made world headlines:
http://s6.postimg.org/jb6qe15rl/Marcott_2013_Eye_Candy.jpg
Innovations that dispense with scientific integrity include these:
Steig’s innovation of illegally spreading Antarctic Peninsula warming over the whole continent, or Mann’s innovation of ignoring the majority of proxies that show a bowl instead of a hockey stick, or Mann’s innovation of then using an algorithm to cherry pick noisy proxies that lined up with thermometer plots so he could call this objectively unbiased “filtering,” or Marcott’s innovation of a pure data-drop off hockey stick blade, of the innovation of creating virtual sea levels that are then labeled as “sea level,” or the innovation of calling debate foes “deniers” in press releases for papers, or the innovation of allowing a single tree to create a hockey stick shape, or the innovation of raising confidence levels from 90% to 95% after further deviation of temperature from predictions, or the innovation of invoking consensus as a *scientific* instead of profoundly anti-scientific principle, or the innovation of Hiding The Decline by just throwing away new data, or the innovation of claiming that greater uncertainty equates with greater urgency and risk, or the innovation of dissolving sea shells in acid and extrapolating to the whole ocean, or the innovation of calling mild ocean neutralization “acidification,” or the innovation of finding four dead polar bears and expanding that to species endangerment, or the innovation of using satellite data to up-adjust the global average temperature in a way that the same satellite data in fact falsifies, or the innovation of doing risk analysis devoid of any and all benefit analysis as balance, or the innovation of “reversing the null hypothesis,” or the innovation of theoretically hiding heat in an ocean that shows no corresponding extra volume expansion, of the innovation of calling climate model runs “experiments,” of the innovation of invoking a surge of weather intensity in abstracts as actual weather intensity has declined, or the innovation of referencing IPCC reports which themselves reference activist literature almost as much as peer reviewed science, or the innovation of using mere mentions of man made warming in abstracts as offering empirical support *of* that theory, or the innovation of NASA itself not using NASA satellite data in their only temperature product, or the innovation of asserting that recent temperature variation is outside of natural variability without mentioning the near exact precedent for it in the first half of the thermometer record, or the innovation of claiming the 350 year old Central England record that falsifies climate alarm is merely an insignificant local affair that just by chance shows near exact correlation with the global average plots, or the innovation of using the systematic mismatch between tide gauges (relative to land) and satellite altimetry (absolute) to imply a sudden burst in sea level rise that is falsified by the tide gauge data itself.

Nick Stokes
May 25, 2014 9:54 pm

Walter Dnes says: May 25, 2014 at 9:31 pm
“I’ve heard of “belated data” but please don’t tell me that there’s “belated data” all the way back to 1880, coming in every month.”

Again, you should look at the log of changes.
“December 14, 2011: GHCN v2 and USHCN data were replaced by the adjusted GHCN v3 data. “
Switching from GISS to GHCN homogenisation
“September 26, 2012: NOAA/NCDC replaced GHCN v3.1 by GHCN v3.2. Hence the GISS analysis is based on that product starting 9/14/2012. Version v3.2 differs from v3.1 by minor changes in the homogenization of the unadjusted data.”
“January 16, 2013: Starting with the January 2013 update, NCDC’s ERSST v3b data will be used to estimate the surface air temperature anomalies over the ocean instead of a combination of Reynold’s OISST (1982 to present) and data obtained from the Hadley Center (1880-1981).”
These changes can well affect data back to 1880. GISTEMP isn’t just station readings.

May 25, 2014 9:56 pm

Louis Hooffstetter says:
May 25, 2014 at 9:32 pm
Thank you!
As a retired physics teacher, I certainly agree with you on SD. However there is no way that I will take all of the given numbers and reduce them to the nearest 1/10 degree. The lack of recent warming is plain, whether we use numbers to the nearest 1/10 or 1/1000 degree, and that is what I hope to show.
As for showing which data set is wrong, I am really bothered by the fact that UAH and RSS are so far apart with respect to the time for a slope of 0. That is something that needs to be sorted out by the people capable of doing so. I certainly cannot touch that one.

E.M.Smith
Editor
May 25, 2014 9:57 pm

:
I ported GIStemp to Linux and made it go. I’ve not done the newer version (maybe I ought to…) now that they claim not to be doing their own homogenizing. To call GISS “data” is, IMHO, an error. It is a “data food product” at best. (In America, non-cheese synthetic dairy product must be labeled “cheese food product” so you know it isn’t really cheese. 😉)
It’s just GHCN after NOAA got done fudging it, with more fudging added via a variety of odd machinations involving a load of averaging and (in the older version at least) a set of “serial homogenizing” that could smear temperatures from 1200 km away into other places, and could then repeat that process up to 3 times in different sections of code (as ‘fill in missing data’, as ‘homogenize the values’ and as ‘make temperatures into grid / box scores’… each sequentially done and using smeared data food product from the prior step…).
IMHO it is not much more than an exercise in data smearing, over averaging and dancing in the error bands of those averaging processes, and expecting truth to come out of a data blender…
It’s a bit dated now, but more than you ever want to know about GISS GIStemp, and how the deed is done, can be found here:
https://chiefio.wordpress.com/gistemp/
Later I figured out the real data-buggery had been moved upstream to NOAA / NCDC and got to looking at how the GHCN was molested into a rise. HADCRUT uses the GHCN as well (they said they lost their real data but that it could be retrieved from the GHCN for all intents and purposes, so anyone needing to say they do not use GHCN needs to take it up with them…). The bottom line is that there is ONE set of source data that gets folded, spindled, and homogenized by NCDC, smeared around by GIStemp, and more or less used in HADCRUT (which gets a different Sea Surface mix, but not by much). To call them “different data sets” is a stretch. More like different molestations of the same original data.
More here: https://chiefio.wordpress.com/category/ncdc-ghcn-issues/
The bottom line is that we don’t have any real “surface temperature data set”. We have a collection of badly abused numbers that were, once upon a time, based on temperatures; but have long since ceased being historical recordings of temperatures.

Nick Stokes
May 25, 2014 10:00 pm

Louis Hooffstetter says: May 25, 2014 at 9:32 pm
“Why is there a growing discrepancy between surface data sets and satellite data sets? One of the data sets is wrong, but which one and why? That should be the topic of your next post.”

It was the topic of this very recent post. But the divergence between UAH and the surface indices is a lot less than the divergence between UAH and RSS.

alex
May 25, 2014 10:03 pm

A “pause”? What “pause”?

May 25, 2014 10:04 pm

Rick Adkison says:
May 25, 2014 at 9:34 pm
Why are the anomalies associated with the satellite measurements significantly smaller than those from the surface measurements?
The satellite anomalies are measured relative to a later base period which was warmer than an earlier base period. That is why I believe the rank number is much more meaningful than the anomaly number.

May 25, 2014 10:09 pm

If you want to see what the anomaly is, based solely on station measurements (no interpolation for places not actually measured, no other adjustments) follow the link in my name.

Editor
May 25, 2014 10:15 pm

Nick Stokes says:
> May 25, 2014 at 9:54 pm
>
> Again, you should look at the log of changes.
I think we’re talking past each other. You pointed to 3 changes during the past 3 years. My point is that data going back to the 1880’s *CHANGES EVERY SINGLE MONTH* that I download it. The changes may be relatively minor, but they do happen. Can someone do a FOIA request to get GISS monthly anomaly downloads back to day 1? It would be interesting.

May 25, 2014 10:15 pm

alex says:
May 25, 2014 at 10:03 pm
A “pause”? What “pause”?
It is the one that Trenberth is trying to explain via heat in the deep ocean.

Dr Burns
May 25, 2014 10:50 pm

Werner Brozek says:
May 25, 2014 at 6:59 pm
If we average a thousand readings that are only accurate to the nearest degree, we can easily get an average to the nearest 1/1000 degree.
Increased readings give greater precision, not greater accuracy. In addition, this only applies to measurements of the same thing. Measurements of temperatures around the world in different places are not measurements of the same object. If it were true that greater accuracy could be achieved simply by taking more readings, we could get the Earth’s population to hold their fingers in the air to estimate temperature, then calculate an average to a high accuracy.

May 26, 2014 12:58 am

@ Dr Burns, “Increased readings give greater precision, not greater accuracy.”
Thank you, that is precise!

Jim
May 26, 2014 2:20 am

Of course it’ll break a record. That was the objective of this year, to break the record come hook or crook. Obviously it’s been adjusted already. It shows the temps way above everyone else.

Jim
May 26, 2014 2:21 am

This is all about denying the pause exists. These people are criminals.

Editor
May 26, 2014 3:31 am

Ronan Connolly has a detailed critique of how poorly GISS handles the UHI effect.
http://notalotofpeopleknowthat.wordpress.com/2014/05/20/is-the-giss-adjustment-for-uhi-adequate/

thegriss
May 26, 2014 3:46 am

@ john Robertson
“I am all out of patience with these anomalies of 0.001 accuracy.
Pure fiction.
In audio this noise is called what it is, noise.”
NO, in audio, this is called SILENCE.

thegriss
May 26, 2014 3:49 am

“First: “If we average a thousand readings that are only accurate to the nearest degree, we can easily get an average to the nearest 1/1000 degree.””
ONLY if those measurements are made under exactly the same circumstances of exactly the same item.
It’s like measuring the lengths of 1000 random pieces of wood to the nearest mm, and saying the average is good to the nearest 1/1000 mm.
IT ISN’T !!!

May 26, 2014 3:54 am

Dr Burns says:
May 25, 2014 at 10:50 pm
If it were true that greater accuracy could be achieved simply by taking more readings, we could get the Earth’s population to hold their fingers in the air to estimate temperature, then calculate an average to a high accuracy.
Of course it would really help if all people were spread evenly throughout the earth, rather than having 15 million readings from Mexico City and 15 from Antarctica.

May 26, 2014 4:04 am

Jim says:
May 26, 2014 at 2:20 am
That was the objective of this year, to break the record come hook or crook.
And here I thought that a super El Nino was somehow expected to prove that man-made CO2 was responsible. I must confess that I thought the spikes over the last two months in Hadcrut3 and 4 were very surprising, especially since the expected El Nino has not even started yet.

May 26, 2014 4:32 am

thegriss says:
May 26, 2014 at 3:49 am
It’s like measuring the lengths of 1000 random pieces of wood to the nearest mm, and saying the average is good to the nearest 1/1000 mm.
IT ISN’T !!!

Of course you are correct. Let me rephrase that. Suppose that we wish to compare two soccer teams and we find that one team has scored 554 goals in 1000 games and the other has scored 1449 goals in 1000 games. The average for the first team was 0.554 goals a game and the average for the second was 1.449 goals a game. So even though one team scored almost three times as many goals as the other, this would not have been apparent if we simply rounded both 0.554 and 1.449 to the nearest whole number, which would have been 1 in both cases, since you cannot have 1/10 or 1/100 or 1/1000 of a goal.
Of course measurements have inherent uncertainties, so we have this added complexity to deal with. However, that does not mean the information is completely useless if given to one more decimal place than the individual measurements warrant. You just have to keep the limitations in the back of your mind.

kim
May 26, 2014 4:37 am

I saw NikfromNYC’s lovely 9:40 PM rant @ the Bish’s and commented: Never have I missed paragraphs less. Bravo, Nik.
======================

Steve from Rockwood
May 26, 2014 5:13 am

Werner Brozek says:
May 25, 2014 at 6:59 pm

Bill H says:
May 25, 2014 at 6:44 pm
One hundredth of a degree C… What instrument is so carefully calibrated and placed around the world so quickly that we are talking this small a change?
Good question! We must realize a couple of things. If we average a thousand readings that are only accurate to the nearest degree, we can easily get an average to the nearest 1/1000 degree. And we need to realize that numbers within about 0.1 C could be considered a statistical tie. So with GISS, the top ten range from 0.56 to 0.65. So the top ten could be considered to be statistically tied for first.

I never understood this reasoning. You would have to take 1,000 readings in the same location at the same time with the same instrumentation. Then this only improves the precision of the reading and not its accuracy. When you average many readings over different areas taken at different times with different measuring systems the average error goes up and not down.

Bill_W
May 26, 2014 5:44 am

You actually reported most of them to the nearest 1/1000 degree, which is fairly silly. It would serve everyone (and the truth) better if you would report them to the proper number of significant figures and include the error bars. By this I don’t mean just the error bars from averaging together a bunch of numbers, but the real errors carried through in a proper error analysis. If this was hammered home continuously, it would soon be obvious to most that saying one year is “hotter” than another when they differ by a few hundredths of a degree is nonsense.

May 26, 2014 5:46 am

Steve from Rockwood says:
May 26, 2014 at 5:13 am
When you average many readings over different areas taken at different times with different measuring systems the average error goes up and not down.
If you wanted to get the best value of the average temperature on Earth for a given day, you would ideally have many instruments that take 24 hour readings throughout the day. Then you would combine the temperature readings with the length of time at the various readings to get an average for the day at each spot. If you did this with 100 equally spaced spots on Earth, you would get a certain number. But if you did this with a million equally spaced spots on Earth, you would get a certain number that would be more accurate than the first.
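A minimal sketch of that time-weighting step (my own illustration; the reading times and temperatures are made up):

def daily_mean(hours, temps):
    """Time-weighted mean of irregularly spaced readings over one day,
    using trapezoidal integration between successive readings."""
    area = 0.0
    for i in range(len(hours) - 1):
        area += 0.5 * (temps[i] + temps[i + 1]) * (hours[i + 1] - hours[i])
    return area / (hours[-1] - hours[0])

# Hypothetical readings at 0 h, 6 h, 14 h and 24 h
print(daily_mean([0, 6, 14, 24], [10.0, 8.5, 19.0, 11.0]))  # ~13.15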

May 26, 2014 6:06 am

Of course GISS, etc., can set new record highs in 2014. They must and they will.

May 26, 2014 6:16 am

Bill_W says:
May 26, 2014 at 5:44 am
You actually reported most of them to the nearest 1/1000 degree which is fairly silly.
My numbers are right from their sites such as:
http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT3-gl.dat
If you feel they are silly with all numbers to the nearest 1/1000 degree, you need to take it up with them. For me, my purpose is not to tell Hadcrut3 how to report things, but merely to show how long the pause in warming is with their own numbers as they are given.
So if you can convince Hadcrut3 to report things to the nearest 1/10 degree, I will do likewise.

George Tomaich
May 26, 2014 6:42 am

Why can’t the NCDC issue the temperature record for the US Climate Reference Network (USCRN) in a format like that posted by Dr. Roy Spencer? The information goes all the way back to 2008. It would be the most accurate data for the US, which is surely warranted given all the manipulation of the USHCN data. Plus it was paid for out of the public purse.

steverichards1984
May 26, 2014 6:46 am

Werner Brozek says:
If you wanted to get the best value of the average temperature on Earth for a given day, you would ideally have many instruments that take 24 hour readings throughout the day. Then you would combine the temperature readings with the length of time at the various readings to get an average for the day at each spot. If you did this with 100 equally spaced spots on Earth, you would get a certain number. But if you did this with a million equally spaced spots on Earth, you would get a certain number that would be more accurate than the first.
If the average temp of 100 equally spaced spots on earth returned a valid or useful value, then yes, 1000 spots could return a more precise value.
Whether the returned value has an understandable physical meaning is somewhat in doubt!
How we could evaluate for non-sinusoidal temperature variations I do not know.

Steve from Rockwood
May 26, 2014 6:46 am

Werner Brozek says:
May 26, 2014 at 5:46 am

Steve from Rockwood says:
May 26, 2014 at 5:13 am
When you average many readings over different areas taken at different times with different measuring systems the average error goes up and not down.
If you wanted to get the best value of the average temperature on Earth for a given day, you would ideally have many instruments that take 24 hour readings throughout the day. Then you would combine the temperature readings with the length of time at the various readings to get an average for the day at each spot. If you did this with 100 equally spaced spots on Earth, you would get a certain number. But if you did this with a million equally spaced spots on Earth, you would get a certain number that would be more accurate than the first.

Let’s say you started with 100 equally spaced dots (temperature measurements stations) on the Earth. Then, for convenience, you created 1 million dots placed mainly in urban areas, to make the recording as easy as possible. Which set returns the greater accuracy?
Let’s also say the 100 dots were measured continuously, from which the daily min and max plus mean were taken. Compare that to the million dots where people take the measurements at their convenience but note the time, or occasionally miss the readings because they have other things to do (like landing planes etc.).
Consider that some scientists decide that not all the 1 million dots are of the same quality and eliminate 2/3 of them after having used all of them for several years. This of course requires an adjustment, which they calculate and make. Now what can we say about the accuracy of the 333,333 dots?
Some scientists further decide that previous measurements had systematic errors that require adjustment over time. It becomes quite a task to accurately adjust 333,333 dots so different teams tackle different data sets. Now what is the accuracy of this average?
And finally, compare the average value of all those dots to a single record that has been meticulously recorded for hundreds of years but without adjustment and with a precision of +/- 0.5. How does that one record compare (in an accuracy sense) to the average of the hundreds of thousands of dots – each one with its own set of problems.
In my experience, accuracy is affected by systematic errors while precision is affected by random errors. You can always increase precision, but accuracy is very difficult to improve unless you treat each systematic error properly for each data set. Anyone who thinks the worldwide temperature record has an accuracy of +/- 0.01 degrees C just because of the high number of readings has missed a very important point IMHO.

NikFromNYC
May 26, 2014 6:48 am

Kim:
Earlier I made a cathartic scrapbook version of trying to pull climate deception together, as a zoomable graphic:
http://k.min.us/iby6xe.gif
I called it The Green Bank Authority. I found myself rather confused by the sheer enormity of the jungle of claims out there, so I tamed the thicket a bit.

Richard M
May 26, 2014 6:48 am

Nick Stokes says:
May 25, 2014 at 10:00 pm
It was the topic of this very recent post. But the divergence between UAH and the surface indices is a lot less than the divergence between UAH and RSS.

No, UAH and RSS are converging. All you need to do is look at the data.
http://www.woodfortrees.org/plot/rss-land/to/plot/rss/to:2000/trend/plot/uah/to/plot/uah/to:2000/trend/plot/rss/from:2000/to/trend/plot/uah/from:2000/trend
They diverged in the 20th century.

Latitude
May 26, 2014 7:02 am

I haven’t seen anyone state the obvious….
By adjusting past temps down…..they can claim any present year as the hottest
They are going to try and claim this year as the hottest….by some 100th or 10th of a degree…
…next year, if they adjust this year’s temp down……it not only means they were wrong by claiming that this year was the hottest…..it means the next year that’s the hottest doesn’t even have to be a higher temp

May 26, 2014 7:34 am

Steve from Rockwood says:
May 26, 2014 at 6:46 am
Anyone who thinks the worldwide temperature record has an accuracy of +/- 0.01 degrees C just because of the high number of readings has missed a very important point IMHO.
I do not believe anyone is suggesting this. But if they are, then the fact that GISS went up by 0.03 from March to April versus 0.10 for Hadcrut3 disproves that.

May 26, 2014 7:55 am

Richard M says:
May 26, 2014 at 6:48 am
No, UAH and RSS are converging. All you need to do is look at the data.
Actually, they have been diverging since 1998. You cannot compare RSS land with UAH global. See the graph, where they were offset to start at the same place in 1998:
http://www.woodfortrees.org/plot/rss/from:1998/plot/rss/from:1998/trend/plot/uah/from:1998/offset:0.156/plot/uah/from:1998/trend/offset:0.156
Slope lines going in opposite directions are not converging.

May 26, 2014 8:32 am

Richard Mallett says:
May 26, 2014 at 8:08 am
So if none of the data sets can be trusted, how do we know that the pause is real?
We know the pause is real because, despite all the adjustments that make the present warmer and the past cooler, they just cannot adjust too much without looking too suspicious in light of the satellite data. That is the way it appears, anyway. They desperately do not want the pause to be there.

May 26, 2014 8:32 am

@thegriss 3:46am,
You are correct: this so-called temperature information would be silence in an audio circuit.
This is the charade of anomalies, much ado about no change.

beng
May 26, 2014 9:05 am

Satellites show no warming for 16+ yrs, while GISS is setting records????
‘Nuff said.

Steve from Rockwood
May 26, 2014 9:24 am

@Werner:
“I do not believe anyone is suggesting this. But if they are, then the fact that GISS went up by 0.03 +/- 0.25 from March to April versus 0.10 +/- 0.25 for Hadcrut3 disproves that.”
Fixed it for you.

May 26, 2014 10:39 am

Werner,
Excellent analysis as usual. One quibble: We are not in a “pause”. Global warming has stopped.
If GW begins again, then we can correctly call this a pause. Warming may commence again. But it might not. At this point we do not know. The only thing we know is that global warming stopped many years ago.
I am also bothered by the tiny tenth and hundredth of a degree claims. I have calibrated many thermometers. Only the very best, most expensive instruments can resolve a 0.1°C change. With a good thermometer you are lucky to be within ±1°.
In a rational world, we would use whole degrees.

Solomon Green
May 26, 2014 11:00 am

Mi Cro
“If you want to see what the anomaly is, based solely on station measurements (no interpolation for places not actually measured, no other adjustments) follow the link in my name.”
Thanks, I did. Very interesting and well worth the visit.

May 26, 2014 11:14 am

Thank you Steve and dbstealey and all others for your comments so far.
As far as global warming resuming goes, it will be a very long time before the RSS slope over the past 17 years turns positive. But even if that should happen, it will be much longer before the warming is significant at the 95% level over 15 years.
However, depending on where things stand, changes could happen very quickly. For example, Hadcrut3 went from 16 years and 10 months to 13 years and 9 months from March to April. Exactly the same thing could happen with GISS with regard to the 95% level and 15 years. Take a look at the following, where GISS has no warming at the 95% level. A high May anomaly just could push it over the edge. But GISS would then be all by itself, because none of the other data sets are close to losing their 15 years at 95% statistical significance.
http://www.woodfortrees.org/plot/gistemp/from:1997.55/plot/gistemp/from:1997.55/trend
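For readers who want to reproduce this kind of “flat for N months” figure without WFT, here is a rough Python sketch. It applies the Section 1 rule (find the furthest-back start month from which the least-squares slope is still zero or negative) to a made-up monthly series; real use would substitute downloaded RSS or GISS anomalies.

import numpy as np

def months_of_flat_trend(anoms):
    # anoms: monthly anomalies, oldest first
    longest = 0
    for start in range(len(anoms) - 2):
        y = np.asarray(anoms[start:], dtype=float)
        slope = np.polyfit(np.arange(len(y)), y, 1)[0]
        if slope <= 0:
            longest = max(longest, len(y))
    return longest

# Toy series: ten years of warming, then a slightly declining noisy plateau.
demo = list(np.linspace(0.0, 0.5, 120)) + \
       list(0.5 - 0.001 * np.arange(60) + 0.05 * np.sin(np.arange(60)))
print(months_of_flat_trend(demo), "months of zero-or-negative trend")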

Jimbo
May 26, 2014 12:45 pm

As the correlation between CO2 and temperature breaks down, we have a new correlation! From the BBC. Maybe the BBC is being given a hint by the statistician and owner of the website ‘Spurious Correlations’.

BBC – 25 May 2014
Spurious correlations: Margarine linked to divorce?
http://www.bbc.com/news/magazine-27537142

john cooknell
May 26, 2014 1:26 pm

The amazing thing is that in historical times, with very little added human CO2 and human population only a fraction of what it is now, atmospheric temperatures were recorded that appear to approach those values recorded in the present.

Steve from Rockwood
May 26, 2014 1:59 pm

Enjoyed your post Werner 😉

Bob Koss
May 26, 2014 4:17 pm

What happened to the Hawaii land temperature data? Giss has no data after February 2013. Did I miss the news of a massive eruption destroying the islands? Temperature had been dropping there for a few years. Maybe it has continued to be inconvenient for setting a record.

May 26, 2014 4:23 pm

Bob Koss says:
May 26, 2014 at 4:17 pm
Sorry! I only work with global data or Hadsst3 as I am more interested in the global aspect of global warming. Hopefully another reader can help you out.

Nick Stokes
May 26, 2014 4:40 pm

Bob Koss says: May 26, 2014 at 4:17 pm
“What happened to the Hawaii land temperature data? Giss has no data after February 2013.”

GHCN has up-to-date data on all four major islands. GISS is up to date for Hilo and Kahului, but for some reason lags with Honolulu and Lihue.

May 26, 2014 5:00 pm

(By the way, in January of this year, it was stated that the 2010 anomaly was 0.67. I do not understand how 2010 lost 0.02 C over the last four months.) Here again Steve Goddard has the answers: this is just plain old-fashioned fraud, not even scientific fraud.
http://stevengoddard.wordpress.com/2014/01/15/almost-23-of-giss-warming-is-fake/

noloctd
May 26, 2014 5:43 pm

One of these days they’ll forego adjusting data to meet their political desires and just create a program to make up “data” out of whole cloth.

Bob Koss
May 26, 2014 5:45 pm

Nick,
If the stations aren’t listed in the Giss homogenized database they don’t get used. Giss isn’t using the stations you mentioned. Using the Giss maps tool to plot land data trends, you will see Hawaii shows a negative trend since 2001 of about 0.35°C. In effect they have now wiped the whole state off the earth.
Here are the only stations listed as being used. Only two of them have data as recent as February 2013. The next most recent stations ended in 2003.
http://data.giss.nasa.gov/cgi-bin/gistemp/find_station.cgi?lat=21.3&lon=-158.1&dt=1&ds=14

Nick Stokes
May 26, 2014 6:02 pm

Bob Koss says: May 26, 2014 at 5:45 pm
“If the stations aren’t listed in the Giss homogenized database they don’t get used. Giss isn’t using the stations you mentioned.”

They have the data for Kahului and Hilo, and list it pre-homogenization. It seems they can’t homogenize it because of recent gaps.
I’ve plotted recent unadjusted GHCN station trends here. I can’t see anything downward at Honolulu. I get 1.59°C/century since 1997, or 1.29°C/century since 1977. Much in line with nearby SST.

RACookPE1978
Editor
May 26, 2014 6:03 pm

Bob Koss says:
May 26, 2014 at 5:47 pm
Nick,
Here is the Giss trend map for land data.

Got a lot of millions of square kilometers of arctic up there past 72 and 80 north that are colored red and orange that HAVE NO THERMOMETERS ON THEM.
See, the DMI has measured the daily air temperatures at 80 north latitude since 1959. There has been 0.0 increase in summertime temperatures up there at 80 north since 1959. (Winter temps have gone up, but there is no sunlight up there in winter for the CO2 to interact with.)

Nick Stokes
May 26, 2014 6:53 pm

RACookPE1978 says: May 26, 2014 at 6:03 pm
“Got a lot of millions of square kilometers of arctic up there past 72 and 80 north that are colored red and orange that HAVE NO THERMOMETERS ON THEM.”

As stated, it shades colors on a triangle mesh between stations. You can see the stations, and the mesh too, if you want.
“Winter temps have gone up, but there is no sunlight up there in winter for the CO2 to interact with…”
Sunlight does not interact with CO2; CO2 obstructs IR.

May 26, 2014 7:21 pm

Kid to his mother upon being caught fibbing:
“But Maa, It’s not a lie. It’s the truth, but I homogenized it”

DR
May 26, 2014 8:26 pm

Anyone remember this letter from GISS? I didn’t save the link, but it’s out there.
http://i.imgur.com/McgXE3y.png

Bob Koss
May 26, 2014 11:28 pm

Nick,
Hilo, Honolulu, and Lihue were all being homogenized and used by Giss as recently as 2011 in ver. 2. All show a negative trend of several degrees/century since 2001. After February 2013 Giss uses no temperature data for Hawaii. Makes me wonder what is going on.
I noticed Hawaii annual temperatures were down around the time of the 1998 super El Nino (close to the 1951-80 average). That may be the reason for your positive trend for Honolulu starting with 1997. Around 2010 I believe we had another El Nino, when annual temperature also dropped in Hawaii. Could it be El Nino has the effect of reducing land temperatures around Hawaii? If that were true, I imagine it would be easier to set a new temperature record without Hawaii data. That area covers around 1.5 million sq km, the way they spread temperature data around.

William Astley
May 27, 2014 1:38 am

In reply to:
The current Hadcrut4 average is 0.500, which is only 0.05 below its record of 0.547. And as is the case with GISS, the April anomaly was a huge 0.641, so if this anomaly were to continue for the rest of 2014, Hadcrut4 would also set a new record.
The current UAH average is 0.171. This is 0.25 below the 1998 record of 0.419. This record seems safe for this year. Even the April anomaly of 0.184 would not challenge the record if it continued for the rest of the year. (Note: This applies to version 5.5.)
William:
There is the first observational evidence of cooling. The UAH data set shows the Southern Hemisphere temperature has dropped to the 30 year average.
http://www.ospo.noaa.gov/data/sst/anomaly/2014/anomnight.5.26.2014.gif
The mechanism that was inhibiting GCR and solar wind modulation of planetary clouds for the last 7 to 10 years is now starting to abate. (GCR is an abbreviation for galactic cosmic rays, also called cosmic ray flux, CRF. There are no actual rays; the first discoverers thought they had found a new type of radiation, and the scientific community never bothered to correct the name. GCR/CRF are mostly high-speed protons, which are modulated by the strength and extent of the solar heliosphere.)
This chart shows neutron counts at a high latitude location. Neutron counts are proportional to the amount of GCR striking the earth’s atmosphere.
http://cosmicrays.oulu.fi/webform/query.cgi?startday=27&startmonth=03&startyear=1975&starttime=00%3A00&endday=27&endmonth=04&endyear=2014&endtime=00%3A00&resolution=Automatic+choice&picture=on
If I understand the mechanisms, and based on cycles of similar warming and cooling that correlate with solar magnetic cycle changes, roughly 90% of the warming in the last 150 years (90% of 0.8C, which is 0.7C) was due to solar magnetic cycle modulation of planetary cloud cover.
The UAH data is not contaminated by the urban heat effect and is not contaminated by climategate-type manipulation. The UAH satellite-measured long-term average temperature anomaly for the planet is roughly 0.3C. Cooling of 0.8C will result in a UAH average (new typical base) of -0.5C.

Richard M
May 27, 2014 6:56 am

Werner Brozek says:
May 26, 2014 at 7:55 am
Richard M says:
May 26, 2014 at 6:48 am
No, UAH and RSS are converging. All you need to do is look at the data.
——-
Actually, they have been diverging since 1998. You cannot compare RSS land with UAH global. See the graph, where they were offset to start at the same place in 1998:
http://www.woodfortrees.org/plot/rss/from:1998/plot/rss/from:1998/trend/plot/uah/from:1998/offset:0.156/plot/uah/from:1998/trend/offset:0.156
Slope lines going in opposite directions is not converging.

I didn’t compare UAH global to RSS land. I used both global trend lines. Of course, if you offset them to start at the same point it looks like they are diverging, but that is fooling with the graphs and is not an accurate representation of the data. If you look at the entire time period, they diverged in the 20th century and are now converging. (I assumed they are both based on the same base period.)

May 27, 2014 7:32 am

Richard M says:
May 27, 2014 at 6:56 am
(I assumed they are both based on the same base period).
See: http://www.drroyspencer.com/2012/11/
“Differences with RSS over the Last 2 Years
Many people don’t realize that the LT product produced by Carl Mears and Frank Wentz at Remote Sensing Systems has anomalies computed from a different base period for the average annual cycle (1978-1998) than we use (1981-2010). They should not be compared unless they are computed about the same annual cycle.”

Richar Barraclough
May 27, 2014 9:52 am

Walter Dnes says:
May 25, 2014 at 9:20 pm
The comments about GISS’ “dancing data” intrigued me. I’ve been downloading most major world datasets for a few years. GISS is available at http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt and I whipped up a quickie bash script to go through my downloaded files and see what GISS has been reporting for 2010.

Walter,
Thank you for that – very illuminating.
I downloaded all the datasets a couple of years ago, and imported them into a spreadsheet so that I could do my own analyses on them. Then each month I just type in the most recent figure, rather than downloading the whole thing again.
How naive of me! The GISS figures change all the time, and even the most recent couple of months have already been revised. I must be careful to load the latest version in full every month. Are the other datasets as volatile? The occasional correction I can understand, but continual changes to the same month several years back suggest that none of the figures is particularly accurate, and that they will change repeatedly in the future.
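Here is a rough Python version of the snapshot-comparing idea, for anyone who keeps monthly downloads. The parsing is simplified and the filenames are hypothetical; it assumes, as in the GLB.Ts+dSST.txt layout, that year rows start with a 4-digit year followed by twelve monthly values.

def parse(path):
    table = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0].isdigit() and len(parts[0]) == 4:
                table[int(parts[0])] = parts[1:13]  # Jan..Dec, kept as strings
    return table

def diff(old_path, new_path):
    old, new = parse(old_path), parse(new_path)
    for year in sorted(set(old) & set(new)):
        for m, (a, b) in enumerate(zip(old[year], new[year]), start=1):
            if a != b:
                print(f"{year}-{m:02d}: {a} -> {b}")

# diff("GLB_2014-04.txt", "GLB_2014-05.txt")   # two saved snapshots

Run against consecutive downloads, it lists every month whose reported value moved, broadly the same check Walter describes doing with bash.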

May 27, 2014 10:16 am

Richar Barraclough says:
May 27, 2014 at 9:52 am
Are the other datasets as volatile?
Just in case Walter does not respond with a more complete answer, my experience has been that the most recent month may have an adjustment of a few thousandths of a degree, and occasionally the second-last month may also have a slight change. So if you check the previous two months, you should be OK 99% of the time on all but GISS.

Richard M
May 27, 2014 10:22 am

Werner Brozek says:
May 27, 2014 at 7:32 am
See: http://www.drroyspencer.com/2012/11/
“Differences with RSS over the Last 2 Years
Many people don’t realize that the LT product produced by Carl Mears and Frank Wentz at Remote Sensing Systems has anomalies computed from a different base period for the average annual cycle (1978-1998) than we use (1981-2010). They should not be compared unless they are computed about the same annual cycle.”

Do you know what the difference comes out to? That would be the proper amount to offset the graphs. For example, using -0.05 for RSS, it shows a divergence followed by convergence and then divergence in the opposite direction.
http://www.woodfortrees.org/plot/rss/to/plot/rss/to:2000/trend/offset:-.05/plot/uah/to/plot/uah/to:2000/trend/plot/rss/from:2000/to/trend/offset:-.05/plot/uah/from:2000/trend

May 27, 2014 10:58 am

The implicit assumption here is that if a record is indeed set, the reason for that record will be human CO2. While we dither with hundredths of a degree, it is well to remember the one degree drop and recovery from the MWP to the present.
Pick your trend.

Richard Mallett
Reply to  gymnosperm
May 27, 2014 1:16 pm

What source(s) are you using for the difference between the MWP / LIA and the present?

Reply to  Richard Mallett
June 6, 2014 6:39 am

I was just using my eyechrometer on this:
http://wp.me/a1uHC3-jd
Holocene optimum to LIA: nearly one degree. Maybe only 3/4 degree from the MWP. Still a lot more than a few hundredths.

May 27, 2014 11:10 am

Richard M says:
May 27, 2014 at 10:22 am
Do you know what the difference comes out to?
I cannot give a complete answer since there are so many variables and I do not know them all. Then there is the point that WFT only has version 5.5 and not version 5.6, and version 5.6 is even further from RSS. However, what WFT shows is that the average of all RSS values is 0.09774 and the average of all UAH values is 0.00878. So taking the difference of 0.08896 and adding it to UAH, we get the following graph. It shows that UAH started cooler and is getting warmer, whereas RSS started warmer and is getting cooler.
http://www.woodfortrees.org/plot/uah/from:1978/offset:0.08896/plot/rss/from:1978/plot/uah/from:1978/trend/offset:0.08896/plot/rss/from:1978/trend
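The offset arithmetic here is simple mean-matching, which a short Python sketch makes explicit (the figures are the WFT averages quoted above):

import numpy as np

def match_means(reference, other):
    # shift 'other' so its overall mean equals that of 'reference'
    return np.asarray(other) + (np.mean(reference) - np.mean(other))

# With the figures above, 0.09774 - 0.00878 = 0.08896 is added to every
# UAH value before the two trend lines are drawn.

Whether mean-matching over the whole record or aligning the series at 1998 is the fairer comparison is, of course, the substance of the disagreement in this sub-thread.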

May 27, 2014 11:14 am

gymnosperm says:
May 27, 2014 at 10:58 am
The implicit assumption here is that if a record is indeed set, the reason for that record will be human CO2.
True, if all data sets have a record. But what would be the implication if GISS set a record but UAH came in 10th?

May 27, 2014 11:55 am

Solomon Green says:
May 26, 2014 at 11:00 am

Thanks, I did. Very interesting and well worth the visit.

Thanks for taking the time to look.

May 27, 2014 1:27 pm

Richard Mallett says:
May 27, 2014 at 1:16 pm
What source(s) are you using for the difference between the MWP / LIA and the present?
Did you post this here by mistake? I have not mentioned either one.

Richard Mallett
Reply to  Werner Brozek
May 27, 2014 1:49 pm

I posted my message to gymnosperm in a box that said ‘Reply to gymnosperm’ – was that wrong?

May 27, 2014 2:11 pm

Richard Mallett says:
May 27, 2014 at 1:49 pm
I posted my message to gymnosperm in a box that said ‘Reply to gymnosperm’ – was that wrong?
It seems as if what you tried to do is what can be done on Dr. Spencer’s site, where you reply to a specific person and the reply appears just below that person’s post. At WUWT, all responses are in order, so you should do what I did above so everyone knows which specific entry you are responding to. In order to italicize the part of a person’s post you are replying to, put the following before the quoted part: [i] Then at the end, put the following: [/i]
I hope [i]gymnosperm[/i] responds.
To bold things, just replace i with b in the above, and replace the [ and ] with angled brackets as in HTML tags.
Cheers
[Mod note: Hyperlinks format automatically in WordPress.
One can also use [blockquote] and [/blockquote] to italicize and indent a previous writer’s words.
Use the “Test Page” to try your formatting first if you are uncertain. There are also examples on that page of WUWT. .mod]

Richard Mallett
Reply to  Werner Brozek
May 27, 2014 2:35 pm

Let me try the italics thing :-
gymnosperm says:
May 27, 2014 at 10:58 am
The implicit assumption here is that if a record is indeed set, the reason for that record will be human CO2. While we dither with hundredths of a degree, it is well to remember the one degree drop and recovery from the MWP to the present.
Pick your trend.

What source(s) are you using for the difference between the MWP / LIA and the present?
(Selecting text for copying and pasting takes some practice)

May 27, 2014 2:15 pm

Ooops!
My formatting worked, but not my explanation. Please see:
http://home.comcast.net/~ewerme/wuwt/index.html

Richard Mallett
Reply to  Werner Brozek
May 27, 2014 2:38 pm

Thanks to Werner Brozek, I think I have it now. The blogs all seem to be slightly different. Some also have ‘reply’ links on (some of) the messages.

Nick Stokes
May 27, 2014 2:55 pm

Richar Barraclough says: May 27, 2014 at 9:52 am
“The GISS figures change all the time, and even the most recent couple of months have already been revised. I must be careful to load the latest version in full every month. Are the other datasets as volatile?”

GISS is basically an aggregator. It brings in data from GHCN (adjusted), an SST data set, USHCN, and others like SCAR. If any of those makes a change, then so does GISS. Also, by the anomaly arithmetic, any change in the base period makes a small change to all years.
The upside of aggregation is that changes, while frequent, are usually small.

Richar Barraclough
May 27, 2014 3:36 pm

Thanks Nick
That explains why the whole dataset can change – a change within any of the contributing datasets during the base period. I know the changes tend to be small – but no less annoying!

Richard Barraclough
May 27, 2014 3:37 pm

Aaargh – can’t even spell my own name…….

Dave Peters
May 27, 2014 7:18 pm

I am trying to sample the reasoning from both ideological poles, but basically am a committed warmist.
For those who question the meaning of the second numeral to the right of the decimal in a two-digit summary of a monthly or annual mean anomaly, consider how many conspirators it would require to actively bias the data stream enough to shift the result by even a single hundredth of a degree, to a bogus result. Has anyone specified the actual number of observations under-girding the Hadley or GISS two- and three-digit summaries? Is it in the tens of millions?
That digit has no meaning only to those who wish it had none. There are so many observations behind it that it has obvious meaning beyond the calibration of individual instrument readings.
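For what the “so many observations” argument does and does not buy, here is a quick Python illustration; sigma is an assumed per-reading random error, not a published figure.

import math

sigma = 0.5   # assumed random error per reading, degrees C
for n in (1_000, 1_000_000, 10_000_000):
    print(f"N={n:>10,}: standard error of mean = {sigma / math.sqrt(n):.5f} C")

Ten million independent readings would indeed pin the random component down to a few ten-thousandths of a degree. The earlier objection in this thread is that shared systematic errors are untouched by such averaging.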

Reply to  Dave Peters
May 27, 2014 7:29 pm

After 1973 there are 2-3 million samples per year; before that, far fewer, depending on the year.
If you go to the link in my name, I have some different pages, including yearly sample counts by country for the GSOD data I used.

Dave Peters
May 27, 2014 8:31 pm

By and large, I find the crowd on this thread uninterested in answering a question, and determined to see the evidence they desire to see. Since the Earth began to measurably warm according to instrument readings, we can assess its average warming rate.
A thirty-five year crawling average bottoms in 1907, at -0.55 F for the GISS global record, and compares to +0.80 F for the five-year interval between 1996 and 2000. We warmed by 1.36 F in 91 years, at an average rate of 1.5 hundredths F per year. Since the giant El Nino in ’97-’98, the five-year average has warmed to 1.08 F in 13 years, or at a rate of 2.2 hundredths F per year. Forty-five percent faster than in the last century. Not a pause.
Fully a third of this recorded warmth emerged from the Pacific in that one special year, or 35 years’ worth of typical heating. In the wake of that event, one would not expect to see average warming resume for another decade and a half. But you guys are not looking to see IF the world is warming, you are trying to erect reasoning justifying your assessment that it is not. In either case, how you make allowance for the special El Nino is everything.
You ought to allow, in my view, for that fourth-power law governing Planck radiation, and the portion of the IR spectrum which finds the atmosphere transparent. When the super El Nino so raised the surface temperature, there began an immediate radiation loss proportionate to that hike raised to the fourth power. Looking backwards from the energy regime established by so extraordinary an elevation, you ought to focus upon how the planet can sustain so large an IR bleed rate.
In view of the tension between Planck loss and the achievement of surface temperature hikes, how in Hell you “statistically” inspect the post El Nino period without reference to that thermal Niagara is quite mysterious. And physically meaningless, and only statistically significant in the absence of any awareness of 1997-98. But of course, we know about that world-shaping event, don’t we?
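Dave’s fourth-power point can be put in rough numbers with the Stefan-Boltzmann law; the sketch below treats the surface as a blackbody, which is a simplification.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def extra_emission(t_base_k, dt_k):
    # increase in blackbody emission when temperature rises by dt_k
    return SIGMA * ((t_base_k + dt_k) ** 4 - t_base_k ** 4)

# A 0.2 K El Nino spike on a ~288 K surface raises emission by about
# 1.1 W/m^2, the IR "bleed" that works against sustaining the spike.
print(round(extra_emission(288.0, 0.2), 2), "W/m^2")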

Dave Peters
May 27, 2014 8:34 pm

Thanx, Cro..

May 28, 2014 4:35 am

Dave Peters says:
May 27, 2014 at 8:31 pm
Thank you for your comments. As for the pause, it started in the 2000s for all except RSS. You talk about “the tension between Planck loss and the achievement of surface temperature hikes”. That is true and an interesting way of looking at things. So does that mean you agree that warming will never be catastrophic, as this would lead to huge tensions that would undermine any catastrophic temperature spike?

Dave Peters
May 28, 2014 2:21 pm

Werner — Thank YOU for the courtesy of your reply.
On your pt. #1: “the pause started in the 2000’s”
EXACTLY! In view of my point, how could there NOT be a helluvah pause, following that 35 years’ worth of (perhaps) AGW burp? In addition to accumulating saturation, added CO2 forcing faces the task of “pressing” against that Planck fourth-power thing. I envision that as doing push-ups while someone keeps putting added weights on my back. But, more tellingly, if you consider things from the view of radiative equilibrium, the flatness of the plateau seems to me to strongly argue for there being massive H2O radiative support for the huge thermal hike experienced by the surface in ’97-’98. I cannot grasp how a Lindzen world would allow a given heat pulse to linger for a decade.
On pt. #2. Ought catastrophic warming be averted by such increases in difficulty of sustaining ever-increasing Planck bleeds?
That is the vex. Where are we headed? I live in Colorado Springs. Two springs back, when the lower 48 saw that >1 F leap in temps in a single year, the sun in March beat down on a nearly snowless western slope, baking it. I live nearly 7,000 feet above sea level, but as we moved towards the solstice we had a never-before-seen 101 F reading, and the weatherman said something I’d never heard in my 63 years: “the relative humidity is so low, you may as well say there is none”. By late afternoon, I could smell pine smoke and see a gentle snow of ash falling in my back yard. By nine PM, I walked up a hill three blocks from my home and watched across towards Pike’s Peak as 350 half-million-dollar homes burned. Some as lingering glows, some as suddenly erupting conflagrations. The next day, a fire commander explained that, with temps so high and humidity so low, a hoisted cone had a 60% chance of re-ignition. And, indeed, a two-mile-wide mountain lake, which I thought a certain barrier to the spreading Waldo inferno, proved inadequate, as falling embers ignited a new blaze on its western shore. Now, we all know that we cannot “attribute” Waldo to AGW, but we also know that since 101 had never been seen by Europeans here, we cannot eliminate AGW as a likely contributing influence. So, catastrophic to whom?
Let’s say we luck out and live upon Lindzen-world. No feedbacks. So absolutely assured warming of 2.7 F attends 2X CO2. An American exhausts 6X CO2 to his 1/7th of a billionth of the atmosphere, in his 78 years. So by age 15, he is @ 2X, by his moral example. Is 2.7 F catastrophic? Probably not. But that temp is for the global surface, and we live on the land surface, where modeled temps are 2X, and observed are 3X. So, assume we luck out again, and face 2X. That’s 5.4 F. So by age 45 we are @ >10 F. Now, we would need to live to ninety to see committed warming of 20 F, in a best-dream Lindzen-world. But the overwhelming evidence suggests active thermal feedbacks. Furnace Creek in Death Valley is 18 F higher than the norm, as the hottest place on Earth. I rode a bike there on an August afternoon in 1988, just to see what life was like in that thermal extreme. I took eight two-liter water bottles with me, and the furthest I could ride without drinking was two power poles. About 300 yards. Catastrophic? I am still here. But I consider this an inappropriate moral example to set for the Chindians, in whose hands the fate of our posterity now largely rests. I am no fan of windmills or “negawatts”, but believe we could have and should have gone balls-to-the-wall building nuclear forty years ago, as France did. Their residential power costs 2/3rds the average of the UK, Germany, Netherlands, Spain, Denmark and Italy, last year. So, that much carbonlessness is utterly free.
Thanks for your efforts to wrestle with these matters. We have learned a million-fold about climate since Charney asserted the above numbers in 1979. They are identical to the latest IPCC figures. Likely, we will make the essential world-shaping choices with something close to these contemporary notions of the catastrophic, and the likelihoods.

May 28, 2014 4:20 pm

Dave Peters says:
May 28, 2014 at 2:21 pm
Thank you for your reply. They say hindsight is 20/20. They certainly did not expect any pause.
Phil Jones, July 5, 2005:
“The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. Okay it has but it is only seven years of data and it isn’t statistically significant.”
http://mnichopolis.hubpages.com/hub/ClimateGate-The-Smoking-Gun-email

Richard Mallett
June 6, 2014 7:16 am

http://wattsupwiththat.com/2014/05/25/can-giss-and-other-data-sets-set-records-in-2014-now-includes-april-data/#comment-1655664
I hope I’m doing it right this time 🙂
Thanks for that; if you find something in higher resolution, or (better still) with data, please let me know.