NOAA's new normals – a step change of 0.5 degrees F

This from NOAA, which I missed last week while I was at ICCC6. But wait, there’s more! Have a look at the maps I produced below the “Continue reading>” line using NOAA’s own tools.

Average U.S. temperature increases by 0.5 degrees F

New 1981-2010 ‘normals’ to be released this week

June 29, 2011

Statewide changes in annual “normal temperatures” (1981 – 2010 compared to 1971 – 2000).

Download here. (Credit: NOAA)

According to the 1981-2010 normals to be released by NOAA’s National Climatic Data Center (NCDC) on July 1, temperatures across the United States were, on average, approximately 0.5 degrees F warmer than during the 1971-2000 period.

Normals serve as 30-year baseline averages of important climate variables, used to understand average climate conditions at any location and to provide a consistent point of reference. The new normals update the 30-year averages of climatological variables, including average temperature and precipitation, for more than 7,500 locations across the United States. This once-a-decade update will replace the current 1971–2000 normals.

In the continental United States, every state’s annual maximum and minimum temperature increased on average. “The climate of the 2000s is about 1.5 degree F warmer than the 1970s, so we would expect the updated 30-year normals to be warmer,” said Thomas R. Karl, L.H.D., NCDC director.

Using standards established by the World Meteorological Organization, the 30-year normals are used to compare current climate conditions with recent history. Local weathercasters traditionally use normals for comparisons with the day’s weather conditions.

In addition to their application in the weather sector, normals are used extensively by electric and gas companies for short- and long-term energy use projections. NOAA’s normals are also used by some states as the standard benchmark by which they determine the statewide rate that utilities are allowed to charge their customers.

The agricultural sector also heavily depends on normals. Farmers rely on normals to help make decisions on both crop selection and planting times. Agribusinesses use normals to monitor “departures from normal conditions” throughout the growing season and to assess past and current crop yields.

NCDC made many improvements and additions to the scientific methodology used to calculate the 1981-2010 normals. They include improved scientific quality control and statistical techniques. Comparisons to previous normals take these new techniques into account. The 1981-2010 normals provide a more comprehensive suite of precipitation and snowfall statistics. In addition, NCDC is providing hourly normals for more than 250 stations at the request of users, such as the energy industry.

Some of the key climate normals include: monthly and daily maximum temperature; monthly and daily minimum temperature; daily and monthly precipitation and snowfall statistics; and daily and monthly heating and cooling degree days. The 1981-2010 climate normals are one of the suite of climate services NOAA provides government, business and community leaders so they can make informed decisions. NOAA and its predecessor agencies have been providing updated 30-year normals once every decade since the 1921-1950 normals were released in 1956.
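(For readers unfamiliar with degree days, here is a minimal Python sketch of how daily heating and cooling degree days are conventionally computed against the customary 65°F base. It is an illustration only, not NCDC’s actual processing code, and the daily values are made up.)

```python
# Minimal sketch: heating/cooling degree days against the conventional 65 F base.
# Illustration only -- not NCDC's processing code; the daily means are invented.

def degree_days(daily_mean_temps_f, base=65.0):
    """Return (heating, cooling) degree-day totals for a list of daily mean temps in F."""
    hdd = sum(max(0.0, base - t) for t in daily_mean_temps_f)  # days colder than base
    cdd = sum(max(0.0, t - base) for t in daily_mean_temps_f)  # days warmer than base
    return round(hdd, 1), round(cdd, 1)

week = [58.0, 61.5, 66.0, 70.2, 63.8, 55.1, 68.9]  # hypothetical daily means, one week
print(degree_days(week))  # (21.6, 10.1)
```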

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.

==============================================================

The interesting thing about baselines is that by choosing your baseline, you can make an anomaly from that baseline look like just about anything you want it to be. For example, NOAA’s Earth System Research Laboratory offers a handy dandy map creator for US Climate Divisions that allows you to choose the baseline period (from a predetermined set they offer), including the new 1981-2010 normals released on July 1st.

Here are some examples I get simply by changing the base period.

Here’s the most current (new) base period of 1981-2010. Doesn’t look so bad does it?

 

Now if we plot the entire period, look out, we are on fire! Global warming is running amok!

Oh, wait, look at the scale. It shows only about 0.08 to 0.13°F of warming. The ESRL plotting system automatically assigns a range to the scale based on data magnitude, but keeps the same color range.

The point of this? Temperature anomalies can be anything you want them to be; because they force a choice of baseline, they force you to cherry-pick the periods used in your presentation of the data. The choice is up to the publisher.
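A minimal Python sketch of that point, using made-up numbers (the observed value and both 30-year means below are hypothetical): the same observed temperature looks strongly “warm” against a cool baseline and only mildly so against a recent one.

```python
# Same hypothetical observation, two baselines -- only the reference changes.
obs_2010 = 54.0          # made-up annual mean, degrees F
base_1951_1980 = 52.8    # made-up cool-era 30-year mean
base_1981_2010 = 53.6    # made-up recent 30-year mean

print(round(obs_2010 - base_1951_1980, 1))  # 1.2 F anomaly: deep red on a map
print(round(obs_2010 - base_1981_2010, 1))  # 0.4 F anomaly: much paler
```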

Now the next question is: with NOAA/NCDC following standard WMO procedure for a new 30-year base period (Roy Spencer at UAH did the same a few months back), will NASA GISS start using a modern normals baseline for their publicly presented default graphs, rather than the cooler and long-outdated 1951-1980 baseline for the press releases and graphs they foist on the media? Of course, for a variety of reasons, they’ll tell you no. But it is interesting to see the rest of the world moving on while GISS remains stoically static. In case you are wondering, here is the offset difference between the 1961-1990 and the 1951-1980 base periods. This will of course change again with a more recent baseline, such as NOAA has adopted; while the slope won’t change, the magnitude of the offset will. More here.

base period comparison

The issue is the public presentation of data. If making such a baseline change is something NOAA does, should not NASA follow suit in the materials they release to the press?

Anomalies – any way you want them:

45 Comments
Jeff Carlson
July 7, 2011 9:15 am

wake me when NOAA produces some science …

Anything is possible
July 7, 2011 9:30 am

I am surprised that nobody ever seems to question the use of the 30-year baselines that are used so religiously in climate studies. As far as I can tell, it is merely a compromise figure that the WMO got everybody to agree to back in the day, and has no actual scientific significance whatsoever.
Given that the major oceanic cycles seem to repeat over c.65 years, surely that would be a more appropriate and significant period to use when establishing baselines.
Anyone else have thoughts on this?

dp
July 7, 2011 10:00 am

I am 65. My average age is 33. I feel 65. My average age for the past 32 years is about 50. I still feel 65. My average age between 1946 and 1978 is 16. I feel 65. No matter how I spin it I don’t feel anything but 65. This doesn’t seem productive.
I’m tired… I’m going home now… -Forrest Gump

Richard Holle
July 7, 2011 10:06 am

The 30-year rule was developed when the books the records were hand-entered into were published in sets of 10 years; at the time the decade book became filled, there were pages for the monthly averages, annual averages, and the decade average for each date.
They settled on 30 years because it was about all you could do to open three old sets of records and the current, on a 3 X 6 foot desk. You could scan through all four books a page at a time, but try opening two more books and it was a shuffle them around nightmare.

Mark
July 7, 2011 10:11 am

Anything is possible says (at 9:30 am) – “I am surprised that nobody ever seems to question the use of the 30-year baselines that are used so religiously in climate studies.”
I assume the baseline was picked to be 30 years as that would give you a nice number, statistically speaking (n=30), to estimate sigma, which in turn can be used to calculate a 95% CI.

ShrNfr
July 7, 2011 10:38 am

What else about the AMO does NOAA not understand?

Bill Yarber
July 7, 2011 10:49 am

I think the 30 year period comes from historical perspective. Look at the temp data from 1880 to today and you see about 30 years of warming followed by 30 years of cooling. Probably the basis of the Farmers Almanac predictions. The PDO, AO, AMDO and other natural cyclical events are just window dressing. (sarcasm off)

July 7, 2011 10:50 am

Anthony:
Actually NCDC could be violating WMO standards, and probably is if they applied these normals to GHCN data. The WMO states in their guidelines that normals should NOT be changed unless there are more stations in the new normals compared to the old normals:

GUIDE TO CLIMATOLOGICAL PRACTICES THIRD EDITION
SNIP
4.8.1 Period of calculation
Under the Technical Regulations (WMO‐No. 49), climatological standard normals are averages of climatological data computed for the following consecutive periods of 30 years: 1 January 1901 to 31 December 1930, 1 January 1931 to 31 December 1960, and so forth. Countries should calculate climatological standard normals as soon as possible after the end of a standard normal period. It is also recommended that, where possible, the calculation of anomalies be based on climatological standard normal periods, in order to allow a uniform basis for comparison. Averages (also known as provisional normals) may be calculated at any time for stations not having 30 years of available data (see 4.8.4). Period averages are averages computed for any period of at least ten years starting on 1 January of a year ending with the digit 1 (for example, 1 January 1991 to 31 December 2004). Although not required by WMO, some countries calculate period averages every decade.
Where climatological standard normals are used as a reference, there are no clear advantages to updating the normals frequently unless an update provides normals for a significantly greater number of stations. Frequent updating carries the disadvantage that it requires recalculation of not only the normals themselves but also numerous data sets that use the normals as a reference. For example, global temperature data sets have been calculated as anomalies from a reference period such as 1961‐1990 (fig. 4.19). Using a more recent averaging period, such as 1971‐2000, results in a slight improvement in “predictive accuracy” for elements which show a secular trend (that is, where the time series shows a consistent rise or fall in its values when measured over a long term). Also, 1971‐2000 normals would be viewed by many users as more “current” than 1961‐1990. However, the disadvantages of frequent updating could be considered to offset this advantage when the normals are being used for reference purposes.

http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf
As shown in the above section from the WMO guide, UAH switched to a 30-year baseline as per the WMO: “Countries should calculate climatological standard normals as soon as possible after the end of a standard normal period.”
As soon as they had 30 years matching the guidelines they made standard normals. NCDC on the other hand did not follow WMO guidelines if they switched normals on the GHCN dataset, due to the “Great Dying of Thermometers”. There were more stations during the 71-2000 baseline than there are in the 81-2010 baseline in GHCN, therefore it would violate this: “Where climatological standard normals are used as a reference, there are no clear advantages to updating the normals frequently unless an update provides normals for a significantly greater number of stations. Frequent updating carries the disadvantage that it requires recalculation of not only the normals themselves but also numerous data sets that use the normals as a reference.”
At a minimum, the WMO does not support this type of change in normals that NCDC is doing. Now if there are more stations in USHCN for 81-2010 than 71-2000, it is an appropriate update for that dataset, but only for that dataset.
As to GISS, they should have updated from 51-80 to 61-90 because there were more stations in GHCN, and again to 71-2000. However, the WMO does give them an out for not doing so: “However, the disadvantages of frequent updating could be considered to offset this advantage when the normals are being used for reference purposes.”

Doug Allen
July 7, 2011 11:01 am

I suspect almost all of that 0.5 degrees F is the removal of the cold 1970’s which I remember well in upstate NY. I’m originally from Cincinnati, and the savage winter of 1977-78 there and throughout the midwest was one for the books. Here’s some info about where I was born during that winter- http://www.enquirer.com/editions/2000/12/31/loc_dont_look_for_river.html

July 7, 2011 11:02 am

Given what part of the 60-yr cycle we’re in, they’re really setting themselves (and their “clients”) up for a nasty fall. Planning on “continued warming” is a definite losing strategy.

boballab
July 7, 2011 11:03 am

Mark says:
July 7, 2011 at 10:11 am
I assume the baseline was picked to be 30 years as that would give you a nice number, statistically speaking (n=30), to estimate sigma, which in turn can be used to calculate a 95% CI.

No, the 30-year period was not picked for stats purposes, as shown in the WMO guidelines:

Historical practices regarding climate normals (as described in previous editions of this Guide [WMO‐No. 100], the Technical Regulations [WMO‐No. 49], and WMO/TD‐No. 1188) date from the first half of the 20th century. The general recommendation was to use 30‐year periods of reference. The 30‐year period of reference was set as a standard mainly because only 30 years of data were available for summarization when the recommendation was first made. The early intent of normals was to allow comparison among observations from around the world. The use of normals as predictors slowly gained momentum over the course of the 20th century.

It was picked because when they first started they only had 30 years of data. If you wanted to pick normals for stats purposes, then according to the WMO 30 years is not the ideal length for temperature studies:

A number of studies have found that 30 years is not generally the optimal averaging period for normals used for prediction. The optimal period for temperatures is often substantially shorter than 30 years, but the optimal period for precipitation is often substantially greater than 30 years. WMO/TD‐No. 1377 and other references at the end of this chapter provide much detail on the predictive use of normals of several elements.

 
http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf
So there you go: the whole “it has to be 30 years for it to be climate” meme is based on a cherry-pick (that was the length of data they had), and it is not the optimal averaging period for temperatures.

R. Gates
July 7, 2011 11:48 am

Yes, anyone can pick and choose the begin and end points of a series of data if they have an agenda to make the data seem to indicate something it may not. But if you are consistent in your choices of starting and ending points and use data that is gathered in a consistent manner, you should be able to see real trends in the data over a long period of time. It will be interesting to see how the decade of 2010-2019 compares to the previous decade of 2000-2009. This will be a good test for knowing much more about the sensitivity of the climate to rapid increases in CO2 brought about by human activity.

Steven Mosher
July 7, 2011 12:24 pm

The selection of a base period for GLOBAL presentations (like CRU and NASA do), as the WMO recognizes, is complicated.
“As to GISS, they should have updated from 51-80 to 61-90 because there were more stations in GHCN, and again to 71-2000.”
That’s not exactly true.
A long while back I did a study of all possible base periods, counting the number of stations available. “Available” does not mean “in the database.” If you are using a CAM method then the stations need to fit TWO criteria: they have to be “in the database” and they have to be reporting ENOUGH data in the reference period.
So, for example, if you are looking at GLOBAL data, the period would be 1953-1982 (as I recall).
However, if you pick that period, you will lose some southern hemisphere stations.
To maximize the southern hemisphere stations (the SH has fewer), you want to pick 1961-1990. While this has a marginally smaller number of stations in the NH, you maximize the SH.
That’s important to minimize the SD in the monthly mean of the base period.
With Hansen’s RSM there are also rules for counting which stations to include and which don’t have enough data. So you CANNOT simply look at the number of stations in the database at any particular time. It’s not that simple. It’s not hard, it’s just not as simple as suggested here.
The base period has no significant effect on the trend. It CAN affect the noise, and you want to maximize the number of reports (locally or globally, as the case may be) in the base period.
Folks who want to play around with this can download the R package RghcnV3.
Maybe I’ll do a special bit of demo code to show folks how to look at these things properly.
At some point I’ll put the GISS code into the package… but first, the Berkeley method, which has no base period, or RomanM’s, which has no base period.
When people see that station count and base period MAKE NO SUBSTANTIAL DIFFERENCE
then perhaps we can get the crowd focused on the key issue which is sensitivity
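A simplified Python sketch of the “two criteria” idea above: a station only contributes to a common-anomaly-method baseline if it is in the database and reports enough data inside the reference period. The data layout and the 25-complete-year cutoff here are hypothetical, chosen for illustration; they are not the actual rules used by RghcnV3 or any published CAM implementation.

```python
# Hypothetical illustration of the two criteria: present in the database AND
# reporting enough data within the reference period. Thresholds are made up.

def usable_in_base_period(station_monthly, base_years, min_complete_years=25):
    """station_monthly: dict mapping year -> list of monthly means for that year."""
    complete_years = sum(
        1 for yr in base_years
        if len(station_monthly.get(yr, [])) == 12   # a full year of monthly reports
    )
    return complete_years >= min_complete_years

base_period = range(1961, 1991)   # 1961-1990, thirty years

# A station that is "in the database" but reports only ten complete years
sparse_station = {yr: [10.0] * 12 for yr in range(1961, 1971)}
print(usable_in_base_period(sparse_station, base_period))  # False -> excluded
```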

henrythethird
July 7, 2011 12:24 pm

Most of the data is being used for presentation purposes.
That means they need to show the greatest rise above “normal” that they can get, with the scary thick red lines to back it up.
Comparing the current values to the past decade doesn’t give them the “punch” they need to say it’s worse than we thought.
Besides, it’s the trend that matters, not the “zero”.
Or so they say…

Dave Wendt
July 7, 2011 12:50 pm

Anything is possible says:
July 7, 2011 at 9:30 am
Given that the major oceanic cycles seem to repeat over c.65 years, surely that would be a more appropriate and significant period to use when establishing baselines.
Anyone else have thoughts on this?
But if they used a 65-yr baseline, that would have made their old baseline 1935-2000. If you look at any graphs of the temps for the last century, particularly those from the days before they began attempting to visit the same fate on the Dust Bowl that they had on the Medieval Climate Optimum, it should be intuitively obvious why that was not even a slightly viable alternative for the lads on The Team.

kadaka (KD Knoebel)
July 7, 2011 12:52 pm

So looking at the three decade-long periods in the normals, by swapping out 1971-1980 for 2001-2010 while retaining 1981-1990 and 1991-2000, you gain 0.5°F.
Was the average temperature for the continental US for the 2001-2010 period actually 0.5°F more than that for 1971-1980?

R. de Haan
July 7, 2011 12:54 pm

Making a dream come true.

Mark
July 7, 2011 1:28 pm

“boballab says: July 7, 2011 at 11:03 am No the 30 year period was not picked for Stats purposes as shown in the WMO guidelines:”
Thanks for the details of why n=30 was used vs what could, or should, be used.
m

July 7, 2011 1:55 pm

Steven Mosher says:
July 7, 2011 at 12:24 pm
“Available” does not mean “in the database.” If you are using a CAM method then the stations need to fit TWO criteria: they have to be “in the database” and they have to be reporting ENOUGH data in the reference period.

Nice way of putting it, but even GISS admits that there were more stations in the database during 1961-90 and 71-2000, as shown in Hansen et al. 1999 and their Figure 1:

Measurements at many meteorological stations are included in more than one of the GHCN data sets, with the recorded temperatures in some cases differing in value or record length. Our first step was thus to estimate a single time series of temperature change for each location, as described in section 4. The cumulative distribution of the resulting station record lengths is given in Figure 1a, and the number of stations at a given time is shown in Figure 1b.

http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal.pdf
This is the same graph which can be seen on the Station Data page:
As to the meme you spout that “the loss of stations doesn’t matter”: it is contradicted not only by the WMO but also by GISS in that same paper. It seems you left a very important part out:

Sampling studies discussed below indicate that the decline in number of stations is unimportant in regions of dense coverage, although the estimated global temperature change can be affected by a few hundredths of a degree. The effect of poor coverage on estimated regional and zonal temperatures can be large in specific areas, such as high latitudes in the Southern Hemisphere, as illustrated in section 4.

Last time I looked, the Arctic region doesn’t even come close to “dense coverage”, and that is one area where GISS finds its greatest warming and where they even admit that poor coverage affects their estimates up there.

Gary Pearse
July 7, 2011 2:07 pm

The problem with adding on a decade and then changing the normals is that you have a twenty-year counterweight. The fact that the last 10 years have been cooling is being downsized by the previous hot period (the period of solar max activity); ten years from now, the “hottest” ten years still in the picture will probably make the cooling period appear flat. Why not have a more permanent “climatological” base of a much longer period – how else are we going to evaluate what is really going on? The reason we started with 30-year periods is because at the time there was only 30 years of data (duh). I believe 1850 – 1950, or match up a warm period on one end and a cold one on the other (1900 – 2000?). This will give a permanent baseline to measure what is happening going forward. The people who have been supporters of the CAGPCW (Catastrophic Anthropogenic Goal Post Changing Warmists) appear to love layering it with game-changing, horizon-shifting, upside-down-trending complications. They don’t really want to see what is really happening.

Roger Knights
July 7, 2011 2:18 pm

I read here somewhere that US temperatures have been noticeably declining for 15 years. I hope someone can post a chart showing that.

Theo Goodwin
July 7, 2011 2:19 pm

This is another piece of outrageous propaganda from the Climate Central Planning Office (formerly NOAA).

Roger Knights
July 7, 2011 2:19 pm

PS: This chart would help to debunk the claim that recent extreme weather events in the US are related to rising temperatures.

Editor
July 7, 2011 2:39 pm

R. Gates says: “… It will be interesting to see how the decade of 2010-2019 compares to the previous decade of 2000-2009. This will be a good test for knowing much more about the sensitivity of the climate to rapid increases in CO2 brought about by human activity.”
OK, so now can you persuade the developed world’s politicians to wait for the test results before levying huge fines on all of us?

Gator
July 7, 2011 2:51 pm

If you do not believe station dropout has any effect on temperature, I suggest you read this…
http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf
And if you do not want to digest the entire report, just check out page 18.

Dave Wendt
July 7, 2011 2:52 pm

Anthony
This question is fairly boneheaded and I am almost embarrassed to ask it, but I’m having a brain-fade day and am a bit confused. It’s in regard to the set of anomaly maps in your post. My question is: in the maps, which set functions as the baseline and which provides the anomaly values? When I first looked at them I assumed the longer record was the baseline and the shorter record was the anomaly, but when I came back to them the linguistic structure of the headings sort of suggests the reverse. Trying to reason it out was just adding to my already screaming headache, so I thought I’d just ask.

u.k.(us)
July 7, 2011 3:14 pm

Steven Mosher says:
July 7, 2011 at 12:24 pm
“When people see that station count and base period MAKE NO SUBSTANTIAL DIFFERENCE
then perhaps we can get the crowd focused on the key issue which is sensitivity”
=========
Discounting that Anthony clearly shows the base period does matter (at least in the U.S. ), you call the “key issue” sensitivity.
Do you think we have a good enough grasp of sensitivity, to be burdening entire economies and their taxpayers in the effort to mitigate its effects?
Personally, I have yet to be convinced.

Maxbert
July 7, 2011 4:07 pm

You’re right that farmers depend on “normals.” Our conditions here in Washington state have been colder and wetter than normal for 3 years now. Thermal units are lower, the growing season has been shorter, and crops have been lost or damaged all over the state.
A +0.5F step change, my foot. Your 1981-2010 baseline version looks about right to those of us on the ground.

hotrod (Larry L)
July 7, 2011 4:12 pm

The funny thing is that this new warmer normal will make any upcoming cooling look more significant and harder for them to hide. If the weather/climate cycles replicate the cooler 70’s, then compared to the newer normals the cooling will be much more dramatic.
I will enjoy laughing at the AGW folks when they shoot themselves in the foot due to their own data manipulations.
Larry

R. Gates
July 7, 2011 5:11 pm

Roger Knights says:
July 7, 2011 at 2:18 pm
I read here somewhere that US temperatures have been noticeably declining for 15 years. I hope someone can post a chart showing that.
_____
If they do, it would be a fabrication as this is most definitely not true.

Bill Illis
July 7, 2011 5:34 pm

Here is the US Monthly Temperature Anomaly (in Celsius and using an 11-month moving average, because US temperatures on a monthly basis are quite variable – last month, May 2011, was -0.6C but March 2011 was +0.8C).
http://img69.imageshack.us/img69/788/ustempscmay11.png

Ross
July 7, 2011 7:43 pm

Have a look at these – Going to the Sun Road
Summer June 2006 – http://www.flickr.com/photos/jacdupree/178695793/
Summer July 2011 – http://www.flickr.com/photos/glaciernps
Certainly doesn’t seem to be much global warming here this year.

Brian H
July 7, 2011 8:26 pm

Bill;
My eyeball trendline analyser sez the slope is -0.5°C/decade since 1995.
Purely an illusion and lie, of course. Just ask R. Gates!

Steven Mosher
July 7, 2011 10:35 pm

u.k.(us) says:
July 7, 2011 at 3:14 pm
Steven Mosher says:
July 7, 2011 at 12:24 pm
“When people see that station count and base period MAKE NO SUBSTANTIAL DIFFERENCE
then perhaps we can get the crowd focused on the key issue which is sensitivity”
=========
Discounting that Anthony clearly shows the base period does matter (at least in the U.S. ), you call the “key issue” sensitivity.
Do you think we have a good enough grasp of sensitivity, to be burdening entire economies and their taxpayers in the effort to mitigate its effects?
Personally, I have yet to be convinced.
##########################
Let me be clear.
The base period does not matter mathematically to the calculation of trends.
it does not matter.
You have a series of temperatures x1,x2,x3,x4,,,,,xn
To turn it into anomalies or NORMALS you subtract a series of constants.
Then you look at TRENDS
If you change the base period you are changing the constants. This has NO EFFECT WHATSOEVER on the calculation of the trends.
Now, Anthony’s point with this has always been the “marketing” aspect. If you pick a cool period for the base, you get “redder” colors. Pick a warm period, you get “bluer” colors.
BUT, it makes no difference to the MATH OF TRENDS.
My larger point is this. A lot of ink and energy is spent on these marketing issues.
That distracts people from the most important issue. That issue is sensitivity.
That is where THE BEST ARGUMENT AGAINST CAGW HAPPENS.
So I am complaining about the distractions. The IPCC doesn’t have a very solid SCIENCE STORY about sensitivity. And I think it makes MORE SENSE to focus on that than it does to focus on the colors in charts. There is only so much focus that people can have, and if they focus on tangents rather than the big issue, then that’s a lost opportunity.
Same with the temperature record. Many people have focused on BIAS when the real issue is UNCERTAINTY. Of course BIAS (they are cheating) makes for better headlines, but the real issue is uncertainty.
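A quick numerical check of that claim, in a minimal Python sketch with invented annual temperatures (the 1971-2010 series, trend, and noise below are all made up): subtracting a different baseline constant shifts the anomalies up or down, but the fitted slope is identical either way.

```python
# Made-up annual series: subtracting different baseline constants changes the
# offset of the anomalies but not the fitted trend.
import numpy as np

years = np.arange(1971, 2011)
np.random.seed(0)
temps = 52.5 + 0.03 * (years - 1971) + np.random.normal(0, 0.4, years.size)

base_a = temps[(years >= 1971) & (years <= 2000)].mean()  # cooler baseline
base_b = temps[(years >= 1981) & (years <= 2010)].mean()  # warmer baseline

slope_a = np.polyfit(years, temps - base_a, 1)[0]
slope_b = np.polyfit(years, temps - base_b, 1)[0]

print(np.isclose(slope_a, slope_b))  # True: the trend is the same either way
print(round(base_b - base_a, 3))     # only the constant offset differs
```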

Blade
July 8, 2011 1:23 am

Glancing through the comments so far I don’t think anyone has correctly answered Anthony’s question: The point of this? Ok, here is why it is done …
If you change the baseline to a warmer ‘normal’, then all subsequent cooling is masked or at least tamped down. This would be done in anticipation of cooling temperatures which will thereby be reduced in impact. The AGW cultists will get to say: it hasn’t really cooled, just look at this trend! It’s simple really.
Running averages with shifting baselines will merely create a ratchet effect going forward. It is a game of statistics, not science.

walt man
July 8, 2011 6:20 am

Steven Mosher says: July 7, 2011 at 10:35 pm
Choice of 30-year baseline CAN make a difference.
For averaged yearly values it will be insignificant.
For monthly or daily values there MAY be some significance.
E.g. monthly: if winter months are warming faster than summer over the two 30-year periods, then if you average all 30 of the Januarys for each period, these would appear as a higher figure to difference from (lower trend) for each individual year if the latter period is used. Similarly, if July has not warmed as much, then the July trends will be comparatively greater than the Jan trends. This must skew the data at the monthly level, and possibly therefore at the yearly?
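A toy Python sketch of that seasonal point (all numbers invented): if the January normal rose more between base periods than the July normal did, switching baselines lowers January anomalies more than July anomalies, changing the apparent seasonal pattern even though nothing about the underlying per-month data has changed.

```python
# Invented monthly normals for two base periods (degrees F).
jan_normal_old, jan_normal_new = 30.0, 31.5   # January normal rose 1.5 F
jul_normal_old, jul_normal_new = 75.0, 75.5   # July normal rose only 0.5 F

jan_obs, jul_obs = 32.0, 76.0   # one year's observed monthly means

print(jan_obs - jan_normal_old, jan_obs - jan_normal_new)  # 2.0 vs 0.5
print(jul_obs - jul_normal_old, jul_obs - jul_normal_new)  # 1.0 vs 0.5
```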

Dave Springer
July 8, 2011 7:22 am

0.5F sounds about right for the continental US. It won’t be anywhere near that large globally, of course, since, contrary to urban AGW legend, greenhouse gases don’t raise ocean surface temperatures. The GHG effect is a land-only phenomenon. You CANNOT heat a body of water by shining long-wave infrared radiation onto the surface. The energy is immediately carried away through evaporation. If the climate boffins would simply acknowledge that physical fact they would know why they can’t find the so-called “missing heat”. It’s missing because it never entered the ocean. Thus about two-thirds of the theoretical surface warming from increased GHGs is absent, because about two-thirds of the earth’s surface is a body of water. You can heat dirt with LWIR but you cannot heat water with it.

Neil
July 8, 2011 8:58 am

I tried plotting the data for some of the base periods against the base periods themselves. (The first instinct of any software tester – start with the simplest case!) Try Jan-Dec 1961-1990 versus 1961-1990 longterm average, for example.
I expected the map to be uniformly white, since the difference between any function and itself is – ZERO.
But what I got, in each case, was a mixture of yellow and white. Small discrepancies, less than +-0.03F. But still discrepancies.
Watts up with that?

kadaka (KD Knoebel)
July 8, 2011 5:34 pm

Finally got the answer to my last post myself (thank you to the wonderfully helpful people on this site, who apparently are all on vacation).
The national NOAA-NCDC data are here (took forever to find):
http://www7.ncdc.noaa.gov/CDO/CDODivisionalSelect.jsp
Tossing the monthly values into a spreadsheet and averaging the periods yields the following:
1971-1980 52.5°F
1981-1990 53.2
1991-2000 53.6
2001-2010 54.0
I forgot about the “averaging by three”; there’s a 1.5°F difference, not 0.5.
So according to NOAA-NCDC, 2001-2010 was 1.5°F (0.8°C) warmer than 1971-1980. Seems high. Averaging the decadal periods together, the 1971-2000 value was 53.1°F, 1981-2010 was 53.6, showing the 0.5°F difference in normals as mentioned above.
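As a quick check of that arithmetic, a minimal Python sketch using the decade averages quoted above: swapping one decade out of three changes the 30-year normal by one third of the decade-to-decade difference.

```python
# Decade averages quoted above (degrees F, NCDC monthly data averaged per decade).
decades = {"1971-1980": 52.5, "1981-1990": 53.2,
           "1991-2000": 53.6, "2001-2010": 54.0}

normal_1971_2000 = (decades["1971-1980"] + decades["1981-1990"] + decades["1991-2000"]) / 3
normal_1981_2010 = (decades["1981-1990"] + decades["1991-2000"] + decades["2001-2010"]) / 3

print(round(normal_1971_2000, 1))                     # 53.1
print(round(normal_1981_2010, 1))                     # 53.6
print(round(normal_1981_2010 - normal_1971_2000, 1))  # 0.5 -- the shift in normals
print(decades["2001-2010"] - decades["1971-1980"])    # 1.5 -- the decade swapped in vs. out
```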
===
From Steven Mosher on July 7, 2011 at 10:35 pm:

You have a series of temperatures x1,x2,x3,x4,,,,,xn
To turn it into anomalies or NORMALS you subtract a series of constants.

Dang, I was just averaging, didn’t subtract any constants. And still got the right answer.
===
From Brian H on July 7, 2011 at 8:26 pm:

My eyeball trendline analyser sez the slope is -0.5°C/decade since 1995.

Using the monthly numbers I found to make yearly averages (violating dozens of official data-massaging Climate Science™ rules, I’m sure), I graphed the results and let the spreadsheet figure out the linear fit. From 1995 to 2010 inclusive, the slope is 0.027°F/yr, or 0.15°C/decade. From 1971 to 2010 inclusive, the range I was examining, it is 0.044°F/yr, or 0.24°C/decade. Looked at this way, the rate of increase has sure slowed down.
If looking at just the last full decade, 2001 to 2010 inclusive, the slope is -0.093°F/yr, or -0.52°C/decade. Looks like a strong cooling signal.
Perhaps your eyeball trendline analyzer just got the starting year wrong?

u.k.(us)
July 8, 2011 8:26 pm

Steven Mosher says:
July 7, 2011 at 10:35 pm
Let me be clear……………
==========
Well said, and I agree.
Thanks for the expansive comment, it is duly noted.

berniel
July 8, 2011 11:26 pm

Why do we have the 30 year convention?
boballab quotes WMO:

Historical practices regarding climate normals…date from the first half of the 20th century….The 30‐year period of reference was set as a standard mainly because only 30 years of data were available for summarization when the recommendation was first made.

The WMO is right in placing the beginning of the 30-year convention in the early 20th century (when I think it was adopted by the IMO), but the data availability argument (even if it was made at the time) requires more explanation. At the time it was commonly recognised that thermometer data availability, and quality, varied enormously across the globe. But certainly there were many European and colonial records going back much more than 30 years, and a significant set that had ticked over the 100-year mark.
We would need to see some evidence as to why the 30-year line was regarded as especially significant in terms of data availability at this time. Charles Schott, in a famous early statistical intervention in the climate change controversy, used data across a 100-year time span. That was in 1899, and in an argument against the widespread belief in localised anthropogenic climatic change.
Early statistical climatology tended to argue against climate change; against both popular belief in anthropogenic change and against the new findings of geologists supporting natural variation on an historical scale. Later, in hindsight, Hubert Lamb saw the position of the statisticians as rather too convenient. If climate varied on all scales, from the human scale to the geological scale, then finding statistical norms becomes difficult. Given their conclusion/assumption of no climate change, we would expect the statisticians to choose a period that could mostly flatten what they saw as random yearly variation, and 30 years might have been minimally sufficient for that.
Lamb’s account of statistical climatology, as he discovered it in the 1940s, was of little more than the ‘book-keeping branch of meteorology’; this fits with Richard Holle’s more pragmatic explanation of the 30-year convention:

They settled on 30 years because it was about all you could do to open three old sets of records and the current, on a 3 X 6 foot desk. You could scan through all four books a page at a time, but try opening two more books and it was a shuffle them around nightmare.

Interesting. I would love to know more!
But perhaps there was some influence from the other side, from geology and folklore. It was in the 1890s that Bruckner used thermometer and proxy data to present a case for a variable 35-year global climate cycle. And it was soon noted that this revived a piece of folklore recorded and supported by Francis Bacon in the early 17th century:

There is a toy, which I have heard and I would not have it given over, but waited upon a little. They say it is observed in the Low Countries (I know not in what part) that every five and thirty years the same kind and suit of years and weather comes about again; as great frosts, great wet, great droughts, warm winters, summers with little heat, and the like; and they call it the prime. It is a thing I do the rather mention, because, computing backwards, I have found some concurence.

Of Vicissitude of Things

July 9, 2011 3:20 am

kadaka;
Could be, in which case, OMG it’s more worser than I thought! About -4.717K by 2100!! (Give or take about 4K.)
😉

July 9, 2011 3:23 am

P.S. The above was calculated strictly following the procedures vaguely outlined in the Engineering Handbook of Pretend Precision.

Mark
July 9, 2011 7:12 am

berniel said at 11:26pm (7/8/11)
“…. this fits with Richard Holle’s more pragmatic explanation of the 30 year convention:……..” Reminds me of the joys of figuring out how much data to collect back in the days of paper records, where it actually took a fair amount of time (and $) to collect historical bits of information. Data mining the old-fashioned way was time consuming! Even worse, for those of us who couldn’t type well, was putting those bits of information into a format to evaluate them; I think I spent more time fixing the errors in my punch cards than I did originally putting them together for the compiler/card reader. I recall giving up on punch cards once HP came out with their statistical calculator. The joy of using Cricket Graph vs. the alternatives for presenting information comes to mind as well.
Berniel, thanks for the Bacon reference! I am sure he would find the discussions on cause and effect in the climate, and the models we develop to explain it, of interest. If anyone would know the difference between correlation and causation, it would be Bacon.