NOAA's new normals – a step change of 0.5 degrees F

This from NOAA, which I missed last week while I was at ICCC6. But wait, there’s more! Have a look at the maps I produced below the “Continue reading>” line using NOAA’s own tools.

Average U.S. temperature increases by 0.5 degrees F

New 1981-2010 ‘normals’ to be released this week

June 29, 2011

Statewide changes in annual “normal temperatures” (1981-2010 compared to 1971-2000).

Download here. (Credit: NOAA)

According to the 1981-2010 normals to be released by NOAA’s National Climatic Data Center (NCDC) on July 1, temperatures across the United States were, on average, approximately 0.5 degrees F warmer than in the 1971-2000 period.

Normals are 30-year baseline averages of important climate variables, used to understand average climate conditions at any location and to provide a consistent point of reference. The new normals update the 30-year averages of climatological variables, including average temperature and precipitation, for more than 7,500 locations across the United States. This once-a-decade update will replace the current 1971-2000 normals.

In the continental United States, every state’s annual maximum and minimum temperature increased on average. “The climate of the 2000s is about 1.5 degree F warmer than the 1970s, so we would expect the updated 30-year normals to be warmer,” said Thomas R. Karl, L.H.D., NCDC director.

Using standards established by the World Meteorological Organization, the 30-year normals are used to compare current climate conditions with recent history. Local weathercasters traditionally use normals for comparisons with the day’s weather conditions.

In addition to their application in the weather sector, normals are used extensively by electric and gas companies for short- and long-term energy use projections. NOAA’s normals are also used by some states as the standard benchmark by which they determine the statewide rate that utilities are allowed to charge their customers.

The agricultural sector also heavily depends on normals. Farmers rely on normals to help make decisions on both crop selection and planting times. Agribusinesses use normals to monitor “departures from normal conditions” throughout the growing season and to assess past and current crop yields.

NCDC made many improvements and additions to the scientific methodology used to calculate the 1981-2010 normals. They include improved scientific quality control and statistical techniques. Comparisons to previous normals take these new techniques into account. The 1981-2010 normals provide a more comprehensive suite of precipitation and snowfall statistics. In addition, NCDC is providing hourly normals for more than 250 stations at the request of users, such as the energy industry.

Some of the key climate normals include: monthly and daily maximum temperature; monthly and daily minimum temperature; daily and monthly precipitation and snowfall statistics; and daily and monthly heating and cooling degree days. The 1981-2010 climate normals are part of the suite of climate services NOAA provides to government, business and community leaders so they can make informed decisions. NOAA and its predecessor agencies have been providing updated 30-year normals once every decade since the 1921-1950 normals were released in 1956.

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.

==============================================================

The interesting thing about baselines is that by choosing your baseline, you can make an anomaly from that baseline look like just about anything you want it to be. For example, the NOAA Earth System Research Laboratory offers a handy dandy map creator for US Climate Divisions that allows you to choose the baseline period (from a predetermined set they offer), including the new 1981-2010 normals released on July 1st.

Here are some examples I get simply by changing the base period.

Here’s the most current (new) base period of 1981-2010. Doesn’t look so bad does it?

 

Now if we plot the entire period, look out, we are on fire! Global warming is running amok!

Oh, wait, look at the scale. It looks like about 0.08 to 0.13 F of warming. The ESRL plotting system arbitrarily assigns the numeric range of the scale based on the magnitude of the data, but keeps the same color ramp.
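That auto-ranging behavior is easy to demonstrate in miniature. The sketch below (plain Python, with invented anomaly values; it illustrates the general idea, not ESRL's actual code) shows that when the plot limits are derived from the data itself but the color ramp stays fixed, a 0.1-degree field and a 1.0-degree field land on exactly the same colors:

```python
# A plotter that derives its scale limits from the data but reuses one
# fixed color ramp makes any anomaly field, big or small, fill the whole
# ramp. The numbers below are invented; this mimics the behavior only.

def auto_scaled(values):
    """Map values onto a fixed 0-1 color range using limits taken
    from the data itself (the auto-ranging step)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

small = [0.0, 0.05, 0.1]   # a ~0.1 deg F anomaly field
large = [0.0, 0.5, 1.0]    # a 1.0 deg F anomaly field

# Both fields map to the same positions on the color ramp,
# so the two maps look equally "on fire".
same = all(abs(a - b) < 1e-12
           for a, b in zip(auto_scaled(small), auto_scaled(large)))
print(same)
```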

The point of this? Temperature anomalies can be anything you want them to be; because they force a choice of baseline, they force you to cherry-pick the periods in your presentation of the data. The choice is up to the publisher.

Now the next question is: with NOAA/NCDC following standard WMO procedure for a new 30-year base period (as Roy Spencer at UAH also did a few months back), will NASA GISS start using a modern normals baseline for their publicly presented default graphs, rather than the cooler and long-outdated 1951-1980, for the press releases and graphs they foist on the media? Of course, for a variety of reasons, they’ll tell you no. But it is interesting to see the rest of the world moving on while GISS remains stoically static. In case you are wondering, here is the offset difference between the 1961-1990 and the 1951-1980 base periods. This will of course change again with a more recent baseline, such as NOAA has adopted. While the slope won’t change, the magnitude of the offset will. More here.

base period comparison
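The offset-versus-slope point can be checked numerically. Here is a minimal Python sketch using a made-up linear temperature series: re-expressing it as anomalies from a 1951-1980 base period versus a 1981-2010 base period shifts every value by a constant, but leaves the least-squares trend untouched:

```python
# Sketch: one synthetic temperature series expressed as anomalies from
# two different base periods. The data are invented (a pure linear trend)
# solely to illustrate the offset-vs-slope behavior.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical annual "temperatures", 1951-2010: a steady 0.02 deg/yr rise.
years = list(range(1951, 2011))
temps = [14.0 + 0.02 * (y - 1951) for y in years]

def anomalies(base_start, base_end):
    """Anomalies relative to the mean over [base_start, base_end]."""
    base = mean([t for y, t in zip(years, temps) if base_start <= y <= base_end])
    return [t - base for t in temps]

def slope(ys):
    """Ordinary least-squares slope of ys against years."""
    xbar, ybar = mean(years), mean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(years, ys))
    den = sum((x - xbar) ** 2 for x in years)
    return num / den

a_old = anomalies(1951, 1980)   # GISS-style base period
a_new = anomalies(1981, 2010)   # NOAA's new normals period

# The two anomaly series differ by a constant offset everywhere...
print(round(a_old[0] - a_new[0], 4), round(a_old[-1] - a_new[-1], 4))

# ...but their trends are identical.
print(round(slope(a_old), 4), round(slope(a_new), 4))
```

Swapping the base period moves the zero line, which changes how alarming the map or graph looks, without changing the underlying trend at all.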

The issue is the public presentation of data. If making such a baseline change is something NOAA does, should not NASA follow suit in the materials they release to the press?

Anomalies – any way you want them:

Jeff Carlson
July 7, 2011 9:15 am

wake me when NOAA produces some science …

Anything is possible
July 7, 2011 9:30 am

I am surprised that nobody ever seems to question the use of the 30-year baselines that are used so religiously in climate studies. As far as I can tell, it is merely a compromise figure that the WMO got everybody to agree to back in the day, and has no actual scientific significance whatsoever.
Given that the major oceanic cycles seem to repeat over c.65 years, surely that would be a more appropriate and significant period to use when establishing baselines.
Anyone else have thoughts on this?

dp
July 7, 2011 10:00 am

I am 65. My average age is 33. I feel 65. My average age for the past 32 years is about 50. I still feel 65. My average age between 1946 and 1978 is 16. I feel 65. No matter how I spin it I don’t feel anything but 65. This doesn’t seem productive.
I’m tired… I’m going home now… -Forrest Gump

July 7, 2011 10:06 am

The 30-year rule was developed when the books that the records were hand-entered into were published in sets of 10 years; as each decade book became filled, there were pages for the monthly averages, annual averages, and the decade average for each date.
They settled on 30 years because that was about all you could manage: opening three old sets of records plus the current one on a 3 x 6 foot desk. You could scan through all four books a page at a time, but try opening two more books and it became a shuffle-them-around nightmare.

Mark
July 7, 2011 10:11 am

Anything is possible says (at 9:30 am) – “I am surprised that nobody ever seems to question the use of the 30-year baselines that are used so religiously in climate studies.”
I assume the baseline was picked to be 30 years as that would give you a nice number, statistically speaking (n=30), to estimate sigma, which in turn can be used to calculate a 95% CI.

ShrNfr
July 7, 2011 10:38 am

What else about the AMO does NOAA not understand?

Bill Yarber
July 7, 2011 10:49 am

I think the 30-year period comes from historical perspective. Look at the temp data from 1880 to today: you see about 30 years of warming followed by 30 years of cooling. Probably the basis of the Farmers Almanac predictions. The PDO, AO, AMO and other natural cyclical events are just window dressing. (sarcasm off)

July 7, 2011 10:50 am

Anthony:
Actually NCDC could be violating WMO standards, and probably is if they applied these normals to GHCN data. The WMO states in their guidelines that normals should NOT be changed unless there are more stations in the new normals than in the old normals:

GUIDE TO CLIMATOLOGICAL PRACTICES THIRD EDITION
SNIP
4.8.1 Period of calculation
Under the Technical Regulations (WMO-No. 49), climatological standard normals are averages of climatological data computed for the following consecutive periods of 30 years: 1 January 1901 to 31 December 1930, 1 January 1931 to 31 December 1960, and so forth. Countries should calculate climatological standard normals as soon as possible after the end of a standard normal period. It is also recommended that, where possible, the calculation of anomalies be based on climatological standard normal periods, in order to allow a uniform basis for comparison. Averages (also known as provisional normals) may be calculated at any time for stations not having 30 years of available data (see 4.8.4). Period averages are averages computed for any period of at least ten years starting on 1 January of a year ending with the digit 1 (for example, 1 January 1991 to 31 December 2004). Although not required by WMO, some countries calculate period averages every decade.
Where climatological standard normals are used as a reference, there are no clear advantages to updating the normals frequently unless an update provides normals for a significantly greater number of stations. Frequent updating carries the disadvantage that it requires recalculation of not only the normals themselves but also numerous data sets that use the normals as a reference. For example, global temperature data sets have been calculated as anomalies from a reference period such as 1961-1990 (fig. 4.19). Using a more recent averaging period, such as 1971-2000, results in a slight improvement in “predictive accuracy” for elements which show a secular trend (that is, where the time series shows a consistent rise or fall in its values when measured over a long term). Also, 1971-2000 normals would be viewed by many users as more “current” than 1961-1990. However, the disadvantages of frequent updating could be considered to offset this advantage when the normals are being used for reference purposes.

http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf
As shown in the above section from the WMO guide, UAH switched to a 30-year baseline as per the WMO: “Countries should calculate climatological standard normals as soon as possible after the end of a standard normal period.”
As soon as they had 30 years matching the guidelines, they made standard normals. NCDC, on the other hand, did not follow WMO guidelines if they switched normals on the GHCN dataset, due to the “Great Dying of Thermometers”. There were more stations during the 71-2000 baseline than there are in the 81-2010 baseline in GHCN; therefore it would violate this: “Where climatological standard normals are used as a reference, there are no clear advantages to updating the normals frequently unless an update provides normals for a significantly greater number of stations. Frequent updating carries the disadvantage that it requires recalculation of not only the normals themselves but also numerous data sets that use the normals as a reference.”
At the minimum, the WMO does not support this type of change in normals that NCDC is doing. Now if there are more stations in USHCN for 81-2010 than 71-2000, it is an appropriate update for that dataset, but only for that dataset.
As to GISS, they should have updated from 51-80 to 61-90 because there were more stations in GHCN, and again to 71-2000. However, WMO does give them an out for not doing so: “However, the disadvantages of frequent updating could be considered to offset this advantage when the normals are being used for reference purposes.”

Doug Allen
July 7, 2011 11:01 am

I suspect almost all of that 0.5 degrees F is the removal of the cold 1970’s which I remember well in upstate NY. I’m originally from Cincinnati, and the savage winter of 1977-78 there and throughout the midwest was one for the books. Here’s some info about where I was born during that winter- http://www.enquirer.com/editions/2000/12/31/loc_dont_look_for_river.html

Brian H
July 7, 2011 11:02 am

Given what part of the 60-yr cycle we’re in, they’re really setting themselves (and their “clients”) up for a nasty fall. Planning on “continued warming” is a definite losing strategy.

July 7, 2011 11:03 am

Mark says:
July 7, 2011 at 10:11 am
I assume the baseline was picked to be 30 years as that would give you a nice number, statistically speaking (n=30), to estimate sigma, which in turn can be used to calculate a 95% CI.

No the 30 year period was not picked for Stats purposes as shown in the WMO guidelines:

Historical practices regarding climate normals (as described in previous editions of this Guide [WMO-No. 100], Technical Regulations [WMO-No. 49], and WMO/TD-No. 1188) date from the first half of the 20th century. The general recommendation was to use 30-year periods of reference. The 30-year period of reference was set as a standard mainly because only 30 years of data were available for summarization when the recommendation was first made. The early intent of normals was to allow comparison among observations from around the world. The use of normals as predictors slowly gained momentum over the course of the 20th century.

It was picked because when they first started they only had 30 years of data. If you wanted to pick normals for Stats purposes according to the WMO 30 years is not the ideal length for temperature studies:

A number of studies have found that 30 years is not generally the optimal averaging period for normals used for prediction. The optimal period for temperatures is often substantially shorter than 30 years, but the optimal period for precipitation is often substantially greater than 30 years. WMO/TD‐No. 1377 and other references at the end of this chapter provide much detail on the predictive use of normals of several elements.

 
http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf
So there you go: the whole “it has to be 30 years for it to be climate” meme is based on a cherrypick (that was the length of data they had), and it is not the optimal averaging period for temperatures.

R. Gates
July 7, 2011 11:48 am

Yes, anyone can pick and choose the begin and end points of a series of data if they have an agenda to make the data seem to indicate something it may not. But if you are consistent in your choices of starting and ending points and use data that is gathered in a consistent manner, you should be able to see real trends in the data over a long period of time. It will be interesting to see how the decade of 2010-2019 compares to the previous decade of 2000-2009. This will be a good test for knowing much more about the sensitivity of the climate to rapid increases in CO2 brought about by human activity.

July 7, 2011 12:24 pm

The selection of a base period for GLOBAL presentations (like CRU and NASA do), as the WMO recognizes, is complicated.
“As to GISS they should have updated form 51-80 to 61-90 because there was more stations in GHCN and again to 71-2000.”
That’s not exactly true.
A long while back I did a study of all possible base periods, counting the number of stations available.
“Available” does not mean “in the database”. If you are using a CAM method, the stations need to fit TWO criteria: they have to be “in the database” and they have to be reporting ENOUGH data in the reference period.
So, for example, if you are looking at GLOBAL data, the period would be 1953-1982 (as I recall).
However, if you pick that period, you will lose some southern hemisphere stations.
To maximize the southern hemisphere stations (the SH has fewer), you want to pick 1961-1990. While this has a marginally smaller number of stations in the NH, you maximize the SH.
That’s important to minimize the SD in the monthly mean of the base period.
With Hansen’s RSM there are also rules for counting which stations to include and which don’t have enough data. So you CANNOT simply look at the number of stations in the database at any particular time. It’s not that simple. It’s not hard, it’s just not as simple as suggested here.
The base period has no significant effect on the trend. It CAN affect the noise, and you want to maximize the number of reports (locally or globally, as the case may be) in the base period.
Folks who want to play around with this can download the R package RghcnV3.
Maybe I’ll do a special bit of demo code to show folks how to look at these things properly.
At some point I’ll put the GISS code into the package... but first, the Berkeley method, which has no base period, or RomanM’s, which has no base period.
When people see that station count and base period MAKE NO SUBSTANTIAL DIFFERENCE, then perhaps we can get the crowd focused on the key issue, which is sensitivity.

henrythethird
July 7, 2011 12:24 pm

Most of the data is being used for presentation purposes.
That means they need to show the greatest rise above “normal” they can get, with scary thick red lines to back it up.
Comparing the current values to the past decade doesn’t give them the “punch” they need to say it’s worse than we thought.
Besides, it’s the trend that matters, not the “zero”.
Or so they say…

Dave Wendt
July 7, 2011 12:50 pm

Anything is possible says:
July 7, 2011 at 9:30 am
Given that the major oceanic cycles seem to repeat over c.65 years, surely that would be a more appropriate and significant period to use when establishing baselines.
Anyone else have thoughts on this?
But if they used a 65 yr baseline that would have made their old baseline from 1935-2000. If you look at any graphs of the temps for the last century, particularly those from the days before they began to attempt to render the same fate on the Dust Bowl that they had on the Medieval Climate Optimum, it should be intuitively obvious why that was not even a slightly viable alternative for the lads on The Team.

kadaka (KD Knoebel)
July 7, 2011 12:52 pm

So looking at the three decade-long periods in the normals, by swapping out 1971-1980 for 2001-2010 while retaining 1981-1990 and 1991-2000, you gain 0.5°F.
Was the average temperature for the continental US for the 2001-2010 period actually 0.5°F more than that for 1971-1980?
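For what it’s worth, the arithmetic behind that swap can be made explicit: since two of the three decades are shared, the whole 0.5°F change in the 30-year mean must come from the one decade that differs, implying the 2001-2010 decade ran about 1.5°F warmer than 1971-1980 (consistent with Karl’s “1.5 degrees F” figure quoted above). A quick sketch with purely hypothetical decadal means:

```python
# Hypothetical decadal mean temperatures (deg F) for the continental US;
# the absolute numbers are invented, only the differences matter.
d1970s, d1980s, d1990s, d2000s = 52.0, 52.5, 52.8, 53.5

old_normal = (d1970s + d1980s + d1990s) / 3   # 1971-2000
new_normal = (d1980s + d1990s + d2000s) / 3   # 1981-2010

# The change in the 30-year normal is exactly one third of the
# difference between the swapped-in and swapped-out decades.
print(round(new_normal - old_normal, 4))    # 0.5
print(round((d2000s - d1970s) / 3, 4))      # 0.5
```

So a 0.5°F jump in the normals requires the new decade to be three times that (1.5°F) warmer than the decade it replaced, not 0.5°F.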

R. de Haan
July 7, 2011 12:54 pm

Making a dream come true.

Mark
July 7, 2011 1:28 pm

“boballab says: July 7, 2011 at 11:03 am No the 30 year period was not picked for Stats purposes as shown in the WMO guidelines:”
Thanks for the details of why n= 30 was used vs what could, or is it should, be used.
m

July 7, 2011 1:55 pm

Steven Mosher says:
July 7, 2011 at 12:24 pm
“available” does not mean “In the database” If you are using a CAM method then the stations need to fit TWO criteria. They have to be “in the database” and they have to be reporting
ENOUGH data in the reference period.

Nice way of putting it, but even GISS admits that there were more stations in the database in 1961-90 and 71-2000, as shown in Hansen et al. 1999 and their Figure 1:

Measurements at many meteorological stations are included in more than one of the 31GHCN data sets, with the recorded temperatures in some cases differing in value or record length. Our first step was thus to estimate a single time series of temperature change for each location, as described in section 4. The cumulative distribution of the resulting station record lengths is given in Figure 1a, and the number of stations at a given time is shown in Figure 1b.

http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal.pdf
This is the same graph which can be seen on the Station Data page:
As to the meme you spout that “the loss of stations doesn’t matter”: it is contradicted not only by the WMO but also by GISS in that same paper. It seems you left a very important part out:

Sampling studies discussed below indicate that the decline in number of stations is unimportant in regions of dense coverage, although the estimated global temperature change can be affected by a few hundredths of a degree. The effect of poor coverage on estimated regional and zonal temperatures can be large in specific areas, such as high latitudes in the Southern Hemisphere, as illustrated in section 4.

Last time I looked, the arctic region doesn’t even come close to “dense coverage”, and that is one area where GISS finds its greatest warming, and where they even admit that poor coverage affects their estimates up there.

July 7, 2011 2:07 pm

The problem with adding on a decade and then changing the normals is that you have a twenty-year counterweight. The fact that the last 10 years have been cooling is being downsized by the previous hot period (a period of solar max activity); ten years from now, the “hottest” ten years still in the picture will probably make the cooling period appear flat. Why not have a more permanent “climatological” base of a much longer period? How else are we going to evaluate what is really going on? The reason we started with 30-year periods is because at the time there was only 30 years of data (duh). I believe 1850-1950 would do, or match up a warm period on one end and a cold one on the other (1900-2000?). This would give a permanent baseline to measure what is happening going forward. The people who have been supporters of CAGPCW (Catastrophic Anthropogenic Goal Post Changing Warmists) appear to love layering it with game-changing, horizon-shifting, upside-down-trending complications. They don’t really want to see what is really happening.

Roger Knights
July 7, 2011 2:18 pm

I read here somewhere that US temperatures have been noticeably declining for 15 years. I hope someone can post a chart showing that.

Theo Goodwin
July 7, 2011 2:19 pm

This is another piece of outrageous propaganda from the Climate Central Planning Office (formerly NOAA).

Roger Knights
July 7, 2011 2:19 pm

PS: This chart would help to debunk the claim that recent extreme weather events in the US are related to rising temperatures.

Editor
July 7, 2011 2:39 pm

R. Gates says: “… It will be interesting to see how the decade of 2010-2019 compares to the previous decade of 2000-2009. This will be a good test for knowing much more about the sensitivity of the climate to rapid increases in CO2 brought about by human activity.
OK, so now can you persuade the developed world’s politicians to wait for the test results before levying huge fines on all of us?

Gator
July 7, 2011 2:51 pm

If you do not believe station dropout has any effect on temperature, I suggest you read this…
http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf
And if you do not want to digest the entire report, just check out page 18.