The other half of the USHCN network – precipitation

Standard Rain Gauge for the NOAA Cooperative Observer Network - image NOAA CRH

Normally I focus on the temperature component, but the reason I’m posting this will become evident soon.  – Anthony

Our New Analysis of United States Precipitation Trends

By John Nielsen-Gammon, Texas state climatologist

I’m going to be talking a lot about various aspects of the Cooperative Observer (COOP) Network this week and next. The COOP network has been our primary source of climate data for the United States since the 1890s. Observations include daily maximum and minimum temperatures, precipitation, and snowfall. A unique aspect of the COOP network is that almost all of the observations are taken by volunteers, making it an impressive example of coordinated, dedicated effort having an enduring impact.

There are two primary ways that COOP data get combined into coherent, long-term data sets. One is by using a subset of stations that have particularly long-term records: the United States Historical Climatology Network (USHCN). The other is by combining the COOP observations into regional averages, known as climate divisions.

The climate division data is the most common tool for monitoring month-to-month variations in the climate. When the National Oceanic and Atmospheric Administration (NOAA), which by the way administers the COOP network, says that Texas had the driest seven consecutive months on record this past October through April, they’re using the climate division data to make that statement.

There’s one big problem with using the climate division data to make that sort of statement, or to determine in general how climate has changed over time: COOP stations come and go. Volunteers move, or stop taking measurements. New volunteers might live in a wetter or warmer part of the climate division, making the new climate division averages wetter or warmer than before. Plus the climate division data before 1931 was reconstructed based on statewide averages because many parts of the country didn’t have enough stations within each climate division.

Barry Keim (the Louisiana State Climatologist) and others have looked at a few parts of the country and found that these sorts of changes can seriously affect the long-term record. I found the same thing with the Edwards Plateau climate division in Texas. Individual long-term stations showed that annual precipitation was increasing in central Texas, but the climate division average was going down. It turns out that in the early part of the century most stations were in the eastern Edwards Plateau, which gets most of the rain. Later on there were more dry stations, and the average for the division went down.
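
To see how this composition effect can manufacture a false trend, here is a minimal Python sketch with made-up numbers; it illustrates the mechanism only, not the method used in our paper. Every synthetic station gets slightly wetter over time, yet the division average dries out once stations from the dry side of the division join the network.

```python
# Illustrative sketch (synthetic numbers): a division average can trend
# downward even when every individual station trends upward, if drier
# stations join the network later in the record.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2001)
trend = 0.02 * (years - years[0])   # +2 inches per century, at every station

def station(base):
    """A hypothetical station record: base climate + shared trend + noise."""
    return base + trend + rng.normal(0, 2, years.size)

east = [station(30.0) for _ in range(8)]   # wet side, reports all along
west = [station(15.0) for _ in range(8)]   # dry side, reports only from 1950

division_avg = []
for i, yr in enumerate(years):
    obs = [s[i] for s in east]
    if yr >= 1950:                         # dry stations enter the average
        obs += [s[i] for s in west]
    division_avg.append(np.mean(obs))

print(f"pre-1950 division mean:  {np.mean(division_avg[:50]):.1f} in")
print(f"post-1950 division mean: {np.mean(division_avg[50:]):.1f} in")
# The division mean drops several inches although every station got wetter.
```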

We decided to fix this problem. And if we were going to fix it for Texas, we may as well fix it for the rest of the country at the same time. This became Brent McRoberts’ master’s thesis, and with a little bit of refinement it became a paper that is going to be published in the Journal of Applied Meteorology and Climatology in the next couple of months.

Read the rest of the post here at the Houston Chronicle blog

Jimbo
May 11, 2011 2:43 am

Weststations / Surfacestations / Dopplerstations – it’s worse than we thought.

“The Texas upward trend may be surprising to those of you living through what might turn out to be one of the worst droughts on record. But think about the worst drought years: 1917-1918, 1925, and 1951-1956. It was much drier before the 1950s than after. Our recent dry years have alternated with very wet years. ”
……..
Why the discrepancy between the projections and the reality so far? It’s interesting to consider the possibilities:………..
http://blog.chron.com/climateabyss/2011/05/our-new-analysis-of-united-states-precipitation-trends/

May 11, 2011 2:53 am

The TD3200 COOP summary-of-the-day database, version 2002, is the data input I use to generate my cyclic-pattern analog forecast for the USA. I will be interested to see how all of this turns out.

icecover
May 11, 2011 2:53 am

[snip . . OT . . kb]

A C Osborn
May 11, 2011 3:18 am

Anthony, what you are showing is exactly the problem of a central location just putting together “similar” data without the expertise or incentive to investigate whether it is correct to do so and what implications it has for the overall data. They then unwisely use the data for “trend” work.
By the way, I think in the last paragraph “and with a little bit of refinement it because a paper” should be “and with a little bit of refinement it became a paper.”

Joe Lalonde
May 11, 2011 5:52 am

Anthony,
There is a great deal wrong with current climate science and all science. Technologies change, observations and new ideas change, yet the theories and science laws stay the same. They are too general for a highly complex system and in many cases incorrect. Studying a few hundred years and ignoring billions of years just for funding or creature comfort or just to sell products has corrupted the whole area we call science.
Companies tell us what the calculation of efficiency is, yet when hard science is incorporated, these numbers are badly off target from actual efficiency. The interaction of the solar system is highly efficient when you know what you are looking for and looking at. The planets are in rotational sequence with the sun (except the first two planets and the last planet in our solar system; these have an explanation as well).
Many theories do not make sense such as:
Ice Ages moving rocks and rounding them plus the generation of vast amounts of sand. Snow and ice are generated regionally and do not move, they melt there too.
Our planet not losing a drop of water, yet evidence shows other planets have, and we have vast amounts of ocean salt deposits in high latitudes that were created only a billion years ago.
What cooled the planet and generated a “skin” so that centrifugal force did not send mass flying off?
Planets are in inertia, yet there is only one day’s difference between the rotation of the sun’s core and the speeds of all the planets over 4.5 billion years, despite the different densities and sizes of the planets (except the 3 explained).
So, do I have any belief that current science and scientist are correct?
NOT!

Gary
May 11, 2011 5:59 am

An obvious ancillary question in light of the Surfacestations Project is: are there any microsite conditions that could alter the conclusions? Otherwise, the summary seems like they were thorough in their work.

Russell Duke
May 11, 2011 6:15 am

This is the problem of talking about averages vs. what we as individuals experience as weather. Texas is always too hot, or too cold, or too wet, or too dry. On AVERAGE it is a very nice place to live.

May 11, 2011 7:19 am

Joe Lalonde says on May 11, 2011 at 5:52 am:

Many theories do not make sense such as:
Ice Ages moving rocks and rounding them plus the generation of vast amounts of sand. Snow and ice are generated regionally and do not move, they melt there too.

Pls … what? Glacial movement of surface material didn’t take place? Please explain? Perhaps you mean something else?
Also, see two words: “Terminal Moraine”

A terminal moraine, also called end moraine, is a moraine that forms at the end of the glacier called the snout.
Terminal moraines mark the maximum advance of the glacier. An end moraine is at the present boundary of the glacier.
Terminal moraines are one of the most prominent types of moraines in the Arctic. One famous terminal moraine is the Giant’s Wall in Norway which, according to legend, was built by giants to keep intruders out of their realm. It is now known that terminal moraines are created at the edge of the greatest extent of the glacier. At this point, the debris that has been accumulated by plucking and abrasion, that has been pushed by the front edge of the ice is driven no farther, but instead is dumped in a heap. Because the glacier acts very much like a conveyor belt, the longer it stays in one place, the greater the amount of material that will be deposited. The moraine is left as the marking point of the terminal extent of the ice.
In North America, the Outer Lands is a name given to the terminal moraine archipelago of the northeast United States (Cape Cod, Martha’s Vineyard, Nantucket, Block Island and Long Island).
Other prominent examples of terminal moraines are the Tinley Moraine and the Valparaiso Moraine, perhaps the best examples of terminal moraines in North America. These moraines are most clearly seen southwest of Chicago.

Bolding mine.
Ref: http://en.wikipedia.org/wiki/Terminal_moraine

netdr2
May 11, 2011 7:20 am

The rebuttals to the surface-stations project were that a market basket of good stations didn’t produce temperatures much different from a market basket of all stations.
I was led to believe that both sets of stations had been adjusted, which means they were homogenized with other surrounding stations. Is that true?
If that is the case, the good stations have been adjusted to be worse and the bad ones have been adjusted to be better, so the test is invalid. It should be good vs. bad with no adjustments allowed.
The next point they made is that the difference in temperature was all that mattered; the station could be off by 2 °C but the differences remained the same. I doubt that concrete responds to more and less sun the same way that cow pasture does. The one at the waste treatment plant would behave much differently than the one in the cow pasture.
Just to be clear, adjusting the good one to be worse is silly. Would you adjust a compass in a junkyard to match one in a cow pasture and use the average? Silly logic, anyone?
The third point, which wasn’t addressed, is time. Do badly sited stations respond differently over time? DFW airport was a cow pasture in 1977, but it is the home of a surface station now. A small city of 30,000, along with hundreds of square miles of concrete, has been built on or near that cow pasture.
Each home has an electric line going into it, with millions of kilowatt-hours of energy going in and eventually escaping, no matter how much insulation it has.
I cannot believe there has been no effect on temperature or that it has been properly compensated for.

Jeremy
May 11, 2011 7:22 am

This became Brent McRoberts’ master’s thesis, and with a little bit of refinement it because a paper that is going to be published in the Journal of Applied Meteorology and Climatology in the next couple of months.

I think you mean “became” not “because”.
Congrats on turning a master’s thesis into a published paper!

Günther Kirschbaum
May 11, 2011 7:24 am

Is it so difficult to spell John Nielsen-Gammon’s name right?
REPLY: It is if you are partially dyslexic, like I am. I constantly fight with letter reversal. Fixed and thank you for pointing it out. My question for you based on your behavior here: Is it so difficult to stop being boorish and rude with every post? You really are quite the sourpuss. – Anthony

reason
May 11, 2011 7:41 am

“Texas is always too hot, or too cold, or too wet, or too dry. On AVERAGE it is a very nice place to live.”
Bolded for TRUTH.

May 11, 2011 7:56 am

A.C. Osborn & Jeremy – Thanks for catching the “because” typo/autocorrect combo-error. I’ve fixed it in the original post.

David Smith
May 11, 2011 6:23 pm

John, when you review the precipitation records you may want to check for what I call “precipitation time-of-observation bias”.
The observation time at many stations shifted over the decades. My cursory review shows that late afternoon used to be the daily breakpoint but that shifted to morning or midnight over several decades.
When the 24-hour observations were taken in the late afternoon, they sometimes split heavy rain events (often involving afternoon thunderstorms) in two, with part of the event falling into one day and the rest falling into the following day. That split tended to reduce the maximum size of either daily event.
When the observations were shifted to the quieter morning, the heavy-rain afternoons were no longer split.
Thus, it may appear, falsely, that daily heavy rain events increased.
Something to ponder.
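
David’s split-day effect is easy to demonstrate. Here is a small Python sketch with synthetic hourly rainfall and hypothetical observation hours: the same storm is binned into 24-hour observation days ending at 5 pm and at 7 am, and the afternoon cutoff splits the storm across two days, shrinking the largest single-day total.

```python
# Sketch of the precipitation time-of-observation effect: identical hourly
# rainfall, binned into 24-hour "days" ending at different observation
# hours, yields different daily maxima. All numbers are synthetic.
import numpy as np

rain = np.zeros(72)                             # three days of hourly rain
rain[38:44] = [0.2, 0.8, 1.5, 1.0, 0.4, 0.1]    # one thunderstorm in the
                                                # afternoon of day two

def daily_totals(hourly, obs_hour):
    """Sum hourly rain into 24-hour observation days ending at obs_hour."""
    # Shift so each observation day starts just after the cutoff; np.roll
    # wraps the ends around, which is harmless here since the edges are dry.
    shifted = np.roll(hourly, -(obs_hour + 1))
    return shifted.reshape(-1, 24).sum(axis=1)

print("max daily total, 5 pm cutoff:", daily_totals(rain, 17).max())  # 3.5
print("max daily total, 7 am cutoff:", daily_totals(rain, 7).max())   # 4.0
# The 5 pm cutoff splits the storm across two observation days, so the
# largest single-day total looks smaller than with a morning cutoff.
```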

Matt in Houston
May 12, 2011 7:15 am

This is definitely a step in the right direction, at least regarding the history. But it still has a subjective component. Can’t precipitation be more accurately measured with advanced radar systems? I realize not all radar-measured precip will actually make it to the ground, but I would think that should be more accurately determinable than by essentially random collections of buckets of water. Am I off base here?

May 12, 2011 9:48 pm

David – Interesting point. Fortunately, it doesn’t affect my monthly totals.
Matt – Unfortunately, radars measure backscattered electromagnetic radiation. To convert that to rainfall, it needs to be calibrated with rain gauge observations, and (especially unfortunately) the calibration is weather-dependent. I like to use radar to get the detailed patterns but gauge information is needed for overall amounts. For an operational product that combines both types of information, visit http://water.weather.gov/precip
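
For readers wondering what “calibrated with rain gauge observations” can look like in its simplest form, here is a sketch of a mean-field-bias correction, one common textbook approach: scale the whole radar field by the ratio of gauge totals to the collocated radar estimates. The grid, gauge locations, and amounts below are invented for illustration; operational products such as the one linked above are considerably more sophisticated.

```python
# Minimal mean-field-bias sketch: gauges set the overall amount, radar
# keeps the spatial pattern. All values are hypothetical.
import numpy as np

radar = np.array([[4.0, 5.5, 3.0],
                  [6.0, 2.5, 4.5],
                  [3.5, 5.0, 2.0]])     # radar-estimated rainfall (mm)

gauge_cells = [(0, 1), (1, 0), (2, 2)]  # grid cells containing a gauge
gauge_mm = np.array([7.0, 7.5, 2.6])    # what those gauges actually caught

radar_at_gauges = np.array([radar[r, c] for r, c in gauge_cells])
bias = gauge_mm.sum() / radar_at_gauges.sum()   # one multiplicative factor

corrected = radar * bias
print(f"mean field bias: {bias:.2f}")           # ~1.27 here
print(corrected.round(1))
# The corrected field has the radar's pattern but the gauges' totals,
# matching the division of labor described above.
```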

The iceman cometh
May 13, 2011 12:39 am

One of the problems with trying to analyze rainfall data is that rain is NOT normally distributed. Yet we take arithmetic averages of the data, and that results in a hidden bias – the arithmetic average is not the same as the mode, or most likely value. A lot of rainfall data is reasonably well represented by a log-normal distribution, which has the advantage that it is nice and easy to work with – and which also means that the arithmetic average generally overestimates the most likely value.
With this thought, I have done quite a bit of analysis of the NADP data: Lloyd, Philip J., “Changes in the wet precipitation of sodium and chloride over the continental United States, 1984–2006,” Atmospheric Environment 44(26), pp. 3196–3206, 2010. One of the fun things I found was that trend analysis became easy if you forgot about Pearson regression, used all the available data, and did an F-test to check the significance of the trend. What is a trend? The way in which the average changes with time. If you have 30 years of weekly data, you can estimate the average of the first 100 weeks quite well, because the error on the mean is (the error on the individual measurement)/√n. The error on the individual measurement is high (it includes seasonal effects), but the error on the mean is still small. In effect, in doing an F-test, you are seeing how the average (of the log rainfall!) moves with average time – and that means you can cope with missing data without any significant impact on the detected trend.
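
The mean-versus-mode point is easy to check numerically. This short Python sketch, using arbitrary log-space parameters, draws a log-normal “rainfall” sample and compares its arithmetic mean with the distribution’s median and mode.

```python
# For log-normal data the arithmetic mean exceeds the median and mode, so
# averaging raw rainfall overstates the "most likely" amount. Parameters
# here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = np.log(20.0), 0.6             # hypothetical log-space parameters
rain = rng.lognormal(mu, sigma, 100_000)  # synthetic weekly rainfall, in mm

print(f"arithmetic mean: {rain.mean():.1f}")            # ~23.9
print(f"median:          {np.exp(mu):.1f}")             # 20.0 (geometric mean)
print(f"mode:            {np.exp(mu - sigma**2):.1f}")  # ~14.0
# Trend tests on raw averages and on log-transformed data can therefore
# behave quite differently, which is the commenter's point.
```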