The Weather History Time Machine

From San Diego State University – A program created by Samuel Shen allows researchers to look back in time to see how precipitation across the globe contributed to major weather events.

Shen and colleagues created a video (which follows) showcasing their historical precipitation data. At 00:31 (July 1933 – June 1934), you can see the extreme dryness in the Pacific Ocean preceding the Dust Bowl.

By Michael Price

During the 1930s, North America endured the Dust Bowl, a prolonged era of dryness that withered crops and dramatically altered where the population settled. Land-based precipitation records from the years leading up to the Dust Bowl are consistent with the telltale drying-out period associated with a persistent dry weather pattern, but they can’t explain why the drought was so pronounced and long-lasting.

The mystery lies in the fact that land-based precipitation tells only part of the climate story.  Building accurate computer reconstructions of historical global precipitation is tricky business. The statistical models are very complicated, the historical data is often full of holes, and researchers invariably have to make educated guesses at correcting for sampling errors.

Hard science

The high degree of difficulty and expertise required means that relatively few climate scientists have been able to base their research on accurate models of historical precipitation. Now, a new software program developed by a research team including San Diego State University Distinguished Professor of Mathematics and Statistics Samuel Shen will democratize this ability, allowing far more researchers access to these models.

“In the past, only a couple dozen scientists could do these reconstructions,” Shen said. “Now, anybody can play with this user-friendly software, use it to inform their research, and develop new models and hypotheses. This new tool brings historical precipitation reconstruction from a ‘rocket science’ to a ‘toy science.’”

The National Science Foundation–funded project is a collaboration between Shen, University of Maryland atmospheric scientist Phillip A. Arkin and National Oceanic and Atmospheric Administration climatologist Thomas M. Smith.

Predicting past patterns

Prescribed oceanic patterns are useful for predicting large weather anomalies. Prolonged dry or wet spells over certain regions of the ocean can reliably indicate whether, for instance, North America is entering an ocean-driven weather pattern such as El Niño or La Niña. The problem for historical models is that reliable data exists for only a small percentage of Earth's surface. About eighty-four percent of all rain falls in the middle of the ocean with no one to record it. Satellite weather tracking is only a few decades old, so for historical models, researchers must fill in the gaps based on the data that does exist.

Shen, who co-directs SDSU’s Center for Climate and Sustainability Studies Area of Excellence, is an expert in minimizing error in model simulations. In the case of climate science, that means making the historical fill-in-the-gap guesses as accurate as possible. Shen and his SDSU graduate students Nancy Tafolla and Barbara Sperberg produced a user-friendly, technologically advanced piece of software that does the statistical heavy lifting for researchers. The program, known as SOGP 1.0, is based on research published last month in the Journal of Atmospheric Sciences. The group released SOGP 1.0 to the public last week, available by request.

SOGP 1.0 takes its name from the statistical technique it implements, spectral optimal gridding of precipitation, and is written in MATLAB, a programming language commonly used in science and engineering. It reconstructs precipitation records for the entire globe (excluding the polar regions) between 1900 and 2011 and allows researchers to zoom in on particular regions and timeframes.
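To make the zoom-in concrete, here is a small sketch. This is not SOGP itself (which is MATLAB, available only by request) but an illustrative Python analogue: the 5-degree grid, the 60°S-60°N extent, and the data values are all assumptions for illustration.

```python
import numpy as np

# Hypothetical reconstruction array: months x latitude cells x longitude cells.
# 1900-2011 inclusive = 112 years = 1344 months; an assumed 5-degree grid
# excluding the polar regions (60S-60N) gives 24 x 72 cells. The values are
# random stand-in data, not a real reconstruction.
n_months = 112 * 12
lats = np.arange(-57.5, 60.0, 5.0)    # cell-center latitudes, 60S-60N
lons = np.arange(-177.5, 180.0, 5.0)  # cell-center longitudes
precip = np.random.default_rng(0).gamma(2.0, 1.5, (n_months, lats.size, lons.size))

def window(field, lats, lons, lat_range, lon_range, year_range, base_year=1900):
    """Slice out a lat/lon box and an inclusive span of years."""
    t0 = (year_range[0] - base_year) * 12
    t1 = (year_range[1] - base_year + 1) * 12
    lat_sel = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lon_sel = (lons >= lon_range[0]) & (lons <= lon_range[1])
    return field[t0:t1][:, lat_sel][:, :, lon_sel]

# A tropical-Pacific box for the Dust Bowl run-up years:
box = window(precip, lats, lons, (-10, 10), (160, 180), (1933, 1934))
```

Any gridded product with a known start year and resolution can be windowed this way; the `window` helper is hypothetical, not part of SOGP.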

New tool for climate change models

For example, Shen referenced a region in the middle of the Pacific Ocean that sometimes glows bright red on the computer model, indicating extreme dryness, and sometimes dark blue, indicating an unusually wet year. When either of these climate events occurs, he said, it’s almost certain that North American weather will respond to these patterns, sometimes in a way that lasts several years.

“The tropical Pacific is the engine of climate,” Shen explained.

In the Dust Bowl example, the SOGP program shows extreme dryness in the tropical Pacific in the late 1920s and early 1930s — a harbinger of a prolonged dry weather event in North America. Combining this data with land-record data, the model can retroactively demonstrate the Dust Bowl’s especially brutal dry spell.

“If you include the ocean’s precipitation signal, the drought signal is amplified,” Shen said. “We can understand the 1930s Dust Bowl better by knowing the oceanic conditions.”

The program isn’t a tool meant to look exclusively at the past, though. Shen hopes that its ease of use will encourage climate scientists to incorporate this historical data into their own models, improving our future predictions of climate change.

Researchers interested in using SOGP 1.0 can request the software package as well as the digital datasets used by the program by e-mailing sogp.precip@gmail.com with the subject line, “SOGP precipitation product request,” followed by your name, affiliation, position, and the purpose for which you intend to use the program.


53 thoughts on “The Weather History Time Machine”

  1. Well, at least they admit it is modeling based on guesses.
    I like the soothing sound effects of rain. Quite relaxing.

    • “The tropical Pacific is the engine of climate,” Shen explained.
      Isn’t this what you have been saying for years Bob?
      Maybe the warmistas are at last getting the idea that this Earth is a water planet (70% surface Ocean water cover + lakes and rivers) and that with the Pacific Ocean covering one entire hemisphere, one does not have to look beyond that to find what is the main driver of our Climate.
      Keep up the good work Bob – I read all your posts and have learned a great deal from them.
      Brian J in UK.

  2. Watching the video, most of the dry (drought?) appears in the Eastern part of the USA, not the West. Nor did I see any particularly dry spells in the Western US States to coincide with the severe droughts of the 30s and 50s.
    That seems to be a big lack of accuracy in the historical record.

    • The maps show the magnitudes of anomalies in mm/day. A better representation would have been to show % of normal anomalies. This would have better emphasized droughts in dry areas, and de-emphasized slightly dry years in wet areas.

      • Gee, what’s a “normal anomaly”? Has the word become so hyperbolic it’s losing its meaning? Once upon a time “anomaly” was used for an aberrant spike in a trace (or equivalent), that is something unusual and worthy of investigation, not every recorded deviation from either a mean, or some datum which is arbitrarily decided.

    • If parts of the west normally get 10″, err, 30 cm (300 mm) of rain per year, and in a dry year they get 1 mm/day less than average, then they’ll wind up with -65 mm for the whole year.
      Tell them to throw the plots out and redo them with percentages as Data Soong suggests….
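The percent-of-normal suggestion in this thread comes down to a one-line calculation. A minimal sketch, with invented numbers:

```python
# The same absolute deficit is a much larger fraction of "normal" in a dry
# region than in a wet one; the figures below are invented for illustration.
def percent_of_normal(observed, normal):
    """Precipitation expressed as a percentage of the climatological normal."""
    return 100.0 * observed / normal

dry = percent_of_normal(1.0, 2.0)    # dry area: 50% of normal, a pronounced drought
wet = percent_of_normal(9.0, 10.0)   # wet area: 90% of normal, only slightly dry
```

Mapping percent of normal rather than mm/day would emphasize the first case over the second, which is the commenters' point.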

  3. It is always strange to read things like these. After all, what did they just say about previous models? It goes completely against everything we’ve been told regarding certainty. Moreover, how many scientists did they say were able to use previous models? The claim is that all this warming hoopla has been created by a very, very, very small group of people.

    • They could have relied upon observations from ships, aircraft & islands, but don’t hold out much hope. More likely what dbstealey says.

      • Complete nonsense.
        Even today there are large parts of the Pacific Ocean untroubled by shipping. There are regular stories of fishermen turning up after months adrift without having seen any ships. As any good American should know, the Japanese fleet sailed on 26th November 1941, arrived in Hawaiian waters on 7th December 1941, and then escaped undetected. This suggests vast areas of ocean were unused for most of the 20th century.
        GPS mapping supports the idea that it remains that way now.
        Check it out.

      • Even more true for the Southern Pacific. Actually, coverage there was much better in the 19th century, when sailing ships regularly went down to 40-55 degrees south to take advantage of the constant westerly gales. That era ended in 1914, but there are some data from whalers up to the 1960s or so. Since then there has been literally nothing except a very few research vessels and ecotourist cruises. I didn’t appreciate just how empty southern waters are nowadays till I went on a cruise in the waters south and east of New Zealand a few years ago. Not one single ship sighted for 3 weeks and more than 3,000 miles. And that was in summer.

      • Observations gathered by a fish-whisperer….
        Seriously, any observations at sea can’t be anything but extremely spotty.
        A ship measuring precipitation at sea? Precipitation falling during a storm. How did a ship at sea keep seawater separate from rain while it’s crashing through the waves?
        Talk about siting issues!

      • There are lots of islands & is shipping in the tropical Pacific, where the ENSO occurs. The North Pacific & far southern Pacific, not so much. There was indeed more sailing in the Southern Ocean in the 19th than 21st centuries.

      • Mosh – right.
        Not sure how many.
        Nor whether any are quantitative, rather than qualitative.
        My weather observing days in the Voluntary Observing Fleet [1970s and 80s] relied on various codes – see here http://badc.nerc.ac.uk/data/surface/code.html#presweath all those over 49 relate to precipitation.
        An example – 93 – Slight snow, or rain and snow mixed or hail at time of observation –
        Thunderstorm during the preceding hour but not at time of observation.
        We never tried to ‘measure’ rainfall.
        Didn’t like rain – ruined the fresh paint [a ship is a big thing to paint, so you’re always doing fabric maintenance]; and it reduced visibility [and radar effectiveness] – so made seeing the next ship more difficult.
        The UK’s MCGA, in their Marine Information Note 361 [of 2009] had a chartlet showing where observations were made [and so where not, too!]. I can’t find it on the net, but probably have a copy in my files at work – will look tomorrow.
        Auto

      • Scientists have come up with the first comprehensive map of global shipping routes based on actual itineraries. The team pieced together a year’s worth of travel itineraries from 16,693 cargo ships using data from Lloyd’s Register Fairplay and the Automatic Identification System, which tracks vessels using a VHF receiver and GPS.
        A few hot spots logged the majority of journeys. The busiest port was the Panama Canal, followed by the Suez Canal and Shanghai.

        Notice the areas with no shipping for an entire year. The Pacific, South Atlantic and Indian Oceans are untouched by shipping, just where are those observations other than in someone’s guesses?
        http://www.wired.com/images_blogs/wiredscience/2010/01/figure1a-660×379.gif

      • Sandyinlimousin – thanks.
        I mentioned MIN 361 – that actually copied MSC.1/Circ.1293, to which I (now!) link below.
        I offer an actual month of actual weather observations: –
        http://www.imo.org/blast/blastDataHelper.asp?data_id=24475&filename=1293.pdf
        This link shows – on page 6/6 – for one month (August 2008) only – where ship observations were made.
        <400,000 for the month, globally.
        Error bars of mental arithmetic. . .
        Average – one per 300-ish square miles of ocean. For the month.
        one per 90-100 thousand square miles each day.
        Area of the UK – about 92,000 square miles.
        I am encouraging ship masters I know to have their ships become Voluntary Observing Ships.
        Auto

      • Auto,
        Just looking at the map of logged courses by shipping, I would say that vessels follow the same course to get from A to B, for good economic reasons, and with modern equipment follow a precise course unaffected by the elements. Even though they take many readings, there will still be large areas unmonitored. This leaves the same problem as is encountered in the Arctic, Antarctica, and other remote land areas: one can only guess.

      • The French have that covered.
        But the US has lots of observations:
        http://www.archives.gov/research/guide-fed-records/groups/027.html
        Samples from:
        Records of the Weather Bureau
        (Record Group 27)
        1735-1979
        27.5.5 Records of the Marine Division
        Textual Records: Abstracts of ships’ logs collected by Lt. Matthew Fontaine Maury (“Maury Logs”), 1796-1861. Abstracts of ships’ logs, 1862-78. Records of marine observations by ocean square, 1873-86; and simultaneous meteorological observations on ships, 1886-1902. Ship abstract storm logs, 1896-1910. Gale and storm reports, 1895-1910. Fog reports, 1896-1910. Marine meteorological journals, 1879-93. Records containing summary weather data for the North Pacific and North Atlantic Oceans, 1890-1904. Records of observations at the Guam Naval Station, 1902-8, 1913-19 (in San Francisco). Records of observations in the Gulf of Mexico and North Atlantic and North Pacific Ocean areas, 1890-1930; and the Azores Islands, 1896-99, 1912-21.
        Microfilm Publications: M1160.
        27.5.6 Records of the Division of Operations and Reports
        Maps: Marine Section monthly maps of climatic conditions in the oceans and Great Lakes, 1909-14 (855 items). See 27.7.
        Maps: Locations of weather reporting stations, forecast centers, flight advisory weather service units, airport stations, and headquarters, 1944-45 (10 items). See also 27.7.
        27.5.8 Records of the Office of Meteorological Research
        Textual Records: Records, 1953-60, relating to the International Geophysical Year (July 1, 1957-Dec. 31, 1958).
        Maps: Historical synoptic maps for the Northern Hemisphere, compiled 1941-65, from data collected 1899-1965, many prepared in cooperation with the Armed Forces and certain colleges and universities, showing daily weather (57,916 items); tracks of high and low pressure and conditions at upper levels of the atmosphere (6,883 items); and time variations, sunrises, and sunsets (48 items). Southern Hemisphere and Southwest Pacific weather maps, 1932-52 (2,500 items). International Geophysical Year aerological cross sections along 75 degrees West, 1957-58 (3,240 items). See also 27.7.
        27.5.9 Records of the Forecast Division
        Maps: Manuscript and published daily U.S. surface weather maps, 1891-1941 (60,000 items). Wet bulb readings, 1895-97 (1,640 items). Barometric charts, 1937-39 (1,761 items). See also 27.7.
        27.5.10 Records of the Division of Synoptic Reports and Forecasts
        Maps: Manuscript and published daily U.S. surface weather maps, 1941-65 (83,200 items). Base maps, 1941-65 (13 items). See also 27.7.
        27.5.12 Records of the Statistics Division
        Maps (48 items): North Atlantic and eastern Siberia average ceiling heights and visibility limits, compiled by the Work Projects Administration and the weather service of the Army Air Forces, ca. 1943. See also 27.7.
        27.5.13 Records of the Aerological Division
        Map (1 item): Upper air winds over the United States, 1937. See also 27.7.

  4. I simply see wet areas swapping with dry areas a year or two later, and vice-versa.
    I also notice the dark blue and dark red didn’t get darker or more widespread.

  5. The high degree of difficulty and expertise required means that relatively few climate scientists have been able to base their research on accurate models of historical precipitation. …….
    name one

  6. ‘Building accurate computer reconstructions of historical global precipitation is tricky business. The statistical models are very complicated, the historical data is often full of holes, and researchers invariably have to make educated guesses at correcting for sampling errors.’
    This made me cringe. Guesses ‘correcting’ actual data.

  7. In what science can you say “we are guessing” but call it accurate? There is a palm reader down the road who would have done it cheaper.

  8. There is nothing illuminating about this use of made-up data. It is indeed science fiction. However for them, the important thing is that they got paid for doing it and were praised by their peer group for supporting the Climate Construct.

  9. Recall Mesopotamia’s notorious “Dark Millennium” c. BC 4000 – 3000, whose extremely abrupt mega-drought blighted hydraulic civilizations’ development for nigh 1,000 years.
    Standard references attribute this very prolonged, severe regional catastrophe to shifting Atlantic Ocean currents, apparently wafting Saharan desiccation east and north.
    For whatever reason, archaeologists have excavated horrific evidence of mass die-offs from the Levant to Indus Valley thermoclines, evidence of destructive warming that could just as well have characterized the equally disruptive 1,500-year Younger Dryas “cold shock” from c. 12,800 – 11,300 YBP.
    Be warned: Sand-grain Earth is truly too confined for comfort.

  10. This sentence says it all for me: “This new tool brings historical precipitation reconstruction from a ‘rocket science’ to a ‘toy science’.”
    I’ve always believed that global warming / climate change was a toy science (computer models).

  11. Well, it’s good to see that somebody finally tackled this problem.
    Let’s see if I can explain.
    A simple example.
    Suppose you have a pool in your backyard and a thermometer at both ends. One reads 75 F, the other reads 80 F. So you take an average and come up with 77.5 F.
    What does that represent? Well, an average (any average) is a model. A very simple model. What does that model predict? It predicts that if you randomly sample the pool at some other location, you will measure 77.5 F. That’s what an average is. It’s a prediction about what you will observe if you take a random location and record the temperature. This prediction has errors. But if ALL you know is two measurements, 75 and 80, if that’s all you really know, then your best estimate (yes, guess) is 77.5. It minimizes your error.
    There are many ways to “average” a set of observations, many ways to predict the data at unsampled locations. To date, the techniques used to create gridded rain fields haven’t been the best methods.
    In this study they used EOFs; go google that.
    To build their “average” they used GPCP data from 1979 to 2008 to build the EOFs, and they used GHCN precipitation data from 1900 to present for the regression coefficients.
    It’s a fancy average, probably the best method for problems of this nature.
    And you can actually test the prediction.
    Science all the way down.
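The EOF-plus-regression recipe this comment describes (spatial patterns from a densely observed era, regression coefficients from sparse gauge data) can be sketched with numpy. This is a noise-free toy on synthetic data, not the SOGP code; the dimensions and patterns are invented so the fit comes out exact.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_space = 360, 50          # e.g. 30 years of months x 50 grid cells

# A dense "modern" training field built from two hidden spatial patterns.
# Noise-free so the toy reconstruction is exact up to rounding.
true_patterns = rng.normal(size=(2, n_space))
amplitudes = rng.normal(size=(n_time, 2))
dense = amplitudes @ true_patterns

# EOFs = right singular vectors of the (time x space) anomaly matrix.
anoms = dense - dense.mean(axis=0)
_, _, vt = np.linalg.svd(anoms, full_matrices=False)
eofs = vt[:2]                       # keep the two leading patterns

# A "historical" month observed at only 10 of the 50 locations.
truth = np.array([1.5, -0.7]) @ true_patterns
obs_idx = rng.choice(n_space, size=10, replace=False)

# Regress the sparse observations onto the EOFs at the observed points,
# then use the fitted coefficients to fill in every location.
coefs, *_ = np.linalg.lstsq(eofs[:, obs_idx].T, truth[obs_idx], rcond=None)
reconstruction = coefs @ eofs

err = np.abs(reconstruction - truth).max()
```

With real, noisy data the error would of course be nonzero; the point is the structure of the "fancy average": patterns from the dense era, coefficients from the sparse one.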

    • An alien landed on planet Earth. It notes that the dominant land-surface life forms with large cranial-to-body-weight ratios use a 7 day-night calendar to organize their lives. It also notes that on 3 consecutive Tuesdays, it rained. It returns to its home planet and reports that the humans are far more advanced than they realized: they could control their weather so that it only rains on Tuesdays.
      Lesson: Aliens capable of space travel use progressive reasoning.

    • So, we can’t agree on temperatures, and now we’re gonna throw an estimate of water vapor into the equation, to ………..try to play a video game ?
      I’ve got better ways to waste my time.

    • Actually, it’s turtles all the way down that rabbit hole.
      Achilles the rabbit slept, with zero motion, some of the time.
      I think you’ll find the turtle had the highest average speed.
      It’s really revealing science, that averaging stuff. Not!

    • And then there is the cowpoke and his horse who both drowned in a river crossing where the average depth was only two feet. That’s the problem with implementing real-world activities based upon models. Or spending huge tax dollars on imbecilic alternative energy projects which do not and cannot work or sustain themselves.

    • An average is only a model when used for predictive purposes. Otherwise it’s just an average.
      An average used for predictive purposes without some theoretical or empirical basis, is equivalent to predicting using coin tosses or chicken entrails. Most definitely not science.
      Otherwise, empirically deriving the best method of infilling sparse data would appear a straightforward problem. Randomly remove some proportion of your dataset and see how well your algorithm ‘predicts’ the removed data.
      No ‘guesses’ required.
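The holdout check proposed here takes only a few lines. Below is a minimal harness over synthetic data; the infill method is a deliberately crude placeholder (the mean of the observed cells) standing in for a real algorithm such as EOF regression or kriging:

```python
import numpy as np

rng = np.random.default_rng(42)
field = rng.gamma(2.0, 1.5, size=200)      # synthetic precipitation values

# Withhold 20% of the cells at random.
held_out = rng.choice(field.size, size=40, replace=False)
mask = np.ones(field.size, dtype=bool)
mask[held_out] = False

# Crude infill: predict every withheld cell as the mean of the observed cells.
prediction = np.full(held_out.size, field[mask].mean())

# Score the prediction against the values that were actually withheld.
rmse = np.sqrt(np.mean((prediction - field[held_out]) ** 2))
```

Swapping a better infill into `prediction` and comparing RMSE scores is exactly the "no guesses required" test the comment describes.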

  12. “If you include the ocean’s precipitation signal, the drought signal is amplified,” Shen said. “We can understand the 1930s Dust Bowl better by knowing the oceanic conditions.”
    ++++++++++
    Isn’t this backwards? Specifically, don’t they need the Dust Bowl land records to guess at the oceanic conditions? Said another way, the Dust Bowl drought proves that the ocean was dry (they said it not me) and that dry ocean tells us there was a drought leading to the Dust Bowl.
    Am I confused here?

    • I think the meaning of their statement was that knowing the oceanic state they could better understand what produced the conditions in the Dust Bowl. It seems to me that the work has some value if it can be validated. Understanding that drought conditions in the Midwest are likely due to conditions in the Pacific at a given time allows farmers to better choose how to use their land. One of the major problems in the Dust Bowl era was that farmers continued to plough and harrow the land only to watch the loose topsoil produced blow away in the wind. Leaving the fields fallow with the stubble of the previous harvest in place would have helped conserve the topsoil.

  13. Shen will democratize this ability, allowing far more researchers access to these models.

    What a novel idea, making the methodology behind your research public. Maybe Michael Mann will sue Shen for starting a trend in climate science that highlights an immensely large shortfall in Mann’s research and in climate science in general.

  14. “Now, anybody can play with this user-friendly software, use it to inform their research, and develop new models and hypotheses. This new tool brings historical precipitation reconstruction from a ‘rocket science’ to a ‘toy science.’”
    This is exactly the point!
    ‘Climate science’ no longer even vaguely resembles a science, but rather a community of ‘gamers’ who, working with the same basic software framework, are now empowered to customize the system with their own ‘mods’ (and those created by others, if they so wish) so that it produces a product that is more pleasing to their own eyes.
    ‘Climate science’ now resembles the modder communities that have grown up around the Elder Scrolls games (Skyrim, Oblivion, Morrowind), Minecraft, the Fallout series, and so many other popular games.

  15. A couple of things I noticed. It seems like the area of very dry/very wet increases across the globe over a period of time, which tells me that they’re falling into the preconceived notion that climate change has caused more extremes. I’m sure if they use this to model the future, we will be seeing megadroughts and megafloods everywhere. The other thing I’ve noticed is that it seems to miss major events in recent history. The “Flood of the Century” in 1993 is modeled as near normal across the Midwest. The major flooding along eastern and central NC that occurred due to record rainfall and two tropical storms that hit in 1999 is shown as a major drought. If it can’t get recent history modeled accurately, why should I trust it going further back?
