New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems

COLUMBIA, Mo. – The world’s oceans cover more than 72 percent of the earth’s surface, drive a major part of the carbon cycle, and contribute to variability in global climate and weather patterns. However, the ability to accurately predict the condition of the ocean is limited by current methods. Now, researchers at the University of Missouri have applied complex statistical models to increase the accuracy of ocean forecasting, which can influence the ways in which forecasters predict long-range events such as El Niño and the behavior of the lower levels of the ocean food chain—one of the world’s largest ecosystems.

“The ocean really is the most important part of the world’s environmental system because of its potential to store carbon and heat, but also because of its ability to influence major atmospheric weather events such as droughts, hurricanes and tornados,” said Chris Wikle, professor of statistics in the MU College of Arts and Science. “At the same time, it is essential in producing a food chain that is a critical part of the world’s fisheries.”

The vastness of the world’s oceans makes predicting its changes a daunting task for oceanographers and climate scientists.  Scientists must use direct observations from a limited network of ocean buoys and ships combined with satellite images of various qualities to create physical and biological models of the ocean.  Wikle and Ralph Milliff, a senior research associate at the University of Colorado, adopted a statistical “Bayesian hierarchical model” that allows them to combine various sources of information as well as previous scientific knowledge. Their method helped improve the prediction of sea surface temperature extremes and wind fields over the ocean, which impact important features such as the frequency of tornadoes in tornado alley and the distribution of plankton in coastal regions—a critical first stage of the ocean food chain.
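
To make the idea concrete, here is a minimal toy sketch in Python (my own illustration, not Wikle and Milliff's actual model) of the core idea behind a Bayesian hierarchical approach: a "data" layer for each observation source, a "process" layer for the quantity of interest, and a prior encoding previous scientific knowledge, combined by weighting each source by its precision. All names and numbers below are invented for illustration.

```python
# Illustrative sketch of a (very small) Bayesian hierarchical combination of
# data sources: sparse-but-accurate buoy readings, dense-but-noisy satellite
# retrievals, and a prior on sea-surface temperature (SST). Invented values.
import numpy as np

# Data layer: two observation sources with different error characteristics.
buoy_obs = np.array([26.1, 25.8, 26.4])       # buoy SST readings (deg C)
buoy_sd = 0.2                                 # assumed buoy measurement error
sat_obs = np.array([25.2, 27.0, 26.8, 25.5])  # satellite SST retrievals (deg C)
sat_sd = 0.8                                  # assumed satellite error

# Prior scientific knowledge about the true SST (process-layer prior).
prior_mean, prior_sd = 25.0, 2.0

# With everything normal, the posterior is available in closed form:
# precisions (1/variance) add, and means are precision-weighted.
precisions = np.array([1.0 / prior_sd**2,
                       len(buoy_obs) / buoy_sd**2,
                       len(sat_obs) / sat_sd**2])
means = np.array([prior_mean, buoy_obs.mean(), sat_obs.mean()])

post_precision = precisions.sum()
post_mean = (precisions * means).sum() / post_precision
post_sd = np.sqrt(1.0 / post_precision)

print(f"posterior SST estimate: {post_mean:.2f} +/- {2 * post_sd:.2f} deg C")
```

In the published models the process layer is a full space-time field informed by ocean physics rather than a single number, but the principle of weighting heterogeneous data sources by how much each one is trusted works the same way.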

“Nate Silver of The New York Times combined various sources of information to understand and better predict the uncertainty associated with elections,” Wikle said. “Much like that, we developed more sophisticated statistical methods to combine various sources of data—satellite images, data from ocean buoys and ships, and scientific experience—to better understand the atmosphere over the ocean and the ocean itself. This led to models that help to better predict the state of the Mediterranean Sea and provide long-lead-time predictions of El Niño and La Niña. Missouri, like most of the world, is affected by El Niño and La Niña (through droughts, floods and tornadoes), and the lowest levels of the food chain affect us all through their effect on marine fisheries.”

El Niño is a band of unusually warm ocean water that periodically develops off the western coast of South America and can cause climatic changes across the Pacific Ocean and the U.S. La Niña is its counterpart and also affects atmospheric conditions throughout the country. Wikle and his fellow researchers feel that, through better statistical methods and models currently in development, a greater understanding of these phenomena and their associated impacts will help forecasters better predict potentially catastrophic events, which will likely be increasingly important as our climate changes.

Wikle’s study, “Uncertainty management in coupled physical-biological lower trophic level ocean ecosystem models,” was funded in part by the National Science Foundation and was published in Oceanography and Statistical Science.

–30–
Matthew R Marler
March 23, 2014 6:44 am

Stephen Richards: New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems
NO THEY CANNOT !!!!!!!!!!!

You didn’t just rule out perfection in that post. You ruled out improvement. That’s a pretty unambiguous statement. It shouldn’t be hard in 20 years to know whether you were correct.

March 23, 2014 7:03 am

Save the Climate!
Study the oceans
Restore CO2 balance.
Take Krill oil.
Save your joints.
Assist the Whales.
(sorry) 😉

Coach Springer
March 23, 2014 7:05 am

Evergreen news: if we improve something, it could do better. We didn’t need a university for that. To date, we have climate models that don’t and can’t work anywhere near the level that market trading models do. My first impression is that this improvement sounds like putting better motor oil in an experimental internal combustion engine from the 1700s that doesn’t really work at all.

Jbird
March 23, 2014 7:23 am

Give it up, Mosher. The alarmists have lost. (Some just haven’t figured that out yet.) All the nit-picking and hair-splitting in the world can’t change the fact that the models have failed.

Rob Ricket
March 23, 2014 7:35 am

OT… The global sea ice anomaly just turned positive, thanks to a late-season increase in Arctic coverage.
Lake Michigan has lost a great deal of ice. The open water will increase the potential for late-season lake-effect snow in the U.P., Indiana and Ohio.

Raymond
March 23, 2014 7:41 am

Mosher,
As a scientific layperson, I’m asking you if the following statement from http://iridl.ldeo.columbia.edu/dochelp/StatTutorial/Climatologies/ is true.
“Climatology is commonly known as the study of our climate, yet the term encompasses many other important definitions. Climatology is also defined as the long-term average of a given variable, often over time periods of 20-30 years. Climatologies are frequently employed in the atmospheric sciences, and may be computed for a variety of time ranges………”
If so, it would appear from this definition that the quality of current “climatology” is strictly dependent on the quality of the computer models and data used, and if one or the other is inadequate, the quality is poor.

Jim G
March 23, 2014 8:07 am

Steven Mosher says:
March 22, 2014 at 1:41 pm
“I predict that you are 6 feet tall, plus or minus 3 feet.
That’s a prediction, but pretty horrible. It has limited use.”
My recollection of Bayesian decision theory is that it is not a classical statistical routine, and no confidence intervals can be placed upon the results of following the decision tree to its various end points. The data collected to establish the probabilities for the tree branching points may have confidence intervals based upon the sample sizes of the various data sets involved. As I am not familiar with this specific type of Bayesian application, I am open to advice from anyone who knows more.
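
For what it is worth, the Bayesian counterpart of a confidence interval is a credible interval taken from the posterior distribution. Below is a minimal sketch (my own toy example, assuming a conjugate normal model with known measurement error, not any particular climate application):

```python
# Toy example: Bayesian credible interval for a mean under a conjugate
# normal prior/likelihood. All numbers are invented for illustration.
import numpy as np
from scipy import stats

data = np.array([14.2, 15.1, 13.8, 14.9, 15.4])   # hypothetical measurements
sigma = 0.8                                        # assumed known measurement sd

# Prior on the mean: Normal(mu0, tau0^2); posterior is also normal (conjugacy).
mu0, tau0 = 14.0, 2.0
n = len(data)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

lo, hi = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))
print(f"posterior mean = {post_mean:.2f}; 95% credible interval = ({lo:.2f}, {hi:.2f})")
```

The interpretation differs from a classical confidence interval (probability statements are made about the parameter given the data, not about repeated sampling), which may be what the comment above is getting at.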

Jim Bo
March 23, 2014 8:17 am

I modeled this.
On March 31 we’re going to see a media assault on truth (scientific and otherwise) the likes of which mankind has never before experienced.

March 23, 2014 8:31 am

Thanks, A., for exposing yet another alarmist article, but I’m afraid they have to keep on coming; their jobs depend on it.
Greg, when you write that representing the system as “CO2 rise + ‘noise’” is the fundamental error of the last 30 years of climate science, you are exactly right in my opinion. This is the crux of the matter. This is how GCMs are programmed, and this is why they are so wrong.

Jim G
March 23, 2014 8:35 am

W says:
Though I have significant disdain for climate models, particularly those that have continuously shown that they cannot predict squat, the Bayesian approaches that were developed for the military back in WWII were somewhat successful in assisting in the delivery of wartime materials. I agree that the climate system is chaotic in nature, but perhaps subsets of the various variables involved, tracked using Bayesian methods, might be more useful than time series approaches. That would depend, of course, upon the size and quality of the data sets used for establishing the probabilities, but cause and effect need not be inferred as in some of the classical methods which we know do not work.

Gamecock
March 23, 2014 8:40 am

Bill Illis says:
March 23, 2014 at 6:36 am
If it doesn’t work, when does the effort get abandoned and the funding ceased? If it doesn’t work, when do they try to find out what part is not working and try to fix it?
====================
If it doesn’t work, we’ll never hear about it.
“Rowers are going to trek the Northwest Passage to bring attention to climate change.”
We never got the story “dumbass rowers blocked by ice.” Yet the rowers got full media attention, by pre-publishing.

Gamecock
March 23, 2014 8:49 am

I disagree with all who think climate models could potentially become valuable. They can never have value. If a model correctly predicted, “It’s going to get drier around here in the next 10 years,” no one would do anything differently. People would react when it actually started getting drier.

Chip Javert
March 23, 2014 9:22 am

Joel O’Bryan says:
March 22, 2014 at 11:18 pm
IMHO, Bayesian stats can only help the failed climate models to find more reasons why they are failing. The modelers are good people, and they want their models to actually replicate the real climate. So let’s be smart here, but still skeptical too.
======================================================
I don’t have any Bayesian insight, but I sure have opinions of alarmist modelers:
These supposedly well-educated adults are somewhere between willfully naive and corrupt. They knowingly use pseudo-science to obtain money (grants) and power (academic positions). Most aggressively censor dissenting opinion; some propose throwing “deniers” in jail.
Modelers don’t try to “actually replicate the real climate” – if they did, model accuracy would improve via the scientific method (or they’d discover the system is stochastic). The only thing these guys have discovered is that Mother Nature does not behave according to their theory. This bovine excrement is pure propaganda for more funding.
This is done to knowingly misallocate hundreds of billions of dollars per year (a non-trivial 0.5% of global GDP). That amount of resource could easily prevent millions of deaths per year (lack of food, malaria).
These same people get all uppity if you point out they share some pretty unpleasant tactics with fascists or (GASP!) Nazis. Maybe they’d prefer the Catholic inquisition.

Matthew R Marler
March 23, 2014 9:23 am

Terry Oldberg, you might enjoy the book “A Comparison of Classical and Bayesian Methods of Estimation” by Francisco Samaniego.

Reply to  Matthew R Marler
March 23, 2014 10:58 am

Matthew Marler:
Thanks for the citation. In the construction of a model, I favor a method that is more aptly described as “optimization” than as “estimation.” The model builder optimizes the missing information in each of the inferences that are made by the model, minimizing or maximizing it under constraints expressing the available information. I favor this method because it is entirely logical and produces superior results.
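
The method described above reads like maximum-entropy inference: choose the distribution that maximizes the missing information (entropy) subject to constraints expressing what is known. The sketch below is a generic illustration of that technique (Jaynes’ classic dice example with an assumed mean), not Oldberg’s actual procedure:

```python
# Maximize entropy of a distribution on {1,...,6} subject to a constraint
# expressing the available information (an assumed mean of 4.5). The
# solution is an exponential-family distribution p_i proportional to
# exp(beta*i); we solve for the Lagrange multiplier beta numerically.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)
target_mean = 4.5

def mean_given_beta(beta):
    """Mean of the max-entropy distribution p_i ~ exp(beta * i)."""
    w = np.exp(beta * faces)
    p = w / w.sum()
    return (p * faces).sum()

# Find the multiplier that matches the mean constraint.
beta = brentq(lambda b: mean_given_beta(b) - target_mean, -5.0, 5.0)
p = np.exp(beta * faces)
p /= p.sum()

entropy = -(p * np.log(p)).sum()
print("max-entropy probabilities:", np.round(p, 3))
print(f"entropy = {entropy:.3f} nats, mean = {(p * faces).sum():.2f}")
```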

Chip Javert
March 23, 2014 9:38 am

Gamecock says:
March 23, 2014 at 8:49 am
I disagree with all who think climate models could potentially become valuable. They can never have value. If a model correctly predicted, “It’s going to get drier around here in the next 10 years,” no one would do anything differently. People would react when it actually started getting drier.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Well, your statement might be accurate for most people, but…
Get yourself a climate model that says the sea is rising 20 feet by the year 2100 (laughable), and you’ll immediately have a bunch of South Pacific politicians clamoring for US taxpayer funding.
All this “clamoring,” plus bad research, adds up to an estimated $350,000,000,000/year (or 0.5% of global GDP). That’s a whole lot of money and power…

ren
March 23, 2014 9:42 am

America should prepare for more blizzards and low temperatures next week. If someone says that it is because of GW, it contradicts the facts.

Roger D PGeol
March 23, 2014 9:47 am

As a petroleum geologist, my colleagues and I are frequently asked to evaluate geological phenomena on a statistical basis. For example, the Bayesian approach is used to predict the distribution of reservoir properties such as sand thickness, porosity, permeability, etc., to be used in computer simulations of a reservoir system under changing conditions. The model starts with the known parameters at various control points, usually wellbore penetrations, then applies the statistical approach to “fill in” those parameters in the intervening grid where data are unavailable, without the intervention of biases. After having worked in reservoir characterization for six years, and thereafter having run numerous models on numerous other projects in which I have been involved, my conclusion is that the Bayesian model seldom has any resemblance to the real-world geological phenomenon, and the most important part of the model is still the professional input as to the reality of the model and sorting out the significant variables. Pretty much like prejudicial judgement: simply put, if the data don’t fit the model, it’s the MODEL, not the data. In reservoir modeling it is fluid flow, pressures, injection and OIL PRODUCTION RATES and such that are the important data, things that are physically measured. I’ll bet that the Bayesian approach will work a lot better in a model where you cannot verify the system performance…;-) Just my opinion…
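
For readers unfamiliar with this kind of workflow, here is a minimal sketch of the generic idea: filling in a property between sparse control points using a Gaussian-process (kriging-style) posterior. It is an invented toy example, not Roger’s actual reservoir software, and all values are made up.

```python
# Illustrative sketch: fill in a 1-D "grid" of porosity values from a few
# wellbore control points using a Gaussian-process posterior mean/variance.
import numpy as np

def rbf_kernel(a, b, length=2.0, sigma=0.05):
    """Squared-exponential covariance between point sets a and b."""
    d = a[:, None] - b[None, :]
    return (sigma ** 2) * np.exp(-0.5 * (d / length) ** 2)

# Control points: distance along a section (km) and measured porosity.
x_wells = np.array([0.0, 3.0, 7.0, 10.0])
porosity = np.array([0.18, 0.22, 0.15, 0.20])

# Grid locations where no measurements exist.
x_grid = np.linspace(0.0, 10.0, 21)

noise = 1e-4  # assumed measurement-error variance at the wells
K = rbf_kernel(x_wells, x_wells) + noise * np.eye(len(x_wells))
K_star = rbf_kernel(x_grid, x_wells)

# Posterior mean and variance of porosity on the grid.
alpha = np.linalg.solve(K, porosity - porosity.mean())
mean = porosity.mean() + K_star @ alpha
var = np.diag(rbf_kernel(x_grid, x_grid) - K_star @ np.linalg.solve(K, K_star.T))

for x, m, v in zip(x_grid, mean, var):
    print(f"x={x:4.1f} km  porosity ~ {m:.3f} +/- {2 * np.sqrt(max(v, 0)):.3f}")
```

The point Roger makes stands either way: between the control points the result is driven by the assumed covariance model, not by data, which is why professional judgement about the realism of the model remains the most important input.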

Reply to  Roger D PGeol
March 23, 2014 11:24 am

Roger D PGeol:
You’re describing Bayesian parameter estimation. Its Achilles’ heel is the multiplicity of uninformative prior probability density functions (PDFs), each yielding a different posterior probability density function. Which of these posterior PDFs does one use in making a decision? The choice is arbitrary.
To apply this lesson to global warming climatology, the choice of posterior PDF over the equilibrium climate sensitivity is arbitrary. Some posterior PDFs yield scary conclusions. Others yield non-scary conclusions. In an expression of the self-interest of the professional climatologist rather than logical deduction, he or she routinely presents the maker of governmental policy with one of the scarier ones, without pointing out that non-scary ones are also possibilities.
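
A toy numerical illustration of that point (my own sketch with an invented likelihood, not any published climate-sensitivity analysis): the same likelihood combined with two different “uninformative” priors, uniform in the sensitivity S versus uniform in the feedback 1/S, gives different posteriors and a different probability of the scary outcome.

```python
# Same likelihood for equilibrium climate sensitivity S, two different
# "uninformative" priors, two different posterior tail probabilities.
import numpy as np
from scipy.stats import norm

S = np.linspace(0.5, 10.0, 2000)               # sensitivity grid (deg C per CO2 doubling)
dS = S[1] - S[0]
likelihood = norm.pdf(S, loc=3.0, scale=1.5)   # invented observational constraint

priors = {
    "uniform in S":   np.ones_like(S),   # flat in sensitivity
    "uniform in 1/S": 1.0 / S**2,        # flat in feedback lambda = 1/S
}

for name, prior in priors.items():
    post = prior * likelihood
    post /= post.sum() * dS               # normalize to a density on the grid
    p_tail = post[S > 4.5].sum() * dS     # posterior probability that S exceeds 4.5
    print(f"{name:15s}  P(S > 4.5) = {p_tail:.2f}")
```

The data and likelihood are identical in both rows; only the choice of “uninformative” prior changes, yet the tail probability differs, which is the arbitrariness Oldberg describes.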

RACookPE1978
Editor
March 23, 2014 10:00 am

Jim Bo says:
March 23, 2014 at 8:17 am
I modeled this.
On March 31 we’re going to see a media assault on truth (scientific and otherwise) the likes of which mankind has never before experienced.

Sorry, I don’t follow your logic here. April Fool’s Day would be a better release day for their attacks, would it not? 8<)

March 23, 2014 10:01 am

“New Statistical Models Could Lead to …”
No skill, no innate knowledge (inherent knowledge of the physics of the interrelationship of factors, coefficients, etc.) of how those systems function. No ‘real’ value; little better than Monte Carlo analysis (a statistical spread of possible values about a central locus) …
.

EternalOptimist
March 23, 2014 10:01 am

I don’t understand why Sherlock Mosher wishes to get my height by looking at the wear and tear on my clothing. Why not get a tape measure and measure?

Chip Javert
March 23, 2014 10:08 am

Terry Oldberg says:
March 23, 2014 at 9:55 am
…statistically validated models have already been developed that predict average air temperatures over the western states of the United States over a forecasting horizon of 6 months. This has been made possible by the existence of 328 statistically independent observed events extending back to the year 1850…
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Assuming what you’re saying is the model ACCURATELY & CONSISTENTLY predicts average air temp, please provide a cite.
I don’t begin to have the mathematical chops to validate the claim, but Willis or Bob sure do… (hint, hint).

Reply to  Chip Javert
March 23, 2014 11:54 am

Chip Javert:
There is a bibliography at http://www.entropylimited.com/pubs.htm .

Chip Javert
March 23, 2014 10:17 am

Terry Oldberg says:
March 23, 2014 at 9:55 am
…statistically validated models have already been developed that predict average air temperatures over the western states of the United States over a forecasting horizon of 6 months. This has been made possible by the existence of 328 statistically independent observed events extending back to the year 1850…
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Just a thought:
“328 statistically independent observed events extending back to the year 1850…” appears to be EXACTLY 2 per year: 328/(2014-1850) = 328/164 = 2.00000000 (yeah, I know I might be overdoing it on the number of decimal places, but I’m making a point).
Gotta be some good model to go from that granularity to consistently and accurately forecasting a future 6 months of a stochastic system.

Reply to  Chip Javert
March 23, 2014 12:13 pm

Chip Javert:
These results were produced by regarding the model as the algorithm of the optimal decoder of a message coming from the future. This message consisted of the sequence of outcomes of statistically independent events.
Unlike the optimal decoder of communications engineering, this one had to work without the agency of an error-correcting code. A consequence was for the outcomes of events to be known only to within the probabilities of their occurrences. The decoder produced information about the outcomes, but not perfect information.
Under current circumstances, it is impossible to construct the model of a decoder that produces any information at all about an event of 30 years’ duration, for the number of 30-year events is inadequate to the task.

Box of Rocks
March 23, 2014 10:22 am

A Random Walk Down Wall Street…

March 23, 2014 10:27 am

re: Martin 457 says March 22, 2014 at 4:32 pm
I hope for the best with this. Having lived in ‘Tornado Alley’ my whole life, we still have the occasional tornado watch issued 1/2 an hour before the tornadoes start forming.
Perhaps along the leading edge of the dry line (running southwest through TX, through OK and NNE into KS); sometimes T-storm initiation along the dryline ‘holds off’ forming any cumulus due to a strong ‘cap,’ and it therefore makes more sense to go partly with ‘observations,’ looking for initiation along that dryline before aligning a projected ‘schedule’ (Watches) for the forthcoming events …
.

Jim Bo
March 23, 2014 10:30 am

RACookPE1978 says: March 23, 2014 at 10:00 am

April Fool’s Day would be a better release day for their attacks, would it not? 8<)

I wish I could share your amusement. One week to go before release and the deluge has already commenced.