New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems

COLUMBIA, Mo. – The world’s oceans cover more than 72 percent of the earth’s surface, play a major part in the carbon cycle, and contribute to variability in global climate and weather patterns. However, the accuracy with which the ocean’s condition can be predicted is limited by current methods. Now, researchers at the University of Missouri have applied complex statistical models to increase the accuracy of ocean forecasting, which can influence the ways in which forecasters predict long-range events such as El Niño, as well as the lower levels of the ocean food chain—one of the world’s largest ecosystems.

“The ocean really is the most important part of the world’s environmental system because of its potential to store carbon and heat, but also because of its ability to influence major atmospheric weather events such as droughts, hurricanes and tornados,” said Chris Wikle, professor of statistics in the MU College of Arts and Science. “At the same time, it is essential in producing a food chain that is a critical part of the world’s fisheries.”

The vastness of the world’s oceans makes predicting their changes a daunting task for oceanographers and climate scientists. Scientists must use direct observations from a limited network of ocean buoys and ships, combined with satellite images of varying quality, to create physical and biological models of the ocean. Wikle and Ralph Milliff, a senior research associate at the University of Colorado, adopted a statistical “Bayesian hierarchical model” that allows them to combine various sources of information as well as previous scientific knowledge. Their method helped improve the prediction of sea surface temperature extremes and wind fields over the ocean, which impact important features such as the frequency of tornadoes in tornado alley and the distribution of plankton in coastal regions—a critical first stage of the ocean food chain.
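The core idea of combining observation sources of different quality can be shown with a toy calculation. The sketch below is not the researchers' actual model (a Bayesian hierarchical model adds layers linking data to a physical process); it only illustrates, with invented numbers, how a Gaussian conjugate update weights each source by its precision:

```python
# Toy precision-weighted data fusion (illustration only; NOT the study's
# actual Bayesian hierarchical model). With Gaussian observations,
# precisions (1/variance) add and means are precision-weighted.
def combine(prior_mean, prior_var, obs):
    """Fuse a Gaussian prior with (value, variance) observation pairs."""
    precision = 1.0 / prior_var
    weighted = prior_mean / prior_var
    for value, var in obs:
        precision += 1.0 / var
        weighted += value / var
    post_var = 1.0 / precision
    return weighted * post_var, post_var

# Invented numbers: a climatological prior for sea surface temperature,
# an accurate buoy reading, and a noisier satellite retrieval.
mean, var = combine(prior_mean=20.0, prior_var=4.0,
                    obs=[(21.5, 0.25), (22.0, 1.0)])
print(round(mean, 2), round(var, 3))  # posterior hugs the accurate buoy
```

A full hierarchical model would add a process layer (ocean physics) and a parameter layer beneath the data, but the precision-weighting intuition carries through.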

“Nate Silver of The New York Times combined various sources of information to understand and better predict the uncertainty associated with elections,” Wikle said. “So much like that, we developed more sophisticated statistical methods to combine various sources of data—satellite images, data from ocean buoys and ships, and scientific experience—to better understand the atmosphere over the ocean and the ocean itself. This led to models that help to better predict the state of the Mediterranean Sea, and the long-lead-time prediction of El Niño and La Niña. Missouri, like most of the world, is affected by El Niño and La Niña (through droughts, floods and tornadoes), and the lowest levels of the food chain affect us all through their effect on marine fisheries.”

El Niño is a band of warm ocean water that periodically develops off the western coast of South America and can cause climatic changes across the Pacific Ocean and the U.S. La Niña is its cold-phase counterpart, which also affects atmospheric conditions throughout the country. Wikle and his fellow researchers feel that, through better statistical methods and models currently in development, a greater understanding of these phenomena and their associated impacts will help forecasters better predict potentially catastrophic events, which will likely be increasingly important as our climate changes.

Wikle’s study, “Uncertainty management in coupled physical-biological lower trophic level ocean ecosystem models,” was funded in part by the National Science Foundation and was published in the journals Oceanography and Statistical Science.

–30–
125 Comments
March 22, 2014 1:11 pm

Wikle said. “So much like that, we developed more sophisticated statistical methods to combine various sources of data—satellite images, data from ocean buoys and ships, and scientific experience—to better understand the atmosphere over the ocean and the ocean itself. This led to models…
“Sources of data” = Satellite images, data from ocean buoys and ships, and scientific experience?!
Huh?

David, UK
March 22, 2014 1:12 pm

More self-serving modelled bullshit.

1sky1
March 22, 2014 1:17 pm

If only an intrinsically complex physical problem could be unravelled by mere statistics.

Doug Huffman
March 22, 2014 1:18 pm

Normative and prescriptive statements, characterized by would, should & could, have no inherent or essential truth value.

Stephen Richards
March 22, 2014 1:20 pm

New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems
NO THEY CANNOT !!!!!!!!!!!

Curious George
March 22, 2014 1:22 pm

The “scientific experience” is evaluated only statistically.

March 22, 2014 1:29 pm

In the first paragraph, couldn’t help smiling at the phrase “limited by CURRENT methods”.

March 22, 2014 1:31 pm

“Stephen Richards says:
March 22, 2014 at 1:20 pm
New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems
NO THEY CANNOT !!!!!!!!!!!
##############
More settled science from skeptics.

Gamecock
March 22, 2014 1:36 pm

“Their method helped improve the prediction of sea surface temperature extremes and wind fields over the ocean, which impact important features such as the frequency of tornadoes in tornado alley and the distribution of plankton in coastal regions—a critical first stage of the ocean food chain.”
Academic flourish. Their study is not critical to the food chain.

March 22, 2014 1:41 pm

“1sky1 says:
March 22, 2014 at 1:17 pm
If only an intrinsically complex physical problem could be unravelled by mere statistics.”
#################################
Note they didn’t say unravelled.
note they said
“Their method helped improve the prediction of sea surface temperature extremes and wind fields over the ocean,”
Note the difference between “unravelling” and improving a prediction.
I predict that you are 6 feet tall, plus or minus 3 feet.
That’s a prediction, but pretty horrible. It has limited use.
Now I find a pair of your shoes and I note they are size 9. Looking at other data
and the relationship between shoe size and height, I improve my prediction.
Maybe I say You are 6 feet tall, plus or minus 18 inches.
Then I find a pair of your pants and I note the inseam is 34 inches and the bottom
is frayed.
I improve my prediction.. you are 6 feet tall plus or minus 6 inches.
At no point have I claimed to unravel the mystery of your height, or the history of your height.
I’ve simply used more information to improve my prediction
Here is how it works if I want to figure out where you live from your tweets
http://dailycaller.com/2014/03/21/researchers-develop-formula-that-reveals-home-location-based-on-tweets/
And I can figure out the complex problem of what kind of car you like or music you like by looking at certain data.
SVM
http://link.springer.com/chapter/10.1007%2F11531371_50
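The height analogy above maps directly onto sequential Gaussian updating, where each new clue shrinks the posterior spread. A minimal sketch, with every number invented to mirror the shrinking plus-or-minus bands in the comment:

```python
# Sequential Gaussian updating, mirroring the comment's height example.
# All numbers are invented; the sigmas are loose stand-ins for the
# plus-or-minus bands in the comment. Units are inches.
def update(mean, var, obs_mean, obs_var):
    # Conjugate Gaussian update: precisions add, means are precision-weighted.
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mean = post_var * (mean / var + obs_mean / obs_var)
    return post_mean, post_var

mean, var = 72.0, 324.0                  # prior: ~6 ft, sigma 18 in
for obs_mean, obs_var in [(71.0, 36.0),  # shoe size hints ~5'11", sigma 6 in
                          (72.5, 9.0)]:  # inseam hints ~6'0.5", sigma 3 in
    mean, var = update(mean, var, obs_mean, obs_var)
print(round(mean, 1), round(var ** 0.5, 1))  # far tighter than sigma 18
```

No claim of "unravelling" anywhere: each clue only narrows the interval around the prediction.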

David, UK
March 22, 2014 1:43 pm

Chris Wikle: “We developed more sophisticated statistical methods to combine various sources of data—satellite images, data from ocean buoys and ships, and scientific experience—to better fool ourselves that we know what we’re talking about.”

March 22, 2014 1:45 pm

Steven Mosher says:
“More settled science from skeptics.”
1. Why are you painting all skeptics with the same brush? Instead, you should reply to the poster you disagree with. And…
2. Skeptics are the only honest kind of scientists. Just FYI.

David, UK
March 22, 2014 1:45 pm

“Wikle and his fellow researchers feel that, through better statistical methods and models currently in development, a greater understanding of these phenomena and their associated impacts will help forecasters better predict potentially catastrophic events, which will likely be increasingly important as our climate changes.”
The abbreviated version:
“Wikle and his fellow researchers are delusional.”

Gary Pearse
March 22, 2014 1:47 pm

“applied complex statistical models to increase the accuracy of ocean forecasting that can influence the ways in which forecasters predict long-range events such as El Nińo and the lower levels of the ocean food chain”
The food chain is not an event, but I can see where this is going: they’ll find that the food chain is in jeopardy once the model is in place. Food, water, shelter, clothing. Hmm, which should we choose for a really scary disaster?
Also, statistics can’t discover anything. If you don’t know enough, the statistical data you gather may not be appropriate for the task. For example, if you are a blind man measuring the circumference of the legs of elephants without knowing something about what they are, or you have a linearly ignorant assumption of what they are, your statistics will tell you with high “probability” that these trees are all roughly the same circumference and that they invariably occur in clusters of four.

David, UK
March 22, 2014 1:50 pm

Steven Mosher says:
March 22, 2014 at 1:31 pm
More settled science from skeptics.

Brilliant, as always, Mosh.
/sarc

Man Bearpig
March 22, 2014 1:58 pm

Steve: All predictions are based on statistics.

john
March 22, 2014 1:59 pm

Steven Mosher says:
March 22, 2014 at 1:31 pm
“More settled science from skeptics.”
sorry loser but that is one of your alarmist statements. Realists don’t talk
that way.

March 22, 2014 2:02 pm

Anytime I hear the word Bayesian I assume some more statistical hocus pocus has arrived.
This article could hardly be more nebulous. I’ve heard pitches for quack medicines that were more specific. Assume “more sophisticated” to mean “overly complex and fragile.”

Otter (ClimateOtter on Twitter)
March 22, 2014 2:03 pm

mr. mosher – please do not become the nicky stokes of WUWT.

Mac the Knife
March 22, 2014 2:11 pm

As the climate models from 30 years ago to the present generation have failed to predict the ensuing decades of natural climate variability, it is axiomatic that ‘new’ models may be able to do better. When your batting average is ZERO, improvement should be achievable.
Divination by examining steaming chicken guts may do ‘better’.
Voodoo ‘casting dem bones’ may do ‘better’.
Noting increases in wooly worms and acorn nut yields may do better.
A tarot card reading may do better.
The Farmers Almanac has done better.

Latitude
March 22, 2014 2:11 pm

New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems…..or not
key word ‘statistical’

DontGetOutMuch
March 22, 2014 2:12 pm

Michael Mann could fly to the moon with rocket smoke coming from his butt.

March 22, 2014 2:21 pm

All that statistics stuff introduces “uncertainty” and we are absolutely morally certain that man is guilty of poisoning the atmosphere with CO2, hence using statistics would be a step backwards.
Reading thermometers would also be a step backwards.

Luke Warmist
March 22, 2014 2:22 pm

“The ocean really is the most important part of the world’s environmental system because of its potential to store carbon and heat, but also because of its ability to influence major atmospheric weather events such as droughts, hurricanes and tornados,” said Chris Wikle, professor of statistics in the MU College of Arts and Science. 
“…which impact important features such as the frequency of tornadoes in tornado alley…”
 To begin forecasting tornado frequency alone will require computers several orders of magnitude bigger and faster than currently exist. (did I mention accurately?)
…not holding my breath for this one.

David L. Hagen
March 22, 2014 2:33 pm

Demetris Koutsoyiannis and his team recognize that today’s deterministic Global Climate Models underestimate uncertainty and are incapable of accurate predictions. Koutsoyiannis et al. are leading the way in developing stochastic models, especially in hydrology.
The full “tails” of probability in natural distributions, with some anthropogenic contributions, are the most important issues for engineering calculations when accommodating nature.

Theo Goodwin
March 22, 2014 2:35 pm

“Wikle and Ralph Milliff, a senior research associate at the University of Colorado, adopted a statistical “Bayesian hierarchical model” that allows them to combine various sources of information as well as previous scientific knowledge.”
Use of Bayesian reasoning makes sense only in a highly ramified context. For example, you can use it at Vegas to improve your betting. At Vegas, all the unknowns are totally known, unless the game is “crooked.” As regards the oceans, the number of reasonably well-confirmed physical hypotheses which describe phenomena such as ENSO is pretty much zero. Our knowledge of the oceans is a context whose place on the scale of relative ramification is close to zero. Even the known unknowns are mostly unknown.

Bruce Cobb
March 22, 2014 2:44 pm

“…will help forecasters better predict potentially catastrophic events, which will likely be increasingly important as our climate changes.”
Ah, there it is! I knew there had to be a “climate change” connection in there somewhere. All is right with the world. Government-funded “science” at its finest.

March 22, 2014 2:58 pm

Steven Mosher says:
March 22, 2014 at 1:31 pm
“NO THEY CANNOT !!!!!!!!!!!
############## ”
Mosher is shouting again. The strain of watching Mother Nature in action again controlling the climate is getting to him.
More settled science from Warmistas.

R. Shearer
March 22, 2014 3:01 pm

Scientists from Boulder also report the “warning” that CO2 passes 400 ppm again. http://www.dailycamera.com/news/boulder/ci_25397460/boulder-scientists-report-record-early-high-co2-readings

Dr Burns
March 22, 2014 3:28 pm

When models can forecast with any accuracy whether it will rain in two days time, I may start to have some faith in them.

Dave
March 22, 2014 3:31 pm

“The world’s oceans …………………………contribute to the variability in global climate and weather patterns.” Well, yes they do, but more importantly they contribute to the stability of the global climate. El Nino is nothing more nor less than the release of stored energy into the atmosphere, thus giving that energy access to the upper atmosphere and beyond. It’s an amazingly stable system, especially considering the fact that 97% of scientists think that it isn’t.

Lawrie Ayres
March 22, 2014 3:35 pm

Mac the Knife says:
March 22, 2014 at 2:11 pm
As the climate models from 30 years ago to the present generation have failed to predict the ensuing decades of natural climate variability, it is axiomatic that ‘new’ models may be able to do better. When your batting average is ZERO, improvement should be achievable.
Divination by examining steaming chicken guts may do ‘better’.
Voodoo ‘casting dem bones’ may do ‘better’.
Noting increases in wooly worms and acorn nut yields may do better.
A tarot card reading may do better.
The Farmers Almanac has done better.
Jennifer Marohasy recently requested that our Minister for the Environment have the Bureau of Meteorology explain why they have such a poor record of predictions. They use models, whereas our more accurate long-range forecasters use historical data and observation. The Farmers Almanac is a similar beast and proves that history is a better predictor than computers.
As Dr Phil says ” The best predictor of future actions is past performance”.

pat
March 22, 2014 3:57 pm

off to Yokohama!
23 March: Guardian: Robin McKie: Global warming to hit Asia hardest, warns new report on climate change
Flooding, famine and rising sea levels will put hundreds of millions at risk in one of the world’s most vulnerable regions
The report – Climate Change 2014: Impacts, Adaptation and Vulnerability – makes it clear that for the first half of this century countries such as the UK will avoid the worst impacts of climate change, triggered by rising carbon dioxide levels in the atmosphere. By contrast, people living in developing countries in low latitudes, particularly those along the coast of Asia, will suffer the most, especially those living in crowded cities.
A final draft of the report, seen by the Observer, will be debated by a panel of scientists set up by the Intergovernmental Panel on Climate Change (IPCC) this week at a meeting in Yokohama, Japan, and will form a key part of the IPCC’s fifth assessment report on global warming, whose other sections will be published later this year…
The report makes grim reading…
http://www.theguardian.com/environment/2014/mar/22/global-warming-hit-asia-hardest
23 March: Guardian: Nick Cohen: The climate change deniers have won
All of which is a long way of saying that the global warming deniers have won. And please, can I have no emails from bed-wetting kidults blubbing that you can’t call us “global warming deniers ” because “denier” makes us sound like “Holocaust deniers”, and that means you are comparing us to Nazis? The evidence for man-made global warming is as final as the evidence of Auschwitz. No other word will do…
… The movement was in the grip of “cognitive dissonance”, a condition first defined by Leon Festinger and his colleagues in the 1950s . They examined a cult that had attached itself to a Chicago housewife called Dorothy Martin. She convinced her followers to resign from their jobs and sell their possessions because a great flood was to engulf the earth on 21 December 1954. They would be the only survivors. Aliens in a flying saucer would swoop down and save the chosen few.
When 21 December came and went, and the Earth carried on as before, the group did not despair. Martin announced that the aliens had sent her a message saying that they had decided at the last minute not to flood the planet after all. Her followers believed her. They had given up so much for their faith that they would believe anything rather than admit their sacrifices had been pointless.
Climate change deniers are as committed….
I could write about the environment every week. No editor would stop me. But the task feels as hopeless as arguing against growing old. Whatever you do or say, it is going to happen. How can you persuade countries to accept huge reductions in their living standards to limit (not stop) the rise in temperatures? How can you persuade the human race to put the future ahead of the present?
The American historians of science Naomi Oreskes and Erik M. Conway BLAH BLAH BLAH…
http://www.theguardian.com/commentisfree/2014/mar/22/climate-change-deniers-have-won-global-warming

Luke Warmist
March 22, 2014 3:59 pm

Dr Burns says:
March 22, 2014 at 3:28 pm
“When models can forecast with any accuracy whether it will rain in two days time, I may start to have some faith in them.”
 Funny you mention that. On those rare occasions when they forecast rain for where I live (Phoenix), the actual rain arrives 12-24 hours later than they forecast. Apparently the NWS models weight the winds to the high end, and they haven’t adjusted for it so far.

Latitude
March 22, 2014 4:01 pm

Dave says:
March 22, 2014 at 3:31 pm
“The world’s oceans …………………………contribute to the variability in global climate and weather patterns.” Well, yes they do, but more importantly they contribute to the stability of the global climate
=======
Of course, the land areas change faster

Peter Sable
March 22, 2014 4:15 pm

Statistics are the wrong tool for time-series data or spatial data, because statisticians inevitably rely on “averages”, which introduces errors into the analysis due to aliasing. So I don’t trust any statistician in this field at all, since it is all time-series or spatial data to start with…
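The aliasing effect this comment worries about is easy to demonstrate numerically. A hypothetical sketch (invented signal, nothing from the study): decimating a sampled series without an anti-alias filter folds a fast oscillation onto a slow, spurious one.

```python
import numpy as np

# A 0.45 cycles/sample sine, kept at every 2nd sample with no anti-alias
# filter, shows up near 0.10 cycles/sample in the decimated record: a
# spurious slow oscillation that was never in the underlying signal.
n = np.arange(1024)
x = np.sin(2 * np.pi * 0.45 * n)

decimated = x[::2]                      # naive decimation, no filtering
spectrum = np.abs(np.fft.rfft(decimated))
freqs = np.fft.rfftfreq(decimated.size, d=1.0)
peak = freqs[spectrum.argmax()]
print(peak)  # close to 0.10, not 0.45: the energy has aliased
```

Whether averaging in a given analysis actually aliases depends on the filtering applied before subsampling; the sketch only shows the failure mode when none is applied.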

March 22, 2014 4:21 pm

Reblogged this on sainsfilteknologi and commented:
New Statistical Model

Mark Luhman
March 22, 2014 4:22 pm

Steve, chaos theory dictates that models are and always will be incapable of telling us what the weather or climate will be in the future. Do you disagree with that theory, and why? The financial firms gave up on models to predict the stock market twenty years ago for just that reason. Presently you cannot model the thermal dynamics in a pot of boiling water; how on earth do you think you can model climate? I am personally tired of spending billions of tax dollars on the fool’s errand called climate modeling; if you climate people can find private funding for your fool’s errand, fine, just quit bilking the taxpayer.
Also, do not think that I am against weather models; I am not. But at least I understand their usefulness is only about a week out and their accuracy is only within hours, and they still cannot tell me what amount of rain I may or may not receive with any degree of accuracy; yet they are the best thing going. I also understand that chaos theory will limit those models to never being accurate further out than a few days or hours, even if the computing speed and ability to calculate were infinite. So as far as long-range forecasts, forget it; we can look at trends and make some predictions, but dice might do just as well. As to climate, about all we can say is it will not be what it is today. Will a model help in that? I am certain it won’t.
Steve, your belief in climate models is like my wish that humans may someday be able to travel faster than the speed of light, but I fear your wish and mine will come up against the hard reality of physics, and neither is achievable.
Yet, Steve, what the hell do I know about it? I have only been repairing, hooking up and operating computers and networks for the last thirty-five years, and have seen a lot of promises, but most of it has been a bust. When I first started, AI was going to rule. Funny, we still have people writing all the code, still pretty much line by line.

pat
March 22, 2014 4:26 pm

22 March: CNBC: Javier E. David: Smart homes aim for consumers’ wallets as energy costs soar
A brutal winter has left many feeling the pain of soaring utility bills. Yet a new University of Michigan study suggests households have been slow to adopt cost-saving measures, even as most fret about paying more for home energy than gasoline…
The survey, from the university’s Energy Institute, showed that respondents expected their utility bills would rise by 30 percent in the next five years, a far steeper rate than the 15 percent jump they expect to see for gas. But the increase would have to hit about 50 percent for them to make major changes, the survey found…
***People with lower incomes were the most likely to adjust their habits, he added.
The findings underscore why there’s growing buzz surrounding smart homes, the space where top shelf technology converges with energy efficiency. The industry is considered a major growth area in the wake of Google’s $3.2 billion deal for Nest Labs…
“It’s part of a greater awareness of climate change and using less energy for that reason,” said Roy Johnson, CEO of EcoFactor, a smart technology company that provides software to utilities and companies such as Comcast – the parent company of CNBC…
Honeywell, which manufactures thermostats that automatically adjust temperatures, and Siemens, which manufactures a “smart grid” that utilizes solar panels to generate electricity, and feeding unused energy back into the utility grid, are just two of the names at the forefront of the smart home push…
http://www.cnbc.com/id/101514623
21 March: Opinion: Time to rein in the climate change carbon baggers
Why are World Economic Forum, IMF and World Bank being so obstinate in maintaining an increasingly discredited position?
By Michelle Stirling-Anosh, special to the Vancouver Sun
(Michelle Stirling-Anosh is the communications manager of Friends of Science.)
Even NASA and the IPCC have acknowledged there has been a 16-plus year natural pause in global warming. Climate expert Roger Pielke presented evidence of no trend in extreme weather events to the U.S. senate committee on environment and public works in July…
But that hasn’t stopped organizations such as the World Bank and International Monetary Fund from continuing to raise fears of catastrophic global warming.
According to its website, the World Bank is heavily invested in low-carbon projects for the Third World, while the IMF is touting the benefits of carbon taxes even as the carbon markets in Europe have collapsed completely, to the point that Germany has gone back to building over 20 coal plants because the carbon risk of increased taxes to investors is now considered negligible…
All this makes one wonder why the World Economic Forum, IMF and World Bank are being so obstinate in maintaining an increasingly discredited position.
So many roads lead to Chicago, climate change, carbon and Lagarde’s tenure at Baker and McKenzie, a Chicago law firm recognized “as one of the first global law firms to establish a climate-change practice.” U.S. President Barack Obama spent over six years as a board member of the Joyce Foundation that financed the founding of the Chicago Climate Exchange, which eventually collapsed. The Joyce Foundation also funds TIDES and other ENGOs that loudly proclaim climate terror despite no scientific evidence…
In a power-point presentation from 2007, Baker McKenzie gave us an example: a Chinese plant sells its emissions credits; a private fund and the World Bank buy them, then resell them through “the IM process” and the World Bank, raising “$1.2 billion in 23 minutes.”
By contrast, the Financial Conduct Authority of the U.K. reported in September that not a single ordinary investor has made any money in carbon credits. Ordinary investors are not able to sell or trade carbon credits once acquired…
It’s time we all stopped being suckers for climate scare.
http://www.vancouversun.com/business/Opinion+Time+rein+climate+change+carbon+baggers/9646666/story.html

Martin 457
March 22, 2014 4:32 pm

I hope for the best with this. Having lived in ‘Tornado Alley’ my whole life, we still have the occasional tornado watch issued 1/2 an hour before the tornadoes start forming.
Most of the time we get a few hours but, sometimes, they couldn’t predict darkness when the sun goes down.

tom
March 22, 2014 4:36 pm

El Niño and la Niña, please, instead of the El Nińo and La Nińa in the blogpost. The Spanish language deserves some respect, surely?

March 22, 2014 4:47 pm

It sounds as though they used Bayesian parameter estimation. It suffers from the fact that there are infinitely many candidate “uninformative” prior probability density functions (PDFs). Each of these priors generates a different posterior PDF. A consequence is that the law of non-contradiction is violated; non-contradiction is one of the three classical laws of thought.
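The prior-sensitivity point is easy to reproduce with a toy example: the same data under two standard “uninformative” Beta priors yields two different posterior means (all numbers illustrative, unrelated to the study):

```python
# Same data, two common "uninformative" priors, two different answers --
# the ambiguity the comment is pointing at. Illustrative numbers only.
def beta_posterior_mean(a, b, heads, tails):
    # Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails) posterior
    return (a + heads) / (a + b + heads + tails)

heads, tails = 7, 3
uniform = beta_posterior_mean(1.0, 1.0, heads, tails)   # Bayes-Laplace prior
jeffreys = beta_posterior_mean(0.5, 0.5, heads, tails)  # Jeffreys prior
print(round(uniform, 4), round(jeffreys, 4))  # 0.6667 vs 0.6818
```

With plentiful data the likelihood swamps the prior and the two answers converge; the disagreement matters most exactly where data are scarce.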

March 22, 2014 4:51 pm

Coulda woulda shoulda…..
My apologies to Willie for this!
Mama,
Don’t let your babies grow up to be climate schemers,
They’ll drive you poor, going to school,
Learning BS(MS &PHDs) and playing with their tools.
They’ll cover you in Sh!t, and not worry one bit.
After they’re exposed, they’ll show up at home,
Hiding in the basement, hoping to be left all alone….
Sorry, was in a hurry…:)~ (I am not a poet but I know it)

pat
March 22, 2014 4:58 pm

LOL:
22 March:The Ecologist: Sam Fankhauser: Financial markets should get serious on climate policy
The number of climate change laws on the statute books of the world’s leading economies grew from fewer than 40 in 1997 to almost 500 at the end of 2013.
Most leading countries now have legal provisions on renewable energy, energy efficiency, carbon pricing, land use change, transport emissions, adaptation to climate risks and low-carbon research and development.
These efforts do not yet add up to a credible global response that will limit the rise in global temperatures to less than two degrees Celsius – the objective of international climate negotiations…
In other words, unless we find a cheap way to capture and store carbon, two-thirds of the fossil fuel reserves of coal, oil and gas majors will have to remain under ground…
Climate laws are driven by … other climate laws
A tentative finding from the analysis of the 500 climate laws so far is that one of the most powerful drivers of climate legislation is the number of climate laws passed elsewhere.
There appears to be a strong element of peer pressure and intergovernmental knowledge exchange. If this is confirmed, it would point towards a self-reinforcing cycle.
The more climate change laws are passed – and the current pace is one new law per country every 18-20 months – the more ready policymakers become to take further action. The financial sector would do well to take note.
http://www.theecologist.org/blogs_and_comments/commentators/2326237/financial_markets_should_get_serious_on_climate_policy.html
(Sam Fankhauser is Co-Director of the Grantham Research Institute on Climate Change at the London School of Economics. He is a Director at Vivid Economics and a member of the UK Committee on Climate Change. His research is funded the Grantham Foundation for the Protection of the Environment and the UK Economic and Social Research Council, ESRC).

Gixxerboy
March 22, 2014 4:59 pm

Nicholas Tesdorf
That was not Mosher shouting. It was Stephen Richards. Mosh was parodying it with his line about ‘settled science from skeptics’.

tadchem
March 22, 2014 5:04 pm

“because of its ability to influence major atmospheric weather events such as droughts, hurricanes and tornados…”
Captain Obvious says (1) “droughts” are not weather events, (2) the ocean that spawns hurricanes is not distinct from the ocean that does not spawn hurricanes, (3) most tornadoes are born, live, and die hundreds if not thousands of miles downwind of any ocean, and (4) tornadoes are decidedly LOCAL events, not even ‘regional’ and certainly not significant in the ‘global’ climate.

Editor
March 22, 2014 5:26 pm

“Wikle and his fellow researchers feel that, through better statistical methods and models currently in development…”
Why are they wasting everyone’s time with a press release about something in development?

Arno Arrak
March 22, 2014 5:41 pm

Wikle is heavy on statistics but apparently has done no climate research. We don’t need any statistics to analyze current climate problems, especially the origin and meaning of the current 17-year cessation of warming.

Leon0112
March 22, 2014 5:42 pm

My translation is “Bob Tisdale is right. We need to do a better job of modelling oceans and other sources of natural variability if we are going to have a prayer of actually predicting something.”

James the Elder
March 22, 2014 6:15 pm

tom says:
March 22, 2014 at 4:36 pm
El Niño and la Niña, please, instead of the El Nińo and La Nińa in the blogpost. The Spanish language deserves some respect, surely?
Neither I nor my computer speaks it. If I were to relocate to a Spanish-speaking locale, I would adjust accordingly. WWIII staring us in the face and you worry about an effing ~.

Chip Javert
March 22, 2014 7:41 pm

Steven Mosher says:
March 22, 2014 at 1:41 pm
Note the difference between “unravelling” and improving a prediction…
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
So I read this hypothetical stuff after I come in from a day of dodging snakes and rats while trimming dense tropical plants in my backyard.
While I sometimes enjoy (i.e., laugh at) Steven’s stuff, he NEVER deals with a fundamental explanation of atmospheric physics, let alone presenting REAL (i.e., Mother Nature, not model) data to demonstrate his point…just troll-speak. Statistics are great when you have a scientific theory (hypothesis, whatever) and real data; however, statistics as a substitute for a scientific theory is simply “cooking the books” or “torturing the data”.
But, this evening I’m pretty tired and I did get a good laugh.

SIGINT EX
March 22, 2014 7:52 pm

Isht der Klimate un zee Vvvveeeerrrrmintsht.
Ha ha. 😀

lee
March 22, 2014 8:00 pm

‘Now, researchers at the University of Missouri have applied complex statistical models to increase the accuracy of ocean forecasting…’
And what was the outcome? When will we get the next el Nino? Is this a falsifiable projection?

March 22, 2014 8:33 pm

After a fun career in the computing and Finance world (notice which I capitalized), I was quite sick of the inelegant clichés codgers and fools liked to spout when discussing technology.
e.g.
“Let’s stick with well tested leading technology and stay away from ‘bleeding edge’ technology.”
“Let’s think outside of the box.”
“I do not want to re-invent the wheel.”
It’s always interesting that every fool and old coot thought that they were amongst the first to spout these inanities and that they hated to have someone point out that their statements contradicted themselves.
I used to throw rejoinders at the speakers, hoping to minimize trouble yet stop the fount from continuing to flow. In the same order:
“If it’s ‘well tested’ by your standards it is no longer leading technology.”
“I’d be happy if people would ‘just think’.”
“I don’t mind when people ‘re-invent’ the wheel. One of these days they’ll invent round wheels.”
Apt readers of the above press release will notice that all three of these rejoinders and their causes are in play.
There is no recognition or admission that the current models are abject failures; just a statement that the new models improve or will improve many things, including:

“…“At the same time, it is essential in producing a food chain that is a critical part of the world’s fisheries…”

Those are some incredibly amazing statistics and here I thought that food chains were inherent in themselves.
There is no better, faster way to build straight to the bleeding edge: don’t think, don’t assess, and absolutely do not peek to see what is wrong or necessary before building ‘new shiny glittering’ models. The previous group went and spent decades morphing ‘weather’ models in pretense of well-designed, comprehensive ‘whole earth’ climate scenarios. The new group sees their 28% false surface emulations and raises them 72% new false sea-surface emulations.
Just what the world needs! A ‘best thing since sliced bread’ glamorous press announcement long before code, formulas or, ||shudder||, models can be tested, certified and found useful.

george e. smith
March 22, 2014 9:12 pm

“””””…..1sky1 says:
March 22, 2014 at 1:17 pm
If only an intrinsically complex physical problem could be unravelled by mere statistics……”””””
Statistics (any sort of statistics, either dumb or “sophisticated”) is performed on discrete sets of already-known numbers with actual numerical values.
The results of such analysis are exact, in that statistical mathematics is a rigorous discipline of mathematics, with no uncertainty in the outcome of correct application of its algorithms.
The numbers can be plotted (on suitable axes) as a scatter plot (of discrete dots).
The results of such analysis tell you nothing about ANY datum not in the given set. You cannot predict the value of ANY possible data point that might by some means be introduced before the first element of the set; nor can you predict the value of ANY possible data point that might by some means be obtained after the last element of the given set. You can’t even predict whether such extrapolations of the set will result in higher, lower or identical values than the end points of the set; the very direction of any change is quite indeterminate.
Moreover, this information vacuum, includes the space between the individual elements of the set. You cannot know what happens between the dots; EXCEPT in the unique case, where the dots represent instantaneous samples of a continuous band limited function, properly sampled in accordance with the Nyquist sampling theorem.
In that case, it is possible (in theory) to completely recover the continuous band limited function, and determine properly interpolated values. And for that matter, correctly obtain an average of the continuous function over that data set range.
So drawing scatter plots, and then joining the dots sequentially, is an exercise in deception.
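The one recoverable case george e. smith allows, proper sampling per the Nyquist theorem, can be sketched with the Whittaker–Shannon interpolation formula. This is an illustrative example added here, not part of the original comment; the tone frequency and sample rate are made up.

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    # Whittaker-Shannon interpolation: recover a band-limited signal
    # at an arbitrary instant t from its discrete samples
    return sum(x * sinc(fs * t - n) for n, x in enumerate(samples))

fs = 8.0          # sample rate, comfortably above Nyquist for a 1 Hz tone
f = 1.0
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(800)]
t = 400.25 / fs   # an off-grid instant in the middle of the record
approx = reconstruct(samples, fs, t)
exact = math.sin(2 * math.pi * f * t)
# approx lands very close to exact: interpolating "between the dots" is
# legitimate here precisely because the sampling condition is satisfied
```

With a truncated record the reconstruction is only approximate near the edges, which is why the instant is taken mid-record; for an improperly sampled (aliased) signal no such recovery is possible at all.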

March 22, 2014 9:28 pm

Think, science lovers. Think. You really believe what you are espousing? Defend it here, then, if it’s easy for you:
http://johnwilsonbach.com/2014/03/22/kidnapped-by-spring/

Mac the Knife
March 22, 2014 9:54 pm

Lawrie Ayres says:
March 22, 2014 at 3:35 pm

Mac the Knife says:
March 22, 2014 at 2:11 pm
…..When your batting average is ZERO, improvement should be achievable…….
The Farmers Almanac has done better.

Jennifer Maharosy recently requested of our Minister for the Environment to have the Bureau of Meteorology explain why they have such a poor record of predictions. They use models whereas our more accurate long range forecasters use historical data and observation. The Farmers Almanac is a similar beast and proves that history is a better predictor than computers.
As Dr Phil says ” The best predictor of future actions is past performance”.

Lawrie,
Thanks for the ‘come back’! I searched for and found a ‘Jennifer Marohasy’ blog that is Australian climate oriented. Is that the person you were referencing?
http://jennifermarohasy.com/
Interesting blog and provides insights into the Australian climate cycles that I’ve had little exposure to! Thank you for that!
Aus is a loooong ways from Seattle WA USA… If I may ask, where in Australia are you located?
Mac

rogerknights
March 22, 2014 10:05 pm

23 March: Guardian: Nick Cohen: The climate change deniers have won
The evidence for man-made global warming is as final as the evidence of Auschwitz. No other word will do…

Strawman

Peter Fraser
March 22, 2014 10:35 pm

I have been unable to find any statistical oceanic biomass data on the effect of the almost total removal of the top of the planktonic food chain, the krill-eating whales. Hundreds of thousands of these animals were slaughtered from the mid-eighteenth to the mid-twentieth century. I am not positing a causal relationship between the right whales’ decline and increasing CO2, but it would be an interesting exercise to do the sums. There are reasonably good records for landed tonnages of oil. Estimates of the amount of krill eaten could be calculated. Perhaps the biomass of krill grazing on the world’s largest carbon sink, phytoplankton, was much smaller in the past because of the whales. This would indicate a large increase in the phytoplankton biomass and increasing CO2 consumption. Just a thought for the anti-whaling lobby.

William Astley
March 22, 2014 10:48 pm

In reply to:
“The world’s oceans cover more than 72 percent of the earth’s surface, impact a major part of the carbon cycle, and contribute to variability in global climate and weather patterns. However, accurately predicting the condition of the ocean is limited by current methods. Now, researchers at the University of Missouri have applied complex statistical models to increase the accuracy of ocean forecasting that can influence the ways in which forecasters predict long-range events such as El Niño and the lower levels of the ocean food chain—one of the world’s largest ecosystems.”
William:
Talk is cheap. Use the ‘complex’ statistical model to make a prediction. Statistical models fail when there is a step change in a primary (the primary) forcing mechanism that has not occurred in the past modeled statistical period.
I notice there has been a series of ‘predictions’ concerning an El Niño this year. Good luck with that prediction.
Why is there suddenly a significant increase in sea ice at both poles?
Why the increase in cold surface ocean temperature anomalies?
William: The planet is cooling due to the sudden decrease in solar magnetic cycle activity. It will be interesting to watch the creative first attempts to hand wave away global cooling.
http://www.ospo.noaa.gov/data/sst/anomaly/2014/anomnight.3.20.2014.gif
http://nsidc.org/data/seaice_index/images/daily_images/S_stddev_timeseries.png
http://arctic.atmos.uiuc.edu/cryosphere/iphone/images/iphone.anomaly.global.png

Joel O'Bryan
March 22, 2014 11:18 pm

C’mon folks, I’m a skeptic too, but employing Bayesian statistical methods is a step forward for every branch of science. Electrical-engineering signal processing and information theory have been using and refining these methods for 30 years. They have enabled all the ADSL, 4G-LTE data, and spread-spectrum CDMA comms we take for granted now in 2014 for broadband data streaming.
IMHO, Bayesian stats can only help the failed climate models find more reasons why they are failing. The modelers are good people, and they want their models to actually replicate the real climate. So let’s be smart here, but still skeptical too.

Reply to  Joel O'Bryan
March 23, 2014 9:16 am

Joel O’Bryan:
A distinction should be made between Bayes’s theorem and Bayesian methods. As Bayes’s theorem is logically correct, conformity to it is required, else conclusions may not logically be drawn from the associated argument. Though Bayes’s theorem is correct, there are methods drawn from it that are illogical through their violation of the law of non-contradiction in the selection of the prior probability density function. The violation of non-contradiction stems from the multiplicity of uninformative priors.
Among the applications violating non-contradiction in this way is the procedure (called “Bayesian parameter estimation”) by which climatologists extract a posterior probability density function for the equilibrium climate sensitivity from a global temperature time series. My guess is that the University of Missouri work employs this procedure.
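The multiplicity Terry Oldberg describes is easy to exhibit in the simplest conjugate setting. The sketch below (illustrative numbers, not from the comment) compares posterior means for a Bernoulli success probability under two standard “uninformative” priors, the flat Beta(1, 1) and the Jeffreys Beta(1/2, 1/2): the same data yield different posteriors.

```python
def posterior_mean(k, n, a, b):
    # A Beta(a, b) prior plus k successes in n Bernoulli trials gives a
    # Beta(a + k, b + n - k) posterior, whose mean is (a + k)/(a + b + n)
    return (a + k) / (a + b + n)

k, n = 7, 10                                  # hypothetical data
flat = posterior_mean(k, n, 1.0, 1.0)         # flat prior: mean 8/12
jeffreys = posterior_mean(k, n, 0.5, 0.5)     # Jeffreys prior: mean 7.5/11
# Two "uninformative" priors, same data, two different posterior means
```

Whether this multiplicity amounts to a logical contradiction, as the commenter argues, or merely a modeling choice is of course the contested point; the arithmetic itself is not.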

Alan Robertson
March 22, 2014 11:54 pm

nicholas tesdorf says:
March 22, 2014 at 2:58 pm
Steven Mosher says:
March 22, 2014 at 1:31 pm
“NO THEY CANNOT !!!!!!!!!!!
############## ”
Mosher is shouting again. The strain of watching Mother Nature in action again controlling the climate is getting to him.
More settled science from Warmistas.
__________________________
Sorry, but you misattributed that quote to Mosher. What he actually said was even dumber than what you think he said; you misunderestimated him (or something).

Greg
March 23, 2014 12:19 am

1sky1 says:
“If only an intrinsically complex physical problem could be unravelled by mere statistics.”
Indeed. It’s well overdue that the climate is analysed as a system (i.e. an engineering problem) rather than a box of random number generators.
Oceans, evaporation and heat transfer are physically interlinked, not random processes.
Starting with the foregone conclusion that the system can be represented as CO2 rise + “noise” is the fundamental error of the last 30 years of climate science.
PS. It hasn’t worked.

Reply to  Greg
March 23, 2014 9:34 am

Greg:
Right on. Climatologists err in appropriating the ideas of “signal” and “noise” from electrical engineering and applying them to the problem of controlling the climate. Under Einsteinian relativity, a “signal” is capable of propagating from the present to the future but incapable of propagating from the future to the present, for to do the latter it would have to travel at superluminal speed. To control the climate, the control system needs information about the conditional outcomes of events, but this information cannot be carried by a signal. Violating relativity is one of the many fundamental errors that climatologists have made in constructing their field.

Stephen Richards
March 23, 2014 2:19 am

Gixxerboy says:
March 22, 2014 at 4:59 pm
Nicholas Tesdorf
That was not Mosher shouting. It was Stephen Richards. Mosh was parodying it with his line about ‘settled science from skeptics’
Then you and Mosher need to explain how any useless model can PREDICT anything, dickhead.
There isn’t a climate or weather model out there that can predict the future with 100% accuracy. You need to go back to school and learn to understand the limitations of models; all models have them. The best engineering models do help, but they are modeling systems that are moderately predictable. The climate and weather are not in that category.

RoHa
March 23, 2014 3:57 am

“The world’s oceans cover more than 72 percent of the earth’s surface, impact a major part of the carbon cycle,”
And does the impact have any effect?

Editor
March 23, 2014 4:01 am

“through better statistical methods and models currently in development, a greater understanding of [El Niño and La Niña] and their associated impacts will help forecasters better predict potentially catastrophic events, which will likely be increasingly important as our climate changes.” They are getting things backwards. First they need to look at the major climate factors – things like solar cycles and ocean oscillations, i.e. the things that change climate. Then they can look at the more minor or short-term things like El Niño and La Niña. There’s little value in working on El Niño and La Niña on the basis that they will become “increasingly important as our climate changes” if they don’t know what changes climate in the first place. How do I know that they don’t know? They don’t know what caused the MWP, LIA, or any of the climate cycles of the Holocene, so obviously they don’t know what’s causing what looks like a continuation of those cycles that we’re in right now.
Dr Burns Mar 22 3:28pm: “When models can forecast with any accuracy whether it will rain in two days’ time, I may start to have some faith in them.” You are talking about weather models. Weather models are, and always will be, useless for prediction on any climate scale. For that, climate models are needed, and currently there are none. The models used by the IPCC and others are not climate models; they are weather models.

Jean Parisot
March 23, 2014 4:33 am

Loosely translated: ‘we don’t really know what’s going on, but we need more money’.

son of mulder
March 23, 2014 5:05 am

What, where, when and magnitude would be the useful (im)possible predictions. Statistical modelling will be a waste of money in a chaotic, coupled, non-linear system.

Reply to  son of mulder
March 23, 2014 9:55 am

son of mulder:
Despite the difficulties that you describe, statistically validated models have already been developed that predict average air temperatures over the western states of the United States over a forecasting horizon of 6 months. This has been made possible by the existence of 328 statistically independent observed events extending back to the year 1850. An impediment to successful development of a model that forecasts over a horizon of 30 years is that there are only between 5 and 6 statistically independent observed events. To gather 328 30-year observed events, one would need to observe the climate for 30 years/event * 328 events = 9840 years.

Ian W
March 23, 2014 5:14 am

I would be interested in how statistics, which are collections of data from past measurements, can forecast the behavior of a coupled, complex, chaotic, non-linear, dynamic system of systems. As everyone appears to accept that the climate system is ‘chaotic’ …. from http://mathworld.wolfram.com/Chaos.html
“chaotic systems are distinguished by sensitive dependence on initial conditions and by having evolution through phase space that appears to be quite random.”
Statistics by its very nature deals with averaging the initial conditions (or Bayesian priors), and those are only the known initial conditions; there are many “unknown unknowns” that may also lead to rapid evolution into different states. Statistics is precisely the incorrect tool for forecasting the behavior of the ‘coupled, complex, chaotic, non-linear, dynamic system of systems’ that is Earth’s climate, as a consequence of its unpredictability.
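The “sensitive dependence on initial conditions” in the MathWorld quote can be demonstrated with the doubling map x → 2x mod 1, a textbook chaotic system (my illustration, not Ian W’s):

```python
def doubling_map(x, steps):
    # x -> 2x mod 1: perhaps the simplest chaotic map; any error in the
    # initial condition doubles on every iteration
    for _ in range(steps):
        x = (2.0 * x) % 1.0
    return x

a = doubling_map(0.2, 25)
b = doubling_map(0.2 + 1e-10, 25)   # perturb the start by one part in 10^10
gap = abs(a - b)                     # grown by a factor of 2**25, to ~3e-3
```

After only 25 steps the two trajectories are macroscopically apart, which is the mechanism behind the commenter’s point: however finely the initial state is measured, pointwise long-horizon prediction in such a system eventually fails, though statistical properties of the attractor may still be predictable.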

Reply to  Ian W
March 23, 2014 10:31 am

Ian W
The term “statistics” has several meanings. One is “mathematical statistics.” The latter makes it possible to deal with a situation one usually faces in the construction of a model: information that is missing for a deductive conclusion. This missing information forces the model builder to replace the rule of classical logic, in which every proposition has a truth-value, with the rule in which every proposition has a probability of being true, giving rise to mathematical statistics.
The term “statistical model” is sometimes used in reference to a model that is constructed solely from observational data, as is the case, for example, under regression analysis. However, it is possible under available methods of mathematical statistics to supplement observational data with natural laws in the construction of a model. Doing so results in a better model.

Mike Bromley the Kurd
March 23, 2014 5:19 am

Interesting, Mr. Mosher, You try to dismantle a perceived house of cards by building one.

Gamecock
March 23, 2014 5:59 am

Bob Tisdale says:
March 22, 2014 at 5:26 pm
Why are they wasting everyone’s time with a press release about something in development?
========================
Because they can take credit for what they are working on. In case what they are working on doesn’t pan out. I see it as a sign that they really have no faith in what they are doing. “We better publish now, while we can.”

chuckarama
March 23, 2014 6:22 am

Geez! He makes it sound like it’s so hard to model and predict climate and atmospheric events. Hasn’t he read the latest IPCC model confidence indicators? They don’t need his complex data driven statistics to predict nearly 3C of temp rise with extreme confidence. All this mumbo-jumbo makes it sound a little unsettled and difficult to do still. What a denier!

Matthew R Marler
March 23, 2014 6:24 am

Steven Mosher: note they didnt say unravelled.
note they said
“Their method helped improve the prediction of sea surface temperature extremes and wind fields over the ocean,”
Note the difference between “unravelling” and improving a prediction.

Well said. That post was good.

Bill Illis
March 23, 2014 6:36 am

If it works, good.
If it doesn’t work, when does the effort get abandoned and the funding ceased. If it doesn’t work, when do they try to find out what part is not working and try to fix it.
In climate science, we NEVER get to this point. It just goes on and on and on.

Matthew R Marler
March 23, 2014 6:40 am

Stephen Richards: There isn’t a climate or weather model out there that can predict 100% the future.
So what? There isn’t a perfect paper clip or light bulb either.
We have models to the effect that the weather distribution of December in the NH is shifted toward the cold end compared to the weather distribution of October, which in turn is shifted toward the cold end compared to the weather distribution of July. We have models ranking days vs. nights, and some regions vs. others. A model of climate that predicted the correct rank orderings of the climate summaries 50 years hence might be worth a lot. We don’t have one, but that does not mean that we never will.
There isn’t a model for the lift capacity of an aircraft wing that has 0 error. Models do not need 0 error to be useful. What they need is a public record of being accurate enough to achieve the purposes for which they were designed.

Matthew R Marler
March 23, 2014 6:44 am

Stephen Richards: New Statistical Models Could Lead to Better Predictions of Ocean Patterns and the Impacts on Weather, Climate and Ecosystems
NO THEY CANNOT !!!!!!!!!!!

You didn’t just rule out perfection in that post. You ruled out improvement. That’s a pretty unambiguous statement. It shouldn’t be hard in 20 years to know whether you were correct.

March 23, 2014 7:03 am

Save the Climate!
Study the oceans
Restore Co2 balance.
Take Krill oil.
Save your joints.
Assist the Whales.
(sorry) 😉

Coach Springer
March 23, 2014 7:05 am

Evergreen news: if we improve something, it could do better. Didn’t need a university for that. To date, we have climate models that don’t and can’t work anywhere near the level that market trading models do. My first impression is that this improvement sounds like applying better motor oil to an experimental internal-combustion engine of the 1700s that doesn’t quite work at all.

Jbird
March 23, 2014 7:23 am

Give it up Mosher, The alarmists have lost. (Some just haven’t figured that out yet.) All the nit picking and hair splitting in the world can’t change the fact that the models have failed.

Rob Ricket
March 23, 2014 7:35 am

OT…The global sea ice anomaly just turned positive; thanks to a late season increase in Arctic coverage.
Lake Michigan has lost a great deal of ice. The open water will increase the potential for late-season lake effect snow in the U.P., Indiana and Ohio.

Raymond
March 23, 2014 7:41 am

Mosher,
As a scientific layperson I’m asking you if the following statement from http://iridl.ldeo.columbia.edu/dochelp/StatTutorial/Climatologies/ is true.
“Climatology is commonly known as the study of our climate, yet the term encompasses many other important definitions. Climatology is also defined as the long-term average of a given variable, often over time periods of 20-30 years. Climatologies are frequently employed in the atmospheric sciences, and may be computed for a variety of time ranges………”
If so, it would appear from this definition that the quality of current “Climatology” is strictly dependent on the quality of the computer models and data used, and if one or the other is inadequate, the quality is poor.

Jim G
March 23, 2014 8:07 am

Steven Mosher says:
March 22, 2014 at 1:41 pm
“I predict that you are 6 feet tall, plus or minus 3 feet.
That’s a prediction, but pretty horrible. It has limited use.”
My recollection of Bayesian decision theory is that it is not a classical statistical routine and no confidence intervals can be placed upon results of following the decision tree to its various end points. The data collected to establish the probabilities for the tree branching points may have confidence intervals based upon the sample sizes of the various data sets involved. As I am not familiar with this specific type of Bayesian application, I am open to advice from anyone who knows more.
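For readers unfamiliar with the decision-tree machinery Jim G mentions, a minimal sketch of the mechanics: each branch carries a probability estimated from data, and the tree’s endpoints are compared by expected value. All numbers below are hypothetical.

```python
def expected_value(branches):
    # branches: list of (probability, payoff) pairs for one action;
    # probabilities at a branch point must sum to 1
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

act = expected_value([(0.7, 10.0), (0.3, -5.0)])   # 0.7*10 - 0.3*5 = 5.5
wait = expected_value([(0.5, 4.0), (0.5, 2.0)])    # 0.5*4 + 0.5*2 = 3.0
best = "act" if act > wait else "wait"
```

Jim G’s question stands: nothing in this comparison attaches a classical confidence interval to the result; whatever sampling uncertainty exists lives in the estimated branch probabilities, not in the expected-value arithmetic.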

Jim Bo
March 23, 2014 8:17 am

I modeled this.
On March 31 we’re going to see a media assault on truth (scientific and otherwise) the likes of which mankind has never before experienced.

March 23, 2014 8:31 am

Thanks, A. for exposing yet another alarmist article, but I’m afraid they have to keep on coming; their jobs depend on it.
Greg, when you write “the system can be represented as CO2 rise + “noise” is the fundamental error of the last 30 years of climate science.”, you are exactly right in my opinion. This is the crux of the matter. This how GCMs are programmed, this is why they are so wrong.

Jim G
March 23, 2014 8:35 am

W says:
Though I have significant disdain for climate models, particularly those that have continuously shown that they cannot predict squat, the Bayesian approaches developed for the military back in WWII were somewhat successful in assisting the delivery of wartime materials. I agree that the climate system is chaotic in nature, but perhaps subsets of the various variables involved, tracked using Bayesian methods, might be more useful than time-series approaches. It would depend, of course, upon the size and quality of the data sets used for establishing the probabilities, but cause and effect need not be inferred as in some of the classical methods which we know do not work.

Gamecock
March 23, 2014 8:40 am

Bill Illis says:
March 23, 2014 at 6:36 am
If it doesn’t work, when does the effort get abandoned and the funding ceased. If it doesn’t work, when do they try to find out what part is not working and try to fix it.
====================
If it doesn’t work, we’ll never hear about it.
“Rowers are going to trek the northwest passage, to bring attention to climate change.”
We never got the story “dumbass rowers blocked by ice.” Yet the rowers got full media attention, by pre-publishing.

Gamecock
March 23, 2014 8:49 am

I disagree with all who think climate models could potentially become valuable. They can never have value. If a model correctly predicted, “It’s going to get drier around here in the next 10 years,” no one would do anything differently. People would react when it actually started getting drier.

Chjp Javert
March 23, 2014 9:22 am

Joel O’Bryan says:
March 22, 2014 at 11:18 pm
IMHO, Baysian stats can only help the failed climate models to find more reasons why they are failing. The modelers are good people, and they want their models to actually replicate the real climate. So let’s be smart here, but still skeptical too.
======================================================
I don’t have any Baysian insight, but I sure have opinions of alarmist modelers:
These supposedly well-educated adults are somewhere between willfully naive and corrupt. They knowingly use pseudo-science to obtain money (grants) and power (academic positions). Most aggressively censor dissenting opinion; some propose throwing “deniers” in jail.
Modelers don’t try to “actually replicate the real climate” – if they did, model accuracy would improve via the scientific method (or they’d discover the system is stochastic). The only thing these guys have discovered is Mother Nature does not behave according to their theory. This bovine excrement is pure propaganda for more funding.
This is done to knowingly misallocate hundreds of billions of dollars per year (a non-trivial 0.5% of global GDP). That amount of resource could easily prevent millions of deaths per year (from lack of food, malaria).
These same people get all uppity if you point out they share some pretty unpleasant tactics with fascists or (GASP!) Nazis. Maybe they’d prefer the Catholic inquisition.

Matthew R Marler
March 23, 2014 9:23 am

Terry Oldburg, you might enjoy the book “A Comparison of Classical and Bayesian Methods of Estimation” by Francisco Samaniego

Reply to  Matthew R Marler
March 23, 2014 10:58 am

Matthew Marler:
Thanks for the citation. In the construction of a model, I favor a method that is more aptly described as “optimization” than as “estimation.” The model builder optimizes the missing information in each of the inferences made by the model, minimizing or maximizing it under constraints expressing the available information. I favor this method because it is entirely logical and produces superior results.

Chip Javert
March 23, 2014 9:38 am

Gamecock says:
March 23, 2014 at 8:49 am
I disagree with all who think climate models could potentially become valuable. They can never have value. If a model correctly predicted, “It’s going to get drier around here in the next 10 years,” no one would do anything differently. People would react when it actually started getting drier.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Well, your statement might be accurate for most people, but…
Get yourself a climate model that says the sea is rising 20 feet by the year 2100 (laughable), and you’ll immediately have a bunch of south Pacific politicians clamoring for US taxpayer funding.
All this “clamoring”, plus bad research, adds up to an estimated $350,000,000,000/year (or 0.5% of global GDP). That’s a whole lot of money and power…

ren
March 23, 2014 9:42 am

America should prepare for the next round of blizzards and low temperatures next week. If someone says that it is because of GW, it contradicts the facts.

Roger D PGeol
March 23, 2014 9:47 am

As a petroleum geologist, my colleagues and I are frequently asked to evaluate geological phenomena on a statistical basis. For example, the Bayesian approach is used to predict the distribution of reservoir properties such as sand thickness, porosity, permeability, etc. to be used in computer simulations of a reservoir system under changing conditions. The model starts with the known parameters at various control points, usually wellbore penetrations, then applies the statistical approach to “fill in” those parameters in the intervening grid where data is unavailable, without the intervention of biases. After having worked in reservoir characterization for six years and thereafter having run numerous models on numerous other projects in which I have been involved, my conclusion is that the Bayesian model seldom bears any resemblance to a real-world geological phenomenon, and the most important part of the model is still the professional input as to the reality of the model and sorting out the significant variables. Pretty much like prejudicial judgement; simply put, if the data don’t fit the model, it’s the MODEL, not the data. In reservoir modeling it is fluid flow, pressures, injection and OIL PRODUCTION RATES and such that are the important data, things that are physically measured. I’ll bet that the Bayesian approach will work a lot better in a model where you cannot verify the system performance…;-) Just my opinion…
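As a deliberately crude stand-in for the “fill in” step Roger describes (real reservoir workflows use kriging or Bayesian geostatistics, not this), inverse-distance weighting shows the basic idea of honoring control-point data while interpolating the intervening grid. The well locations and porosity values below are made up.

```python
def idw(wells, x, power=2.0):
    # Inverse-distance-weighted estimate at location x from
    # (location, measured_value) control points, e.g. wellbore porosities
    num = den = 0.0
    for loc, val in wells:
        d = abs(x - loc)
        if d == 0.0:
            return val            # exactly at a wellbore: honor the data
        w = d ** -power
        num += w * val
        den += w
    return num / den

wells = [(0.0, 0.12), (1.0, 0.20), (3.0, 0.15)]   # hypothetical porosities
estimate = idw(wells, 0.5)        # falls between the measured extremes
```

Roger’s caveat applies with force to any such scheme: between the wells the estimate is only as good as the control points and the professional judgment about which variables matter, and nothing in the interpolation itself can verify the geology.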

Reply to  Roger D PGeol
March 23, 2014 11:24 am

Roger D PGeol:
You’re describing Bayesian parameter estimation. Its Achilles heel is the multiplicity of uninformative prior probability density functions (PDFs), each yielding a different posterior probability density function. Which of these posterior PDFs does one use in making a decision? The choice is arbitrary.
To apply this lesson to global warming climatology: the choice of posterior PDF over the equilibrium climate sensitivity is arbitrary. Some posterior PDFs yield scary conclusions. Others yield non-scary conclusions. In an expression of the self-interest of the professional climatologist rather than logical deduction, he or she routinely presents the maker of governmental policy with one of the scarier ones without pointing out that non-scary ones are also possibilities.

RACookPE1978
Editor
March 23, 2014 10:00 am

Jim Bo says:
March 23, 2014 at 8:17 am
I modeled this.
On March 31 we’re going to see a media assault on truth (scientific and otherwise) the likes of which mankind has never before experienced.

Sorry, I don’t follow your logic here. April Fool’s Day would be a better release day for their attacks, would it not? 8<)

March 23, 2014 10:01 am

New Statistical Models Could Lead to … ”
No skill, no innate knowledge (inherent knowledge of the physics of the interrelationships of factors, coefficients, etc.) of how those systems function. No ‘real’ value; little better than a Monte Carlo analysis (a statistical spread of possible values about a central locus)…

EternalOptimist
March 23, 2014 10:01 am

I don’t understand why Sherlock Mosher wishes to get my height by looking at the wear and tear on my clothing. Why not get a tape measure and measure?

Chip Javert
March 23, 2014 10:08 am

Terry Oldberg says:
March 23, 2014 at 9:55 am
…statistically validated models have already been developed that predict average air temperatures over the western states of the United States over a forecasting horizon of 6 months. This has been made possible by the existence of 328 statistically independent observed events extending back to the year 1850…
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Assuming what you’re saying is the model ACCURATELY & CONSISTENTLY predicts average air temp, please provide a cite.
I don’t begin to have the mathematical chops to validate the claim, but Willis or Bob sure do…(hint, hint).

Reply to  Chip Javert
March 23, 2014 11:54 am

Chip Javert:
There is a bibliography at http://www.entropylimited.com/pubs.htm .

Chip Javert
March 23, 2014 10:17 am

Terry Oldberg says:
March 23, 2014 at 9:55 am
…statistically validated models have already been developed that predict average air temperatures over the western states of the United States over a forecasting horizon of 6 months. This has been made possible by the existence of 328 statistically independent observed events extending back to the year 1850…
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Just a thought:
“328 statistically independent observed events extending back to the year 1850…” appears to be EXACTLY 2 per year: 328/(2014−1850) = 328/164 = 2.00000000 (yeah, I know I might be overdoing it on the number of decimal places, but I’m making a point).
Gotta be some good model to go from that granularity to consistently & accurately forecasting a future 6-months of a stochastic system.

Reply to  Chip Javert
March 23, 2014 12:13 pm

Chip Javert:
These results were produced by regarding the model as the algorithm of the optimal decoder of a message coming from the future. This message consisted of the sequence of outcomes of statistically independent events.
Unlike the optimal decoder of communications engineering, this one had to work without the agency of an error-correcting code. A consequence was that the outcomes of events were known only to within the probabilities of their occurrences. The decoder produced information about the outcomes, but not perfect information.
Under current circumstances, it is impossible to construct the model of a decoder that produces any information at all about an event of 30 years’ duration, for the number of observed 30-year events is inadequate to the task.

Box of Rocks
March 23, 2014 10:22 am

A Random Walk Down Wall Street…

March 23, 2014 10:27 am

re: Martin 457 says March 22, 2014 at 4:32 pm
I hope for the best with this. Having lived in ‘Tornado Alley’ my whole life, we still have the occasional tornado watch issued 1/2 an hour before the tornadoes start forming.
Perhaps along the leading edge of the dry line (southwest thru TX, thru OK and NNE into KS); sometimes T-storm initiation along the dryline ‘holds off’ forming any cumulus due to a strong ‘cap’, so it makes more sense to go partly with ‘observations’, looking for initiation along that dryline before setting a projected ‘schedule’ (Watches) for the forthcoming events…

Jim Bo
March 23, 2014 10:30 am

RACookPE1978 says: March 23, 2014 at 10:00 am

April Fool’s Day would be a better release day for their attacks, would it not? 8<)

I wish I could share your amusement. One week to go before release and the deluge has already commenced.

March 23, 2014 10:33 am

Terry Oldberg says: March 23, 2014 at 9:34 am
… Climatologists err in appropriating the ideas of “signal” and “noise” from electrical engineering …
Perhaps a better fit: hypocycloids (circles within circles) with varying parameters constrained as to cycle length, distance, etc., and some parameters ‘offset’, as with lobes on a cam.

Theo Goodwin
March 23, 2014 10:57 am

Roger D PGeol says:
March 23, 2014 at 9:47 am
Exactly. You speak from hard earned experience. Too bad no climate modeler has similar experience. In addition, the geology of reservoirs is a rather rich context compared to the average temperature of the planet.

March 23, 2014 11:01 am

“Terry Oldberg says: March 23, 2014 at 9:16 am

climatologists extract a posterior probability density function for the equilibrium climate sensitivity from a global temperature time series. My guess is that the University of Missouri work employs this procedure.”

Oh, their aching sacroiliacs! What wonderful phrasing for calling their derrieres dense and pulling numbers from them!
I’ll have to remember that one! And your overall post is excellent even without subtle wordplay.

March 23, 2014 11:26 am

“_Jim says: March 23, 2014 at 10:27 am

re: Martin 457 says March 22, 2014 at 4:32 pm
I hope for the best with this. Having lived in ‘Tornado Alley’ my whole life, we still have the occasional tornado watch issued 1/2 an hour before the tornadoes start forming.

Perhaps along the leading edge of the dry line (southwest thru TX, thru OK and NNE into KS); sometimes T-storm initiation along the dryline ‘holds off’ forming any cumulus due to a strong ‘cap’ and it therefore makes more sense to go partly with ‘observations’ looking for initiation along that dryline before aligning a projected ‘schedule’ (Watches) for the forthcoming events ….”

Tornado watches, indeed many weather related ‘watches’ are issued when the conditions are ‘ripe’ for the weather related phenomena to form.
Minutes can pass between clear skies to towering cumulonimbus clouds starting to form anvils and rumbling ominously. Without interference super cells can form and begin rotating.
The idea of the ‘Tornado watch’ is that one keeps tuned to both the skies and NOAA weather alerts. If one is living in Tornado Alley then there are often local TV/radio stations that will broadcast frequent Doppler alerts along with likely paths.
When a tornado or a funnel cloud is spotted, ‘tornado watches’ get upgraded to ‘tornado warnings’. If I am in the path of a tornado warning, we clear the house (turn off lights, stove, oven…) and head for the basement with the weather radio and pets. And I’m not even in Tornado Alley!
But I have been in Tornado Alley and gotten caught in serious conditions that were not there 30 minutes prior, nor were they on the horizon. Just ill luck that I happened to be under the spot where the weather went from ‘ripe’ to unstable to storms. From sunscreen-covered to rain-jacketed, with rain-covered glasses, frantically trying to get somewhere lightning was not striking. If I had heard a roar, I would’ve been satisfied with a ditch and grass.
Like watching the ‘stormchasers’ who finally caught a ‘tornado’ or got caught by one.

Matthew R Marler
March 23, 2014 1:04 pm

Terry Oldberg: In the construction of a model, I favor a method that is more aptly described as “optimization” than as “estimation.” The model builder optimizes the missing information in each of the inferences that are made by the model by minimizing it or maximizing it under constraints expressing the available information. I favor this method because it is entirely logical and produces superior results.
That sounds like the E-M algorithm.
I personally only respect Bayesian methods where there is a demonstrably accurate distribution of measurements in some describable population supporting the prior. That is basically Samaniego’s message: in order to achieve an improvement over MLEs (for example) the prior has to be at least sufficiently accurate: past or above a “threshold” as he puts it.
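Samaniego’s “threshold” point can be illustrated numerically: a Bayesian posterior mean beats the MLE only when the prior is centered close enough to the truth. A minimal sketch for estimating a binomial probability (the Beta prior and all numbers here are illustrative assumptions, not anything from Samaniego’s work):

```python
import random

def mse_of_estimators(true_p=0.3, n=20, prior_mean=0.3,
                      prior_strength=10, trials=5000, seed=1):
    """Mean squared error of the MLE (sample frequency) versus a
    Bayesian posterior mean under a Beta prior centered at prior_mean."""
    rng = random.Random(seed)
    a = prior_mean * prior_strength
    b = (1.0 - prior_mean) * prior_strength
    mse_mle = mse_bayes = 0.0
    for _ in range(trials):
        k = sum(rng.random() < true_p for _ in range(n))
        mle = k / n
        bayes = (k + a) / (n + a + b)   # Beta-Binomial posterior mean
        mse_mle += (mle - true_p) ** 2
        mse_bayes += (bayes - true_p) ** 2
    return mse_mle / trials, mse_bayes / trials

# A prior centered near the truth beats the MLE; one centered far away loses.
good_mle, good_bayes = mse_of_estimators(prior_mean=0.3)
bad_mle, bad_bayes = mse_of_estimators(prior_mean=0.8)
print(good_bayes < good_mle, bad_bayes > bad_mle)  # True True
```

The crossover between the two regimes is the “threshold”: a prior is only worth using if it is demonstrably at least that accurate.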

Reply to  Matthew R Marler
March 23, 2014 1:31 pm

Matthew Marler:
I’m thinking of entropy minimax pattern discovery as described by Ron Christensen in the book “Multivariate Statistical Modeling.” Christensen uses Bayes’s theorem to solve the inverse problem that arises because the model requires probability values while observational science gives the model builder only frequency values. For this application of Bayes’s theorem, the uninformative prior can be shown to be, uniquely, the uniform prior on the interval between 0 and 1. That the prior is unique avoids the usually justified criticism that frequentists lodge against Bayesian methods, while providing a solution to the inverse problem.
Early in the development of entropy minimax pattern discovery, Christensen tried MLE. This resulted in the straight rule. He found, however, that in this context MLE led to catastrophic failure of his algorithm. The failure resulted from the straight rule’s usually slight overestimate of the information content in the observed frequencies.
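Assuming the “straight rule” here means estimating a probability by the raw observed frequency k/n, the contrast with the uniform prior on [0, 1] can be sketched as follows. The certainty that the straight rule assigns after an unbroken run of successes is exactly the kind of overestimate of information content that can break an entropy-based algorithm:

```python
def straight_rule(k, n):
    """MLE: estimated probability = raw observed frequency."""
    return k / n

def uniform_prior_estimate(k, n):
    """Posterior mean under the uniform prior on [0, 1] (Laplace's rule
    of succession): never exactly 0 or 1."""
    return (k + 1) / (n + 2)

# After 5 successes in 5 trials the straight rule claims certainty,
# assigning zero probability (infinite surprise) to any future failure;
# the uniform-prior estimate hedges instead.
print(straight_rule(5, 5))            # 1.0
print(uniform_prior_estimate(5, 5))   # 0.857... (i.e., 6/7)
```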

Beale
March 23, 2014 2:38 pm

How can there be better predictions if the science is settled?

son of mulder
March 23, 2014 4:25 pm

” Terry Oldberg says: March 23, 2014 at 9:55 am
Despite the difficulties that you describe, statistically validated models have already been developed that predict average air temperatures over the western states of the United States over a forecasting horizon of 6 months. This has been made possible by the existence of 328 statistically independent observed events extending back to the year 1850.”
I’m sorry, but since the climate going forward is meant to be changing, what possible value can historical events have in predicting the future by statistical methods?

catweazle666
March 24, 2014 7:08 am

“Bayesian hierarchical model”
Ah, another computer game, how nice.
Is it out for Xbox One yet?
How does it compare with Game of Thrones?

son of mulder
March 24, 2014 8:32 am

Terry Oldberg, I’ve read the references that you quoted to me. I can’t see how the process described differs from predicting that (a) tomorrow will be like today, or (b) summer will be warmer than winter. (a) has some use if you haven’t heard a forecast for tomorrow; (b) adds no value, since we already know it.
But what about predicting how the jetstream will behave over Europe in 2044–2054, so we know whether and where to construct bigger and better flood relief and prevention systems, or more reservoirs, in 30 years’ time?

Reply to  son of mulder
March 24, 2014 12:25 pm

son of mulder:
Your examples a) and b) give me a clue to what it is that you don’t understand about the lesson that I’m trying to convey to you. Thanks for giving me the opportunity to clarify.
In the referenced articles I derive logical principles which, when followed, may result in the ability of a decision maker to predict the outcomes of events lying in the future though this decision maker has only observed the outcomes of events lying in the past. To do so is to “generalize from specific instances.” How to do so is “the problem of induction.” Solving this problem is the topic of the second article. Generalization is facilitated by abstraction; this is the topic of the first article.
Following the logical principles results in construction of a model, one of whose functions is to make a predictive inference. A “predictive inference” is an extrapolation from an observed state of nature to an unobserved but observable state of nature. The former state is called the “condition” while the latter state is called the “outcome.”
Under circumstances found in practice, predictive inferences consistent with observed events are of infinite number but a decision maker must select one of them in making a prediction. How can he make this selection? This is where the logical principles come in. They select that predictive inference which is most useful for decision making from among the many possibilities. The most useful predictive inference is the one that provides the decision maker with the most information about the outcomes of the events of the future.
In practice, the information provided by the model is incomplete with the result that the model predicts the outcome of each event only to within a probability. Neither of your examples illustrates the idea of imperfect information or probability.
A predictive inference references the set of all possible conditions and the set of all possible outcomes. Neither of your two examples references the two sets. Also, in selecting that predictive inference which provides the most information, one varies the definitions of the conditions. In composing your examples, you’ve not done that.
Regarding the usefulness of the predictions, this property is imparted to the model through the selection of the set of all possible outcomes. This set should be selected by reference to the decisions that will be made with the model. In the case of a trial of a lung cancer drug, it might be appropriate to select the set { alive, dead } in reference to a patient a year after he has taken the new drug. For a lung cancer patient, knowing the probability that he’ll be dead in a year conditional on whether he takes the new drug will be useful.
In coming up to speed on the topic of my tutorial, you’d find it useful to compose an example featuring a set of conditions and a set of outcomes, wherein the definitions of the outcomes are constants but the definitions of the conditions are variables. You should select the definitions of the outcomes for their pertinence to making a decision that is important to someone. The conditions and conditional outcomes should have probabilities of occurrence. A model that is described in this way is amenable to optimization through the use of ideas from information theory.
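The selection criterion described above, choosing the condition definitions that provide the most information about a fixed set of outcomes, can be sketched with the mutual information between condition and outcome. The joint probability tables below are hypothetical numbers invented purely for illustration:

```python
from math import log2

def mutual_information(joint):
    """I(condition; outcome) in bits, from a joint probability table
    keyed by (condition, outcome) pairs."""
    pc, po = {}, {}
    for (c, o), p in joint.items():
        pc[c] = pc.get(c, 0.0) + p
        po[o] = po.get(o, 0.0) + p
    return sum(p * log2(p / (pc[c] * po[o]))
               for (c, o), p in joint.items() if p > 0)

# Outcomes are held fixed ({alive, dead}); two candidate condition
# definitions are scored by the information each provides about them.
informative = {("drug", "alive"): 0.40, ("drug", "dead"): 0.10,
               ("no drug", "alive"): 0.25, ("no drug", "dead"): 0.25}
uninformative = {("drug", "alive"): 0.325, ("drug", "dead"): 0.175,
                 ("no drug", "alive"): 0.325, ("no drug", "dead"): 0.175}
print(mutual_information(informative))    # about 0.073 bits
print(mutual_information(uninformative))  # about 0 bits
```

Under this criterion one would redefine the conditions, recompute the information, and keep the definition that tells the decision maker the most about the outcomes.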

1sky1
March 24, 2014 3:52 pm

Steven Mosher says:
March 22, 2014 at 1:41 pm
Typically enough, you fail to recognize the difference between what needs to be done scientifically to unravel a physical problem (which is what I address) and what someone claims to be an improvement in purely phenomenological statistics. I don’t have time for such an uncomprehending quarrel.

son of mulder
March 24, 2014 4:37 pm

Terry, you are asking for a lot of time to be invested in a detailed study of what I glean to be a fusion of Bayesian probability theory, reformulation of systems into phase spaces, and something analogous to a Least Action principle for choosing the best prediction. But before that, you need to show me how your methods would practically address the 30-year European jetstream prediction, because that is essentially what determines storm patterns in Europe, in a context where there is a continuing change in the environment (anthropogenic CO2) and a naturally, chaotically evolving physical system. That would give me some idea how practical it is to apply what you are proposing to a fairly basic climate scenario to give a useful answer.
I’m afraid my mathematical instinct is that, because the world climate is an open, chaotic thermodynamic system, the same lack of convergence that one sees in, say, the closed classical three-body problem will be present in predictions of the jetstream’s behaviour, only more so.
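That instinct about sensitive dependence can be illustrated with the simplest chaotic system, the logistic map at r = 4 (a toy stand-in, not a climate model): two initial states differing by one part in a billion diverge to order one within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """Logistic map at r = 4: the textbook fully chaotic system."""
    return r * x * (1.0 - x)

# Two initial conditions differing by one part in a billion.
a, b = 0.2, 0.2 + 1e-9
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
print(max_gap > 0.1)  # True: the tiny difference grows to order one
```

The error roughly doubles each step, so after about 30 iterations the two trajectories are effectively unrelated, which is the convergence problem in miniature.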

Reply to  son of mulder
March 24, 2014 7:20 pm

son of mulder:
The topic that you’d need to study, if you have not already done so, is logic. As currently organized, global warming climatology is illogical.