Think-Tank Says Trained Chimp Can Predict Hurricanes Better Than NOAA… And Puts it to the Test

Chimp Predicts 6-8 Atlantic Hurricanes in 2010

Via press release: Washington, DC: The National Oceanic and Atmospheric Administration’s track record in predicting the number of Atlantic hurricanes is so abysmal that a trained chimp could do better, says The National Center for Public Policy Research, a Washington, D.C.-based think tank.

The group is putting this claim to the test, issuing a 2010 Atlantic Hurricane Forecast today determined by a chimpanzee, “Dr. James Hansimian.”

Video links follow.

A video of Dr. Hansimian and his methodology can be found at www.nationalcenter.org/HurricaneForecast.html or http://tw0.us/DNm.

or watch it below:

The forecast is being issued in advance of NOAA’s May “Atlantic Hurricane Season Outlook,” expected to be released next week.

“NOAA’s May outlooks have been wrong three out of the last four years – or 75% of the time,” said David Ridenour, vice president of The National Center for Public Policy Research. “We think our chimp can do better. He hasn’t been wrong so far. Of course, this is his very first hurricane season forecast.”

The video isn’t intended to needle NOAA for its erroneous forecasts, but to make a larger point about our current understanding of climate.

“NOAA’s forecasts have been wrong not because of a lack of dedication or competence of its forecast team, but because climate science is really still in its infancy,” said Amy Ridenour, president of The National Center for Public Policy Research. “We should remember this as we consider whether to adopt economically-ruinous caps on energy. If we can’t rely on 6-month forecasts, how can we rely on forecasts of what rising carbon concentrations will do to our climate 25, 50 or even 100 years out?”

The National Center for Public Policy Research is also issuing a challenge to NOAA.

“If, at the end of the hurricane season, Dr. Hansimian’s forecast turns out to be more accurate than NOAA’s, we challenge the agency to make him an honorary member of NOAA’s hurricane specialists unit,” said David Ridenour. “In return, if NOAA’s forecast is more accurate, we’ll include a prominently-displayed mea culpa on our website.”

Dr. James Hansimian, says the video, is “author of the book, ‘The Banana Curve: No Tricks Needed,’ published by East Anglia University Press.” The video was filmed on location in Las Vegas, Nevada on March 24, 2010 – before the latest predictions by either Colorado State University’s forecast team, which is led by Phil Klotzbach, or the forthcoming predictions expected from NOAA.

Dr. Hansimian is played by Kenzie, who starred as “Chim Chim” in the 2008 Warner Brothers release “Speed Racer,” appearing with actors John Goodman, Emile Hirsch and Susan Sarandon. Kenzie also had a guest spot on the VH1 reality show, “Hogan Knows Best,” starring Hulk Hogan.

A second video will be released on December 1, at the conclusion of the hurricane season, with Dr. Hansimian’s reaction to the performance of his forecast against the NOAA forecast.

The National Center for Public Policy Research is a non-partisan, non-profit – somewhat less stodgy and more irreverent – free market foundation based in Washington, D.C. It is a truly independent organization, receiving 98% of its funding from individuals through hundreds of thousands of donations. No individual, foundation, or company provides the organization with more than a fraction of one percent of its annual revenue.

Permission to use video on-air or online is granted so long as appropriate attribution to the National Center for Public Policy Research is included and the National Center is informed of its use. Please use the telephone, fax or email contact information at the top for all inquiries.

– 30 –
Contact: David Almasi at (202) 543-4110 x11 or (703) 568-4727 or e-mail dalmasi@nationalcenter.org or Judy Kent at (703) 759-7476 or jkent@nationalcenter.org
54 Comments
Admin
May 18, 2010 2:39 pm

If any comments here have been deleted, I apologize. It was probably my fault, and it was an accident.

May 18, 2010 2:56 pm

He’s a Funky Junky Monkey!

sleeper
May 18, 2010 3:01 pm

Maybe we could get the trained chimp to do your job too, Charles. 😉
.
[He’s already got me, and I get paid less than a chimp. ~dbs]

Henry chance
May 18, 2010 3:31 pm

Hadley was wrong 9 out of the last 10 years. They predicted hotter than actual.
Now with the new Super dooper computer they can calculate faster and be wrong sooner.
Should we reject the computer models devised by scientists that are wrong almost all the time?

timetochooseagain
May 18, 2010 3:31 pm

Let’s look at how NOAA’s forecasts have done. You can find the forecasts conveniently in tables on the wiki pages representing each Atlantic Hurricane Season. I’ll show May, since the “forecast” done in August is rather disingenuous, as the season can be well underway by that point. Actual numbers are in parentheses.
Named Storms
2009 9-14 (9) barely right
2008 12-16 (16) barely right
2007 13-17 (15) Spot on
2006 13-16 (10) FLAT WRONG
2005 12-15 (28) FLAT WRONG
2004 12-15 (15) barely right
2003 11-15 (16) WRONG
2002 9-13 (12) fair
In my view, it looks as though most of NOAA’s forecasts were fairly decent. The last 8 seasons have seen only three in which the number of named storms was outside of the predicted range. 2005 was especially bad, probably because nobody expected such a historically active season, and because better technology now allows us to detect more storms.
But how about their predictions for just hurricanes?
2009 4-7 (3) wrong
2008 6-9 (8) fair
2007 7-10 (6) wrong
2006 8-10 (5) FLAT WRONG
2005 7-9 (15) FLAT WRONG
2004 6-8 (9) wrong
2003 6-9 (7) fair
2002 6-8 (4) WRONG
Interestingly, here their performance looks much worse. Only TWICE in the last 8 seasons has the actual number of hurricanes fallen within the range of predictions by NOAA in May.
So, how we judge NOAA’s performance depends on what exactly we look at to test their forecasts. They are very bad at predicting the number of hurricanes, but do better with named storms. What will this group choose to test their predictions with?
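For anyone who wants to reproduce the tally above, here is a minimal Python sketch using the same wiki-sourced numbers quoted in this comment; a season counts as a hit if the observed total falls inside NOAA’s May range.

# Minimal sketch: tally how often the actual Atlantic season totals fell
# inside NOAA's May outlook ranges, using the numbers quoted above
# (2002-2009). Each entry is ((low, high), actual).
named_storms = {
    2009: ((9, 14), 9),   2008: ((12, 16), 16), 2007: ((13, 17), 15), 2006: ((13, 16), 10),
    2005: ((12, 15), 28), 2004: ((12, 15), 15), 2003: ((11, 15), 16), 2002: ((9, 13), 12),
}
hurricanes = {
    2009: ((4, 7), 3),    2008: ((6, 9), 8),    2007: ((7, 10), 6),   2006: ((8, 10), 5),
    2005: ((7, 9), 15),   2004: ((6, 8), 9),    2003: ((6, 9), 7),    2002: ((6, 8), 4),
}

def hits(table):
    """Count seasons where the actual total landed inside the forecast range."""
    return sum(low <= actual <= high for (low, high), actual in table.values())

print("Named storms in range:", hits(named_storms), "of", len(named_storms))  # 5 of 8
print("Hurricanes in range:  ", hits(hurricanes), "of", len(hurricanes))      # 2 of 8

With these numbers the named-storm forecast verifies 5 times out of 8 and the hurricane forecast only twice, matching the counts above.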

kadaka (KD Knoebel)
May 18, 2010 4:03 pm

Found in: sleeper on May 18, 2010 at 3:01 pm

[He’s already got me, and I get paid less than a chimp. ~dbs]

The chimp gets bananas but you get raspberries?

Ray
May 18, 2010 4:04 pm

Get that Chimp a generous grant! He is making robust models. He definitely needs to study this further.

Dan
May 18, 2010 4:10 pm

I’m a hurricane researcher, and while I agree that hurricane forecasts are not great, this monkey stunt is a little ridiculous. The hurricane forecasts are effectively just applications to the upcoming hurricane season of historical statistical correlations between a few pre-season indicators, most notably ENSO indices, and historical hurricane activity. The correlations explain only limited amounts of the variance, but over many years they should predict with decent accuracy active vs. inactive seasons. However, for any given year of course a monkey could do better, in the same way that a monkey could guess any future quantity and beat a trained forecast once (or twice etc., with decreasing probability) regardless of how accurate the forecast actually is. Moreover, to use this stunt to mock climate science is inappropriate.
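For illustration only, here is a rough sketch of the kind of statistical seasonal outlook Dan describes: regress historical hurricane counts on a single pre-season predictor (an ENSO index) and apply the fit to the coming season. The index values, counts and variable names below are made-up placeholders, not real data, and actual outlooks use several predictors and more careful statistics.

# Rough illustration of a regression-based seasonal forecast.
# All numbers below are hypothetical placeholders, NOT real observations.
import numpy as np

enso_index = np.array([-1.2, -0.5, 0.1, 0.8, 1.5, -0.9, 0.3, 1.1])  # hypothetical spring ENSO values
counts = np.array([10, 8, 7, 5, 4, 9, 7, 5])                        # hypothetical seasonal hurricane counts

# Fit a simple linear relationship: count ~ a * index + b
a, b = np.polyfit(enso_index, counts, deg=1)

# r^2 shows how much of the year-to-year scatter the single predictor
# explains -- typically far from all of it.
fitted = a * enso_index + b
r2 = 1 - ((counts - fitted) ** 2).sum() / ((counts - counts.mean()) ** 2).sum()

this_year_index = -0.4  # hypothetical pre-season ENSO value for the coming year
print(f"fit: count = {a:.2f} * ENSO + {b:.2f}, r^2 = {r2:.2f}")
print(f"outlook for the coming season: about {a * this_year_index + b:.1f} hurricanes")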

Wren
May 18, 2010 4:16 pm

timetochooseagain says:
May 18, 2010 at 3:31 pm
Let’s look at how NOAA’s forecasts have done. […] So, how we judge NOAA’s performance depends on what exactly we look at to test their forecasts. They are very bad at predicting the number of hurricanes, but do better with named storms. What will this group choose to test their predictions with?
====
The sum of the predicted hurricane ranges for 2002 to 2009 is 44-62, and the sum of the actual hurricanes during this period is 57, which is slightly above 53, the middle of that range.
The chimp is not being allowed to predict any number it wants (e.g., 99 hurricanes for 2010), so the National Center for Public Policy Research is monkeying with the experiment.

May 18, 2010 4:19 pm

Somewhere Homer Simpson is smiling.

mr.artday
May 18, 2010 4:33 pm

Dan@4:10pm. It is not possible to be inappropriate in mocking the most prostituted science in history. Your profession has either sold out to the socialists or stood shamefully silent while world class liars tried to swindle trillions, immiserate billions and destroy freedom.

Brian M. Flynn
May 18, 2010 4:38 pm

In “Inexpert Elicitation by RMS” dated April 22nd, 2009, Pielke Jr. says he “created” several panels of monkeys which “produced the exact same results [on short term hurricane activity] as those produced by the RMS expert panels comprised of world leading scientists on hurricanes and climate” (RMS being a leading company that provides catastrophe models which are used to assess risk in the insurance and reinsurance industries), and he explains how.
A case of monkey see, monkey do?

Reed Coray
May 18, 2010 4:46 pm

Henry chance says: May 18, 2010 at 3:31 pm
Should we reject the computer models devised by scientists that are wrong almost all the time?
No. To a gambler, someone who’s wrong X% of the time (where X > 50) is just as valuable as someone who’s right X% of the time. What we have to do is get Vegas to handicap the Hadley computer models.

timetochooseagain
May 18, 2010 4:59 pm

Wren, that’s a non-standard method of comparing. All you’ve shown is that they don’t systematically get their predictions wrong in one direction. The Mean Absolute Error is better, although in that case I have to take the midpoint of the range given by NOAA, which is why I simply asked how often reality falls inside or outside their range.
Here is what I get for mean absolute error:
Storms: 3.625
Hurricanes: 2.75
That’s not great performance, but I suppose it isn’t terrible either.
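As a check, the mean-absolute-error figures above can be reproduced with a few lines of Python, taking the midpoint of each NOAA May range as the point forecast (same wiki-sourced numbers quoted earlier in the thread, listed 2009 down to 2002).

# Sketch of the mean-absolute-error calculation described above: use the
# midpoint of each NOAA May range as the point forecast and average the
# absolute differences from the observed counts.
storm_ranges = [(9, 14), (12, 16), (13, 17), (13, 16), (12, 15), (12, 15), (11, 15), (9, 13)]
storm_actual = [9, 16, 15, 10, 28, 15, 16, 12]
hurr_ranges = [(4, 7), (6, 9), (7, 10), (8, 10), (7, 9), (6, 8), (6, 9), (6, 8)]
hurr_actual = [3, 8, 6, 5, 15, 9, 7, 4]

def mae(ranges, actual):
    """Mean absolute error of the range midpoints against the observed counts."""
    errors = [abs((lo + hi) / 2 - obs) for (lo, hi), obs in zip(ranges, actual)]
    return sum(errors) / len(errors)

print("Named storms MAE:", mae(storm_ranges, storm_actual))  # 3.625
print("Hurricanes   MAE:", mae(hurr_ranges, hurr_actual))    # 2.75

Both values match the figures quoted above.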

Gail Combs
May 18, 2010 5:13 pm

Dan says:
May 18, 2010 at 4:10 pm
I’m a hurricane researcher, and while I agree that hurricane forecasts are not great, this monkey stunt is a little ridiculous… Moreover, to use this stunt to mock climate science is inapproriate.”
___________________________________________________________________________
I am a chemist and what Climate Séance-sts have done to the scientific method, peer review and the economies of Europe and soon the USA makes me think longingly of Madame Guillotine. If the Iceland volcanoes blow perhaps history will repeat itself. Be glad it is only a bit of fun you have to put up with. We have had to put up with the vitriol from the greens for years.

Mike Davis
May 18, 2010 5:16 pm

Wren:
You need to discard the predictions that were wrong. Only 2 of the 8 they made were within range, which is 25%. Your smoothing approach reminds me of the one used by the Modelers, and it does not provide usable information because it makes performance appear better than it actually was. Lessons are learned by finding and correcting mistakes, not by smoothing them over as if they did not happen!

Gary Hladik
May 18, 2010 5:27 pm

Hurricane prediction: so easy a cave– er, chimpanzee can do it!

rbateman
May 18, 2010 5:44 pm

Not such a bad idea at all. Hey, if Phil can do it, why not Hansimian?
Yeah, I know, the canaries in the mines didn’t work out so well. By the time the canary was dead, the crew was passed out.
Can we get grant funding for finding new animals to help us predict things like, oh, Sunspots, Earthquakes, Wars, Market spins, etc.?

kcom
May 18, 2010 5:46 pm

I’m a hurricane researcher, and while I agree that hurricane forecasts are not great, this monkey stunt is a little ridiculous.
And they acknowledge that. They are basically addressing, in this humorous (for some) way, your next point.
The hurricane forecasts are effectively just applications to the upcoming hurricane season of historical statistical correlations between a few pre-season indicators, most notably ENSO indices, and historical hurricane activity. The correlations explain only limited amounts of the variance, but over many years they should predict with decent accuracy active vs. inactive seasons.
They are pointing out that climate science is not a well-developed science. Even with many years of hurricane observations, you say the best we can do is use a few simple variables to explain some of the variance and get kind of close to the right answer…over time. Not exactly awe-inspiring science, right?
And yet, while there are no even remotely comparable historical statistical correlations on global warming or climate change, and the variables involved are tremendously complicated (crossing the boundaries of geology, biology, solar physics, etc., etc.), and there is no track record of successful (or even unsuccessful) predictions to lay odds against, still AGW proponents want to claim a near certainty about the state of the climate in 50 or 100 years, including the temperature of the air, the height of the oceans, the state of the ice caps, the disappearance of species, etc.
That’s what this is about. Just because we can make predictions doesn’t mean they are necessarily worth much more than the paper they’re printed on. It might be the best we can do, but not very good is still not very good.
However, for any given year of course a monkey could do better
Of course, it’s not a monkey. But you knew that (I hope).
Moreover, to use this stunt to mock climate science is inapproriate.
As they explained, they are not mocking climate science. (As if a science could be mocked anyway.) But I would agree they are mocking (some) very thin-skinned climate scientists. And that’s a huge and important difference. Any climate scientist with the hubris to believe he knows more than he knows and is certain of more than can be certain deserves a little mockery. They’re not all like that. Only the ones guilty of taking themselves too seriously need take offense. The rest recognize that climate science is a nascent science that has a long, long way to go before it can adequately explain all of the influences and interactions that determine the climate. Even Dr. Hansimian understands that.

ZT
May 18, 2010 7:03 pm

Isn’t torturing/training the chimp into ‘acting’ a little cruel?

Rhoda R
May 18, 2010 7:30 pm

As a general statement: Why shouldn’t science be mocked? What is there about science that should make it even more untouchable than religion?
Back on topic, I live in Florida and have suffered through three hurricanes. I don’t get uptight about predictions — remember Hurricane Andrew? It was the only landfalling hurricane that year, which was a slow hurricane year. Hurricane predictions mean diddlysquat except as an excuse to raise insurance rates.

Pamela Gray
May 18, 2010 8:05 pm

caveman!?!?!!? Spit pop out my nose all over the puter screen.

Bill H
May 18, 2010 8:38 pm

Henry chance says:
May 18, 2010 at 3:31 pm
Hadley was wrong 9 out of the last 10 years. They predicted hotter than actual.
Now with the new Super dooper computer they can calculate faster and be wrong sooner.
Should we reject the computer models devised by scientists that are wrong almost all the time?
………………………………………………
Henry,
When you have a 90% failure rate, I would say it’s time for a new chimp….IMHO

Wren
May 18, 2010 9:33 pm

timetochooseagain says:
May 18, 2010 at 4:59 pm
Wren, that’s a non standard method of comparing. All you’ve shown is that they don’t systematically get their predictions wrong in one direction. The Mean Absolute Error is better, although in that case I have to take the mean of the range given by NOAA, which is why I just asked, how often does reality fall in or out of their range.
Here is what I get for mean absolute error:
Storms: 3.625
Hurricanes: 2.75
That’s not great performance, but I suppose it isn’t terrible either.
======
Actually, the hurricane prediction errors do tend to be in one direction. If I counted correctly, in 6 of the 8 seasons the number of hurricanes was under-predicted. I didn’t look at the performance for storms.
My method of summation lets over-predictions and under-predictions offset, which isn’t a good way to evaluate the hurricane forecasts, but I thought it might elicit discussion.
I would agree the hurricane predictions are not a great performance. Are they a good performance? Well, good compared to what? By my count they beat a no-change extrapolation in 6 of 7 seasons, with a tie in 1 season.
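One way to make Wren’s two checks concrete is sketched below: classify each season as an over- or under-prediction relative to the forecast range, and compare the range midpoint against a “no-change” persistence baseline (last season’s count). The helper names and sample numbers are illustrative assumptions; the exact tallies depend on which forecast figures and which baseline definition one uses.

# Illustrative helpers for the two checks Wren describes. The sample
# numbers in the usage lines are placeholders, not the NOAA data.
def direction(low, high, actual):
    """'under' if the season beat the whole forecast range, 'over' if it fell short, else 'in range'."""
    if actual > high:
        return "under"   # the forecast under-predicted activity
    if actual < low:
        return "over"    # the forecast over-predicted activity
    return "in range"

def beats_no_change(low, high, actual, last_year_actual):
    """True if the range midpoint is closer to the outcome than last year's count (persistence)."""
    midpoint = (low + high) / 2
    return abs(midpoint - actual) < abs(last_year_actual - actual)

print(direction(6, 8, 9))           # 'under'
print(beats_no_change(6, 8, 9, 4))  # True: midpoint 7 is closer to 9 than 4 is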

Wren
May 18, 2010 9:36 pm

Mike Davis says:
May 18, 2010 at 5:16 pm
Wren:
You need to discard the predictions that were wrong. 2 out of 8 they made were within range for 25%. Your smoothing approach reminds me of that used by the Modelers and it does not provide usable information because it appears better than actual. Lessons are learned by finding and correcting mistakes not by smoothing them over as if they did not happen!
====
Read my reply to timetochooseagain.
