Update: Dr. Roger Pielke Jr. says via email “the fake photo is perfectly appropriate” and adds this update to his report on Grinsted from last year:
Today Grinsted et al. have another paper out in PNAS in which they follow up the one discussed below. They make the fantabulous prediction of a Katrina every other year. They say in the new paper:
[W]e have previously demonstrated that the most extreme surge index events can predominantly be attributed to large landfalling hurricanes, and that they are linked to hurricane damage (20). We therefore interpret the surge index as primarily a measure of hurricane surge threat, although we note that other types of extreme weather also generate surges such as hybrid storms and severe winter storms. . .
As I showed in this post, which Grinsted commented on, the surge record does not accurately reflect hurricane incidence or damage. Another poor showing for PNAS in climate science.
– Anthony
Guest Post by Willis Eschenbach
Anthony has commented on the recent paper by Grinsted et al. in his post called “Model predicts more storm surge, but they use what appears to be a fake photo in the press release”. The original study Abstract is here, but the paper has not yet been published. Fortunately, the supplementary material with their summary data is online here. This is the relevant quote from their Abstract (emphasis mine).
We find that warm years in general were more active in all cyclone size ranges than cold years. The largest cyclones are most affected by warmer conditions and we detect a statistically significant trend in the frequency of large surge events (roughly corresponding to tropical storm size) since 1923. In particular, we estimate that Katrina-magnitude events have been twice as frequent in warm years compared with cold years (P < 0.02).
Their claim from the abstract is that historically, warmer years have larger storm surges from cyclones … which seemed doubtful to me. So I got their “Surge Index” data from their Supplementary Information, and took a look. Figure 1 shows the results. I have plotted the size of the surge against the temperature anomaly for the month in which the surge occurred.
Figure 1. Surges plotted against the HadCRUT3 temperature anomaly for that month. PHOTO: Wolf Rock Lighthouse
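For anyone who wants to try the same comparison at home, here is a minimal sketch of the kind of straight-line fit behind Figures 1 and 2. The numbers below are made-up stand-ins; the real inputs would be the surge index from the paper's Supplementary Information and the HadCRUT3 monthly anomalies.

```python
import numpy as np

def fit_trend(temps, surges):
    """Least-squares slope and intercept of surge index vs temperature anomaly."""
    slope, intercept = np.polyfit(temps, surges, 1)
    return slope, intercept

# Illustrative stand-in data with NO built-in temperature dependence,
# just to show the mechanics of the fit.
rng = np.random.default_rng(0)
temps = rng.uniform(-0.6, 0.6, 500)      # monthly anomaly, deg C (hypothetical)
surges = 10 + rng.exponential(8, 500)    # surge index values (hypothetical)

slope, _ = fit_trend(temps, surges)
print(round(slope, 2))                   # near zero when no relationship exists
```

With data like these, the fitted slope hovers around zero — which is exactly what "no trend" looks like in Figures 1 and 2.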
Well … that sure doesn’t show what they claimed. There’s absolutely no trend in that at all. In particular, “Katrina sized events” (storm surge >= 113) are more common and larger in the colder months, not the warmer months. So having failed there, let me try something else …
They talk about warm and cold years, not warm and cold months. I’ll give that one a try. Figure 2 shows the previous Surge Index results compared to the temperature for that year, rather than for the month … or it will as soon as I go calculate, create, and shoot Figure 2 … OK, here it is.
Figure 2. Surges plotted against the HadCRUT3 temperature anomaly for that year.
That didn’t help in the slightest. Again, no trend in storm surge index with respect to temperature. And again, “Katrina sized” events with a storm surge 113 or greater are more common in the colder years.
So I fear that I can’t replicate their results. They may be using some very sophisticated analysis … but in my experience, if a trend were actually present, it would show up in one of the two charts above.
What am I missing?
Regards to everyone,
w.
DATA: Spreadsheet with the values is here.
[UPDATE] A reader points out that the paper is now available here.
Hey Willis, do a plot of the difference between the Arctic temperature anomaly and the equatorial temperature anomaly against the storm surge. See if anything interesting happens then.
Willis,
What you are missing is that there is no additional funding if there is no CAGW-related signal.
The Weather Channel is already trumpeting this study. Just saw them using it tonight.
Willis, that’s not climate science – where’s the hockey stick? 😉
Paper available via this link.
http://www.glaciology.net/Home/PDFs/Announcements/ahomogenousrecordofatlantichurricanesurgethreatsince1923
A strong surge in overconfidence by Alarmistas about the supposed meltdown in the Arctic.
Arctic ice recovery continues. Yet again the total ice extent is thundering towards normal. The extent is normal over much of the Arctic; Bering Sea well above normal:
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.recent.arctic.png
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/recent365.anom.region.2.html
There is a trend…..(statistically completely irrelevant)
-0.29*x (monthly) and -0.08*x (yearly) which means the warmer it is, the lower the surge index is!
How can they claim the opposite??
Yesterday, I could not arrive at any conclusions.
You are not missing anything, Willis. They are. Looks to me like they don’t even make a show of checking actual facts – they just make it up.
[snip . . OT . . mod]
You are not using Mannian mathematics, or adjusting the data using a Marcott time series. Once you do that, everything will become clear.
As Robert Wykoff suggests, Dr. Lindzen (MIT climatologist; a real one…) postulates that it’s the magnitude of the latitudinal temperature differential that tends to create big-storm years.
Accordingly, in a warming Earth, the latitudinal temperature differential would tend to be less which would lessen storm severity.
@Willis,
You are missing the grant money that told you to find that storms were higher in hot years…
😉
So here, let me help you. First, that one spot way up high in a cold year: toss it as a suspect data point / data anomaly. Now you have 3 and 3 above the 100 line. As that’s equal, we need to ‘fix it’. Set your ‘cut off’ for ‘warm’ at 0.2+; everything below that is ‘cold’. Seeing that there are a whole lot more data points total on the cold side, we now have our method. Make it a “percentage of total storms”. Those three high values in the cold times are then a smaller percentage of all storms. As there are few storms during warm times, those three become a larger percentage. Presto! More major storms in hot times! Easy peasy.
Can I have my grant now?
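For the record, the base-rate sleight of hand being parodied here is easy to put in numbers. All counts below are hypothetical, just to show how equal counts of big surges turn into a "doubling" once one side has far more storms overall:

```python
# Hypothetical counts: 3 big surges on each side of the warm/cold cut,
# but many more storms total in the cold years.
cold_big, cold_total = 3, 300
warm_big, warm_total = 3, 100

cold_pct = 100 * cold_big / cold_total   # big surges as % of cold-year storms
warm_pct = 100 * warm_big / warm_total   # big surges as % of warm-year storms

print(cold_pct, warm_pct)   # prints 1.0 3.0
```

Same raw counts, yet expressed as percentages the warm years look three times worse — the trick is entirely in the denominator.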
I still can’t come up with any conclusions.
Sample intensity.
Sample frequency.
Perhaps the solution is this: Grinsted et al. considered temperature anomalies of -0.4 to +0.4 degrees C to be ‘normal’ and only considered data points outside those parameters. There are no data points below -0.4 degrees C and a few above +0.4 degrees C.
I guess that is a statistically significant trend and makes sense if you are a climate scientist.
Willis
I have now looked at tens of thousands of contemporary and later weather records dating back to 1000 AD. Some of these were contained in my article ‘the long slow thaw’ which extended CET to 1538 from its instrumental start in 1660.
http://judithcurry.com/2011/12/01/the-long-slow-thaw/
I am currently concentrating on researching the period 1250 to 1550 in order to try to pinpoint the transition from the MWP to the LIA.
It is quite obvious from reading the weather observations that the majority of extreme weather events – including storms – take place during the cold, not the warm, periods. This is hardly surprising, as there exists a greater potential energy differential between cold winters and hot summers – which were common during the LIA – than there is in the more benign and equable climate we enjoy today.
tonyb
If they are lying on our tax dollars, then haul the responsible government agency in front of Congress and demand they either back up their claim or forfeit any more funding for 10 years.
Good “back of the envelope” analysis, Willis. The correlation between temporary SST “anomalies” and tropical storm intensity is well established; alarmists want to read an anthropogenic signal into the picture — that is where it gets difficult.
I find this 2008 article from Thomas Knutson of NOAA’s Geophysical Fluid Dynamics Lab (with updates from January 2013), while paying the required lip service to the IPCC projections of a stormier world over the next century under “greenhouse gas induced warming” (see final paragraph), to be a good summary (and far removed from the fear-mongering of Kerry Emanuel and others).
http://www.gfdl.noaa.gov/global-warming-and-hurricanes
The following quote is the “money” one: “…there is a small nominally positive upward trend in tropical storm occurrence from 1878-2006. But statistical tests reveal that this trend is so small, relative to the variability in the series, that it is not significantly distinguishable from zero.”
Alarmists want so badly to see evidence of continuously worsening weather, they find it hard to contain their Schadenfreude when a big storm like Katrina or Sandy hits. Thereafter, the tabloidization runs amok and the exploitation of damage and destruction has a field day — damn the science. The general public believes all the fear-mongering, while sober analyses are buried, empowering politicians and administrators to incorporate yet more draconian measures to combat “human-caused” CO2. Never mind that this is ONLY being done in the Western world.
Kurt in Switzerland
I’m still trying to arrive at some kind of conclusion about the future of warming. 🙂
It may be that the frequency and strength of hurricanes at least partially relates to the temperature differential between the tropics and the higher latitudes. If the higher latitudes warm faster than the tropics during global warming (which is what is occurring), then the number and frequency of hurricanes might actually decrease with global warming, since the temperature differential has declined.
The same might also hold for tornadoes, which are formed when very cold air mixes with very warm air (best developed in the southern USA); if the cold air is less cold due to global warming, and the overall temperature differential smaller due to differential global warming, then tornadoes might become less frequent and less severe, the exact opposite of what the alarmists predict.
You can see this sort of principle in the ocean called the ‘Pacific’-it was named the Pacific (i.e. peaceful) by Magellan because it struck him as being less violent and prone to storms than the Atlantic, despite the fact that it is much bigger. This may be due to the fact there is less land about the Pacific basin, and therefore the temperature across a greater area is more evenly spread, which might mean less storminess overall.
Also, 1998 was noted in the southwestern Pacific anyway as being unusually calm in terms of swell and general storminess (I know because I am a surfer, and everybody noted then that there was virtually no swell in Eastern Australia that summer), despite it being a very warm summer. Higher temperatures in the Pacific had a pacifying effect on the southwestern end anyway, possibly because temperature differentials were generally lower.
Yes, E M Smith has a suitable methodology. I myself would have discarded the 7 high points centered around -0.2, because they are obviously spurious. Now a simple trend line will show a spread from below SI 50 to about SI 100. That gives you the claimed doubling.
I would be happy to ‘peer review’ E M Smith’s paper if he will ‘peer review’ mine. We can then have two quite distinct reviewed mathematical analyses ready for ‘publication’ via the press and inclusion in AR5.
Ha! Deniers! You’re in the wrong business! Here are E M Smith and I saving the world, and being paid handsomely for it, while you are still looking for a real signal… /sarc
Well done Willis. When the sophisticated modelling is contradicted by simple graphs or pivot tables good science should try to reconcile those anomalies. Instead we have claims that the scientists know what they are doing, but the truth is beyond the understanding of lesser mortals.
An example I found was the notorious LOG12 paper.
http://manicbeancounter.com/2012/10/04/the-role-of-pivot-tables-in-understanding-lewandowsky-oberauer-gignac-2012/
Given that Katrina was a Cat 3, middle-of-the-road average storm, then these would be the most common. It is the Cat 5s that would be a possible indicator.
So this paper is a ”storm in a teacup” load of alarmist rhetoric.
Ignore, file under spam.
I see no trend in the graphs, and I’m a Six Sigma green belt process engineer. I look at graphs like that every day…
What should I make of the following for the NH, bearing in mind that storms like temperature differentials?
you need to plot DAILY temperature anomaly- DUH
First link is to the home page rather than the specific article indicated.
[Thanks, fixed. -w.]
I wonder how they analyzed the data. Since this is not normally distributed data (it’s bounded on the low side by zero and therefore is probably Log-Normal) it would probably be best transformed by taking the LOG of the data and then performing trend analysis or ANOVA, t-test, etc. on the transformed data set. However unless they discard a few of the higher points around -0.2C I can’t see how any standard statistical testing, using any transform, would yield a result that warmer years yield higher storm indexes. I smell Mannian statistics at play.
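A rough sketch of the log-transform comparison suggested here, using made-up log-normal "surge" samples rather than the actual data (the group sizes and distribution parameters are assumptions for illustration only):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(1)
# Hypothetical log-normal surge indexes for "warm" and "cold" years,
# drawn from the SAME distribution, i.e. no real difference.
warm = rng.lognormal(mean=2.0, sigma=0.8, size=40)
cold = rng.lognormal(mean=2.0, sigma=0.8, size=60)

# Compare on the log scale, where log-normal data are roughly normal
t = welch_t(np.log(warm), np.log(cold))
print(round(t, 2))   # small |t|: no significant warm/cold difference
```

When the two groups really come from the same distribution, the t statistic stays small; you would need |t| of roughly 2 or more before claiming warm years differ from cold ones.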
I would think that storm surge would be related to:
1- The position of the moon and tide at the time of landfall
2- The location of landfall and angle that the storm reaches it
3- Strength of the storm in terms of wind speed
I think it’s a bit too early to criticize a paper if you don’t even know what methods of statistical analysis they used and, for example, what criteria were used to determine what is a ‘warm year’. Assuming that ‘warm year’ is determined by global temperature anomaly is a bit too narrow.
johnmarshall says:
March 19, 2013 at 3:40 am
“Given that Katrina was a Cat 3, middle-of-the-road average storm, then these would be the most common. It is the Cat 5s that would be a possible indicator.”
Katrina was a Cat 3 at landfall however it was a very powerful Cat 5 shortly before that while it was in the middle of the Gulf of Mexico. The Katrina storm surge was that of a Cat 5 as the surge itself did not diminish even though the winds did.
Question: Was topography of the coastline considered?
@James Cross
I would think that storm surge would be related to:
1- The position of the moon and tide at the time of landfall
2- The location of landfall and angle that the storm reaches it
3- Strength of the storm in terms of wind speed
STOP PRESS!
Latest scientific data shows that human generated CO2 is influencing the position of the Moon…..
Willis writes “Well … that sure doesn’t show what they claimed. There’s absolutely no trend in that at all. ”
Have you tried re-dating some of the storms?
@E.M.Smith – Your suggested methodology is brilliant, but sadly believable.
Regarding the plots, is it possible they are speaking of SST only?
“Here we construct an independent record of Atlantic tropical cyclone activity on the basis of storm surge statistics from tide gauges. We demonstrate that the major events in our surge index record can be attributed to landfalling tropical cyclones; these events also correspond with the most economically damaging Atlantic cyclones.”
Tide gauge statistics for storm surges is useless without integrating tidal data. For example, Incrediblysuperbodaciousmegastorm Sandy hit at a full-moon high tide, adding several feet to the total storm surge level. Had it simply hit at low tide, we may not even remember it now.
Economic damage is useless for measuring storms.
Kasuha says:
March 19, 2013 at 5:11 am
I think it’s a bit too early to criticize a paper if you don’t even know what methods of statistical analysis did they use and for example what criteria were used to determine what is a ‘warm year’
I think what Willis has done is more than fair since the paper hasn’t been published yet but the proponents of CAGW by CO2 are already touting it.
One probably should take the position that other than folks reviewing the paper, none of us should be discussing it. That does not seem to be the way it works for the Alarmist team. They make the most out of unsubstantiated information, yelling it from the rooftops and then ignore the corrections that are hidden on page 2 of the second section of the newspaper.
I suspect that will be what we have here. Correctly analyzed the data will not show a statistically significant increase in larger storm surges during warmer years. Of course, the “lie is already around the world”, as they say.
uuuuugggghhhh! This is science? Any evidence against the warming mantra, and out comes a paper to refute it. Cloud feedback negative? Not a problem, we will put pressure on the journal editor and then have a “debunking” paper out and published in days. Extreme weather scare not coming to pass? Have another paper. IPCC projections not living up to the scary hype? Filter out some natural variability, re-establish a new baseline and presto, paper! Meanwhile the warmists are still talking about 9 meter sea level rises and 7-11 degree global temp increases… Hockey stick broken? Call everyone that disagrees with you “industry funded deniers” and whine like a 10-year-old girl with a broken Barbie doll… OK, I feel better…
TTTM, you owe me a new monitor!
James Cross, are you one of the people who think that the moon influences things, why that was disproved [fill in the blank] years ago. /sarc
Why attempt in all good faith to find a rationale for such prevarication, when long experience with AGW Catastrophists shows there never is any objective basis for Green Gang assertions? A “Mannian proposition” is like a Dutch Treat or Hobson’s Choice, meaning none at all.
Rhoda R says:
March 19, 2013 at 12:20 am
“The Weather Channel is already trumpeting this study. Just saw them using it tonight.”
Rhoda…”The Weather Channel” is an info-tainment company now. With their recent idiotic land storm naming, I really can’t see why anyone takes them seriously anymore (I sure don’t). Do your brain a favor, and get REAL weather information from reliable sources (e.g. Accuweather.com).
Another problem with using tide gauge storm surge as a measure is that storm surge is also a function of topography. Comparing the storm surge of Charley to Hugo fails in that Hugo hit the open coast of South Carolina, whereas Charley drove into the funnel of Charlotte Harbor.
BWTM: storm surge is also a function of the speed of the storm. A fast moving storm produces less surge than a slow moving storm.
E.M.Smith says:
March 19, 2013 at 2:04 am
—–
Well, this is a good start, but I don’t see how you can have any confidence in your results or how you can claim to merit a grant, relying on all of these .. these .. observations … There’s a deplorable lack of sophisticated, state of the art computer modelling in your methodology.
What you really need is to find a dozen or so models that individually show no particular skill, and then synthesize them into a tunable simulation concerto masterpiece that will allow you to project what’s really going on. Breaking simulations out into scenarios wouldn’t hurt either. This is the only way you’re ever going to get your results on solid grant footing IMHO. [/sarc]
Mark Bofill says:
March 19, 2013 at 6:49 am
—-
(forgot the sarc tag, most humble apologies)
[Fixed. -w.]
They said by year and you plotted by month, according to your prose. Is that wrong, or did you plot the thing?
Why are we using Katrina as an example storm? Most of the damage that was associated with the storm was caused by a levee failure. It wasn’t overtopped; it failed due to improper construction and maintenance.
Willis, you forgot to add +0.2C as a correction to all of the temperature anomaly values resulting from the diurnal-related hysteresis effects on the recorded instrumental temperature readings. In addition, you need to smooth the colder data a bit by averaging the points together, since the temperature resolution during cold times is not as good. Any person with half a scientific brain knows if the data doesn’t fit the model, it’s the data that needs adjusting.
Peter Miller says:
March 19, 2013 at 1:41 am
You are not using Mannian mathematics, or adjusting the data using a Marcott time series. Once you do that, everything will become clear
————————-
I’m sure if you account for temperature adjustments from Jim Hansen you will correct about 50% of your error.
cn
Kasuha says:
March 19, 2013 at 5:11 am
I think it’s a bit too early to criticize a paper if you don’t even know what methods of statistical analysis did they use
===============
Any result that is dependent on the choice of statistics is not a reliable result. It is likely an artifact of the methodology. One of the classic statistical frauds is “method shopping”: trying different statistical methods until you find the one that supports the conclusion you are trying to prove. This sort of practice is outright scientific fraud, but it is very hard to prove.
For example: if you try 100 different ways to test whether storm surge is related to temperature, and 5 methods say it is related while 95 methods say it is not, then you can be pretty confident that it is not related. However, if you then published the 5 results that showed it was related, neglected to mention the 95 methods that showed it was not, and used this to claim there was a relationship, you have committed scientific fraud. But it is very hard to prove, because you didn’t tell anyone about the other 95 methods.
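The 100-tests scenario is easy to simulate. With pure noise on both axes and a nominal 5% significance threshold, false positives turn up by chance alone (the sample sizes and threshold below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 50, 100
# |r| beyond roughly 2/sqrt(n) is "significant" at about the 5% level
crit = 2.0 / np.sqrt(n)

hits = 0
for _ in range(trials):
    x = rng.normal(size=n)           # "temperature": pure noise
    y = rng.normal(size=n)           # "surge index": unrelated noise
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > crit:
        hits += 1

print(hits)   # roughly 5 of 100 unrelated tests look "significant"
```

Publish only the handful of hits, stay quiet about the rest, and you have a headline result built entirely from noise.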
Alarmist climate science has been shown repeatedly to be a science of method shopping. Different methodologies are tried until one is found that delivers the desired result. This is then published along with an alarming headline. However, when the data is later tested with different methodologies, it is found that the results are spurious. This happened with the original hockey stick and with the southern hemisphere hockey stick. It is happening at CA right now with the most recent hockey stick. It should not surprise anyone if this current paper also fails the test.
Thus the relevance of the climategate emails. They showed method shopping to “hide the decline” and disappear the MWP for example. The science was not conducted to discover what was happening. It was aimed at confirming a foregone conclusion. A conclusion that is not supported by an objective evaluation of the data.
We see this sort of thing routinely in medicine, for example. A drug company tests a new drug. They publish the results that show how wonderful the drug is and run up their stock price. The executives cash in their options and retire as zillionaires. Years later, as people begin to die in large numbers, it is discovered that the drug company hid the adverse results. Lawsuits follow, but it is all but impossible to prove the executives were in on the scam. They are protected by the zillions they scammed out of the system at the expense of human lives.
Tom in Florida says:
March 19, 2013 at 5:12 am
Question: Was topography of the coastline considered?
===========
Storm surge is most prominent on the east coast of continents due to the rotation of the earth.
TimTheToolMan says:
March 19, 2013 at 5:34 am
Willis writes “Well … that sure doesn’t show what they claimed. There’s absolutely no trend in that at all. ”
Have you tried re-dating some of the storms?
————-
Willis, did you try inverting some of the data?
cn
If I were an emergency planner, I’d be more worried when temperature anomalies diverged from 0.20. That’s the takeaway message.
Clearly you’re mistaken, or a shill of Big Oil, or something, cuz Al Gorezeera said there’d be no Arctic ice left by 2013.
(do I really need a /sarc?)
Willis, try infilling “missing” data. You’re neglecting the storm surge of fake data.
Russ in Houston says:
March 19, 2013 at 7:02 am
“Why are we using Katrina as an example storm? Most of the damage that was associated with the storm was caused by a levee failure. It wasn’t overtopped; it failed due to improper construction and maintenance.”
The damage to New Orleans was as you say. However, there was major damage in Mississippi and Alabama due to a Cat 5 like storm surge.
Kasuha says:
March 19, 2013 at 5:11 am
Kasuha, it may well be that a “warm year” determined by global temperature anomaly is too narrow … but that’s what they are doing. They list it out in the document and the SI. I just couldn’t replicate it …
w.
TimTheToolMan says:
March 19, 2013 at 5:34 am
Dang, the Marcott-Shakun method, why didn’t I think of that?
w.
brad says:
March 19, 2013 at 6:59 am
I plotted both by month and by year … take another look.
w.
They could have seen the data as a “bathtub” curve, in which case higher temperature would ultimately trend upward in surge index. I do not advocate that view, as more data are needed and more drivers should be considered. I also see that if it is indeed a bathtub curve, the global cooling scenario is just as bad, if not worse…
I have noted something.
As their “climate science” fails and they can no longer produce actual science (if they ever did) that backs their position, what they have done is switch the purpose of “science papers”.
“Science papers” are now just an arm of the propaganda efforts. The claim that their propaganda is based on a “science paper” makes their silliness seem realistic. So expect a huge upsurge in junk science papers that will be given quick press and wide public distribution — then quickly forgotten to be replaced by yet another junk science paper and then another and another — all shrilly touted and as quickly forgotten.
Debunking these papers needs to be done but as propagandists they are only interested in the immediate headlines. Lots and lots of shouting hides the calmer voice speaking truth. God, but we are dealing with people who are real garbage.
Eugene WR Gallun
Their relationship was probably with the AMO. Just by eyeballing it, when comparing the AMO vs. Chris Landsea’s Power Dissipation Index (PDI), I can see a relationship. The PDI had lower values when the AMO was negative during the ’70s and ’80s.
http://rogerpielkejr.blogspot.ca/2012/11/us-hurricane-intensity-1900-2012.html
http://en.wikipedia.org/wiki/File:Amo_timeseries_1856-present.svg
Of course, Tamino maintains that the AMO is a response to temperatures, so why you aren’t picking up this relationship using HadCRUT is a mystery.
Kudos to Kurt, DavidL and Ferd for their comments. The paper reads as an attempt to find evidence to support a pre-existing agenda. That’s worse than useless. Even the definition of “surge” is likely useless, for several reasons. The “parameterization” of surge is suspect due to conflating unknowns. You can’t quantify something without understanding it.
E.g., coastlines are too variable to assume a “surge” quantity as a linear function. People have naturally settled along coasts in “surge-prone” areas, such as harbors, and avoided cliffs.
The paper’s measurements of “surge” are post hoc. Good science would make the hypothesis based on a clear definition, then establish a method to observe and quantify the “surge” over time.
Delightful as always. Keep up the good work.
The Marcott-Shakun Method. I’m suddenly reminded of the Schartz-Metterklume Method. There are similarities…
I wonder if homogenising the surge data would work or was that pasteurising it, I forget…..
[snip. Invalid email address. — mod.]
This is the reason for all of the push to get another hockey stick into AR5. The big upswing graph is the only thing a lot of undeveloped countries’ leaders can follow; you have to have an easy-to-see graph for all those who have very little science training.
It gets them the votes from the floor they need. Hockey sticks help to stir up the masses, to generate the “tax redistribution of income” method for skimming some or most of the money as it passes through the UN.