Flawed Claim of New Study: 'Extreme Tornado Outbreaks Have Become More Common'

A new paper shows that the average number of tornadoes per outbreak has grown by more than 40% over the last half century. The likelihood of extreme outbreaks – those with many tornadoes – is also greater.

This paper is flawed from the start, beginning with the raw data itself. Read on to see why. – Anthony

A tornado near Elk Mountain, west of Laramie, Wyoming, on June 15, 2015. The tornado passed over mostly rural areas of the county, lasting over 20 minutes. John Allen/IRI.

From the Earth Institute at Columbia University:

Most death and destruction inflicted by tornadoes in North America occurs during outbreaks—large-scale weather events that can last one to three days and span huge regions. The largest outbreak ever recorded happened in 2011. It spawned 363 tornadoes across the United States and Canada, killing more than 350 people and causing $11 billion in damage.

The 2016 Severe Convection and Climate Workshop (#SevCon16) starts March 9. Visit the Columbia Initiative on Extreme Weather and Climate for more details.

Now, a new study shows that the average number of tornadoes in these outbreaks has risen since 1954, and that the chance of extreme outbreaks—tornado factories like the one in 2011—has also increased.

The study’s authors said they do not know what is driving the changes.

“The science is still open,” said lead author Michael Tippett, a climate and weather researcher at Columbia University’s School of Applied Science and Engineering and Columbia’s Data Science Institute. “It could be global warming, but our usual tools, the observational record and computer models, are not up to the task of answering this question yet.”

Tippett points out that many scientists expect the frequency of atmospheric conditions favorable to tornadoes to increase in a warmer climate—but even today, the right conditions don’t guarantee a tornado will occur. In any case, he said,

“When it comes to tornadoes, almost everything terrible that happens happens in outbreaks.”

The effect that changing the mean and variance of a distribution has on extremes, using temperatures as an example. Source: IPCC.

The results are expected to help insurance and reinsurance companies better understand the risks posed by outbreaks, which can also generate damaging hail and straight-line winds. Over the last 10 years, the industry has covered an average of $12.5 billion in insured losses each year, according to Willis Re, a global reinsurance advisor that helped sponsor the research. The article appears this week in the journal Nature Communications.

Every year, North America sees dozens of tornado outbreaks. Some are small and may give rise to only a few twisters; others, such as the so-called “super outbreaks” of 1974 and 2011, can generate hundreds. In the simplest terms, the intensity of each tornado is ranked on a zero-to-five scale, with other descriptive terms thrown in. The lower gradations cause only light damage, while the top ones, like the twister that tore through Joplin, Missouri, in 2011, can tear the bark off trees, rip houses from their foundations, and turn cars into missiles.

As far as the tornado observational record is concerned, the devil’s in the details.

For this study, the authors calculated the mean number of tornadoes per outbreak for each year as well as the variance, or scatter, around this mean. They found that while the total number of tornadoes rated F/EF1 and higher each year hasn’t increased, the average number per outbreak has, rising from about 10 to about 15 since the 1950s.

The study was coauthored by Joel Cohen, director of the Laboratory of Populations, which is based jointly at Rockefeller University and Columbia’s Earth Institute. Cohen called the results “truly remarkable.”

“The analysis showed that as the mean number of tornadoes per outbreak rose, the variance around that mean rose four times faster. While the mean rose by a factor of 1.5 over the last 60 years, the variance rose by a factor of more than 5, or 1.5 x 1.5 x 1.5 x 1.5. This kind of relationship between variance and mean has a name in statistics: Taylor’s power law of scaling.

“We have seen [Taylor’s power law] in the distribution of stars in a galaxy, in death rates in countries, the population density of Norway, securities trading, oak trees in New York and many other cases,” Cohen says. “But this is the first time anyone has shown that it applies to scaling in tornado statistics.”

The exponent in Taylor’s law—in this case, 4—can be a measure of clustering, Cohen says. If there’s no clustering—if tornadoes occur just randomly—then Taylor’s law has an exponent of 1. If there’s clustering, then it’s greater than 1. “In most ecological applications, the Taylor exponent seldom exceeds 2. To have an exponent of 4 is truly exceptional. It means that when it rains, it really, really, really pours,” says Cohen.
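As an aside for readers who want to see what that calculation looks like in practice, here is a minimal sketch, with made-up numbers rather than the study's data, of how a Taylor exponent is estimated: regress the logarithm of the annual variance on the logarithm of the annual mean, and the slope of the fit is the exponent.

```python
# Minimal sketch of estimating the Taylor exponent b in variance = a * mean^b.
# The (mean, variance) pairs are illustrative placeholders, constructed to lie
# exactly on variance = 0.01 * mean^4; they are NOT Tippett and Cohen's data.
import numpy as np

annual_mean = np.array([10.2, 10.8, 11.5, 12.4, 13.1, 14.0, 15.3])
annual_var = np.array([108.2, 136.0, 174.9, 236.4, 294.5, 384.2, 548.0])

# Taylor's law is linear on log-log axes: log(var) = log(a) + b * log(mean).
b, log_a = np.polyfit(np.log(annual_mean), np.log(annual_var), 1)
print(f"Taylor exponent b = {b:.2f}")  # 4.00 by construction here

# b = 1 corresponds to purely random (Poisson-like) counts;
# b > 1 indicates clustering, and b near 4 is what Cohen calls exceptional.
```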

Extreme outbreaks have become more frequent because of two factors, Tippett said. First, the average number of tornadoes per outbreak has gone up; second, the rapidly increasing variance, or variability, means that numbers well above the average are more common.
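A toy calculation makes that point concrete. Here a negative binomial is used purely as a convenient overdispersed count model, with invented numbers; this is not the paper's method. Raise the mean by 1.5x and the variance by 5x, and the probability of a very large outbreak grows severalfold.

```python
# Toy demonstration: a rising mean plus a faster-rising variance inflates the
# probability of an extreme outbreak. All numbers are invented for illustration.
from scipy.stats import nbinom

def prob_extreme(mean, var, threshold):
    """P(tornado count > threshold) for a negative binomial with this mean/var."""
    p = mean / var                     # scipy's success-probability parameter
    n = mean * p / (1.0 - p)           # scipy's shape parameter
    return nbinom.sf(threshold, n, p)  # survival function: P(N > threshold)

print(prob_extreme(10, 100, 40))  # hypothetical earlier era: mean 10, var 100
print(prob_extreme(15, 500, 40))  # mean x1.5, var x5: tail is several times larger
```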

(a) Number of tornado outbreaks per year. The rate of decline is not statistically significant. (b and c) Annual mean number of tornadoes per outbreak and annual variance of the number of tornadoes per outbreak. Vertical axes are on a logarithmic scale, so the rate of increase in the annual mean is expressed as a percentage per year. (d) The annual mean number of tornadoes per outbreak versus the annual variance of the number of tornadoes per outbreak. Both axes are on a logarithmic scale. The solid line represents Taylor’s power law of fluctuation scaling. The two-digit number next to the plotted symbol gives the calendar year in the second half of the twentieth century or first half of the twenty-first century.

Tippett was concerned that the findings could be artifacts of tornado observational data, which are based on eyewitness accounts and known to have problems with consistency and accuracy. To get around this, he re-ran his calculations after substituting the historical tornado data with environmental proxies for tornado occurrence and number of tornadoes per occurrence. These provide an independent—albeit imperfect—measure of tornado activity. The results were very nearly identical.

As for whether the climate is the cause, Tippett said, “The scientific community has thought a great deal about how the frequency of future weather and climate extremes may change in a warming climate. The simplest change to understand is a shift of the entire distribution, but increases in variability, or variance, are possible as well. With tornadoes, we’re seeing both of those mechanisms at play.”

Insurance and reinsurance companies and the catastrophe-modeling community can use this information.

“This paper helps begin to answer one of the fundamental questions to which I’d like to know the answer,” says Harold Brooks of the U.S. National Oceanic and Atmospheric Administration’s National Severe Storms Laboratory. “If tornadoes are being concentrated into more big days, what effect does that have on their impacts compared to when they were less concentrated?”

“The findings are very relevant to insurance companies that are writing business in multiple states, especially in the Midwest,” says Prasad Gunturi, senior vice president at Willis Re, who leads the company’s catastrophe model research and evaluation activities for North America. “Overall growth in the economy means more buildings and infrastructure are in harm’s way,” said Gunturi. “When you combine this with increased exposure because outbreaks are generating more tornadoes across state lines and the outbreaks could be getting more extreme in general, it means more loss to the economy and to insurance portfolios.”

Insurance companies have contracts with reinsurance companies, and these contracts look similar to the ones people have for home and car insurance, though for much higher amounts.  The new results will help companies ensure that contracts are written at an appropriate level and that the risks posed by outbreaks are better characterized, said Brooks.

“One big question raised by this work, and one we’re working on now, is what in the climate system has been behind this increase in outbreak severity,” said Tippett.

This research was also supported by grants from Columbia’s Research Initiatives for Science and Engineering, the Office of Naval Research, NOAA’s Climate Program Office  and the U.S. National Science Foundation.

The paper:

Tornado outbreak variability follows Taylor’s power law of fluctuation scaling and increases dramatically with severity

Michael K. Tippett, Joel E. Cohen

Nature Communications 7, Article number: 10668 doi:10.1038/ncomms10668

Abstract:

Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from ‘outbreaks’ consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954–2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor’s power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related.



Why this study is fatally flawed (in my opinion):

Ironically, the hint as to why the study is fatally flawed comes with the photo of the Wyoming tornado they supplied in the press release. Note the barren landscape and the location. Now note the news story about it. 50 years ago, or maybe even 30 years ago, that tornado would likely have gone unnoticed and probably unreported not just in the local news, but in the tornado record. Now in today’s insta-news environment, virtually anyone with a cell phone can report a tornado. 30 years ago, the cell phone was just coming out of the lab and into first production.

Also, 30 years ago there wasn’t NEXRAD Doppler radar deployed nationwide, and it sees far more tornadoes than the older network of WSR-57 and WSR-74 weather radars, which could only detect the strongest of these events.

As shown by this study: Doswell, Charles A., III (2007). “Small Sample Size and Data Quality Issues Illustrated Using Tornado Occurrence Data“. Electronic J. Severe Storms Meteor. 2 (5): 1–16.

Abstract

A major challenge in weather research is associated with the size of the data sample from which evidence can be presented in support of some hypothesis. This issue arises often in severe storm research, since severe storms are rare events, at least in any one place. Although large numbers of severe storm events (such as tornado occurrences) have been recorded, some attempts to reduce the impact of data quality problems within the record of tornado occurrences also can reduce the sample size to the point where it is too small to provide convincing evidence for certain types of conclusions. On the other hand, by carefully considering what sort of hypothesis to evaluate, it is possible to find strong enough signals in the data to test conclusions relatively rigorously. Examples from tornado occurrence data are used to illustrate the challenge posed by the interaction between sample size and data quality, and how it can be overcome by being careful to avoid asking more of the data than what they legitimately can provide. A discussion of what is needed to improve data quality is offered.

The total number of tornadoes is a problematic basis for comparing outbreaks from different periods, however, as many more weak tornadoes, but not more strong tornadoes, are reported in the US in recent decades than in previous ones.

Basically, there’s a reporting bias and a decreasing sample size as the data goes further back in time. Even the 1974 Tornado Outbreak, the record holder for decades, likely had more tornadoes than were reported at the time. If Doppler radar had existed then, it would almost certainly have spotted many F0 and F1 class tornadoes (and maybe even some F2 or F3 tornadoes in remote areas that weren’t found by aerial surveys) that were not reported. Since large tornado outbreaks are so few and far between, and because technology for detecting tornadoes has advanced rapidly during the study period, it isn’t surprising at all that they found a trend. But I don’t believe the trend is anything more than an artifact of increased reporting. The State Climatologist of Illinois agrees, and wrote an article about it:


If we look at the number of stronger tornadoes since 1950 in Illinois, we see a lot of year-to-year variability. However, there is no significant trend over time, either up or down (Figure 1). Stronger tornadoes are identified here as F-1 to F-5 events from 1950 to 2006 (using the Fujita Scale), and EF-1 to EF-5 events from 2007 to 2010 (using the Enhanced Fujita Scale). By definition, these stronger tornadoes cause at least a moderate amount of damage. See the Storm Prediction Center for a discussion of the original Fujita and Enhanced Fujita (EF) scales.

Figure 1. Strong tornadoes in Illinois, 1950-2010. The red line represents the linear trend.
Figure 2. Weak tornadoes in Illinois, 1950-2010. The red line is a 3rd order polynomial, representing a smoothed trend line.
Figure 3. All tornadoes in Illinois, 1950-2010. The shaded area represents the most accurate era of tornado records.

What we have seen is a dramatic increase in the number of F-0 (EF-0) tornadoes from 1950 to 2010 (Figure 2). These are the weakest of all tornado events and typically cause little if any damage. These events were overlooked in the early tornado records. The upward trend is the result of better radar systems, better spotter networks, and increased awareness and interest by the public. These factors combined have allowed for a better documentation of the weaker events over time.

If we combine both data sets together, we see the apparent upward trend caused by the increasingly accurate accounting of F-0 (EF-0) tornadoes (Figure 3). As a result, the number of observed tornadoes in Illinois has increased over time, but without an indication of any underlying climate change. In my opinion, the tornado record since 1995 (shaded in yellow) provides the most accurate picture of tornado activity in Illinois. From that part of the record, we see that the average number of tornadoes per year in Illinois is now 63.


From a paper by Matthew Westburg:


“Monitoring and Understanding Trends in Extreme Storms,” published in the Bulletin of the American Meteorological Society, also has concluded that the United States is not experiencing an increase in the severity of tornadoes. Figure 3 from this paper shows: “The occurrence of F1 and stronger tornadoes on the Fujita scale shows no trend since 1954, the first year of near real time data collection, with all of the increase in tornado reports resulting from an increase in the weakest tornadoes, F0.”

Reported tornadoes in NWS database from 1950 to 2011. Blue line is F0 tornadoes; red dots are F1 and stronger tornadoes.

There are multiple reasons to explain why there seems to be an increase in the frequency of tornadoes in Illinois and the whole United States since 1950.

Tornado records have been kept in the United States since 1950. While we are fortunate to have records that date back about 64 years, “the disparity between tornado records of the past and current records contributes a great deal of uncertainty regarding questions about the long-term behavior or patterns of tornado occurrence” (“Historical Records and Trends”). Inconsistent tornado records have made it difficult to identify tornado trends. In the last several decades, scientists have done a better job of making tornado data more consistent and uniform. Over time, this will help scientists identify trends in tornado data. In addition to inconsistent records, changes in reporting systems have had an effect on tornado data and possible trends.

Prior to the 1970s, tornadoes were usually not reported unless they caused substantial damage to property, injuries, or deaths; consequently, tornadoes were under-reported. At that time, scientists were able to define what a tornado was, but they struggled to define the intensity of each tornado, in terms of its size and wind speed, and therefore they struggled to compare one tornado to another. This changed in 1971, when a meteorologist named Ted Fujita “established the Fujita Scale (F-Scale) for rating the wind speeds of tornadoes by examining the damage they cause” (McDaniel). Once the Fujita Scale was established, scientists were more easily able to classify tornadoes based on their wind speed and the damage they caused.

Source: http://spark.parkland.edu/cgi/viewcontent.cgi?filename=0&article=1060&context=nsps&type=additional


In 2011, I went into great detail on the reporting bias problem with this WUWT essay:

Why it seems that severe weather is “getting worse” when the data shows otherwise – a historical perspective

Note that the authors of the study released today only started their dataset in 1950, just before the dawn of television news in the early 1960s. Television alone accounts for a significant increase in tornado reporting. Now, with cell phones and millions of cameras, Doppler radar, storm chasers, The Weather Channel, the Internet, and 24-hour news, many, many more smaller tornadoes that would have gone unnoticed are being reported than ever before, but the frequency of strong tornadoes has not increased:

[Figure: fig31_tornadoes]

When you only analyse data that goes back to the early days of TV news, where the trend of increased reporting begins, you are bound to find an increase in frequency and strength, not just overall, but in outbreaks too. As Doswell notes, authors should be “…careful to avoid asking more of the data than what they legitimately can provide”.

So, what do they do to get around these problems? They add a proxy for actual tornado data! From the paper:

Environmental proxies for tornado occurrence and number of tornadoes per occurrence provide an independent, albeit imperfect, measure of tornado activity for the period 1979–2013 (‘Methods’ section). At a minimum, the environmental proxies provide information about the frequency and severity of environments favourable to tornado occurrence. The correlation between the annual average number of tornadoes per outbreak and the proxy for number of tornadoes per occurrence is 0.56 (Supplementary Fig. 6a). This correlation falls to 0.34, still significant at the 95% level, when the data from 2011 are excluded.
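That sensitivity to a single year is worth dwelling on. A toy illustration (random numbers, not the paper's data) shows how one extreme year sitting far from the rest of the points can carry much of a correlation of this size:

```python
# Toy illustration of outlier leverage: one extreme year can dominate
# a correlation computed over a few dozen ordinary years.
import numpy as np

rng = np.random.default_rng(42)
proxy = rng.normal(10.0, 2.0, 34)                # 34 ordinary years (invented)
counts = 0.2 * proxy + rng.normal(0.0, 1.5, 34)  # weak underlying relationship

proxy_all = np.append(proxy, 25.0)               # add one 2011-like extreme year
counts_all = np.append(counts, 8.0)

print(np.corrcoef(proxy_all, counts_all)[0, 1])  # with the extreme year: higher
print(np.corrcoef(proxy, counts)[0, 1])          # without it: much weaker
```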

I remain unconvinced that this is anything useful.

To their credit, they do note the reporting problem, but don’t clearly say how they deal with it.

However, to date, there is no upward trend in the number of reliably reported US tornadoes per year6. Interpretation of the US tornado report data requires some caution. For instance, the total number of US tornadoes reported each year has increased dramatically over the last half century, but most of that increase is due to more reports of weak tornadoes and is believed to reflect changing reporting practices and other non-meteorological factors rather than increased tornado occurrence7. The variability of reported tornado occurrence has increased over the last few decades with more tornadoes being reported on days when tornadoes are observed8, 9. In addition, greater year-to-year variability in the number of tornadoes reported per year has been associated with consistent changes in the monthly averaged atmospheric environments favourable to tornado occurrence10.

I had to laugh at this line:

…with more tornadoes being reported on days when tornadoes are observed

Basically, what I’m saying is that while the statistical math used by Tippett may very well be correct, the data used is biased, incomplete, and inaccurate, and lends itself to the authors finding what they seek. They don’t state that they take reporting bias into account, and that results in a flawed conclusion. Then there’s this about Taylor’s Power Law:

Taylor’s power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power law relationship.
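In symbols, using the standard textbook form of the law (not notation specific to this paper):

$$\mathrm{Var}(N) = a\,\mu^{b}$$

where $\mu$ is the mean count, $a$ is a prefactor, and $b$ is the Taylor exponent (reported as roughly 4 in this study).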

Tornadoes are not life; they are not self-replicating, they do not give birth, and they don’t cluster, as life does, for the benefits of food or protection in numbers. I don’t see how a law designed for predicting cluster populations of living organisms can be applied to inanimate physical phenomena such as tornadoes. While there is support in the literature for Taylor’s Law, aka “fluctuation scaling”, it seems only useful when you have a cleaner dataset, not one with multiple reporting biases over decades that significantly increase its inhomogeneity. For example, Taylor’s law was successfully used to examine UK crime behavior. But even though we have decades of crime reports in the UK, they don’t vary in intensity. There aren’t “small murders” and “big murders”; there are just murders.

Feynman famously said of science: “The first principle is that you must not fool yourself and you are the easiest person to fool.” I think Tippett et al. have created a magnificent statistical construct with which they have, in fact, fooled themselves.

Addition: some history is instructive. Here are a few major tornado outbreaks in the USA:

In September 1821 there was the New England tornado outbreak

In March 1875 there was the Southeast tornado outbreak

In 1884 there was the Enigma Tornado Outbreak

In 1908 there was the Dixie Tornado Outbreak

In 1932 there was the Deep South tornado outbreak

In 1974 there was the Super Outbreak

In 2011 there was the April 27th Super Tornado Outbreak

In fact, there is quite a list of tornado outbreaks in US history. The majority of tornado outbreaks in US history occurred before there was a good reporting or intensity categorization (Fujita Scale) methodology in place. For Tippett to claim a trend in cluster/outbreak severity ignores all these others, for which there is little data, and focuses only on the period during which reporting and categorization technology and techniques were fast evolving. Thanks to widespread reporting on TV news, the 1974 Super Outbreak became the biggest impetus in the history of the Weather Bureau to improve warning and monitoring technology, and was likely responsible for most of the improvements in data over the next 30 years.


Oldseadog
March 7, 2016 12:10 pm

I think your final paragraph says it all.

Bryan A
Reply to  Oldseadog
March 7, 2016 2:25 pm

With a minimum of 9 years, a maximum of 54 years, and an average of 32 years between major outbreaks, the next one could occur in 2020, or more likely around 2042 or 2043.
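(Checking that arithmetic against the outbreak years listed in the post above, 1821 through 2011:)

```python
# Intervals between the major outbreaks listed in the post.
years = [1821, 1875, 1884, 1908, 1932, 1974, 2011]
gaps = [b - a for a, b in zip(years, years[1:])]
print(gaps)                                         # [54, 9, 24, 24, 42, 37]
print(min(gaps), max(gaps), sum(gaps) / len(gaps))  # 9 54 31.67 (about 32)
```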

RHS
March 7, 2016 12:15 pm

What would the environmental proxy look like for an F0 or F1 tornado? Particularly days to months or decades after it occurred? I’d find it hard to believe there is much effect to be read in the tea leaves from weak tornados even month to month.
Perhaps I should submit for a grant!

Gunga Din
Reply to  RHS
March 7, 2016 2:39 pm

The closest I’ve ever been to tornado touchdown was about 200 yards. That was about 25 years ago. I was at work. (Big brick building) It was night. It was likely an F0. There were two of us on duty. We didn’t hear or see anything aside from about every alarm in the plant going off when we lost power. We did hear later that a funnel was spotted near us. A day or two later people from the NWS came out and, after inspecting the damage, confirmed that it was a tornado.
A few years ago I was on our local NWS site and stumbled on a link to all the tornadoes for our area. It went back beyond that year.
That tornado was not listed.

Reply to  Gunga Din
March 7, 2016 9:03 pm

This reminds me of hurricanes (and tropical storms) getting upgraded or downgraded by post-season analysis, or (as was the case for Andrew, which was upgraded) upgraded or downgraded years later. I wonder if this tornado was determined by later analysis to be a gustnado, where the rotation is confined to a much lower altitude range than is the case with tornadoes. Some gustnadoes cause damage like that of some weaker tornadoes, but Doppler radar archives sometimes distinguish these from each other when the rotation can be discerned as shallow or deep/tall.

Gunga Din
Reply to  Gunga Din
March 8, 2016 5:02 pm

Donald, maybe. All I know is that from the twisted tree limbs that took out half of our power and the non-trace of any of the contractor’s trailers (aside from some scattered plumbing fittings), along with the funnel cloud sighting, the NWS said it was a tornado. I don’t know of any radar images that were recorded and reviewed later.
I do know that the two of us on duty that night were VERY busy for awhile!

blcjr
Editor
March 7, 2016 12:19 pm

“If you torture the data long enough, it will confess, even to crimes it did not commit.”
Basil

charles nelson
March 7, 2016 12:19 pm

There are a lot more photos and moving images of tornados taken on cellphones than there were in say…the 1930s.

Editor
March 7, 2016 12:23 pm

So they are aware of the problem that changes in number and variance may be due solely to reporting, viz:

For instance, the total number of US tornadoes reported each year has increased dramatically over the last half century, but most of that increase is due to more reports of weak tornadoes and is believed to reflect changing reporting practices and other non-meteorological factors rather than increased tornado occurrence7

but they do nothing about it, in a paper that claims that the changes in number and variance are the crucial measurement …
I weep for the death of science.
w.

Greg
Reply to  Willis Eschenbach
March 7, 2016 1:39 pm

That makes it clear misrepresentation. If you do not have consistent sampling, the statistics are pretty meaningless. Worse, misleading.
If you eliminate all the EF0 and EF1 tornadoes that are inconsistently reported over time, the message is resoundingly clear. Tornadoes don’t like “a warming world”.
http://climategrog.files.wordpress.com/2013/05/tornado_compare_ef234.png
https://climategrog.wordpress.com/tornado_compare_ef234/
The biggest outbreak was right at the coldest point before the recent warming. During the late 20th century warming, tornado activity was much less than it was during the previous cooling period. Maybe the 2011 spike is another indication we are moving into another cooling period.
Hurricanes fare better during warming phases but slump during the recent warm but hiatus-ridden doldrums. It appears that there was a similar slump starting around 1935.
Clearly we need to try a bit harder than just linking these kinds of events to whether it is warmer than the 19th century or not.
http://climategrog.files.wordpress.com/2016/01/ace_amo_2015_gauss2.png
https://judithcurry.com/2016/01/11/ace-in-the-hole/

Reply to  Greg
March 7, 2016 9:49 pm

As for hurricanes: Warming favors them, but the tropics are warming less than the world as a whole. And I don’t expect correlation to be good on a scale smaller than at least a couple decades, because hurricane patterns often get into ruts that last several years, sometimes influenced by some rut or another that some part or another of the Pacific gets into for up to about a decade.
As for tornadoes: For the more-damaging easier-to-detect-years-ago minority of them that are F2/EF2 and stronger, a major factor seems to be strength of winds in a wind shear pattern that is powered mainly by horizontal temperature gradient. The Arctic is warming faster than the tropics, which would disfavor this. On the other hand, the warming surface and cooling tropopause makes the atmosphere more favorable to thunderstorms. The actual trend for tornadoes of every class F2/EF2 and stronger has been on a very slight decline since 1950. One thing that I expect, and that I have yet to see being reported, is the tornado season peaking earlier in the year as a result of warming. Towards winter, the temperature gradient between the Arctic and the tropics is stronger, and warming of the surface and cooling of the tropopause is making it easier for thunderstorms to occur earlier in the early spring and late winter. Given the amount of warming that has occurred so far, I think the tornado season may nowadays be peaking about half a week earlier than it did a few decades ago.
One thing about tornadoes during a strong El Nino: A strong El Nino favors more wintertime tornadoes in the southern tier of the US but not a worse tornado season overall. A strong La Nina on-average (without good consistency) favors more tornadoes by favoring a more-northern storm track in the eastern half of the US from winter to mid-spring, which means a larger area of the eastern half of the US becomes likely to have tornadoes during the pre-peak time of tornado season that favors stronger faster-moving tornadoes (March and April).
The current El Nino seems associated with a US storm track that I see having some resemblance to that of the springtime of the 2010-2011 La Nina. What I expect overall this spring is tornadoes being less bad than they were in early 2011 but worse than in 2012-2015, and peaking unusually early in the season – maybe 10-15 days earlier in the year than the long-term average starting anytime from 1950 to 1956.

Reply to  Willis Eschenbach
March 7, 2016 5:48 pm

That’s exactly the selection I copied ready to paste in for my comment…
So firstly I completely agree with you and secondly wonder why someone doesn’t actually do something about the claim of increased reporting of tornadoes being a function of improved measurement and reporting.
I would have thought that if you were taking the time to analyse tornadoes, then you would make the effort to work out when each region’s Doppler radars came online and correlate that with reported tornado locations, giving an answer to the question once and for all.

mebbe
Reply to  Willis Eschenbach
March 7, 2016 8:49 pm

“I weep for the death of science.”
Dry your tears, Willis. Science isn’t dead, nor even sleeping. She just doesn’t hang at the Gala Ball.
You know that.

Reply to  Willis Eschenbach
March 8, 2016 10:16 am

Information with these caveats really is useless to insurance companies; but they could use it to raise rates.

Rob Morrow
March 7, 2016 12:23 pm

Accountants are well aware that old financial data are non-comparable to newer data if the reporting methods have changed. It’s incredible that published “climate scientists” can’t/won’t muster the same basic scrutiny.

Curious George
Reply to  Rob Morrow
March 7, 2016 2:02 pm

That’s because the Distinguished Professor Michael E. Mann does not teach accounting.

Mark from the Midwest
Reply to  Curious George
March 7, 2016 2:18 pm

or accountability

Owen in GA
Reply to  Rob Morrow
March 7, 2016 3:06 pm

Accountants can be sued or imprisoned if they commit this kind of error. If in a financial report one mentioned that the accounting standards changed in the middle of the period under study, but reported the glowing result that change caused as an unbiased and uncorrected truth, the auditors would charge fraud and the bond holders would file suit.

March 7, 2016 12:32 pm

Brooks et al published a similar abuse of statistics in 2014, which I deconstructed here: http://unfrozencavemanmd.blogspot.com/2014/11/torture-of-tornado-data-in-united.html

Adrian Roman
March 7, 2016 12:34 pm

“I don’t see how a law designed for predicting cluster populations of living organisms can be applied to inanimate physical phenomena such as tornadoes”
But those that are reporting them are living, aren’t they? /sarc

Reply to  Adrian Roman
March 7, 2016 3:07 pm

AR, a much succinter and clearer summary of my longer, more complex, but probably incomprehensible to most equivalent comment downthread.

March 7, 2016 12:35 pm

“It could be global warming, but our usual tools, the observational record and computer models, are not up to the task of answering this question yet.” Greenspeak for “We are willing to support the meme. Send funding and we’ll jump on the bandwagon.”

Charlie
Reply to  A.D. Everard
March 7, 2016 2:21 pm

Or perhaps ‘we have yet to invent a mechanism whereby warming affects production of weak tornadoes but leaves severe tornadoes unaffected’.

Editor
March 7, 2016 12:42 pm

And once again we are forced to ask how the hell this got through peer review?

TG
Reply to  Paul Homewood
March 7, 2016 1:14 pm

With a wink and an inbred nod!

Reply to  Paul Homewood
March 7, 2016 3:17 pm

PH a good question. The answer, as you well know, is pal review from amongst those on the Warmunist gravy train.

Curious George
Reply to  Paul Homewood
March 7, 2016 5:47 pm

Phil Jones: “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!”
Done. 100% success.

3¢worth
March 7, 2016 12:49 pm

The so-called super outbreak of 1974? Maybe I’m missing something, but haven’t alarmists described the climate during the 1970s (colder than today) as not only Typical but Ideal? Didn’t the study by Dr. Madhav Khandekar, “Global Warming Extreme Weather Link – A Review of the State of the Science,” find no more extreme weather events today than in the past? In fact, if I remember correctly, Dr. Khandekar found the period with the most extreme weather events was the generally colder decade of the 1940s. Dr. Khandekar is a former research scientist with Environment Canada.

Tom Halla
March 7, 2016 12:56 pm

I wonder how the reporting of weak tornadoes correlates with the nationwide coverage by Doppler radar. In the Hill Country of Texas, where I live, the local TV stations go to continuous coverage every time we have notable thunderstorms. For suburban Austin, anything remotely resembling a possible tornado gets reported.

M Seward
March 7, 2016 12:57 pm

Quite analogous to the UHI effect in the land temperature record. There has been a steady and very large increase in the amount of bitumen roads, concrete, etc. deployed in the environment over the 20th century, with an attendant distortion of the temperature record. Blindingly obvious trend mechanism in the raw data, but it takes some effort to filter out.
Similar issue with tree rings, where, say, water supply affects the raw data metric the same way temperature does. Lazy or even dishonest researchers use the data anyway. Being published is the measure of merit, it seems. Being published is the great growth story in climate science.

G. Karst
March 7, 2016 12:59 pm

Clusterf–k! GK

March 7, 2016 1:00 pm

In fact, everything is more common and less common at some point. Climate changes in both cyclical and linear ways, and does that forever. They tell you correctly that you are not living in your grandparents’ climate. What they don’t tell you is that your grandparents were not living in their grandparents’ climate.
Extreme tornado outbreaks more common? Meh. I’m sure that detailed reportage of extreme tornadoes is more common. And maybe the outbreaks themselves are more common. Get back to me in a century when you know more.
“A paper shows…”. “Now, a new study shows…” Signals to read no further, especially if something called an “Earth Institute” is involved. They do know we’ve stopped listening, right?

TG
March 7, 2016 1:01 pm

Facts and evidence mean nothing to the rent-seeking grant givers and takers. As many have said on this and other sites, it’s always about the money. It really is a black eye for real scientists and industries that rely on science to produce the theory, prove it, and then the final product. You could never have developed computers or any mechanical hardware we rely on today using so-called climate-science methods. Calculus and math must be provable. To these inbred warmists, peer review is a wink, a nod, and a Climategate-type agreement.

jmorpuss
March 7, 2016 1:38 pm

“During this year’s hail season—June 1 through September 15—WMI seeded 79 storms with 355 kilograms of silver iodide in 9,200 flares. The season’s work cost the consortium of insurers $4.2 million, but even a small reduction in hail-related claims would cover that price tag. The August 2014 storm in Airdrie, Alta., caused $560 million in damage, says Krauss, “so a one percent reduction on that day more than pays for our program.”
The Alberta government first started the cloud seeding program in 1974 to minimize damage to crops. But in 1985, the government “said the money would be better spent if it just went to crop insurance,” recalls Krauss, who worked on the Alberta Research Council, which administered the program, from the mid-1970s to the mid-1980s.
The group of private insurers began paying for seeding in 1996, focusing on the densely populated Calgary and Red Deer areas instead of on farmland. “Priority is assigned to storms depending on their severity and the size of the community,” a WMI brochure reads. “Only those storms threatening populated areas are seeded.”
http://www.citopbroker.com/magazine-archives/storm-modeling-and-weather-modification-may-help-prevent-a-shower-of-enormous-claims-9405
How much money gets paid into insurance every day vs. paid out? Governments go about trying to unlock this cash back into the economy, while insurance companies come up with ways to not pay out.
While spraying to decrease the size of hail may help with hail damage pay-outs, it probably works to increase tornado formation by putting more ice and supercooled water on the ground in front of the system. As hot air rises, cold air rushes in. When a supercell does not ground out by lightning strikes, there’s a good chance a tornado will form.

Myron Mesecke
March 7, 2016 1:50 pm

I urge, no I demand that the authors attend National Weather Service Skywarn Storm Spotter training on a regular basis.
I have lost count of how many times I have attended Skywarn as part of being prepared for storm season as an amateur radio operator.
They will learn that better radar, more people living in more locations, and more ways for people to communicate account for any rise in the number of tornadoes.
As they correctly state, the number of violent tornadoes has not risen. It has actually trended downward since the 1970s.
As for the super outbreaks in 1974 and 2011, one during the global cooling scare and the other during the global warming scare, they are easily explained. Both years had weather events that resulted in a greater temperature difference between cold air masses and warm air masses, with the cold ones being colder, not the warm ones being warmer. Strong La Ninas.
On various news sites I have been trying to beat into people’s heads that the warm, humid tropical air that flows from the Gulf of Mexico hasn’t changed. The tropics by nature don’t change temperature very much. It is the cold air from the Arctic that fluctuates. That cold air is the key to severe weather.
Why do so many seemingly smart people (scientists) have so little common sense?

Justthinkin
March 7, 2016 2:20 pm

Why do so many seemingly smart people (scientists) have so little common sense?
Because there is no such thing as smart people. And common sense has gone the way of the dodo. Scientist? Does not exist. One plus one equals four, in today’s society. Kids today do not know the difference between two, to, and too. And people elect them.

March 7, 2016 2:33 pm

While I think AW’s comments on the flaws in the observational data supporting this paper are correct, there is a deeper logical flaw. The alarmist horror is a Taylor scaling exponent of 4!!! Well, what is a Taylor power law scaling exponent? Google is your friend if you really want to know. But following is a very brief precis.
Originally, in ecology and epidemiology and medicine, folks sought to understand ‘epidemics’, aka clusters: pest outbreaks, cancer clusters… And the entomologist Taylor first figured out an observational relationship ‘law’ with a scaling exponent that fit the insect pest outbreak data. The ‘Law’ was named for him by others in 1966. Now, it so happens I was studying general applied math models in ecology (Lake Erie pollution evolution based on bathtub calculus (yup, Erie is a bathtub: fixed shape, known faucet, known drain…), predator-prey fluctuations based on probabilistic Markov chains rather than calculus, econometrics…) from 1969-1972. So I caught a whiff of Taylor.
So, here is the logical problem. Taylor power law exponents rely on known underlying relationships. Rabbits and foxes (predator-prey). Bugs and, e.g., birds, planting density… (agricultural pests). Nobody has such an underlying logical relationship for the formation of supercell storm systems. What is the rabbit? What is the fox? So, import a perfectly valid math model framework into a subject matter where it cannot be shown to apply, and publish this dreck—because in the real world, the predicted CAGW increase in tornadoes has NOT occurred.

Bubba Cow
Reply to  ristvan
March 7, 2016 3:05 pm

AND (I’ll drop it in here since we’re in the math bashing portion) –
I very much doubt that the frequency distribution of tornadoes with temperature is “normally” distributed. Despite my doubt, it is incumbent upon the “researchers” to learn and present the data distribution (not cartoons – 1st figure) so that the statistics applied are appropriate. They clearly don’t know, and reveal that ignorance with –
“the rapidly increasing variance, or variability, means that numbers well above the average are more common”.
Numbers well below the mean would be equally common!! Nature of the function.

Reply to  Bubba Cow
March 7, 2016 3:12 pm

Bubba Cow, yet another inconvenient fact concerning this dreck. Wish you well in Vermont.

Bubba Cow
Reply to  Bubba Cow
March 7, 2016 3:37 pm

And to you in Florida! I imagine it is too early in the year to venture to Wisconsin – although I am optimistically tying flies for spring trout here – sooooon.

Reply to  Bubba Cow
March 7, 2016 4:44 pm

Ah, BC, the spring morel mushroom season awaits also. But morel timing and location remain as secret as your early trout holes. After all, we could have fetched $72/lb in Chicago last year, wet, raw. Except we dried them all down and gave them to very good friends. We are still gorging on the farm’s bumper morel bonanza from two years ago.
Or as secret as my later-season trout places in the Wisconsin Uplands and in the Chattahoochee National Forest of north Georgia (owned by my significant other, Patricia). BTW, very different flies. I still find the Chattahoochee very challenging. Water boarding will not pry those secrets loose. Highest regards.

lance
March 7, 2016 2:46 pm

Our local radio station here in Calgary (qr 77) put this guy on the radio in the morning slot….I remember hearing him state that they were becoming more violent lately…and of course I called BS…

DredNicolson
March 7, 2016 3:44 pm

I picked up enough meteorology in K-12 to know that severe thunderstorms form along cold fronts. COLD fronts. And that temperature/pressure/humidity differences drive the motion of air; the greater the difference, the stronger and faster the motion. And that you get the biggest of these differences in the temperate latitudes where the warm/humid air from the tropics and the cold/dry air from the polar regions keep running into each other.
The warmists have it bass-ackwards. A warming climate leads to fewer instances of severe and unstable weather patterns, not more. The less cold/dry air around to cause trouble up there, the better.

Dr. S. Jeevananda Reddy
Reply to  DredNicolson
March 7, 2016 4:19 pm

DredNicolson — the global warming fanatics, first, must understand such phenomena over the globe. That is, they must understand the impact of general circulation patterns that modify in situ weather.
Dr. S. Jeevananda Reddy

ClimateOtter
March 7, 2016 4:14 pm

The 2011 tornado outbreak corresponded with the descent of the Polar Vortex down into the continental US, exactly as it did in 1974, when the Polar Vortex descended into the continental US.

March 7, 2016 4:23 pm

I will tell a personal experience from the 1974 outbreak. My wife and I flew from D.C. to Tampa the day after most of the extreme outbreak. It was the worst flight I was ever on, with severe turbulence all the way, so severe that the air crew spent most of the flight in their seats.

John F. Hultquist
March 7, 2016 8:37 pm

A friend’s house in Xenia, OH was ‘disappeared’ during the 1974 tornado outbreak. They were not at home.
This outbreak appears to have been the stimulus for the Tornado Episode of the WKRP in Cincinnati TV series wherein Mr. Carlson becomes the take-charge guy and talks to a scared child (via phone) and encourages the child to go to the home’s basement. [Season 1, #12; 2/5/1979]

rogerknights
March 8, 2016 1:18 am

Hasn’t the contiguous US temperature been flat for decades, especially per the reference network?

Omphaloskeptacus
March 8, 2016 1:50 am

Taylor’s law could maybe be applied to the whole field of the propagation of climate-catastrophe-predicting science papers, which of course are produced by living organisms who cluster together for mutual benefit and sharing of resources and who alter their environment (the data) for mutual protection. The analogy could use toilet bowl calculus, which models the results when the volume and density of the modal input exceeds the outflow analyst capacity. The Taylor power coefficient is still not known for this relationship.

March 8, 2016 3:44 am

Without question there are more people experiencing tornadoes. And the damage is going to be greater. Building a lot of houses out in Tornado Alley, what do they think would happen? Building from Cape May to the Atlantic Highlands on the beach, oh, nothing will happen? That didn’t have anything to do with climate and everything to do with plain stupid.

jpatrick
March 8, 2016 4:17 am

We might well have a case of detection bias here. Surely one of the reviewers considered this.

March 8, 2016 5:57 am

I spent several years studying the tornado record as a research meteorologist. Anthony is exactly right about the effect of changes in how tornadoes are spotted on the number of F0 storms. When I was still working in the field, I told this to Tippett and the people at NOAA who fund his “research,” but they don’t care because demonstrating a connection to global warming is all they care about. I hope the insurance industry isn’t dumb enough to believe this nonsense.

McComberBoy
March 8, 2016 7:32 am

Anthony,
Thanks for calling BS once again on the willful blindness of today’s breed of researchers.
If the authors of the paper were truly interested in getting data on earlier tornadoes, I would think that insurance data might be the place to look, even if it were only a weak indicator. Though the country was much more rural and agricultural in the 1950s and 60s, farmers would still report storm damage from small tornadoes that were never caught on camera or radar. Repairing outbuildings, sheds, or homes for hired hands would still take place and would show up in the records for those repairs. Perhaps this could even be spotted in the financials of insurance companies that have a strong presence in the well-known tornado alleys of the Midwest. Years with higher claims in certain areas should be a proxy for storm damage, whether or not large population centers were hit.
Unfortunately, there is a meme to be supported that would be harmed by more accurate tallies of moderate damage. There is obviously no incentive, monetary or otherwise, to find out the truth about moderately severe weather in our past, but perhaps one of the many WUWT regulars with ties to insurance could see what might be revealed in this regard.
pbh

emsnews
Reply to  McComberBoy
March 8, 2016 8:43 am

During the 1930s, many of these people in the Midwest were forced to flee or had zero insurance.

McComberBoy
Reply to  emsnews
March 8, 2016 11:52 am

The study referenced the 1950s, long after the Dust Bowl.

Robert Barry
March 8, 2016 8:39 am

Our eternal struggle of Ego and Id . . .

March 8, 2016 9:36 am

“You must not fool yourself, and you are the easiest one to fool” sums up virtually all poorly constructed statistically dependent conclusions. Unchallenged they fool many others as well. Thanks Anthony for providing the forum to challenge and illuminate what’s significant.

Kevin Kilty
March 8, 2016 11:17 am

Since this thread involves a Wyoming tornado, I will add an anecdote or two about reporting. Prior to the F3 tornado that beat up Cheyenne, Wyoming, in 1979, I could never get anyone excited about my reports of funnel clouds or tornadoes, because tornadoes just did not happen in Wyoming and everyone knew so. Now people see tornadoes everywhere. In the summer of 2011, I heard the warning sirens go off at my house of that time, east of Cheyenne, on several days. On one occasion I stepped onto the porch, had a look at the sky, and then remarked to my wife that the conditions were about as unlikely for a tornado as any I could imagine. The sky consisted of scattered pancake cumulus clouds. The report setting off the sirens was sent in by a Highway Patrolman, though, and so I suspect it pollutes the records as an honest-to-gosh sighting. On another occasion, the radio broadcast warning indicated a tornado, on the ground, within a half mile of my home. I searched in vain for this one. The record is biased by better instrumentation, by increased population, by increased interest in things meteorological, and by highly excitable observers.

McComberBoy
Reply to  Kevin Kilty
March 8, 2016 11:58 am

On the other hand, we had two small tornadoes touch down north of Wheatland, WY in 1982 that would never make a newscast. No cell phones with their readily available cameras. No hysterical breathless reporting.
Similar thing in the central valley of California in November of 2001. Two funnel clouds south of Stockton, but no cell phone video and not much attention.
Anecdotal? Yes. But illustrative of Anthony’s point about the failure of this study to account for the differences in radar, population density, instant reporting ability and the like.

tadchem
March 8, 2016 11:49 am

If the number of tornadoes in any given class (F0/EF0 to F5/EF5) follows a power law such as Taylor’s, as the authors would like us to accept, then the trends in Figures 1 and 2 as supplied by the Illinois State Climatologist should be more parallel.
If we accept the premise of Tippett and Cohen that a Taylor power law is at work, one can only infer that the deficiency in the counts of F0/EF0 tornadoes earlier in the record is due to under-reporting rather than any change in the overall behavior of tornado swarms.
This under-reporting of F0/EF0 tornadoes early in the record leads *directly* to the sag in the trend line of figure b. Also, the slope of the ‘trend line’ in figure c is not statistically significant. We are dealing here with the statistics of ‘rare events’ – a ‘Poisson distribution’ – in which variances calculated on an assumed Gaussian model (a ‘Normal distribution’) are irrelevant. Poisson variances are calculated with the logarithm of the counts. Also, trend lines have their own uncertainties – an envelope of probability – that incorporates uncertainty in both the slope and the mean, producing hyperbolae that bracket the regression line, as seen here: http://blogs.usyd.edu.au/waterhydrosu/2013/09/post.html
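(A reader's note on that last point: a minimal sketch of the confidence envelope tadchem describes, assuming statsmodels is available and using randomly generated stand-in counts rather than the real tornado record:)

```python
# Sketch of the confidence envelope around an OLS trend line; the band is
# narrowest at the mean year and widens toward the ends of the record.
import numpy as np
import statsmodels.api as sm

years = np.arange(1954, 2015)
counts = np.random.default_rng(1).poisson(12, years.size)  # fake annual counts

X = sm.add_constant(years)
fit = sm.OLS(counts, X).fit()
envelope = fit.get_prediction(X).conf_int()  # lower/upper bounds per year
print(envelope[0], envelope[years.size // 2], envelope[-1])  # wide, narrow, wide
```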

March 8, 2016 1:45 pm

Anthony … Myron above touches on the giant elephant in the room regarding severe weather reporting: the massive increase in the SkyWarn Spotter network. As the tech has dramatically improved for mobile users, so too has the number of trained weather spotters.
Every chance of significant outbreak in recent years will have a throng of spotters chasing it, even in the remotest unpopulated areas.
Very often they are right there as the tornado forms. Unless this radical change in the quantity (and quality) of weather reporters is somehow factored in and addressed, any “count” of severe weather is largely worthless.

March 8, 2016 2:09 pm

An example – spotter locations El Reno storm – Oklahoma City …
http://fox41blogs.typepad.com/.a/6a0148c78b79ee970c019102e8ebef970c-500wi

Ken L.
March 9, 2016 5:12 pm

I’m late to the party on this post, but this is a classic case of comparing apples to oranges in terms of the quality of data. I can verify, and anyone who lives in Tornado Alley as I do, in central Oklahoma, will attest, that virtually any tornado that occurs during any severe weather event or watch is likely a media event, with real-time views of even brief 30-second touchdowns of weak EF0 twisters in the middle of open country. Scarcely a tornado that occurs goes unreported and unverified. As an “aged one”, I can also attest that we had no such information available back in the 1950s, when tornado reports came after the damage had been done, and by telephone to the weather service. But never fear – no data? Manufacture it, just as in the case of temperatures before there were thermometers and significant coverage with weather stations – heck, they do that still today for remote areas.
Things are in a sad state when garbage such as this is accepted by peer review as science for publication and then regurgitated by the press to a public that has no reason or knowledge to question it. While the situation surely raises my blood pressure, it helps to have a place such as this where I’m allowed occasionally to relieve my stress among kindred and even better-educated souls.