A new paper shows that the average number of tornadoes per outbreak has grown by more than 40% over the last half century. The likelihood of extreme outbreaks – those with many tornadoes – is also greater.
This paper is flawed from the start, right from the raw data itself. Read on to see why. – Anthony
From the Earth Institute at Columbia University:
Most death and destruction inflicted by tornadoes in North America occurs during outbreaks—large-scale weather events that can last one to three days and span huge regions. The largest outbreak ever recorded happened in 2011. It spawned 363 tornadoes across the United States and Canada, killing more than 350 people and causing $11 billion in damage.
Now, a new study shows that the average number of tornadoes in these outbreaks has risen since 1954, and that the chance of extreme outbreaks—tornado factories like the one in 2011—has also increased.
The study’s authors said they do not know what is driving the changes.
“The science is still open,” said lead author Michael Tippett, a climate and weather researcher at Columbia University’s School of Applied Science and Engineering and Columbia’s Data Science Institute. “It could be global warming, but our usual tools, the observational record and computer models, are not up to the task of answering this question yet.”
Tippett points out that many scientists expect the frequency of atmospheric conditions favorable to tornadoes to increase in a warmer climate—but even today, the right conditions don’t guarantee a tornado will occur. In any case, he said,
“When it comes to tornadoes, almost everything terrible that happens happens in outbreaks.”
The results are expected to help insurance and reinsurance companies better understand the risks posed by outbreaks, which can also generate damaging hail and straight-line winds. Over the last 10 years, the industry has covered an average of $12.5 billion in insured losses each year, according to Willis Re, a global reinsurance advisor that helped sponsor the research. The article appears this week in the journal Nature Communications.
Every year, North America sees dozens of tornado outbreaks. Some are small and may give rise to only a few twisters; others, such as the so-called “super outbreaks” of 1974 and 2011, can generate hundreds. In the simplest terms, the intensity of each tornado is ranked on a zero-to-five scale, with other descriptive terms thrown in. The lower gradations cause only light damage, while the top ones, like the twister that tore through Joplin, Missouri, in 2011, can tear the bark off trees, rip houses from their foundations, and turn cars into missiles.
As far as the tornado observational record is concerned, the devil’s in the details.
For this study, the authors calculated the mean number of tornadoes per outbreak for each year as well as the variance, or scatter, around this mean. They found that while the total number of tornadoes rated F/EF1 and higher each year hasn’t increased, the average number per outbreak has, rising from about 10 to about 15 since the 1950s.
The study was coauthored by Joel Cohen, director of the Laboratory of Populations, which is based jointly at Rockefeller University and Columbia’s Earth Institute. Cohen called the results “truly remarkable.”
“The analysis showed that as the mean number of tornadoes per outbreak rose, the variance around that mean rose four times faster. While the mean rose by a factor of 1.5 over the last 60 years, the variance rose by a factor of more than 5, or 1.5 x 1.5 x 1.5 x 1.5. This kind of relationship between variance and mean has a name in statistics: Taylor’s power law of scaling.
“We have seen [Taylor’s power law] in the distribution of stars in a galaxy, in death rates in countries, the population density of Norway, securities trading, oak trees in New York and many other cases,” Cohen says. “But this is the first time anyone has shown that it applies to scaling in tornado statistics.”
The exponent in Taylor’s law—in this case, 4—can be a measure of clustering, Cohen says. If there’s no clustering—if tornadoes occur just randomly—then Taylor’s law has an exponent of 1. If there’s clustering, then it’s greater than 1. “In most ecological applications, the Taylor exponent seldom exceeds 2. To have an exponent of 4 is truly exceptional. It means that when it rains, it really, really, really pours,” says Cohen.
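Taylor’s power law says the variance scales as a power of the mean, variance = a × mean^b, so the exponent b is simply the slope of a log-log linear fit. A minimal sketch of how such an exponent is estimated, using synthetic (mean, variance) pairs (the numbers below are made up for illustration, not taken from the paper):

```python
import numpy as np

# Taylor's power law: variance = a * mean**b, so
# log(variance) = log(a) + b * log(mean).
# The exponent b is the slope of a log-log linear fit.

rng = np.random.default_rng(0)

# Synthetic (mean, variance) pairs generated with a known exponent b = 4,
# standing in for yearly (mean tornadoes per outbreak, variance) pairs.
true_b, true_a = 4.0, 0.05
means = np.linspace(8, 16, 30)
variances = true_a * means**true_b * rng.lognormal(0, 0.1, means.size)

# Fit the exponent: slope of log(variance) versus log(mean).
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"estimated Taylor exponent b = {b:.2f}")  # close to the true value 4
```

An exponent near 1 would indicate no clustering; the fitted value near 4 reflects the strongly multiplicative scaling Cohen describes.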
Extreme outbreaks have become more frequent because of two factors, Tippett said. First, the average number of tornadoes per outbreak has gone up; second, the rapidly increasing variance, or variability, means that numbers well above the average are more common.
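The arithmetic behind that second point is easy to illustrate. Treating outbreak size as roughly normal is purely an assumption for this sketch (with made-up parameters; the paper does not model it this way), but it shows how raising the variance fattens the upper tail far more than raising the mean does:

```python
from statistics import NormalDist

# Hypothetical outbreak-size distributions (illustrative numbers only):
# the mean rises 10 -> 15 while the spread rises much faster, 5 -> 12.
then = NormalDist(mu=10, sigma=5)
now = NormalDist(mu=15, sigma=12)

# Probability of an "extreme" outbreak of 40 or more tornadoes under each.
p_then = 1 - then.cdf(40)
p_now = 1 - now.cdf(40)
print(f"P(outbreak of 40+ tornadoes): then {p_then:.1e}, now {p_now:.1e}")
# The extreme-outbreak probability grows by orders of magnitude even
# though the mean only rose by half.
```

This is the general mechanism Tippett describes: a modest shift in the mean plus a rapid rise in variance makes counts far above the average dramatically more common.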
Tippett was concerned that the findings could be artifacts of tornado observational data, which are based on eyewitness accounts and known to have problems with consistency and accuracy. To get around this, he re-ran his calculations after substituting the historical tornado data with environmental proxies for tornado occurrence and number of tornadoes per occurrence. These provide an independent—albeit imperfect—measure of tornado activity. The results were very nearly identical.
As for whether the climate is the cause, Tippett said, “The scientific community has thought a great deal about how the frequency of future weather and climate extremes may change in a warming climate. The simplest change to understand is a shift of the entire distribution, but increases in variability, or variance, are possible as well. With tornadoes, we’re seeing both of those mechanisms at play.”
Insurance and reinsurance companies and the catastrophe-modeling community can use this information.
“This paper helps begin to answer one of the fundamental questions to which I’d like to know the answer,” says Harold Brooks of the U.S. National Oceanic and Atmospheric Administration’s National Severe Storms Laboratory. “If tornadoes are being concentrated into more big days, what effect does that have on their impacts compared to when they were less concentrated?”
“The findings are very relevant to insurance companies that are writing business in multiple states, especially in the Midwest,” says Prasad Gunturi, senior vice president at Willis Re, who leads the company’s catastrophe model research and evaluation activities for North America. “Overall growth in the economy means more buildings and infrastructure are in harm’s way,” said Gunturi. “When you combine this with increased exposure because outbreaks are generating more tornadoes across state lines and the outbreaks could be getting more extreme in general, it means more loss to the economy and to insurance portfolios.”
Insurance companies have contracts with reinsurance companies, and these contracts look similar to the ones people have for home and car insurance, though for much higher amounts. The new results will help companies ensure that contracts are written at an appropriate level and that the risks posed by outbreaks are better characterized, said Brooks.
“One big question raised by this work, and one we’re working on now, is what in the climate system has been behind this increase in outbreak severity,” said Tippett.
From the paper’s abstract:

Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from ‘outbreaks’ consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954–2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor’s power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related.
Why this study is fatally flawed (in my opinion):
Ironically, the hint as to why the study is fatally flawed comes with the photo of the Wyoming tornado they supplied in the press release. Note the barren landscape and the location. Now note the news story about it. 50 years ago, or maybe even 30 years ago, that tornado would likely have gone unnoticed and probably unreported not just in the local news, but in the tornado record. Now in today’s insta-news environment, virtually anyone with a cell phone can report a tornado. 30 years ago, the cell phone was just coming out of the lab and into first production.
Also, 30 years ago there wasn’t NEXRAD Doppler radar deployed nationwide, and it sees far more tornadoes than the older network of WSR-57 and WSR-74 weather radars, which could only detect the strongest of these events.
A major challenge in weather research is associated with the size of the data sample from which evidence can be presented in support of some hypothesis. This issue arises often in severe storm research, since severe storms are rare events, at least in any one place. Although large numbers of severe storm events (such as tornado occurrences) have been recorded, some attempts to reduce the impact of data quality problems within the record of tornado occurrences also can reduce the sample size to the point where it is too small to provide convincing evidence for certain types of conclusions. On the other hand, by carefully considering what sort of hypothesis to evaluate, it is possible to find strong enough signals in the data to test conclusions relatively rigorously. Examples from tornado occurrence data are used to illustrate the challenge posed by the interaction between sample size and data quality, and how it can be overcome by being careful to avoid asking more of the data than what they legitimately can provide. A discussion of what is needed to improve data quality is offered.
The total number of tornadoes is a problematic method of comparing outbreaks from different periods, however, as many more smaller tornadoes, but not stronger tornadoes, are reported in the US in recent decades than in previous ones.
Basically, there’s a reporting bias and a decreasing sample size as the data goes further back in time. Even the 1974 Tornado Outbreak, the record holder for decades, likely had more tornadoes than were reported at the time. If Doppler radar had existed then, it would almost certainly have spotted many F0 and F1 class tornadoes (and maybe even some F2 or F3 tornadoes in remote areas that aerial surveys missed) that went unreported. Since large tornado outbreaks are so few and far between, and because technology for detecting tornadoes has advanced rapidly during the study period, it isn’t surprising at all that they found a trend. But I don’t believe the trend is anything more than an artifact of increased reporting. The State Climatologist of Illinois agrees, and wrote an article about it:
If we look at the number of stronger tornadoes since 1950 in Illinois, we see a lot of year to year variability. However there is no significant trend over time – either up or down (Figure 1). Stronger tornadoes are identified here as F-1 to F-5 events from 1950 to 2006 (using the Fujita Scale), and EF-1 to EF-5 events from 2007 to 2010 (using the Enhanced Fujita Scale). By definition, these stronger tornadoes cause at least a moderate amount of damage. See the Storm Prediction Center for a discussion on the original Fujita and Enhanced Fujita (EF) scales.
What we have seen is a dramatic increase in the number of F-0 (EF-0) tornadoes from 1950 to 2010 (Figure 2). These are the weakest of all tornado events and typically cause little if any damage. These events were overlooked in the early tornado records. The upward trend is the result of better radar systems, better spotter networks, and increased awareness and interest by the public. These factors combined have allowed for a better documentation of the weaker events over time.
If we combine both data sets together, we see the apparent upward trend caused by the increasingly accurate accounting of F-0 (EF-0) tornadoes (Figure 3). As a result, the number of observed tornadoes in Illinois has increased over time, but without an indication of any underlying climate change. In my opinion, the tornado record since 1995 (shaded in yellow) provides the most accurate picture of tornado activity in Illinois. From that part of the record we see that the average number of tornadoes per year in Illinois is now 63.
From a paper by Matthew Westburg:
“Monitoring and Understanding Trends in Extreme Storms,” published in the Bulletin of the American Meteorological Society, also has concluded that the United States is not experiencing an increase in the severity of tornadoes. Figure 3 from this paper shows “The occurrence of F1 and stronger tornadoes on the Fujita scale shows no trend since 1954, the first year of near real time data collection, with all of the increase in tornado reports resulting from an increase in the weakest tornadoes, F0.”
There are multiple reasons to explain why there seems to be an increase in the frequency of tornadoes in Illinois and the whole United States since 1950.
Tornado records have been kept in the United States since 1950. While we are fortunate to have records that date back about 64 years, “the disparity between tornado records of the past and current records contributes a great deal of uncertainty regarding questions about the long-term behavior or patterns of tornado occurrence” (“Historical Records and Trends”). Inconsistent tornado records have made it difficult to identify tornado trends. In the last several decades, scientists have done a better job of making tornado data more consistent and uniform. Over time, this will help scientists identify trends in tornado data. In addition to inconsistent records, changes in reporting systems have had an effect on tornado data and possible trends.
Prior to the 1970s, tornadoes were usually not reported unless they caused substantial damage to property, injuries, or deaths; consequently, tornadoes were under-reported. At that time, scientists were able to define what a tornado was, but they struggled to define the intensity of each tornado, in terms of its size and wind speed, and therefore struggled to compare one tornado to another. This changed in 1971, when a meteorologist named Ted Fujita “established the Fujita Scale (F-Scale) for rating the wind speeds of tornadoes by examining the damage they cause” (McDaniel). Once the Fujita Scale was established, scientists were more easily able to classify tornadoes based on their wind speed and the damage they caused.
Note that the authors of the study released today only started their dataset in 1950, just before the dawn of television news in the early 1960s. Television alone accounts for a significant increase in tornado reporting. Now, with cell phones and millions of cameras, Doppler radar, storm chasers, The Weather Channel, the Internet, and 24-hour news, many, many more smaller tornadoes that would once have gone unnoticed are being reported than ever before, but the frequency of strong tornadoes has not increased:
When you only analyze data that goes back to the early days of TV news, where the trends in increased reporting begin, you are bound to find an increase in frequency and strength, not just overall, but in outbreaks too. As Doswell et al. note, authors should be “…careful to avoid asking more of the data than what they legitimately can provide.”
So, what do they do to get around these problems? They add a proxy for actual tornado data! From the paper:
Environmental proxies for tornado occurrence and number of tornadoes per occurrence provide an independent, albeit imperfect, measure of tornado activity for the period 1979–2013 (‘Methods’ section). At a minimum, the environmental proxies provide information about the frequency and severity of environments favourable to tornado occurrence. The correlation between the annual average number of tornadoes per outbreak and the proxy for number of tornadoes per occurrence is 0.56 (Supplementary Fig. 6a). This correlation falls to 0.34, still significant at the 95% level, when the data from 2011 are excluded.
I remain unconvinced that this is anything useful.
To their credit, they do note the reporting problem, but don’t clearly say how they deal with it.
However, to date, there is no upward trend in the number of reliably reported US tornadoes per year [6]. Interpretation of the US tornado report data requires some caution. For instance, the total number of US tornadoes reported each year has increased dramatically over the last half century, but most of that increase is due to more reports of weak tornadoes and is believed to reflect changing reporting practices and other non-meteorological factors rather than increased tornado occurrence [7]. The variability of reported tornado occurrence has increased over the last few decades with more tornadoes being reported on days when tornadoes are observed [8,9]. In addition, greater year-to-year variability in the number of tornadoes reported per year has been associated with consistent changes in the monthly averaged atmospheric environments favourable to tornado occurrence [10].
I had to laugh at this line:
…with more tornadoes being reported on days when tornadoes are observed
Basically, what I’m saying is that while the statistical math used by Tippett may very well be correct, the data used is biased, incomplete, and inaccurate, and lends itself to the authors finding what they seek. They don’t state that they take reporting bias into account, and that results in a flawed conclusion. Then there’s this about Taylor’s Power Law:
Taylor’s power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power law relationship.
Tornadoes are not life: they are not self-replicating, they do not give birth, and they don’t cluster like life does for the benefits of food or protection in numbers. I don’t see how a law designed for predicting cluster populations of living organisms can be applied to inanimate physical phenomena such as tornadoes. While there is support in the literature for Taylor’s Law, aka “fluctuation scaling,” it seems only useful when you have a cleaner dataset, not one with multiple reporting biases over decades that significantly increase its inhomogeneity. For example, Taylor’s law was successfully used to examine UK crime behavior. But even though we have decades of crime reports in the UK, those reports don’t vary in intensity: there aren’t “small murders” and “big murders,” there are just murders.
Feynman famously said of science: “The first principle is that you must not fool yourself and you are the easiest person to fool.” I think Tippett et al. have created a magnificent statistical construct with which they have, in fact, fooled themselves.
Addition: some history is instructive. Here are a few major tornado outbreaks in the USA
In fact, there is quite a list of tornado outbreaks in US history. The majority of those outbreaks occurred before there was a good reporting or intensity-categorization (Fujita Scale) methodology in place. For Tippett to claim a trend in cluster/outbreak severity ignores all those earlier outbreaks, for which there is little data, and focuses only on the period in which reporting and categorization technology and techniques were fast evolving. Thanks to widespread reporting on TV news, the 1974 Super Outbreak became the biggest impetus in the history of the Weather Bureau to improve warning and monitoring technology, and was likely responsible for most of the improvements in data over the next 30 years.