Flawed Claim of New Study: 'Extreme Tornado Outbreaks Have Become More Common'

A new paper shows that the average number of tornadoes per outbreak has grown by more than 40% over the last half century. The likelihood of extreme outbreaks – those with many tornadoes – is also greater.

This paper is flawed from the start, right from the raw data itself; read on to see why. – Anthony

[Photo: A tornado near Elk Mountain, west of Laramie, Wyoming, on the 15th of June, 2015. The tornado passed over mostly rural areas of the county, lasting over 20 minutes. John Allen/IRI.]

From the Earth Institute at Columbia University:

Most death and destruction inflicted by tornadoes in North America occurs during outbreaks—large-scale weather events that can last one to three days and span huge regions. The largest outbreak ever recorded happened in 2011. It spawned 363 tornadoes across the United States and Canada, killing more than 350 people and causing $11 billion in damage.

The 2016 Severe Convection and Climate Workshop (#SevCon16) starts March 9. Visit the Columbia Initiative on Extreme Weather and Climate for more details.

Now, a new study shows that the average number of tornadoes in these outbreaks has risen since 1954, and that the chance of extreme outbreaks—tornado factories like the one in 2011—has also increased.

The study’s authors said they do not know what is driving the changes.

“The science is still open,” said lead author Michael Tippett, a climate and weather researcher at Columbia University’s School of Engineering and Applied Science and Columbia’s Data Science Institute. “It could be global warming, but our usual tools, the observational record and computer models, are not up to the task of answering this question yet.”

Tippett points out that many scientists expect the frequency of atmospheric conditions favorable to tornadoes to increase in a warmer climate—but even today, the right conditions don’t guarantee a tornado will occur. In any case, he said,

“When it comes to tornadoes, almost everything terrible that happens happens in outbreaks.”

[Figure: The effect that changing the mean and variance of a distribution has on extremes, using temperature as an example. Source: IPCC SREX report, SPM Figure 3.]

The results are expected to help insurance and reinsurance companies better understand the risks posed by outbreaks, which can also generate damaging hail and straight-line winds. Over the last 10 years, the industry has covered an average of $12.5 billion in insured losses each year, according to Willis Re, a global reinsurance advisor that helped sponsor the research. The article appears this week in the journal Nature Communications.

Every year, North America sees dozens of tornado outbreaks. Some are small and may give rise to only a few twisters; others, such as the so-called “super outbreaks” of 1974 and 2011, can generate hundreds. In the simplest terms, the intensity of each tornado is ranked on a zero-to-five scale, with other descriptive terms thrown in. The lower gradations cause only light damage, while the top ones, like the twister that tore through Joplin, Missouri, in 2011, can tear the bark off trees, rip houses from their foundations, and turn cars into missiles.

As far as the tornado observational record is concerned, the devil’s in the details.

For this study, the authors calculated the mean number of tornadoes per outbreak for each year as well as the variance, or scatter, around this mean. They found that while the total number of tornadoes rated F/EF1 and higher each year hasn’t increased, the average number per outbreak has, rising from about 10 to about 15 since the 1950s.
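(As an aside for readers who want to see the bookkeeping involved, here is a minimal sketch in Python of how yearly mean and variance of tornadoes per outbreak might be tallied from a list of outbreaks. It is not the authors' code; the counts are made up, and the paper's actual outbreak definition is more involved.)

```python
# Illustrative sketch only -- not the study's code or data.
# Each record is (year, number of tornadoes in that outbreak); all values are made up.
from collections import defaultdict
from statistics import mean, pvariance

outbreaks = [
    (1955, 7), (1955, 12), (1955, 9), (1955, 6),
    (2011, 15), (2011, 60), (2011, 8), (2011, 363),
]

per_year = defaultdict(list)
for year, n_tornadoes in outbreaks:
    per_year[year].append(n_tornadoes)

for year in sorted(per_year):
    counts = per_year[year]
    print(year,
          "outbreaks:", len(counts),
          "mean tornadoes per outbreak:", round(mean(counts), 1),
          "variance:", round(pvariance(counts), 1))
```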

The study was coauthored by Joel Cohen, director of the Laboratory of Populations, which is based jointly at Rockefeller University and Columbia’s Earth Institute. Cohen called the results “truly remarkable.”

“The analysis showed that as the mean number of tornadoes per outbreak rose, the variance around that mean rose four times faster. While the mean rose by a factor of 1.5 over the last 60 years, the variance rose by a factor of more than 5, or 1.5 x 1.5 x 1.5 x 1.5. This kind of relationship between variance and mean has a name in statistics: Taylor’s power law of scaling.

“We have seen [Taylor’s power law] in the distribution of stars in a galaxy, in death rates in countries, the population density of Norway, securities trading, oak trees in New York and many other cases,” Cohen says. “But this is the first time anyone has shown that it applies to scaling in tornado statistics.”

The exponent in Taylor’s law—in this case, 4—can be a measure of clustering, Cohen says. If there’s no clustering—if tornadoes occur just randomly—then Taylor’s law has an exponent of 1. If there’s clustering, then it’s greater than 1. “In most ecological applications, the Taylor exponent seldom exceeds 2. To have an exponent of 4 is truly exceptional. It means that when it rains, it really, really, really pours,” says Cohen.
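(To make the arithmetic concrete: Taylor's law says variance ≈ a × mean^b, so on log-log axes the yearly (mean, variance) points fall near a straight line whose slope is the exponent b, and a mean that grows by a factor of 1.5 with b = 4 implies the variance grows by roughly 1.5^4 ≈ 5. The sketch below shows how such an exponent could be estimated from yearly summaries; the numbers are synthetic and chosen only to illustrate the fitting step, not to reproduce the paper's result.)

```python
# Illustrative sketch: estimate the Taylor's-law exponent b in  variance = a * mean**b
# by least squares on log-transformed yearly summaries.  Synthetic numbers, not the study's data.
import numpy as np

yearly_mean = np.array([10.0, 10.8, 11.7, 12.6, 13.6, 14.7, 15.0])      # tornadoes per outbreak
yearly_var  = np.array([50.0, 68.0, 94.0, 126.0, 171.0, 234.0, 253.0])  # variance of that count

b, log_a = np.polyfit(np.log(yearly_mean), np.log(yearly_var), 1)
print(f"estimated exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.3f}")
# b = 1 is what uncorrelated, Poisson-like occurrence would give (variance equal to the mean);
# values well above 1 indicate clustering, and the press release reports an exponent near 4.
```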

Extreme outbreaks have become more frequent because of two factors, Tippett said. First, the average number of tornadoes per outbreak has gone up; second, the rapidly increasing variance, or variability, means that numbers well above the average are more common.
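(The press release does not walk through that second point, so here is a rough, self-contained illustration of the mechanism, my own and not the authors': using a negative binomial as a generic overdispersed stand-in for the tornadoes-per-outbreak distribution, a modest rise in the mean combined with a much larger rise in the variance makes counts far above the average considerably more likely. The distribution, parameter values, and threshold are all assumptions chosen only to show the effect.)

```python
# Illustrative sketch (not from the paper): a modest rise in the mean plus a large rise in the
# variance inflates the probability of an extreme count.  All numbers are made up.
from scipy.stats import nbinom

def tail_prob(mean, var, threshold):
    """P(N >= threshold) for a negative binomial with the given mean and variance (var > mean)."""
    p = mean / var
    n = mean * mean / (var - mean)
    return nbinom(n, p).sf(threshold - 1)

print("P(an outbreak produces 40 or more tornadoes)")
print("  lower mean, lower variance   (mean 10, var 60):  %.4f" % tail_prob(10, 60, 40))
print("  higher mean, higher variance (mean 15, var 300): %.4f" % tail_prob(15, 300, 40))
```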

[Figure: (a) Number of tornado outbreaks per year. The rate of decline is not statistically significant. (b and c) Annual mean number of tornadoes per outbreak and annual variance of the number of tornadoes per outbreak. Vertical axes are on a logarithmic scale, so the rate of increase in the annual mean is expressed as a percentage per year. (d) The annual mean number of tornadoes per outbreak versus the annual variance of the number of tornadoes per outbreak. Both axes are on a logarithmic scale. The solid line represents Taylor’s power law of fluctuation scaling. The two-digit number next to the plotted symbol gives the calendar year in the second half of the twentieth century or first half of the twenty-first century.]

Tippett was concerned that the findings could be artifacts of tornado observational data, which are based on eyewitness accounts and known to have problems with consistency and accuracy. To get around this, he re-ran his calculations after substituting the historical tornado data with environmental proxies for tornado occurrence and number of tornadoes per occurrence. These provide an independent—albeit imperfect—measure of tornado activity. The results were very nearly identical.

As for whether the climate is the cause, Tippett said, “The scientific community has thought a great deal about how the frequency of future weather and climate extremes may change in a warming climate. The simplest change to understand is a shift of the entire distribution, but increases in variability, or variance, are possible as well. With tornadoes, we’re seeing both of those mechanisms at play.”

Insurance and reinsurance companies and the catastrophe-modeling community can use this information.

“This paper helps begin to answer one of the fundamental questions to which I’d like to know the answer,” says Harold Brooks of the U.S. National Oceanic and Atmospheric Administration’s National Severe Storms Laboratory. “If tornadoes are being concentrated into more big days, what effect does that have on their impacts compared to when they were less concentrated?”

“The findings are very relevant to insurance companies that are writing business in multiple states, especially in the Midwest,” says Prasad Gunturi, senior vice president at Willis Re, who leads the company’s catastrophe model research and evaluation activities for North America. “Overall growth in the economy means more buildings and infrastructure are in harm’s way,” said Gunturi. “When you combine this with increased exposure because outbreaks are generating more tornadoes across state lines and the outbreaks could be getting more extreme in general, it means more loss to the economy and to insurance portfolios.”

Insurance companies have contracts with reinsurance companies, and these contracts look similar to the ones people have for home and car insurance, though for much higher amounts.  The new results will help companies ensure that contracts are written at an appropriate level and that the risks posed by outbreaks are better characterized, said Brooks.

“One big question raised by this work, and one we’re working on now, is what in the climate system has been behind this increase in outbreak severity,” said Tippett.

This research was also supported by grants from Columbia’s Research Initiatives for Science and Engineering, the Office of Naval Research, NOAA’s Climate Program Office  and the U.S. National Science Foundation.

The paper:

Tornado outbreak variability follows Taylor’s power law of fluctuation scaling and increases dramatically with severity

Michael K. Tippett, Joel E. Cohen

Nature Communications 7, Article number: 10668 doi:10.1038/ncomms10668

Abstract:

Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from ‘outbreaks’ consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954–2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor’s power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related.



Why this study is fatally flawed (in my opinion):

Ironically, the hint as to why the study is fatally flawed comes with the photo of the Wyoming tornado they supplied in the press release. Note the barren landscape and the location. Now note the news story about it. 50 years ago, or maybe even 30 years ago, that tornado would likely have gone unnoticed and probably unreported not just in the local news, but in the tornado record. Now in today’s insta-news environment, virtually anyone with a cell phone can report a tornado. 30 years ago, the cell phone was just coming out of the lab and into first production.

Also 30 years ago, there wasn’t NEXRAD Doppler radar deployed nationwide, and it sees far more tornadoes than the older network of WSR-57 and WSR-74 weather radars, which could only detect the strongest of these events.

As shown by this study: Doswell, Charles A., III (2007). “Small Sample Size and Data Quality Issues Illustrated Using Tornado Occurrence Data”. Electronic J. Severe Storms Meteor. 2 (5): 1–16.

Abstract

A major challenge in weather research is associated with the size of the data sample from which evidence can be presented in support of some hypothesis. This issue arises often in severe storm research, since severe storms are rare events, at least in any one place. Although large numbers of severe storm events (such as tornado occurrences) have been recorded, some attempts to reduce the impact of data quality problems within the record of tornado occurrences also can reduce the sample size to the point where it is too small to provide convincing evidence for certain types of conclusions. On the other hand, by carefully considering what sort of hypothesis to evaluate, it is possible to find strong enough signals in the data to test conclusions relatively rigorously. Examples from tornado occurrence data are used to illustrate the challenge posed by the interaction between sample size and data quality, and how it can be overcome by being careful to avoid asking more of the data than what they legitimately can provide. A discussion of what is needed to improve data quality is offered.

The total number of tornadoes is a problematic method of comparing outbreaks from different periods, however, as many more smaller tornadoes, but not stronger tornadoes, are reported in the US in recent decades than in previous ones.

Basically, there’s a reporting bias and a decreasing sample size as the data goes further back in time. Even the 1974 Tornado Outbreak, the record holder for decades, likely had more tornadoes than were reported at the time. If Doppler radar had existed then, it would almost certainly have spotted many F0 and F1 class tornadoes (and maybe even some F2 or F3 tornadoes that weren’t found by aerial surveys due to their being in remote areas) that were not reported. Since large tornado outbreaks are so few and far between, and because technology for detecting tornadoes has advanced rapidly during the study period, it isn’t surprising at all that they found a trend. But I don’t believe the trend is anything more than an artifact of increased reporting. The State Climatologist of Illinois agrees, and wrote an article about it:


If we look at the number of stronger tornadoes since 1950 in Illinois, we see a lot of year to year variability. However there is no significant trend over time – either up or down (Figure 1). Stronger tornadoes are identified here as F-1 to F-5 events from 1950 to 2006 (using the Fujita Scale), and EF-1 to EF-5 events from 2007 to 2010 (using the Enhanced Fujita Scale).  By definition, these stronger tornadoes cause at least a moderate amount of damage. See the Storm Prediction Center for a discussion on the original Fujita and Enhanced Fujita (EF) scales.

[Figure 1. Strong tornadoes in Illinois, 1950-2010. The red line represents the linear trend.]
[Figure 2. Weak tornadoes in Illinois, 1950-2010. The red line is a 3rd-order polynomial, representing a smoothed trend line.]
[Figure 3. All tornadoes in Illinois, 1950-2010. The shaded area represents the most accurate era of tornado records.]

What we have seen is a dramatic increase in the number of F-0 (EF-0) tornadoes from 1950 to 2010 (Figure 2). These are the weakest of all tornado events and typically cause little if any damage. These events were overlooked in the early tornado records. The upward trend is the result of better radar systems, better spotter networks, and increased awareness and interest by the public. These factors combined have allowed for a better documentation of the weaker events over time.

If we combine both data sets together, we see the apparent upward trend caused by the increasingly accurate accounting of F-0 (EF-0) tornadoes (Figure 3). As a result, the number of observed tornadoes in Illinois has increased over time, but without an indication of any underlying climate change. In my opinion, the tornado record since 1995 (shaded in yellow) provides the most accurate picture of tornado activity in Illinois. From that part of the record we see that the average number of tornadoes per year in Illinois is now 63.


From a paper by Matthew Westburg:


 

“Monitoring and Understanding Trends in Extreme Storms,” published in the Bulletin of the American Meteorological Society, also has concluded that the United States is not experiencing an increase in the severity of tornadoes. Figure 3 from this paper shows “The occurrence of F1 and stronger tornadoes on the Fujita scale shows no trend since 1954, the first year of near real time data collection, with all of the increase in tornado reports resulting from an increase in the weakest tornadoes, F0.”

[Figure: Reported tornadoes in NWS database from 1950 to 2011. Blue line is F0 tornadoes; red dots are F1 and stronger tornadoes.]

There are multiple reasons to explain why there seems to be an increase in the frequency of tornadoes in Illinois and the whole United States since 1950.

Tornado records have been kept in the United States since 1950. While we are fortunate to have records that date back about 64 years, “the disparity between tornado records of the past and current records contributes a great deal of uncertainty regarding questions about the long-term behavior or patterns of tornado occurrence” (“Historical Records and Trends”). Inconsistent tornado records have made it difficult to identify tornado trends. In the last several decades, scientists have done a better job of making tornado data more consistent and uniform. Over time, this will help scientists identify trends in tornado data. In addition to inconsistent records, changes in reporting systems have had an effect on tornado data and possible trends.

Prior to the 1970s, tornadoes were usually not reported unless they caused substantial damage to property, injuries, or deaths; consequently, there was underreporting of tornadoes. At this time, scientists were able to define what a tornado was, but they struggled to define the intensity of each tornado, in terms of the size and wind speed of each tornado, and therefore they struggled to compare one tornado to another. This changed in 1971 when a meteorologist named Ted Fujita “established the Fujita Scale (F-Scale) for rating the wind speeds of tornadoes by examining the damage they cause” (McDaniel). Once the Fujita Scale was established, scientists were more easily able to classify tornadoes based on their wind speed and the damage they caused.

Source: http://spark.parkland.edu/cgi/viewcontent.cgi?filename=0&article=1060&context=nsps&type=additional


In 2011, I went into great detail on the reporting bias problem with this WUWT essay:

Why it seems that severe weather is “getting worse” when the data shows otherwise – a historical perspective

Note that the authors of the study released today only started their dataset in 1950, just before the dawn of television news in the early 1960s. Television alone accounts for a significant increase in tornado reporting. Now, with cell phones and millions of cameras, Doppler radar, storm chasers, The Weather Channel, the Internet, and 24-hour news, many, many more smaller tornadoes that would have gone unnoticed are being reported than ever before, but the frequency of strong tornadoes has not increased:

[Figure: fig31_tornadoes]

When you only analyse data that goes back to the early days of TV news, where trends in increased reporting begin, you are bound to find an increase in frequency and strength, not just overall but in outbreaks too. As Doswell (2007) notes, authors should be “…careful to avoid asking more of the data than what they legitimately can provide”.

So, what do they do to get around these problems? They add a proxy for actual tornado data! From the paper:

Environmental proxies for tornado occurrence and number of tornadoes per occurrence provide an independent, albeit imperfect, measure of tornado activity for the period 1979–2013 (‘Methods’ section). At a minimum, the environmental proxies provide information about the frequency and severity of environments favourable to tornado occurrence. The correlation between the annual average number of tornadoes per outbreak and the proxy for number of tornadoes per occurrence is 0.56 (Supplementary Fig. 6a). This correlation falls to 0.34, still significant at the 95% level, when the data from 2011 are excluded.

I remain unconvinced that this adds anything useful.
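(One reason for skepticism, which the excerpt itself hints at: a correlation computed over a few dozen years can lean heavily on a single extreme year, which is why the drop from 0.56 to 0.34 when 2011 is excluded matters. A toy illustration follows, with synthetic numbers rather than the paper's observed or proxy series.)

```python
# Illustrative only: one extreme year can carry much of a Pearson correlation.
# Both series are synthetic stand-ins, not the paper's data.
import numpy as np

observed = np.array([10, 11,  9, 12, 13, 11, 14, 12, 13, 40])  # last entry mimics an extreme year
proxy    = np.array([ 9, 12, 10, 11, 14, 12, 13, 13, 12, 35])

r_all = np.corrcoef(observed, proxy)[0, 1]
r_without = np.corrcoef(observed[:-1], proxy[:-1])[0, 1]
print(f"correlation including the extreme year: {r_all:.2f}")
print(f"correlation excluding it:               {r_without:.2f}")
```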

To their credit, they do note the reporting problem, but don’t clearly say how they deal with it.

However, to date, there is no upward trend in the number of reliably reported US tornadoes per year [6]. Interpretation of the US tornado report data requires some caution. For instance, the total number of US tornadoes reported each year has increased dramatically over the last half century, but most of that increase is due to more reports of weak tornadoes and is believed to reflect changing reporting practices and other non-meteorological factors rather than increased tornado occurrence [7]. The variability of reported tornado occurrence has increased over the last few decades with more tornadoes being reported on days when tornadoes are observed [8,9]. In addition, greater year-to-year variability in the number of tornadoes reported per year has been associated with consistent changes in the monthly averaged atmospheric environments favourable to tornado occurrence [10].

I had to laugh at this line:

…with more tornadoes being reported on days when tornadoes are observed

Basically, what I’m saying is that while the statistical math used by Tippett may very well be correct, the data used is biased, incomplete, and inaccurate, and lends itself to the authors finding what they seek. They don’t state that they take reporting bias into account, and that results in a flawed conclusion. Then there’s this about Taylor’s Power Law:

Taylor’s power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power law relationship.
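(In symbols, using my notation rather than a quotation from the paper, the law says

$$\operatorname{Var}(N) = a\,\bigl(\mathrm{E}[N]\bigr)^{b},$$

where N is the count per sampling unit, here tornadoes per outbreak in a given year. Purely random Poisson counts, whose variance equals their mean, give a = b = 1, while the exponent reported for the tornado data is about 4.)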

Tornadoes are not life: they are not self-replicating, they do not give birth, and they don’t cluster the way life does for the benefits of food or protection in numbers. I don’t see how a law designed for predicting cluster populations of living organisms can be applied to inanimate physical phenomena such as tornadoes. While there is support in the literature for Taylor’s Law, aka “fluctuation scaling”, it seems only useful when you have a cleaner dataset, not one that has multiple reporting biases over decades that significantly increase its inhomogeneity. For example, Taylor’s law was successfully used to examine UK crime behavior. But even though we have decades of crime reports in the UK, they don’t vary in intensity. There aren’t “small murders” and “big murders”; there are just murders.

Feynman famously said of science: “The first principle is that you must not fool yourself and you are the easiest person to fool.” I think Tippett et al. have created a magnificent statistical construct with which they have, in fact, fooled themselves.

Addition: some history is instructive. Here are a few major tornado outbreaks in the USA:

In September 1821 there was the New England tornado outbreak

In March 1875 there was the Southeast tornado outbreak

In 1884 there was the Enigma Tornado Outbreak

In 1908 there was the Dixie Tornado Outbreak

In 1932 there was the Deep South tornado outbreak

In 1974 there was the Super Outbreak

In 2011 there was the April 27th Super Tornado Outbreak

In fact, there is quite a list of tornado outbreaks in US history. The majority of tornado outbreaks in US history occurred before there was a good reporting or intensity categorization (Fujita Scale) methodology in place. For Tippett to claim a trend in cluster/outbreak severity ignores all these others, for which there is little data, and focuses only on the period during which reporting and categorization technology and techniques were fast evolving. Thanks to widespread reporting on TV news, the 1974 Super Outbreak became the biggest impetus in the history of the Weather Bureau to improve warning and monitoring technology, and was likely responsible for most of the improvements in data over the next 30 years.

 



63 Comments
rogerknights
March 8, 2016 1:18 am

Hasn’t the contiguous US temp been flat for decades, especially per the reference network?

Omphaloskeptacus
March 8, 2016 1:50 am

Taylor’s law maybe could be applied to the whole field of the propagation of climate-catastrophe-predicting science papers, which of course are produced by living organisms who cluster together for mutual benefit and sharing of resources and who alter their environment (the data) for mutual protection. The analogy could use toilet bowl calculus, which models the results when the volume and density of the modal input exceeds the outflow analyst capacity. The Taylor power coefficient is still not known for this relationship.

March 8, 2016 3:44 am

Without question there are more people experiencing tornadoes. And the damage is going to be greater. Building a lot of houses out in Tornado Alley, what do they think would happen? Building from Cape May to the Atlantic Highlands on the beach, oh, nothing will happen? That didn’t have anything to do with climate and everything to do with plain stupid.

jpatrick
March 8, 2016 4:17 am

We might well have a case of detection bias here. Surely one of the reviewers considered this.

March 8, 2016 5:57 am

I spent several years studying the tornado record as a research meteorologist. Anthony is exactly right about the effect of changes in how tornadoes are spotted on the number of F0 storms. When I was still working in the field, I told this to Tippett and the people at NOAA who fund his “research,” but they don’t care because demonstrating a connection to global warming is all they care about. I hope the insurance industry isn’t dumb enough to believe this nonsense.

McComberBoy
March 8, 2016 7:32 am

Anthony,
Thanks for calling BS once again on the willful blindness of today’s breed of researchers.
If the authors of the paper were truly interested in getting the data for earlier tornadoes, I would think that insurance data might be the place to look, even if they were only weak indicators. Though the country was much more rural and agricultural in the 1950s and ’60s, farmers would still report storm damage from small tornadoes that were never caught on camera or radar. Repairing outbuildings, sheds, or homes for hired hands would still take place and would show up in records for those repairs. Perhaps this could even be spotted in the financials of insurance companies who have a strong presence in the well-known tornado alleys of the Midwest. Years with higher claims in certain areas should be a proxy for storm damage, whether or not there were large population centers hit.
Unfortunately there is a meme to be supported that would be harmed by more accurate tallies of moderate damage. There is obviously no incentive, monetary or otherwise, to find out the truth of moderately severe weather in our past, but perhaps one of the many WUWT regulars with ties to insurance could see what might be revealed in this regard.
pbh

emsnews
Reply to  McComberBoy
March 8, 2016 8:43 am

During the 1930s, many of these people in the Midwest were forced to flee or had zero insurance.

McComberBoy
Reply to  emsnews
March 8, 2016 11:52 am

The study referenced the 1950s, long after the Dust Bowl.

Robert Barry
March 8, 2016 8:39 am

Our eternal struggle of Ego and ID . . .

March 8, 2016 9:36 am

“You must not fool yourself, and you are the easiest one to fool” sums up virtually all poorly constructed statistically dependent conclusions. Unchallenged they fool many others as well. Thanks Anthony for providing the forum to challenge and illuminate what’s significant.

Kevin Kilty
March 8, 2016 11:17 am

Since this thread involves a Wyoming tornado, I will add an anecdote or two about reporting. Prior to the F3 tornado that beat up Cheyenne, Wyoming, in 1979, I could never get anyone excited about my reports of funnel clouds or tornadoes, because tornadoes just did not happen in Wyoming and everyone knew so. Now people see tornadoes everywhere. In the summer of 2011 I heard the warning sirens go off at my house at that time, east of Cheyenne, on several days. On one occasion I stepped onto the porch, had a look at the sky, and then remarked to my wife that the conditions were about as unlikely for a tornado as any I could imagine. The sky consisted of scattered pancake cumulus clouds. The report setting off the sirens was sent in by a highway patrolman, though, and so I suspect it pollutes the records as an honest-to-gosh sighting. On another occasion the radio broadcast warning indicated a tornado, on the ground, within a half mile of my home. I searched in vain for this one. The record is biased by better instrumentation, by increased population, by increased interest in things meteorological, and by highly excitable observers.

McComberBoy
Reply to  Kevin Kilty
March 8, 2016 11:58 am

On the other hand, we had two small tornadoes touch down north of Wheatland, WY in 1982 that would never make a newscast. No cell phones with their readily available cameras. No hysterical breathless reporting.
Similar thing in the central valley of California in November of 2001. Two funnel clouds south of Stockton, but no cell phone video and not much attention.
Anecdotal? Yes. But illustrative of Anthony’s point about the failure of this study to account for the differences in radar, population density, instant reporting ability and the like.

tadchem
March 8, 2016 11:49 am

If the number of tornadoes in any given class (F0/EF0 to F5/EF5) follows a power law such as Taylor’s, as the authors would like us to accept, then the trends in Figures 1 and 2 as supplied by the Illinois State Climatologist should be more parallel.
If we accept the premise of Tippett and Cohen that a Taylor power law is at work, one can only infer that the deficiency in the counts of F0/EF0 tornadoes earlier in the record is due to under-reporting rather than any change in the overall behavior of tornado swarms.
This under-reporting of F0/EF0 tornadoes early in the record leads *directly* to the sag in the trend line of figure (b). Also, the slope of the ‘trend line’ in figure (c) is not statistically significant. We are dealing here with the statistics of ‘rare events’ – a ‘Poisson distribution’ – in which variances calculated on an assumed Gaussian model (a ‘Normal distribution’) are irrelevant. Poisson variances are calculated with the logarithm of the counts. Also, trend lines have their own uncertainties – an envelope of probability – that incorporates uncertainty in both the slope and the mean, producing hyperbolae that bracket the regression line, as seen here: http://blogs.usyd.edu.au/waterhydrosu/2013/09/post.html

March 8, 2016 1:45 pm

Anthony … Myron above touches on the giant elephant in the room regarding severe weather reporting: the massive increase in the SkyWarn Spotter network. As the tech has dramatically improved for mobile users, so too has the number of trained weather spotters.
Every chance of a significant outbreak in recent years will have a throng of spotters chasing it, even in the remotest unpopulated areas.
Very often they are right there as the tornado forms. Unless this radical change in the quantity (and quality) of weather spotters is somehow factored in and addressed, any “count” of severe weather is largely worthless.

March 8, 2016 2:09 pm

An example – spotter locations El Reno storm – Oklahoma City …
http://fox41blogs.typepad.com/.a/6a0148c78b79ee970c019102e8ebef970c-500wi

Ken L.
March 9, 2016 5:12 pm

I’m late to the party on this post, but this is a classic case of comparing apples to oranges in terms of the quality of data. I can verify that anyone who lives in Tornado Alley, as I do in central Oklahoma, will attest that virtually any tornado that occurs during any severe weather event or watch is likely a media event, with real-time views of even brief 30-second touchdowns of weak EF0 twisters in the middle of open country. Scarcely a tornado that occurs is not reported and verified. As an “aged one,” I can also attest that we had no such information available back in the 1950s, when tornado reports came after the damage had been done, and by telephone to the weather service. But never fear – no data? Manufacture it, just as in the case of temperatures before there were thermometers and significant coverage with weather stations – heck, they do that still today for remote areas.
Things are in a sad state that garbage such as this is accepted by peer review as science for publication and then regurgitated by the press to a public that has no reason or knowledge from their perspective to question it. While the situation surely raises my blood pressure, it helps to have a place such as this where I’m allowed occasionally to relieve my stress among kindred and even better educated souls.