From the "is climate change responsible?" department: in a word, no. Here's why: it's a reporting bias, as I document here. More people, more cameras, and more eyes on each event generate more storm reports. And until around 1975, multiple-vortex tornadoes weren't really part of the scientific literature; they hadn't been well documented, and no physical mechanism had been proposed for them until this study and this one, which recreated the phenomenon in the lab (work I helped with from the technology side), came about.
The NOAA Storm Prediction Center says this about multiple-vortex tornadoes:
Multiple Vortex Tornado
Many tornadoes contain smaller, rapidly spinning whirls known as subvortices, or suction vortices; but they are not always as clearly visible as in this big tornado near Altus OK, on 11 May 1982. Suction vortices can add over 100 mph to the ground-relative wind in a tornado circulation. As a result, they are responsible for most (if not all) cases where narrow arcs of extreme destruction lie right next to weak damage within tornado paths. Subvortices usually occur in groups of 2 to 5 at once (the 6 or 7 evident here being uncommon), and usually last less than a minute each. Tornado scientists now believe that most reports of several tornadoes at once, from news accounts and early 20th century tornado tales, actually were multivortex tornadoes. However, on rare occasions, separate tornadoes can form close to one another as satellite tornadoes.
Since they note that they "are not always as clearly visible," it stands to reason that with multitudes of storm chasers being guided to tornadoes ahead of time by Doppler radar technology, and scattered around a tornado viewing it from multiple angles, more clusters of tornadoes would be reported now than before tornado chasing became an adventure sport and a tour-guide business. There's even a petition to ban it.
The petitioner says:
When I went chasing in May 2010 to see tornado alley myself I was surprised at one thing. The storm chasing wasn’t that dangerous due to the storms … it was dangerous because of the amount of people out there on the roads surrounding the storms.
It seems obvious to me that more ground observations are the cause of more observed tornadoes. As for the increased meteorological propensity they cite, I don't think they make their case when statistical modeling is the indicator:
While no significant trends have been found in either the annual number of reliably reported tornadoes or of outbreaks, recent studies indicate increased variability in large normalized economic and insured losses from U.S. thunderstorms, increases in the annual number of days on which many tornadoes occur, and increases in the annual mean and variance of the number of tornadoes per outbreak. In the current study, the researchers used extreme value analysis and found that the frequency of U.S. outbreaks with many tornadoes is increasing, and is increasing faster for more extreme outbreaks. They modeled this behavior using extreme value distributions with parameters that vary to match the trends in the data.
As Ernest Rutherford once said:
“If your experiment needs statistics, you ought to have done a better experiment.”
Increasing tornado outbreaks — is climate change responsible?
Study raises new questions about what climate change will do to tornado outbreaks and what is responsible for recent trends
New York, NY–December 1, 2016–Tornadoes and severe thunderstorms kill people and damage property every year. Estimated U.S. insured losses due to severe thunderstorms in the first half of 2016 were $8.5 billion. The largest U.S. impacts of tornadoes result from tornado outbreaks, sequences of tornadoes that occur in close succession. Last spring a research team led by Michael Tippett, associate professor of applied physics and applied mathematics at Columbia Engineering, published a study showing that the average number of tornadoes during outbreaks–large-scale weather events that can last one to three days and span huge regions–has risen since 1954. But they were not sure why.
In a new paper, published December 1 in Science via First Release, the researchers looked at increasing trends in the severity of tornado outbreaks where they measured severity by the number of tornadoes per outbreak. They found that these trends are increasing fastest for the most extreme outbreaks. While they saw changes in meteorological quantities that are consistent with these upward trends, the meteorological trends were not the ones expected under climate change.
“This study raises new questions about what climate change will do to severe thunderstorms and what is responsible for recent trends,” says Tippett, who is also a member of the Data Science Institute and the Columbia Initiative on Extreme Weather and Climate. “The fact that we don’t see the presently understood meteorological signature of global warming in changing outbreak statistics leaves two possibilities: either the recent increases are not due to a warming climate, or a warming climate has implications for tornado activity that we don’t understand. This is an unexpected finding.”
The researchers used two NOAA datasets, one containing tornado reports and the other observation-based estimates of meteorological quantities associated with tornado outbreaks. “Other researchers have focused on tornado reports without considering the meteorological environments,” notes Chiara Lepore, associate research scientist at the Lamont-Doherty Earth Observatory, who is a coauthor of the paper. “The meteorological data provide an independent check on the tornado reports and let us check for what would be expected under climate change.”
U.S. tornado activity in recent decades has been drawing the attention of scientists. While no significant trends have been found in either the annual number of reliably reported tornadoes or of outbreaks, recent studies indicate increased variability in large normalized economic and insured losses from U.S. thunderstorms, increases in the annual number of days on which many tornadoes occur, and increases in the annual mean and variance of the number of tornadoes per outbreak. In the current study, the researchers used extreme value analysis and found that the frequency of U.S. outbreaks with many tornadoes is increasing, and is increasing faster for more extreme outbreaks. They modeled this behavior using extreme value distributions with parameters that vary to match the trends in the data.
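To illustrate what "extreme value distributions with parameters that vary to match the trends in the data" can look like in practice, here is a minimal, hypothetical sketch (not the authors' code or their actual model) of fitting a generalized extreme value distribution whose location parameter drifts linearly with year, using synthetic data and maximum likelihood:

```python
# Hypothetical sketch: fit a GEV distribution whose location parameter
# drifts linearly with time, by maximum likelihood. This is NOT the
# authors' code; it only illustrates the general idea of an extreme
# value model with trend-varying parameters.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Fake data: yearly "tornadoes per outbreak" extremes with an upward drift.
years = np.arange(1954, 2015)
t = years - years.mean()
y = genextreme.rvs(c=-0.2, loc=20 + 0.15 * t, scale=8, random_state=rng)

def neg_log_lik(params):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, log_scale, shape = params
    loc = mu0 + mu1 * t
    scale = np.exp(log_scale)  # keep the scale parameter positive
    ll = genextreme.logpdf(y, c=shape, loc=loc, scale=scale)
    return -np.sum(ll) if np.all(np.isfinite(ll)) else np.inf

fit = minimize(neg_log_lik, x0=[y.mean(), 0.0, np.log(y.std()), -0.1],
               method="Nelder-Mead")
mu0, mu1, log_scale, shape = fit.x
print(f"fitted trend in location: {mu1:.3f} per year (true value here: 0.15)")
```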
Extreme meteorological environments associated with severe thunderstorms showed consistent upward trends, but the trends did not resemble those currently expected to result from global warming. They looked at two factors: convective available potential energy (CAPE) and a measure of vertical wind shear, storm relative helicity. Modeling studies have projected that CAPE will increase in a warmer climate leading to more frequent environments favorable to severe thunderstorms in the U.S. However, they found that the meteorological trends were not due to increasing CAPE but instead due to trends in storm relative helicity, which has not been projected to increase under climate change.
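For readers unfamiliar with storm relative helicity, here is a small, hypothetical sketch (not from the paper, with made-up numbers) of the standard discrete approximation to SRH over the lowest few kilometers, given wind components on height levels and an assumed storm motion:

```python
# Hypothetical illustration of storm relative helicity (SRH), the shear
# quantity whose trend the study found to match the outbreak trends.
# SRH is the signed area swept out on the hodograph by the storm-relative
# wind; the usual discrete form over layers k is
#   SRH ~ sum_k [(u_{k+1}-cx)(v_k-cy) - (u_k-cx)(v_{k+1}-cy)]
import numpy as np

def storm_relative_helicity(u, v, z, storm_u, storm_v, depth=3000.0):
    """0-`depth` m SRH (m^2/s^2) from winds u, v (m/s) at heights z (m AGL)."""
    u, v, z = map(np.asarray, (u, v, z))
    mask = z <= depth
    u, v = u[mask] - storm_u, v[mask] - storm_v   # storm-relative wind
    # sum of signed areas between successive hodograph segments
    return float(np.sum(u[1:] * v[:-1] - u[:-1] * v[1:]))

# Example with an invented veering wind profile and assumed storm motion:
z = np.array([0, 500, 1000, 1500, 2000, 2500, 3000])     # m AGL
u = np.array([2.0, 6.0, 10.0, 13.0, 15.0, 17.0, 18.0])   # m/s
v = np.array([8.0, 10.0, 9.0, 7.0, 5.0, 3.0, 2.0])       # m/s
print(storm_relative_helicity(u, v, z, storm_u=9.0, storm_v=4.0))
```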
“Tornadoes blow people away, and their houses and cars and a lot else,” says Joel Cohen, coauthor of the paper and director of the Laboratory of Populations, which is based jointly at Rockefeller University and Columbia’s Earth Institute. “We’ve used new statistical tools that haven’t been used before to put tornadoes under the microscope. The findings are surprising. We found that, over the last half century or so, the more extreme the tornado outbreaks, the faster the numbers of such extreme outbreaks have been increasing. What’s pushing this rise in extreme outbreaks is far from obvious in the present state of climate science. Viewing the thousands of tornadoes that have been reliably recorded in the U.S. over the past half century or so as a population has permitted us to ask new questions and discover new, important changes in outbreaks of these tornadoes.”
Adds Harold Brooks, senior scientist at NOAA’s National Severe Storms Laboratory, who was not involved with this project, “The study is important because it addresses one of the hypotheses that has been raised to explain the observed change in number of tornadoes in outbreaks. Changes in CAPE can’t explain the change. It seems that changes in shear are more important, but we don’t yet understand why those have happened and if they’re related to global warming.”
Better understanding of how climate affects tornado activity can help to predict tornado activity in the short-term, a month, or even a year in advance, and would be a major aid to insurance and reinsurance companies in assessing the risks posed by outbreaks. “An assessment of changing tornado outbreak size is highly relevant to the insurance industry,” notes Kelly Hererid, AVP, Senior Research Scientist, Chubb Tempest Re R&D. “Common insurance risk management tools like reinsurance and catastrophe bonds are often structured around storm outbreaks rather than individual tornadoes, so an increasing concentration of tornadoes into larger outbreaks provides a mechanism to change loss potential without necessarily altering the underlying tornado count. This approach provides an expanded view of disaster potential beyond simple changes in event frequency.”
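A hypothetical toy calculation (all figures invented, not Chubb's or the paper's numbers) can make this concrete: the same annual tornado count, concentrated into fewer but larger outbreaks, produces very different losses above a per-outbreak trigger of the kind reinsurance contracts and catastrophe bonds are often structured around:

```python
# Hypothetical toy model (invented numbers): the same total number of
# tornadoes per year, either spread over many small outbreaks or
# concentrated into a few large ones. A per-outbreak trigger of the kind
# used in reinsurance/cat-bond structures responds only to the large ones.
import numpy as np

rng = np.random.default_rng(1)
LOSS_PER_TORNADO = 5e6   # assumed average insured loss per tornado (USD)
TRIGGER = 100e6          # assumed per-outbreak attachment point (USD)

def annual_triggered_loss(tornadoes_per_outbreak):
    """Total loss above the per-outbreak trigger for one simulated year."""
    losses = np.asarray(tornadoes_per_outbreak) * LOSS_PER_TORNADO
    return float(np.sum(np.maximum(losses - TRIGGER, 0.0)))

# Scenario A: ~500 tornadoes in 50 outbreaks of roughly 10 each.
spread_out = rng.poisson(10, size=50)
# Scenario B: the same expected count concentrated into 10 outbreaks of ~50.
concentrated = rng.poisson(50, size=10)

print("spread out:  ", annual_triggered_loss(spread_out))
print("concentrated:", annual_triggered_loss(concentrated))
```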
Tippett notes that more studies are needed to attribute the observed changes to either global warming or another component of climate variability. The research group plans next to study other aspects of severe thunderstorms such as hail, which causes less intense damage but is important for business (especially insurance and reinsurance) because it affects larger areas and is responsible for substantial losses every year.
###
The study was partially funded by Columbia University Research Initiatives for Science and Engineering (RISE) award; the Office of Naval Research; NOAA’s Climate Program Office’s Modeling, Analysis, Predictions and Projections; Willis Research Network; and the National Science Foundation.
LINKS
PAPER: http://science.sciencemag.org/lookup/doi/10.1126/science.aah7393
If the mainstream media were truly balanced, they would be ripping this claim, and many other CAGW claims, to shreds. They'll rip anything that is contrary to the meme, but go silent when crap like this is published. Do the believers in CAGW who frequently come to this blog agree with this, or do they have some explanation the rest of us would like to hear?
The only good clusters are almond clusters. Every other use of the word seems designed to scare people.
So does the fact that the number of tornadoes is down not matter?
That fact is useless to their purposes.
Multiple-vortex tornadoes aren’t responsible for the changes. A multiple-vortex tornado is still one tornado in the database. Reporting changes are unlikely to be responsible for the observed changes (fewer days with at least one F1+ tornado, more days with many F1+ tornadoes), with the net result being no trend in the annual number of F1+ tornadoes (~500/year) from 1954 to the present. Those changes have been known in the literature for a few years.
Those changes are not the main result of this paper. The main result here is in addressing the proposal from Elsner et al. (2014) that the clustering on the big days is because of changes in CAPE. CAPE has increased and is expected to increase more as the planet continues to warm. What Tippett et al. have shown is that changes in CAPE are not consistent with the changes in tornado reports. The changes in reports on big days are associated with changes in storm-relative helicity. From a forecasting standpoint, that makes sense, but it’s not clear how that’s related to a warming planet, so Tippett et al. suggest that either some physical process linking the helicity changes to the warming planet must exist, or the change in the distribution of tornadoes is related to something other than the planet warming.
Harold Brooks,
Several inquiries. -How was SRH calculated in 1965 compared to today? The number of meteorological inputs has changed significantly, has it not?
-Why did they decide to use EF1 and not EF2 as their lower bound? It seems to me that more EF1s are being rated as such today that might have been completely ignored in 1965 or folded into one longer-track tornado. Which raises a larger question: what were the procedures/directives for damage surveys in 1965, and how have they changed today?
-How many individuals were conducting damage surveys and how much money was spent on surveys in 1965 compared to today?
-How many more buildings and structures that could be damaged are there today compared to 1965, and was this taken into account? For example, there are 260 million road vehicles today compared to 92 million in 1965; that's far fewer of just one type of object that could be damaged or destroyed then compared to today. Thank you.
SRH wasn’t calculated in 1965. Neither was CAPE. They didn’t exist then. The calculations are done in this case using the North American Regional Reanalysis, which uses all of the observations available at the time, runs them through the analysis package that would be used to initialize current numerical weather prediction models and creates vertical profiles of the atmosphere. Those profiles match well at collocated radiosonde sites.
(E)F1 is typically used as the lower bound because the decision that a tornado is at least (E)F1 (causing some minimal damage) has been relatively consistent. This results in no long-term trend in the annual number of (E)F1+ tornadoes (~500/year). It’s not perfect, but it’s much better than using (E)F2+. No tornadoes had damage ratings assigned in 1965. The vast majority of pre-~1977 tornadoes were rated by undergraduate students in the late ’70s using text descriptions of the damage (Fujita’s group had done some assessments for 1970-1975). Several lines of evidence indicate that the students overrated tornadoes at F1 and above: the sudden change in many tornado statistics (path length, number per day), the environmental conditions, and Tom Grazulis’s work uncovering pictures of the damage and assessing them independently. The overrating problem for F1+ in the first 20+ years of the dataset has been known for about 25 years, but there’s no simple fix.
Many of the statistics of individual (E)F1+ tornadoes have not changed through the years (width is a big exception). What has changed in the last few decades is a slow trend toward fewer days with at least one (E)F1+ tornado and an increase in the number of days with a lot of (E)F1+ tornadoes, as well as increased variability in the timing of the early part of the national season (primarily driven by occurrences in the Southeast) and a shift of the season earlier in the year in the Plains. It’s really hard to explain those behaviors by invoking reporting changes. The first two (days and number per day) would be expected to change in tandem if it were a reporting issue, and there’s nothing about the variability of the timing that would be expected to change with reporting.
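For readers who want to check those report statistics themselves, a minimal sketch along these lines would do it, assuming a CSV export of the SPC tornado database with columns named date and mag (the real file's column names, and the threshold for a "big day", may differ):

```python
# Hypothetical sketch of the bookkeeping described above: annual (E)F1+
# counts, days with at least one (E)F1+ tornado, and days with many.
# Assumes a CSV with columns `date` (tornado date) and `mag` (F/EF rating);
# the 30-per-day "big day" cutoff is an arbitrary choice for illustration.
import pandas as pd

tor = pd.read_csv("tornadoes.csv", parse_dates=["date"])   # assumed path
f1plus = tor[tor["mag"] >= 1]                               # (E)F1 and up

daily = f1plus.groupby(f1plus["date"].dt.date).size()       # tornadoes per day
daily.index = pd.to_datetime(daily.index)

per_year = pd.DataFrame({
    "tornadoes": daily.groupby(daily.index.year).sum(),         # ~flat, ~500/yr
    "days_with_any": daily.groupby(daily.index.year).size(),    # slow decline
    "days_with_30plus": (daily >= 30).groupby(daily.index.year).sum(),  # rising
})
print(per_year.tail())
```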
Again, Tippett’s work is significant because it takes the previously reported changes in occurrence and looks at how environments have changed, finding that the changes in helicity correlate well with the changes in occurrence, but the changes in CAPE don’t. Changes in CAPE, which increases in the mean as the planet warms, had been proposed as the explanation for the occurrence changes. Tippett’s work makes that explanation extremely unlikely so that we’re left with two choices: 1) the changes aren’t associated with warming (maybe with one of the slowly evolving teleconnections), or 2) the changes in helicity are associated with warming in a way that’s not completely understood at this point.
Thanks for answering some of my inquiries. Do you know if any adjustments were made in the reanalysis with respect to time (year, decade) for SRH? It seems like more measurements would increase SRH maxima, i.e., compact shortwaves, MCVs, etc., that wouldn't be resolved well using a sparser observation network. Thanks. -Jon
Not for SRH, specifically. Error characteristics of observation systems are taken into account in the data assimilation to create the state variables (temperature, mixing ratio, components of the wind, pressure) and then calculations of quantities of interest are done on those variables. Over the lifetime of the NARR (the reanalysis used), surface and upper air observation networks have not gotten more dense. It doesn’t use radar data and it begins when the modern satellite era begins (1979).