![wea00246[1]](http://wattsupwiththat.files.wordpress.com/2013/09/wea002461.jpg?w=300&resize=300%2C225)
Problem is, tornado formation, being highly chaotic, can’t be interpolated, infilled, and adjusted as easily as temperature data can. Just because a tornado occurred in two places doesn’t automatically mean there was an unreported one in between them. Thunderstorm cell formation is micro- to mesoscale in size, meaning tornadoes are highly local, and not all cells produce tornadoes, even in a line of tornado-prone cells along a front. They’ll have to make up reports out of whole cloth, in my opinion. Interpolation of tornado sighting data just isn’t sensible, but they are going to try anyway:
Their model calls for the reported number in rural areas to be adjusted upward by a factor that depends on the number of tornadoes in the nearest city and the distance from the nearest city.
Also, in my opinion, this is statistical madness.
From an FSU press release, by Jill Elish
Twister history: FSU researchers develop model to correct tornado records for better risk assessment
In the wake of deadly tornadoes in Oklahoma this past spring, Florida State University researchers have developed a new statistical model that will help determine whether the risk of tornadoes is increasing and whether they are getting stronger.
Climatologists have been hampered in determining actual risks by what they call a population bias: That is, the fact that tornadoes have traditionally been underreported in rural areas compared to cities.
Now, FSU geography Professor James B. Elsner and graduate student Laura E. Michaels have outlined a method that takes the population bias into account, as well as what appears to be a recent surge in the number of reported tornadoes, thanks in part to an increasing number of storm chasers and recreational risk-takers roaming Tornado Alley.
Their model is outlined in the article “The Decreasing Population Bias in Tornado Reports across the Central Plains,” published in the American Meteorological Society’s journal Weather, Climate, and Society. The model offers a way to correct the historical data to account for the fact that there were fewer reports in previous decades. In addition to Elsner and Michaels, Kelsey N. Scheitlin, an assistant professor at the University of Tennessee at Knoxville, and Ian J. Elsner, a graduate student at the University of Florida, co-authored the paper.
“Most estimates of tornado risk are probably too low because they are based on the reported number of tornadoes,” Elsner said. “Our research can help better quantify the actual risk of a tornado. This will help with building codes and emergency awareness. With our research, the science of tornadoes can move forward to address questions related to whether cities enhance or inhibit tornadoes.”
Although other researchers have proposed methods to address the population bias, all of them assume the bias is constant over time, Elsner said. This model is the first to take into consideration how the population bias has changed over time.
Historically, the number of reported tornadoes across the premier storm-chase region of the central Plains is lowest in rural areas. However, the number of tornado reports in the countryside has increased dramatically since the 1970s and especially since the 1996 release of the disaster movie “Twister.” The movie spawned a generation of storm chasers who are partially responsible for more tornado reports, Elsner said.
Interestingly, Elsner’s model was developed after he led a team of undergraduate and graduate students on a storm-chasing mission of their own.
“While we were driving around the Great Plains looking for storms, I challenged my students to think about how the historical data could be used to better estimate the risk of getting hit by a tornado,” he said. “The observations of other chasers and the geographic spacing of towns led us to our model for correcting the historical record.”
In addition to more storm chasers logging tornado sightings, greater public awareness of tornadoes and advances in reporting technology, including mobile Internet and GPS navigating systems, may also have contributed to the increase in reports over the past 15 to 20 years.
The increase in reports has diminished the population bias somewhat, but it introduced a second problem: There are more reports, but are there also, in fact, more tornadoes? In other words, is the risk actually increasing?
To address these issues, the FSU researchers first made the assumption that the frequency of tornadoes is the same in cities as in rural areas. They also operated on the assumption that the reported number of tornadoes in rural areas is low relative to the actual number of tornadoes.
Their model calls for the reported number in rural areas to be adjusted upward by a factor that depends on the number of tornadoes in the nearest city and the distance from the nearest city. The model shows that it is likely that tornadoes are not occurring with greater frequency, but there is some evidence to suggest that tornadoes are, in fact, getting stronger.
“The risk of violent tornadoes appears to be increasing,” Elsner said. “The tornadoes in Oklahoma City on May 31 and the 2011 tornadoes in Joplin, Mo., and Tuscaloosa, Ala., suggest that tornadoes may be getting stronger.”
The Oklahoma City tornado on May 31, 2013, was the largest tornado ever recorded, with a path of destruction measuring 2.6 miles in width. The Tuscaloosa and Joplin tornadoes are two of the most deadly and expensive natural disasters in recent U.S. history.
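For concreteness, here is a minimal Python sketch of what a distance-weighted, city-anchored adjustment of the kind described above might look like. The exponential weighting and the `decay` length scale are my illustrative assumptions, not the actual functional form in the Elsner et al. paper.

```python
import math

def adjusted_rural_count(reported_rural, nearest_city_count, distance_km, decay=50.0):
    """Inflate a rural tornado count toward the nearest city's count.

    Hypothetical functional form: the adjustment weight decays toward zero
    as distance from the nearest city grows (decay is an assumed length
    scale in km). The published model may differ substantially.
    """
    weight = math.exp(-distance_km / decay)  # ~1 near the city, ~0 far away
    shortfall = max(nearest_city_count / max(reported_rural, 1) - 1.0, 0.0)
    return reported_rural * (1.0 + weight * shortfall)

# Example: 4 reported rural tornadoes, 10 reported in a city 30 km away.
print(adjusted_rural_count(4, 10, 30.0))  # ~7.3
```

Note that under any form like this, the inflation is largest exactly where the historical record is sparsest, which is the circularity several commenters object to below.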
“from 1900 to 2000 you count 1 lighting strikes per month. thats 12 per year since 1900.”
If, while reading the above, you thought “from nineteen hundred to two thousand…”, just slap yourself in the face, hard, twice.
I don’t know about tornados, but we had these amazing tornadic waterspouts form over Lake Michigan today, just off the shore of Kenosha, WI! Clearly, carbon dioxide is the cause….
Tornado undercounting is restricted to EF1 and below (where it is a marked effect).
EF2 and higher is historically consistent, and puts the lie to the idea that storms are more frequent, or that more powerful storms are becoming more frequent.
That is a myth propagated by modellers that simply does not happen in the real world.
http://climategrog.wordpress.com/?attachment_id=255
They’d do better to adjust the models than the data.
If nobody above has already said this, pardon me.
If you do an artificial adjustment along these lines, you are adjusting out any real variation that might be there. Suppose the natural variability was 3 events a month (or whatever) in 1900 and 5 events in 2000. If you inflate the past to 5 in 1900, keep 5 in 2000, and infill, you are assuming that there is NO NATURAL VARIABILITY.
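A minimal numeric sketch of that point, using the hypothetical 3-and-5 figures above:

```python
past_observed = 3     # hypothetical events per month in 1900
present_observed = 5  # hypothetical events per month in 2000

# "Correct" the past up to present-day reporting levels.
factor = present_observed / past_observed
past_adjusted = past_observed * factor

print(present_observed - past_observed)   # raw trend: 2 events/month
print(present_observed - past_adjusted)   # adjusted trend: 0.0, by construction
```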
If you adjust to give no natural variability, what the heck is your reason to do an analysis?
You know the answer before you start.
Maybe the 1900 observations were correct and there is a real trend in this imaginary example. No reason to infill.
So why do it? No reason in any case.
Useless as tits on a bull.
Steven Mosher says:
September 12, 2013 at 12:29 pm
……
“Adjusting” the past data is the wrong terminology. What they are doing is estimating the number in the past that would have been observed had the current observation system been in place.
Steven,
I am disappointed that you can make such a statement.
What they are doing is ‘guessing’. They are assuming that tornado outbreaks in the past had the same spread of intensities and numbers as they do now. So if today there is a large F5 tornado and the season had n smaller tornadoes, they assume the same distribution of tornadoes existed in past seasons. This is a totally unscientific assumption and has no basis in meteorology. There is no ‘standard distribution’ of tornadoes – geographic, strength or temporal – that allows this type of infilling, as their occurrence is chaotic. So the good professor, his students, and now you, are inventing relationships that do not exist. This cannot in any way be called scientific, and the output is pure guesswork modeled by parameterized software.
steven says:
September 12, 2013 at 12:11 pm
“What do you get when you cross a tornado and a hockeystick?”
Ish calld a swizzleschtick old bean. ‘nother Pimms?
Mosher — “‘Adjusting’ the past data is the wrong terminology. What they are doing is estimating the number in the past that would have been observed had the current observation system been in place.”
Yes, they are estimating past unobserved observables *by adjusting* the count of past observed observables. Now, quite obviously, this is not an empirical experiment. We cannot empirically observe unobserved observables that have already gone and went. So it is either a valid theory or simply not science.
As to adjusting historical data or future data for the purpose of attaining continuity in measurements: Then it’s nothing more than a reference frame quibble. If their math is valid you can raise the past or lower the future. The math doesn’t care which. Pragmatically, however, it then requires that you update *every past tinkered estimate* on the production of each new future observable. Which is a little bit stupid. Other than that and attempts at deceitful chart porn, it is wholly irrelevant.
Is it a valid theory? Sure, same problems with calling a duck a duck, though. If they claim it is something other than a theory, then it’s pure bollocks. But it is a valid theory, albeit a non-replicable one. This is not the end of the universe, of course; we simply cannot replicate on demand. It requires that we *wait* and *watch* numerous tornado seasons go by, and see how the observed observables jibe with their theory. If the past values end up being adjusted by future observables, then the theory is knackered. Just how knackered requires a host of future seasons to go by — without altering the theory — to know.
But until that point it is a pure theory that has *yet* to be legitimately tested. There is no need to give it any belief to the affirmative or negative. It’s simply idle trivia that does not warrant futzing around with actuarial tables or Tornado credits. Oddly enough, you might recognize that these statements apply to the vast raft of climate science.
Nothing wrong with your notions as posted generally and I don’t mean to dog you on it. Just adding in the little things I thought should be mentioned.
“Reporting” – what does that mean? There is no serious statistical analysis of the observation methodology (or even of how to do it). Every time we get a severe thunderstorm up here we get lots of scud reported as “wall clouds”, primarily because we get a few good tornadoes every few years, and thereafter, for the entire severe weather season, we get lots of reports – every bit of storm damage becomes a tornado. People still have trouble reading the sky, especially straight-line gust fronts, which blow down trees and knock over sheds regularly here. Are funnel clouds under-reported? Very likely. Radar observations here often show mesocyclonic rotation that occasionally drops an F0 or F1 twister, and you can bet there are lots of “F1” dust-ups. In the past most of these were viewed as just a “thunderstorm” and ignored. Now they’re “extreme weather”.
We’ve had a noticeable uptick in tornado warnings here, mostly as a result of Environment Canada’s perceived past failures to warn. People today don’t seem to know what their forebears knew: bad thunderstorms can knock you down and beat you up. Every time it happens, people are running around screaming “why didn’t you warn us?” Two days ago we had the provincial emergency management office issuing tornado warnings where none existed (and none had been issued by Environment Canada), causing local emergency response units to activate their call-out tree. I’m sure many civilians “saw” tornadoes during that thunderstorm.
Increased severe weather, or increased “severe weather hysteria”?
BTW – what happened to preview?
“With our research, the science of tornadoes can move forward to address questions related to whether cities enhance or inhibit tornadoes.”
================
nonsense. you are adjusting tornado counts upwards based on proximity to cities, which will artificially skew the question of cities’ contribution. depending on the amount of adjustment, you can control whatever answer you get; thus you will get the answer you subconsciously expect.
Radical Rodent says:
September 12, 2013 at 3:32 pm
Similarly, in the UK, more and more homes are being built on the flood plains of rivers. Well, guess what – the number of homes affected by flooding is increasing!
==========
farmers historically built their homes on the hill, and farmed the lowlands alongside the river. knowing full well that every now and then the river flooded the lowlands, and the silt thus deposited is what made for great farming.
eventually the farms were sold for development and parceled up for home building. The new owners, not being farmers, were only too keen to pay a premium for waterfront homes alongside the river.
_Jim says:
September 12, 2013 at 5:17 pm
I still have to disagree. Of course, if you have a decent set of barometric readings, wind readings, temperature values, etc. – over a closely spaced/gridded area – you may well be able to INFER that conditions were or were not ‘favourable’ for the formation of tornadoes, which I assume is what you are alluding to. However, adjusting data to fit any such inference is still completely false. For a start, you have no way of indicating uncertainty levels in your ‘inferences’. In such observational data, an event happens, or it doesn’t – there isn’t really any halfy-halfy state. Think binary here. What would you do? Invent a system which says that if 50% of the conditions for possible tornado formation are met, one didn’t form – or vice versa, that when 51% of the conditions are met, there was an ‘unseen’ or ‘unrecorded’ tornado? Sorry, but IMHO that is all complete BS and fabrication.
(and yes, we have weather fronts, and I understand them very well! – but no, we don’t really do tornadoes!)
Looking at it, the purpose and reasoning seem sound, but the methodology is backwards. What they should do is map all the current tornadoes and all the past tornadoes. Then they should find the blind spots in the past record and put blinders on those same areas for present tornadoes. This way, we can easily compare the two time periods.
True, this will make it difficult to create a single, pretty chart, but it will involve real data and subsets of such, rather than making things up whole-cloth.
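A minimal sketch of that “blinders” approach, using made-up gridded counts; treating grid cells with no historical reports as the blind spots is my own simplification of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = (20, 20)

past_counts = rng.poisson(0.4, grid)     # sparse historical reports (synthetic)
present_counts = rng.poisson(0.8, grid)  # denser modern reports (synthetic)

# Blind spots: grid cells with no historical reports at all.
observed_mask = past_counts > 0

# Put the same blinders on the modern data before comparing eras.
present_blindered = np.where(observed_mask, present_counts, 0)

print("past total:            ", past_counts.sum())
print("present total, raw:    ", present_counts.sum())
print("present total, masked: ", present_blindered.sum())
```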
You would think that if we can image the tracks of dust devils on Mars, it should be fairly simple to look for similar tracks in satellite images and just count them. It would require far less guesswork.
If the observed data isn’t falling your way, make shtuff up with zero-skill modelers and their models. Is it any wonder mothers won’t let their children grow up to be climate scientists?
Someone with library research skills will have to verify this, but after the tornado swarm of April 3, 1974, the speculation was that it was caused by the airplane carrying Vice President Ford to the opening-day game in Cincinnati. The plane was running late and had to fly faster than usual to get to the game on time. Nature is always the fault of some human, usually on the political right. Statistics show this as much then as now.
I am about to embark on a statistical modelling project that proves all dogs have five legs. If I can manage to get the words “Climate Change” into the title, what are the chances of a government grant?