Smith cited recent reductions in the lead times for twisters and suggested that the frequency at which warnings are correctly issued for them — known as the probability of detection — was a problem. He suggests that one way to improve is to create a review board, modeled after the National Transportation Safety Board, that would independently collect information and evaluate warning performance. I would agree with this assessment from Smith if this were indeed a problem.
Weather Service tornado warning statistics are a multifaceted reflection of nuanced issues. Most importantly, as I will describe, focusing on warning lead time or accuracy misses the most pressing issues related to societal impacts from tornadoes.
For evaluating tornado warnings, one should examine them in the context of basic decision theory. This includes the understanding of tornado warning false alarms (i.e., instances when a tornado warning is issued, but no tornado is reported) and the value of a tornado warning from a cost-loss model. Tornado warning metrics can be boiled down into a 2 x 2 matrix with four potential outcomes: hit, miss, false alarm and correct null. (A correct null simply means no warning was issued and no tornado occurred.)
The goal with tornado warnings is to maximize hit and correct null events while minimizing misses and false alarms.
Since tornadoes are rare events, it is easy to be quite accurate at the tornado warning desk. Simply forecast that a tornado will never occur! In this case, correct null events will dominate your sample, and you will have high accuracy. Probability of detection, however, will be zero because a tornado warning was never issued.
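This accuracy paradox is easy to see with numbers. A minimal sketch, using invented counts (not actual Weather Service statistics) for a hypothetical "never warn" forecaster:

```python
# Illustrative 2x2 contingency table for a "never warn" strategy.
# Counts are hypothetical; because tornadoes are rare, correct nulls dominate.
hits = 0               # warning issued, tornado occurred
misses = 50            # no warning issued, tornado occurred
false_alarms = 0       # warning issued, no tornado
correct_nulls = 10000  # no warning issued, no tornado

total = hits + misses + false_alarms + correct_nulls
accuracy = (hits + correct_nulls) / total  # fraction of all calls that were correct
pod = hits / (hits + misses)               # probability of detection

print(f"accuracy = {accuracy:.3f}")  # ~0.995: looks excellent on paper
print(f"POD = {pod:.3f}")            # 0.000: every tornado was missed
```

The headline accuracy is near perfect even though not a single tornado was warned for, which is why detection and false-alarm metrics, not raw accuracy, are what matter at the warning desk.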
Smith is correct that the probability of detection for all tornadoes has decreased since the 2011 Joplin tornado, although it is not that simple. Weather Service regions raised their warning thresholds from 2012 to 2013 in an effort to reduce false alarms, which are not captured by the probability of detection calculation. The result of this change has been to miss one tornado rated EF3 or higher (on the 0 to 5 Enhanced Fujita scale for tornado damage) every three years in exchange for about 90 fewer tornado false alarms every year. This could easily be interpreted as an improvement, depending on your cost-loss model.
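Whether that trade reads as an improvement depends entirely on the costs one assigns to each outcome. A hypothetical cost-loss comparison, using the trade described above but with dollar figures that are entirely invented for illustration:

```python
# Hypothetical cost-loss comparison of the stricter post-2012 thresholds.
# Trade from the article: ~90 fewer false alarms per year in exchange for
# roughly one missed EF3+ tornado every three years. Dollar figures below
# are invented assumptions, not real estimates.
false_alarms_avoided_per_year = 90
missed_ef3_per_year = 1 / 3

cost_per_false_alarm = 50_000     # assumed: disruption, warning fatigue
loss_per_missed_ef3 = 10_000_000  # assumed: casualties, unwarned damage

net = (false_alarms_avoided_per_year * cost_per_false_alarm
       - missed_ef3_per_year * loss_per_missed_ef3)
print(f"net annual benefit of stricter thresholds: ${net:,.0f}")
```

With these assumed numbers the change is a net benefit; double the assumed loss per missed EF3+ tornado and the sign of the trade flips. The statistics alone cannot settle the question.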
The major issue with Smith’s interpretation of the warning statistics is related to how the probability of detection has changed as a function of EF scale rating. The reduction in probability of detection and associated lead time is almost entirely tied to challenges in warning for weak tornadoes.
Separating the pre- and post-2012 periods, there has been only a one percent (nearly unchanged) drop in detection for the more intense tornadoes, rated EF3 or higher. Most of the decrease in detection has been for weaker tornadoes.
(Ironically, Smith has argued on Twitter that low-end severe weather situations do not merit Weather Service warnings and that they should only be issued for the most extreme life-threatening events, despite the agency’s mission to protect life and property.)
From my perspective, the reduction in probability of detection for weak tornadoes largely reflects efforts to provide warning for what are often short-lived, weaker tornadoes embedded within thunderstorm squall lines or what meteorologists call quasi-linear convective systems. They are much less straightforward to warn for than rotating thunderstorms, or supercells, which can produce longer-lived and better-defined tornado signatures.
Quasi-linear convective system tornadoes, or QLCS tornadoes for short, can complete an entire life cycle in a matter of one or two Doppler radar scans. They are now easier to detect with dual polarization radar capabilities but tricky to provide warning for because they often develop and dissipate so quickly. Dual polarization radar capabilities fully came online in 2013, further complicating the assessment of warning statistics.
If the goal, as Smith suggests, is to increase tornado warning lead times, this could be achieved by simply focusing less on reducing false alarms. Forecasters would miss fewer tornadoes if they had more latitude to pull the warning trigger even when a tornado may not materialize, increasing the probability of detection. Higher probability of detection would lead to an increase in lead times (they are essentially mathematically related). However, depending on your cost-loss perspective, a couple more minutes of lead time in exchange for a big increase in false alarms is probably not a desired trade-off.
I absolutely acknowledge that some Weather Service tornado warnings could be improved. In addition to the cases Smith lists from 2021, there are many warnings (both tornado and severe) that generally leave me befuddled.
Forecasters in the Weather Service are some of the best in the business and deserve high praise for their efforts, especially during times of active severe weather (imagine the stress of being in the life-or-death decision-making business!). Yet, just like the rest of us, they make mistakes with imperfect data and, with post-event assessment and continued training, will improve.
Unlike the NTSB model that Smith references, the entire play-by-play for tornado warnings is not stored in a black box. Anyone can see the process unfold in real time, which creates a Twitter stadium reaction during most significant events. Moreover, sometimes poor storm-spotter reports to the Weather Service prompt unnecessary tornado warnings. Even seasoned storm chasers can be confused by what they are seeing, making the warning forecaster’s job even more difficult.
Looking ahead, the Weather Service is on the right track in many respects; it has conducted significant research and solicited stakeholder input for developing new warning techniques. For example, the National Severe Storms Laboratory is testing the “Threats-In-Motion” concept as a part of the Forecasting a Continuum of Environmental Threats (FACETs) framework, which will eventually help change the way watches and warnings are issued.
In addition to these and other efforts, my suggested path forward for additional warning improvement centers on forecaster training, repetition and a dedicated team of warning forecasters.
I have many friends and colleagues in the Weather Service who may go years between issuing tornado warnings because tornadoes are not terribly common in their forecast area.
Some of the best warning forecasters will practice on live events unfolding outside their own area, and updated training workshops and warning simulators help keep some forecasters fresh. However, the amount of training varies widely by Weather Service office.
Many may disagree, but if we desire to see significant improvement in national tornado warnings (and severe thunderstorm warnings for that matter), we should at least consider testing a national (or at the very least regional) issuance model, where all warnings are issued by a group of dedicated experts at one centralized hub. It would take a large expert team of warning forecasters, but the repetition, focus, and attention to detail would be unmatched, and I believe it would yield positive results. That said, there are limits to the practical predictability of tornadoes (especially QLCS tornadoes) from a warning perspective given their relatively small space and time footprint.
Finally, and most importantly, tornado warning lead times make for provocative discussion, but the real issue for society in tornado warning situations is the underlying vulnerability of lives and assets in the zone being warned.
Time and time again, we see a disproportionate share of casualties in mobile and manufactured housing. Tornado warning lead time is irrelevant in these situations, as taking shelter in such structures has largely proven ineffective. Instead, we should encourage these residents to begin executing their tornado safety plans at the tornado “watch” stage to ensure ample time to move to a community shelter or sturdy structure. Of course, this is not a one-size-fits-all approach, as we know there are segments of the population who cannot simply pack up and leave for shelter.
Truth is, not many are looking at this issue from the other angle. What is the net benefit of the system as is — how many hundreds of lives are saved each year by the system that exists for all its supposed faults? It is difficult to quantify, but maybe looking at this from the opposite direction is the perspective some folks need to start from.
Remember how I suggested this is multifaceted? Appointing an independent review board for a problem that has been misdiagnosed is not going to solve the most pressing issues surrounding tornado impacts on society.
Victor Gensini is a professor of meteorology at Northern Illinois University, whose research focuses on extreme weather, severe storms and climate change.