Technology similar to what Facebook uses for recommending which friends you should “tag” may soon be coming to hailstorms. David John Gagne, a machine-learning scientist at the National Center for Atmospheric Research, is using facial recognition technology to unlock the secrets behind big hail.
“I’m using artificial-intelligence techniques to predict the size of hailstorms,” Gagne said. Working with computer-simulated storms, he created software that is trained to determine which storms produce hail and then to recognize patterns associated with the storms behind the largest hailstones. “The shape of storms is really important,” he said.
His latest work is published in Monthly Weather Review.
Gagne’s novel approach began with his PhD dissertation work in 2014 and 2015. It continued with a postdoctoral fellowship at NCAR, where he used “deep learning” to examine storms and find spatial patterns in the storm data he feeds the model. Studies by other scientists have often looked at finer-scale processes within the storm; Gagne takes the opposite approach, broadening outward to consider the storm’s entire structure.
The work he’s doing deals with computer-generated storms. “We create storms and derive their hail size with the microphysics,” he said. Gagne then uses the raw data of what the storm “looks” like structurally to train software to predict its hail size. Over time, Gagne’s machine learning model is refined, improving its predictions with each successive run.
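The workflow described above can be sketched in miniature: generate simulated storms, extract a feature describing each storm's spatial structure, and fit a model that maps structure to hail size. The sketch below is an illustrative stand-in, not Gagne's actual code; the synthetic "reflectivity" grids, the fixed blob-detector kernel (standing in for the learned filters of a convolutional network), and the made-up hail-size rule are all assumptions for the demo.

```python
# Toy version of the pipeline: simulated storm fields -> convolutional
# feature -> hail-size regression. NumPy only; all fields are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_storm(intensity):
    """Synthetic 16x16 'reflectivity' grid: a central blob whose peak scales with intensity."""
    y, x = np.mgrid[0:16, 0:16]
    blob = np.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 12.0)
    return intensity * blob + 0.1 * rng.standard_normal((16, 16))

def conv_feature(grid, kernel):
    """Valid 2D cross-correlation followed by a global max pool -> one scalar feature."""
    kh, kw = kernel.shape
    h, w = grid.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
    return out.max()

# A fixed 3x3 averaging kernel stands in for filters a real CNN would learn.
kernel = np.ones((3, 3)) / 9.0

# Training set: in this toy, storm intensity drives (synthetic) hail size.
intensities = rng.uniform(1.0, 5.0, size=200)
X = np.array([[conv_feature(make_storm(a), kernel), 1.0] for a in intensities])
y = 10.0 * intensities + rng.normal(0, 0.5, size=200)  # hail size in mm (made up)

# Least-squares fit of the pooled structural feature to hail size.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"training RMSE: {rmse:.2f} mm")
```

In the real system, the model sees the full gridded storm fields rather than a single pooled scalar, and its filters are learned from many simulated storms; the principle of mapping spatial structure to hail size is the same.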
Why not deal with actual hailstorms? “Simulated storms are a more self-consistent system,” Gagne said. In real life, there are many more complicating variables that render an experimental data set incomplete.
“The data we have is skewed,” Gagne said. “The hail reports cluster near cities or interstates.” In rural areas, the largest hail may fall where nobody lives, leading to a missed event. Publicly submitted hail reports may not be mapped correctly, and even subtle discrepancies compound over time. Doppler radar data could be used to fill in the gaps, but that comes down to radar coverage, which is somewhat lacking in many hail-prone areas. “And Doppler-estimated hail size has its own biases,” he said.
Other scientists agree it’s a worthwhile project, citing promising results. Philippe Tissot, a researcher at Texas A&M at Corpus Christi who has worked at the intersection of atmospheric sciences and technology, said that Gagne is “leading the field.”
“David is one of the young leaders in our field combining an excellent understanding of atmospheric processes, with high level computational skills, and a deep understanding of how continuously evolving machine learning methods can help us better understand atmospheric and environmental processes and predict them more accurately,” Tissot wrote. He said Gagne is also helping spearhead efforts to organize a conference on artificial intelligence in the environmental sciences at the American Meteorological Society’s January meeting in Boston.
Paul Miller, an assistant professor at Louisiana State University, feels that machine learning can help forecasters sort out some of the randomness in an atmospheric setup.
“Even on days that we believe favorable for severe thunderstorms, not all thunderstorms turn out to be severe due to numerous other processes also affecting thunderstorm intensity,” Miller wrote. “Machine and deep learning techniques can potentially help forecasters refine their severe weather forecasts to better include not only the storms that ‘talk the talk’ but also ultimately ‘walk the walk,’ particularly when combined with radar-based characteristics.”
Miller described Gagne’s paper as “a very advanced meteorological application of deep learning that illustrates how much storm-scale information can be gleaned from model output” beyond that of coarser, lower-resolution weather models.
But it’s not all about the atmospheric sciences. David Wanik is an assistant professor at the University of Connecticut School of Business who studies natural hazards and their impacts with machine learning. He said Gagne’s project could offer new insight on the computer-science side as well.
“Understanding why a model makes a prediction can be equally as important as the accuracy of the prediction itself,” Wanik wrote. “This paper is an excellent example of how scientists can interact with deep learning models, glean new insights and spark new research ideas from observing how deep learning models treat their input data.”
Gagne hopes that his endeavor may eventually serve as a supplement for meteorologists forecasting and warning of hail events.
“Our goal is to help better forecast hours or even days in advance,” Gagne said. His aspirations include not only making real-time warnings more accurate but also stretching warning lead times. If forecasters can predict the structure of storms before they form, Gagne’s work may offer insight into potential hail size before the first radar blips appear or storms develop.
“Then we could tell folks to maybe change their plans, put their car in the garage, tweak their outdoor schedules,” Gagne said. He believes this to be of great appeal to larger-scale interests, as well, such as for event planners or those in the transportation sector. “You could bring in more staff to help with crowd control at the airport or to move the planes faster.”
“This could save tons of money,” he said. “Since 2008, we’ve had more than $10 billion in hail damage every year. If you have a hailstorm going through a hail-prone city like Dallas or Minneapolis or Phoenix every few years, it can add up fast.”
Gagne himself has witnessed firsthand the destructive power of hail. “The biggest I’ve seen was baseball size during a storm chase in Moore, Oklahoma, in 2010,” he said. Less than four years later, his vehicle suffered major damage when a similar-magnitude hailstorm ripped through the town of Norman. “That was to the tune of several thousand dollars.”