Within three years, the United States will be running a new global forecasting model.

It is meant to be the most innovative and efficient forecasting technology in the world. It will not only improve day-to-day forecast accuracy but also predict costly extreme weather events more than three weeks in advance. It will give us more lead time on hurricane landfalls, “zoom in” on smaller storms at high resolution and be flexible enough to nimbly incorporate emerging research from the forecasting community.

At least that’s what the National Weather Service hopes, and it just took its first critical step toward that future.

On Wednesday the Weather Service announced that, after a highly competitive evaluation process, it has selected the backbone of its global weather model of the future. The so-called dynamical core brings together the equations that govern the atmosphere on a large scale: the way air moves, the way it rotates into low-pressure systems and the way temperature climbs under high pressure.
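To make that concrete, here is a deliberately tiny sketch, in Python, of the kind of work a dynamical core does: stepping a fluid equation forward in time on a grid. This is emphatically not FV3 itself; it moves a single one-dimensional “weather blob” along with a constant wind using a first-order upwind scheme, and the function name, grid size, wind speed and time step are all illustrative choices rather than anything from the GFS.

```python
import numpy as np

def advect(q, u, dx, dt, steps):
    """Toy 'dynamical core': carry tracer field q downwind at constant speed u.

    Hypothetical example, not the FV3 scheme. Assumes u > 0 and a
    periodic domain (the right edge wraps around to the left).
    """
    c = u * dt / dx  # Courant number; must stay <= 1 for this scheme to be stable
    assert c <= 1.0, "CFL condition violated: shrink dt or coarsen the grid"
    for _ in range(steps):
        # Upwind difference: each point looks at its upwind neighbor,
        # because that is the direction information arrives from.
        q = q - c * (q - np.roll(q, 1))
    return q

# Example: a Gaussian blob at x = 0.3 carried a distance of 0.2 by the wind
x = np.linspace(0.0, 1.0, 200, endpoint=False)
q0 = np.exp(-200.0 * (x - 0.3) ** 2)
q1 = advect(q0, u=1.0, dx=x[1] - x[0], dt=0.002, steps=100)
```

A real core solves the full three-dimensional equations for wind, pressure and temperature over the whole globe, but the basic job is the same: take the current state of the atmosphere and march it forward one small time step at a time.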

Six different models underwent months of rigorous testing, after which a selection committee recommended the “FV3,” a dynamical core created by the Princeton, N.J.-based Geophysical Fluid Dynamics Laboratory, or GFDL, a research branch of NOAA.

The Weather Service has three main goals for the new model, which will continue to be called the GFS in its new incarnation. First, it wants to improve forecast accuracy beyond eight to 10 days, a range at which the current model has failed to compete with its European counterpart. It also wants to improve hurricane forecasts and to increase the lead time on extreme events (East Coast blizzards, for example) to three or four weeks.

These are bold goals, and the National Weather Service believes the FV3 is the dynamical core that will carry the GFS to that end. However, some in the weather community disagree with the decision.

The runner-up in the selection process was a model nicknamed “MPAS,” developed by the Boulder-based National Center for Atmospheric Research. Cliff Mass, a professor of atmospheric science at the University of Washington and an outspoken critic of Weather Service forecasting endeavors, worries that by passing over MPAS, the agency has cleared the way for the European model, known for its accuracy on a number of costly extreme weather events, to leave the United States in the forecasting dust.

“The [Europeans] will be chuckling,” Mass wrote in a blog post. “They are now working on a new model that from what I can tell is very similar to MPAS.”

In the end, the committee’s decision came down to two major problems with MPAS. First, it fell short in a few real-world tests; significantly, it was not able to simulate the eye of a hurricane. Second, and perhaps more important, MPAS is a beast of a model that takes far more computational power and time to run.

In other words, the FV3 was simply more efficient.

Louis Uccellini, the director of the National Weather Service, said that model speed was not the only factor in the decision but noted that efficiency is an essential requirement for a forecasting system relied on by millions of end users.

“An operational computer system has schedules and the users expect us to meet those schedules, so the computational efficiency becomes a very important part of it,” Uccellini told The Washington Post. “The overwhelming message is that the test was fair, so it’s important that we move forward with the results in hand.”

Mass and others have criticized the Weather Service for taking an insular approach to building its models and not engaging enough with the research and international communities. But Uccellini stressed to The Post that the Weather Service is “committed” to working with the broader community to improve the model over time.

“[Mass] has raised some important issues with respect to the community engagement,” Uccellini said. “Obviously the proof will be in the pudding, but we are committing ourselves to working with the research community.”

The next steps in designing the new GFS include selecting a physics package and developing a new data assimilation scheme. The physics package contains the equations that govern smaller-scale processes, like cloud formation and the way friction affects air flow. Data assimilation is how the model ingests real-time information from sources like satellites, weather balloons and ground-level observations.
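As a rough illustration of what data assimilation does, consider blending a single model forecast with a single observation, weighting each by how uncertain it is. This is a toy sketch, not the Weather Service’s operational scheme, which performs this kind of blend simultaneously across millions of variables; the function name and all numbers below are hypothetical.

```python
def assimilate(forecast, forecast_var, obs, obs_var):
    """Blend a forecast with an observation, weighted by their uncertainties.

    Toy scalar version of a Kalman-style analysis update; a hypothetical
    sketch, not the GFS data assimilation scheme.
    """
    # The gain leans toward the observation when the model is uncertain,
    # and toward the model when the observation is noisy.
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var  # the blend is more certain than either input
    return analysis, analysis_var

# Example: model says 20.0 C (variance 4.0); a satellite retrieval says
# 22.0 C (variance 1.0). The analysis lands at 21.6 C with variance 0.8.
temp, var = assimilate(20.0, 4.0, 22.0, 1.0)
```

The key idea is that neither the model nor the observations are trusted outright: the analysis that starts each forecast cycle is a compromise, tilted toward whichever source is more reliable at that moment.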

Physics and, more importantly, data assimilation (which can make or break a forecast model’s accuracy) are the components of a global weather model that are improved most frequently. Uccellini says the development of these critical components will be a group effort across the modeling community, so that improvements happen more often and the GFS incorporates the best schemes available across all U.S. agencies and models.

The new GFS will take three years to develop, during which time non-vital improvements to the current model will be put on hold. “If something crops up that absolutely needs to get fixed, we’re going to fix it,” Uccellini said.

Otherwise, the National Weather Service will be pouring all of its energy and focus into creating the best next-generation global forecasting system it can.