In this 2019 photo, Whitney Flynn, a physical scientist at the National Water Center in Tuscaloosa, Ala., works on computer screens that show flood predictions and other information. (Jay Reeves/AP)

NOAA is moving to close the computer modeling gap between the United States and international weather and climate agencies, such as the European Center for Medium-Range Weather Forecasts (ECMWF).

On Feb. 20, the National Oceanic and Atmospheric Administration announced it had signed a contract for two new supercomputers from Cray, a Hewlett Packard Enterprise company, which will triple the computing capacity of the machines devoted to running the agency’s weather forecasting and research models.

By the time all the machines are up and running in 2022, the agency will have a total of 40 petaflops of computing capacity dedicated to running about two dozen operational computer models, from the flagship Global Forecast System (GFS) to the short-range High-Resolution Rapid Refresh (HRRR) model.

The contract, valued at about $505 million over eight years, was awarded to CSRA, a General Dynamics company, which will manage the procurement and installation of the supercomputers.

In recent years, the accuracy of the GFS, America’s main forecasting model for predictions extending more than a week into the future, has fallen well behind the European model, as well as the primary models run by Britain’s Met Office and, at times, Environment Canada.

This well-publicized modeling gap, which first gained prominence during Hurricane Sandy in 2012, has worsened recently, motivating Congress to provide additional funding for NOAA’s computing resources.

While the new machines will provide additional speed to run NOAA’s computer models and allow more data to be fed into them, large differences will remain between how NOAA operates and how the European model is researched and run each day. These differences may mean that, despite the computing upgrade, the GFS will continue to lag behind the European model in accuracy for years to come.

“This enhanced resource would give [NOAA] the capacity to make better forecasts,” said Cliff Mass, a professor of atmospheric sciences at the University of Washington and an advocate for greater resources for U.S. weather modeling. “But it is important to note that the proposed increased resources are far less than needed to provide the American people with state-of-science weather forecasting.”

The new contract provides for two new Cray machines, one to run operations and the other to function continually as a backup. Each will have a capacity of 12 petaflops, meaning it can perform 12 quadrillion floating-point operations per second.

New computers will enable model improvements


A computer model projection from the GFS of surface air pressure and wind speeds on Feb. 16 associated with Storm Dennis in the North Atlantic Ocean. (Earth.nullschool.net)

These computers will replace the existing systems known as Luna and Mars, located in Reston, Va., as well as Surge and Venus in Orlando. The new machines will be located in Manassas, Va., and Phoenix.

When combined with NOAA’s research and development supercomputers in West Virginia, Tennessee, Mississippi and Colorado, which have a combined capacity of 16 petaflops, NOAA’s total supercomputing capacity supporting operational weather prediction and research will reach 40 petaflops by 2022.
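As a rough sketch of how that total adds up from the figures reported here (a petaflop is one quadrillion, or 10^15, floating-point operations per second):

```python
# Rough tally of NOAA's planned computing capacity, using the figures in this
# article (1 petaflop = 10**15 floating-point operations per second).
new_operational = 12  # primary new Cray machine, in petaflops
new_backup = 12       # identical backup machine, in petaflops
research = 16         # existing research and development systems, in petaflops

total = new_operational + new_backup + research
print(f"{total} petaflops = {total * 10**15:.0e} floating-point operations per second")
# -> 40 petaflops = 4e+16 floating-point operations per second
```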

According to NOAA, this not only triples the agency’s performance capacity, it also doubles the “storage and interconnect speed.” This will allow NOAA to develop models with higher resolution, more advanced physics and improved ways of incorporating observational data from satellites, aircraft, surface stations and other sources into the models to improve accuracy (a process known as data assimilation).
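To give a flavor of what data assimilation means in practice, here is a minimal, hypothetical sketch of the core idea, blending a model’s first guess with an observation according to their uncertainties; NOAA’s operational schemes are vastly more sophisticated:

```python
# Minimal sketch of the idea behind data assimilation (illustrative numbers,
# not NOAA's actual scheme): blend the model's "background" forecast with an
# observation, weighting each by its estimated error.

def assimilate(background, obs, background_var, obs_var):
    """Return the analysis value and its variance for a single grid point."""
    # Kalman-style gain: how much to trust the observation over the model.
    gain = background_var / (background_var + obs_var)
    analysis = background + gain * (obs - background)
    analysis_var = (1.0 - gain) * background_var
    return analysis, analysis_var

# Example: the model's first guess at a grid point is 20.0 C, while a nearby
# surface station reports 22.0 C. With the model assumed more uncertain
# (variance 4.0) than the observation (variance 1.0), the analysis is pulled
# most of the way toward the observation.
analysis, var = assimilate(background=20.0, obs=22.0, background_var=4.0, obs_var=1.0)
print(f"analysis: {analysis:.2f} C (variance {var:.2f})")  # analysis: 21.60 C
```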

Acting NOAA administrator Neil Jacobs is looking to the upgrade to help advance the agency’s modeling just as it engages with the research community to form a new organization, known as the Earth Prediction Innovation Center (EPIC), that will address long-standing bottlenecks in translating research gains into operational forecasting tools.

According to Peter Bauer, deputy director of the research department at ECMWF, the new computing contract will bring the United States roughly to parity with the European Center when it comes to raw computing power. Bauer said ECMWF’s latest contract would bring it to 37 petaflops for operations, research and other applications. Britain’s Met Office is on a similar trajectory.

Bauer cautions that processing speed is not the best metric for comparing performance, since larger potential gains lie in the new coding and software approaches now becoming available. The gains from simply installing faster processors are diminishing over time, Bauer said in an interview.

European scientists as well as others around the world are focusing on pursuing a “radical” change to the computer code used for weather and climate models to improve efficiency.

“This software development task is also called a ‘software gap’ because there is new hardware out there, but we are not ready to use it,” Bauer said via email. “It’s not easy because you have very large codes … and to fundamentally revamp them takes some time and also acceptance by scientists,” he said.

Europeans’ singular focus gives them an edge


A look at a small section of the enormous supercomputer running the ECMWF model. (ECMWF)

One of the big differences between the European Center and NOAA is that NOAA runs numerous computer models on its system, whereas the European Center just runs one. At NOAA, these include everything from the GFS and its ensembles (numerous computer model runs made with slightly different initial conditions) to the WaveWatch model that’s used to predict ocean conditions.
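As an illustration of what an ensemble is, the toy sketch below (using the classic Lorenz-63 equations as a stand-in for chaotic weather, not any NOAA model) runs the same simulation many times from slightly perturbed starting points and measures the spread of outcomes:

```python
# Toy illustration of an ensemble forecast (not NOAA's actual system): run the
# same simple model repeatedly from slightly different initial conditions and
# look at the spread of results, a rough measure of forecast uncertainty.
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one small time step with forward Euler."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

ensemble = []
for member in range(20):
    # Slightly perturbed initial conditions for each ensemble member.
    x, y, z = 1.0 + random.gauss(0, 0.01), 1.0, 1.0
    for _ in range(2000):  # integrate forward in time
        x, y, z = lorenz_step(x, y, z)
    ensemble.append(x)

mean = sum(ensemble) / len(ensemble)
spread = (sum((v - mean) ** 2 for v in ensemble) / len(ensemble)) ** 0.5
print(f"ensemble mean: {mean:.2f}, spread: {spread:.2f}")
```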

NOAA “has far more responsibilities than the European Center and other international centers and thus requires considerably more resources,” Mass said.

The European Center has the advantage of a narrower focus.

“We’re running like 25 or 30 additional models other than just the GFS,” NOAA’s Jacobs, who worked in weather modeling before coming to the agency, said in an interview. He says a more “apples to apples” comparison is the number of nodes the United States dedicates to running the GFS and its ensembles versus the number the European Center dedicates to its ensemble.

On that metric, “What we can dedicate to global [modeling] is a good bit less than what they can dedicate to the European [global model],” Jacobs said.

Jacobs said he looks forward to a hypothetical day when NOAA can run the GFS and its ensembles with a full 12 petaflops of computing power. “We would crush it, it would be pretty amazing,” he said.

Mass estimates the cost of this kind of computing power would be no more than that of a single high-tech jet fighter.

The key to making this leap in modeling will be gaining the capacity to run a global model at high resolution.

A computer model breaks the world down into boxes, or grid cells, within which it projects future conditions. The higher the resolution, the smaller those cells are, and the more fine-scale detail the model can capture. A severe thunderstorm, for example, may be only 5 miles across, meaning a model with coarse resolution could miss the storm entirely.

In general, the higher a model’s resolution, the more accurate it can be.
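A back-of-the-envelope sketch makes the point; the grid spacings below are illustrative assumptions, not the exact configurations of NOAA’s models:

```python
# Back-of-the-envelope illustration of why resolution matters (hypothetical
# grid spacings, not the exact setup of any NOAA model): count how many grid
# cells span a storm about 5 miles (~8 km) across.

STORM_WIDTH_KM = 8.0

# Assumed, illustrative grid spacings in kilometers.
grid_spacings_km = {"coarse global model": 25.0,
                    "finer global model": 10.0,
                    "high-resolution regional model": 3.0}

for name, spacing in grid_spacings_km.items():
    cells_across = STORM_WIDTH_KM / spacing
    # A feature spanning only a cell or two is effectively invisible to the model.
    verdict = "likely resolved" if cells_across >= 4 else "likely missed or smeared out"
    print(f"{name} ({spacing:g} km grid): storm spans {cells_across:.1f} cells -> {verdict}")
```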

NOAA’s HRRR model, for example, runs at high resolution to capture individual thunderstorms developing across the United States on short time scales and is used as a near-term forecasting tool.

Not coincidentally, Jacobs says the HRRR requires the most computing power of any model in NOAA’s current generation, a sign of the technology that will be needed as the agency’s global model is brought to higher resolution.

Richard Rood, a professor of meteorology at the University of Michigan and co-chair of a steering committee for improving U.S. computer modeling, called the announced computer upgrade “exciting” and “welcomed,” but, like Mass, seeks even more resources.

“We need long-term funding and planning to provide sustained state-of-the-art computational infrastructure that includes supercomputing, data storage, networking, electrical services, as well as essential tools and services,” Rood said.

Jason Samenow contributed to this article.