This post has been updated.
The United States can now claim possession of the world’s fastest supercomputer. IBM’s “Sequoia” system beat out the previous leader, the “K computer” made by Japan-based Fujitsu.
To give you an idea of Sequoia’s processing power, the BBC’s Naveena Kottoor reports that the machine can process in one hour what it would take 6.7 billion people (slightly less than every person on the planet) 320 years to calculate using calculators. That translates to 16.32 petaflops, against the K computer’s 10.51 petaflops.
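As a rough sanity check, the comparison holds up if you assume each person performs one calculation per second (that assumption is ours, not a figure from the BBC piece):

```python
# Back-of-the-envelope check of the calculator comparison.
# Assumption (ours): each person does one calculation per second.
SEQUOIA_FLOPS = 16.32e15          # 16.32 petaflops (Linpack result)
PEOPLE = 6.7e9
SECONDS_PER_HOUR = 3600
SECONDS_PER_YEAR = 365 * 24 * 3600

# Operations Sequoia performs in one hour
ops_in_one_hour = SEQUOIA_FLOPS * SECONDS_PER_HOUR

# Years it would take 6.7 billion people, at 1 calculation/second each
years_for_humanity = ops_in_one_hour / PEOPLE / SECONDS_PER_YEAR

print(f"{years_for_humanity:.0f} years")  # roughly 280 years
```

That lands in the same ballpark as the quoted 320 years, so the comparison is the right order of magnitude under this assumption.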
This is the first time since 2009 that the United States has been able to claim the fastest supercomputer in the world.
The Top500 list of the world’s fastest supercomputers is released every six months and compiled by Hans Meuer of the University of Mannheim in Germany, Erich Strohmaier and Horst Simon of the Lawrence Berkeley National Laboratory and Jack Dongarra of the University of Tennessee in Knoxville. The latest list was released Monday at the 2012 International Supercomputer Conference in Hamburg, Germany.
Sequoia is located at the Lawrence Livermore National Laboratory in Livermore, Calif. It was installed there in 2011 and will be fully deployed in 2012 for the Department of Energy’s National Nuclear Security Administration (NNSA), which was established by Congress in 2000 to manage the nation’s nuclear weapons stockpile and prevent nuclear proliferation, among other responsibilities.
Not everyone had a positive reaction to the announcement. ZDNet’s Zack Whittaker writes, “The U.S. may claim home to some of the world’s top scientists, just as China has for two non-consecutive years claimed the world’s fastest computer. ... At the end of the day: supercomputers are just tools.”
To which Whittaker’s colleague Dan Kusnetzky wrote in reply: “On the other hand, IBM or any other supplier’s ability to create such a complex, powerful computing resource is of extreme importance. The thought, the tools and the procedures needed to build and operate such a huge system are directly applicable to other types of computing.”
Their full pieces are well worth a read for additional context as to the ongoing race for the supercomputing crown and what it stands to mean. Is it about the tool, the people or both?
At the end of the day, it may just be about U.S. supercomputing fans taking a moment to chant: “We’re #1.”
Updated 1:21 p.m.:
I spoke Monday with Dave Turek, vice president of high performance computing systems at IBM, about Whittaker’s and Kusnetzky’s points.
“There are elements of truth in terms of what everybody said,” Turek said. “From a technological perspective, it’s a really big deal.”
According to Turek, here’s why: The Sequoia system is about 1/10 the size of Fujitsu’s K computer, uses considerably less energy and has “remarkable reliability.”
“It’s hard to do those kinds of things all at the same time,” Turek said. The computer can do two billion calculations per second with 1 watt of energy, making it, according to Turek, the most efficient in the world.
“If you look at your PC today, that microprocessor in there,” he said, “that system is probably consuming 100 to 150 watts.”
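Turek’s efficiency figure also implies a total power draw for the machine. This back-of-the-envelope sketch uses only the numbers quoted above (2 billion calculations per second per watt, and Sequoia’s 16.32 petaflops):

```python
# Rough power estimate implied by the figures Turek cites.
FLOPS_PER_WATT = 2e9        # "two billion calculations per second with 1 watt"
SEQUOIA_FLOPS = 16.32e15    # Sequoia's benchmark performance

# Total wattage needed to sustain that performance at that efficiency
total_watts = SEQUOIA_FLOPS / FLOPS_PER_WATT

print(f"Implied draw: {total_watts / 1e6:.1f} megawatts")  # about 8.2 MW
```

Roughly eight megawatts for the whole system, which is why Turek’s point of comparison is the 100-to-150-watt processor in a desktop PC: the supercomputer wrings vastly more work out of each watt.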
“Many people have abused or misused Top500,” Turek said, adding that people have used it as a motivation to secure funding to buy systems and that companies have used it as a design target.
“What’s happening is that the benchmark, per se, is too narrow to convey everything about a system,” Turek said, citing the incorporation of big data and analytics into the supercomputing world.
“The future is changing in terms of how people do high-performance computing.”
Updated 2:24 p.m.:
I also caught up with Michel McCoy, program director for Advanced Simulation and Computing at Lawrence Livermore National Laboratory, where Sequoia is housed.
For a little more than a decade, the team at Livermore has been working on the three generations of BlueGene computers that contributed to the creation of Sequoia.
Asked when enough supercomputing power would be enough, McCoy said that “to answer that question with precision ... is very difficult.”
“We’re kind of facing a computational speed limit – or a computational barrier,” continued McCoy, who said that if the U.S. does nothing, its ability to model physical systems (think systems critical to America’s nuclear stockpile) will be threatened.
“That’s the challenge and I hope this country is up for it,” he said.
That challenge is not merely one of funding but, as we often mention here on Ideas@Innovations, one of talent. The talent pool for work done on behalf of NNSA is understandably narrow and specialized. McCoy estimates that security clearance combined with getting new staff members up to speed can take anywhere from 2 1/2 to 5 years and cost well over $1 million. This means McCoy and his colleagues are “very worried about retention.”
“There is no country in the world better positioned to lead,” said McCoy. “It’s really a question of whether we can make the decision to continue to lead.”