
Petascale Day Part 1: The Hardware

By Rob Mitchum // October 18, 2012

In the computational world, where speed is king, fifteen zeros is the current frontier. The new wave of petascale supercomputers going online around the world in the coming months is capable of performing at least one quadrillion, or 1,000,000,000,000,000, floating-point calculations per second. In exponential notation, a quadrillion is shortened to 1 x 10^15, so clever computer scientists declared October 15th (get it?) to be Petascale Day, a celebration of this new computational land speed record and its ability to transform science.
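To make those fifteen zeros concrete, here is a quick back-of-the-envelope calculation in Python; the gigascale and petascale rates below are illustrative round numbers, not measured benchmarks:

    # Back-of-the-envelope: how long does one quadrillion
    # floating-point operations take at each scale?
    # These rates are illustrative round numbers, not benchmarks.
    operations = 10**15           # one quadrillion calculations

    gigascale_flops = 10**9       # ~1 billion ops/sec (a laptop-class core)
    petascale_flops = 10**15      # ~1 quadrillion ops/sec (a petascale machine)

    laptop_seconds = operations / gigascale_flops
    print(f"Laptop: {laptop_seconds / 86400:.1f} days")        # ~11.6 days
    print(f"Petascale: {operations / petascale_flops:.0f} second(s)")

In other words, a workload the petascale machine finishes in one second would keep a single gigascale processor busy for nearly two weeks.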

Here at the Computation Institute, we observed the day by hosting a lunch event with the University of Chicago Research Computing Center, putting together a roster of six talks about these powerful machines and the new types of research they will enable. The speakers, who hailed from Argonne National Laboratory, the University of Chicago, and the Computation Institute, talked about the exciting potential of the petascale, as well as the technical challenges scientists face to get the most out of the latest supercomputers.

To put petascale in perspective, the computer you’re reading this on likely works at the gigascale — armed with a processor that can perform some number of billions of calculations per second. That sounds like a lot, but it’s still a long path from the gigascale, via the terascale (where most of today’s supercomputers and computing clusters sit), to the petascale, which is roughly one million times faster than your laptop. The latest home computers achieve faster speeds through multi-core processors that split the computational workload among 2, 4, or 8 processing units, and the same strategy is employed by the new class of supercomputers, although in their case the number of cores is fast approaching one million.
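As a minimal sketch of that divide-and-conquer strategy, here is a workload split across four cores using only Python’s standard multiprocessing module; real petascale codes rely on far more sophisticated machinery (such as MPI), but the underlying idea is the same:

    # A minimal sketch of splitting work across cores: divide the
    # problem, compute the pieces in parallel, combine the results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Compute one core's share of the total."""
        start, stop = bounds
        return sum(i * i for i in range(start, stop))

    if __name__ == "__main__":
        n, cores = 10_000_000, 4
        step = n // cores
        chunks = [(i * step, (i + 1) * step) for i in range(cores)]
        with Pool(cores) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total)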

Take Mira, the new supercomputer currently being installed at Argonne National Laboratory, which carries 1,024 16-core nodes. Per rack. And there are 48 racks, bringing the grand total to 786,432 cores. That’s enough horsepower to make Mira the #3 fastest supercomputer in the world when it reaches “full production mode” in 2014, said Katherine Riley, Manager of the Scientific Applications Group at the Argonne Leadership Computing Facility (ALCF). At peak strength, Mira will be a 10-petaflop machine (capable of 10 quadrillion calculations per second) that will provide over 5 billion computing hours to scientists around the world who want to use the supercomputer for research.
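Those figures make the core count easy to sanity-check with a few lines of Python, using only the numbers above (the per-core rate at the end is a rough derived estimate, not an official specification):

    # Sanity-checking Mira's core count from the figures above.
    nodes_per_rack = 1024
    cores_per_node = 16
    racks = 48

    total_cores = nodes_per_rack * cores_per_node * racks
    print(f"{total_cores:,} cores")                    # 786,432 cores

    peak_flops = 10 * 10**15                           # 10 petaflops at peak
    print(f"{peak_flops / total_cores / 1e9:.1f} gigaflops per core")  # ~12.7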

Reaching the petascale with Mira is only the latest milestone for the ALCF, which was founded by the Department of Energy in the mid-2000s (along with a twin facility at Oak Ridge) to bring computational clout to ambitious science projects, Riley said.

“If you wanted to solve very, very large scale problems, you had very few places that you could go,” Riley said. “So the way these facilities were built was not just to build a large facility and drop it on the floor, but to bring experts into the facility so that people could actually talk to computational science experts about how they could run their problems on these machines.”

But the challenge of harvesting the maximum rewards from unprecedented computational power remains relevant below the petascale as well, said H. Birali Runesha, director of the Research Computing Center at the University of Chicago. At the heart of the RCC is the Midway computing cluster, which might develop an inferiority complex next to Mira if computers had feelings, what with only 3,200 computing cores. But Runesha said the RCC sees its hardware as building blocks bridging the gap between researchers using their desktop computers and world-class machines such as Mira. Many scientists don’t need petascale computing to run the applications their research requires, Runesha said, and many more don’t know how to write software that truly capitalizes on such powerful resources.

“The hardware evolves so fast, but the software is really lagging behind,” Runesha said. “The bulk of the work happening on campuses with graduate students and researchers is at a scale below 100 processors. There is a lot of work that needs to be done even at the terascale level before moving to the petascale level.”

When researchers are ready to make that jump, there will be a lot of local petascale computing time available for request. In addition to the billions of computing hours Mira will bring to the table, a sibling from downstate will also share the load. Blue Waters will be a 760,000-core (380,000 floating-point units), 1-petaflop supercomputer housed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. As Computation Institute Fellow Michael Wilde described it, Blue Waters will serve a very similar audience of scientists and engineers looking to increase the scale of their computational research. Fifty million of the computer’s core hours will be allocated to the Great Lakes Consortium for Petascale Computation, an alliance of 28 universities, laboratories, and even K-12 school districts that includes the University of Chicago. (You can apply now for some of those Blue Waters hours here.)

As with all powerful new tools, scientists will also spend a lot of time on Blue Waters and other petascale machines of its ilk learning how to make their applications perform well on these integrated leadership-class computing, storage, and networking facilities — underscoring just how much still needs to be learned about our powerful new computational abilities.

“The machine was also focused to some extent on achieving computer science and computer technology innovation,” Wilde said. “A fair amount of focus on the machine will be on architectural research and on software research to begin to understand what kind of software we need to keep a machine this large sustained.”

[But what can scientists do with all these quadrillions of calculations in medicine, chemistry and economics? Are they ready for the petascale? Find out in Part 2 of the Petascale Day recap tomorrow, featuring talks by Robert Grossman, John Hammond and Svetlozar Nestorov.]