
Rick Stevens on the Race to Exascale

By Rob Mitchum // May 23, 2013

The exascale — one million trillion calculations per second — is the next landmark in the perpetual race for computing power. Although this speed is more than 50 times faster than the world’s current leading supercomputers and many technical challenges remain, experts predict that the exascale will likely be reached by 2020. But while the United States is used to being the frontrunner in high-performance computing achievement, this leg of the race will feature intense competition from Japan, China and Europe. In order to pass the exascale barrier first and reap the application rewards in energy, medicine and engineering research, government funding is critical.
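For readers who want the arithmetic behind those prefixes, here is a minimal back-of-envelope sketch in Python. The roughly 17.6-petaflop figure used for the fastest benchmarked system of the day is an assumption for illustration, not a number from the article.

```python
# Back-of-envelope scale arithmetic for "exascale" (illustrative only).

PETA = 10**15   # 1 petaflop/s = 10^15 floating-point operations per second
EXA = 10**18    # 1 exaflop/s  = 10^18 operations per second ("one million trillion")

assert EXA == 10**6 * 10**12   # a million (10^6) times a trillion (10^12)
assert EXA == 1000 * PETA      # an exaflop machine does the work of 1,000 petaflop machines

# Assumption: the fastest benchmarked system of the day ran at roughly 17.6 petaflop/s.
fastest_today = 17.6 * PETA
print(f"Speed-up needed to reach exascale: ~{EXA / fastest_today:.0f}x")  # prints ~57x
```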

On Capitol Hill yesterday, CI Senior Fellow Rick Stevens testified to this urgency as part of a Congressional Subcommittee on Energy hearing, “America’s Next Generation Supercomputer: The Exascale Challenge.” The hearing was related to the American High-End Computing Leadership Act, a bill proposed by Rep. Randy Hultgren of Illinois to improve the Department of Energy’s high-performance computing research program and make a renewed push for exascale research in the United States. You can watch archived video of the hearing here, and Stevens’ prepared opening statement is reproduced in full below.

=====

Thank you Chairman Lummis, Ranking Member Swalwell, and Members of the Subcommittee. I appreciate this opportunity to talk to you about the future of high performance computing research and development, and about the importance of U.S. leadership in the development and deployment of Exascale computing.

I am Rick Stevens, the Associate Laboratory Director responsible for Computing, Environment, and Life Sciences research at Argonne National Laboratory. My laboratory operates one of the two Leadership Class computing systems for DOE’s Office of Science. My own research focuses on finding new ways to increase the impact of computation on science – from the development of new, more powerful computer systems to the creation of large-scale applications for computational genomics targeting research in energy, the environment and infectious disease. I also am a Professor at the University of Chicago in the Department of Computer Science, where I hold senior fellow appointments in the University’s Computation Institute and the Institute for Genomics and Systems Biology.

I believe that advancing American leadership in high-performance computing is vital to our national interest. High-performance computing is a critical technology for the nation. It is the underlying foundation for advanced modeling and simulation and big data applications.

It is needed by all branches of science and engineering. It is used more and more by U.S. industry to maintain a competitive edge in the development of new products and services, and it is emerging as a critical policy tool for government leaders.

Today the United States is the undisputed leader in the development and use of high-performance computing technologies. However, other nations are increasingly aware of the strategic importance of HPC and are creating supercomputing research programs to challenge us.

Japan has had significant programs for over a decade, China is now emerging as a serious player, and Europe is acting to revitalize its high-performance computing sector. All have set their sights on developing machines that are at least one hundred times more powerful than the most powerful machines today.

Achieving this goal is important. The drive to Exascale computing will have a sustained impact on American competitiveness. It will give companies and researchers the increased impetus to innovate in the development of new processes, new services and new products.

For example, we need increased compute power to enable a first-principles design of new energy storage materials that will enable an affordable 500-mile electric car battery. We want to build end-to-end simulations of advanced nuclear reactors that are modular, safe and affordable. We want to revolutionize small business manufacturing through digital design, digital fabrication and the development of a software-based supply chain. We want to model controls for an electric grid that has 30 percent renewable generation from solar and wind. We want to add full atmospheric chemistry and microbial processes to climate models, and to increase the resolution of climate models to provide detailed regional impacts. We want to increase our ability to predict severe storms and tornadoes. We want to create personalized medicine that will incorporate an individual’s genetic information into a specific, customized plan for prevention or treatment.

All of these challenges require machines that have hundreds or thousands of times the processing power of current systems. The development of a practical Exascale computing capability also means very affordable Petascale computing that can be very broadly deployed. The global demand for supercomputing has never been higher.

The DOE Office of Science supercomputing centers at Argonne, Berkeley and Oak Ridge are typically oversubscribed by factors of three or more. With current funding levels, these systems can only be updated about once every four to five years. At the current levels of research investment, U.S. HPC vendors will not likely reach an Exascale performance level that we can afford to deploy until considerably after 2020. And the DOE Centers I mentioned will certainly not be able to deploy such systems at the current levels of investment.

This is a problem for us if we want to maintain our leadership.

Both China and Japan are working on plans to reach that level by 2020 or before. Japan is developing plans for a $1.1B investment program aiming to deploy an Exascale-capable machine by 2020. China has announced its intention of reaching Exascale before 2020. China is spending aggressively on supercomputing infrastructure and is succeeding in deploying large-scale systems that rival the largest systems deployed in the U.S. It is widely expected that they will regain the lead in capability this year, largely through designs that incorporate U.S. components, though they are investing heavily in domestic research and have plans to deploy large-scale systems based on Chinese components in the near future.

Since 2007 I have been working with my colleagues in the National Laboratory system, academia, private industry and the DOE to develop an integrated, ambitious plan to keep the United States at the forefront of high-performance computing.

We have identified five major hurdles that must be overcome if we are to achieve our goal of pushing the computing performance frontier to the Exascale by the end of the decade:

  • We must reduce system power consumption by at least a factor of 50 (a rough arithmetic sketch follows this list).
  • We must improve memory performance and lower cost by a factor of 100.
  • We must improve our ability to program systems with dramatically increased levels of parallelism.
  • We must increase the parallelism of our applications software, math libraries and operating systems by at least a factor of 1,000.
  • We must improve systems reliability by at least a factor of 10.
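To give a sense of what the first hurdle means in practice, here is a rough arithmetic sketch in Python. The one-gigaflop-per-watt baseline efficiency and the roughly 20-megawatt facility budget it implies are illustrative assumptions, not figures from the testimony.

```python
# Illustrative arithmetic behind the factor-of-50 power target.
# The baseline efficiency and facility budget below are assumptions, not testimony figures.

EXAFLOP = 10**18                 # target: 10^18 operations per second

baseline_flops_per_watt = 1e9    # assumed baseline: ~1 gigaflop per watt

power_at_baseline = EXAFLOP / baseline_flops_per_watt            # in watts
print(f"Exaflop at baseline efficiency: ~{power_at_baseline / 1e6:,.0f} MW")   # ~1,000 MW

reduction_factor = 50            # the factor cited in the testimony
power_after_reduction = power_at_baseline / reduction_factor
print(f"After a 50x cut in power per operation: ~{power_after_reduction / 1e6:.0f} MW")  # ~20 MW
```

Under these assumptions, cutting power per operation by roughly a factor of 50 is what brings an exaflop machine from a gigawatt-class power draw down to something a single facility could plausibly supply.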

These are not simple tasks. But all of us who are working in this community believe that Exascale supercomputing will be a reality by the end of this decade.

It will happen first in the U.S. if we can get the investment needed. This bill is a great start to that commitment.

Ultimately, this is a race not against our international competitors, but rather a race for us. Exascale computing is necessary to the achievement of our most urgent goals in energy, in medicine, in science and in the environment. And it will have a profound impact on industry competitiveness and national security.

I believe we have a duty to move as swiftly as we can.