The Top500 list ranks the most powerful computers in the world.[1] As of this writing, the top spot belongs to Tianhe-2, a Chinese computer capable of 33.86 petaflops (quadrillions of floating-point operations per second). That’s perhaps a million times the performance of the computer you are viewing this on now.[2] Many of the remaining top 10 computers are located at national laboratories around the United States, including Titan at Oak Ridge National Laboratory and Sequoia at Lawrence Livermore National Laboratory (both reaching about 17 petaflops).
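To put a rough number on that claim, here is the back-of-the-envelope division; the ~30 gigaflops desktop figure is the assumption from footnote [2], not a measurement:

```python
# Rough speedup of Tianhe-2 over a typical desktop.
# The ~30 gigaflops desktop figure is the assumption from footnote [2].
tianhe2_flops = 33.86e15   # 33.86 petaflops (LINPACK)
desktop_flops = 30e9       # ~30 gigaflops (assumed)

print(f"ratio: ~{tianhe2_flops / desktop_flops:,.0f}x")   # roughly 1.1 million
```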
The supercomputer rankings are based on the LINPACK benchmark, which is essentially a measure of a computer’s “horsepower”. While there are endless discussions about employing better benchmarks that give a more practical “top speed”, perhaps a more important problem is that many researchers needing large amounts of computing don’t care much about the top speed of their computer. Rather than supercharged dragsters that set land speed records, these scientists require a fleet of small and reliable vans that deliver lots of total mileage over the course of a year.
HPC and HTC
The fundamental difference between the “drag racers” and the “delivery van operators” is whether the simulations coordinate all the computer processors towards a common task. In combustion studies and climate simulations, scientists typically run a single large computing job that coordinates results between tens or even hundreds of thousands of processors. These projects employ “computer jocks” who are the equivalent of a NASCAR pit crew; they tweak the supercomputers and simulation code to run at peak performance. Meanwhile, many applications in biology and materials science require many small independent simulations – perhaps one calculation per gene sequence or one per material. Each calculation consumes only a modest amount of processing power; however, the total computational time needed for such applications can now rival that of the bigger simulations because millions of such tiny calculations might be needed. As with running a delivery service, the computer science problems tend to be more about tracking and managing all these computing jobs than about breaking speed records. And instead of a single highly parallel machine tuned for performance, a fleet of inexpensive computers would do just fine.
These two modes of computing are sometimes called “high-performance computing” (or HPC) for the large coordinated simulations and “high-throughput computing” (or HTC) for the myriad small simulations. They are not always at odds. Just as consumer cars benefit from improvements on the racetrack, today’s high-throughput computing has benefited from the design of yesterday’s high-performance systems. And machines built for high-performance computing can sometimes still be used for high-throughput computing while maintaining a low cost per CPU cycle and high energy efficiency. But fundamental differences prevent the relationship from being completely symbiotic.
Making deliveries in racecars
One issue with large supercomputers concerns drivability – how easy is it to use and program the (often complex) computing architecture? Unfortunately, tweaking the computers to go faster often means making them less flexible. Sometimes, enhancing performance for large simulations prevents small jobs from running at all. The Mira Blue Gene cluster at Argonne National Laboratory (currently the #5 system in the Top500 list) has a minimum partition of over 8000 processors and a low ratio of I/O nodes to compute nodes, meaning the machine is fundamentally built for large computations and doesn’t support thousands of small jobs writing files concurrently. These top supercomputing machines are built like Death Stars: they are designed to deliver massive coordinated power towards a single target but can potentially crumble under a load of many small jobs.

Other times, there are no major technical barriers to running small jobs on large machines. However, “run limits” imposed by the computing centers effectively ensure that only large computing tasks can accumulate large amounts of total computer time. One can bypass this problem by writing code that allows a group of small jobs to masquerade as one big job.[3] It doesn’t always work, and it is a crude hack even when it does.
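As a minimal sketch of the masquerading idea (not the actual FireWorks implementation; the task list and the run_one_simulation.py script below are hypothetical stand-ins): request one large allocation from the scheduler, then fan its cores out over many independent small tasks from inside that single “big” job.

```python
# Minimal sketch of "job packing": many small, independent tasks run inside a
# single large batch allocation, so the scheduler only ever sees one big job.
# The task list and run_one_simulation.py are hypothetical stand-ins.
import subprocess
from concurrent.futures import ProcessPoolExecutor

TASKS = [f"material_{i:05d}" for i in range(10000)]  # e.g., one calculation per material

def run_task(task_id: str) -> int:
    # Each task is a small, self-contained simulation; failures are recorded, not fatal.
    result = subprocess.run(["python", "run_one_simulation.py", task_id])
    return result.returncode

if __name__ == "__main__":
    # Size the worker pool to the cores granted by the scheduler for the one big job.
    with ProcessPoolExecutor(max_workers=48) as pool:
        return_codes = list(pool.map(run_task, TASKS))
    print(f"{return_codes.count(0)} of {len(TASKS)} tasks succeeded")
```

In practice, a workflow manager handles the bookkeeping (which tasks ran, which failed, and which to retry) rather than a bare script like this.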
If supercomputers are naturally suited to large jobs, why not take small high-throughput jobs elsewhere? A major reason is cost. The Materials Project currently consumes approximately 20 million CPU-hours and 50 terabytes of storage a year. This level of computing might cost two million dollars a year[4] on a cloud computing service such as Amazon EC2, but it can be obtained essentially for “free” by writing a competitive science grant to use the existing government supercomputers (for example, two of these centers – Argonne and Oak Ridge – have a combined computing budget of 8 billion CPU-hours*).
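The arithmetic behind that comparison, using only the numbers quoted above and in footnote [4] (the implied cloud rate is simply the quoted annual cost divided by the usage):

```python
# Cost comparison using the figures quoted in the text and in footnote [4].
cpu_hours = 20_000_000             # Materials Project annual usage
cloud_cost_per_year = 2_000_000    # ~$2M/year estimate for a service like EC2
doe_rate = 0.025                   # ~2.5 cents per core-hour (footnote [4] estimate)

print(f"implied cloud rate:  ${cloud_cost_per_year / cpu_hours:.2f} per core-hour")  # ~$0.10
print(f"DOE-equivalent cost: ${cpu_hours * doe_rate:,.0f} per year")                 # ~$500,000
```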
Still, many high-throughput computational scientists have chosen not to employ supercomputers. The Harvard Clean Energy Project, an effort to discover new organic solar cells through millions of small simulations, leverages the personal computers of thousands of citizen volunteers via the IBM World Community Grid. Cycle Computing is a business that maps scientific computing problems onto cloud computing services. The National Science Foundation is funding computing resources, such as those within XSEDE, that are more accommodating to massive numbers of small jobs. Adopting politically tinged slogans like “Supercomputing/HPC for the 99 percent”[5] in their press releases, these efforts are actively rebelling against the majority of computer time going to the small minority of computer jocks who can operate the supercomputing behemoths.
Can’t we all just get along?
The HPC/HTC divide is discouraging, especially because large and small jobs are not completely immiscible. The National Energy Research Scientific Computing Center (NERSC) has added a high-throughput queue to its large production cluster, Hopper (a 1 petaflop machine that was at one time the #8 computer on the Top500 list). In theory, this could be of mutual benefit. Supercomputing centers are partly evaluated on their utilization, the average percentage of computing cycles that are used for scientific work (versus waiting idly for an appropriately sized task). A small job can fill a gap in the schedule like a well-shaped Tetris piece, thereby improving utilization. The two communities can certainly cooperate more.[6] HPC and HTC may belong to different computing families, but a Montague-Capulet-style war will only hurt both sides.
Footnotes
[1] Many people believe that supercomputers owned by government agencies or commercial entities such as Google would win if they decided to enter the competition.
[2] I’m assuming ~30 gigaflops LINPACK performance for a desktop based on this article in The Register.
[3] The FireWorks software I develop is one workflow code that has this feature built-in.
[4] (9/18/2014) I’ve heard an estimate that computer time at DOE leadership computing facilities costs about 2.5 cents per core-hour, such that 20 million CPU-hours would be equivalent to about $500,000.
[5] For example, see the articles released by the San Diego Supercomputer Center and in Ars Technica.
[6] Note: I am part of a queueing subcommittee at NERSC that hopes to start investigating this soon.
* (2/7/2014) The original version of this article stated that the DOE had a total 2013 computing budget of 2.4 billion CPU-hours, which is too low. The updated figure of 8 billion for just two top centers is based on this presentation.