Harnessing fun for serious science

Graphics processing units provide computational horsepower

For a billion years after the Big Bang, the universe experienced its “dark ages,” a time when space was a vast sea of atomic hydrogen. That period ended with the birth of stars, galaxies, and black holes, ultimately leading to the brilliant skies above us at night.

“The basic building blocks of our universe formed during the dark ages,” said Lincoln Greenhill, a senior research fellow and lecturer on astronomy at the Harvard-Smithsonian Center for Astrophysics (CfA). “But our understanding of this incredibly important time is in fact based on very little hard data.”

Greenhill, together with U.S., Australian, and Indian colleagues, is planning to map the dark ages in search of clues about this formative era. They’re building a revolutionary radio telescope — 8,000 antennas spread across 1.5 kilometers of desert — deep in the Australian outback. The antennas will generate so much data, however, that without a new kind of computing, running at faster speeds while requiring lower power, the project would be impossible.

That’s where teenage boys and their computer games come in.

Greenhill’s Murchison Widefield Array (MWA) is one of a trio of projects at Harvard whose massive computing needs have prompted investigators to join forces to pioneer new computing techniques that will benefit not just radio astronomy, but quantum chemistry and neuroscience as well. The three projects have come together in an effort called “SciGPU” (www.scigpu.org), which was founded at the Harvard Initiative in Innovative Computing (IIC) and which recently received a prestigious National Science Foundation Cyber-Enabled Discovery and Innovation grant to pursue joint research.

The investigators have been at work on computers that rely on GPUs, or graphics processing units, to do the heavy lifting. In more traditional computing — whether a desktop computer or a supercomputer — the processing is done by one or more of the machine’s central processing units, or CPUs. CPUs have long been considered the computer’s brains and excel at complex operations.

GPUs were developed over the past two decades as manufacturers sought ways to speed up computers handling ever-larger and more-detailed graphics imagery — first computer animations, and later video as well. These computer-generated images consumed more and more CPU resources, which in turn were unavailable for other functions. In response, manufacturers developed a second kind of processor, the graphics processing unit, which could handle graphics and video and leave the CPU free for other work.

The explosion of the video game industry in the 1990s and 2000s drove the development of ever-more-complex GPUs, as gamers demanded faster and more physically realistic action sequences.

First-generation GPUs were designed for one essential purpose: to decide what color each pixel on a computer screen should be at any one time. They are very good at doing that operation — multiplied by the millions of pixels on a screen — repeatedly and rapidly to form the changing images seen in video games.

To meet the demands of the video game industry, GPUs became programmable over time. A few years ago, computer experts realized that these programmable GPUs might be good at other jobs similar to what they were already doing — performing many simple calculations at the same time, tasks that experts describe as “massively parallel.”

Richard Edgar, a computational research scientist at the IIC, said two developments made GPU computing practical. The first programmable GPUs appeared in the late 1990s, and roughly two years ago the video card manufacturer Nvidia Corp. released CUDA, a programming language for them. Once CUDA was available, programmers outside the video game industry could take the cheap, powerful, off-the-shelf GPUs built to play war games or shoot monsters and turn them to scientific purposes.
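
To give a flavor of what that shift made possible, here is a minimal sketch of a CUDA C program (the kernel name and its toy shading rule are invented for illustration, not drawn from any of the Harvard projects). One tiny function is executed simultaneously by roughly two million lightweight threads, one per pixel of a full-screen image:

```cuda
// Hypothetical sketch: a CUDA kernel that decides the color of every
// pixel at once, the job GPUs were originally built for.
#include <cuda_runtime.h>

__global__ void shade(unsigned char *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's pixel column
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // this thread's pixel row
    if (x >= width || y >= height) return;

    // Toy shading rule (invented for illustration): a simple color gradient.
    int i = (y * width + x) * 3;
    pixels[i + 0] = (unsigned char)(255 * x / width);   // red
    pixels[i + 1] = (unsigned char)(255 * y / height);  // green
    pixels[i + 2] = 128;                                // blue
}

int main()
{
    const int W = 1920, H = 1080;                // one full-HD frame
    unsigned char *pixels;
    cudaMalloc(&pixels, W * H * 3);              // frame buffer on the GPU

    dim3 block(16, 16);                          // 256 threads per block
    dim3 grid((W + 15) / 16, (H + 15) / 16);     // enough blocks to cover the image
    shade<<<grid, block>>>(pixels, W, H);        // ~2 million threads launch here
    cudaDeviceSynchronize();                     // wait for the frame to finish

    cudaFree(pixels);
    return 0;
}
```

The same pattern, one thread per data element, is what scientists repurpose when they point GPUs at non-graphics problems.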

Depending on the nature of the task set to it, a computer containing both a CPU and a GPU can cycle back and forth between the two processors as appropriate, outperforming a conventional CPU-driven machine. Edgar said that one particularly complex class of problems, called “many-body problems,” deals with the properties of microscopic systems made up of many tiny particles, all interacting with each other. GPU computers, he said, are roughly 100 times faster at solving those problems than traditional CPU machines.
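
Many-body problems map well onto GPUs because each particle’s force sum is independent of every other particle’s. Here is a hedged sketch of such a kernel, with an invented inverse-square interaction standing in for whatever physics a real code would use:

```cuda
// Illustrative many-body force kernel (not the researchers' code).
// Each thread owns one particle and sums the influence of all the
// others; thousands of identical, independent sums run at once.
#include <cuda_runtime.h>

__global__ void accumulate_forces(const float3 *pos, float3 *force, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 f = make_float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[j].x - pos[i].x;
        float dy = pos[j].y - pos[i].y;
        float dz = pos[j].z - pos[i].z;
        float r2 = dx * dx + dy * dy + dz * dz + 1e-9f; // softening avoids 1/0
        float inv_r3 = rsqrtf(r2) / r2;                 // 1 / r^3
        f.x += dx * inv_r3;
        f.y += dy * inv_r3;
        f.z += dz * inv_r3;
    }
    force[i] = f;  // net force on particle i from all n-1 others
}

int main()
{
    const int n = 16384;                      // particle count, chosen arbitrarily
    float3 *pos, *force;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&force, n * sizeof(float3));
    // ... positions would be loaded from the physical system here ...

    accumulate_forces<<<(n + 255) / 256, 256>>>(pos, force, n);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(force);
    return 0;
}
```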

In Australia’s outback, hundreds of kilometers off the electrical grid, GPU computing was a requirement for Greenhill’s radio telescope.

When the Murchison Widefield Array begins operations in late 2010, it will generate so much raw data that the data must be processed as it streams off the telescope; it simply can’t all be stored for later analysis.

“We have volumes of data so enormous that no storage system in existence could hold it all for any useful length of time,” Greenhill said. “Just one day of data could fill about 200 of the largest hard disks now on the market. Yet the MWA needs to operate every night for months and months on end.”

Instead of storing and transporting the raw data, the MWA will process it on-site, crunching it down to a manageable size — about one disk per day. Processing that much data with traditional hardware would require a machine too large to power and cool in the desert on the site’s diesel generators, so the researchers needed a less-power-intensive way to do the computing. A GPU-driven supercomputer, Greenhill said, can be one-tenth the size of one built around CPUs alone.

“Those little gaming things will enable a revolutionary radio telescope to be built,” Greenhill said.

Harvard has become one of a handful of pioneering centers around the world pushing GPU computing ahead. Investigators involved with Harvard’s efforts, though, said that GPUs are so cheap and widely available on computer store shelves that others are rushing to catch up.

Harvard has participated in several international events designed both to share information among those currently involved in GPU computing and to bring new people into the field. Most recently, the National Nanotechnology Infrastructure Network and the Harvard Center for Nanoscale Systems hosted a mid-August workshop, organized by Michael Stopa, director of the network’s Computation Project, that introduced CUDA to interested researchers. In addition, Hanspeter Pfister, Gordon McKay Professor of the Practice of Computer Science at the School of Engineering and Applied Sciences (SEAS), is teaching a course, “Massively Parallel Computing,” that introduces students at Harvard and the Harvard Extension School to GPU computing.

Harvard’s main GPU cluster is called Orgoglio and is integrated with the Faculty of Arts and Sciences’ research computing cluster, a centrally located and administered supercomputer available for researchers who need high-powered computing for their work. A second GPU cluster, called Resonance, is at SEAS and is mainly used for instruction.

The other two projects joining the MWA in its GPU efforts involve probing the deepest secrets of atoms through quantum chemistry, and mapping the vastly intricate neural connections of the brain through a project called the Connectome.

Alán Aspuru-Guzik, assistant professor of chemistry and chemical biology, said GPU-based computing has sped up calculations for some problems by a factor of 15 in quantum chemistry, a field on the border of chemistry and physics that applies the principles of quantum mechanics to chemical problems.

Solving a problem such as figuring out how much energy it takes for a molecule to bind to a substrate involves matrix equations with potentially hundreds of thousands of elements, Aspuru-Guzik said.

“Usually these quantum chemistry problems are cast in a series of matrix equations, so you have these huge matrices — arrays of numbers — and you need to multiply them with each other,” Aspuru-Guzik said. “The [GPUs] work like a swarm of ants, rather than like central processing units, which are like a big, slow scarab beetle: smarter than an ant, but slower to perform a single task.”
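
In CUDA, such a multiplication would typically be handed to Nvidia’s cuBLAS library rather than written by hand. The sketch below is illustrative only; the matrix size and the use of single precision are assumptions, not details from Aspuru-Guzik’s code:

```cuda
// Hedged sketch: multiplying two large matrices on the GPU with
// Nvidia's cuBLAS library, the kind of operation described above.
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main()
{
    const int n = 4096;                           // a "huge" matrix, for illustration
    size_t bytes = (size_t)n * n * sizeof(float);

    float *A, *B, *C;
    cudaMalloc(&A, bytes);
    cudaMalloc(&B, bytes);
    cudaMalloc(&C, bytes);
    // ... A and B would be filled with the chemistry problem's matrix elements ...

    cublasHandle_t handle;
    cublasCreate(&handle);

    // C = 1.0 * A * B + 0.0 * C. Every element of C is an independent
    // dot product, so the GPU's "swarm of ants" computes them in parallel.
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);

    cublasDestroy(handle);
    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
```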

Aspuru-Guzik said he got started with GPU computing as an experimental project for an undergraduate summer researcher. Through the IIC, he met two other faculty members interested in the same thing: Greenhill and Pfister.

Pfister, an expert in computer science, is involved in the Connectome Project – an ambitious effort to map all the connections in the brain and peripheral nervous system of model animals, such as the lab mouse. The effort, using high-resolution microscopy, requires painstaking identification and tracing of the nerve axons that make up the body’s wiring.

Pfister said that humans looking at the images can easily identify axons and trace them through successive slices of the brain. Given the brain’s enormous complexity, the job would go far faster if it could be automated, but the task is proving difficult for computers. Pfister and other experts on the project have designed an algorithm that is beginning to help, though its output still needs to be checked by humans. Running the program on a GPU-based computer gives much higher performance, Pfister said.
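
The per-pixel character of such image analysis is what suits GPUs. As a hedged illustration, the simple edge-strength filter below stands in for the project’s far more sophisticated segmentation algorithm; launched over a two-dimensional grid, as in the shading example earlier, it gives every pixel of a slice its own thread:

```cuda
// Illustrative image-processing kernel, not the Connectome's algorithm.
// Computes how sharply intensity changes at each pixel, a basic building
// block of boundary detection in microscopy slices.
__global__ void edge_strength(const float *img, float *out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // pixel column
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // pixel row
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;

    // Central differences approximate the intensity gradient.
    float gx = img[y * w + (x + 1)] - img[y * w + (x - 1)];
    float gy = img[(y + 1) * w + x] - img[(y - 1) * w + x];
    out[y * w + x] = sqrtf(gx * gx + gy * gy);
}
```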

“You can put four teraflops [of GPU processing] in a desktop. It’s really a game changer,” Pfister said.

GPU computing, of course, has its difficulties. A new level of numerical precision is being built into the chips to meet the exacting standards of research computing, standards unnecessary when deciding pixel colors on a screen. For now, that higher precision comes at the cost of lower performance. Another issue is that using GPUs effectively requires specialists who understand how to program their unusual architecture.
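
In practice, the precision trade-off often surfaces as a one-word choice in the code: the same routine can be compiled for fast 32-bit floats or for the slower, more accurate 64-bit doubles that research often demands. A minimal sketch (a generic kernel, not tied to any of the projects named here):

```cuda
// Generic illustration of the precision trade-off (not project code):
// one kernel, two instantiations with very different speed and accuracy.
#include <cuda_runtime.h>

template <typename Real>
__global__ void axpy(int n, Real a, const Real *x, Real *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // y <- a*x + y, one element per thread
}

int main()
{
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // ... x and y would be initialized with real data here ...

    // Choosing <double> buys accuracy at the cost of speed on early
    // CUDA hardware; swapping the type to <float> reverses the trade.
    axpy<double><<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```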

Despite those issues, Pfister sees new applications for GPU computing in areas such as biomedical imaging, which is producing more and more detailed images, requiring ever-more powerful computing to manipulate and analyze the data.

Pfister also believes that GPU computing shouldn’t be relegated to a niche where it grinds away at massively parallel problems. Rethinking how computers approach other types of problems could make them amenable to quick resolution by GPUs as well.

“I think projects like the Connectome, MWA, and quantum chemistry are just the tip of the iceberg,” Pfister said.