The Division of Engineering and Applied Sciences (DEAS) and computer giant IBM are teaming up through an applied research award to create a pilot computer grid that, if successful, could one day provide researchers access to greatly increased computing power.
Still in its infancy, grid computing may turn out to be computing’s next evolution. Years ago, computing power was concentrated in massive supercomputers bolted to the floor. Today, humble desktops, servers, and other machines together hold a great deal of computing power, but that power is scattered and, in many cases, available only to each machine’s own users.
Grid computing would harness those scattered machines into a single computing grid, making the power of computers of all sorts – big and small – seamlessly available to anyone who needs it.
“Grid computing holds great promise to bring extraordinarily powerful computational power to the research community – in some cases enabling us to tackle problems that were previously too computationally intensive to consider,” said Provost Steven E. Hyman. “I’m delighted to see DEAS take a strong first step in this area and look forward to seeing how the project evolves. I’m also pleased to be able to support this project by providing the high-end networking and hosting infrastructure required to launch this effort.”
If successful, the result, which researchers will call the “Crimson Grid,” should merge the scattered power of many computers into a flexible, responsive supercomputer. Beyond sheer computing power, the grid would make available to grid users specific resources now found on perhaps just one machine – such as complex programs or models, data, and storage capacity.
“A grid could potentially provide the tools to solve any type of problem, from a complex literature search to mining the genome,” said DEAS’s Chief Information Officer and Information Technology Director Jayanta Sircar.
The effort, called the Crimson Grid Test Bed, is being launched with the help of several parties, including IBM, Intel Corp., the Faculty of Arts and Sciences, and University Information Systems.
The initiative is just part of a larger DEAS effort to work with computer industry leaders to develop new computational tools and techniques that can benefit researchers in other parts of the University, according to DEAS Dean Venkatesh Narayanamurti.
“Our goal is to provide the enabling infrastructure for state-of-the-art research computing,” Narayanamurti said. “Such infrastructure is critical to several scientific disciplines, spanning areas such as high-energy physics, materials science, computer science, astronomy, and biology.”
Physics Department Chairman John Huth can vouch for the potential power of a grid. Huth uses grid computing in his high-energy physics research, including his work on ATLAS, an experiment at the European particle physics laboratory CERN in Geneva.
Huth has put together a full-scale production grid, called Grid3. The project has linked together 2,700 central processing units from computers at 27 different institutions, including four national laboratories and 23 universities. Huth said they ran an average of 500 jobs on the grid at one time and shipped over 100 terabytes – a terabyte is a million million bytes – of information over the Internet.
Other Harvard faculty are looking forward to the computational muscle a “Crimson Grid” would bring to Harvard.
Efthimios Kaxiras, Gordon McKay Professor of Applied Physics and professor of physics, said grid computing would allow him and other scientists at Harvard to expand the scope of their work.
“We have great hopes that this will enable us to realize the dream of creating nanoscale materials by design on the computer,” Kaxiras said. “Our community is not as advanced in grid computing as the high-energy community, but eventually we also stand to benefit from the availability of a high-performance grid for very large-scale applications.”
Through an IBM Shared University Research Program, teams from Harvard and IBM will develop and test standardized grid tools and protocols designed to help other academic institutions take advantage of grid computing.
“Harvard and IBM share a vision of using grid technology to significantly broaden the boundaries of academic research, especially in the area of life sciences,” Bruce Harreld, IBM senior vice president, Strategy, said in a statement. “This grid project can open doors to new research and help both organizations to draw on complementary strengths, including IBM’s expertise in grid computing, computational biology, and advanced IT [information technology] solutions.”
Narayanamurti said the Division is pleased to be working with IBM on the project.
“This award will provide an excellent opportunity for industry and academia to collaborate,” Narayanamurti said. “Such an effort blends well with the interdisciplinary and team environment of the Division itself.”
Will it work here?
A computing grid gets its flexibility and power by changing the relationship between networked computers.
Most office computers today are configured in what’s called a “client-server” relationship. In this sort of setup, the desktop computers are linked to an office’s central server, which is usually a more powerful computer. Each desktop can access the resources of the central server – things like databases, storage capacity, and specific files – but it can’t access similar resources available on its neighbor’s desktop or on computers in another office’s network.
In a grid system, all the connected computers are equal “nodes” on the grid and their resources can be made available for other grid users to share. Special software, called “middleware,” is installed on each computer on the grid and transforms it into a grid node.
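The node-and-middleware idea above can be sketched in a few lines of Python. This is a toy illustration only – the `Node` and `Grid` classes and the machine names are invented for this example, and real grid middleware (such as the Globus Toolkit used in production grids of that era) is far more elaborate – but it shows the key shift: every machine registers as an equal peer, and its spare resources become visible to the whole grid.

```python
from dataclasses import dataclass


@dataclass
class Node:
    """A machine that middleware has turned into an equal peer on the grid."""
    name: str
    free_cpus: int  # spare processors this machine is willing to share


class Grid:
    """Toy registry: nodes join, and their pooled resources become shared."""

    def __init__(self):
        self.nodes = []

    def join(self, node):
        # In real middleware, joining involves authentication and discovery;
        # here it is simply adding the node to a shared registry.
        self.nodes.append(node)

    def total_free_cpus(self):
        # The grid's headline benefit: capacity is summed across all peers,
        # not limited to any one machine.
        return sum(n.free_cpus for n in self.nodes)


grid = Grid()
grid.join(Node("desktop-a", free_cpus=2))
grid.join(Node("server-b", free_cpus=8))
print(grid.total_free_cpus())  # pooled capacity: 10
```

Contrast this with the client-server setup described earlier, where a desktop could see only its own server's resources: here every node's capacity is part of one shared pool.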
Sircar compared a computing grid to the telephone system, where a person picks up a telephone in one location and calls someone else at another location. However technically complex the routing may be, it doesn’t matter to the caller – what matters is that the call gets through.
Add in many people making calls at the same time, all using the telephone system’s resources seamlessly, yet in slightly different ways, and you have an idea of what a computing grid might look like.
Grid users would use their computers as they normally would, but the computing resources available to them would no longer be limited to the machines they or their research groups actually own. If a resource-hungry molecular modeling program, for example, threatened to overload a user’s machine, the program could – instead of slowing the machine down or crashing it – draw extra power from grid computers that had resources to spare.
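The offloading described above amounts to a scheduling decision: find a node with enough spare capacity and send the heavy job there. The sketch below is a simplified, hypothetical scheduler (the node names and the "most spare CPUs wins" policy are assumptions for illustration; real grid schedulers weigh queues, data locality, and policies as well).

```python
def pick_node(nodes, cpus_needed):
    """Choose a grid node for a resource-hungry job.

    Toy policy: among nodes with enough free CPUs, pick the one with
    the most spare capacity; return None if nothing on the grid fits.
    """
    candidates = [n for n in nodes if n["free_cpus"] >= cpus_needed]
    if not candidates:
        return None
    return max(candidates, key=lambda n: n["free_cpus"])


# Hypothetical snapshot of a small grid's spare capacity.
nodes = [
    {"name": "local-desktop", "free_cpus": 1},
    {"name": "lab-server", "free_cpus": 12},
    {"name": "dept-cluster", "free_cpus": 48},
]

# An 8-CPU modeling run would overwhelm the local desktop, so the
# scheduler routes it to the node with the most room.
best = pick_node(nodes, cpus_needed=8)
print(best["name"])  # dept-cluster
```

The user never has to make this choice – just as a caller never chooses how a phone call is routed, the middleware decides where the work actually runs.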
Though the theory of a computer grid sounds great – and it has been proven to work in other limited settings – Sircar said questions do remain about its practicality in a higher education setting. The pilot Crimson Grid program aims to work those questions out so that Harvard can see if implementing a grid makes sense here.
Beyond the technical questions, budgets, delivery methods, policies, and protocols all have to be examined. Once those are fully understood, a grid’s benefits and costs can be weighed against other types of computer network configurations.
Sircar is thinking beyond Harvard as well. If the project is successful here, he sees a time when the lessons learned at Harvard can be applied to similar grids at universities across the country. With those in place, the grids themselves could be joined in what’s called a “meta-grid,” or a grid of grids, allowing resources to be pooled across the nation.
“We don’t have the answers to those questions yet,” Sircar said. “But exploring those questions is what makes the grid project so exciting.”