
Lab Team Again Proves Best at Moving Massive Amounts of Data across Networks

November 15, 2001

For the second year in a row, a team led by high-performance computing experts from Berkeley Lab took top honors in a contest to move the most data across the network built around SC, the annual conference of high-performance computing and networking, held this week in Denver. The winning application was a live visualization of a simulation of colliding black holes.

SC2001 marked the second staging of the Network Bandwidth Challenge, in which researchers with high-bandwidth applications were invited to push the network infrastructure’s multi-gigabit links to their limits with demonstrations of leading-edge computer applications. During the 2001 Network Bandwidth Challenge, teams of researchers from around the world used SCinet, the conference’s fiber-optic network, to demonstrate applications using huge amounts of distributed data. The challenge was held under the theme of “Veni, Vidi, Conexui Maxime,” Latin for “I came, I saw, I connected to the max.”

The Berkeley Lab team of John Shalf, Wes Bethel, Michael Bennett, John Christman, George “Chip” Smith, Eli Dart, Brent Draney and David Paul, along with collaborators in Illinois and Germany, created a live visualization of a black hole collision simulation computed in real time at the NERSC supercomputing center at LBNL and at another center in Champaign, Illinois. This required the integration of computational tools from many disciplines. The team achieved a sustained network performance of 3.3 gigabits per second (3.3 billion bits of data per second). The group had predicted it would break the 3.0 gigabit-per-second mark, but expected a sustained rate closer to 2.0 gigabits per second.

For the conference, the SCinet team, which included Bill Kramer, Eli Dart, Stephen Lau and Bill Iles of Computing Sciences, assembled a special network infrastructure that featured a 14.5 gigabit-per-second wide-area connection over multiple OC-48 links to the exhibit floor, along with connections to most high-speed national research networks. DOE’s Energy Sciences Network (ESnet), which is managed by LBNL, and the LBLnet networking support group helped provide the high-bandwidth connections between the Lab and Denver.

“Members of the ESnet staff really went out of their way to get the OC-48 link up and running to make our participation in the High-Performance Bandwidth Challenge possible,” said Bill Kramer, head of NERSC’s High Performance Computing Department and chair of the bandwidth competition. “In addition to the challenge team members, ESnet and LBLnet deserve a lot of credit for helping the Lab team win.”

The team also had a new piece of equipment to help them out – a 10 Gigabit Ethernet switch built by Force 10 Networks. SC2001 marked the public debut of the switch; one unit was installed in the Lab booth and a second in the conference network operations center. John Christman of LBLnet facilitated the use of the switches from Force 10, and Mike Bennett spent many late hours testing them. The team used the switch to achieve a high-bandwidth connection between computers at various sites, and Raju Shah from Force 10 was on hand to ensure the connections were seamless.

"The 10 Gigabit switch was one of the few trouble-free components of this entire network-distributed application,” said team leader Shalf. "Despite the performance of the Visapult/Cactus application, we never got close to stressing the Force 10 switch."

The team ran its part of the demonstration on an eight-server cluster, with each server equipped with a high-speed SysKonnect network card.

The project created a scientific visualization of a grazing collision of two black holes, simulated on NERSC's 5 teraflop/s IBM SP system using the Cactus code developed by collaborators at the Albert Einstein Institute in Germany. Data from the running simulation was streamed in real time over high-performance networks to Denver, where it was volume-rendered in parallel by the Visapult application running on a cluster of PCs in the LBNL booth on the SC2001 show floor. The combined application provided highly interactive visualization and computational steering of a production-scale simulation code over a wide-area network.
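In rough terms, the pattern is a simulation process pushing each timestep's data onto the network while a remote renderer consumes and draws frames as they arrive. The short Python sketch below illustrates that producer/consumer streaming idea only; it is not the actual Cactus or Visapult code, and the host name, port, wire format and grid size are all illustrative assumptions.

    # Hypothetical sketch of the streaming pattern described above.
    # Not the real Cactus/Visapult protocol; names and format are assumptions.
    import socket
    import struct

    import numpy as np

    HOST, PORT = "renderer.example.org", 9999  # hypothetical rendering endpoint
    GRID = 64                                  # toy grid size, far below production resolution


    def stream_simulation(steps: int = 10) -> None:
        """Producer side: send one length-prefixed float32 volume per timestep."""
        with socket.create_connection((HOST, PORT)) as sock:
            for step in range(steps):
                # Stand-in for one simulation timestep; real data would come from the code.
                volume = np.random.rand(GRID, GRID, GRID).astype(np.float32)
                payload = volume.tobytes()
                sock.sendall(struct.pack("!II", step, len(payload)) + payload)


    def _recv_exact(conn: socket.socket, n: int) -> bytes:
        """Read exactly n bytes from the connection (a TCP recv may return fewer)."""
        chunks = []
        while n > 0:
            chunk = conn.recv(n)
            if not chunk:
                raise ConnectionError("stream closed early")
            chunks.append(chunk)
            n -= len(chunk)
        return b"".join(chunks)


    def receive_and_render(listen_port: int = PORT) -> None:
        """Consumer side: render each volume as soon as it arrives."""
        with socket.create_server(("", listen_port)) as server:
            conn, _ = server.accept()
            with conn:
                while True:
                    try:
                        step, size = struct.unpack("!II", _recv_exact(conn, 8))
                    except ConnectionError:
                        break
                    volume = np.frombuffer(_recv_exact(conn, size), dtype=np.float32)
                    volume = volume.reshape(GRID, GRID, GRID)
                    print(f"timestep {step}: rendering {volume.nbytes} bytes")  # renderer stub

In the actual demonstration, the producer role was played by the Cactus simulation on the IBM SP and the consumer role by Visapult's parallel volume renderer on the booth cluster; the toy wire format above simply stands in for their real protocols.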

Judges commented that the team’s application was a useful tool for allowing scientists to view results from data stored at distant and dispersed computing sites.

“The network is often overlooked in terms of its contribution to enabling scientific discovery in areas of interest to such research organizations as the Department of Energy, and to advancing communication and understanding around the world,” said Walt Polansky, one of the competition judges and acting director of DOE’s Mathematical, Information and Computational Sciences Division. “This Network Bandwidth Challenge shines the spotlight on the network and the people who operate and push network technologies, and provides an opportunity to demonstrate innovative applications across all disciplines.”


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.