ESnet Reaches First Milestone in ANI Deployment with Infinera DWDM Installation
April 26, 2010
ESnet Contact: Wendy Wolfson Tsabba, WWolfson@lbl.gov, 510-486-5745
In March of this year, the U.S. Department of Energy's Energy Sciences Network (ESnet) completed the first milestone in constructing its Advanced Network Initiative (ANI) testbed by installing Infinera's dense wavelength-division multiplexing (DWDM) equipment. DWDM refers to optical networking systems that can send large volumes of data over multiple wavelengths of light on a single fiber.
Funded by $62 million under the American Recovery and Reinvestment Act (Recovery Act), the ANI testbed will be a community resource for scientists, technologists, and industry to conduct network, middleware, and application research and development. The ANI testbed, launched in September 2009, is being designed, built, and operated by ESnet network engineers. ESnet is funded by the DOE Office of Advanced Scientific Computing Research and managed by Lawrence Berkeley National Laboratory (Berkeley Lab).
The ANI testbed will support research on the data, control, management, authentication/authorization, and service planes. It will initially be capable of running 10 Gbps circuits, and eventually 40 Gbps and 100 Gbps circuits. The testbed will start out as a "tabletop" testbed at Berkeley Lab and later be deployed in the metro area.
Anatomy of the ANI testbed
The ANI network testbed will comprise DWDM optical systems that are Generalized Multiprotocol Label Switching (GMPLS)-enabled. The deployed Layer 2 switches will additionally support OpenFlow, an open standard that provides a standardized interface for adding and removing flow table entries in an Ethernet switch. Layer 3 routers currently include small Multiprotocol Label Switching (MPLS)-enabled routers compatible with ESnet's Science Data Network (SDN), which runs the On-Demand Secure Circuits and Advance Reservation System (OSCARS). MPLS is a mechanism in high-performance networks that directs and carries data from one network node to the next, making it easy to create "virtual links" between distant nodes.
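To make the OpenFlow idea concrete, the sketch below models a switch's flow table as a priority-ordered list of match/action entries that a controller can add and remove. This is a minimal illustration of the concept only; the class, field names, and actions are hypothetical and do not reflect any real OpenFlow implementation or ESnet's configuration.

```python
# Toy model of the OpenFlow concept: a controller adds and removes
# flow-table entries in an Ethernet switch. All names are illustrative.

class FlowTable:
    """Hypothetical sketch of a switch's flow table."""

    def __init__(self):
        self.entries = []  # dicts with match fields, an action, and a priority

    def add_flow(self, match, action, priority=0):
        """Install a flow entry; higher-priority entries are consulted first."""
        self.entries.append({"match": match, "action": action, "priority": priority})
        self.entries.sort(key=lambda e: -e["priority"])

    def remove_flow(self, match):
        """Remove every entry whose match fields equal `match`."""
        self.entries = [e for e in self.entries if e["match"] != match]

    def lookup(self, packet):
        """Return the action of the first entry whose match fields all appear in the packet."""
        for e in self.entries:
            if all(packet.get(k) == v for k, v in e["match"].items()):
                return e["action"]
        return "send_to_controller"  # table miss: punt the packet to the controller

table = FlowTable()
table.add_flow({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}, "output:port2", priority=10)
table.add_flow({"eth_type": 0x0800}, "drop", priority=1)

print(table.lookup({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}))  # output:port2
print(table.lookup({"eth_type": 0x0800, "ip_dst": "10.0.0.9"}))  # drop
```

The key property the standard interface provides is exactly this separation: the forwarding table lives in the switch, while the logic deciding which entries to install lives in an external controller.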
The Infinera systems support experimentation with multi-domain hybrid networking, enabling research into multi-layer, multi-domain control and signaling.
ESnet has the ability to reserve and manage the various testbed components and provide access to all available monitoring data. "The ability to isolate and segregate traffic in different control planes for the different types of traffic is key," said Steve Cotter, head of ESnet. "This will allow researchers to test new protocols that can dynamically provision layer 1, 2 or 3 virtual circuits so traffic can be transferred to the most cost-effective layer. We transport very large datasets on our network for climate science and physics. The near term goal is to learn how to identify flows and move them down to the layer where they can be transported with greatest efficiency."
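The near-term goal Cotter describes, identifying large flows and moving them to the layer where they can be carried most efficiently, can be sketched as a simple classification step. The thresholds and layer names below are illustrative assumptions for this sketch, not ESnet policy.

```python
# Hypothetical illustration of steering flows to a cost-effective layer.
# The byte thresholds and layer labels are made-up assumptions.

def choose_layer(flow_bytes):
    """Map a flow's expected size in bytes to a transport layer."""
    if flow_bytes >= 10**12:       # multi-terabyte dataset: dedicated optical circuit
        return "layer1_lightpath"
    if flow_bytes >= 10**9:        # gigabyte-scale bulk transfer: switched circuit
        return "layer2_circuit"
    return "layer3_routed"         # ordinary traffic stays on routed IP

print(choose_layer(5 * 10**12))  # layer1_lightpath
print(choose_layer(2 * 10**9))   # layer2_circuit
print(choose_layer(10**6))       # layer3_routed
```

In practice the interesting research questions are in detecting such flows dynamically and provisioning the circuits on demand, which is what the testbed's isolated control planes are meant to let researchers experiment with.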
"Getting an operational 10G network that can support research traffic is an important first step," Cotter said. "The eventual intent is for the testbed to connect to the 100G prototype network in progress. Our goal is to scale up the network by stages to eventually move climate computational data around the network at 100G for data-intensive projects like the DOE's Earth System Grid."
In the next few months, ESnet will issue a call for proposals for users to conduct research on the testbed. ESnet is also issuing a Request for Proposals (RFP) for equipment vendors and service providers to build the next phase of the ANI project: a planned nationwide 100G prototype network slated to link three of DOE's major supercomputing centers (NERSC at Berkeley Lab, the Argonne Leadership Computing Facility at Argonne National Laboratory in Illinois, and the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory in Tennessee) as well as MANLAN, the international exchange point in New York. In addition to the ARRA goals, a goal of this prototype network will be to support interoperability testing of 100G components from multiple vendors.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE's Office of Science.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.