ESnet to Support Wide Range of Demos at SC14
When an estimated 10,000 high performance computing and networking experts from around the world converge on New Orleans for the SC14 conference looking for the latest developments, ESnet will play a major role in keeping everyone connected. Each year, a team of volunteers creates SCinet, the networking infrastructure for the conference, making the convention center one of the best-connected sites on the planet. For SC14, SCinet is bringing in 1.2 terabits per second (Tbps) of total connectivity, of which 400 gigabits per second (Gbps) comes directly from ESnet.
ESnet will also support a number of demonstrations at the conference, most notably a Caltech-led project to demonstrate terabit-level networking. Here is a look at that demo and several others:
Booth 3115: Towards Managed Terabit/s Scientific Data Flows
A team led by Caltech will showcase 1 terabit-per-second network connectivity and data transfers, as well as multilayer dynamic provisioning among the Caltech, International Center for Advanced Internet Research at Northwestern University (iCAIR) and Vanderbilt University booths. External sites include Caltech, CERN, the University of Victoria in Canada, the University of Michigan, SPRACE in São Paulo, Brazil, and Kyungpook National University in Korea. The demonstration, titled “Towards Managed Terabit/s Scientific Data Flows,” will show a system efficiently moving large scientific data sets from the Large Hadron Collider (LHC) between external and internal LHC data centers at rates up to 1,500 Gbps over various networks, including ESnet. The system will use dynamically reconfigurable network infrastructure, leveraging application intelligence across different layers of software and hardware at the various end sites.
Booth 1639: Remote I/O Pipeline Processing
The Naval Research Laboratory (NRL) will conduct “Remote I/O Pipeline Processing,” a coast-to-coast remote I/O demo showing on-the-fly processing and delivery of large-volume imagery data over transcontinental distances, without bulk data transfers and with flexible distribution of the load across network and compute resources. The demonstration will tap data in EchoStreams storage systems at NERSC in Oakland, Calif., and in Booth 1639, the Laboratory for Advanced Computing/Open Cloud Consortium, at SC14. Data will be processed at StarLight, at NRL in Washington, D.C., and at SC14, and will be carried over ESnet.
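The core idea of remote I/O pipeline processing is to transform data chunk by chunk as it streams in, rather than staging a bulk copy first. A minimal sketch of that pattern (purely illustrative; the names and the in-memory stand-in source are invented, not NRL's actual pipeline):

```python
import io

def stream_process(src, transform, chunk_size=1 << 20):
    """Process a remote byte stream chunk by chunk as it arrives,
    so no bulk copy of the full dataset is ever staged locally."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        yield transform(chunk)

# Stand-in for a remote imagery stream; a real pipeline would wrap a
# network file object (e.g. a socket or HTTP response body) instead.
source = io.BytesIO(b"raw imagery bytes " * 4)
processed = b"".join(stream_process(source, bytes.upper, chunk_size=16))
```

Because `stream_process` is a generator, each chunk can be processed and forwarded while the next one is still in flight, which is what lets the load be spread across network and compute resources.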
Booth 1939: The Multicore-Aware Data Transfer Middleware (MDTM)
The Multicore-Aware Data Transfer Middleware (MDTM) project by Brookhaven National Laboratory (BNL), Fermilab and Stony Brook University will demonstrate how new manycore processors provide advanced features that can be exploited to design and implement a new generation of high-performance data movement tools. The demo in DOE Booth 1939 will use the 100 Gbps ESnet connection between Fermilab and Brookhaven.
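One multicore-aware technique is to pin each data-movement worker to its own core so workers run in parallel without migrating between cores. A hypothetical sketch of that idea, not MDTM's actual code (all names here are invented, and the "work" is a stand-in for copying or checksumming a chunk):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def move_chunk(args):
    """Handle one chunk on a dedicated core (illustrative stand-in)."""
    core, chunk = args
    # sched_setaffinity is Linux-only; skip pinning where unavailable.
    if hasattr(os, "sched_setaffinity"):
        try:
            os.sched_setaffinity(0, {core})
        except OSError:
            pass  # the requested core may not exist on this machine
    return sum(chunk)  # stand-in for the real per-chunk data movement

# Stripe the workload into one chunk per worker/core.
chunks = [list(range(i, i + 4)) for i in range(0, 16, 4)]
with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
    results = list(pool.map(move_chunk, enumerate(chunks)))
```

The pinning keeps each worker's cache and NUMA locality stable, which is one of the "advanced features" of manycore hosts that tools like MDTM aim to exploit.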
Booth 1639: Real-time Data Transfer Over 100 Gbps WANs Using UDT-based App Suite
ESnet, the European Bioinformatics Institute-UK, the University of Amsterdam and the University of Edinburgh will demonstrate real-time data transfer over 100 Gbps wide-area networks using a UDT-based application suite. The demo in Booth 1639, the Laboratory for Advanced Computing/Open Cloud Consortium, will connect with the StarLight International Communications Exchange Facility (StarLight) in Chicago and the University of Chicago. It will show how large-scale data providers can peer with one another so that a researcher using cloud-based computing services at one site can transparently access data at another site over wide-area 100 Gbps connections.
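UDT is a UDP-based reliable transport designed for high-throughput transfers over long-distance networks, and tools built on it typically stripe a payload across parallel streams. A minimal, hypothetical sketch of the striping and reassembly bookkeeping only (not the UDT API; real tools move the stripes over separate network streams):

```python
def stripe(data, n_streams):
    """Split a payload into interleaved stripes, one per parallel stream."""
    return [data[i::n_streams] for i in range(n_streams)]

def reassemble(stripes):
    """Interleave the stripes back into the original byte order."""
    out = bytearray()
    for i in range(max(len(s) for s in stripes)):
        for s in stripes:
            if i < len(s):
                out.append(s[i])
    return bytes(out)

payload = bytes(range(10))
assert reassemble(stripe(payload, 3)) == payload  # round-trips losslessly
```

Striping lets each stream keep its own congestion window full, so aggregate throughput scales closer to the capacity of a 100 Gbps path than a single stream could.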
Booth 533: Using ExoGENI to Perform Experiments With 40 Gbps WAN
A demo by the Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill, Northwestern University and StarLight will show how experimenters can use ExoGENI, a networked infrastructure-as-a-service test bed funded through the National Science Foundation’s Global Environment for Network Innovation (GENI) project, to perform experiments with a 40 Gbps wide-area network. The demo in RENCI Booth 533 will access ExoGENI systems at Berkeley Lab’s Oakland Scientific Facility via ESnet.
Booth 658: FASP Transfer Technology Software
Aspera will demonstrate its FASP transfer software, showing live 80 Gbps file transfers across a loop from the show floor to ESnet’s router in Chicago. The demo will be in Aspera Booth 658.
Booth 2739: 100 Gbps Disk-to-Disk Data Transfers
Engineers from NASA's Goddard Space Flight Center in Booth 2739, in collaboration with ESnet, StarLight and the Mid-Atlantic Crossroads (MAX) Gigapop, will demonstrate a 100 Gbps disk-to-disk data transfer capability suitable for next-generation Big Data science applications.
About Computing Sciences at Berkeley Lab
The Computing Sciences Area at Lawrence Berkeley National Laboratory provides the computing and networking resources and expertise critical to advancing Department of Energy Office of Science research missions: developing new energy sources, improving energy efficiency, developing new materials, and increasing our understanding of ourselves, our world, and our universe.
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 13 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.