InTheLoop | 02.03.2014

NERSC Accepts, Dedicates Edison

The National Energy Research Scientific Computing Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated on Feb. 5 as part of the annual NERSC Users Group meeting being held Feb. 3-6 at Berkeley Lab (»More information). "As we celebrate NERSC's 40th anniversary, it's quite fitting we start the year by dedicating Edison, a system that embodies our guiding principle over the last four decades: computing in the service of science," said NERSC Director Sudip Dosanjh. »Read more.

Early Edison Users Deliver Results

Before any supercomputer is accepted at NERSC, scientists are invited to put the system through its paces during an "early science" phase. While the main aim of this period is to test the new system, many NERSC users also significantly advance their research. From cosmology to nanomaterials, combustion, and carbon sequestration, we offer a glimpse into the science a few researchers did during Edison's early science hours, including that of David Trebotich of CRD and Zarija Lukic of the Computational Cosmology Center. »Read more.

Deployments of perfSONAR Hit 1,000

Pinpointing performance problems on large-scale networks can be difficult, so a collaboration of research and academic networking organizations has developed perfSONAR, a publicly available, easy-to-install software suite that takes the guesswork out of network diagnostics. In January 2014, perfSONAR reached a milestone with 1,000 instances of the diagnostic software installed on networking hosts around the U.S. and in 25 other countries. The decade-old collaboration that created the software has included the Energy Sciences Network (ESnet), Fermilab, SLAC National Accelerator Laboratory, Georgia Tech, Indiana University, Internet2, the University of Delaware, the GÉANT project in Europe, and RNP in Brazil. »Read more.

This Week's Computing Sciences Seminars

»View the CS Seminars calendar.

Measurements of the Properties of the Higgs Boson in the Four Lepton Final State
Mon, February 3, 10am – 11am, NERSC OSF, Room 254
Matthew Snowball, University of Florida

Following the discovery of the Higgs boson, the age of precision measurements of this newly discovered particle has begun. In this presentation, I will show the latest results from the CMS detector on measurements of the properties of a Higgs boson in the four lepton final state. These measurements are crucial to the understanding of the final piece of the Standard Model. Without the worldwide computing grid and the ability to handle staggering volumes of data, these measurements would not be possible.

Open Source Scalable Data Storage, Visualization and Analysis for Scientific Data
Thu, February 6, 2pm – 3pm, Bldg. 50F, Room 1647
Marcus D. Hanwell, Kitware, Inc.

As computational power and storage capabilities grow, we need to move beyond simply storing data and toward scalable techniques for indexing, visualizing, and interacting with our data. This requires a variety of techniques, ranging from metadata abstraction and powerful search algorithms to client-server data processing pipelines built on flexible frameworks and domain-specific approaches. Some of the building blocks common to different areas will be covered, along with several approaches explored for chemical data and how they might be expanded for general materials simulations and data.

The Open Chemistry project will be described as it concerns the complete simulation lifecycle, including how it reuses several projects developed at Kitware and by the wider community. We will focus on the use of MongoDB and related technologies in the Open Chemistry project, and how that same technology is being applied in other areas such as the DARPA XDATA project. We will touch on the development of several related projects, such as XDATA and VTKWeb, and how they are being incorporated in the Open Chemistry project to provide a simple solution for materials scientists who want to store, search, collaborate on, and share their data with the wider community.

These tools are permissively licensed open source projects developed collaboratively; an overview of the software process used will also be given.

Leveraging Open Tools to Fuel Next-Generation Computing

As the size of data grows inexorably toward the exascale, computational power and storage capabilities are improving to try to keep pace. However, growth in the size of monolithic storage systems is not sufficient. Scalable techniques that enable dynamic indexing, visualization, and interaction with data will provide a flexible solution for the future of computing.

There are a variety of open tools that provide the best avenue for this, as they can be integrated, scaled, and adapted as needed to evolve with changing computational challenges. These tools can provide critical techniques such as metadata abstraction, powerful search algorithms, client-server data processing pipelines built on flexible frameworks, and domain-specific approaches. Kitware contributes to and leverages a range of such tools and software packages to address diverse computing challenges, from visualization with VTK, ParaView, and ParaViewWeb, to data storage and job submission with MongoDB and MoleQueue. We will focus on how these technologies are being used at Kitware and how those applications are supporting the efforts of research communities, including computational chemistry, materials TEM tomography, and HPC simulation.

Understanding Application Behavior with Sight
Fri, February 7, 10am – 11am, Bldg. 50F, Room 1647
Grigory Bronevetsky, Lawrence Livermore National Laboratory

The complexity of computer hardware and software is making it difficult to understand and predict how a given piece of software will behave on a given piece of hardware. This includes (i) prediction of application performance, (ii) the response of applications to soft errors, (iii) the effect of application configuration options (e.g., iteration threshold or time step size) on the accuracy of its results, and (iv) the classic debugging challenge of making sure the application produces correct results. All of these tasks share a common need to collect vast amounts of information about application behavior, analyze it, and present it to application developers in a comprehensible way. In this talk, I will summarize my work on developing a suite of analysis techniques and tools to address these challenges. It includes:

- Prediction of how long application tasks will run when contending for resources with other tasks
- Detection and characterization of performance faults based on their effects on application behavior
- Analysis of application vulnerability to soft errors
- Synthesis of scientific approximations from observations of how physical systems behave

I will also describe my work to make these techniques easily available to application developers via Sight, a tool that simplifies the collection, analysis, and display of heterogeneous application information.

Improving the accuracy and time scales of ab initio molecular dynamics simulations for actinide and geochemical systems: Exact exchange, free energy, and parallel in time algorithms
Fri, February 7, 1pm – 2pm, Bldg. 50A, Room 5132
Eric Bylaska, William R. Wiley Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory (PNNL)

Methods of directly simulating the behavior of complex, strongly interacting atomic systems (molecular dynamics, Monte Carlo) have provided important insight into the behavior of nanoparticles, biochemical systems, actinide systems, and geofluids. What limits these methods from even wider application is the difficulty of developing accurate potential interactions at the molecular level that capture the systems' complex chemistry. Ab initio molecular dynamics methods have provided a means to simulate dynamics from molecules to large nanoscale systems. However, these methods have been limited to low levels of electronic structure theory and short time scales. This talk will focus on our developments in two areas: the implementation of higher-level electronic structure methods, including exact exchange, into NWChem; and the application and development of free energy and parallel-in-time algorithms. The talk will cover the fundamentals of these methods and the realities, in terms of system size, computational requirements, and simulation times, that their application requires. Recent applications of these methods will be shown for solvated mineral surfaces and their interaction with metal cations.
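The parallel-in-time idea mentioned in the abstract can be illustrated with the well-known parareal algorithm, shown here as a minimal serial sketch. The fine propagations in each iteration are independent, which is what a parallel implementation distributes across processors. The test equation dy/dt = -y, the forward-Euler propagators, and all step counts are illustrative assumptions, not details of the speaker's NWChem work.

```python
import math

def coarse(y, dt):
    # One cheap forward-Euler step: the coarse propagator G
    return y + dt * (-y)

def fine(y, dt, substeps=100):
    # Many small forward-Euler steps: the expensive fine propagator F
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * (-y)
    return y

def parareal(y0, t0, t1, n_slices, n_iter):
    dt = (t1 - t0) / n_slices
    # Initial guess: serial sweep with the coarse solver alone
    y = [y0]
    for i in range(n_slices):
        y.append(coarse(y[i], dt))
    for _ in range(n_iter):
        # Fine propagations over each slice use only the previous
        # iterate, so they are independent (parallelizable)
        f = [fine(y[i], dt) for i in range(n_slices)]
        # Serial correction sweep:
        # y_{i+1}^{k+1} = G(y_i^{k+1}) + F(y_i^k) - G(y_i^k)
        new = [y0]
        for i in range(n_slices):
            new.append(coarse(new[i], dt) + f[i] - coarse(y[i], dt))
        y = new
    return y

# For dy/dt = -y with y(0) = 1, the exact solution at t = 1 is exp(-1)
y = parareal(1.0, 0.0, 1.0, n_slices=10, n_iter=5)
print(abs(y[-1] - math.exp(-1.0)))
```

After a few iterations the parareal iterate converges to the serial fine solution, so the wall-clock cost approaches that of the coarse sweep plus a handful of concurrent fine solves.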

InTheLoop is a weekly email newsletter produced by the Computing Sciences Area of Lawrence Berkeley National Laboratory.

An index of past issues (from 2007 to today) is available on the Computing Sciences web site at http://cs.lbl.gov/news-media/intheloop/. Issues from 1997 to 2006 are in the process of being posted.

You are receiving this newsletter because you are on CS staff, affiliated with CS, or have requested to receive it.

If you have questions about or suggested items for InTheLoop, please contact Margie Wylie (mwylie@lbl.gov) or Jon Bashor (jbashor@lbl.gov).

Lawrence Berkeley National Laboratory
Computing Sciences
1 Cyclotron Rd., MS50B4230
Berkeley, CA 94720