
Computing Sciences Summer Students: 2016 Talks & Events

High-Performance Computing at NERSC 


Who: Rebecca Hartman-Baker and Brandon Cook
When: June 10, 10:00 a.m. - 11:30 a.m.
Where: 59-3101-CR

Abstract: What is High-Performance Computing (and storage!), or HPC? Who uses it and why? We'll talk about these questions as well as what makes a supercomputer so super and what's so big about scientific Big Data. Finally, we'll discuss the challenges facing system designers and application scientists as we move into the many-core era of HPC. After the presentation, we'll take a tour of the NERSC machine room. (Closed-toe shoes are REQUIRED for the machine room tour.)

Bios:

Rebecca Hartman-Baker is the acting leader of the User Engagement Group at NERSC. She got her start in HPC as a graduate student at the University of Illinois, where she worked as a graduate research assistant at NCSA. After earning her PhD in Computer Science, she worked at Oak Ridge National Laboratory in Tennessee and the Pawsey Supercomputing Centre in Australia before joining NERSC in 2015. Rebecca's expertise lies in the development of scalable parallel algorithms for the petascale and beyond.

Brandon Cook is an HPC consultant in the Application Performance Group at NERSC. He serves as the liaison to several teams preparing for the new Cori machine architecture at NERSC, and is also active in analyzing jobs and engaging NERSC users through consulting. He earned his PhD in physics from Vanderbilt University in 2012, where he studied ab initio methods for quantum transport in nanomaterials. Before joining NERSC he was a postdoc at Oak Ridge National Laboratory, where he developed and applied electronic structure methods to problems in materials science.

SLIDES


From Face Detection to the Faces of Scientific Images 


Who: Hari Krishnan
When: June 16, 11 a.m. - 12 p.m.
Where: 50A Auditorium

Abstract: Research across science domains often relies on image-based data from experiments. Our goal is to enable scientists to uncover relevant, but hidden, information from digital images through pattern recognition and machine learning, using emerging algorithms for dealing with massive data sets. In collaboration with colleagues from diverse science domains, we will build software tools that accelerate image analysis, reducing the time between experiments and opening the imaging facilities to more users.

Bio: Hari Krishnan has a Ph.D. in Computer Science and works as a computer systems engineer in the visualization and graphics group at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization on HPC platforms and many-core architectures. He leads the development effort on several HPC-related projects, which include research on new visualization methods, optimizing scaling and performance on NERSC machines, working on data-model-optimized I/O libraries, and enabling remote workflow services. As a member of the Center for Advanced Mathematics for Energy Research Applications (CAMERA), he supports development of the software infrastructure and works on accelerating image analysis algorithms and reconstruction techniques. He is also an active developer of several major open-source projects, including VisIt, NiCE, and H5hut, and has developed plugins that support performance-centric, scalable image filtering and analysis routines in Fiji/ImageJ.

SLIDES


Using Math and Computing to Model Supernovae 


Who: Andy Nonaka
When: June 23, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: We describe our recent efforts to understand the physics of Type Ia supernovae - the largest thermonuclear explosions in the universe - by simulating multiple stages of stellar evolution using two massively parallel code frameworks developed at Berkeley Lab. Each code framework was developed using mathematical models well-suited to the character of the burning, whether it be the pre-ignition deflagration phase or the post-ignition explosion phase. Our results will help scientists form models that describe the ultimate fate of our galaxy.

Bio: Andy Nonaka is a Research Scientist in Berkeley Lab's Center for Computational Sciences and Engineering (CCSE). After studying electrical engineering as an undergraduate, he changed fields and became interested in modeling and simulation during graduate school, where he developed algorithms for microfluidic flow. Since joining CCSE 9 years ago he has worked on problems in combustion, atmospherics, stochastic fluids, and astrophysics, utilizing high-performance computing centers such as NERSC.

SLIDES


Designing and Presenting a Science Poster 


Who: Jonathan Carter
When: June 30, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: During the poster session on August 4th, members of our summer visitor program will get to showcase the work they have been doing this summer. Perhaps some of you have presented posters before, perhaps not. This talk will cover the basics of poster presentation: designing an attractive format, presenting your information clearly, and deciding what to include and what to leave out. Presenting a poster is different from writing a report or giving a presentation. This talk will cover the differences and suggest ways to avoid common pitfalls and make poster sessions work more effectively for you.

Bio: Before assuming the Deputy role for CS, Dr. Carter was leader of the NERSC User Services Group (USG). He joined NERSC as a consultant in USG at the end of 1996, helping users learn to use the computing systems effectively. He became leader of USG at the end of 2005. Carter maintains an active interest in algorithms and architectures for high-end computing, and participates in benchmarking and procurement activities to deploy new systems for NERSC. In collaboration with the Future Technologies Group in CRD and the NERSC Advanced Technology Group, he has published several architecture evaluation studies and looked at what it takes to move common simulation algorithms to exotic architectures. His applications work on the Japanese Earth Simulator earned him a nomination as a Gordon Bell Prize finalist in 2005. Prior to LBNL, Dr. Carter worked at the IBM Almaden Research Center as a developer of computational chemistry methods and software, and as a researcher of chemical problems of interest to IBM. He holds a Ph.D. and B.S. in chemistry from the University of Sheffield, UK, and performed postdoctoral work at the University of British Columbia, Canada.

SLIDES


Exploring Networking Fundamentals Using Raspberry Pis 


Who: Sowmya Balasubramanian
When: July 07, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: Picture this! You are streaming your favorite movie, and just as the climactic scene approaches, the video pauses to buffer. It is 2 a.m., so you know that not many people are logged in at that hour, and the server isn't busy either. So why did the video pause? Could it be the network? But you have the best plan the internet company can offer, so why is the video still slow? And what has TCP got to do with it? Scientists, too, have large data files to transfer and often face the same issues. In this workshop we will explore what happens in a wide area network (WAN), using single-board computers connected as a local area network (LAN) to emulate the WAN. You will learn the characteristics of wide-area networks, namely delays and packet losses, and how they affect the overall performance of the network and, in turn, impact data transfers. This workshop was developed with the help of another ESnet member, Mary Hester.
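The effect is easy to see with a back-of-the-envelope model. The sketch below uses the well-known Mathis approximation, which bounds single-stream TCP throughput by MSS / (RTT x sqrt(loss)); the segment size, round-trip times, and loss rate are illustrative numbers, not measurements from the workshop.

```python
# Rough single-stream TCP throughput bound (Mathis et al. approximation):
#   throughput <= MSS / (RTT * sqrt(loss_rate))
# All numbers here are illustrative, not measurements from the workshop.
MSS_BITS = 1460 * 8  # a typical ~1460-byte maximum segment size, in bits

def tcp_throughput_mbps(rtt_ms, loss_rate):
    """Upper bound on TCP throughput in Mbit/s for a given RTT and loss rate."""
    rtt_s = rtt_ms / 1000.0
    return MSS_BITS / (rtt_s * loss_rate ** 0.5) / 1e6

# Same tiny loss rate (0.001%), very different paths:
for label, rtt_ms in [("LAN,  1 ms RTT", 1.0), ("WAN, 80 ms RTT", 80.0)]:
    print(f"{label}: {tcp_throughput_mbps(rtt_ms, 1e-5):8.1f} Mbit/s")
```

A loss rate that is negligible on a short LAN path caps a transcontinental transfer at a small fraction of the link speed, which is the kind of behavior the Raspberry Pi testbed lets you observe directly.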

Bio: Sowmya Balasubramanian is a Software Engineer at ESnet. She is passionate about developing software and middleware for computer networks and distributed systems. She has been part of the perfSONAR project, a popular open-source software package for network monitoring and troubleshooting. She has contributed to various stages of the software's life cycle: development, build and packaging, as well as end-user support. She has also worked on cloud middleware and host management agents. Balasubramanian, originally from Chennai, a city in South India, graduated with an M.S. in Information Networking from Carnegie Mellon University. She completed her undergraduate education in Electronics and Communication at SSN College of Engineering of Anna University in India.


Quantum Computation for Chemistry and Materials 


Who: Jarrod McClean
When: July 14, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: Quantum computers promise to dramatically advance our understanding of new materials and novel chemistry. Unfortunately, many proposed algorithms have resource requirements not yet suitable for near-term quantum devices. In this talk I will focus on the application of quantum computers to hard problems in chemistry and materials, and discuss the challenges and opportunities related to current algorithms. One method of particular interest for overcoming quantum resource requirements is the variational quantum eigensolver (VQE), a recently proposed hybrid quantum-classical method that solves eigenvalue problems and more general optimizations on a quantum device while leveraging classical resources to minimize coherence time requirements. I will briefly review the original VQE approach and introduce a simple extension, requiring no additional coherence time, that approximates excited states. Moreover, we show that this approach offers a new avenue towards mitigating decoherence in quantum simulation, requiring no coherence time beyond the original algorithm and no formal error correction codes.
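To make the hybrid structure concrete, here is a minimal classical sketch of the VQE loop: a classical optimizer tunes the parameters of a trial state to minimize the energy expectation value, which on real hardware would be estimated by the quantum device. The toy 2x2 Hamiltonian and single-parameter ansatz below are illustrative stand-ins, not anything from the talk.

```python
# Minimal sketch of the VQE loop for a toy 2x2 Hamiltonian.
# On a quantum device, energy() would be estimated from measurements;
# here NumPy stands in for the hardware. H and the ansatz are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

# Toy Hamiltonian H = Z + 0.5*X; its exact ground energy is -sqrt(1.25).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """Single-parameter trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(theta):
    """Expectation value <psi|H|psi>, the quantity the device would estimate."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Classical outer loop: minimize the energy over the ansatz parameter.
result = minimize_scalar(energy, bounds=(0.0, 2.0 * np.pi), method="bounded")
print(f"VQE estimate: {result.fun:.6f}  (exact: {-np.sqrt(1.25):.6f})")
```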

Bio: Jarrod McClean is currently a Luis W. Alvarez Fellow in Computing Sciences at Lawrence Berkeley National Laboratory, studying chemistry and materials simulation on quantum devices. He received his PhD in Chemical Physics from Harvard University, specializing in quantum chemistry and quantum computation, supported by the US Department of Energy as a Computational Science Graduate Fellow. His research interests broadly include quantum computation, quantum chemistry, numerical methods, and information sparsity.

SLIDES


Accelerating Science with the NERSC Burst Buffer Early User Program 


Who: Debbie Bard
When: July 21, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: NVRAM-based Burst Buffers are an important part of the emerging HPC storage landscape. The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory recently installed one of the first Burst Buffer systems as part of our new Cori supercomputer, collaborating with Cray on the development of the DataWarp software. NERSC has a diverse user base comprising over 6,500 users in 750 different projects spanning a wide variety of scientific applications, including climate modeling, combustion, fusion, astrophysics, computational biology, and many more. The potential applications of the Burst Buffer at NERSC are therefore also considerable and diverse. We describe here the Burst Buffer Early User Program at NERSC, which selected a number of research projects to gain early access to the Burst Buffer and exercise its different capabilities to enable new scientific advances. We present details of the program, in-depth performance results, and lessons learned from highlighted projects.
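For a flavor of what using the Burst Buffer looks like from an application, here is a minimal sketch. It assumes the Cray DataWarp convention of exposing a job's striped scratch allocation through the DW_JOB_STRIPED environment variable; the checkpoint file name and fallback path are purely illustrative.

```python
# Minimal sketch: write a checkpoint to the job's Burst Buffer allocation.
# Assumes Cray DataWarp exposes the striped scratch allocation via the
# DW_JOB_STRIPED environment variable (set when the batch job requests
# a burst buffer). File names and the fallback path are illustrative.
import os

def checkpoint_path(filename):
    """Prefer the Burst Buffer mount point if this job requested one."""
    bb_dir = os.environ.get("DW_JOB_STRIPED")
    base = bb_dir if bb_dir else "/tmp"  # fall back to node-local scratch
    return os.path.join(base, filename)

with open(checkpoint_path("step_0042.chk"), "wb") as f:
    f.write(b"...simulation state goes here...")  # placeholder payload
```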

Bio: Debbie Bard is a Big Data Architect at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic. She obtained her PhD at Edinburgh University, and worked at Imperial College London and SLAC National Accelerator Laboratory before joining the Data and Analytics group at NERSC, where she focuses on data-intensive computing and research.

SLIDES


Big Bang, Big Data, Big Iron - High Performance Computing For Cosmic Microwave Background Data Analysis 


Who: Julian Borrill
When: July 28, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: The Cosmic Microwave Background (CMB) is the last echo of the Big Bang, and carries within it the imprint of the entire history of the Universe. Decoding this preposterously faint signal requires us to gather ever-increasing volumes of data and reduce them on the most powerful high performance computing (HPC) resources available to us at any epoch. In this talk I will describe the challenge of CMB analysis in an evolving HPC landscape.

Bio: Julian Borrill is the Group Leader of the Computational Cosmology Center at LBNL. His current work is focused on developing and deploying the high performance computing tools needed to analyse the huge data sets being gathered by current Cosmic Microwave Background (CMB) polarization experiments (particularly the Planck satellite, the ground-based Polarbear experiment, and the EBEX balloon mission) and extending these to coming generations of both experiments and supercomputers. For the last 15 years he has also managed the CMB community and Planck-specific HPC resources at the DOE's National Energy Research Scientific Computing Center.

SLIDES