
Computing Sciences Summer Students: 2017 Talks & Events

High-Performance Computing at NERSC

Who: Rebecca Hartman-Baker and Brandon Cook
When: June 08, 10:00 a.m. - 11:30 a.m.
Where: 59-3101-CR

Abstract: What is High-Performance Computing (and storage!), or HPC? Who uses it and why? We'll talk about these questions as well as what makes a supercomputer so super and what's so big about scientific Big Data. Finally, we'll discuss the challenges facing system designers and application scientists as we move into the many-core era of HPC. After the presentation, we'll take a tour of the NERSC machine room. (Closed-toe shoes are REQUIRED for the machine room tour.)

Bios: Rebecca Hartman-Baker is the acting leader of the User Engagement Group at NERSC. She got her start in HPC as a graduate student at the University of Illinois, where she worked as a graduate research assistant at NCSA. After earning her PhD in Computer Science, she worked at Oak Ridge National Laboratory in Tennessee and the Pawsey Supercomputing Centre in Australia before joining NERSC in 2015. Rebecca's expertise lies in the development of scalable parallel algorithms for the petascale and beyond. Brandon Cook is an HPC consultant in the Application Performance Group at NERSC. He serves as the liaison to several teams preparing for the new Cori machine architecture at NERSC, and is also active in analyzing jobs and engaging NERSC users through consulting. He earned his PhD in physics from Vanderbilt University in 2012, where he studied ab initio methods for quantum transport in nanomaterials. Before joining NERSC he was a postdoc at Oak Ridge National Laboratory, where he developed and applied electronic structure methods to problems in materials science.

Rise of the Machines

Who: Prabhat
When: June 15, 11:00 a.m. - 12:00 p.m.
Where: 59-3101-CR

Abstract: This talk will review progress in Artificial Intelligence (AI) and Deep Learning (DL) systems in recent decades. We will cover successful applications of DL in the commercial world. Closer to home, we will review NERSC’s efforts in deploying DL tools on HPC resources, and success stories across a range of scientific domains. We will touch upon the frontier of open research/production challenges and conjecture about the role of humans (vis-a-vis AI) in the future of scientific discovery.

Bio: Prabhat leads the fantastic Data and Analytics Services team at NERSC.


From Microfluidics to Exploding Stars - Fluid Simulations at Berkeley Lab

Who: Andy Nonaka
When: June 22, 11:00 a.m. - 12:00 p.m.
Where: 59-3101-CR

Abstract: I will provide an overview of fluid modeling and simulation capabilities developed at the Center for Computational Sciences and Engineering (CCSE) at Berkeley Lab. I will explain how we use mathematics to derive more computationally efficient models and algorithms for a variety of applications, and how we develop codes for high-performance computing centers such as NERSC.

Bio: Andy Nonaka is a Research Scientist in Berkeley Lab's CCSE. After studying electrical engineering as an undergraduate, he became interested in numerical modeling and simulation during graduate school, where he developed algorithms for microfluidic flow. Since joining CCSE 10 years ago he has been involved with problems in combustion, atmospherics, stochastic fluids, multiphase flow, and astrophysics, working with experts in these fields to develop state-of-the-art codes for addressing scientific concerns. He is also a faculty member in the School of Engineering and Computer Science at the University of the Pacific.


Designing and Presenting a Science Poster

Who: Jonathan Carter
When: June 29, 11:00 a.m. - 12:00 p.m.
Where: 59-3101-CR

Abstract: During the poster session on August 3rd, members of our summer visitor program will get the opportunity to showcase the work and research they have been doing this summer. Perhaps some of you have presented posters before, perhaps not. This talk will cover the basics of poster presentation: designing an attractive format, presenting your information clearly, and deciding what to include and what not to include. Presenting a poster is different from writing a report or giving a presentation. This talk will cover the differences, and suggest ways to avoid common pitfalls and make poster sessions work more effectively for you.

Bio: Before assuming the Deputy role for CS, Dr. Carter was leader of the NERSC User Services Group (USG). He joined NERSC as a consultant in USG at the end of 1996, helping users learn to effectively use the computing systems. He became leader of USG at the end of 2005. Carter maintains an active interest in algorithms and architectures for high-end computing, and participates in benchmarking and procurement activities to deploy new systems for NERSC. In collaboration with the Future Technologies Group in CRD and the NERSC Advanced Technology Group, he has published several architecture evaluation studies and looked at what it takes to move common simulation algorithms to exotic architectures. His applications work on the Japanese Earth Simulator earned him a nomination as a Gordon Bell Prize finalist in 2005. Prior to LBNL, Dr. Carter worked at the IBM Almaden Research Center as a developer of computational chemistry methods and software, and as a researcher of chemical problems of interest to IBM. He holds a Ph.D. and B.S. in chemistry from the University of Sheffield, UK, and performed postdoctoral work at the University of British Columbia, Canada.


Big Bang, Big Data, Big Iron - High Performance Computing For Cosmic Microwave Background Data Analysis

Who: Julian Borrill
When: July 06, 11 a.m. - 12 p.m.
Where: 59-3101-CR

Abstract: The Cosmic Microwave Background (CMB) is the last echo of the Big Bang, and carries within it the imprint of the entire history of the Universe. Decoding this preposterously faint signal requires us to gather ever-increasing volumes of data and reduce them on the most powerful high performance computing (HPC) resources available to us at any epoch. In this talk I will describe the challenge of CMB analysis in an evolving HPC landscape.

Bio: Julian Borrill is the Group Leader of the Computational Cosmology Center at LBNL. His work is focused on developing and deploying the high performance computing tools needed to simulate and analyse the huge data sets being gathered by current Cosmic Microwave Background (CMB) polarization experiments, and extending these to coming generations of both experiment and supercomputer. For the last 15 years he has also managed the CMB community and Planck-specific HPC resources at the DOE's National Energy Research Scientific Computing Center.


Origami Workshop

Who: Terry Ligocki
When: July 11, 11 a.m. - 12 p.m.
Where: 59-4102-CR

Abstract: Unit origami is a form of origami where individual pieces -- "units" -- are folded and assembled into more complex, beautiful geometric shapes. The Origami Workshop is a relaxed "hands on" workshop where everyone will learn to fold an origami unit and then, in groups, will assemble them. If time permits, the instructor will demonstrate a second type of unit and a second assembly for everyone to attempt. The instructor will bring examples of completed unit origami objects. All folding paper will be supplied -- no previous folding experience is necessary. Come and have fun!

Bio: Terry has been a core developer of the Chombo library for numerically solving PDEs using adaptive mesh refinement techniques since he joined the Applied Numerical Algorithms Group (ANAG) at LBNL in 2001. He has focused on the embedded boundary aspects of the Chombo library, specifically the generation of embedded boundaries from implicitly defined volumes created using constructive solid geometry. His research interests include computational geometry, finite volume numerical algorithms, and transforming complex algorithms for large scale computing. He was a member of APDEC for SciDAC and SciDAC 2 and is currently a member of FASTMath for SciDAC 3.

Using Machine Learning in Networks

Who: Mariam Kiran
When: July 13, 11 a.m. - 12 p.m.
Where: 59-3101-CR

Abstract: Machine learning is bringing innovative solutions to self-driving cars, drones, and more. In efforts led largely by industry, products such as Amazon's Alexa and Siri use artificial intelligence to tailor their behavior to the needs of users. Wide area networks, on the other hand, are very complex systems involving many actors, users, and instruments. We have been exploring what machine learning is and how its algorithms can be applied to wide area networks.

Bio: Mariam’s research focuses on learning and decentralized optimization of system architectures and algorithms for high performance computing, underlying networks, and Cloud infrastructures. She has been exploring various platforms such as HPC grids, GPUs, Cloud, and SDN-related technologies. Her work involves optimizing QoS and performance using parallelization algorithms and software engineering principles to solve complex data-intensive problems such as large-scale simulations. Over the years, she has worked with biologists, economists, and social scientists, building tools and optimizing architectures for problems in their domains.


ALS Tour

Who: Michael Banda
When: July 19, 3 p.m. - 4 p.m.
Where: Advanced Light Source (ALS) Facility, Building 6

Abstract: You are invited to tour the internationally renowned Advanced Light Source (ALS), a third-generation synchrotron and national user facility that hosts experiments in a wide variety of fields, attracting scientists from around the world. You need to sign up for the tour by 12:00 PM, Monday, July 17, 2017, to allow the ALS staff to best accommodate the tour.

Quantum Computation for Chemistry and Materials

Who: Jarrod McClean
When: July 20, 11 a.m. - 12 p.m.
Where: 59-3101-CR

Abstract: Quantum computers promise to dramatically advance our understanding of new materials and novel chemistry. Unfortunately, many proposed algorithms have resource requirements not yet suitable for near-term quantum devices. In this talk I will focus on the application of quantum computers to hard problems in the application area of chemistry and materials and discuss the challenges and opportunities related to current algorithms. One particular method of interest to overcome quantum resource requirements is the variational quantum eigensolver (VQE), a recently proposed hybrid quantum-classical method for solving eigenvalue problems and more generic optimizations on a quantum device leveraging classical resources to minimize coherence time requirements. I will briefly review the original VQE approach and introduce a simple extension requiring no additional coherence time to approximate excited states. Moreover, we show that this approach offers a new avenue towards mitigation of decoherence in quantum simulation requiring no additional coherence time beyond the original algorithm and utilizing no formal error correction codes.
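
The core idea of a variational eigensolver as described above -- a classical optimizer tuning a parameterized state to minimize the energy expectation value -- can be illustrated with a small classical sketch. This is not the algorithm from the talk, just a toy numerical analogue: the two-by-two Hamiltonian and the one-parameter ansatz below are made up for illustration, and the "quantum" expectation value is computed directly with NumPy rather than on a device.

```python
import numpy as np

# Hypothetical 2x2 Hamiltonian, chosen only for illustration (real symmetric)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi|H|psi> for the one-parameter ansatz
    |psi(theta)> = [cos(theta), sin(theta)] (on hardware, this would be
    a measurement on a short parameterized circuit)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# The "classical optimizer" step, reduced to a simple parameter scan
thetas = np.linspace(0.0, np.pi, 2001)
vqe_energy = min(energy(t) for t in thetas)

# Exact ground-state energy for comparison
exact = np.linalg.eigvalsh(H)[0]
print(round(vqe_energy, 4), round(exact, 4))  # → -1.118 -1.118
```

Because the ansatz here spans all real unit vectors in two dimensions, the scan recovers the exact ground-state energy; on a real device the interesting questions are exactly those the talk raises -- ansatz expressiveness, coherence time, and noise.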

Bio: Jarrod McClean is a current Luis W. Alvarez fellow in computing sciences at Lawrence Berkeley National Laboratory studying chemistry and materials simulation on quantum devices. He received his PhD in Chemical Physics from Harvard University, specializing in quantum chemistry and quantum computation, supported by the US Department of Energy as a Computational Science Graduate Fellow. His research interests broadly include quantum computation, quantum chemistry, numerical methods, and information sparsity.


Understanding Scalable Realtime Collaborative Workflows

Who: Hari Krishnan
When: July 27, 11 a.m. - 12 p.m.
Where: 59-3101-CR

Abstract: Large DOE research projects often place complex and stringent constraints on resources, whether human or computational. Handling data acquired from experimental, observational, or simulated sources and running analysis on heterogeneous architectures requires a comprehensive understanding of the analysis and data movement process. This talk will highlight examples from two scientific endeavors within Department of Energy (DOE) research programs: Environmental Management and Advanced Light Sources. These two domains highlight the diverse challenges science communities face, which require very different engineering frameworks to address effectively. The presentation will cover solutions to several technical challenges encountered while understanding and developing end-to-end pipelines, including issues with data movement, types of analysis, reconstruction, visualization, and data access, while staying focused on providing usable scientific results.

Bio: Hari Krishnan has a Ph.D. in Computer Science and works for the Visualization and Graphics Group as a computer systems engineer at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization on HPC platforms and many-core architectures. He leads the development effort on several HPC-related projects, which include research on new visualization methods, optimizing scaling and performance on NERSC machines, working on data-model-optimized I/O libraries, and enabling remote workflow services. As a member of the Center for Advanced Mathematics for Energy Research Applications (CAMERA), he supports development of the software infrastructure and works on accelerating image analysis algorithms and reconstruction techniques. He is also an active developer of several major open source projects, including VisIt, NiCE, and H5hut, and has developed plugins that support performance-centric, scalable image filtering and analysis routines in Fiji/ImageJ.