
Computing Sciences Summer Program: 2021 Talks & Events

Summer Program Kickoff


Who: David Brown
When: June 1, 1-2 p.m.
Where: Zoom (see calendar entry)

Bio: David Brown has been the Director of the Computational Research Division at Berkeley Lab since August 2011. His career with the U.S. Department of Energy National Laboratories includes fourteen years at Los Alamos National Laboratory (LANL) and thirteen years at Lawrence Livermore National Laboratory (LLNL), where he was the technical lead of several major research projects and held a number of line and program management positions. Dr. Brown's research expertise and interests lie in the development and analysis of algorithms for the solution of partial differential equations. He is particularly enthusiastic about promoting opportunities in computational science for young scientists from diverse backgrounds and is a founding member of the steering committee for the DOE Computational Science Graduate Fellowship Program. More recently, in collaboration with colleagues at Berkeley Lab and the Sustainable Horizons Institute, he helped create and promote the Sustainable Research Pathways program, which brings faculty and students from diverse backgrounds to Berkeley Lab for summer research experiences.

SLIDES
RECORDING


NERSC: Scientific Discovery through Computation

Who: Rebecca Hartman-Baker
When: June 3, 11 a.m. - 12 p.m.
Where: Zoom (see calendar entry)

Abstract: What is High-Performance Computing (and storage!), or HPC? Who uses it and why? We'll talk about these questions as well as what makes a Supercomputer so super and what's so big about scientific Big Data. Finally, we'll discuss the challenges facing system designers and application scientists as we move into the exascale era of HPC.

Bio: Rebecca Hartman-Baker leads the User Engagement Group at NERSC, where she is responsible for engagement with the NERSC user community to increase user productivity via advocacy, support, training, and the provisioning of usable computing environments. She began her career at Oak Ridge National Laboratory, where she worked as a postdoc and then as a scientific computing liaison in the Oak Ridge Leadership Computing Facility. Before joining NERSC in 2015, she worked at the Pawsey Supercomputing Centre in Australia, where she coached two teams to the Student Cluster Competition at the annual Supercomputing conference, led the HPC training program for a time, and was in charge of the decision-making process for determining the architecture of the petascale supercomputer installed there in 2014. Rebecca earned a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.

SLIDES
RECORDING


Introduction to NERSC Resources


Who: Yun (Helen) He
When: June 3, 1-3 p.m.
Where: Zoom (see calendar entry)

Abstract: This class will provide an informative overview to acquaint students with the basics of NERSC computational systems and their programming environments. Topics include a systems overview, connecting to NERSC, the software environment, file systems and data management/transfer, and available data analytics software and services. More details on how to compile applications and run jobs on NERSC Cori/Edison will be presented, including hands-on exercises. The class will also showcase various online resources that are available on the NERSC web pages.
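
For orientation, jobs on a system like Cori go through the Slurm batch scheduler. The sketch below is a hypothetical minimal batch script, not taken from the class materials: the queue, node count, node type, rank count, and executable name are all placeholders, and the class covers the real options in detail.

    #!/bin/bash
    #SBATCH --qos=debug              # short test queue
    #SBATCH --nodes=2                # number of compute nodes
    #SBATCH --time=00:10:00          # wall-clock limit (10 minutes)
    #SBATCH --constraint=knl         # request Cori KNL nodes (haswell is the other node type)

    # Launch 64 MPI ranks per node (128 total) of a placeholder executable.
    srun -n 128 ./my_application.x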

Bio: Helen is a High Performance Computing consultant in the User Engagement Group at NERSC. For the past 10 years she has been the main point of contact among users, systems staff, and vendors for the Cray XT4 (Franklin), XE6 (Hopper), and XC40 (Cori) systems at NERSC. Helen has investigated how large-scale scientific applications can be run effectively and efficiently on massively parallel supercomputers, designing parallel algorithms and developing and implementing computing technologies for science applications. She provides support for climate users, and her experience includes software programming environments, parallel programming paradigms such as MPI and OpenMP, porting and benchmarking scientific applications, distributed component coupling libraries, and climate models.

SLIDES
RECORDING


Designing and Presenting a Science Poster


Who: Jonathan Carter
When: June 08, 2-3 p.m.
Where: Zoom (see calendar entry)

Abstract: During the poster session on August 3, members of our summer visitor program will get the opportunity to showcase the work and research they have been doing this summer. Perhaps some of you have presented posters before, perhaps not. This talk will cover the basics of poster presentation: designing an attractive format; how to present your information clearly; what to include and what not to include. Presenting a poster is different from writing a report or giving a presentation. This talk will cover the differences, suggest ways to avoid common pitfalls, and make poster sessions work more effectively for you.

Bio: Jonathan Carter is the Associate Laboratory Director for Computing Sciences at Lawrence Berkeley National Laboratory (Berkeley Lab). The Computing Sciences Area at Berkeley Lab encompasses the National Energy Research Scientific Computing Division (NERSC), the Scientific Networking Division (home to the Energy Sciences Network, ESnet), and the Computational Research Division.

SLIDES
RECORDING


Parallelism in Deep Neural Network Training, with an emphasis on Graph Neural Networks


Who: Aydın Buluç
When: June 10, 11 a.m.-12 p.m.
Where: Zoom (see calendar entry)

Abstract:  I will cover basic sources of parallelism in deep neural network training, such as model, data, and pipeline parallelism. I will then focus on parallel computing for various forms of graph neural networks, which are emerging types of neural networks for graph-structured data.
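
To make the data-parallel idea concrete, here is a hypothetical minimal sketch, not taken from the talk: a toy linear least-squares "model" with made-up data, in which each MPI rank trains on its own data shard and gradients are averaged with an allreduce before every update.

    from mpi4py import MPI
    import numpy as np

    # Illustrative data parallelism: every rank holds its own shard of the training
    # data, computes a local gradient, and gradients are averaged across ranks before
    # each update. Run with e.g. `mpirun -n 4 python data_parallel.py`.
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    rng = np.random.default_rng(seed=rank)            # a different data shard per rank
    X = rng.standard_normal((256, 10))
    y = X @ np.arange(10.0) + 0.1 * rng.standard_normal(256)

    w = np.zeros(10)
    for step in range(300):
        grad = X.T @ (X @ w - y) / len(y)             # local gradient on this rank's shard
        global_grad = np.empty_like(grad)
        comm.Allreduce(grad, global_grad, op=MPI.SUM) # sum gradients across all ranks
        w -= 0.05 * (global_grad / size)              # average, then take a gradient step

    if rank == 0:
        print("learned weights (should approach 0..9):", np.round(w, 2))

Model and pipeline parallelism instead split the network itself (by layers or by stages) across workers; the talk covers how these strategies differ and when each pays off.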

Bio:  Aydın Buluç is a Staff Scientist at the Lawrence Berkeley National Laboratory (LBNL) and an Adjunct Assistant Professor of EECS at UC Berkeley. His research interests include parallel computing, combinatorial scientific computing, high performance graph analysis and machine learning, sparse matrix computations, and computational biology. Previously, he was a Luis W. Alvarez postdoctoral fellow at LBNL and a visiting scientist at the Simons Institute for the Theory of Computing. He received his PhD in Computer Science from the University of California, Santa Barbara in 2010. Dr. Buluç is a recipient of the DOE Early Career Award in 2013 and the IEEE TCSC Award for Excellence for Early Career Researchers in 2015.

SLIDES
RECORDING


Crash Course on High-Performance Computing


Who: Rebecca Hartman-Baker
When: June 11, 10 am - 11 am & 1 pm - 3 pm
Where: Zoom (see calendar entry)

Abstract: In this two-part course, students will learn to write parallel programs that can be run on a supercomputer. We begin by discussing the concepts of parallelization before introducing MPI and OpenMP, the two leading parallel programming models. Finally, the students will put together all the concepts from the class by programming, compiling, and running a parallel code on one of the NERSC supercomputers. Attending both the AM and PM sessions is recommended.
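
As a preview of the kind of program written in the hands-on portion, here is a minimal illustrative MPI example using the Python mpi4py bindings; the course's own exercises may use a different language, but the rank, communicator, and collective concepts carry over directly.

    from mpi4py import MPI

    # A minimal "hello, parallel world": each rank reports its identity, then a
    # collective reduction combines one value from every rank onto rank 0.
    # Launch with, e.g., `srun -n 4 python hello_mpi.py` inside a batch job.
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    print(f"Hello from rank {rank} of {size}")

    # Every rank contributes its rank number; rank 0 receives the sum.
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Sum of ranks 0..{size - 1} = {total}")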

Bio: Rebecca Hartman-Baker leads the User Engagement Group at NERSC, where she is responsible for engagement with the NERSC user community to increase user productivity via advocacy, support, training, and the provisioning of usable computing environments. She began her career at Oak Ridge National Laboratory, where she worked as a postdoc and then as a scientific computing liaison in the Oak Ridge Leadership Computing Facility. Before joining NERSC in 2015, she worked at the Pawsey Supercomputing Centre in Australia, where she coached two teams to the Student Cluster Competition at the annual Supercomputing conference, led the HPC training program for a time, and was in charge of the decision-making process for determining the architecture of the petascale supercomputer installed there in 2014. Rebecca earned a PhD in Computer Science from the University of Illinois at Urbana-Champaign.

SLIDES
RECORDING


The Quantum Fourier Transform Revisited


Who: Roel Van Beeumen
When: June 15, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: In this talk, we review the quantum Fourier transform (QFT) and derive it from an alternative vantage point that only requires prior knowledge of basic matrix analysis. The quantum Fourier transform is used as a part of many of the most successful quantum algorithms proposed to date, including Shor’s integer factoring algorithm. Originally introduced by Coppersmith in 1994, the quantum Fourier transform is most often analyzed by tracing the effect of the transformation on the computational basis states. The proposed alternative approach considers the matrix representation of the discrete Fourier transform and only uses standard matrix operations to decompose the discrete Fourier transform matrix as a product of matrices with Kronecker product structure that can be interpreted as the quantum Fourier transform. The corresponding quantum circuit directly follows as a diagrammatic representation of this matrix decomposition.
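
As a concrete anchor for the matrix viewpoint, one standard radix-2 splitting of the unitary DFT matrix (written here for illustration, not reproduced from the talk) is

    F_{2n} \;=\; \frac{1}{\sqrt{2}}
    \begin{pmatrix} I_n & \Omega_n \\ I_n & -\Omega_n \end{pmatrix}
    \left( I_2 \otimes F_n \right) \Pi_{2n},
    \qquad
    \Omega_n = \mathrm{diag}\!\left(1, \omega, \ldots, \omega^{n-1}\right),
    \quad \omega = e^{2\pi i/(2n)},

where \Pi_{2n} is the permutation that sorts even-indexed entries of the input ahead of odd-indexed ones. Because the leftmost factor (including the 1/\sqrt{2}) equals (H \otimes I_n)\,\mathrm{diag}(I_n, \Omega_n), with H the 2x2 Hadamard matrix, applying the splitting recursively expresses F_{2^k} as a product of Kronecker-structured factors built from Hadamard and controlled-phase operations, which is exactly the circuit structure the abstract refers to.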

Bio: Roel Van Beeumen is a Research Scientist in the Computational Research Division at Berkeley Lab. His research interests range from numerical linear algebra and software for solving large-scale and high-dimensional eigenvalue problems to quantum computing and quantum circuit synthesis. He earned his PhD in Engineering Science: Computer Science (2015) at KU Leuven in Belgium, from which he also holds Master's degrees in Mathematical Engineering (2010) and in Archaeology (2011). Roel Van Beeumen is a recipient of the 2019 LDRD Early Career Award.

SLIDES
RECORDING


Machine Learning and the Future of Particle Physics


Who: Daniel Murnane
When: June 17, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: The discovery of the Higgs boson at the Large Hadron Collider was a colossal effort that brought together many advanced experimental, theoretical, and computational techniques. We are opening a door to a new generation of particle physics experiments that will try to answer questions about dark matter, dark energy, and why the universe seems to be so perfectly tuned for complex structures, like life. Many Higgs-era techniques may not be powerful enough for these new energy/intensity scales, and so we turn to machine learning (ML) techniques to help answer these questions. It turns out that not only does ML boost our ability to discover new physics, it also challenges us to think about physics problems in new ways.

Bio: Daniel Murnane, Ph.D. is a Postdoctoral Researcher in the Computational Research Division at Lawrence Berkeley National Laboratory (LBNL). He obtained his Ph.D. studying the fine-tuning problem of theories beyond the Standard Model of particle physics. His current work seeks to understand how machine learning (ML) techniques can be used to reconstruct particle collisions at future colliders, like the High Luminosity LHC. He is also interested in exploring novel ML architectures for science in general and approaches for making these available on low-energy and low-latency accelerator hardware.

SLIDES
RECORDING


Challenges in Building Quantum Computers


Who: Anastasiia Butko
When: June 22, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: Building a quantum computer that can outperform classical supercomputers is an extremely challenging task. It requires addressing research questions ranging from the materials science and electronics needed to create reliable qubit devices to the classical architectures and software tools needed to support and control a fundamentally different programming paradigm. A team of researchers from different fields has united at LBNL to address these questions and challenges and bring quantum computing into reality.

Bio: Anastasiia Butko, Ph.D., is a Research Scientist in the Computational Research Division at Lawrence Berkeley National Laboratory (LBNL), CA. Her research interests lie in the general area of computer architecture, with particular emphasis on high-performance computing, emerging and heterogeneous technologies, associated programming models, and architectural simulation techniques. Her primary research projects address architectural challenges in adopting novel technologies to provide continued performance scaling in the approaching post-Moore’s Law era. Dr. Butko is a chief architect of the custom control hardware stack for the Advanced Quantum Testbed at LBNL.

SLIDES
RECORDING


Modeling Antarctic Ice with Adaptive Mesh Refinement


Who: Dan Martin
When: June 24, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: The response of the Antarctic Ice Sheet (AIS) remains the largest uncertainty in projections of sea level rise. The AIS (particularly in West Antarctica) is believed to be vulnerable to collapse driven by warm-water incursion under ice shelves, which causes a loss of buttressing, subsequent grounding-line retreat, and large (potentially up to 4 m) contributions to sea level rise. Understanding the response of the Earth's ice sheets to forcing from a changing climate has required a new generation of ice sheet models that are much more accurate, scalable, and sophisticated than their predecessors. For example, very fine (finer than 1 km) spatial resolution is needed to resolve ice dynamics around shear margins and grounding lines (the point at which grounded ice begins to float). The LBL-developed BISICLES ice sheet model uses adaptive mesh refinement (AMR) to enable sufficiently resolved modeling of the full-continent Antarctic ice sheet response to climate forcing. This talk will discuss recent progress and challenges in modeling the sometimes-dramatic response of the ice sheet to climate forcing using AMR.

Bio: Dan Martin is a computational scientist and group leader for the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory. After earning his PhD in mechanical engineering from U.C. Berkeley, Dan joined ANAG and LBL as a post-doc in 1998. He has published in a broad range of application areas including projection methods for incompressible flow, adaptive methods for MHD, phase-field dynamics in materials, and ice sheet modeling. His research involves the development of algorithms and software for solving systems of PDEs using adaptive mesh refinement (AMR) finite volume schemes, high (4th)-order finite volume schemes for conservation laws on mapped meshes, and Chombo development and support. Current applications of interest are developing the BISICLES AMR ice sheet model as a part of the SciDAC-funded ProSPect application partnership, along with development work related to the COGENT gyrokinetic modeling code, which is being developed in partnership with Lawrence Livermore National Laboratory as a part of the Edge Simulation Laboratory (ESL) collaboration.

SLIDES
RECORDING


Neural Networks with Euclidean Symmetry for Physical Sciences


Who: Tess Smidt
When: June 29, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: To use machine learning to understand and engineer complex physical systems (e.g., materials for energy and computation, and molecules and proteins for medicines), we need methods built to handle the “data types” of physical systems: 3D geometry and geometric tensors. These are traditionally challenging data types to use for machine learning because coordinates and coordinate systems are sensitive to the symmetries of 3D space: 3D rotations, translations, and inversion. In this talk, I present Euclidean neural networks, which naturally handle these data types. These networks eliminate the need for data augmentation (the 500-fold increase in brute-force training necessary for a model to learn 3D patterns in arbitrary orientations). They are extremely data-efficient: they result in more accurate models and require less training data to do so. With these networks, we are able to scale expensive quantum mechanical computer simulations to unprecedented system sizes and invent algorithms that can guide the design of atomic systems. I'll demonstrate some unique properties of Euclidean neural networks and show recent applications to increasing the accuracy and speed of molecular dynamics, predicting vibrational properties of crystals, and beyond.
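
To illustrate the coordinate-sensitivity issue with a hypothetical toy example (not from the talk): rotating a 3D point cloud changes its raw coordinates while leaving rotation-invariant descriptors such as pairwise distances untouched, which is precisely the structure that symmetry-aware networks are built to exploit.

    import numpy as np

    # A toy 3D point cloud and a rotation about the z-axis (angle chosen arbitrarily).
    points = np.random.default_rng(0).standard_normal((5, 3))
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    rotated = points @ R.T

    def pairwise_distances(p):
        # Matrix of Euclidean distances between all pairs of points.
        return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)

    print(np.allclose(points, rotated))    # False: raw coordinates change under rotation
    print(np.allclose(pairwise_distances(points),
                      pairwise_distances(rotated)))   # True: distances are rotation-invariant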

Bio: Tess Smidt is the 2018 Alvarez Postdoctoral Fellow in Computing Sciences. Her current research interests include intelligent computational materials discovery and deep learning for atomic systems. She is currently designing algorithms that can propose new hypothetical atomic structures. Tess earned her Ph.D. in physics from UC Berkeley in 2018 working with Professor Jeffrey B. Neaton. As a graduate student, she used quantum mechanical calculations to understand and systematically design the geometry and corresponding electronic properties of atomic systems. During her Ph.D., Tess spent a year as an intern on Google’s Accelerated Science Team where she developed a new type of convolutional neural network, called Tensor Field Networks, that can naturally handle 3D geometry and properties of physical systems. As an undergraduate at MIT, Tess engineered giant neutrino detectors in Professor Janet Conrad's group and created a permanent science-art installation on MIT's campus called the Cosmic Ray Chandeliers, which illuminate upon detecting cosmic-ray muons.

SLIDES
RECORDING


Meet & Greet a Scientist: Postdocs


Who: Bashir Mohammed, Jackie Yao, Lisa Claus
When: June 30, 10 - 11 a.m.
Where: Zoom (see calendar entry)

Bashir Mohammed: Bashir Mohammed joined the Scientific Data Management (SDM) Group as a postdoctoral research scholar in Mariam Kiran and John Wu's group. His current research focuses on developing A.I. and machine-learning algorithms to optimally control high-speed distributed network resources, minimize network downtime, and avoid network traffic congestion for critical exascale scientific workflows.

Bashir received his M.S. in Control System Engineering from the University of Sheffield and a Ph.D. in Computer Science from the University of Bradford in the U.K. before moving to the United States in 2019 to start his postdoctoral work. He was an intern with Rolls-Royce in Sheffield, where he developed a control and optimization algorithm to model gas turbine engines. In 2020, he was endorsed by the U.K. government as a world-leading exceptional talent in digital technology. He has written several peer-reviewed articles on his research. Outside of the lab, Bashir enjoys playing basketball while also pursuing his love of acting on stage in local theatres.

Jackie Yao: Zhi (Jackie) Yao is currently working in the Center for Computational Sciences and Engineering (CCSE) at CRD, LBL. She obtained her Ph.D. in 2017 and her M.S. in 2014, both in the ECE Department at UCLA. Her current research interest is in developing high-performance code leveraging the ECP products AMReX and WarpX to explore quantum information science and microelectronics physics. She has received multiple academic honors, such as first place for Best Student Paper at the 2017 International Microwave Symposium, a 2017 IEEE Antennas and Propagation Society Doctoral Research Grant, and a 2015 Qualcomm Innovation Fellowship. She intends to apply her interdisciplinary training to investigate the fundamentals of wave-material interactions and how such insights can inspire new electronic applications.

Lisa Claus: Lisa Claus is a mathematician who currently works as a Postdoctoral Scholar in the Scalable Solvers Group of the Computational Research Division. Her research focuses on the development of high-performance computing software to accelerate the simulation of various applications from climate models to electromagnetism. She is currently working on the STRUMPACK software library which offers scalable sparse direct solvers for large sparse linear systems supported by the Exascale Computing Project. Before joining LBL, Lisa completed a PhD in numerical mathematics at the University of Wuppertal in Germany in 2019.

SLIDES: Bashir Mohammed, Lisa Claus, & Zhi (Jackie) Yao
RECORDING


Enabling Scalable LCLS Covid-19 Analysis on NERSC Resources


Who: Anna Giannakou
When: July 1, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: Today, scientific applications run different data analytics pipelines that oftentimes include compute tasks not designed to run at scale on HPC resources. In this talk we will present our work on detecting and analyzing compute and I/O performance bottlenecks for a crystallography workflow used to analyze Covid-19 samples collected from an LCLS beamline in July. Furthermore, we will present a set of core workflow improvements that enabled scalable Covid-19 analytics on NERSC resources.

Bio: Anna Giannakou is a research scientist in the IDF group working on self-adaptable workflows and network analytics. Anna received her Ph.D. from the Institut National des Sciences Appliquées at Inria Rennes in July 2017. Working in Dr. Christine Morin's research group, she focused on self-adaptable security monitoring for cloud environments. Anna holds a Master's in Information Security from the University of Luxembourg and a Bachelor's in Computer Science from the University of Athens.

SLIDES
RECORDING


FAIR Principles and Neurophysiology:  The Neurodata Without Borders Ecosystem for Neurophysiological Data Science


Who: Oliver Rübel
When: July 6, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: One of the greatest questions in science today is understanding how the brain works and gives rise to thoughts, memories, perception, and consciousness. To address this challenge, neurophysiologists within the NIH BRAIN initiative and around the world perform diverse experiments that measure electrophysiologically and optically the activity of neurons from different parts of the brain in diverse species (from flies to humans) and relate that activity to sensation and behavior. These experiments generate large, complex, multi-modal datasets at terabyte scale, which continue to grow in both size and complexity under global investments in neuroscience research, such as the U.S. BRAIN Initiative and the E.U. Human Brain Project. Understanding the brain requires integration of data across this diversity, and thus these data must be findable, accessible, interoperable, and reusable (FAIR). This requires a standard for data and metadata that can coevolve with neuroscience. In this seminar, I will describe design and implementation principles for Neurodata Without Borders (NWB), a data standard and ecosystem for neurophysiological data science. I will identify and discuss the key interdependent, yet separable, components of the data standardization process and demonstrate their implementation and role in NWB. More broadly, the design principles of NWB are generally applicable to enhance discovery across biology through data FAIRness.
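
To give a feel for what using the standard looks like in practice, below is a minimal sketch based on the PyNWB reference API; the metadata, fake data, and file name are purely illustrative, and the exact call signatures should be checked against the PyNWB documentation.

    from datetime import datetime, timezone
    import numpy as np
    from pynwb import NWBFile, NWBHDF5IO, TimeSeries

    # Create an NWB file object with the required session metadata (values made up).
    nwbfile = NWBFile(
        session_description="toy demo session",
        identifier="demo-0001",
        session_start_time=datetime.now(timezone.utc),
    )

    # Add a fake voltage trace as an acquisition TimeSeries sampled at 1 kHz.
    trace = TimeSeries(
        name="raw_voltage",
        data=np.random.randn(1000),
        unit="volts",
        rate=1000.0,
    )
    nwbfile.add_acquisition(trace)

    # Write the standardized file to disk in HDF5 format.
    with NWBHDF5IO("demo_session.nwb", mode="w") as io:
        io.write(nwbfile)

Because the metadata schema travels with the data, a file written this way can be opened and interpreted by any NWB-aware tool, which is the interoperability that FAIR data sharing depends on.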

Bio: Oliver Rübel is a Staff Scientist in the Machine Learning and Analytics group and the Computational Biosciences group at Lawrence Berkeley National Laboratory. He earned his Ph.D. in computer science (Dr. rer. nat.) from the University of Kaiserslautern, Germany, in 2009. His research to date has focused on high-performance data science in support of the DOE mission in computational and experimental science, with a particular focus on (a) machine learning, visualization, and analytics algorithms and methods for large-scale, high-dimensional, and multi-modal data, (b) data modeling and management for efficient data sharing and integration, and (c) data and analysis systems to enable data sharing, reuse, and analysis. Throughout his research career, Dr. Rübel has worked in close collaboration with a wide range of application sciences, including gene expression, metabolomics, particle physics, finance, neuroscience, and climate. He is the lead of the NIH BRAIN Initiative project on Neurodata Without Borders (NWB), an INCF-endorsed, leading community data standard and software ecosystem for neurophysiology. He has contributed to and led the design, implementation, and development of publicly available and well-recognized data science applications, including NWB, PointCloudXplore, WarpIV, HDMF, BASTet, and OpenMSI, among others. He has received two prestigious R&D100 Awards (in 2015 for the development of OpenMSI and in 2019 for the development of NWB) as well as an LBNL Director’s Award for Exceptional Early Scientific Career Achievement in 2016. He has contributed to more than 56 peer-reviewed publications in high-ranking journals and conferences and more than 160 other publications in the form of posters, technical reports, theses, and presentations.

SLIDES
RECORDING


Preparing for Exascale Computing: Lessons Learned from Porting Large-Scale Materials Science Codes to GPU Acceleration


Who: Mauro Del Ben
When: July 8, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract:  HPC software development is in the middle of a revolution. While HPC software developers of yesteryear could rely on advancements in strong-scaling CPUs to provide performance boosts passively, the (re-)emergence of vendor-specific architectures as the dominant HPC paradigm forces developers to actively maintain and optimize core compute kernels going forward. The paradigm of graphics processing units (GPUs) currently dominates the conversation, with major upcoming DoE supercomputers (NERSC's Perlmutter, ORNL's Frontier, Argonne's Aurora) having GPUs as a core source of FLOPs. Within the GPU paradigm, however, achieving performance portability is a non-trivial task and various strategies need to be accounted for by HPC software developers to fully exploit GPU acceleration.

In this talk, we will focus on our experiences navigating this revolution for a widely used materials science code, the BerkeleyGW software package. BerkeleyGW is an HPC software package developed at LBNL and employed to study the excited-state properties of electrons in materials via the GW method and beyond. GW calculations are the state of the art for accurately describing these properties, which are critical for the design of novel devices based on complex materials, with applications in many fields including energy storage/conversion, photovoltaics, nanoelectronics, and quantum computing. We showcase the various techniques used to achieve performance portability for BerkeleyGW on hybrid architectures, targeting the acceleration of large-scale simulations with thousands of atoms. We achieve excellent strong and weak scaling on thousands of GPUs and an order of magnitude or more reduction in time to solution compared to the CPU-only implementation. We demonstrate GW calculations at the scale of over 10,000 electrons utilizing the entire Summit system at OLCF (more than 27k GPUs), achieving over 100 PFLOP/s of double-precision performance with a time to solution on the order of minutes.

Bio: Mauro Del Ben holds a Ph.D. in Chemistry from the University of Zurich and has been a Research Scientist in the Computational Research Division at Lawrence Berkeley National Laboratory since 2018, where he is a member of the Center for Computational Study of Excited-State Phenomena in Energy Materials (C2SEPEM). His research focuses on the development of new computational and mathematical methods for first-principles simulations of ground- and excited-state phenomena in chemistry and materials, with a particular focus on high-performance computing algorithms for large-scale massively parallel applications, including hybrid accelerated architectures.

SLIDES
RECORDING


Meet & Greet: Staff Scientists & Engineers


Who: Nan Ding, Richard Cziva, & Suren Byna
When: July 9, 11 am - 12 pm
Where: Zoom (see calendar entry)

Nan Ding: Nan is a research scientist in the Performance and Algorithm group in CRD. Her research interests include high-performance computing, performance modeling, and optimizations. Nan received her Ph.D. in computer science from Tsinghua University, Beijing, China in 2018.

Richard Cziva: Richard Cziva is a Data Science Engineer in ESnet’s Planning and Architecture group. He works on research and development projects around capacity planning, large-scale flow analysis, and machine learning on network telemetry data. Prior to joining ESnet in 2018, Richard was a Research Associate at the University of Glasgow, where he worked on network function virtualization and software-defined networking research. Richard has a Ph.D. from the University of Glasgow, UK.

Suren Byna: Suren Byna is a Staff Scientist in the Scientific Data Management Group at Lawrence Berkeley National Lab (LBNL). His research interests are in scalable scientific data management. More specifically, he is interested in optimizing parallel I/O, developing data management systems for managing scientific data, and handling heterogeneity of hardware devices. He is also interested in energy-efficient parallel computing.

Before joining LBNL in November 2010, Suren was a researcher at NEC Labs America, where he was part of the Computer Systems Architecture Department (now the Integrated Systems Department) and was involved in the Heterogeneous Cluster Computing project. Prior to that, he was a Research Assistant Professor in the Department of Computer Science at the Illinois Institute of Technology (IIT) and a Guest Researcher in the Math and Computer Science division of Argonne National Laboratory, as well as a faculty member of the Scalable Computing Software Laboratory at IIT. He received his Master's and Ph.D. degrees in Computer Science from the Illinois Institute of Technology, Chicago.

SLIDES: Nan Ding, Richard Cziva, Suren Byna
RECORDING


Leveraging the Python Ecosystem to Enable Cross-Facility Science at DOE Lightsource Facilities


Who: Hari Krishnan
When: July 13, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: Enabling remote access is now an essential requirement to support operations. However, effectively operating complex instrumentation to perform experiments using remote tools, especially without direct access, is very much a challenge. This talk covers how Python and its ecosystem enables remote access for a wide range of use cases. From enabling cross-facility analytics to remote REST-based data lifecycle orchestration, the Python environment provides a flexible backend for operational services. Furthermore, a robust interpreted frontend enables extensibility from low level APIs to fully visual user interfaces. This presentation will cover a comprehensive end-to-end example of how specific Python tools have enabled remote access to experimental hardware, cross facility data analytics, visualization capabilities, and finally automation and feedback. Finally, we will showcase a prototype web-based GUI frontend written within the JupyterLab framework unifying access to all of these services.
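
As a hypothetical illustration of the REST-based orchestration pattern described above (the endpoint URL, payload fields, and pipeline name below are invented for the example, not an actual facility API), a client at one facility might trigger an analysis task at another over HTTP like this:

    import requests

    API_URL = "https://example-facility.example.org/api/v1/tasks"   # placeholder endpoint

    payload = {
        "dataset": "scan_0042",           # identifier of the newly collected data (made up)
        "pipeline": "reconstruct-demo",   # name of the analysis pipeline to run (made up)
        "site": "nersc",                  # where the compute should happen (made up)
    }

    # Submit the task request and inspect the service's response.
    resp = requests.post(API_URL, json=payload, timeout=30)
    resp.raise_for_status()
    task = resp.json()
    print("submitted task:", task.get("id"), "status:", task.get("status"))

Because the interface is plain HTTP plus JSON, the same request can be issued from beamline control software, a batch job, or a JupyterLab frontend, which is what makes this style of orchestration attractive for cross-facility workflows.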

Bio: Hari Krishnan has a Ph.D. in Computer Science and works as a computer systems engineer in the visualization and graphics group at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization on HPC platforms and many-core architectures. He leads the development effort on several HPC-related projects, which include research on new visualization methods, optimizing scaling and performance on NERSC machines, working on data-model-optimized I/O libraries, and enabling remote workflow services. As the software architect of the Center for Advanced Mathematics for Energy Research Applications (CAMERA), he supports the development of the software infrastructure and works on accelerating image analysis algorithms and reconstruction techniques.

SLIDES
RECORDING


Virtual Tour of the ALS


Who: Ina Reichel
When: July 14, 3 pm - 4 pm
Where: Zoom (see calendar entry)

Abstract: The Advanced Light Source is a U.S. Department of Energy scientific user facility at Lawrence Berkeley National Laboratory. Our mission is to advance science for the benefit of society by providing our world-class synchrotron light source capabilities and expertise to a broad scientific community.

SLIDES
RECORDING


Supercomputing For Nuclear Astrophysics


Who: Donald Willcox
When: July 15, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: Exciting phenomena at the forefront of astrophysics like supernovae, X-ray bursts, and neutron star mergers rely on intricate interactions between hydrodynamics, radiation, gravity, nuclear physics, and quantum mechanics. Purely analytic methods are unable to capture the range of physics, length scales, and high dimensionality of these events. Supercomputing resources are thus essential for understanding how these physics components work together to explain observations. I will first discuss the general astrophysics of these phenomena and how we currently understand them to work, as well as the open questions we are pursuing. Next we will delve into the computational techniques and algorithm design we use to assemble the governing physics equations and solve them efficiently. We will see how to design scalable, parallelized simulations for modern supercomputers by implementing techniques like symbolic code generation, adaptive mesh refinement, finite volume, and particle-in-cell methods. I will then discuss the science questions these methods are currently enabling us to answer along with an outlook on how the latest advances in computing can advance nuclear astrophysics in the near future.
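
As a small, self-contained taste of one of those ingredients, the sketch below implements a first-order finite-volume upwind scheme for 1D linear advection. It is an illustrative toy with made-up parameters, not code from the production astrophysics frameworks, which couple many physics modules on adaptive meshes.

    import numpy as np

    # Minimal finite-volume solver for u_t + a*u_x = 0 on a periodic unit domain,
    # using a first-order upwind flux and a CFL-limited explicit time step.
    def advect(ncell=200, a=1.0, cfl=0.5, t_final=1.0):
        dx = 1.0 / ncell
        x = (np.arange(ncell) + 0.5) * dx          # cell-center coordinates
        u = np.exp(-200.0 * (x - 0.3) ** 2)        # initial Gaussian pulse
        dt = cfl * dx / abs(a)                     # CFL-limited time step
        t = 0.0
        while t < t_final:
            step = min(dt, t_final - t)
            flux = a * u                           # upwind flux at each cell's right face (a > 0)
            u = u - (step / dx) * (flux - np.roll(flux, 1))   # conservative update
            t += step
        return x, u

    # After one period on the unit domain (a = 1, t_final = 1), the pulse returns to
    # x ~ 0.3, broadened slightly by the scheme's numerical diffusion.
    x, u = advect()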

Bio:  Don Willcox is a postdoctoral researcher in the Center for Computational Sciences and Engineering (CCSE) in the Computational Research Division at Berkeley Lab. His research in computational astrophysics includes algorithms for nuclear burning, solving PDEs on adaptive meshes, and neutrino quantum kinetics. He also develops methods for accelerating these algorithms for GPU-based supercomputers. Before joining Berkeley Lab, Don completed his PhD in Physics at Stony Brook University in August 2018 working on thermonuclear supernovae modeling and the convective Urca process in white dwarf stars.

SLIDES
RECORDING


Self-Driving Networks


Who: Mariam Kiran
When: July 20, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: This research sits at the intersection of deep (and machine) learning and computer networking, with the goal of building self-driving networks (e.g., high-speed networks such as DOE's ESnet). The main techniques investigated bring deep reinforcement learning to real-world infrastructure challenges in ways that are scalable and deployable, enabling self-driving capabilities for facilities, the edge, and end-to-end workflows.

Bio: Dr. Kiran is a research scientist in the Scientific Networking Division, housed in the Prototypes and Testbed group at ESnet, LBNL, and is leading research efforts in AI solutions for operational network research and engineering problems. She received her Ph.D. and MSc (Eng) in Computer Science from the University of Sheffield in 2011 and 2009, respectively. She was selected as one of the Royal Society's Scientists in Westminster in 2015 and is the recipient of the 2017 U.S. DOE Early Career Award. She joined ESnet at Lawrence Berkeley National Laboratory in 2016. Her work focuses on machine learning and decentralized optimization of distributed computing system architectures, wide-area networks, wireless, and cloud infrastructures, with an emphasis on deep learning and reinforcement learning algorithms for designing and building end-to-end self-driving networks. She is also one of the developers of FLAME, an open-source agent-based framework used worldwide for multidisciplinary research.

Before joining the lab, Kiran worked at the Universities of Sheffield and Leeds and at University College London, collaborating with European companies such as SAP, ATOS, and BT. She is a member of ACM and a Senior Member of IEEE.

SLIDES
RECORDING


Computational Electrodynamics: From Pulsar Magnetospheres to Microelectronics


Who: Revathi Jambunathan
When: July 22, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: Pulsars are rapidly rotating, highly-magnetized neutron stars that were discovered more than half a century ago. Yet, we do not understand the fundamental processes driving their electromagnetic radiation. Particle-In-Cell plasma simulations provide an accurate computational lens for uncovering the fundamental mechanisms that drive the large-scale dynamics observed in the pulsar magnetospheres. However, it is challenging to resolve the disparate length-scales that span the magnetosphere while also capturing the evolution of the current sheet over long temporal scales. In this talk, I will discuss our work on extending WarpX, an electromagnetic Particle-In-Cell solver to address some of these challenges.

Bio: Revathi Jambunathan is a postdoctoral researcher at the Center for Computational Sciences and Engineering at Lawrence Berkeley National Laboratory. She graduated with a PhD in aerospace engineering at the University of Illinois, Urbana-Champaign. Revathi is one of the core-developers of the particle-in-cell WarpX code and she is currently extending WarpX for modeling astrophysical plasma and microelectronics applications.

SLIDES
RECORDING


Meet & Greet: Sr. Staff/Group Leaders


Who: Damian Rouson, Tina Declerk, Michael Wehner
When: July 23, 11 am - 12 pm
Where: Zoom (see calendar entry)

Damian Rouson Bio: Damian Rouson is the Group Lead for the Computer Languages and Systems Software (CLaSS) Group at Berkeley Lab. He is a mechanical engineer with experience in modeling classical, quantum, and magnetohydrodynamic turbulence and multiphase flow. He leads the development of the OpenCoarrays parallel runtime library and the Morfeus partial differential equation solver framework. His research at Berkeley Lab explores the use of machine learning to accelerate predictions of climate change's regional impacts using Fortran 2018 and UPC++.

He co-authored the textbook Scientific Software Design: The Object-Oriented Way (Cambridge University Press, 2011) and has taught related university courses and tutorials on Fortran 2018 and agile software development. He is an alternate member of the Fortran standards committee. He has held academic staff and faculty positions at the City University of New York, the University of Maryland, the University of Cyprus, the University of Bergen, and Stanford University. He has held technical staff and leadership positions at the U.S. Naval Research Laboratory and Sandia National Laboratories. He received a 2003-'04 NASA Summer Faculty Fellowship and a 2020-'21 Department of Energy Better Scientific Software Fellowship. He has been a (co-)principal investigator on research grants and research software engineering contracts funded by the National Institute of Standards and Technology, the National Science Foundation, the Office of Naval Research, the U.S. Nuclear Regulatory Commission and the National Aeronautics and Space Administration.

He founded Sourcery, Inc., a research software engineering consultancy focused on modern Fortran, including modernizing legacy Fortran. Sourcery has worked on software projects in domains ranging from particle-beam physics and nuclear energy to weather and climate science. He also founded Sourcery Institute, a California public-benefit nonprofit corporation granted 501(c)(3) tax-exempt status for research and education in computational science and engineering. Sourcery Institute offers training courses, publishes open-source course modules, and funds a Ph.D. fellowship at Cranfield University. He holds a B.S. from Howard University and an M.S. and Ph.D. from Stanford University, all in mechanical engineering. He is also a licensed Professional Engineer (P.E.) in the State of California.

Tina Declerk Bio: I started work at the National Energy Research Scientific Computing Center (NERSC) in 1997 as a systems analyst in the systems group. I forayed into the startup world from 2000 to 2006, focused primarily on storage startups. In 2007 I returned to the systems group at NERSC, where I was a system analyst on multiple computers ranked in the top 10 of the Top500 list of the world's supercomputers. I am currently the Systems Department Head, responsible for the Computational Systems, Security and Network, Operations, and Building Infrastructure groups.

Michael Wehner Bio: Dr. Wehner’s current research concerns the behavior of extreme weather events in a changing climate, especially heat waves, intense precipitation, drought, and tropical cyclones. Before joining Berkeley Lab in 2002, Wehner was an analyst at Lawrence Livermore National Laboratory in the Program for Climate Model Diagnosis and Intercomparison. He is the author or co-author of over 200 scientific papers and reports. He was a lead author for the 2013 Fifth Assessment Report of the Intergovernmental Panel on Climate Change and for the 2nd, 3rd, and 4th U.S. National Climate Assessments, and he is currently a lead author on the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Dr. Wehner earned his master’s degree and Ph.D. in nuclear engineering from the University of Wisconsin-Madison and his bachelor’s degree in physics from the University of Delaware.

RECORDING


Scientific Applications in the NESAP Program


Who: Muaaz Awan, Jaideep Pathak, Neil Mehta, Raphaël Prat
When: July 27, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: The NERSC Exascale Science Applications Program (NESAP) started in 2014 and has been forming collaborations with application development teams and vendors to prepare scientific applications for current and upcoming architectures. These collaborations involve NERSC staff taking deep dives into the application codes and engineering them to give optimal performance on NERSC and other systems. In this talk, NERSC engineers will give an overview of their collaborations with four science teams and provide insight into the efforts involved in adapting those applications to take advantage of the next generation of supercomputing architectures. The talk will cover optimization and development efforts for applications in metagenome analysis, computational fluid dynamics, machine-learning-assisted molecular dynamics, and climate science.

Muaaz Awan Bio: Muaaz Awan is an application performance specialist at NERSC; his expertise includes bioinformatics software development, GPU porting, optimization, and performance analysis. He is currently associated with the ExaBiome project, where he contributes as a GPU application developer to the metagenomics analysis software pipelines. Previously, he worked as a postdoctoral scholar at NERSC, LBNL (Lawrence Berkeley National Lab) and as a GPU application developer at EMSL, PNNL (Pacific Northwest National Lab). His doctoral thesis explored high-performance computing strategies for LC-MS/MS-based proteomics workflows.

Jaideep Pathak Bio: Jaideep Pathak is a NESAP postdoctoral researcher at NERSC, LBL. He works on Machine Learning (ML) methods for augmenting numerical Partial Differential Equation solvers with applications in computational fluid dynamics and climate science. Jaideep graduated with a Ph.D. in Physics from the University of Maryland, College Park where he worked on applying novel ML techniques to problems in chaotic nonlinear dynamical systems.

Neil Mehta Bio: Neil Mehta is a NESAP postdoctoral researcher at NERSC, LBL, working on improving the performance of particle-based method codes. He works on the ECP Exaalt project with a focus on the SNAP potential. His primary work is developing a data-sharing interface between C++ and Python using pybind11. Neil earned his Ph.D. in Aerospace Engineering from the University of Illinois at Urbana-Champaign in 2020. The subject of his Ph.D. thesis was developing Monte Carlo methods for surface ablation and a long-range Coulomb interaction model for ionic liquid simulations.

Raphaël Prat Bio: Raphaël Prat is a NESAP postdoctoral researcher at NERSC, LBNL, working on the GPU optimization of Chombo 4 (a CFD simulation framework) and Proto. Proto is high-performance middleware with a user-friendly API that Chombo 4 uses on GPU architectures. Raphaël earned a Ph.D. from the CEA (Commissariat à l'énergie atomique et aux énergies alternatives) and the University of Bordeaux in October 2019 under the supervision of Laurent Colombet and Raymond Namyst. His Ph.D. focused on dynamic load balancing for future exaflop supercomputers, applied to molecular dynamics.

SLIDES
RECORDING


Behavioral Based Interviewing Workshop: Effective Interviewing Techniques


Who: William Cannan & LaTonja Wright
When: July 29, 11 am - 12 pm
Where: Zoom (see calendar entry)

Abstract: Past behavior is the best predictor of future performance! Behavioral-based interviewing is a competency-based interviewing technique in which employers evaluate a candidate's past behavior in different situations in order to predict their future performance. This technique is the new norm for academic and industry-based organizations searching for talent. This workshop will provide information and tools to help you prepare for your next interview, including an overview of the behavioral-based interview process, sample questions, and techniques on how to prepare.

William Cannan Bio: Bill Cannan is the Sr. HR Division Partner who supports Computing Sciences and IT. Bill has over 20 years of HR-related experience as a recruiter and HR generalist in both industry and national lab environments, including over 12 years at Lawrence Berkeley National Lab and 3 years at Lawrence Livermore National Lab. Bill is responsible for providing both strategic and hands-on full-cycle Human Resources support and consultation to employees and managers.

LaTonja Wright Bio: LaTonja Wright is the Staff HR Division Partner who supports Computing Sciences and IT. She has 17-plus years of experience partnering with various organizations to meet business objectives with employees and management. Her primary focus is providing consultation to managers and employees in the areas of Talent Management, Employee and Labor Relations, Compensation & Benefits, Performance Management, and Diversity, Equity & Inclusion (DEI). LaTonja is one of the founding members of the African American Employee Resource Group (AAERG), the Lab's first African American ERG. The purpose of the AAERG is to attract, retain, empower, and inspire African American employees to achieve their fullest potential across the spectrum of employment opportunities at the Lab, including scientists, engineers, technologists, and operations staff. "The act of stewardship is who I am, professionally and personally."

RECORDING