
Computing Sciences Summer Program 2022: Talks & Events

Summer Program Kickoff


Who: Deborah Agarwal
When: June 7, 1 p.m. – 2 p.m.
Where: Zoom (see calendar)

Bio: Dr. Agarwal's research focuses on scientific tools which enable sharing of scientific experiments, advanced networking infrastructure to support sharing of scientific data, data analysis support infrastructure for eco-science, and cybersecurity infrastructure to secure collaborative environments. Dr. Agarwal is a Research Affiliate at the Berkeley Institute for Data Science and an Inria International Chair, where she co-leads the DALHIS (Data Analysis on Large-scale Heterogeneous Infrastructures for Science) Inria Associate team. Dr. Agarwal also leads teams developing data server infrastructure to significantly enhance data browsing and analysis capabilities and enable eco-science synthesis at the watershed scale to understand hydrologic and conservation questions and at the global scale to understand carbon flux. Some of the projects Dr. Agarwal is working on include Environmental Systems Science Digital Infrastructure for a Virtual Ecosystem (ESS-DIVE), Watershed Function SFA, AmeriFlux Management Project, FLUXNET, NGEE Tropics, International Soil Carbon Network. Dr. Agarwal received her Ph.D. in electrical and computer engineering from the University of California, Santa Barbara, and a B.S. in Mechanical Engineering from Purdue University.

SLIDES

VIDEO


NERSC: Scientific Discovery through Computation


Who: Rebecca Hartman-Baker
When: June 9, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: What is High-Performance Computing (and storage!), or HPC? Who uses it and why? We'll talk about these questions as well as what makes a supercomputer so super and what's so big about scientific Big Data. Finally, we'll discuss the challenges facing system designers and application scientists as we move into the exascale era of HPC.

Bio: Rebecca Hartman-Baker leads the User Engagement Group at NERSC, where she is responsible for engagement with the NERSC user community to increase user productivity via advocacy, support, training, and the provisioning of usable computing environments. She began her career at Oak Ridge National Laboratory, where she worked as a postdoc and then as a scientific computing liaison in the Oak Ridge Leadership Computing Facility. Before joining NERSC in 2015, she worked at the Pawsey Supercomputing Centre in Australia, where she coached two teams to the Student Cluster Competition at the annual Supercomputing conference, led the HPC training program for a time, and was in charge of the decision-making process for determining the architecture of the petascale supercomputer installed there in 2014. Rebecca earned a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.

SLIDES

VIDEO


Introduction to NERSC Resources


Who: Yun (Helen) He
When: June 9, 1 – 3 p.m.
Where: Zoom (see calendar)

Abstract: This class will provide an informative overview to acquaint students with the basics of NERSC computational systems and their programming environment. Topics include a systems overview, connecting to NERSC, the software environment, file systems and data management/transfer, and available data analytics software and services. More details on how to compile applications and run jobs on NERSC Cori will be presented, including hands-on exercises. The class will also showcase various online resources that are available on the NERSC web pages.

Bio: Helen He is a high performance computing consultant with the User Engagement Group at NERSC. She has been the main point of contact among users, system staff, and vendors for the Cray XT4 (Franklin), XE6 (Hopper), and XC40 (Cori) systems at NERSC for the past 10 years. Helen has worked on investigating how large-scale scientific applications can be run effectively and efficiently on massively parallel supercomputers: designing parallel algorithms and developing and implementing computing technologies for science applications. She provides support for climate users, and her experience includes software programming environments, parallel programming paradigms such as MPI and OpenMP, porting and benchmarking scientific applications, distributed component coupling libraries, and climate models.

SLIDES

VIDEO


Designing and Presenting a Science Poster


Who: Jonathan Carter
When: June 14, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: During the poster session on August 2nd, members of our summer visitor program will get the opportunity to showcase the work and research they have been doing this summer. Perhaps some of you have presented posters before, perhaps not. This talk will cover the basics of poster presentation: designing an attractive format; how to present your information clearly; what to include and what not to include. Presenting a poster is different from writing a report or giving a presentation. This talk will cover the differences, and suggest ways to avoid common pitfalls and make poster sessions work more effectively for you.

Bio: Jonathan Carter is the Associate Laboratory Director for Computing Sciences at Lawrence Berkeley National Laboratory (Berkeley Lab). The Computing Sciences Area at Berkeley Lab encompasses the National Energy Research Scientific Computing Center (NERSC), the Scientific Networking Division (home to the Energy Sciences Network, ESnet), and the Computational Research Division.

SLIDES

VIDEO


Crash Course in Supercomputing


Who: Rebecca Hartman-Baker
When: June 14, 1 – 5 p.m.
Where: Zoom (see calendar)

Abstract: In this two-part course, participants will learn to write parallel programs that can be run on a supercomputer. We begin by discussing the concepts of parallelization before introducing MPI and OpenMP, the two leading parallel programming libraries. Finally, participants will put together all the concepts from the class by programming, compiling, and running a parallel code on one of the NERSC supercomputers. Attending both the AM and PM sessions is recommended.
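The course itself teaches MPI and OpenMP at the C/Fortran level on NERSC systems; purely as a rough sketch of the message-passing model it introduces, the example below assumes the mpi4py Python bindings (not part of the course material) and sums each rank's contribution onto rank 0.

    # Minimal message-passing sketch using mpi4py (illustrative only; the
    # course covers MPI/OpenMP in C/Fortran on NERSC systems).
    # Example launch: mpirun -n 4 python sum_ranks.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD          # communicator containing all ranks
    rank = comm.Get_rank()         # this process's ID
    size = comm.Get_size()         # total number of processes

    local = rank + 1               # each rank computes a partial result
    total = comm.reduce(local, op=MPI.SUM, root=0)   # combine on rank 0

    if rank == 0:
        print(f"Sum of 1..{size} computed across {size} ranks: {total}")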

Bio: Rebecca Hartman-Baker leads the User Engagement Group at NERSC, where she is responsible for engagement with the NERSC user community to increase user productivity via advocacy, support, training, and the provisioning of usable computing environments. She began her career at Oak Ridge National Laboratory, where she worked as a postdoc and then as a scientific computing liaison in the Oak Ridge Leadership Computing Facility. Before joining NERSC in 2015, she worked at the Pawsey Supercomputing Centre in Australia, where she coached two teams to the Student Cluster Competition at the annual Supercomputing conference, led the HPC training program for a time, and was in charge of the decision-making process for determining the architecture of the petascale supercomputer installed there in 2014. Rebecca earned a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.

SLIDES

VIDEO


Modeling Antarctic Ice with Adaptive Mesh Refinement


Who: Dan Martin
When: June 16, 11 a.m. – 12 p.m.
Where: Zoom (see calendar) and Building 50 Auditorium

Abstract: The response of the Antarctic Ice Sheet (AIS) remains the largest uncertainty in projections of sea-level rise. The AIS (particularly in West Antarctica) is believed to be vulnerable to collapse driven by warm-water incursion under ice shelves, which causes a loss of buttressing, subsequent grounding-line retreat, and large (potentially up to 4 m) contributions to sea-level rise. Understanding the response of the Earth's ice sheets to forcing from a changing climate has required the development of a new generation of ice sheet models that are much more accurate, scalable, and sophisticated than their predecessors. For example, very fine (finer than 1 km) spatial resolution is needed to resolve ice dynamics around shear margins and grounding lines (the point at which grounded ice begins to float). The LBL-developed BISICLES ice sheet model uses adaptive mesh refinement (AMR) to enable sufficiently resolved modeling of full-continent Antarctic ice sheet response to climate forcing. This talk will discuss recent progress and challenges in modeling the sometimes-dramatic response of the ice sheet to climate forcing using AMR.
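BISICLES itself is built in C++ on the Chombo AMR framework; purely to illustrate the refinement idea described above, the hypothetical Python sketch below flags cells for refinement wherever the gradient of a field (for example, ice thickness) exceeds a threshold, which is how resolution gets concentrated near grounding lines and shear margins while the ice sheet interior stays coarse.

    import numpy as np

    def tag_cells_for_refinement(field, dx, threshold):
        """Flag cells whose gradient magnitude exceeds a threshold.

        Illustrative only: production AMR codes (e.g., Chombo/BISICLES) use
        richer criteria and organize flagged cells into nested finer levels.
        """
        gy, gx = np.gradient(field, dx)
        grad_mag = np.sqrt(gx**2 + gy**2)
        return grad_mag > threshold        # boolean mask of cells to refine

    # Toy field with a sharp transition mimicking a grounding line.
    x = np.linspace(0.0, 100.0, 200)                  # km
    thickness = np.where(x < 60.0, 2000.0, 300.0)     # m
    field = np.tile(thickness, (200, 1))
    tags = tag_cells_for_refinement(field, dx=0.5, threshold=100.0)
    print(f"{100.0 * tags.mean():.1f}% of coarse cells flagged for refinement")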

Bio: Dan Martin is a computational scientist and group leader for the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory. After earning his Ph.D. in mechanical engineering from U.C. Berkeley, Dan joined ANAG and Berkeley Lab as a post-doc in 1998. He has published in a broad range of application areas including projection methods for incompressible flow, adaptive methods for MHD, phase-field dynamics in materials, and ice sheet modeling. His research involves the development of algorithms and software for solving systems of PDEs using adaptive mesh refinement (AMR) finite volume schemes, high-order (fourth-order) finite volume schemes for conservation laws on mapped meshes, and Chombo development and support. Current applications of interest are developing the BISICLES AMR ice sheet model as part of the SciDAC-funded ProSPect application partnership, and development work related to the COGENT gyrokinetic modeling code, which is being developed in partnership with Lawrence Livermore National Laboratory as part of the Edge Simulation Laboratory (ESL) collaboration.

SLIDES PART 1
SLIDES PART 2

VIDEO


Scientific Machine Learning: Methods to Bridge Scientific Spatial and Temporal Modeling with Machine Learning


Who: Aditi Krishnapriyan
When: June 21, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: Machine learning (ML) has achieved success in a wide range of applications, and these methods are also seeing interest in the scientific domain. However, many challenges remain: learning ML models for scientific phenomena is difficult and often limited by the lack of training data. In this talk, I will discuss work aimed at overcoming these challenges by developing methods that incorporate scientific mechanistic modeling into the ML setting.

I will first discuss integrating physical constraints into ML models, and how changing the machine learning procedure can improve predictive performance for common engineering problems. I will then discuss how developing a neural network architecture that incorporates differential equation constrained optimization can verify desired physical constraints exactly over a given spatial and/or temporal domain. The neural network can then be trained in an end-to-end differentiable manner. I will show how this architecture allows us to efficiently and accurately learn a family of PDE solutions rather than a single PDE at a time, and demonstrate this on common scientific problems.
I will also explore learning from discrete scientific data, and discuss a methodology from numerical analysis theory to verify whether an ML model has learned a meaningfully continuous approximation to the underlying dynamics of such systems. Using this, we can verify that the ML models we learn satisfy robustness and convergence properties from numerical analysis. Learning meaningfully continuous models improves interpolation and extrapolation for a number of scientific applications, including resolving fine-scale features despite training only on low-resolution data, and correctly extrapolating to initial conditions other than those on which the ML model was trained. Finally, I will look ahead to other important challenges in bridging the gap between ML methods and the scientific domain, including uncertainty quantification and the use of symmetry-based constraints.
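As one concrete (and heavily simplified) illustration of folding a physical constraint into the training loss, the sketch below penalizes the residual of a toy ODE du/dx + u = 0 alongside a data-fitting term using PyTorch autograd; the methods discussed in the talk (differentiable PDE-constrained architectures, continuity verification, and so on) go well beyond this.

    import torch

    # Toy physics-informed loss: fit u(x) to sparse data while penalizing the
    # residual of the ODE du/dx + u = 0 (exact solution u = exp(-x)).
    # A simplified sketch of the general idea, not the speaker's method.
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    x_data = torch.linspace(0, 1, 8).reshape(-1, 1)   # sparse "observations"
    u_data = torch.exp(-x_data)

    x_col = torch.linspace(0, 2, 64).reshape(-1, 1)   # collocation points
    x_col.requires_grad_(True)

    for step in range(2000):
        opt.zero_grad()
        data_loss = torch.mean((net(x_data) - u_data) ** 2)

        u_col = net(x_col)
        du_dx = torch.autograd.grad(u_col, x_col,
                                    grad_outputs=torch.ones_like(u_col),
                                    create_graph=True)[0]
        physics_loss = torch.mean((du_dx + u_col) ** 2)   # ODE residual

        (data_loss + physics_loss).backward()
        opt.step()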

Bio: Aditi Krishnapriyan is the 2020 Alvarez Fellow in Computing Sciences at Lawrence Berkeley National Laboratory and UC Berkeley. Her research interests include developing new methods to incorporate domain-driven scientific mechanistic modeling with data-driven machine learning methodologies to accelerate and improve spatial and temporal modeling. Previously, she received a PhD at Stanford University, supported by the Department of Energy Computational Science Graduate Fellowship. During her PhD, she also spent time working on machine learning research at Los Alamos National Laboratory, Toyota Research Institute, and Google Research.


Using Computer Simulations to Shine a Light on Pulsars and Neutron Star Mergers


Who: Hannah Klion
When: June 23, time TBD
Where: Zoom (see calendar)

Abstract: Neutron stars are some of the most extreme environments in the universe, with exceptionally strong magnetic and gravitational fields. State-of-the-art observations have answered many questions about these objects while raising others. Computer simulations are a powerful tool to help bridge the gap between theoretical models and observations. I will talk about our approach to simulating two classes of neutron star phenomena: pulsars and neutron star mergers. Pulsars are highly-magnetized, rapidly-rotating neutron stars that show unexpectedly high-energy gamma-ray flares. Plasma simulations of pulsar magnetospheres can help understand how these particles get their energy. Neutron stars can also be found in binaries, whose orbits decay due to gravitational radiation. When the stars collide and merge, they launch radioactive outflows that are visible across the electromagnetic spectrum. Our simulations can predict the light we see from those events and help interpret our observations.

Bio: Hannah Klion is a postdoctoral researcher in the Center for Computational Sciences and Engineering at Lawrence Berkeley National Laboratory. She earned her BS in Physics from Caltech (2015) and her Ph.D. in Physics from UC Berkeley in 2021. At UC Berkeley, she was a Department of Energy Computational Science Graduate Fellow and a UC Berkeley Physics Theory Fellow. Her research focuses on high-performance multi-physics simulations of astrophysical systems.

SLIDES

VIDEO


Overview of Research and Developments in Scalable Solvers Group


Who: Xiaoye (Sherry) Li
When: June 28, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: We will highlight various ongoing projects in SSG. They span many areas of numerical linear algebra and high-performance scientific computing. The group members are also actively engaged in many application areas, such as material sciences, computational chemistry, machine learning, and quantum computing.

Bio: Sherry Li is a Senior Scientist in the Applied Mathematics and Computational Research Division at Lawrence Berkeley National Laboratory. She has worked on diverse problems in high-performance scientific computing, including parallel computing, sparse matrix computations, high-precision arithmetic, and combinatorial scientific computing. She is the lead developer of SuperLU, a widely used sparse direct solver, and has contributed to the development of several other mathematical libraries, including ARPREC, LAPACK, PDSLin, STRUMPACK, and XBLAS. She earned a Ph.D. in Computer Science from UC Berkeley and a B.S. in Computer Science from Tsinghua University in China. She has served on the editorial boards of the SIAM Journal on Scientific Computing and ACM Transactions on Mathematical Software, as well as on many program committees of scientific conferences. She is a SIAM Fellow and an ACM Senior Member.

SLIDES

VIDEO


LaTeX Workshop


Who: Lipi Gupta
When: June 29, 3 p.m. – 4:30 p.m.
Where: Zoom (see calendar)

Abstract: This is an interactive activity; anyone participating is encouraged to have access to a LaTeX compiler.
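For participants setting up ahead of time, a minimal document such as the following (a generic example, not the workshop's handout) is a quick way to confirm that a LaTeX compiler and basic math mode are working:

    \documentclass{article}
    \usepackage{amsmath}   % common math environments

    \title{LaTeX Compiler Check}
    \author{Summer Program Participant}

    \begin{document}
    \maketitle

    A displayed equation to confirm math mode works:
    \begin{equation}
      e^{i\pi} + 1 = 0.
    \end{equation}

    \end{document}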

Bio: Lipi Gupta defended her Ph.D. in Physics at the University of Chicago in July 2021. She was awarded an Office of Science Graduate Student Research Program award from the US Department of Energy to complete her Ph.D. research at the SLAC National Accelerator Laboratory in Menlo Park, California. At SLAC, Lipi studied how to apply machine learning techniques to improve particle accelerator operation and control. Lipi also has a background in nonlinear beam dynamics, focusing on sextupole magnet resonance elimination, through her research at the University of Chicago and research conducted while earning a Bachelor of Arts in Physics with a minor in mathematics at Cornell University.

SLIDES

VIDEO


Exploring Post-Moore Microelectronics with HPC


Who: Zhi (Jackie) Yao
When: June 30, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: The post-Moore's Law era has seen unprecedented prosperity of electronic microdevices harnessing novel wave-material interactions beyond conventional single-phase materials. However, gaining an in-depth understanding of the interaction between the waves and materials has been difficult because of the inherent disparity in time and length scales and the lack of effective modeling techniques. To address these challenges, we have developed scalable simulation tools that enable leadership computing systems to model emerging post-CMOS microelectronic devices (electronic, spintronic, nanomagnetic, and nanomechanical). Our exascale, open-source code, ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver), contains support for heterogeneous physical coupling. It is portable and scales well on many-core/GPU-based supercomputers, far beyond the reach of commercial tools, allowing us to capture the larger-scale spatial disparities inherent to realistic circuits. We have demonstrated algorithmic flexibility by developing a micromagnetics module. Our current efforts include upgrading the functionality of ARTEMIS to accurately describe new devices such as magnetoelectric spin-orbit logic, negative-capacitance field-effect transistors, and quantum magnonic components, enabling ARTEMIS to serve as a device-level design and optimization tool.
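ARTEMIS itself is an exascale C++ code; purely to illustrate the time-domain electrodynamics kernel at the heart of such solvers, the sketch below advances a textbook 1-D Yee-style finite-difference update of coupled E and H fields (a generic scheme, not ARTEMIS's implementation, which adds mesh refinement, material coupling, and GPU parallelism).

    import numpy as np

    # 1-D Yee-style FDTD update in normalized units (c = 1, impedance = 1).
    # Illustrative textbook scheme only.
    nx, nsteps = 400, 600
    ez = np.zeros(nx)          # electric field
    hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
    courant = 0.5              # dt/dx, kept below 1 for stability

    for n in range(nsteps):
        hy += courant * (ez[1:] - ez[:-1])              # update H from curl E
        ez[1:-1] += courant * (hy[1:] - hy[:-1])        # update E from curl H
        ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # Gaussian source pulse

    print("peak |Ez| after propagation:", np.abs(ez).max())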

Bio: Zhi (Jackie) Yao is the 2019 Luis W. Alvarez postdoctoral fellow in the Computing Sciences Area of Lawrence Berkeley National Laboratory. Her current research interests are in the physical modeling and design of microelectronic device components and quantum circuitry, high-performance computing, and GPU codes. Prior to joining LBNL, she was a postdoctoral researcher in the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA). She obtained her M.S. degree in 2014 and her Ph.D. degree in 2017, both from the ECE Department at UCLA. Her past research includes designing and characterizing innovative high-frequency devices with smart materials and heterogeneous physical modeling. She intends to apply her interdisciplinary training to investigate the fundamentals of wave-material interactions and how such insights can inspire new electronic applications and devices. She is currently one of the co-PIs and key personnel of LBL's Microelectronics Co-design projects.

SLIDES

VIDEO


ESnet, the Wireless Edge, and Practical Lessons (so far) in Developing Services for Science


Who: Andrew Wiedlea
When: July 5, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: In this talk, I'll discuss some of the new technologies for wireless scientific data movement and work underway to design and extend ESnet to the wireless edge, as part of our support for energy, environmental, and other field science applications where serpentine cabling cannot go.

Bio: Andrew Wiedlea is with the ESnet Science Engagement team, working to support the wide range of projects that depend on ESnet. Previously, he was head of the User Support Department with Berkeley Lab IT and has also had the opportunity to do interesting things with the Department of Defense and Los Alamos National Laboratory. Once upon a time, he was also a summer student at a UC Berkeley research institute affiliated with Berkeley Lab.

SLIDES

VIDEO


The Future of Computing Beyond Moore’s Law


Who: John Shalf
When: July 7, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: Moore’s Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore’s Law for 50 years is failing and is anticipated to flatten by 2025. This presentation provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements, and how it affects options available to continue scaling of successors to the first exascale machine. Lastly, this presentation covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers.
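Stated quantitatively (a restatement of the abstract's premise, not content from the talk), doubling roughly every two years at fixed cost means

    P(t) \approx P_0 \cdot 2^{t / 2\,\mathrm{yr}},
    \qquad \frac{P(20\,\mathrm{yr})}{P_0} \approx 2^{10} = 1024,

so two decades of the classical trend delivered roughly a thousand-fold improvement, which is the scale of gain that post-Moore approaches must find elsewhere.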

Bio: John Shalf is the Department Head for Computer Science at Lawrence Berkeley National Laboratory, and was recently Deputy Director of Hardware Technology for the DOE Exascale Computing Project. Shalf is a coauthor of over 80 publications in the field of parallel computing software and HPC technology, including three best papers and the widely cited report “The Landscape of Parallel Computing Research: A View from Berkeley” (with David Patterson and others). He also coauthored the 2008 “ExaScale Software Study: Software Challenges in Extreme Scale Systems,” which set the Defense Advanced Research Projects Agency’s (DARPA’s) information technology research investment strategy. Prior to coming to Berkeley Lab, John worked at the National Center for Supercomputing Applications and the Max Planck Institute for Gravitational Physics/Albert Einstein Institute (AEI), where he was co-creator of the Cactus Computational Toolkit.

SLIDES

VIDEO


Introduction to Topological Data Analysis


Who: Dmitriy Morozov
When: July 12, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: Topological data analysis (TDA) is a young but rapidly growing field at the intersection of computational geometry and algebraic topology. We will focus on persistent homology, one of the key mathematical theories behind TDA, used to describe the shape of data in a way that generalizes clustering. This talk will introduce persistence and some of its applications, including connections to machine learning.
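As a small, self-contained illustration of how persistence generalizes clustering (not the speaker's software; real analyses would use a TDA library), the sketch below computes 0-dimensional persistence of a point cloud: every point is born at filtration value 0, and a component dies at the distance threshold at which it merges with another, so long bars correspond to robust clusters.

    import numpy as np
    from itertools import combinations

    def h0_persistence(points):
        """0-dimensional persistent homology of a Euclidean point cloud.

        Components are born at filtration value 0; when two components merge
        at pairwise distance d, one of them dies at d. One component never
        dies (death = infinity). Long-lived bars indicate robust clusters.
        """
        n = len(points)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        # Process edges of the Vietoris-Rips filtration in order of length.
        edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                       for i, j in combinations(range(n), 2))
        deaths = []
        for d, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj        # merge: one component dies at d
                deaths.append(d)
        return [(0.0, d) for d in deaths] + [(0.0, float("inf"))]

    # Two well-separated blobs: expect one long finite bar plus the infinite bar.
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
    bars = h0_persistence(pts)
    print(sorted(bars, key=lambda b: b[1], reverse=True)[:3])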

Bio: Dmitriy Morozov is a Staff Scientist in the Scientific Data Division. He received a Ph.D. in Computer Science from Duke University in 2008. After a postdoctoral appointment in the Departments of Mathematics and Computer Science at Stanford University, he moved to the Lawrence Berkeley National Laboratory in 2011. His main interests are computational topology and geometry as well as high-performance computing, and how the interplay between these fields helps data analysis.

VIDEO


ALS Virtual Tour


Who: Ina Reichel
When: July 13, 3 p.m. – 4 p.m.
Where: Zoom (see calendar)

Details: Virtual tour of the Advanced Light Source (ALS) at Berkeley Lab with Ina Reichel of the Accelerator Technology and Applied Physics Division of Berkeley Lab.

View SLIDES and VIDEO of the virtual tour.


Engineering Self-driving Networks using Deep Learning


Who: Mariam Kiran
When: July 14, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: The pandemic and ongoing data challenges have highlighted the need for reliable network connectivity, but upgrading these systems is not only expensive but also complex due to the many vendors involved. In my work, I've been exploring how deep learning methods, which ingest large data sets and train neural networks on them, can be used to perform classification and reinforcement learning that help control and optimize the network for large-scale science transfers and deliver optimal performance. In my talk I will describe the challenges faced in understanding and deploying AI models, and how to interface these with real-world network systems, paving the way for future research challenges in digital twins and self-driving infrastructures.

Bio: Dr. Kiran is a Computer Scientist in the Scientific Networking Division, Lawrence Berkeley National Laboratory. She currently leads the AI for networking research group building solutions for operational network research and engineering problems. She received her Ph.D. and MSc (Eng) in Computer Science from the University of Sheffield, UK, in 2011 and 2009, respectively. Before joining Berkeley Lab in 2016, Kiran worked at the Universities of Sheffield, Leeds, Oxford, and University College London with collaborations with European industries such as SAP, ATOS, and BT. Her research explores machine learning and decentralized optimization for wide area networks, wireless, and Cloud infrastructures, needed for building 'self-driving networks'. She also helped develop FLAME, an open-source agent-based platform, currently used worldwide for complexity research. Her notable awards include the Royal Society Scientist in Westminster in 2015, the 2017 U.S. DOE Early Career Award, and ACM's N2Women Rising Stars in Networking Award in 2021. She is an active member of ACM and a Senior Member of IEEE communities.

SLIDES

VIDEO


FasTensor and Its Applications in Geoscience


Who: Bin Dong
When: July 19, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: Scientific applications produce mountains of data, and these scientific data are usually stored as tensors, also known as multidimensional arrays. Extracting insights from these data mostly depends on scientists developing customized data analysis programs from scratch, which is a tedious and error-prone process. Big data processing systems such as MapReduce and Spark were developed to avoid this tedious and error-prone development process, but they were initially designed to process data like web pages, which are naturally converted into key-value (KV) pairs. Scientific data has tensor data structures rather than key-value pairs, which leads to significant overhead when using these existing systems. This talk will present our recent efforts to build a tensor data processing system, FasTensor. It will also introduce one case-study example: how to apply FasTensor to perform data analysis tasks from a geoscience application, DAS (distributed acoustic sensing).
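FasTensor expresses analysis as user-defined functions applied over neighborhoods (stencils) of an array rather than over key-value pairs; the serial numpy sketch below is an illustration of that programming style (not FasTensor's actual C++ API), applying a user-supplied function to every cell's neighborhood of a 2-D array such as time-by-channel DAS data.

    import numpy as np

    def apply_stencil(arr, udf, radius=1):
        """Apply a user-defined function to the neighborhood of every cell.

        A toy, serial stand-in for the stencil/UDF style of array analysis
        described in the talk (FasTensor itself is a parallel C++ system).
        """
        out = np.empty(arr.shape, dtype=float)
        padded = np.pad(arr, radius, mode="edge")
        for idx in np.ndindex(arr.shape):
            window = padded[tuple(slice(i, i + 2 * radius + 1) for i in idx)]
            out[idx] = udf(window)
        return out

    # Example: smooth a toy 2-D (time x channel) record, as one might when
    # denoising distributed acoustic sensing (DAS) data.
    signal = np.random.default_rng(1).normal(size=(100, 50))
    smoothed = apply_stencil(signal, np.mean)
    print(signal.std(), smoothed.std())   # smoothing reduces the variance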

Bio: Bin Dong is a research scientist at LBNL. Bin's research interests are in big scientific data analysis, parallel computing, and machine learning. Bin is exploring new and scalable algorithms and data structures for sorting, organizing, indexing, searching, and analyzing big array data with supercomputers.

SLIDES

VIDEO


AI Representation Learning for Scientific Insight


Who: Kristofer Bouchard
When: July 21, 11 a.m. – 12 p.m.
Where: Zoom (see calendar entry)

Abstract: Modern AI algorithms achieve outstanding results in maximizing accuracy on a variety of tasks, from segmentation to regression and classification. Indeed, the use of deep learning as a black-box optimizer for maximizing a figure of merit has now become fairly routine in both science and industry. In contrast, the ability of modern AI algorithms to provide insight into the structure of latent processes generating complex scientific data is nascent. AI algorithms that either learn representations that are scientifically insightful or extract such representations from other learned models, are required to enable scientific data-driven discovery and hypothesis formulation. Work in this domain is challenging, as mathematically formulating what it means to be a scientifically meaningful representation is often unclear. In this talk, I will overview a variety of algorithms and work we have done in this space, with examples from diverse data types (e.g., images, time series) in neuroscience.

Bio: Kris Bouchard is the lead of the Computational Biosciences Group in the Scientific Data Division, Principal Investigator of the Neural Systems and Data Science lab in the Biological Systems and Engineering Division, and Adjunct Professor in Neuroscience at UC Berkeley. His personal research interests are diverse, but can broadly be summarized as developing novel machine learning and data science tools that provide insight into the processes that generate complex biological data sets to enable prediction, control, and understanding of the biological systems generating the data.


Scientific Applications in the NESAP Program


Who: NESAP Postdocs Soham Ghosh, Dhruva Kulkarni, and Nicholas (Nick) Tyler
When: July 26, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: The NERSC Exascale Scientific Application Program (NESAP) started in 2014 and has been forming collaborations with application development teams and vendors to prepare scientific applications for current and upcoming architectures. These collaborations involve NERSC staff taking deep dives into the application codes and engineering them to give optimal performance on NERSC and other systems. In this talk, NERSC engineers will give an overview of their collaborations with four science teams and provide insight into the efforts involved in adapting their applications to take advantage of the next generation of supercomputing architectures. The talk will cover optimization and development efforts for applications in metagenome analysis, computational fluid dynamics, machine-learning-assisted molecular dynamics, and climate science.

Bio: Soham Ghosh is a NESAP postdoctoral researcher at NERSC, LBL. He works on the development, GPU porting, optimization, and performance analysis of electronic structure and molecular dynamics codes. He is presently associated with the Quantum ESPRESSO group, developing scalable, high-throughput materials science software. He received his Ph.D. in physics from Florida State University, where he explored surface and interface properties of novel oxides using computational condensed matter techniques.

Bio: Dhruva Kulkarni is a postdoctoral fellow in NERSC's Exascale Science Application Program (NESAP) for Simulations. Dhruva's research interests include high performance computing, experimental and applied physics, and software engineering and development. He is currently working on performance optimization of the Whole Device Model Application (WDMApp), part of the Exascale Computing Project (ECP). His part of the project aims at evaluating and optimizing the performance of various kernels of WDMApp across different hardware (NVIDIA, AMD, Intel) and software (Kokkos, OpenMP target) stacks. Dhruva received his Ph.D. in Physics from Clemson University in May 2017 under the guidance of Prof. Chad Sosolik, researching the transport and energy deposition characteristics of highly charged ions at the Clemson University Electron Beam Ion Trap (CUEBIT) Lab as part of DARPA's Local Control of Materials Synthesis (LoCo) program. He received awards for best teaching assistant at the departmental and collegiate levels as a graduate student at Clemson University. Dhruva also has a master's degree in Physics from Clemson University and a bachelor's degree in Computer Engineering from PICT, Pune University. Prior to joining NERSC, Dhruva worked on the development of control software for triple quadrupole mass spectrometers and FAIMS instruments (Thermo Fisher Scientific), the development and single-core optimization (SIMD) of a proprietary molecular dynamics program to predict protein-ligand binding energies (Verseon Corporation), and the development of a high-speed stateful traffic generator utilizing a proprietary multi-core hyperthreaded network processor (Nevis Networks). In his spare time, Dhruva enjoys volunteering at schools and food banks and getting involved in science outreach projects.

Bio: Nick Tyler is a NESAP for Data postdoc working with the Joint Genome Institute at Berkeley Lab on their workflow management system, JAWS (JGI Analysis Workflow Service). Nick received his Ph.D. in experimental nuclear physics from the University of South Carolina in 2021 and a bachelor's degree in physics from Canisius College in 2008. While at Canisius and UofSC, he was a member of the CLAS collaboration at Jefferson Lab, working on experimental nuclear physics and hadron spectroscopy. During his research, Nick utilized many computing facilities and approaches, from high-throughput computing on the Open Science Grid to perform particle detector simulations, to high-performance computing resources at UofSC and Jefferson Lab to process and combine the experimental data with simulation results and calculate the final hadron cross sections.

VIDEO (All presenters)

SLIDES - Soham Ghosh

SLIDES - Dhruva Kulkarni

SLIDES - Nick Tyler


Behavioral Based Interviewing Workshop: Effective Interviewing Techniques


Who: Bill Cannan & Nicolette Carroll
When: July 28, 11 a.m. – 12 p.m.
Where: Zoom (see calendar)

Abstract: Past behavior is the best predictor of future performance! Behavioral-based interviewing is a competency-based interviewing technique in which employers evaluate a candidate's past behavior in different situations in order to predict their future performance. This technique is the new norm for academic and industry-based organizations searching for talent. This workshop will provide information and tools to help you prepare for your next interview, including an overview of the behavioral-based interview process, sample questions, and techniques on how to prepare.

Bio: Bill Cannan is the Sr. HR Division Partner who supports Computing Sciences and IT. Bill has over 20 years of HR-related experience as a recruiter and HR Generalist in both industry and National Lab environments, including over 12 years at Berkeley Lab and three years with Lawrence Livermore National Lab. Bill is responsible for providing both strategic and hands-on, full-cycle Human Resources support and consultation to employees and managers.

Bio: Nicolette Carroll is a Staff HR Division Partner who supports Computing Sciences and IT.

SLIDES


Summer Program Poster Session


Who: All Summer Program Participants are Welcome to Attend
When: August 2, 10 – 11 a.m.
Where: Zoom (see calendar)


WDE Poster Session


Who: All Summer Program Participants are Welcome to Attend
When: August 3, 10 a.m. – 12 p.m.
Where: Zoom (see calendar). Posters will be displayed in the CRT building.