Computing Sciences Summer Students: 2013 Talks & Events

High-Order Discontinuous Galerkin Methods for Conservation Laws

Who: Per-Olof Persson
When: June 06, 12:00-1:00 PM
Where: 70-191

It is widely believed that high-order accurate numerical methods, for example discontinuous Galerkin (DG) methods, will eventually replace the traditional low-order methods in the solution of many problems, including fluid flow, solid dynamics, and wave propagation. I will explain what DG methods are and demonstrate what these methods can do in real-world problems, such as the analysis of flapping flight (e.g. the flight of birds and bats), microelectromechanical systems (MEMS), and wind turbines.
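As a rough sketch of the idea (notation mine, not taken from the talk): for a scalar conservation law u_t + div f(u) = 0, a DG method approximates the solution by a polynomial of degree p on each mesh element K, discontinuous across element boundaries, satisfying the weak form

    \frac{d}{dt}\int_K u_h\, v\, dx \;-\; \int_K f(u_h)\cdot\nabla v\, dx \;+\; \int_{\partial K} \hat{f}(u_h^-, u_h^+)\cdot n\, v\, ds \;=\; 0 \qquad \text{for all } v \in P_p(K),

where \hat{f} is a numerical flux (e.g. Lax-Friedrichs) that couples neighboring elements through the face values u_h^- and u_h^+. Raising p, rather than refining the mesh, is what makes the method high-order.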

Per-Olof Persson is an Assistant Professor in the Department of Mathematics at the University of California, Berkeley, and a Mathematician Faculty Scientist/Engineer at the Lawrence Berkeley National Laboratory. He was previously an Instructor of Applied Mathematics at the Massachusetts Institute of Technology, where he also received his Ph.D. in 2005 under the supervision of Gilbert Strang and Alan Edelman. He received the Air Force Office of Scientific Research Young Investigator Award in 2010 and an Alfred P. Sloan fellowship in 2011. His current research interests are in high-order discontinuous Galerkin methods for computational fluid and solid mechanics, with applications in aerodynamics, aeroacoustics, and flapping flight.

Introduction to HPC and NERSC Tour

Who: Richard Gerber
When: June 10, 09:00 AM-12:00 PM
Where: Berkeley Lab's OSF

What is High Performance Computing (and storage!), or "HPC"? Who uses it and why? We'll talk about these questions as well as what makes a "Supercomputer" so super and what's so big about scientific "Big Data". Finally we'll discuss the challenges facing system designers and application scientists as we move into the "many-core" era of HPC.

Big Bang, Big Data, Big Iron

Who: Julian Borrill
When: June 13, 12:00-1:00 PM
Where: 70-191

On March 21st 2013 the European Space Agency announced the first cosmology results from its billion-dollar Planck satellite mission. The culmination of 20 years of work, Planck’s observations of the Cosmic Microwave Background – the faint echo of the Big Bang itself – provide profound insights into the foundations of cosmology and fundamental physics.

Planck has been making 10,000 observations of the CMB every second since mid-2009, providing a dataset of unprecedented richness and precision; however, the analysis of these data is an equally unprecedented computational challenge. For the last decade we have been developing the high performance computing tools needed to make this analysis tractable, and deploying them at supercomputing centers in the US and Europe.

This first Planck data release required tens of millions of CPU-hours on the NERSC supercomputers. This included generating the largest Monte Carlo simulation set ever fielded in support of a CMB experiment, comprising 1,000 realizations of the mission reduced to 250,000 maps of the Planck sky. However our work is far from done; future Planck data releases will require ten times as many simulations, and next-generation CMB experiments will gather up to a thousand times as much data as Planck.
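For a feel for the scale, a back-of-envelope calculation (the four-year observing span is my assumption; the actual mission timeline is more complicated):

    # Rough scale of the Planck timestream and Monte Carlo set (illustrative only).
    SAMPLES_PER_SECOND = 10_000            # quoted observation rate
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    YEARS_OBSERVING = 4                    # assumption: mid-2009 to the 2013 release

    samples = SAMPLES_PER_SECOND * SECONDS_PER_YEAR * YEARS_OBSERVING
    print(f"~{samples:.1e} time-ordered samples")   # ~1.3e+12

    # The quoted Monte Carlo set: 1,000 realizations reduced to 250,000 maps,
    # i.e. roughly 250 maps per realization of the mission.
    realizations, maps = 1_000, 250_000
    print(f"~{maps // realizations} maps per realization")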

How to Write a Bad Proposal

Who: Kathy Yelick
When: June 18, 12:00-1:00 PM
Where: 50A-5132

Whether you spend most of your career in a university, a national laboratory, or an industrial research lab, writing proposals is likely to be an important part of that career. In any setting, getting funding (or management buy-in) is important to building a large team effort and, in some cases, to keeping yourself and your students and staff employed. This talk will describe some of the common pitfalls in proposal writing and in doing so give you some ideas about how to be successful. The talk draws on many experiences, successful and not, over the years.

Using Math and Computing to Model Supernovae 

Who: Andy Nonaka
When: June 20, 12:00-1:00 PM
Where: 70-191

We describe our recent efforts to understand the physics of Type Ia supernovae, the largest thermonuclear explosions in the universe, by simulating multiple stages of stellar evolution using two massively parallel code frameworks developed at Berkeley Lab. Each code framework was developed using mathematical models well-suited to the character of the burning, whether it be the pre-ignition deflagration phase or the post-ignition explosion phase. Our results will help scientists form models that describe the ultimate fate of our universe.
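The abstract doesn't name the formulations, but a common division of labor in this setting (a sketch under my own assumptions, not necessarily the exact models used) pairs a low Mach number model, which filters sound waves and so permits large time steps during the slow pre-ignition phase, with the fully compressible reactive Euler equations for the explosion phase:

    \partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0,
    \partial_t(\rho\mathbf{u}) + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u}) + \nabla p = \rho\mathbf{g},
    \partial_t(\rho E) + \nabla\cdot\big[(\rho E + p)\,\mathbf{u}\big] = \rho\,\mathbf{u}\cdot\mathbf{g} + \rho\,\dot{H}_{\mathrm{nuc}},

where \dot{H}_{\mathrm{nuc}} is the nuclear energy release rate and \mathbf{g} the gravitational acceleration.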

Computing Challenges in Bioinformatics

Who: Sarah Richardson, Alicia Clum and Alex Boyd
When: June 27, 1:30-3:00 PM
Where: Berkeley Lab's OSF

We have three panelists from the Joint Genome Institute who will discuss the computational challenges they tackle in bioinformatics.

Sarah Richardson is a distinguished postdoctoral fellow in the synthetic biology group. She gets to design and build the custom bacteria needed to help us understand the connection between genes and their functions. She has written software called GeneDesign that is used by researchers around the world to design DNA sequences that can be constructed in the lab.
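GeneDesign's actual interface isn't described here, but as a purely illustrative sketch of the kind of task such tools automate, reverse-translating a protein sequence into buildable DNA with a codon table might look like:

    # Hypothetical sketch, not GeneDesign's API: pick one preferred codon
    # per amino acid. Real tools use organism-specific codon-usage tables
    # and also optimize for constraints like restriction sites and GC content.
    PREFERRED_CODONS = {
        "M": "ATG", "K": "AAA", "L": "CTG", "V": "GTG", "S": "AGC", "*": "TAA",
    }

    def reverse_translate(protein: str) -> str:
        """Return one DNA sequence that encodes the given protein."""
        return "".join(PREFERRED_CODONS[aa] for aa in protein)

    print(reverse_translate("MKLV*"))  # ATGAAACTGGTGTAA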

Alicia Clum is a puzzle master. She's an analyst in the Genome Assembly Group: she takes the thousands of short reads created by next-generation sequencers (the puzzle pieces) and finds ways to put them back together in order to reconstruct microbial genomes. She's recently been working on metagenome assembly, an even trickier problem where you have 10-100 slightly different puzzles all jumbled up. The goal is similar, but you are now trying to reconstruct 10-100 different genomes and figure out how the microbes may work together to perform tasks like carbon sequestration.
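As a toy illustration of the core idea (not the JGI's actual pipeline): assemblers often chop reads into overlapping k-mers and link them into a de Bruijn graph, whose paths spell out candidate reconstructions of the genome.

    from collections import defaultdict

    def de_bruijn(reads, k=4):
        """Toy de Bruijn graph: map each (k-1)-mer to its successor (k-1)-mers."""
        graph = defaultdict(list)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].append(kmer[1:])
        return graph

    # Two overlapping "reads" sampled from the same underlying sequence.
    for node, successors in sorted(de_bruijn(["ATGGCGT", "GGCGTGC"]).items()):
        print(node, "->", successors)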

Alex Boyd is a software development whiz. In his short time at the JGI he has re-architected several critical pipelines for the sequence data management group (they keep things organized as data comes off the sequencers). Most recently he has helped write an archiving tool that the JGI will use to optimize the storage of data on spinning disk and tape. Without this archiving tool, the massive quantities of data generated today at the JGI by sequencing and analysis would stand a high probability of being completely inaccessible in a decade.
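The archiving tool itself isn't described in detail; a minimal sketch of the underlying idea, with invented thresholds, is a policy that assigns each file a storage tier based on how recently it was accessed:

    import time

    # Hypothetical tiering policy, not the JGI tool: keep recently used data
    # on spinning disk and migrate cold data to tape.
    DAY = 86_400  # seconds

    def choose_tier(last_access_epoch, now=None):
        age_days = ((now or time.time()) - last_access_epoch) / DAY
        return "disk" if age_days < 30 else "tape"

    print(choose_tier(time.time() - 90 * DAY))  # tape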

Why High Performance Visual Data Analysis is both Relevant and Difficult

Who: Wes Bethel
When: July 11, 12:00-1:00 PM
Where: 70-191

Data visualization, data analysis, and data analytics are all integral parts of the scientific process. Collectively, these technologies provide the means to gain insight into data of ever-increasing size and complexity. Over the past two decades, a substantial amount of visualization, analysis, and analytics R&D has focused on the challenges posed by increasing data size and complexity, as well as on the increasing complexity of a rapidly changing computational platform landscape. While some of this research focuses solely on technologies, such as indexing and searching or novel analysis or visualization algorithms, other R&D projects focus on applying technological advances to specific application problems. Some of the most interesting and productive results occur when these two activities, R&D and application, are conducted in a collaborative fashion, where application needs drive R&D and R&D results are immediately applicable to real-world problems.
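One concrete instance of the indexing-and-searching theme (my illustration, not tied to a specific Berkeley Lab tool): query-driven visualization first selects only the records satisfying a predicate, so downstream rendering touches a small fraction of the data.

    import numpy as np

    # Toy query-driven selection over a synthetic 10-million-record field.
    rng = np.random.default_rng(0)
    temperature = rng.normal(loc=300.0, scale=50.0, size=10_000_000)

    # A boolean-mask scan stands in for an index (e.g. a bitmap index)
    # that would answer the same range query without touching every record.
    selected = temperature[(temperature > 400.0) & (temperature < 450.0)]
    print(f"selected {selected.size} of {temperature.size} records")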

An Overview of ESnet's Advanced Network Services

Who: Brian Tierney
When: July 25, 12:00-1:00 PM
Where: 70-191

The Energy Sciences Network (ESnet) is a high-performance, unclassified national network infrastructure optimized for science data transport. Funded by the U.S. Department of Energy's Office of Science and managed by LBNL, ESnet's mission is to support scientific discovery. ESnet provides services to the more than 40 DOE research sites, including the entire National Laboratory system, its supercomputing centers, and its major scientific instruments. ESnet also connects to 140 research and commercial networks, permitting tens of thousands of DOE-funded scientists around the world to collaborate productively.

This talk will provide an overview of ESnet and go into more detail on some of its main projects, including the Science DMZ (a network design pattern for data-intensive science), perfSONAR (a network monitoring and troubleshooting system), and OSCARS (a virtual circuit creation and reservation system).
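perfSONAR's own interfaces aren't shown here; as a generic sketch of the kind of measurement such a system schedules, a minimal TCP throughput probe over loopback could look like:

    import socket, threading, time

    # Illustrative only: real measurement tools are far more careful about
    # buffer tuning, timing, and parallel streams.
    CHUNK = b"x" * (1 << 20)   # 1 MiB payload
    TOTAL_MIB = 100

    def sink(server):
        conn, _ = server.accept()
        while conn.recv(1 << 16):   # drain until the sender closes
            pass
        conn.close()

    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    threading.Thread(target=sink, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())
    start = time.time()
    for _ in range(TOTAL_MIB):
        client.sendall(CHUNK)
    client.close()
    print(f"~{TOTAL_MIB * 8 / (time.time() - start):.0f} Mbit/s (approx.)")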