Can Qubits Be Made from Cheaper Stuff than Diamond?
The leading method for creating quantum bits, or qubits, currently involves exploiting structural defects in diamond. But using NERSC resources, University of Chicago researchers found that the same type of defect could be engineered in aluminum nitride, a far cheaper material. If confirmed by experiments, this discovery could significantly reduce the cost of manufacturing quantum technologies. Their findings were published in Nature Scientific Reports.
HPC Webinar Series Kicks Off Wednesday
A new webinar series kicks off with "What All Codes Should Do: Overview of Best Practices in HPC Software Development" at 10 a.m. PST on Wednesday. Presented by NERSC, ALCF, and OLCF in collaboration with the IDEAS scientific software productivity project, the series explores best practices for software development on HPC systems. Other planned topics include Developing, Configuring, Building, and Deploying HPC Software; Distributed Version Control and Continuous Integration Testing; Testing and Documenting your Code; How the HPC Environment is Different from the Desktop (and Why); Basic Performance Analysis and Optimization; and Best Practices for I/O on HPC Systems.
Please visit the series web page for more information and to register.
From Beirut to Berkeley, Melissa Stockman is Newest Member of ESnet’s Tools Team
After living for 18 years in Beirut, Lebanon, where she was director of IT infrastructure and support at the Lebanese American University, Melissa Stockman returned to the U.S. and was looking for a new position when she saw a posting for a job with the ESnet Tools Team to develop network automation tools.
“My first thought was that it looked so interesting and was a combination of my background in machine learning and networking,” Stockman said. “I had experience in both of those areas.”
Since she joined ESnet earlier this year, she has been developing software in two areas: network analytics and network automation. Her current analytics project involves storing large amounts of router data in the cloud for later analysis. On the network automation side, she is working on a tool to check, verify, and automatically update ESnet's point-to-point router information, which can fall out of sync when devices are changed.
Tomorrow: BIDS Spring 2016 Data Science Faire
The Berkeley Institute for Data Science holds its Spring 2016 Data Science Faire between 1:30 and 4:30 p.m. tomorrow in room 190 of UC Berkeley's Doe Library. The Data Science Faire closes out BIDS' second academic year and celebrates data science at Berkeley. At this year's Data Science Faire, BIDS will showcase data-intensive initiatives at BIDS and UC Berkeley, highlighting work from the diverse community of data scientists around campus. Learn more about the exciting open source projects BIDS affiliates are working on and catch up on the work of researchers at top data science centers, including NERSC, Berkeley Research Computing, and more. The keynote will be delivered by BIDS Director Saul Perlmutter, who holds a joint appointment at Berkeley Lab and is both a Nobel Laureate and long-time NERSC user. Find details and a full agenda on the BIDS web site.
This Week's CS Seminars
Monday, May 2
NERSC Data Seminar
Data Analytics for BRAIN at NERSC
Noon to 1:30 p.m., Wang Hall, Bldg. 59, Room 3101
Kristofer Bouchard, Biological Systems and Engineering Division, LBNL; Computational Research Division (by courtesy), LBNL; Kavli Institute for Fundamental Neuroscience, UCSF
The BRAIN Initiative has highlighted the need for advanced technologies for understanding the brain. As with many other scientific domains, neuroscience is entering the ‘extreme data’ space with little experience and few plans for the challenges this brings. Neuroscience is a diverse field, and the variety of data types collected by neuroscience experiments reflects this diversity. Therefore, we have developed data models and management systems that can flexibly handle these diverse data types, are extensible to future needs, and are standardized across experiments. Likewise, the algorithms used to analyze these diverse data sets were not implemented to handle the growing data volumes. Hence, we have devised advanced analysis and visualization methods that powerfully reveal structure in data, scale to large data sets, and (often) yield interpretable results. Finally, to maximize the utilization of the growing volumes of data, different data sets need to be collocated with the computing resources required to analyze them, but such facilities are not available to neuroscientists. Indeed, neuroscience data sets are already large enough that analysis with even basic algorithms is impractical on standard computing resources. We have been utilizing NERSC as a user facility, supported by experts, in which high-performance computing resources are co-located with diverse, multi-modal data sets to enable data sharing and collaborative analysis. This talk will summarize these activities, with a focus on data analytics, and present a unified vision of how HPC user facilities can support the BRAIN Initiative.
Reconstruction Algorithms for Next Generation Imaging: Multi-Tiered iterative phasing for fluctuation X-ray scattering and single-particle diffraction
2 to 3 p.m., Bldg. 70A, Room 3377
Jeffrey Donatelli, Lawrence Berkeley National Laboratory
With recent advances in imaging technology, we are now able to overcome the limitations of traditional imaging techniques by performing new imaging experiments that were previously impossible. One such emerging experimental technique is fluctuation X-ray scattering (FXS), in which one collects a series of diffraction patterns from multiple particles in solution using an ultrafast X-ray pulse, which can take snapshots on timescales shorter than the particles' rotational diffusion times. The resulting images contain angularly varying information from which angular correlations can be computed, yielding several orders of magnitude more information than traditional solution scattering methods. However, determining molecular structure from FXS data introduces several challenges, since, in addition to the classical phase problem, one must also solve a hyper-phase problem to determine the 3D intensity function from the correlation data. In another technique known as single-particle diffraction (SPD), several diffraction patterns from individual particles are collected using an ultrabright X-ray beam. However, the samples are delivered to the beam at unknown orientations and may also be present in several different conformational states. In order to reconstruct structural information from SPD, one must determine the orientation and state for each image, extract an accurate 3D model of the intensity function from the images, and solve for the missing complex phases, which are not measured in diffraction images.
In this talk, we present the multi-tiered iterative phasing (M-TIP) algorithm for determining molecular structure from FXS and SPD data. This algorithm breaks up the associated reconstruction problems into a set of simpler subproblems that can be efficiently solved by applying a series of projection operators. These operators are combined in an iterative framework which is able to simultaneously determine missing parameters, the 3D intensity function, the complex phases, and the underlying structure from the data. In particular, this approach is able to leverage prior knowledge about the structural model, such as finite size or symmetry, to obtain a reconstruction from very limited data with excellent global convergence properties and high computational efficiency. We show results from applying M-TIP to determine molecular structure from both simulated data and experimental data collected at the Linac Coherent Light Source (LCLS).
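The abstract does not spell out M-TIP's projection operators, but the general alternating-projection idea behind iterative phasing can be illustrated in a toy setting. The sketch below is classical error-reduction phase retrieval on a 1-D signal, not the M-TIP algorithm itself; the signal, support, and all names are hypothetical:

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=500, seed=0):
    """Toy 1-D alternating-projection phase retrieval.

    Alternates two projections, in the spirit of iterative phasing:
      1. Fourier projection: keep the current phases, impose the
         measured Fourier magnitudes.
      2. Support projection: zero the signal outside its known support.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(magnitudes.shape)      # random starting guess
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = magnitudes * np.exp(1j * np.angle(X))  # impose magnitudes
        x = np.fft.ifft(X).real
        x = np.where(support, x, 0.0)              # impose support
    return x

# Hypothetical example: recover a smooth bump from its Fourier
# magnitudes plus knowledge of its support.
true = np.zeros(64)
true[10:20] = np.hanning(10)
mags = np.abs(np.fft.fft(true))
rec = error_reduction(mags, true != 0)
```

Real FXS/SPD reconstructions operate on 3-D intensity functions and correlation data, and M-TIP adds further tiers that determine orientations, states, and structural parameters; this toy only shows how two constraint projections alternate within one iterative loop.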
Wednesday, May 4
Applied Math Seminar
High-order Mesh Generation and Mesh Adaptation
3:30 to 4:30 p.m., 939 Evans Hall, UC Berkeley
Meire Fortunato, UC Berkeley
High-order methods are receiving considerable interest from the computational community because they can achieve higher accuracy with reduced computational cost compared to traditional low-order approaches. These methods generally require unstructured meshes of non-inverted curved elements, and the generation of high-order curved meshes in a robust and automatic way is an important and challenging open problem.
We present a method to generate high-order unstructured curved meshes by solving the classical Winslow equations using a new continuous Galerkin finite element discretization. This formulation appears to produce high quality curved elements, which are highly resistant to inversion. In addition, the corresponding nonlinear equations can be solved efficiently using Picard iterations, even for highly stretched boundary layer meshes.
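For readers unfamiliar with Picard iteration: the nonlinearity is evaluated at the previous iterate, leaving only a simpler problem to solve at each step. A minimal scalar sketch (hypothetical; in the actual Winslow solve the trivial update below is replaced by a linear finite element solve):

```python
import math

def picard(g, x0, tol=1e-12, max_iter=200):
    """Solve the fixed-point problem x = g(x) by Picard iteration.

    Each step evaluates the nonlinearity at the previous iterate,
    x_{k+1} = g(x_k), and stops when successive iterates agree to tol.
    """
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# Toy example: x = cos(x) has a unique fixed point near 0.739085.
root = picard(math.cos, 1.0)
```

Picard iteration converges linearly when the map is a contraction near the fixed point, which is why it can remain robust even for the strongly nonlinear systems arising from highly stretched boundary-layer meshes.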
Another challenge that mesh-based methods face is that the discretization of the domain is usually generated before the solution is known, which can lead to large numerical errors or non-convergent schemes. A tool that can be used to overcome this problem is mesh adaptivity. We use the Winslow variable diffusion equations – which are a variation of the classical form – to perform high-order mesh adaptivity. We show how this scheme can be used to adapt a mesh with stretched and curved elements in the presence of shocks that are formed when solving the Euler equations of gas dynamics for supersonic flow.
Friday, May 6
A Conjugate Gradient Optimization Method for Electronic Structure Calculations
2 to 3 p.m., Bldg. 50B, Room 4205
Aihui Zhou, Institute of Computational Mathematics, Chinese Academy of Sciences
In this presentation, we describe a conjugate gradient method for electronic structure calculations. We propose a Hessian-based step size strategy, which together with three orthogonality approaches yields three algorithms for computing the ground state energy of atomic and molecular systems. Under some mild assumptions, we prove that our algorithms converge locally. Numerical experiments show that the conjugate gradient method performs quite well.
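The abstract does not give the speaker's step-size formula, but a familiar instance of a Hessian-based step size is linear CG on a quadratic f(x) = ½xᵀAx − bᵀx, where the exact line-search step α = rᵀr / (pᵀAp) uses the Hessian A directly. A minimal sketch under that assumption (all names hypothetical, not the speaker's algorithm):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A, i.e. solve A x = b, by the conjugate gradient method."""
    x = np.zeros(b.size)
    r = b - A @ x                      # residual = negative gradient
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for _ in range(max_iter or b.size):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # Hessian-based exact step size
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol**2:
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# Hypothetical 2x2 example: A x = b with a known solution.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In electronic structure calculations the objective is not quadratic and iterates must stay orthonormal, which is where the step-size strategy and the three orthogonality approaches mentioned in the abstract come in.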
Link of the Week—Slime mold: The next wet thing in computing?
This week's link is a double-header: The Washington Post recently reported that scientists had uncovered learning behavior in brainless slime molds. The same story links to an earlier report on scientists who suggest using these smart, slimy but brainless creatures in computing. The researchers published a paper on April 4 in Materials Today describing how they managed to get slime mold to function as a logical circuit.