
InTheLoop | 11.26.2012

November 26, 2012

Modeling the Breaking Points of Metallic Glasses

Christopher Rycroft of Berkeley Lab’s Computational Research Division has developed some novel computational techniques to address why metallic glass alloys, or liquid metals, have such wildly different toughness and breaking points, depending on how they are made. Read more.


UCSF/LBNL Workshop: Computational Challenges for Precision Medicine

The University of California, San Francisco (UCSF) and Berkeley Lab will hold a day-long workshop on “Computational Challenges for Precision Medicine” on Friday, November 30, at the UCSF Mission Bay Campus.

Precision medicine refers to the creation of an integrated repository of all disease-related data and the use of this repository, or “knowledge network,” to improve our molecular understanding, diagnosis, and treatment of disease. Researchers across the spectrum of biomedical research, from fundamental discovery to clinical and population research, will populate the network with data. Citizens will also contribute their personal health data, enriching the collective data pool. Linkages between seemingly disparate data sets will provide novel insights into the molecular mechanisms of disease, while also informing clinical decisions in real time.

Achieving this ambitious vision will require overcoming significant scientific, technical, and policy challenges as outlined in the National Academy of Sciences (NAS) report titled “Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease.” Berkeley Lab and UCSF are committed to working together to address these challenges by envisioning and supporting collaborative pilot projects.

This workshop, one in a series with different foci, will bring together investigators from LBNL and UCSF to focus on the computational challenges posed by precision medicine. Keith Yamamoto, UCSF Vice Chancellor for Research and a member of the committee for the NAS report, will kick off the workshop with an overview of precision medicine. He will describe the key challenge areas where pilot projects are needed for the goals of precision medicine to be realized.

Next, LBNL and UCSF investigators will give “team talks” related to four defined computational challenge areas. In each case, the UCSF investigator will describe the specific challenges they face in their health sciences research field. Then, the LBNL investigator will describe how they have approached analogous computational challenges, including in fields outside medicine or the life sciences. The team talks are meant to help participants in the workshop understand the challenges of precision medicine and, importantly, to identify linkages between the expertise at UCSF and LBNL that could be applied in collaborative pilot projects.

There will be time in the afternoon for participants to discuss opportunities for collaboration. Breakout groups will be organized around each of the challenge areas covered in the team talks. We will end the day by reporting back on the ideas generated in the breakouts and then convening for a reception. Ideas from the workshop will be discussed by a joint leadership group, with the goal of announcing specific collaborative challenge areas and a joint funding mechanism in early 2013.


HEP Requirements Review This Week

The NERSC program requirements review “Large Scale Computing and Storage Requirements for High Energy Physics” will be held Tuesday and Wednesday, Nov. 27–28, in Rockville, MD. Organized by the Department of Energy’s Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and NERSC, the review’s goal is to characterize HEP production computing and storage requirements through 2017 at NERSC. Richard Gerber and Harvey Wasserman are NERSC’s review coordinators.

This review brings together DOE program managers, leading domain scientists, and NERSC personnel to ensure that NERSC continues to provide the world-class facilities and services needed to support DOE Office of Science research. The review will produce a report that includes HPC requirements for HEP along with supporting narratives, illustrated by specific science-based case studies. Findings from the review will guide NERSC procurements and service offerings, and will help NERSC, ASCR, and HEP advocate for the HPC resources needed to support DOE science goals.

Presentations by Berkeley Lab researchers include:

  • Julian Borrill, CRD: Case Study: Cosmic Background Radiation Data Analysis for the Planck Satellite Mission
  • Sudip Dosanjh, NERSC Director: NERSC’s Role in HEP Research and Emerging Technologies
  • Cameron Geddes, Accelerator and Fusion Research Division: Case Study: Plasma Accelerator Simulation Using Laser and Particle Beam Drivers
  • Peter Nugent, CRD/NERSC: Case Study: Baryon Oscillation Spectroscopic Survey and/or Distant Supernova Search
  • Michele Papucci, Physics Division: Case Study: Theoretical Particle Physics Simulations for LHC Processes
  • Craig Tull, CRD: Case Study: Intensity Frontier Data Analysis (Daya Bay)

This Week’s Computing Sciences Seminars

Beyond the Hill of Multicores Lies the Valley of Accelerators

Monday, Nov. 26, 10:00–11:00 am, 50F-1647
Aviral Shrivastava, Arizona State University

The power wall has forced a sharp turn in processor design, and processors have irrevocably gone multi-core. Multi-cores are good because they promise higher potential throughput (never mind the actual performance of your applications). This is because the cores can be made simpler and run at lower voltage, resulting in much more power-efficient operation. Even though single-core performance is much reduced, the total possible throughput of the system scales with the number of cores. However, the excitement of multi-core architectures will only last so long. This is not only because the benefits of voltage scaling diminish as voltage decreases, but also because beyond some point, making a core simpler is only detrimental and may actually hurt power efficiency. What next? How do we further improve power efficiency?
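The multi-core bargain rests on the standard CMOS dynamic-power relation (a textbook sketch, not spelled out in the abstract): the switching power of a core scales roughly as

    P_{dyn} \approx \alpha C V^{2} f,

where \alpha is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Lowering V (and f along with it) cuts per-core power superlinearly, while adding cores recovers throughput roughly linearly; once V can no longer be reduced safely, that bargain runs out.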

Beyond the hill of multi-cores lies the valley of accelerators. Hardware accelerators (e.g., Intel SSE), software accelerators (e.g., VLIW accelerators), reconfigurable accelerators (e.g., FPGAs), and programmable accelerators (CGRAs) are some of the foreseeable solutions that can further improve the power efficiency of computation. Among these, we find CGRAs, or Coarse-Grained Reconfigurable Arrays, a very promising technology. They are slightly reconfigurable (and therefore close to hardware), but programmable (and therefore usable as more general-purpose accelerators). As a result, they can provide power efficiencies of up to 100 GOps/W while remaining relatively general purpose. Although CGRAs are very promising, several challenges remain in compiling for them, especially because they have very little dynamism in the architecture, and almost everything (including control) is statically determined. In this talk, I will describe our recent research in developing compiler technology to enable CGRAs as general-purpose accelerators.
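As a concrete illustration of what “statically determined” means here, the kernels that CGRA compilers typically target look like the following hypothetical C sketch: a loop whose body is a fixed dataflow graph with no data-dependent branches, so every operation can be placed and routed on the array at compile time.

    /* Hypothetical CGRA-friendly kernel (illustration only, not from the talk):
       the loop body has no data-dependent control flow, so its multiply and
       add can be mapped onto the reconfigurable array statically. */
    void saxpy(int n, float a, const float *x, const float *y, float *z)
    {
        for (int i = 0; i < n; i++)
            z[i] = a * x[i] + y[i];  /* one multiply-add per iteration */
        /* A kernel with irregular, data-dependent branching would be far
           harder to map, since the array has no dynamic control logic. */
    }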

Physics Colloquium: Certifiable Quantum Dice

Monday, Nov. 26, 4:15–5:15 pm, LeConte Hall, Room 1, UC Berkeley
Umesh Vazirani, UC Berkeley

Is it possible to certify that the n-bit output of a physical random number generator is “really random”? Since every fixed-length sequence should be output with equal probability, there seems to be no basis on which to reject any particular output. Indeed, in the classical world it seems impossible to provide such a test.

The uniquely quantum mechanical phenomena of entanglement and Bell inequality violations allow for a remarkable random number generator. Its output is certifiably random in the following sense: if there is no information communicated between the two boxes in the randomness generating device (based, say, on the speed of light limit imposed by special relativity), and provided the output passes a simple statistical test, then the output is certifiably random. Paradoxically, the certification of randomness does not depend upon the correctness of quantum mechanics, and would even be convincing to a quantum skeptic!
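The “simple statistical test” is a Bell-type test; in its standard CHSH form (a textbook formulation, not detailed in the abstract), the two boxes receive random settings a, a' and b, b', and the correlations of their outputs are combined as

    S = E(a,b) + E(a,b') + E(a',b) - E(a',b').

Any pair of boxes whose outputs follow a pre-programmed, locally classical strategy satisfies |S| \le 2, while entangled quantum devices can reach |S| = 2\sqrt{2}. Observing a violation of the classical bound therefore certifies that the outputs could not have been fixed in advance, i.e., that they contain genuine randomness.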

The work on certifiable randomness has recently played a major role in the resolution of one of the great challenges in quantum cryptography — achieving device independence. This very strong guarantee of security allows the users of the cryptographic scheme to test in real time that its security has not been compromised.

Based on joint work with Thomas Vidick.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.