InTheLoop | 09.12.2011

ASCR Leadership Computing Challenge Is Accepting Proposals

Open to scientists from the research community in national laboratories, academia, and industry, the ASCR Leadership Computing Challenge (ALCC) allocates up to 30% of the computational resources at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge for special situations of interest to the DOE’s energy mission, with an emphasis on high-risk, high-payoff simulations. These special situations of interest are directly related to the following:

  • Advancing the clean energy agenda.
  • Understanding the environmental impacts of global energy systems.
  • Responding to natural and man-made disasters or incidents; e.g., hurricanes, earthquakes, tsunamis, pandemics.
  • Broadening the community of researchers capable of using leadership computing resources.
  • Exploration of new frontiers in physical and biological sciences.

ALCC applications submitted from September 1, 2011 through February 14, 2012 will be considered for allocation in 2012. Read more.


NERSC Offers New User Training on Tuesday, September 13

NERSC will present a two-hour “Getting Started at NERSC” training event aimed at new users from 10:00 to 12:00 PDT on Tuesday, September 13. This event will be broadcast over the web and presented in person at NERSC’s Oakland Scientific Facility. This is a repeat of the training presented on June 7, 2011, updated and slightly expanded based on suggestions from attendees.

Topics will include:

  • What is NERSC?
  • Hardware Resources
  • Storage Resources
  • How to Get Help
  • Accounts and Allocations
  • Connecting to NERSC
  • Using NX to accelerate X Windows Applications
  • Compiling Code
  • Running Jobs

For more information and to register, go here. There is no registration fee, but please register to help us plan the event.


Next Round of ESnet 100 Gbps Testbed Proposals Is Due October 1

ESnet is pleased to announce that the Advanced Networking Initiative (ANI) testbed will be upgraded to include a 2200-mile 100 Gbps segment by the end of this year. This will make it the world’s first long-distance 100 Gbps testbed, available to any network researcher, whether from government, university, or industry. But take note: The next round of proposals to use the ANI Testbed is due on October 1—just three weeks away—so submit your ideas soon to take advantage of this great resource.

“There has been a lot of exciting work on the testbed recently, including 40 Gbps RDMA testing, detailed power monitoring experiments, high-performance middleware testing, Openflow testing, and more,” says Brian Tierney, head of ESnet’s Advanced Network Technologies Group. “We also have some projects planning some interesting TCP research as soon as the 100 Gbps network is available.”

For more information on how to submit a proposal, go here.


Get Safety, Computer Glasses at Health Services

Free glasses for eye protection or assistance with computer viewing are available to all Lab employees through a Health Services optometrist, who is on site every Wednesday. Eyestrain and related headaches are more common among computer users than more widely known problems such as carpal tunnel syndrome. Eyeglasses tuned for monitor use are an effective way to address these issues. Eye exams are performed Wednesdays by appointment only. Bring in your eyeglass prescription. If an eye exam is needed, the cost is $40, a portion of which may be reimbursable under the Vision Service Plan (VSP). Call Health Services at x6266.


This Week’s Computing Sciences Seminars

Uncertainty Quantification Study Group: Open Discussion
Monday, Sept. 12, 10:00–11:00 am, 50B-4205
Dan Gunter and Lavanya Ramakrishnan, LBNL

This open discussion will include these topics (please feel free to bring up others):
a) What have we learned so far?
b) Do we have a grasp of open challenges in each of the sub-topics?
c) What are areas of interest and where can LBL contribute?
d) Where would the group like to go from here?

Attributing the Risk of Weather-Related Extremes to Anthropogenic Climate Change: The Pilot Case of the UK Autumn 2000 Floods, and Beyond
Wednesday, Sept. 14, 11:00 am–12:00 pm, 50F-1647
Pardeep Pall, Institute for Atmospheric and Climate Science, ETH Zurich, Switzerland

The occurrence of damaging weather-related events often prompts debate as to whether anthropogenic climate change is to blame. Yet apparently conflicting expert statements often appear quickly in the aftermath, along the lines that “one might expect more intense and frequent extreme events under climate change” whilst at the same time “one cannot attribute any individual event to climate change.” Furthermore, climate models used to study the attribution problem typically do not resolve the weather systems associated with damaging events — particularly hydrological ones.

Here we present a multi-step physically-based “probabilistic event attribution” framework for providing a more objective attribution assessment of weather-related risk, and at scales resolving the impact of interest. We apply it to the pilot case of the UK floods of October and November 2000 that occurred during the wettest autumn in England and Wales since records began in 1766. These floods damaged nearly 10,000 properties across that region, disrupted services severely, and caused insured losses estimated at £1.3 billion. Although the flooding was deemed a “wake-up call” to the impacts of climate change at the time, such claims are typically supported only by general thermodynamic arguments that suggest increased extreme precipitation under global warming but fail to account fully for the complex hydrometeorology associated with flooding.

Our probabilistic event attribution framework shows that it is very likely that global anthropogenic greenhouse gas emissions substantially increased the risk of flood occurrence in England and Wales in autumn 2000. Using publicly volunteered distributed computing, we generate several thousand seasonal-forecast-resolution climate model simulations of autumn 2000 weather, both under realistic conditions, and under conditions as they might have been had these greenhouse gas emissions and the resulting large-scale warming never occurred. Results are fed into a precipitation-runoff model that is used to simulate severe daily river runoff events in England and Wales (proxy indicators of flood events). The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases our model results indicate that twentieth-century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.
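The attribution statistic at the heart of this comparison is a simple ratio of exceedance probabilities estimated from the factual and counterfactual ensembles. As a toy sketch of that counting step, with entirely invented numbers (this is not the study's data or hydrological model):

```python
import random

# Toy illustration of the ensemble-counting step (all numbers invented).
# Each "simulation" yields one seasonal runoff value; the flood risk is
# the fraction of the ensemble exceeding a threshold.
random.seed(0)
threshold = 5.0
n = 2000
# Factual ensemble: greenhouse warming shifts runoff slightly upward.
actual = [random.gauss(4.0, 1.0) for _ in range(n)]
# Counterfactual ensemble: the same variability without that shift.
counterfactual = [random.gauss(3.7, 1.0) for _ in range(n)]
p1 = sum(r > threshold for r in actual) / n
p0 = sum(r > threshold for r in counterfactual) / n
print(f"flood risk: factual {p1:.3f}, counterfactual {p0:.3f}, "
      f"ratio {p1 / p0:.2f}")
```

A risk ratio above 1 means the factual ensemble floods more often; the study's "more than 20%" and "more than 90%" figures correspond to ratios exceeding 1.2 and 1.9.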

Follow-up studies exploiting this year 2000 database for other regions show interesting results — particularly for winter drought risk in the Northwest US and extreme precipitation risk in Southern Africa. However, the major challenge now is to assess trends in attributable risk for several past decades, and for the near future. Candidate databases include the new climateprediction.net weather@home experiment, and the recently sanctioned CLIVAR Climate of the 20th Century project.

The billions of dollars being pledged for climate adaptation activities over the coming years increases the urgency for such objective, timely, and robust attribution assessments — so as to distinguish legitimate cases for funding from cases of mere bad luck with the weather.

Matrix Computations and Scientific Computing Seminar: Jacobian SDP Relaxation for Polynomial Optimization
Wednesday, Sept. 14, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
Jiawang Nie, UC San Diego

Consider the global optimization problem of minimizing a polynomial function subject to polynomial equalities and/or inequalities. Jacobian SDP relaxation is the first method that can solve this problem globally and exactly by using semidefinite programming. Its basic idea is to use the minors of the Jacobian matrix of the given polynomials, add new redundant polynomial equations about these minors to the constraints, and then apply the hierarchy of Lasserre’s semidefinite programming relaxations. The main result is that this new semidefinite programming relaxation is exact for a sufficiently high (but finite) order; that is, the global minimum of the polynomial optimization problem can be computed by solving a semidefinite programming problem.
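Schematically (notation ours, not necessarily the talk's), the problem class is:

```latex
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{s.t.} \quad
h_1(x) = \cdots = h_m(x) = 0, \qquad
g_1(x) \ge 0, \;\ldots,\; g_\ell(x) \ge 0,
```

where $f$, $h_i$, $g_j$ are polynomials. The relaxation appends redundant equations asserting that appropriate minors of the Jacobian of $(f, h, g)$ vanish, then solves Lasserre SDP relaxations of increasing order $k$; the result quoted above is that some finite order already attains the exact global minimum.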

Net/Comm/DSP Seminar: Controlling Storage Quality of Service
Wednesday, Sept. 14, 2:00–3:00 pm, 521 Cory Hall (Hogan room), UC Berkeley
Arif Merchant, Google

Commercial environments often use large storage systems, either large disk arrays or a distributed store, shared between many users. Controlling the performance received by users of a shared storage service is difficult because workloads can have very different load characteristics, requirements, and priorities. In this talk, I will describe two approaches for providing users with differing service levels. Maestro is a system based on adaptive control that monitors the performance of applications and dynamically allocates storage resources to meet their requirements. mClock is an algorithm for IO resource allocation. It supports proportional-share fairness subject to a minimum reservation and a maximum limit on the IO rate per user, and can be extended to a distributed storage environment.
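For illustration only, here is a loose Python sketch of the tag-based idea behind mClock-style reservation/limit/share scheduling (the published algorithm differs in detail — real mClock tags requests at arrival, among other things — and the client parameters and backlogged simulation below are invented):

```python
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    reservation: float  # minimum IOPS guaranteed
    limit: float        # maximum IOPS allowed
    weight: float       # proportional share
    r_tag: float = 0.0
    l_tag: float = 0.0
    p_tag: float = 0.0

def tag(c: Client, now: float) -> None:
    """Advance a client's tags; tag spacing is the inverse of the rate enforced."""
    c.r_tag = max(c.r_tag + 1.0 / c.reservation, now)
    c.l_tag = max(c.l_tag + 1.0 / c.limit, now)
    c.p_tag = max(c.p_tag + 1.0 / c.weight, now)

def pick_next(clients, now):
    # Constraint phase: serve anyone falling behind its reservation.
    eligible = [c for c in clients if c.r_tag <= now]
    if eligible:
        return min(eligible, key=lambda c: c.r_tag)
    # Weight phase: among clients under their limit, smallest share tag wins.
    under_limit = [c for c in clients if c.l_tag <= now]
    if under_limit:
        return min(under_limit, key=lambda c: c.p_tag)
    return None

# Backlogged simulation: a 400-IOPS device shared by two always-busy clients.
a = Client("A", reservation=100, limit=500, weight=1)
b = Client("B", reservation=100, limit=500, weight=3)
counts = {"A": 0, "B": 0}
now = 0.0
for _ in range(400):          # one simulated second of IOs
    c = pick_next([a, b], now)
    if c:
        tag(c, now)
        counts[c.name] += 1
    now += 1.0 / 400
print(counts)
```

In the simulation both clients receive at least their 100-IOPS reservation, and the surplus capacity flows mostly to the higher-weight client — the proportional-share-subject-to-reservation-and-limit behavior described above.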

Par Lab Seminar: A Toolbox for High-Performance Graph Computation
Thursday, Sept. 15, 11:00 am–12:30 pm, Soda Hall, Wozniak Lounge, UC Berkeley
John Gilbert, UC Santa Barbara; Chandra Krintz, UC Santa Barbara

The Knowledge Discovery Toolbox is a flexible open-source toolkit for implementing complex graph algorithms and executing them on high-performance parallel computers. KDT is aimed at two classes of users: Domain-expert analysts who are not graph experts and who use KDT primarily by invoking existing KDT routines from Python; and graph-algorithm developers who use KDT primarily by writing Python code that invokes and composes KDT’s computational primitives. These computational primitives are supplied by a parallel backend, the Combinatorial BLAS, which is written in C++ with MPI for high performance and portability. KDT is designed to eventually allow other backends to be used as well.

I will outline the architecture and capabilities of KDT, and describe several applications. I will say a little about the algorithmic foundations of Combinatorial BLAS, which views graph computations as sparse matrix computations on various algebraic semirings. In current work, we are enabling the KDT user to define operations called “filters” in either C++ or Python, which act to modify KDT’s action based on attributes that label individual edges or vertices; I will highlight some of the performance challenges this poses, and our plans to address them.
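The sparse-matrix view can be made concrete with breadth-first search, the canonical example: expanding a frontier by one hop is a matrix-vector product over a boolean-style semiring. A small dense-NumPy sketch of the idiom follows (CombBLAS does this with distributed sparse matrices; the function below is illustrative, not KDT's API):

```python
import numpy as np

def bfs_levels(adj: np.ndarray, source: int) -> np.ndarray:
    """BFS via repeated matrix-vector products.

    Over the boolean semiring (or as plus, and as times), the product
    A^T x expands a frontier x by one hop -- the core KDT/CombBLAS idiom.
    Here we use 0/1 integers and threshold, which is equivalent.
    """
    n = adj.shape[0]
    levels = np.full(n, -1)          # -1 marks unvisited vertices
    frontier = np.zeros(n, dtype=int)
    frontier[source] = 1
    level = 0
    while frontier.any():
        levels[frontier > 0] = level
        # Expand one hop, then mask out already-visited vertices.
        frontier = ((adj.T @ frontier) > 0).astype(int)
        frontier[levels >= 0] = 0
        level += 1
    return levels

# Directed path graph 0 -> 1 -> 2 -> 3 (adj[i, j] = 1 for edge i -> j)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(bfs_levels(A, 0))  # [0 1 2 3]
```

Swapping the semiring (e.g., min for or, plus for and) turns the same product into a shortest-path step, which is why the algebraic framing is so flexible.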

TRUST Security Seminar: A Collaborative Cyber Security Experiment Design and Analysis Framework
Thursday, Sept. 15, 1:00–2:00 pm, Soda Hall, Wozniak Lounge, UC Berkeley
Alefiya Hussain, University of Southern California

In the area of cyber security research, the ability to rapidly and systematically explore a threat scenario is a key enabler. Systematically exploring a threat scenario typically requires several iterations of integrating and parameterizing the components, instantiating the components, and comparing the results. This process currently is ad hoc and hard to repeat.

In this talk, I will discuss recent tools and methodologies developed at DETERLab, a cyber security experimentation facility, that provide a collaborative environment where experimenters can iteratively build on their own work and the work of others to design, execute, and analyze their cyber security experiments.


Link of the Week: Exascale: Power Is Not the Problem!

In an opinion piece in HPCwire, Andrew Jones, Vice President of Numerical Algorithms Group, argues that power consumption is not the biggest obstacle to exascale computing. That problem can be solved by more money—and the strong scientific case that can be made to justify the expenditure. After all, he says, “There are several large scientific facilities that have comparable power requirements, often with much narrower missions—remember that supercomputing can advance almost all scientific disciplines.” In a sidebar, he notes that “the most powerful supercomputers are not just computers—they are major scientific/research instruments that are built using computer technology. There is a difference.”

But his main point is:

[A]pplications are still years away from having algorithms and software implementations that can exploit that scale of computing efficiently….

[T]he top roadblock for achieving the hugely beneficial potential output from exascale computing is software. There are many challenges to do with the software ecosystem that will take years, lots of skilled workers, and sustained/predictable investment to solve.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.