
InTheLoop | 07.18.2011


Berkeley Lab Lays Foundation for 100 Gbps Prototype Network

On July 13, Berkeley Lab announced a major step toward creating one of the world’s fastest scientific networks to accelerate research in fields ranging from advanced energy solutions to particle physics. Known as the Advanced Networking Initiative (ANI), the effort represents a $62 million multi-year investment by the DOE Office of Science in next-generation networking technology.

“As science becomes increasingly data-driven and global in scale, it’s critical that we create an infrastructure that will enable our scientists to collaborate and compete successfully in the search for solutions to some of the world’s biggest challenges in energy,” said DOE Office of Science Director William F. Brinkman. “The Advanced Networking Initiative is the kind of investment that will help secure and maintain America’s scientific pre-eminence and improve the quality of life for all of us.”

Read the news release and the Network Matters blog.


OSCARS Goes with the Flow—OpenFlow and NEC ProgrammableFlow, That Is

ESnet’s Inder Monga and Samrat Ganguly of NEC Corporation made a splash at the Summer 2011 ESCC/Internet2 Joint Techs Conference in Fairbanks, Alaska, by demonstrating some brand new ways that laboratories, universities, and industry can integrate end-to-end network virtualization across both the local area network (LAN) and wide area network (WAN).

Monga and Ganguly combined OpenFlow and the On-Demand Secure Circuits and Advance Reservation System (OSCARS) as centralized controller technologies with NEC ProgrammableFlow switches to demonstrate automated, secure service provisioning that will let scientists share data and collaborate more easily while reducing the burden on their campus IT infrastructure staff.
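
To give a flavor of the approach (an illustrative sketch only, not the actual OSCARS or OpenFlow APIs): a centralized controller takes an end-to-end reservation and translates it into match/action flow rules that are pushed to each switch along the path. The class and function names below are hypothetical.

    # Illustrative sketch only: a toy "centralized controller" that turns a
    # reservation into per-switch match/action rules, in the spirit of pairing
    # OSCARS-style circuit reservations with OpenFlow-style flow rules.
    # FlowRule, Reservation, and install_path are hypothetical names, not the
    # actual OSCARS or OpenFlow APIs.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class FlowRule:
        match_src: str      # source subnet to match
        match_dst: str      # destination subnet to match
        out_port: int       # port to forward matching packets to
        priority: int = 100

    @dataclass
    class Reservation:
        src: str
        dst: str
        path: List[Tuple[str, int]]   # ordered (switch_id, out_port) pairs

    def install_path(reservation: Reservation) -> dict:
        """Translate one end-to-end reservation into per-switch flow rules."""
        rules = {}
        for switch_id, out_port in reservation.path:
            rules[switch_id] = FlowRule(
                match_src=reservation.src,
                match_dst=reservation.dst,
                out_port=out_port,
            )
        return rules

    if __name__ == "__main__":
        # A campus-to-WAN path crossing two LAN switches and one WAN router.
        res = Reservation(
            src="10.0.1.0/24", dst="10.0.2.0/24",
            path=[("campus-sw1", 3), ("campus-sw2", 1), ("wan-rtr1", 7)],
        )
        for switch, rule in install_path(res).items():
            print(switch, rule)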

Read more or check out the video of their demo “OpenFlow with OSCARS: Bridging the gap between campus, data centers and the WAN.”


International AstroComputing Summer School Begins Today

The 2011 University of California High-Performance Astro-Computing Center (UC-HIPACC) International AstroComputing Summer School on Computational Explosive Astrophysics begins today, July 18, and runs through July 29, with morning sessions at Berkeley Lab (Bldg. 66) and afternoon sessions at Sutardja Dai Hall on the UCB campus. The organizers are Daniel Kasen of UC Berkeley and Berkeley Lab’s Nuclear Science Division, and Peter Nugent of CRD, NERSC, and the Lab’s Computational Cosmology Center.

This year’s summer school will focus on computational explosive astrophysics, including the modeling of core collapse and thermonuclear supernovae, gamma-ray bursts, neutron star mergers, and other energetic transients. Lectures will include instruction in the physics and numerical modeling of multi-dimensional hydrodynamics, general relativity, radiation transport, nuclear reaction networks, neutrino physics, and equations of state.

Afternoon workshops will guide students in running and visualizing simulations on supercomputers using codes such as FLASH, CASTRO, GR1D, and modules for nuclear burning and radiation transport. All students will be given accounts and computing time at NERSC and have access to the codes and test problems in order to gain hands-on experience running simulations at a leading supercomputing facility.
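
For a taste of the numerical methods on the syllabus, here is a minimal, purely illustrative Monte Carlo radiation transport toy in Python (it is not taken from FLASH, CASTRO, GR1D, or SynApps): photons random-walk out of a uniform scattering sphere, and the escape fraction is estimated by counting the survivors.

    # Toy Monte Carlo radiation transport (illustrative only): photons
    # random-walk out of a uniform sphere of optical radius TAU with isotropic
    # scattering and single-scattering albedo ALBEDO; we estimate the fraction
    # that escape without being absorbed.

    import math
    import random

    TAU = 5.0        # optical radius of the sphere (in mean free paths)
    ALBEDO = 0.9     # probability a photon scatters (vs. is absorbed) per event
    N_PHOTONS = 20000

    def run_photon(rng: random.Random) -> bool:
        """Return True if the photon escapes the sphere, False if absorbed."""
        x = y = z = 0.0                      # start at the center
        while True:
            # Isotropic direction.
            mu = 2.0 * rng.random() - 1.0    # cos(theta), uniform in [-1, 1]
            phi = 2.0 * math.pi * rng.random()
            s = math.sqrt(1.0 - mu * mu)
            # Path length to the next interaction, in mean free paths.
            step = -math.log(1.0 - rng.random())
            x += step * s * math.cos(phi)
            y += step * s * math.sin(phi)
            z += step * mu
            if x * x + y * y + z * z > TAU * TAU:
                return True                  # crossed the surface: escaped
            if rng.random() > ALBEDO:
                return False                 # absorbed at this interaction

    if __name__ == "__main__":
        rng = random.Random(12345)
        escaped = sum(run_photon(rng) for _ in range(N_PHOTONS))
        print("escape fraction: %.3f" % (escaped / N_PHOTONS))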

Berkeley Lab instructors include:

  • Katie Antypas: Introduction to High Performance Computing/NERSC
  • Hank Childs: Introduction to HPC/Visualization; Using NERSC Workshop
  • Ann Almgren: AMR Hydrodynamics; Self Gravity/AMR; CASTRO/Scaling/Core Collapse Workshop; Parallelization/Advanced Topics
  • Dan Kasen: Radiation Transport/SN Light Curves; Monte Carlo/SynApps Workshop
  • Rollin Thomas: Radiation Transport/SN Spectra and SynApps; Monte Carlo/SynApps Workshop

Elizabeth Bautista Contributes to UC Talent Management and Succession Planning Report

Elizabeth Bautista of NERSC’s Computational Systems Group is Berkeley Lab’s senior delegate to the Council for the University of California Staff Assemblies (CUCSA) and has been serving on CUCSA’s Talent Management and Succession Planning Workgroup, which recently released its final report, Talent Management and Succession Planning Within the University of California: Working Together to Make UC a Career Destination.

The report points out that the University has experienced numerous budget cuts and the departure of talent, and that it faces a predicted severe labor shortage by 2017 due to the exodus of retirement-age staff, continued below-market compensation, and other factors. Against that backdrop, talent management and succession planning take on a critical level of urgency.

The report identifies key barriers to progress and provides concrete suggestions that could make UC a “career destination.” The Workgroup suggests that hiring and salary policies, as well as payroll and job classifications, be applied more consistently across the UC system; that career pathways be developed systemwide; that the performance review process be used to give supervisors an incentive to engage in talent management and succession planning; and that certain services be shared systemwide.

The CUCSA chair, Ravinder Singh (UCOP), and chair-elect, Steve Garber (UCB), presented the report to the UC Regents on July 14.


TechWomen Visit Lab; Taghrid Samak Serves As Cultural Mentor

The TechWomen program is a Department of State initiative that brings women technical leaders from the Middle East and North Africa to the U.S. for a month of mentoring and exchange of ideas with Silicon Valley companies. Participants from Algeria, Egypt, Jordan, Lebanon, Morocco, and Palestine were in the Bay Area for the month of June and visited Berkeley Lab on June 24. Taghrid Samak, a postdoctoral fellow in CRD’s Advanced Computing for Science Department and a native of Alexandria, Egypt, volunteered as a cultural mentor for three of the participants during their stay in the U.S. and organized the group’s visit to the Lab.

For their visit, Jonathan Carter gave an overview of the Lab and Computing Sciences; Amy Chen, Sherry Li, and Elizabeth Bautista talked about their work at the Lab; and Deb Agarwal gave a talk on leadership. The women had lunch with Computing Sciences staff and also went on a tour of the ALS.

Samak was invited to join the TechWomen group in Washington, D.C., for the Fourth of July weekend, where they attended meetings at the Department of State. Secretary of State Hillary Rodham Clinton spoke at the closing luncheon on July 6, announcing that next year TechWomen would be complemented by TechGirls, a program that will bring teenage girls to the U.S. for a month of educational activities.


DOE CSGF Annual Conference Being Held This Week

The 2011 DOE Computational Science Graduate Fellowship (CSGF) Annual Conference is being held this week (July 21–23) in Arlington, Virginia. Katie Antypas, leader of NERSC’s User Services Group, will present “Getting Started at NERSC and Using the CSGF Allocation: An introduction and hands-on session.” Berkeley Lab will showcase its computational science research and employment opportunities at a DOE Laboratory Poster Session.

Six CSGF Fellows who are working or have worked at Berkeley Lab will present posters on their latest research:

  • Leslie Dewan (MIT): Quantifying radiation damage to the network structure of radioactive waste-bearing materials
  • Christopher Eldred (University of Utah): WRF model configuration for seasonal climate forecasting over the Bolivian altiplano
  • Thomas Fai (New York University): An FFT-based immersed boundary method for variable viscosity and density fluids
  • Eric Liu (MIT): The importance of mesh adaptation for higher-order solutions of PDEs
  • Douglas Mason (Harvard): Husimi projection in graphene eigenstates
  • Britton Olson (Stanford): Shock-turbulence interactions in a rocket nozzle

Registration Is Open for Par Lab Boot Camp

The 2011 Par Lab Boot Camp—Short Course on Parallel Programming is intended to offer programmers a practical introduction to parallel programming techniques and tools on current parallel computers, emphasizing multicore and manycore systems. It will be held August 15–17 in the Banatao Auditorium, Sutardja Dai Hall, on the UC Berkeley campus. Applicants can choose to attend on campus or participate online as virtual students.

Anyone is welcome to attend. This course is free for industrial affiliates of the Parallel Computing Laboratory, affiliates of the Lawrence Berkeley National Lab, and students, faculty and staff of UC Berkeley, UC Davis, UC Merced, and UC Santa Cruz. Details and registration are available here.
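
As a small taste of the subject matter (an illustrative sketch only, not course material): splitting a midpoint-rule integration of 4/(1+x²) across several worker processes to estimate pi on a multicore machine.

    # A minimal taste of multicore programming (illustrative only; the boot
    # camp covers many languages and tools): estimate pi by splitting a
    # midpoint-rule integration of 4/(1+x^2) on [0, 1] across worker processes.

    from multiprocessing import Pool

    N_STEPS = 4_000_000
    N_WORKERS = 4

    def partial_sum(task):
        """Sum the integrand over one contiguous chunk of the interval."""
        start, stop = task
        h = 1.0 / N_STEPS
        total = 0.0
        for i in range(start, stop):
            x = (i + 0.5) * h
            total += 4.0 / (1.0 + x * x)
        return total * h

    if __name__ == "__main__":
        chunk = N_STEPS // N_WORKERS
        tasks = [(i * chunk, (i + 1) * chunk if i < N_WORKERS - 1 else N_STEPS)
                 for i in range(N_WORKERS)]
        with Pool(N_WORKERS) as pool:
            pi = sum(pool.map(partial_sum, tasks))
        print("pi ~= %.10f" % pi)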


CCGrid 2012 Issues Call for Papers

The 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012), which will be held on May 13–16, 2012 in Ottawa, Canada, has issued a call for papers and a call for workshops. Topics of interest include, but are not limited to:

  • Paradigms and Technologies: System architectures, design and deployment; programming models, language, systems and tools/environments; virtualization; middleware technologies; volunteer computing.
  • Service Orientation: Service oriented architectures; utility computing models; *aaS paradigm; service composition and orchestration.
  • Greening: Environment friendly computing ecosystems; hardware/software/application energy efficiency; power and cooling; thermal/power awareness.
  • Autonomic Management: Self-* behaviors, models and technologies; autonomic paradigms and approaches (control-based, bio-inspired, emergent, etc.); bio-inspired approaches to management; SLA definition and enforcement.
  • Economic Aspects: Utility models and computing economies; economic-based models and approaches.
  • Monitoring and Evaluation: Performance models; monitoring and evaluation tools; analysis of system/application performance; benchmarks and testbeds.
  • Security and Trust: Cloud/grid security and trust; access control; data privacy and integrity; regulation.
  • Applications and Experiences: Applications to real and complex problems in science, engineering, business and society; user studies; experiences with large-scale deployments of systems or applications.

Workshop proposals are due October 10, 2011; papers are due November 25, 2011.


This Week’s Computing Sciences Seminars

High Performance Computing in the Geosciences
July 18–21, 9:00 am–5:00 pm, Banatao Auditorium, Sutardja Dai Hall, UC Berkeley

This workshop will focus on implementations of geophysical applications, along with applications in reservoir engineering, petroleum engineering, and geology presented by practitioners at the cutting edge of the field. Topics will include recent advances in HPC; the challenges posed by the end of Moore’s Law and their impact on the oil and gas industry; and tools such as parallel visual analytics routines that can be coupled with data analysis and simulations run on new HPC architectures, as well as so-called integrated data analysis, visualization, simulation, and computing environments that provide an end-to-end solution for analyzing and visualizing scientific data and simulation results.

QMDS: A File System Metadata Management Service Supporting a Graph Data Model-Based Query Language
Monday, July 18, 11:30 am–12:30 pm, 50B-2222 & OSF 943-238 (video conference)
Sasha Ames, Lawrence Livermore National Laboratory & University of California, Santa Cruz

File system metadata management has become a bottleneck for many data-intensive applications that rely on high-performance file systems. Part of the bottleneck is due to the limitations of an almost 50-year-old interface standard whose metadata abstractions were designed at a time when high-end file systems managed less than 100 MB. Today’s high-performance file systems store 7 to 9 orders of magnitude more data, and at those scales the abstractions are inadequate; directory hierarchies, for example, cannot capture complex relationships among data. Users of file systems have attempted to work around these inadequacies by moving application-specific metadata management to relational databases to make metadata searchable. Splitting file system metadata management into two separate systems introduces inefficiencies and systems management problems.

To address this problem, we propose QMDS: a file system metadata management service that integrates all file system metadata and uses a graph data model with attributes on nodes and edges. Our service uses a query language interface for file identification and attribute retrieval. We present the design and architecture of our metadata management service and study its performance using a text analysis benchmark application. Results from our QMDS prototype show the effectiveness of this approach: compared to a file system paired with a PostgreSQL relational database, the prototype shows superior performance for both ingest and query workloads.
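
The abstract does not spell out QMDS’s query syntax, but the flavor of a graph data model with attributes on both nodes and edges can be sketched in a few lines of Python; the class and method names here are hypothetical and are not the QMDS interface.

    # Illustrative sketch of a graph data model for file metadata, with
    # attributes on both nodes (files) and edges (relationships). The class
    # and method names are hypothetical; this is not the QMDS API.

    class MetadataGraph:
        def __init__(self):
            self.nodes = {}                  # node_id -> attribute dict
            self.edges = []                  # (src, dst, attribute dict)

        def add_file(self, node_id, **attrs):
            self.nodes[node_id] = attrs

        def add_link(self, src, dst, **attrs):
            self.edges.append((src, dst, attrs))

        def query(self, node_filter=None, edge_filter=None):
            """Yield (src, dst) pairs whose edge attributes pass edge_filter
            and whose endpoint attributes both pass node_filter."""
            for src, dst, eattrs in self.edges:
                if edge_filter and not edge_filter(eattrs):
                    continue
                if node_filter and not (node_filter(self.nodes[src]) and
                                        node_filter(self.nodes[dst])):
                    continue
                yield src, dst

    if __name__ == "__main__":
        g = MetadataGraph()
        g.add_file("paper.tex", type="latex", project="qmds")
        g.add_file("fig1.png", type="image", project="qmds")
        g.add_link("paper.tex", "fig1.png", relation="includes")
        # Find files linked by an "includes" relationship within one project.
        matches = g.query(node_filter=lambda a: a.get("project") == "qmds",
                          edge_filter=lambda a: a.get("relation") == "includes")
        for src, dst in matches:
            print(src, "->", dst)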

Overview of Uncertainty Quantification Methods and Methodologies for Multi-Physics Applications
Tuesday, July 19, 10:00–11:00 am, 50A-5132
Charles Tong, Lawrence Livermore National Laboratory

In this talk I will discuss some of our experiences with developing and using uncertainty quantification (UQ) methods and methodologies for multi-physics simulation models. The presentation will cover UQ approaches and general methodologies, a survey of methods for dealing with high-dimensional uncertain parameter spaces, the construction of surrogate models, uncertainty and sensitivity propagation, and data fusion.
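
As a generic illustration of one of the ingredients mentioned above, uncertainty propagation (a textbook-style sketch, not code from the talk): sample the uncertain inputs of a toy model, push each sample through the model, and summarize the spread of the output.

    # Generic Monte Carlo uncertainty propagation sketch (illustrative only):
    # sample two uncertain inputs of a toy model, propagate each sample
    # through the model, and summarize the output distribution.

    import random
    import statistics

    def model(k, q):
        """Toy model: steady-state temperature rise for heat input q through
        a material of conductivity k (purely illustrative)."""
        return q / k

    def propagate(n_samples=100_000, seed=42):
        rng = random.Random(seed)
        outputs = []
        for _ in range(n_samples):
            k = rng.gauss(2.0, 0.2)      # conductivity: mean 2.0, std 0.2
            q = rng.gauss(10.0, 1.0)     # heat input: mean 10.0, std 1.0
            outputs.append(model(k, q))
        return outputs

    if __name__ == "__main__":
        out = propagate()
        print("output mean %.3f, std %.3f"
              % (statistics.mean(out), statistics.stdev(out)))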


Link of the Week: Can Computers Predict Crimes of the Future?

It’s great when cops catch criminals after they’ve done their dirty work. But what if police could stop a crime before it was even committed? Though that may sound like a fantasy straight from a Philip K. Dick novel, it’s a goal police departments from Los Angeles to Memphis are actively pursuing with help from the Department of Justice and a handful of cutting-edge academics.

It’s called “predictive policing.” The idea: Although no one can foresee individual crimes, it is possible to forecast patterns of where and when homes are likely to be burgled or cars stolen by analyzing truckloads of past crime reports and other data with sophisticated computer algorithms. Read more.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.