
InTheLoop | 05.17.2010


Victor Markowitz Appointed JGI CIO and Associate Director

An announcement from DOE Joint Genome Institute (JGI) Director Eddy Rubin:

I am pleased to announce that Victor Markowitz has been appointed JGI Chief Informatics Officer (CIO) and Associate Director. Victor’s primary role will be to provide vision and strategic leadership to establish and maintain leading-edge information technology and informatics capabilities for the JGI and DOE partners. Victor will serve as a member of the JGI senior management team and will play a key role in advising me on the appropriate direction and formulation of the JGI’s information technology infrastructure.

In addition to his new responsibilities, Victor will continue to head the Computational Research Division’s Biological Data Management and Technology Center (BDMTC), which has partnered with JGI scientists to develop microbial genome and metagenome data management and analysis systems. Prior to setting up BDMTC in 2004, Victor served as Chief Information Officer and Senior Vice President, Data Management Systems, at Gene Logic Inc. Before joining Gene Logic, Victor spent 10 years at Berkeley Lab, where he led the development of the Object Protocol Model (OPM) data management and integration tools, which were used to develop public and commercial genome databases. Victor received his M.Sc. and D.Sc. degrees in computer science from Technion, the Israel Institute of Technology.

Please join me in formally welcoming Victor to this new position at the JGI.


Global Cloud Resolving Model May Reduce Climate Uncertainty

Cloud parameterizations are the greatest source of uncertainty in today’s climate models. Researchers using supercomputers at NERSC are working to clear up that uncertainty by developing and testing a new kind of global climate model, called a global cloud resolving model (GCRM)—a model that’s designed to take advantage of the extreme-scale computers expected in the near future. Read more.


NERSC Launches New Analytics Server, “Euclid”

Last week all active NERSC users were given access to Euclid, the new analytics and visualization server that will replace DaVinci, which will be decommissioned on June 16. Euclid, named in honor of the ancient Greek mathematician, is a Sun Microsystems Sun Fire X4640 SMP. Its single node contains eight 6-core Opteron 2.6 GHz processors, with all 48 cores sharing the same 512 GB of memory. The system’s theoretical peak performance is 499.2 Gflop/s.
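
For readers curious where the 499.2 Gflop/s figure comes from, here is a minimal back-of-the-envelope sketch in Python; it assumes four double-precision floating-point operations per core per clock cycle, the usual figure for Opteron processors of this generation.

    # Theoretical peak performance of Euclid (assumes 4 flops per core per cycle).
    cores = 8 * 6            # eight 6-core Opteron processors
    clock_ghz = 2.6          # clock rate in GHz
    flops_per_cycle = 4      # double-precision flops per core per cycle (assumed)

    peak_gflops = cores * clock_ghz * flops_per_cycle
    print(peak_gflops)       # 499.2 Gflop/s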


This Week’s Computing Sciences Seminars

GPU Computing and Irregular Parallelism
Tuesday, May 18, 12:00–1:00 pm, 50F-1647
John Owens, University of California, Davis

The computational power of GPUs, coupled with increasing programmability, is making the GPU an increasingly compelling platform for high-performance computing. In this talk I’ll give a brief overview of GPU computing and talk about the current research challenges that our group is tackling, with particular attention to supporting irregular parallelism on the GPU. While GPUs are particularly good at regular, structured codes, it’s still a large challenge to efficiently support more irregular codes. I’ll talk about our recent work in task queuing, hash tables, and fragment compositing, and conclude my talk with a discussion of the research problems I’d like to address going forward.
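
To give a flavor of what “irregular parallelism” means here, consider a workload in which each task can spawn an unpredictable number of follow-up tasks, so the total amount of work is not known up front. The sketch below is a minimal CPU-side illustration of the dynamic work-queue idea in Python; it is not the speaker’s GPU implementation, and the toy process() function is purely hypothetical.

    import queue
    import threading

    def process(item):
        # Toy "irregular" task: each item spawns 0, 1, or 2 follow-up items.
        return [item - 1] * (item % 3)

    work = queue.Queue()

    def worker():
        while True:
            item = work.get()            # block until a task is available
            for child in process(item):
                if child > 0:
                    work.put(child)      # dynamically discovered work goes back on the queue
            work.task_done()

    # A small pool of daemon workers; they exit when the main thread finishes.
    for _ in range(4):
        threading.Thread(target=worker, daemon=True).start()

    for seed in range(16):               # the statically known initial work
        work.put(seed)

    work.join()                          # wait until all work, old and new, is done
    print("all tasks processed")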

BioVLAB: A Reconfigurable Cloud Computing Environment for Bioinformatics
Wednesday, May 19, 9:00–10:00 am, 943-236 & 50B-2222
Youngik Yang, Indiana University

Recent advances in high-throughput experiments generate huge amounts of data, and there is a growing need for a computational environment to analyze such high volumes of data. However, analyzing these data requires the involvement of experts in bioinformatics and computer science, as well as a good computing infrastructure. Resolving these two issues (computing infrastructure and bioinformatics expertise) will broaden the scientific research community that utilizes genome-wide data and will allow small research labs to participate.

We have developed a computational infrastructure called BioVLAB for molecular biology data, utilizing Amazon cloud computing and a graphical workflow composer, XBaya. Use of cloud computing addresses the need for computing infrastructure in small research labs, and use of the graphical workflow composer addresses the need for bioinformatics expertise. Thus, end users can perform bioinformatics computational analysis with BioVLAB in three steps: downloading a pre-composed workflow, creating an account on the Amazon computing cloud, and then running the workflow.
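
The three-step usage pattern might look roughly like the sketch below. All of the function names and file names here are hypothetical placeholders for illustration, not the actual BioVLAB or XBaya API; the point is only the division of labor between a pre-composed workflow, the user’s cloud account, and workflow execution.

    # Hypothetical sketch of the three-step usage pattern; these names are
    # placeholders, not the actual BioVLAB/XBaya interfaces.

    def download_workflow(url, path):
        """Step 1: fetch a pre-composed workflow description (stub)."""
        with open(path, "w") as f:
            f.write(f"# workflow fetched from {url}\n")
        return path

    def configure_cloud_account(access_key, secret_key):
        """Step 2: register the user's Amazon cloud credentials (stub)."""
        return {"access_key": access_key, "secret_key": secret_key}

    def run_workflow(workflow_path, credentials, inputs):
        """Step 3: run the workflow on cloud resources (stub)."""
        print(f"Submitting {workflow_path} with inputs {inputs}")
        return {"status": "submitted"}

    workflow = download_workflow("https://example.org/precomposed.xwf", "analysis.xwf")
    account = configure_cloud_account("ACCESS_KEY", "SECRET_KEY")
    print(run_workflow(workflow, account, inputs=["sample_reads.fastq"]))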

Separating Functional and Parallel Correctness of Parallel Programs
Thursday, May 20, 11:00 am–12:00 pm, 50B-2222
Koushik Sen, University of California, Berkeley

Parallel multi-threaded programs are more difficult to write than their sequential counterparts because, in addition to the algorithmic correctness of the program, programmers must consider all possible behaviors arising from thread interleavings. We share a widespread belief that the only way to make multi-threaded programming accessible to a large number of programmers is to come up with programming paradigms and associated tools that explicitly separate reasoning about functional correctness from reasoning about the additional behaviors arising from parallelism.

In this talk, I will describe two strategies for separating the parallelization correctness aspect of a program from its functional correctness. First, for many parallel programs it is desirable that the output be deterministic: the non-determinism introduced by the thread scheduler should not change the intended output of the program. I will describe an assertion framework for specifying that regions of a parallel program behave deterministically despite nondeterministic thread interleaving.
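
As a rough illustration of the idea (a sketch, not the assertion framework described in the talk), a “deterministic region” can be checked by running the same parallel computation many times, letting the scheduler interleave the threads differently on each run, and asserting that the observable result never changes:

    import threading

    def parallel_sum(values, num_threads=4):
        # Each thread sums a disjoint slice; thread interleaving varies run to run.
        partial = [0] * num_threads
        def work(tid):
            partial[tid] = sum(values[tid::num_threads])
        threads = [threading.Thread(target=work, args=(t,)) for t in range(num_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return sum(partial)

    # Deterministic-region check: the scheduler is free to interleave the workers,
    # but the result of the region should be the same on every run.
    values = list(range(10000))
    results = {parallel_sum(values) for _ in range(20)}
    assert len(results) == 1, f"non-deterministic results: {results}"
    print("deterministic:", results.pop())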

The second strategy is based on the observation that a natural step in the development of a parallel program is to first extend the sequential algorithm with a controlled amount of non-determinism, followed by the actual parallelization, when additional non-determinism is introduced by thread interleavings. I will describe the use of non-deterministic sequential programs as a specification mechanism, such that for each execution of a parallel program there exists an equivalent execution of the corresponding non-deterministic sequential program. I will argue that such non-deterministic sequential programs decouple parallelization correctness from functional correctness.


Link of the Week: PowerPoint Combatants Go at It Again

It has been said that there are two kinds of information graphics: those that explain things and those that need to be explained. That’s the gist of a long-running battle over the merits of PowerPoint presentations, and it’s turning into something of a quagmire at the Defense Department, according to an article in Federal Computer Week.

The latest skirmish broke out when Elisabeth Bumiller of the New York Times reported on the decidedly anti-PowerPoint sentiments of Army Gen. Stanley McChrystal, who leads U.S. and NATO forces in Afghanistan. “It’s dangerous because it can create the illusion of understanding and the illusion of control,” McChrystal said. “Some problems in the world are not bullet-izable.”

To prove his point, McChrystal has been displaying an indecipherable slide of the United States’ military strategy in Afghanistan. “When we understand that slide, we’ll have won the war,” he said.

McChrystal’s diagram “has an undeniable beauty,” writes Julian Borger at the Guardian’s Global Security blog. “Done the right way (embroidered perhaps), it would make a lovely wall-hanging and an ideal gift for the foreign policy-maker in your life.”

The MSNBC website has posted a PDF of the entire presentation, “Dynamic Planning for COIN in Afghanistan.” The fact that no one has been indicted for treason for making our strategy available to our enemies is perhaps the highest testimony to the presentation’s uselessness. Nevertheless, Bumiller reports, “Last year when a military Web site, Company Command, asked an Army platoon leader in Iraq, Lt. Sam Nuxoll, how he spent most of his time, he responded, ‘Making PowerPoint slides.’ When pressed, he said he was serious.”



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.