InTheLoop | 07.21.2014
CRD Researchers Named Data Science Fellows
Five researchers in Berkeley Lab’s Computational Research Division have been named Berkeley Institute for Data Science (BIDS) fellows.
As a Data Science Fellow, Daniela Ushizima will receive funding to explore data, information, science and computing (DISC) issues, specifically those that involve connecting diverse domain sciences to analytics tools. As Data Science Senior Fellows, Deb Agarwal, Wes Bethel, Peter Nugent and James Sethian will contribute their extensive expertise in data science approaches—honed over decades of research and collaboration—to advise on BIDS initiatives.
Founded in fall 2013, BIDS is part of a five-year collaborative effort between UC Berkeley, the University of Washington and New York University to dramatically accelerate the growth of data-intensive discovery in a broad range of fields. The Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation support this work with a $37.7 million award.
»Learn more about BIDS.
»See a complete list of fellows.
SC14 Poster Submissions due July 31, 2014
SC14, the premier international conference on high performance computing, networking, storage and analysis, will be held Nov. 16-21, 2014, in New Orleans. The conference is accepting poster submissions that display original, unpublished, cutting-edge research and work in progress in high performance computing, storage, networking, algorithms, and applications.
The deadline for poster submissions is Thursday, July 31. »Learn more.
DOE Posts Guidelines for Acknowledging Office of Science Support in Papers
DOE’s Office of Science (SC) has recently published a page of acknowledgements to be used by DOE-supported researchers when they write peer-reviewed articles and technical papers. The guidelines also cover acknowledgement of SC User Facilities and apply to visual presentations, news releases, public-facing web articles and social media. It is important that researchers properly acknowledge SC support. Accordingly, program managers funding the work may follow up on published acknowledgements to assure guideline adherence. »Learn more.
This Week's CS Seminars
Improving large-eddy simulation of atmospheric boundary layer flow on adaptive mesh refinement grids using the turbulence closure
Thursday, July 24, 10am-11:30am, Bldg. 50F, Room 1647
Lauren Goodfriend, Ph.D. Candidate, Department of Civil and Environmental Engineering, University of California, Berkeley
Many realistic flows, such as the atmospheric boundary layer, are too expensive to simulate directly. Large-eddy simulation (LES) and adaptive mesh refinement (AMR) reduce the computational cost of turbulence modeling by restricting resolved length scales, but combining these techniques generates additional errors. The grid refinement interfaces in AMR grids can create interpolation errors and reflect resolved energy. This talk will explore using the turbulence closure to mitigate grid interface errors in LES. Specifically, explicit filtering of the advection term and the mixed model are compared to implicit filtering and the eddy viscosity model. I will present a half-channel case study in which the domain is split into two structured grids, one fine and one coarse. This simple test case allows observation of the effects of the grid interfaces. It is found that explicitly filtering the advection term allows both mass and momentum to be conserved across grid refinement interfaces by reducing interpolation errors. The mixed model decreases unphysical energy accumulation generated by wave reflection. These results inform the use of LES on block-structured non-uniform grids, including the nested grids used in local atmospheric models or on more complex AMR grids.
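The explicit-filtering idea mentioned above can be illustrated in a few lines. This is a hedged, generic sketch of a top-hat (box) filter applied to the advection term on a uniform 1D grid, not the speaker's actual implementation; all field and function names here are illustrative:

```python
import numpy as np

def tophat_filter(f, width=3):
    """Explicit top-hat (box) filter: a simple moving average that
    damps scales near the grid cutoff, which are the scales most
    likely to cause interpolation errors at a coarse/fine interface."""
    kernel = np.ones(width) / width
    return np.convolve(f, kernel, mode='same')

# Illustrative setup: a resolved field plus near-cutoff noise
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.1 * np.sin(8 * x)

# Advection term u * du/dx, then its explicitly filtered counterpart
advection = u * np.gradient(u, dx)
advection_filtered = tophat_filter(advection)
```

Filtering the advection term directly, rather than relying on the implicit grid filter alone, is what reduces the high-wavenumber content that would otherwise be mishandled at refinement interfaces.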
Performance Engineering for Stencil Updates on Modern Processors with the ECM Model
Berkeley Lab – Computing Sciences Exascale Seminar Series
Friday, July 25, 10am - 11:30am, Bldg. 70A, Room 3377
Gerhard Wellein, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
The recently introduced ECM (Execution-Cache-Memory) performance model can be seen as an extension of the well-known Roofline model. The ECM model aims at a clear identification of all relevant single-core runtime contributions and uses a simple scaling approach to derive scalability within a multicore chip. ECM distinguishes between runtime contributions from in-core execution and the data delay, which includes all data transfers in the cache hierarchy. This provides a rather accurate prediction of single-core performance and of the saturation properties of streaming codes, leading to deeper insight into performance bottlenecks and enabling a model-guided performance engineering approach in which the concept of "optimal performance" is well defined. The talk introduces the ECM model and presents case studies for several short- and long-range stencil codes.
Related work: For the ECM modeling approach, see G. Hager, J. Treibig, J. Habich, and G. Wellein, "Exploring performance and power properties of modern multicore chips via simple machine models," Concurrency and Computation: Practice and Experience, DOI: 10.1002/cpe.3180 (2013); preprint at arXiv:1208.2908. For a SIMD-friendly single data storage scheme enabling efficient sparse matrix-vector multiplication on all modern architectures, see M. Kreutzer, G. Hager, G. Wellein, H. Fehske, and A. R. Bishop, "A unified sparse matrix data format for modern processors with wide SIMD units," submitted; preprint at arXiv:1307.6209.
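The core idea of the ECM model — summing single-core runtime contributions and deriving the core count at which memory bandwidth saturates — can be sketched in a few lines. All cycle counts and function names below are illustrative assumptions for a generic streaming stencil, not figures from the talk, and the simple non-overlapping variant of the model is used:

```python
from math import ceil

def ecm_prediction(t_core, t_transfers):
    """Total cycles per unit of work (e.g., one cache line of stencil
    updates), assuming in-core execution and the data transfers through
    the cache hierarchy do not overlap (a worst-case ECM variant)."""
    return t_core + sum(t_transfers)

def saturation_cores(t_ecm, t_mem):
    """Cores needed to saturate memory bandwidth: single-core performance
    scales linearly until the chip-level memory transfer time per unit of
    work becomes the bottleneck."""
    return ceil(t_ecm / t_mem)

# Assumed example numbers, in cycles per cache line:
t_core = 4               # in-core (arithmetic + load/store) execution
t_transfers = [2, 2, 2]  # L1<->L2, L2<->L3, L3<->memory transfers

t_ecm = ecm_prediction(t_core, t_transfers)
n_sat = saturation_cores(t_ecm, t_transfers[-1])
print(t_ecm, n_sat)  # 10 cycles per cache line; saturation at 5 cores
```

The contrast with a pure Roofline estimate is that the per-level transfer times are kept separate, so the model can say *where* in the hierarchy the cycles go, not just whether the code is compute- or memory-bound.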