InTheLoop | 05.29.2012
White House Releases “Digital Government” Strategy
The White House released a strategy document, “Digital Government: Building a 21st Century Platform to Better Serve the American People,” on May 23, 2012. The plan “complements several initiatives aimed at building a 21st century government that works better for the American people.” Read the Presidential Memorandum.
The Digital Government Strategy sets out to accomplish three things:
- Enable the American people and an increasingly mobile workforce to access high-quality digital government information and services anywhere, anytime, on any device.
- Ensure that as the government adjusts to this new digital world, we seize the opportunity to procure and manage devices, applications, and data in smart, secure and affordable ways.
- Unlock the power of government data to spur innovation across our Nation and improve the quality of services for the American people.
Could Computers Protect the Market From Computers?
A May 25 Wall Street Journal story on the use of computers to monitor and control financial markets quoted David Leinweber, head of the Computational Research Division’s (CRD’s) Center for Innovative Financial Technology, and Berkeley Lab Deputy Director Horst Simon.
Leinweber points out that while air traffic controllers continuously track flights to keep the skies safe, and weather experts watch hurricanes long before they make landfall, the financial markets have grown too dispersed and complex for humans to monitor. He proposes that supercomputers—like those at national laboratories such as Berkeley’s—should track every trade in real time. If volume began surging dangerously, the system would flash a “yellow light.” Regulators or stock exchanges could then slow trading down, giving the market time to clear and potentially averting a crisis.
Simon worries it might take some kind of market catastrophe “for people to wake up and say that there’s a real danger out there of our whole system being brought down by a simple [problem] that could have been prevented if we had just paid attention.” Read more.
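The monitoring scheme Leinweber describes, tracking trade volume in real time and flashing a "yellow light" when it surges, could be sketched as a simple rolling-average anomaly check. This is purely a hypothetical illustration of the idea, not the actual system; the window size and surge threshold are invented parameters.

```python
from collections import deque

class VolumeMonitor:
    """Hypothetical 'yellow light' monitor: flags trading volume that
    surges far above its recent rolling average."""

    def __init__(self, window=60, surge_factor=3.0):
        self.window = deque(maxlen=window)  # recent per-interval volumes
        self.surge_factor = surge_factor    # multiple of average that triggers a warning

    def observe(self, volume):
        """Record one interval's trade volume; return True if it surges."""
        if len(self.window) == self.window.maxlen:
            avg = sum(self.window) / len(self.window)
            surging = volume > self.surge_factor * avg
        else:
            surging = False  # not enough history to judge yet
        self.window.append(volume)
        return surging

monitor = VolumeMonitor(window=5, surge_factor=3.0)
flags = [monitor.observe(v) for v in [100, 110, 95, 105, 90, 400]]
print(flags)  # only the final 400-share interval trips the yellow light
```

In a real deployment the hard part is scale, which is exactly why Leinweber points to supercomputers: the check itself is trivial, but running it across every trade on every venue in real time is not.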
Catastrophic Droughts Facing Midwest, Mexico, and Central America
A May 23 Current TV story on catastrophic droughts facing the Midwestern United States, Mexico, and Central America quoted CRD’s Michael Wehner, who was first author of the paper “Projections of Future Drought in the Continental United States and Mexico,” published in the December 2011 issue of the Journal of Hydrometeorology.
If climate change pushes the global average temperature to 2.5 degrees Celsius above pre-industrial era levels, as many experts now expect, these regions will be under severe and permanent drought conditions. “Drought conditions will prevail no matter what precipitation rates are in the future,” Wehner said. “Even in regions where rainfall increases, the soils will get drier” because warmer air temperatures will dry out soils more than additional rain can replenish them. Read more.
ESnet’s Leadership Recognized at TERENA Networking Conference
As mentioned in the May 14 issue of InTheLoop, ESnet staff members Eric Pouyoul, Jon Dugan, and Bill Johnston were among the speakers at the 2012 TERENA Networking Conference held May 21–24 in Reykjavík, Iceland. On May 25, ESnet Acting Division Director Greg Bell sent an update from the conference to ESnet staff. Here are a few excerpts (with emphasis added):
To begin with, our talks were quite successful. Eric gave a well-attended presentation on ESnet's effort to stitch together OSCARS, OpenFlow, and RDMA. There’s a lot of OpenFlow hype out there, and it was refreshing to see discussion of a concrete application, implemented end-to-end, at continental scale. Jon Dugan gave a talk on the newly redesigned MyESnet portal, which received many compliments. Afterwards, staff from three European networks told me they've been studying the portal closely (Gopal [Vaswani], take a bow here as well). Finally, Bill J gave a jaw-dropping presentation on the networking requirements of the Square Kilometre Array: 15,000 Tb/s (yes) of raw data from the instruments….
Throughout the conference, I was struck by how much respect ESnet has earned overseas. Several network leaders told me “you're definitely in front,” and they mentioned our staff members by name. This confirms the feedback we got in our 2012 Operational Assessment: “ESnet provides the most advanced networking capabilities in the world,” in the words of one reviewer. However, those same leaders are working hard to catch up. We can’t rest on our laurels, and we won't.
R&E Networks As Instruments for Discovery, Not Just Infrastructures
In an interview with the ALICE project (América Latina Interconectada Con Europa), ESnet Acting Division Director Greg Bell describes his vision of R&E networks in the future:
In the future, we’ll think of R&E networks as instruments for discovery, not just infrastructures. These instruments will be programmable, and they will offer a rich services interface to meet the needs of any collaboration. R&E networks will constantly communicate with each other over simple web-service interfaces, coordinating the lifecycle of service requests, brokering competing demands, and optimizing network services based on the specific requirements of individual workflows. Read more.
The Risks of Not Deploying IPv6 in the R&E Community
In discussions with CIOs of colleges, universities, and national laboratories, ESnet’s resident IPv6 expert Michael Sinatra often hears about issues such as “risk,” “return on investment,” “up-front costs,” “CAPEX/OPEX,” and the like. When the topic turns to IPv6, costs are cited, along with the potential risks of adopting it. However, a good risk assessment should weigh the risks and costs of not doing something as well as of doing it. Until recently, the risk of not deploying IPv6 centered almost entirely on running out of IPv4 addresses, so organizations that had plenty of IPv4 addresses (or thought they did) presumably felt exempt. In a Network Matters blog discussion, Sinatra notes several more risks of not deploying IPv6, advantages of IPv6, and reasons to move forward. Read more.
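One concrete, if minimal, way to see where a site stands, not part of Sinatra's analysis, just an illustration, is to ask the resolver which IP protocol versions a hostname resolves to. A host that publishes only IPv4 (A) records is invisible to IPv6-only clients:

```python
import socket

def address_families(hostname, port=443):
    """Return the set of IP protocol versions a hostname resolves to."""
    families = set()
    for family, _, _, _, _ in socket.getaddrinfo(hostname, port):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

# Literal addresses resolve without a DNS lookup:
print(address_families("127.0.0.1"))  # → {'IPv4'}
print(address_families("::1"))        # → {'IPv6'}
```

Running the same check against an organization's public-facing hostnames (web, mail, DNS) gives a quick first cut at its IPv6 exposure, though a full assessment of course goes well beyond name resolution.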
Maciej Haranczyk Gives Plenary Lecture at CIPS 2012
Maciej Haranczyk has been invited to give a plenary lecture at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Intelligent Processing Symposium and Workshop (CIPS 2012), which is being held this week, May 28–30, in Melbourne. “Intelligent processing” refers to technologies that automate and optimize the methods by which one material input to a process is converted to a higher value output material, while minimizing or eliminating waste.
Haranczyk’s talk, “In Silico Design of Carbon Capture Materials,” summarizes the work of his team in CRD as well as collaborations with CRD’s Math group, the Energy Frontier Research Center (EFRC) for Gas Separations, and other partners within the LBNL-based ARPA-E project. Here is the abstract:
One of the main bottlenecks to deploying large-scale carbon capture and storage (CCS) at power plants is the energy required to separate the CO2 from flue gas. CCS applied to coal-fired power plants, for example, reduces the net output of the plant by some 30% and increases the cost of electricity by 60-80%. Developing capture materials and processes that reduce the parasitic energy imposed by CCS is therefore an important area of research. We have developed a computational approach for identifying adsorbents for CCS. Using this analysis, we have screened hundreds of thousands of potential zeolite and metal-organic framework structures and identified many different structures that have the potential to reduce the parasitic energy of CCS by nearly a factor of two compared to near-term, amine solution-based technologies.
Our screening approach consists of purpose-built cheminformatics tools to screen and sample material databases, graphics processing unit (GPU)-based molecular simulation tools to calculate gas adsorption characteristics, and an engineering model of a power plant used to calculate parasitic energy for a given material. Applications of our discovery approach are not limited to carbon capture: an application focused on separation of hydrocarbons will also be discussed.
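The three-stage pipeline the abstract describes, sample candidate structures, estimate adsorption behavior, then rank by parasitic energy against the amine baseline, might be organized roughly as follows. The structure names, the toy energy model, and the factor-of-two cutoff applied per material are all hypothetical simplifications for illustration; the real engineering model couples simulated adsorption data to a power-plant simulation.

```python
def parasitic_energy(working_capacity, heat_of_adsorption):
    """Toy stand-in for the engineering model: lower is better.
    Penalizes weak capture (low capacity) and costly regeneration (high heat)."""
    return heat_of_adsorption / working_capacity

def screen(structures, baseline=1.0):
    """Keep candidates whose estimated parasitic energy is at most
    half the amine-solution baseline, best first."""
    hits = []
    for name, capacity, heat in structures:
        energy = parasitic_energy(capacity, heat)
        if energy <= baseline / 2:
            hits.append((name, energy))
    return sorted(hits, key=lambda pair: pair[1])

# Hypothetical (name, working capacity, heat of adsorption) tuples.
candidates = [("ZEO-A", 2.0, 0.8), ("MOF-B", 1.0, 0.9), ("ZEO-C", 4.0, 1.2)]
print(screen(candidates))  # → [('ZEO-C', 0.3), ('ZEO-A', 0.4)]
```

At the scale of the actual project, hundreds of thousands of structures, the per-candidate adsorption estimate is the expensive step, which is why the team offloads it to GPU-based molecular simulation.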
John Shalf Is Program Co-Chair for IEEE Optical Interconnects Conference
CRD Computer and Data Sciences Department Head John Shalf served as Program Co-Chair for the IEEE Optical Interconnects Conference, which was held May 20–23 in Santa Fe, NM. Plenary speakers included ASCR Research Division Director Bill Harrod, who spoke on “Requirements for the DOE Exascale Program.” Next year Shalf will be General Chair of the conference.
CRD Welcomes Didem Unat, 2012 Alvarez Fellow
As a 2012 Luis W. Alvarez Fellow, Didem Unat will be designing programming models for future exascale architectures, as part of the Hardware Software Co-Design project in Berkeley Lab's Computational Research Division. Specifically, Unat will be evaluating the performance of fluid dynamics and combustion kernels on current architectures, and projecting their performance on future systems. She will be targeting data locality issues and providing novel programming concepts for improving performance on exascale systems. Read more.
For more information on the fellowship, go here.
Techbridge Girls Visit Lab for Science Inspiration
Three sets of young women visited Berkeley Lab last week as part of the Techbridge program, which seeks to encourage girls to pursue study and careers in technology, science and engineering. The visitors participated in hands-on science activities and demonstrations, tours of the Advanced Light Source, and Q&A sessions with principal investigators. Volunteers included Elizabeth Bautista of NERSC.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 7,000-plus scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are Department of Energy Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.