
Edge Computing

Around the world, scientists are deploying new sensors in places where communication links may be limited, results must be analyzed quickly, or scientific outcomes depend on coordinating activities among sensors in a local area. The growing need to manage streaming data analysis, visualization, and reduction close to experiment and field sites has kick-started edge computing, including the development of fast processing, edge hardware, and software capabilities, and the deployment of wireless networks.

Operating these sensors and analyzing their data can be challenging because they are frequently deployed in areas with limited power and data connectivity. They may sit where weather and other factors make access laborious, such as at the top of a mountain, down a borehole, in the Arctic, or under dense forest canopy. New capabilities, such as sensors tied to digital twin simulations for control of complex physical processes, require specialized network tools and technologies and tight coupling between workflows and compute resources. Edge computing is an important part of enabling these workflows. As science sensing becomes more intelligent, autonomous, and responsive to environmental changes, our edge computing efforts help scientists succeed despite these challenges.

Projects

ESnet Wireless Edge Project

Increasingly, Earth and environmental sciences and other field sciences are deploying sensor systems beyond the traditional reach of fiber-optic research and engineering networks. This project researches, tests, fields, and operates advanced wireless capabilities to bring ESnet to field scientists via 5G, LoRa, satellite, or other wireless technologies as needed. Contact: Andrew Wiedlea

Large-scale Self-driving 5G Network for Science

Wireless technologies and network advances with 5G, and even beyond-5G, are ushering in a new era for the Internet of Things (IoT), with intelligent sensors bringing complex temporal and spatial challenges to the way we do science across multiple DOE-related activities. While these advances offer connectivity in unwired urban areas, mmWave technologies, and faster upload/download speeds with lower latency, they also bring unprecedented data demands, new hardware, and the need to connect seamlessly across multiple network domains. We will build artificial intelligence (AI) into individual edge nodes to create intelligent edges and connect them to DOE facilities such as network and data centers to form an intelligent core, creating a new network ecosystem for science. Contact: Anastasiia Butko
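
As a rough illustration of the intelligent-edge idea, the sketch below shows an edge node scoring samples with a local model and forwarding only flagged events to a core facility instead of streaming raw data. The endpoint, threshold, and scoring function are hypothetical placeholders, not project code.

# Hypothetical sketch: an edge node runs a lightweight model locally and
# forwards only flagged events to a core DOE facility, rather than streaming
# every raw sample across the wide-area network.
import json
import urllib.request

THRESHOLD = 0.9                                     # assumed anomaly-score cutoff
CORE_ENDPOINT = "https://core.example.gov/ingest"   # placeholder URL

def score(sample):
    """Stand-in for an on-node AI model; returns an anomaly score in [0, 1]."""
    return min(1.0, abs(sample.get("value", 0.0)) / 100.0)

def process(sample):
    s = score(sample)
    if s >= THRESHOLD:
        # Only high-scoring events cross the network to the intelligent core.
        payload = json.dumps({"sample": sample, "score": s}).encode()
        req = urllib.request.Request(CORE_ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
    # Low-scoring samples stay local, to be aggregated or discarded at the edge.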

FPGA-Accelerated Supercomputing on the Edge for NCEM and ALS High-Performance Sensors

With data production rates at experimental facilities increasing exponentially, there is a growing need to move computation closer to the experimental source. To this end, we have developed the Berkeley eXtensible Processing Engine (BXPE) Framework, which allows experimentalists to use FPGAs stitched into the network pipeline to process data in real time as it flows over the network. We provide a Python tooling front end for developing algorithms with this framework. In this design, we ported the NCEM center-of-mass and ALS convolution applications to this edge computing framework. Contacts: Farzad Fatollahi-Fard, Anastasiia Butko
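
For illustration, the sketch below expresses the center-of-mass reduction in plain NumPy. It is not the BXPE API itself, just the per-frame computation such a pipeline would offload to an FPGA; the frame shapes and synthetic data are assumptions.

# Illustrative only: the center-of-mass reduction that could be offloaded to an
# FPGA, written here in NumPy for clarity.
import numpy as np

def center_of_mass(frame: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centroid of a 2-D detector frame."""
    total = frame.sum()
    if total == 0:
        return (0.0, 0.0)
    ys, xs = np.indices(frame.shape)
    return (float((ys * frame).sum() / total),
            float((xs * frame).sum() / total))

# Reduce a stream of frames to a compact list of centroids.
frames = np.random.poisson(2.0, size=(16, 128, 128))  # synthetic stand-in data
centroids = [center_of_mass(f) for f in frames]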

EJFAT (ESnet/JLab) FPGA Accelerated Transport

The keystone of the ESnet-JLab FPGA Accelerated Transport (EJFAT) project is the joint development of a dynamic, real-time compute workload balancer (LB) for UDP-streamed data from data acquisition (DAQ) systems. The LB suite consists of a field-programmable gate array (FPGA) that executes the dynamically configurable, low fixed-latency LB data plane, redirecting packets in real time at terabits-per-second throughput, and a control plane running on the FPGA host computer that monitors network and compute-farm telemetry to make dynamic decisions about destination compute host redirection and load balancing. The LB supports three-tier horizontal scaling across LB suites, core compute hosts, and CPUs within a host. It provides seamless integration of edge and core computing to support direct processing of experimental data for immediate use by JLab science programs and others such as the EIC, as well as future data centers requiring high throughput and low latency for both hot and cooled data, in both running experiment data acquisition and data center use cases. Contact: Yatish Kumar
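
A minimal sketch of the control-plane idea follows, with illustrative host names and telemetry fields rather than the actual EJFAT interfaces: it converts per-host telemetry into load-balancing weights that the data plane could use when redirecting event streams.

# Hypothetical sketch of control-plane logic: read per-host telemetry and derive
# weights for steering UDP event streams. Fields and hosts are illustrative.
def compute_weights(telemetry):
    """telemetry: {host: {"cpu_free": float, "queue_depth": int}}"""
    scores = {}
    for host, t in telemetry.items():
        # Favor hosts with idle CPU and shallow receive queues.
        scores[host] = max(t["cpu_free"], 0.0) / (1 + t["queue_depth"])
    total = sum(scores.values()) or 1.0
    return {host: s / total for host, s in scores.items()}

weights = compute_weights({
    "farm-01": {"cpu_free": 0.8, "queue_depth": 2},
    "farm-02": {"cpu_free": 0.2, "queue_depth": 10},
})
# The data plane would then steer proportionally more events to farm-01.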

Visualization and Data Analytics at the Edge

We are working to insert data reduction capabilities directly onsite, built on heterogeneous architectures (FPGAs, GPUs, and more), to reduce the required network and high-performance computing (HPC) facility bandwidth. This avoids costly one-off solutions and makes it easier to broaden deployment to more facilities. Over time, local feedback improves facility utilization and data quality. Contact: Gunter Weber
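
As a hypothetical example of such a reduction step, the sketch below thresholds a detector frame and ships a sparse representation in place of the full frame. The threshold, frame shape, and host-side Python are illustrative stand-ins for an FPGA or GPU implementation.

# Edge-side reduction sketch: keep only pixels above a noise threshold and send
# a sparse (indices, values) representation instead of the full frame.
import numpy as np

def sparsify(frame: np.ndarray, threshold: float):
    """Return (indices, values) for pixels above threshold."""
    mask = frame > threshold
    return np.argwhere(mask), frame[mask]

frame = np.random.poisson(1.0, size=(512, 512)).astype(np.float32)  # synthetic data
idx, vals = sparsify(frame, threshold=3.0)
reduction = 1 - (idx.nbytes + vals.nbytes) / frame.nbytes
print(f"kept {len(vals)} pixels, payload ~{reduction:.0%} smaller")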

News

Collaboration Charts New Course for Future of Field Research Data

February 2, 2022

Spearheaded by DOE's ESnet, new investments in local 5G connectivity and remote satellite data backhaul are providing a high-tech road map for the future of data in field research. Read More »