As traditional semiconductors reach the limits of miniaturization and capacity, a new approach to semiconductor design is needed. One promising alternative is neuromorphic computing.
In contemporary CMOS architectures, the electronics that store data are separate from those that process it. A computer must therefore retrieve data from memory, move it to the processing unit, and then move the results back to memory. This back-and-forth is time- and energy-consuming, and it creates a bottleneck when large datasets need to be processed.
Neuromorphic computing eliminates this back-and-forth through in-memory computing. It relies on algorithms and network architectures that mimic the physics of the human brain and nervous system by establishing “spiking neural networks,” in which spikes from individual neurons activate other neurons down a cascading chain. This allows neuromorphic chips to compute more flexibly and broadly, because their spiking neurons operate without any prescribed order.
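The spiking cascade can be illustrated with a minimal sketch of a leaky integrate-and-fire network. Everything here (the weights, threshold, and leak values) is a generic textbook abstraction chosen for illustration, not a model of any particular neuromorphic chip: a neuron accumulates input, fires when its potential crosses a threshold, resets, and injects charge into the neurons it connects to.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) sketch; all parameters are
# illustrative, not taken from any real neuromorphic hardware.
rng = np.random.default_rng(0)
n = 20
W = rng.uniform(0.0, 1.0, (n, n))  # synaptic weights, W[i, j] = i -> j
np.fill_diagonal(W, 0.0)           # no self-connections

v = np.zeros(n)        # membrane potentials
threshold = 0.5        # firing threshold
leak = 0.9             # fraction of potential retained each step

# Stimulate one neuron and watch spikes cascade through the network.
input_current = np.zeros(n)
input_current[0] = 1.5

spike_log = []
for t in range(10):
    v = leak * v + input_current
    input_current = np.zeros(n)
    spikes = v >= threshold
    if spikes.any():
        input_current = W.T @ spikes.astype(float)  # propagate spikes
        v[spikes] = 0.0                             # reset fired neurons
    spike_log.append(np.flatnonzero(spikes))
    print(t, spike_log[-1])
```

Note that there is no global instruction sequence: which neurons fire at each step depends only on the spikes arriving at them, which is the "no prescribed order" property described above.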
At Berkeley Lab, our researchers are working on a neuromorphic computing framework based on oscillatory collective network dynamics, wherein each node in the network is an oscillator (a dynamical process) and the network structure encodes which oscillators interact with each other. This approach allows for flexible and adaptive processing in dynamically self-reconfiguring neural networks, which is useful for coordinating large-scale distributed computing. We are also exploring the tradeoffs between the computational complexity of individual neurons and the cost of their implementation. To this end, we are using information theory to understand the biophysical basis of neuronal computational complexity across all neuron types in the brain, and we will apply that insight to the design of next-generation neuromorphic systems to optimize space, weight, and power.
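A standard textbook abstraction for such oscillator networks is the Kuramoto model, sketched below. This is a generic illustration of "each node is an oscillator, the network encodes who interacts with whom," not the Lab's actual framework: each node carries a phase, the adjacency matrix determines coupling, and with sufficient coupling strength the phases synchronize, a collective state that can encode or coordinate computation.

```python
import numpy as np

# Kuramoto-style coupled-oscillator sketch. Generic textbook model,
# chosen only to illustrate oscillatory collective network dynamics.
rng = np.random.default_rng(1)
n = 8
A = np.ones((n, n)) - np.eye(n)       # adjacency: all-to-all for simplicity
omega = rng.normal(0.0, 0.1, n)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)  # initial phases
K, dt = 2.0, 0.05                     # coupling strength, time step

def order_parameter(theta):
    """Degree of synchrony: ~0 = incoherent, 1 = fully phase-locked."""
    return abs(np.exp(1j * theta).mean())

r_start = order_parameter(theta)
for _ in range(400):
    # d(theta_i)/dt = omega_i + (K/n) * sum_j A[i, j] * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + (K / n) * coupling)
r_end = order_parameter(theta)
print(f"synchrony: {r_start:.2f} -> {r_end:.2f}")
```

Changing the adjacency matrix `A` changes which sub-networks can synchronize with each other, which is one way a network's structure can gate and reconfigure the flow of computation.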
Flexible and Self-Modifying Neuromorphic Computation
Inspired by recent findings on how oscillations and waves in the brain might coordinate and distribute computation among sub-networks, this work focuses on the sub-network level to (1) develop algorithms and learning rules for flexible computation and self-organized reconfiguration of neuronal circuits based on coupled oscillator networks, and (2) implement those dynamically reconfiguring neural networks (DRNNs) on energy-efficient hardware systems based on superconducting devices. The resulting flexible neuromorphic computing system will have a broad range of applications, including complex contextual and adaptive processing, attention-guided computation, belief propagation-based inference, and coordination of computation in networks of experts. Contact: Dilip Vasudevan
Center of Excellence on Brain-Derived Neuromorphic Computing with Intelligent Photonic and Electronic Materials
The ExPlor Center aims to understand and realize a biologically plausible neuromorphic system that is hierarchical, scalable, intelligent, efficient, and high-throughput by (a) taking the “best of both worlds” of electronics and photonics and utilizing bio-inspired intelligent materials, (b) combining dynamic synaptic plasticity and dendrite computing for brain-derived learning algorithms at multiple spatio-temporal scales, (c) enabling adaptive self-reconfiguration for different environments and applications, and (d) exploiting intelligent ionic and photonic materials within the silicon ecosystem for a new generation of neuromorphic computing. Contact: Kristofer Bouchard
Although neuromorphic computing is still in its infancy, researchers in Berkeley Lab's CRD hope that these tiny, low-power, brain-inspired computing systems could one day help alleviate some of science’s big data challenges. Read More »
As he prepared to head to ISC19 to give a keynote address on the topic, John Shalf – who leads the Computer Science Department in Berkeley Lab’s Computational Research Division – shared his thoughts on what the future holds for computing technologies and architectures in the era beyond exascale. Read More »