Peter Brown Graduates May 2020!
Peter Brown presented his Ph.D. defense, "Sparse Approximation Accelerators with Spiking Neural-Networks," on Monday, April 13th.
Abstract:
Today's mobile intelligent devices are often limited more by the energy required for data communication than for data processing. Thus, in addition to their traditional uses in signal processing, compressed sensing techniques have found increasing relevance in low-power sensing systems. However, basis pursuit denoising (BPDN), the sparse optimization these techniques require, is typically too computationally intensive to solve directly, so implementations usually resort to greedy pursuit-based methods that approximate the optimization.
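For context, BPDN is usually written as the $\ell_1$-regularized least-squares problem below; this is the standard textbook form, with generic notation not taken from the thesis:
$$\min_{a}\; \tfrac{1}{2}\,\lVert y - \Phi a \rVert_2^2 \;+\; \lambda\,\lVert a \rVert_1,$$
where $y$ is the measured signal, $\Phi$ the dictionary, $a$ the sparse code, and $\lambda$ the trade-off between reconstruction error and sparsity.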
Locally competitive algorithms (LCAs), a specific class of spiking recurrent neural networks, can solve BPDN, but efficiently implementing such networks at scale is difficult. This thesis proposes efficient hardware architectures for BPDN accelerators built on the LCA spiking neural network.
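As a sketch of how such a network solves the optimization, the original (non-spiking) LCA of Rozell et al. evolves neuron membrane potentials $u(t)$ according to
$$\tau\,\dot{u}(t) = -u(t) + \Phi^{\top} y - \bigl(\Phi^{\top}\Phi - I\bigr)\,a(t), \qquad a(t) = T_{\lambda}\bigl(u(t)\bigr),$$
where $T_{\lambda}$ is a (soft) thresholding function; at steady state the activations $a$ minimize the BPDN objective. The spiking variant used in the thesis differs in its neuron dynamics, but the lateral inhibition through the $\Phi^{\top}\Phi - I$ weights captures the same competition between neurons.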
One such accelerator is a prototype sparse image coder, which achieves unparalleled energy efficiency with custom analog neurons that this work integrates into the digital design flow. At only 48.9 pJ/pixel and 50.1 nJ/encoding, the mixed-signal prototype is twice as efficient as an equivalent fully digital architecture. When encoding images of handwritten digits, the prototype produces sparse codes compressed by more than 90% while demonstrably preserving image features.
Next, a prototype compressed sensing radar processor improves the accuracy of target range and velocity estimates by over 6× compared to conventional processing techniques. Capable of producing over 100,000 estimates per second, the prototype improves throughput by 8× and efficiency by 18× over the state of the art. Furthermore, thanks to a unique form of synaptic weight compression, the prototype is the largest hardware realization of a fully connected LCA neural network to date.