A Cost-Efficient High-Speed VLSI Architecture for Spiking Convolutional Neural Network Inference Using Time-Step Binary Spike Maps
Author(s): Zhang, Ling; Yang, Jing; Shi, Cong; Lin, Yingcheng; He, Wei; Zhou, Xichuan; Yang, Xu; Liu, Liyuan; Wu, Nanjian
Source: SENSORS Volume: 21 Issue: 18 Article Number: 6006 DOI: 10.3390/s21186006 Published: SEP 2021
Abstract: Neuromorphic hardware systems have been attracting increasing attention in many embedded applications because they use a brain-inspired, energy-efficient spiking neural network (SNN) model that closely mimics the human cortex by communicating and processing sensory information via spatiotemporally sparse spikes. In this paper, we fully exploit the characteristics of spiking convolutional neural networks (SCNNs) and propose a scalable, cost-efficient, and high-speed VLSI architecture to accelerate deep SCNN inference for real-time, low-cost embedded scenarios. We take snapshots of the binary spike maps at each time step to decompose the SCNN operations into a series of regular, simple, CNN-like per-time-step computations, reducing hardware resource consumption. Moreover, our hardware architecture achieves high throughput by employing a pixel-stream processing mechanism and fine-grained data pipelines. Our Zynq-7045 FPGA prototype reached a high processing speed of 1250 frames/s and high recognition accuracies on the MNIST and Fashion-MNIST image datasets, demonstrating the suitability of our SCNN hardware architecture for many embedded applications.
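The core idea in the abstract, decomposing SCNN inference into per-time-step CNN-like passes over binary spike maps, can be illustrated with a minimal software sketch. This is not the authors' hardware design; it is a hypothetical NumPy model assuming simple integrate-and-fire neurons with hard reset, and it highlights why binary inputs are cheap in hardware: because each spike is 0 or 1, convolution degenerates into conditional weight accumulation with no multiplications.

```python
import numpy as np

def conv2d_binary(spike_map, weights):
    """Valid convolution of a binary spike map with a weight kernel.

    Since spike_map entries are {0, 1}, each output pixel is just the sum
    of the weights at positions where a spike fired -- no multiplies.
    """
    kh, kw = weights.shape
    H, W = spike_map.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = spike_map[i:i + kh, j:j + kw]
            out[i, j] = weights[patch == 1].sum()  # accumulate selected weights
    return out

def scnn_layer(spike_maps, weights, threshold=1.0):
    """Run one SCNN layer over T time-step snapshots of binary spike maps.

    spike_maps: (T, H, W) binary array; returns (T, H', W') output spikes.
    Uses a hypothetical integrate-and-fire model with reset-to-zero.
    """
    T = spike_maps.shape[0]
    v = None          # membrane potentials, lazily initialized
    out_spikes = []
    for t in range(T):
        psp = conv2d_binary(spike_maps[t], weights)  # one CNN-like time-step pass
        v = psp if v is None else v + psp            # integrate across time steps
        s = (v >= threshold).astype(np.uint8)        # fire where threshold reached
        v = np.where(s == 1, 0.0, v)                 # reset fired neurons
        out_spikes.append(s)
    return np.stack(out_spikes)
```

In a streaming hardware realization, the inner loops would instead consume one input pixel per clock cycle (the pixel-stream mechanism mentioned in the abstract), but the arithmetic per time step is the same: gated weight accumulation followed by a threshold comparison.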
Accession Number: WOS:000701139500001
PubMed ID: 34577214