Efficient Hardware Architectures for 1D- and MD-LSTM Networks

Vladimir Rybalkin1 · Chirag Sudarshan1 · Christian Weis1 · Jan Lappas1 · Norbert Wehn1 · Li Cheng2

Received: 18 September 2019 / Revised: 8 May 2020 / Accepted: 20 May 2020 © Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract Recurrent Neural Networks, in particular One-dimensional and Multidimensional Long Short-Term Memory networks (1D-LSTM and MD-LSTM), have achieved state-of-the-art classification accuracy in many applications such as machine translation, image caption generation, handwritten text recognition, medical imaging, and many more. However, this high classification accuracy comes at high compute, storage, and memory bandwidth requirements, which makes their deployment challenging, especially on energy-constrained platforms such as portable devices. Compared to CNNs, few investigations exist on efficient hardware implementations for 1D-LSTM, especially under energy constraints, and there is no published hardware architecture for MD-LSTM. In this article, we present two novel architectures for LSTM inference: a hardware architecture for MD-LSTM, and a DRAM-based Processing-in-Memory (DRAM-PIM) hardware architecture for 1D-LSTM. We present, for the first time, a hardware architecture for MD-LSTM and show a trade-off analysis between accuracy and hardware cost for various precisions. We implement the new architecture as an FPGA-based accelerator that outperforms an NVIDIA K80 GPU implementation in terms of runtime by up to 84× and energy efficiency by up to 1238× on a challenging historical document image binarization dataset from the DIBCO 2017 contest and the well-known MNIST dataset for handwritten digit recognition. Our accelerator demonstrates the highest accuracy and comparable throughput in comparison to state-of-the-art FPGA-based multilayer perceptron implementations for the MNIST dataset. Furthermore, we present a new DRAM-PIM architecture for 1D-LSTM targeting energy-efficient compute platforms such as portable devices. The DRAM-PIM architecture integrates the computation units in close proximity to the DRAM cells in order to maximize data parallelism and energy efficiency. The proposed DRAM-PIM design is 16.19× more energy efficient than the FPGA implementation, with a total chip area overhead of 18% compared to a commodity 8 Gb DRAM chip. Our experiments show that the DRAM-PIM implementation delivers a throughput of 1309.16 GOp/s for an optical character recognition application.

Keywords Long short-term memory · LSTM · MD-LSTM · 2D-LSTM · FPGA · DRAM · Processing-in-memory · PIM · Optical character recognition · OCR · MNIST · DIBCO · Zynq · Image binarization · Hardware architecture · Deep learning
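For reference, a minimal sketch of the standard 1D-LSTM cell recurrence that such accelerators evaluate at every time step is given below; the exact gate formulation used by the architectures in this article (e.g., peephole connections or bias handling) is not specified in this excerpt, so the equations show the common textbook variant rather than the authors' definitive model.

% Standard 1D-LSTM cell update (assumed textbook formulation, not taken from the article):
% x_t is the input, h_{t-1} and c_{t-1} are the previous hidden and cell states,
% \sigma is the logistic sigmoid, and \odot denotes the element-wise product.
\begin{align}
  i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
  f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
  o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
  \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
  c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
  h_t &= o_t \odot \tanh(c_t)
\end{align}

An MD-LSTM cell generalizes this recurrence to multiple dimensions: each cell receives hidden and cell states from one predecessor along every dimension (e.g., the left and upper neighbors in the 2D case) and uses a separate forget gate per dimension, which is why its compute and memory access pattern differs substantially from the 1D case.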

1 Introduction

In recent years, a wide variety of neural network accelerators [9, 26, 53] have been published that achieve higher performance and higher energy efficiency compared to general-purpose computing platforms. Many of these accelerators target feed-forward networks such as CNN,