Effective Quantization Approaches for Recurrent Neural Networks

(2018). arXiv:1802.02615. Comment: 8 pages, 23 figures. Submitted to the International Joint Conference on Neural Networks (IJCNN) 2018.

Abstract

Deep learning, and in particular Recurrent Neural Networks (RNNs), has shown superior accuracy in a large variety of tasks, including machine translation, language understanding, and movie frame generation. However, these deep learning approaches are computationally very expensive, and in most cases Graphics Processing Units (GPUs) are used for large-scale implementations. Meanwhile, energy-efficient RNN approaches have been proposed for deploying solutions on special-purpose hardware, including Field Programmable Gate Arrays (FPGAs) and mobile platforms. In this paper, we propose effective quantization approaches for Recurrent Neural Network (RNN) techniques, including Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Convolutional Long Short-Term Memory (ConvLSTM). We have implemented different quantization methods, including Binary Connect {-1, 1}, Ternary Connect {-1, 0, 1}, and Quaternary Connect {-1, -0.5, 0.5, 1}. These proposed approaches are evaluated on different datasets: sentiment analysis on IMDB and video frame prediction on the moving MNIST dataset. The experimental results are compared against the full-precision versions of LSTM, GRU, and ConvLSTM, and show promising results for both sentiment analysis and video frame prediction.
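
As a rough illustration of the quantization schemes named in the abstract, the following NumPy sketch maps a full-precision weight matrix onto each of the three level sets. The sign-based binarization rule, the ternary threshold of 0.5, and the nearest-level rounding for the quaternary case are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np

def binary_connect(w):
    # Binary Connect: map each weight to {-1, 1} by its sign.
    return np.where(w >= 0, 1.0, -1.0)

def ternary_connect(w, threshold=0.5):
    # Ternary Connect: map weights to {-1, 0, 1}. Weights whose
    # magnitude falls below `threshold` are zeroed; the rest keep
    # their sign. The threshold value is an assumption.
    q = np.sign(w)
    q[np.abs(w) < threshold] = 0.0
    return q

def quaternary_connect(w):
    # Quaternary Connect: map weights to {-1, -0.5, 0.5, 1} by
    # clipping to [-1, 1] and snapping to the nearest level
    # (the rounding rule is an assumption).
    levels = np.array([-1.0, -0.5, 0.5, 1.0])
    w = np.clip(w, -1.0, 1.0)
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

# Quantize a small weight matrix three ways.
w = np.array([[0.83, -0.27], [-0.61, 0.05]])
print(binary_connect(w))      # [[ 1. -1.] [-1.  1.]]
print(ternary_connect(w))     # [[ 1.  0.] [-1.  0.]]
print(quaternary_connect(w))  # [[ 1.  -0.5] [-0.5  0.5]]
```

In training-time quantization of this kind, the quantized weights are typically used in the forward and backward passes while full-precision copies are retained for the weight update; the sketch above shows only the mapping itself.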

Description

1802.02615.pdf
