Efficient Processing of Deep Neural Networks: A Tutorial and Survey
V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer (2017). arXiv:1703.09039. Based on the tutorial on DNN Hardware at eyeriss.mit.edu/tutorial.html.
Abstract
Deep neural networks (DNNs) are currently widely used for many artificial
intelligence (AI) applications including computer vision, speech recognition,
and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this
accuracy comes at the cost of high computational complexity. Accordingly, techniques
that enable efficient processing of DNNs to improve energy efficiency and
throughput without sacrificing application accuracy or increasing hardware cost
are critical to the wide deployment of DNNs in AI systems.
This article aims to provide a comprehensive tutorial and survey about the
recent advances towards the goal of enabling efficient processing of DNNs.
Specifically, it will provide an overview of DNNs, discuss various hardware
platforms and architectures that support DNNs, and highlight key trends in
reducing the computation cost of DNNs either solely via hardware design changes
or via joint hardware design and DNN algorithm changes. It will also summarize
various development resources that enable researchers and practitioners to
quickly get started in this field, and highlight important benchmarking metrics
and design considerations that should be used for evaluating the rapidly
growing number of DNN hardware designs, optionally including algorithmic
co-designs, being proposed in academia and industry.
The reader will take away the following concepts from this article:
understand the key design considerations for DNNs; be able to evaluate
different DNN hardware implementations with benchmarks and comparison metrics;
understand the trade-offs between various hardware architectures and platforms;
be able to evaluate the utility of various DNN design techniques for efficient
processing; and understand recent implementation trends and opportunities.
@misc{sze2017efficient,
abstract = {Deep neural networks (DNNs) are currently widely used for many artificial
intelligence (AI) applications including computer vision, speech recognition,
and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it
comes at the cost of high computational complexity. Accordingly, techniques
that enable efficient processing of DNNs to improve energy efficiency and
throughput without sacrificing application accuracy or increasing hardware cost
are critical to the wide deployment of DNNs in AI systems.
This article aims to provide a comprehensive tutorial and survey about the
recent advances towards the goal of enabling efficient processing of DNNs.
Specifically, it will provide an overview of DNNs, discuss various hardware
platforms and architectures that support DNNs, and highlight key trends in
reducing the computation cost of DNNs either solely via hardware design changes
or via joint hardware design and DNN algorithm changes. It will also summarize
various development resources that enable researchers and practitioners to
quickly get started in this field, and highlight important benchmarking metrics
and design considerations that should be used for evaluating the rapidly
growing number of DNN hardware designs, optionally including algorithmic
co-designs, being proposed in academia and industry.
The reader will take away the following concepts from this article:
understand the key design considerations for DNNs; be able to evaluate
different DNN hardware implementations with benchmarks and comparison metrics;
understand the trade-offs between various hardware architectures and platforms;
be able to evaluate the utility of various DNN design techniques for efficient
processing; and understand recent implementation trends and opportunities.},
added-at = {2018-07-13T23:10:41.000+0200},
author = {Sze, Vivienne and Chen, Yu-Hsin and Yang, Tien-Ju and Emer, Joel},
biburl = {https://www.bibsonomy.org/bibtex/257ed07e3348879e2bd79a835014fa517/analyst},
description = {[1703.09039] Efficient Processing of Deep Neural Networks: A Tutorial and Survey},
interhash = {247aa12513af5ee451197e6cf8147ef4},
intrahash = {57ed07e3348879e2bd79a835014fa517},
keywords = {2017 arxiv deep-learning survey tutorial},
note = {arXiv:1703.09039. Comment: Based on tutorial on DNN Hardware at eyeriss.mit.edu/tutorial.html},
timestamp = {2018-07-13T23:10:41.000+0200},
title = {Efficient Processing of Deep Neural Networks: A Tutorial and Survey},
url = {http://arxiv.org/abs/1703.09039},
year = 2017
}