[1603.00162] Convolutional Rectifier Networks as Generalized Tensor Decompositions
Abstract
Convolutional rectifier networks, i.e., convolutional neural networks with
rectified linear activation and max or average pooling, are the cornerstone of
modern deep learning. However, despite their wide use and success, our
theoretical understanding of the expressive properties that drive these
networks is partial at best. On the other hand, we have a much firmer grasp of
these issues in the world of arithmetic circuits. Specifically, it is known
that convolutional arithmetic circuits possess the property of "complete depth
efficiency", meaning that besides a negligible set, all functions that can be
implemented by a deep network of polynomial size require exponential size in
order to be implemented (or even approximated) by a shallow network. In this
paper we describe a construction based on generalized tensor decompositions
that transforms convolutional arithmetic circuits into convolutional rectifier
networks. We then use mathematical tools available from the world of arithmetic
circuits to prove new results. First, we show that convolutional rectifier
networks are universal with max pooling but not with average pooling. Second,
and more importantly, we show that depth efficiency is weaker with
convolutional rectifier networks than it is with convolutional arithmetic
circuits. This leads us to believe that developing effective methods for
training convolutional arithmetic circuits, thereby fulfilling their expressive
potential, may give rise to a deep learning architecture that is provably
superior to convolutional rectifier networks but has so far been overlooked by
practitioners.
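To make the contrast concrete, below is a minimal NumPy sketch (ours, not the authors' code) of one hidden layer in each architecture. Both follow the same template of a 1x1 convolution followed by spatial pooling, and differ only in the choice of activation and pooling operators, which is precisely the degree of freedom the generalized tensor decomposition view exposes. All names and sizes are illustrative.

```python
import numpy as np

def conv_block(x, w, sigma, pool):
    """One hidden layer: 1x1 conv -> point-wise activation -> 2x2 pooling.

    x: (H, W, C_in) feature map; w: (C_in, C_out) 1x1 conv weights.
    sigma: point-wise activation; pool: reduction over each 2x2 window.
    """
    z = sigma(x @ w)                              # 1x1 conv is a linear map per location
    H, W, C = z.shape
    windows = z.reshape(H // 2, 2, W // 2, 2, C)  # non-overlapping 2x2 windows
    return pool(windows)                          # reduce each window to one vector

relu = lambda t: np.maximum(t, 0.0)
identity = lambda t: t
max_pool = lambda win: win.max(axis=(1, 3))
prod_pool = lambda win: win.prod(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 3))   # toy 8x8 input with 3 channels
w = rng.standard_normal((3, 16))     # 1x1 conv weights: 3 -> 16 channels

# Convolutional rectifier network layer: ReLU activation, max pooling.
rect_out = conv_block(x, w, relu, max_pool)
# Convolutional arithmetic circuit layer: linear activation, product pooling.
arith_out = conv_block(x, w, identity, prod_pool)
print(rect_out.shape, arith_out.shape)  # (4, 4, 16) (4, 4, 16)
```

Swapping max for an average over `axis=(1, 3)` yields the average-pooling variant discussed in the abstract; stacking such blocks gives the deep networks whose depth efficiency the paper analyzes.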
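The "complete depth efficiency" property can also be phrased a little more explicitly. The statement below is our paraphrase of the abstract, not the paper's theorem; the notation is ours: $N$ denotes input size, $h_\theta$ the function computed by the deep network with weights $\theta$, and $\mathrm{size}_{\mathrm{shallow}}(f)$ the minimal size at which a shallow network realizes (or even approximates) $f$.

```latex
% Hedged paraphrase of complete depth efficiency (notation ours):
% for Lebesgue-almost every weight setting of a polynomially sized deep
% network, the computed function cannot be realized, or even approximated,
% by a shallow network of sub-exponential size.
\text{for almost every } \theta:\quad
\mathrm{size}_{\mathrm{shallow}}\bigl(h_{\theta}\bigr) \;\geq\; 2^{\Omega(N)}
```

The paper's second main result can then be read as saying that convolutional rectifier networks satisfy only a weaker form of this statement than convolutional arithmetic circuits do.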