Abstract
Tensor regression networks achieve a high rate of compression of model
parameters in multilayer perceptrons (MLPs) with only a slight impact on
performance. A tensor regression layer imposes low-rank constraints on the
weight tensor of the layer that replaces the flattening operation of a
traditional MLP. We investigate tensor regression networks using various
low-rank tensor approximations, aiming to leverage the multi-modal structure
of high-dimensional data by enforcing efficient low-rank constraints. We
provide a theoretical analysis giving insight into the choice of the rank
parameters. We evaluate the performance of the proposed models against
state-of-the-art deep convolutional models. On the CIFAR-10 dataset, we
achieve a compression rate of 0.018 while sacrificing less than 1% of
accuracy.
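To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a tensor regression layer that replaces flatten-plus-dense with a contraction against a CP (rank-R) factorized weight tensor; all shapes, rank values, and function names below are illustrative assumptions, and the compression figure printed is for this toy configuration, not the paper's reported 0.018.

```python
import numpy as np

def cp_tensor_regression(X, mode_factors, out_factor):
    """Low-rank tensor regression layer (CP form).

    X            : activation tensor, shape (batch, d1, d2, d3)
    mode_factors : [U1 (d1, R), U2 (d2, R), U3 (d3, R)]
    out_factor   : Uo (n_out, R)

    Equivalent to flattening X and multiplying by a dense
    (n_out, d1*d2*d3) matrix whose tensorized form has CP rank R.
    """
    U1, U2, U3 = mode_factors
    # Contract each mode of X with its factor, keeping the rank axis.
    z = np.einsum('bijk,ir,jr,kr->br', X, U1, U2, U3)  # (batch, R)
    return z @ out_factor.T                             # (batch, n_out)

# Toy configuration (illustrative sizes, not from the paper).
rng = np.random.default_rng(0)
d1, d2, d3, n_out, R = 8, 8, 32, 10, 4
X = rng.standard_normal((5, d1, d2, d3))
factors = [rng.standard_normal((d, R)) for d in (d1, d2, d3)]
Uo = rng.standard_normal((n_out, R))

y = cp_tensor_regression(X, factors, Uo)

# Parameter count: dense layer vs. CP-factorized layer.
full_params = d1 * d2 * d3 * n_out            # flatten + dense
cp_params = R * (d1 + d2 + d3 + n_out)        # sum of factor sizes
print(y.shape, cp_params / full_params)
```

The compression rate here is the ratio of factor parameters, R(d1 + d2 + d3 + n_out), to the d1*d2*d3*n_out parameters of the dense layer it replaces, which is why small ranks yield large savings.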