@misc{palangi2015distributed,
abstract = {Various studies that address the compressed sensing problem with Multiple
Measurement Vectors (MMVs) have recently been carried out. These studies assume the
vectors of the different channels to be jointly sparse. In this paper, we relax
this condition. Instead, we assume that these sparse vectors depend on each
other but that this dependency is unknown. We capture this dependency by
computing the conditional probability of each entry in each vector being
non-zero, given the "residuals" of all previous vectors. To estimate these
probabilities, we propose the use of the Long Short-Term Memory (LSTM) [1], a
data-driven model for sequence modelling that is deep in time. To calculate the
model parameters, we minimize a cross-entropy cost function. To reconstruct the
sparse vectors at the decoder, we propose a greedy solver that uses the above
model to estimate the conditional probabilities. By performing extensive
experiments on two real-world datasets, we show that the proposed method
significantly outperforms the general MMV solver (the Simultaneous Orthogonal
Matching Pursuit (SOMP)) and the model-based Bayesian methods including
Multitask Compressive Sensing [2] and Sparse Bayesian Learning for Temporally
Correlated Sources [3]. The proposed method does not add any complexity to the
general compressive sensing encoder. The trained model is used only at the
decoder. As the proposed method is data driven, it is only applicable
when training data is available. In many applications, however, training data is
indeed available, e.g. in recorded images and videos.},
added-at = {2015-09-08T13:04:21.000+0200},
author = {Palangi, Hamid and Ward, Rabab and Deng, Li},
biburl = {https://www.bibsonomy.org/bibtex/2b0466df0be1eaa4924b28e59be747467/stdiff},
description = {Distributed Compressive Sensing: A Deep Learning Approach},
interhash = {2ec25a22f40c52481f951a13efc71605},
intrahash = {b0466df0be1eaa4924b28e59be747467},
keywords = {deep-learning},
note = {cite arXiv:1508.04924},
timestamp = {2015-09-08T13:04:21.000+0200},
title = {Distributed Compressive Sensing: A Deep Learning Approach},
url = {http://arxiv.org/abs/1508.04924},
year = 2015
}