Abstract
Solving inverse problems with iterative algorithms is popular, especially for
large-scale data. Due to time constraints, the number of iterations that can be
performed is usually limited, potentially affecting the achievable accuracy. Given an error
one is willing to tolerate, an important question is whether it is possible to
modify the original iterations to obtain faster convergence to a minimizer
achieving the allowed error without increasing the computational cost of each
iteration considerably. Relying on recent recovery techniques developed for
settings in which the desired signal belongs to some low-dimensional set, we
show that using a coarse estimate of this set may lead to faster convergence at
the cost of an additional reconstruction error related to the accuracy of the
set approximation. Our theory ties to recent advances in sparse recovery,
compressed sensing, and deep learning. In particular, it may provide a possible
explanation for the successful approximation of the ℓ1-minimization solution by
neural networks whose layers represent iterations, as practiced in the
learned iterative shrinkage-thresholding algorithm (LISTA).
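For context, LISTA unrolls the classical iterative shrinkage-thresholding algorithm (ISTA) for ℓ1-regularized least squares into network layers. The following is a minimal NumPy sketch of plain ISTA, not the learned variant or the paper's method; the problem sizes, regularization weight, and step-size choice are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    # ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    # Step size 1/L, where L = ||A||_2^2 is a Lipschitz constant
    # of the gradient of the quadratic term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the smooth part, then shrinkage on the l1 part.
        x = soft_threshold(x + (A.T @ (y - A @ x)) / L, lam / L)
    return x
```

In LISTA, the fixed matrices and threshold derived from A above are replaced by learned per-layer parameters, which is how a small, fixed number of layers can mimic many ISTA iterations.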