Smoothed Dilated Convolutions for Improved Dense Prediction
Z. Wang and S. Ji (2018). arXiv:1808.08931. Comment: The original version was accepted by KDD 2018. Code is publicly available at https://github.com/divelab/dilated.
DOI: 10.1145/3219819.3219944
Abstract
Dilated convolutions, also known as atrous convolutions, have been widely
explored in deep convolutional neural networks (DCNNs) for various dense
prediction tasks. However, dilated convolutions suffer from gridding
artifacts, which hamper performance. In this work, we propose two simple
yet effective degridding methods by studying a decomposition of dilated
convolutions. Unlike existing models, which explore solutions by focusing on a
block of cascaded dilated convolutional layers, our methods address the
gridding artifacts by smoothing the dilated convolution itself. In addition, we
point out that the two degridding approaches are intrinsically related, and we
define separable and shared (SS) operations, which generalize the proposed
methods. We further explore SS operations in view of operations on graphs and
propose the SS output layer, which is able to smooth an entire DCNN by
replacing only the output layer. We evaluate our degridding methods and the SS
output layer thoroughly, and we visualize the smoothing effect through effective
receptive field analysis. Results show that our degridding methods yield
consistent improvements on dense prediction tasks while adding a negligible
number of extra training parameters. The SS output layer improves performance
significantly and is very efficient in terms of the number of training
parameters.
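To make the smoothing idea concrete, here is a minimal PyTorch sketch of the general approach (the official code at https://github.com/divelab/dilated is in TensorFlow): a small separable and shared pre-filter, i.e., a single kernel shared by all channels, is applied to the input before the dilated convolution, so that neighboring units interact instead of falling into the disjoint groups that cause gridding. The class name `SmoothedDilatedConv2d`, the identity initialization, and all hyperparameters are illustrative assumptions, not the authors' API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmoothedDilatedConv2d(nn.Module):
    """Illustrative sketch (not the authors' code): smooth the input with a
    single small kernel shared across all channels, then apply the dilated
    convolution. The shared pre-filter lets neighboring units contribute to
    every dilated sampling position, countering gridding artifacts."""

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 dilation=2, smooth_size=3):
        super().__init__()
        # One smooth_size x smooth_size kernel shared by every channel:
        # only smooth_size**2 extra parameters per layer.
        self.smooth_kernel = nn.Parameter(torch.empty(1, 1, smooth_size, smooth_size))
        nn.init.dirac_(self.smooth_kernel)  # identity init (an assumption here):
                                            # starts as a plain dilated conv
        self.dilated = nn.Conv2d(in_channels, out_channels, kernel_size,
                                 padding=dilation * (kernel_size // 2),
                                 dilation=dilation)

    def forward(self, x):
        c = x.size(1)
        # Apply the shared kernel depthwise (groups=c), so every channel
        # is smoothed by the same filter before the dilated convolution.
        weight = self.smooth_kernel.expand(c, 1, -1, -1)
        x = F.conv2d(x, weight, padding=self.smooth_kernel.size(-1) // 2, groups=c)
        return self.dilated(x)

if __name__ == "__main__":
    layer = SmoothedDilatedConv2d(16, 32, dilation=2)
    y = layer(torch.randn(1, 16, 64, 64))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Starting the shared kernel as a Dirac (identity) filter is one possible design choice: the layer then behaves exactly like a plain dilated convolution at initialization and learns how much smoothing to apply during training, which is consistent with the abstract's claim that degridding adds only a negligible number of extra parameters.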
@misc{wang2018smoothed,
  author = {Wang, Zhengyang and Ji, Shuiwang},
  title  = {Smoothed Dilated Convolutions for Improved Dense Prediction},
  year   = 2018,
  doi    = {10.1145/3219819.3219944},
  url    = {http://arxiv.org/abs/1808.08931},
  note   = {arXiv:1808.08931. The original version was accepted by KDD 2018. Code is publicly available at https://github.com/divelab/dilated}
}