Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers
L. Wu, S. Li, C.-J. Hsieh, and J. Sharpnack. (2019). arXiv:1905.10630. Comment: Accepted to the 2019 Conference on Neural Information Processing Systems.
Abstract
In deep neural nets, lower-level embedding layers account for a large portion
of the total number of parameters. Tikhonov regularization, graph-based
regularization, and hard parameter sharing are approaches that introduce
explicit biases into training in the hope of reducing statistical complexity.
Alternatively, we propose stochastically shared embeddings (SSE), a data-driven
approach to regularizing embedding layers, which stochastically transitions
between embeddings during stochastic gradient descent (SGD). Because SSE
integrates seamlessly with existing SGD algorithms, it can be used with only
minor modifications when training large-scale neural networks. We develop two
versions of SSE: SSE-Graph, which uses knowledge graphs of embeddings, and
SSE-SE, which uses no prior information. We provide theoretical guarantees for
our method and show its empirical effectiveness on six distinct tasks, from
simple neural networks with one hidden layer in recommender systems to the
Transformer and BERT in natural language tasks. We find that when used
alongside widely used regularization methods such as weight decay and dropout,
our proposed SSE can further reduce overfitting, which often leads to better
generalization.
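
The SSE-SE variant described in the abstract needs no side information: at each SGD step, every embedding index is, with some small probability, swapped for a uniformly random index before the lookup, so gradient updates are occasionally shared across embeddings. Below is a minimal PyTorch-style sketch of that idea, assuming a hypothetical module name SSEEmbedding and swap probability p; it illustrates the mechanism and is not the authors' reference implementation.

import torch
import torch.nn as nn

class SSEEmbedding(nn.Module):
    """Embedding layer with stochastic shared embeddings (SSE-SE).

    A hypothetical sketch: during training, with probability p each index
    in the batch is replaced by a uniformly random index before the
    embedding lookup, so gradients are occasionally shared across rows
    of the embedding table.
    """

    def __init__(self, num_embeddings, embedding_dim, p=0.01):
        super().__init__()
        self.embedding = nn.Embedding(num_embeddings, embedding_dim)
        self.num_embeddings = num_embeddings
        self.p = p  # swap probability; an illustrative hyperparameter

    def forward(self, indices):
        if self.training and self.p > 0:
            # Bernoulli mask selecting which positions get a swapped index.
            swap = torch.rand_like(indices, dtype=torch.float) < self.p
            random_idx = torch.randint_like(indices, high=self.num_embeddings)
            indices = torch.where(swap, random_idx, indices)
        return self.embedding(indices)

At evaluation time the layer reduces to a standard embedding lookup; the swap probability p plays a role analogous to a dropout rate and would be tuned per task.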
Description
Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers
%0 Generic
%1 wu2019stochastic
%A Wu, Liwei
%A Li, Shuqing
%A Hsieh, Cho-Jui
%A Sharpnack, James
%D 2019
%K KG overfitting toread
%T Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers
%U http://arxiv.org/abs/1905.10630
%X In deep neural nets, lower-level embedding layers account for a large portion
of the total number of parameters. Tikhonov regularization, graph-based
regularization, and hard parameter sharing are approaches that introduce
explicit biases into training in the hope of reducing statistical complexity.
Alternatively, we propose stochastically shared embeddings (SSE), a data-driven
approach to regularizing embedding layers, which stochastically transitions
between embeddings during stochastic gradient descent (SGD). Because SSE
integrates seamlessly with existing SGD algorithms, it can be used with only
minor modifications when training large-scale neural networks. We develop two
versions of SSE: SSE-Graph, which uses knowledge graphs of embeddings, and
SSE-SE, which uses no prior information. We provide theoretical guarantees for
our method and show its empirical effectiveness on six distinct tasks, from
simple neural networks with one hidden layer in recommender systems to the
Transformer and BERT in natural language tasks. We find that when used
alongside widely used regularization methods such as weight decay and dropout,
our proposed SSE can further reduce overfitting, which often leads to better
generalization.
@misc{wu2019stochastic,
abstract = {In deep neural nets, lower-level embedding layers account for a large portion
of the total number of parameters. Tikhonov regularization, graph-based
regularization, and hard parameter sharing are approaches that introduce
explicit biases into training in the hope of reducing statistical complexity.
Alternatively, we propose stochastically shared embeddings (SSE), a data-driven
approach to regularizing embedding layers, which stochastically transitions
between embeddings during stochastic gradient descent (SGD). Because SSE
integrates seamlessly with existing SGD algorithms, it can be used with only
minor modifications when training large-scale neural networks. We develop two
versions of SSE: SSE-Graph, which uses knowledge graphs of embeddings, and
SSE-SE, which uses no prior information. We provide theoretical guarantees for
our method and show its empirical effectiveness on six distinct tasks, from
simple neural networks with one hidden layer in recommender systems to the
Transformer and BERT in natural language tasks. We find that when used
alongside widely used regularization methods such as weight decay and dropout,
our proposed SSE can further reduce overfitting, which often leads to better
generalization.},
added-at = {2020-08-23T12:19:27.000+0200},
author = {Wu, Liwei and Li, Shuqing and Hsieh, Cho-Jui and Sharpnack, James},
biburl = {https://www.bibsonomy.org/bibtex/24f3449e10068629622d35c394653b75e/hotho},
description = {Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers},
interhash = {137031dbf758f1a9b719d2ba9556f90f},
intrahash = {4f3449e10068629622d35c394653b75e},
keywords = {KG overfitting toread},
note = {arXiv:1905.10630. Accepted to the 2019 Conference on Neural Information Processing Systems},
timestamp = {2020-08-23T12:19:27.000+0200},
title = {Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers},
url = {http://arxiv.org/abs/1905.10630},
year = 2019
}