@misc{mccann2017learned,
abstract = {Computer vision has benefited from initializing multiple deep layers with
weights pretrained on large supervised training sets like ImageNet. Natural
language processing (NLP) typically sees initialization of only the lowest
layer of deep models with pretrained word vectors. In this paper, we use a deep
LSTM encoder from an attentional sequence-to-sequence model trained for machine
translation (MT) to contextualize word vectors. We show that adding these
context vectors (CoVe) improves performance over using only unsupervised word
and character vectors on a wide variety of common NLP tasks: sentiment analysis
(SST, IMDb), question classification (TREC), entailment (SNLI), and question
answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe
improves performance of our baseline models to the state of the art.},
added-at = {2017-12-23T15:24:43.000+0100},
author = {McCann, Bryan and Bradbury, James and Xiong, Caiming and Socher, Richard},
biburl = {https://www.bibsonomy.org/bibtex/2c341c86354737d30fec67f08719cfee7/thoni},
description = {Learned in Translation: Contextualized Word Vectors},
interhash = {043e5dc1490019e2e8d6a45b64418872},
intrahash = {c341c86354737d30fec67f08719cfee7},
keywords = {contextualization embeddings word},
note = {cite arxiv:1708.00107},
timestamp = {2017-12-23T15:24:43.000+0100},
title = {Learned in Translation: Contextualized Word Vectors},
url = {http://arxiv.org/abs/1708.00107},
year = 2017
}