Abstract
Words in natural language follow a Zipfian distribution whereby some words
are frequent but most are rare. Learning representations for words in the "long
tail" of this distribution requires enormous amounts of data. Representations
of rare words trained directly on end-tasks are usually poor, requiring us to
pre-train embeddings on external data, or treat all rare words as
out-of-vocabulary words with a unique representation. We provide a method for
predicting embeddings of rare words on the fly from small amounts of auxiliary
data with a network trained against the end task. We show that this improves
over baselines in which embeddings are trained directly on the end task, on a
reading comprehension task, a recognizing textual entailment task, and on
language modelling.
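
The idea can be sketched as follows: when a rare word is missing from the embedding table, encode a small piece of auxiliary data about it (for instance, its dictionary definition) with a network whose parameters are trained jointly with the end task. The sketch below is a minimal illustration under assumed details, not the paper's implementation: the function `embed_on_the_fly`, the toy vocabulary, and the single linear map `W` are all hypothetical stand-ins for the trained components.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical embedding table covering frequent (in-vocabulary) words only.
vocab = {w: rng.normal(size=DIM) for w in ["a", "small", "fast", "dog"]}

def embed_on_the_fly(definition_words, W):
    """Predict an embedding for a rare word from auxiliary data
    (here, its dictionary definition): mean-pool the definition's
    in-vocabulary embeddings, then apply a linear map W. In the
    real method W would be trained end-to-end against the end task;
    here it is random, for illustration only."""
    vecs = [vocab[w] for w in definition_words if w in vocab]
    pooled = np.mean(vecs, axis=0)
    return W @ pooled

# Stand-in for the learned parameters of the embedding-prediction network.
W = rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)

# A rare word such as "whippet" never gets its own table entry; its
# embedding is computed on the fly from a definition like "a small fast dog".
e = embed_on_the_fly(["a", "small", "fast", "dog"], W)
print(e.shape)  # (8,)
```

The predicted vector is then used wherever the model would have looked up the rare word's embedding, so gradients from the end task flow back into the prediction network.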