Abstract
Determining the intended sense of words in text -- word sense disambiguation
(WSD) -- is a long-standing problem in natural language processing. In this
paper, we present WSD algorithms that use neural network language models to
achieve state-of-the-art precision. Each of these methods learns to
disambiguate word senses using only a set of word senses, a few example
sentences for each sense taken from a licensed lexicon, and a large unlabeled
text corpus. We classify a target word's sense by the cosine similarity between
context vectors derived from the unlabeled query sentence and from the labeled
example sentences. We demonstrate
state-of-the-art results when using the WordNet sense inventory, and
performance significantly above baseline when using the New Oxford American
Dictionary inventory. The best performance was achieved by combining an LSTM
language model with graph label propagation.
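The cosine-similarity classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the 2-d toy vectors, and the use of the maximum similarity over each sense's examples are all assumptions made for clarity; in the paper, context vectors would come from an LSTM language model rather than being hand-written.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_sense(query_vec, sense_examples):
    # sense_examples: dict mapping a sense id to a list of context vectors
    # computed from that sense's labeled example sentences (hypothetical
    # data layout). Returns the sense whose examples best match the query
    # context vector under cosine similarity.
    best_sense, best_score = None, -1.0
    for sense, vecs in sense_examples.items():
        score = max(cosine(query_vec, v) for v in vecs)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# Toy 2-d "context vectors" standing in for language-model embeddings.
examples = {
    "bank/finance": [np.array([1.0, 0.1])],
    "bank/river":   [np.array([0.1, 1.0])],
}
print(classify_sense(np.array([0.9, 0.2]), examples))  # → bank/finance
```

A query context vector close to a sense's example vectors yields a high cosine score, so the classifier reduces to nearest-labeled-example search in embedding space.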