Learning Semantic Textual Similarity from Conversations
Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petar Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil.
Proceedings of the Third Workshop on Representation Learning for NLP, pages 164–174. Melbourne, Australia, Association for Computational Linguistics, July 2018.

We present a novel approach to learning representations for sentence-level semantic similarity using conversational data. Our method trains an unsupervised model to predict conversational responses. The resulting sentence embeddings perform well on the Semantic Textual Similarity (STS) Benchmark and SemEval 2017's Community Question Answering (CQA) question similarity subtask. Performance is further improved by introducing multitask training, combining conversational response prediction and natural language inference. Extensive experiments show that the proposed model achieves the best performance among all neural models on the STS Benchmark and is competitive with the state-of-the-art feature-engineered and mixed systems on both tasks.
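As a rough illustration of the response-prediction objective described in the abstract, the sketch below shows one common way such a model can be trained: a shared sentence encoder embeds inputs and responses, dot products score each input against every response in the batch, and a softmax rewards the true response. This is not the authors' implementation; the toy averaging encoder, the response-side projection, the dimensions, and the use of in-batch negatives are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of a conversational
# response-prediction objective of the kind described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AveragingEncoder(nn.Module):
    """Toy bag-of-words sentence encoder, standing in for a stronger model."""
    def __init__(self, vocab_size: int = 10000, dim: int = 256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        # token_ids: flat 1-D tensor of word ids; offsets: start index of each sentence
        return torch.tanh(self.proj(self.embed(token_ids, offsets)))

def response_prediction_loss(encoder: nn.Module,
                             response_head: nn.Module,
                             input_batch, response_batch) -> torch.Tensor:
    """Score every input against every response in the batch; the matching
    response (the diagonal of the score matrix) is the positive, all other
    responses in the batch act as negatives."""
    u = encoder(*input_batch)                     # (B, dim) input embeddings
    v = response_head(encoder(*response_batch))   # (B, dim) response embeddings
    scores = u @ v.t()                            # (B, B) dot-product similarities
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)       # softmax over candidate responses
```

At evaluation time, the similarity of two sentences would simply be the similarity (e.g. cosine) of their encoder outputs, which is how the learned embeddings are applied to STS-style tasks; the multitask variant described in the abstract additionally trains the same encoder on natural language inference.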