Universal Language Model Fine-tuning for Text Classification

Jeremy Howard, and Sebastian Ruder. (2018). cite arxiv:1801.06146. Comment: ACL 2018, fixed denominator in Equation 3, line 3.

Abstract

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
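The pretrained models and code mentioned in the abstract were released through the fastai library. The following is a minimal sketch of the ULMFiT workflow described in the paper (fine-tuning the pretrained language model on target-domain text, then training the classifier with gradual unfreezing and discriminative learning rates), written against the current fastai v2 text API rather than the paper's original codebase; the IMDb dataset, learning rates, and epoch counts are illustrative placeholders, not the paper's exact settings.

    from fastai.text.all import *

    # Assumes an IMDb-style folder layout with 'train' and 'test' subfolders.
    path = untar_data(URLs.IMDB)

    # Stage 1: fine-tune the pretrained AWD-LSTM language model on target-domain text.
    dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
    lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
    lm_learn.fit_one_cycle(1, 2e-2)          # train only the newly initialized layers first
    lm_learn.unfreeze()
    lm_learn.fit_one_cycle(3, 2e-3)
    lm_learn.save_encoder('finetuned_lm')    # keep the encoder for the classifier

    # Stage 2: fine-tune the classifier with gradual unfreezing and
    # discriminative learning rates (lower rates for lower layers).
    dls_clas = TextDataLoaders.from_folder(path, valid='test', text_vocab=dls_lm.vocab)
    clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
    clas_learn.load_encoder('finetuned_lm')
    clas_learn.fit_one_cycle(1, 2e-2)                          # last layer group only
    clas_learn.freeze_to(-2)
    clas_learn.fit_one_cycle(1, slice(1e-2 / (2.6**4), 1e-2))  # unfreeze one more group
    clas_learn.freeze_to(-3)
    clas_learn.fit_one_cycle(1, slice(5e-3 / (2.6**4), 5e-3))
    clas_learn.unfreeze()
    clas_learn.fit_one_cycle(2, slice(1e-3 / (2.6**4), 1e-3))  # all layers

The slice-based learning rates implement the paper's discriminative fine-tuning, with each lower layer group trained at a rate divided by 2.6, the factor recommended in the paper.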
