Abstract

Transformer-based models are now widely used in NLP, but our understanding of their inner workings remains limited. This paper describes what is known to date about the popular BERT model (Devlin et al., 2019), synthesizing over 40 analysis studies. We also provide an overview of proposed modifications to the model and its training regime, and outline directions for further research.

Description

[2002.12327] A Primer in BERTology: What we know about how BERT works

Links and resources

  • https://arxiv.org/abs/2002.12327

Community

  • @bechr7
  • @kirk86
  • @nosebrain
  • @cmcneile
  • @dblp