BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
J. Devlin, M. Chang, K. Lee, and K. Toutanova. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171--4186. Minneapolis, Minnesota: Association for Computational Linguistics, June 2019.
DOI: 10.18653/v1/N19-1423
Abstract
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
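The abstract's central claim is that a single pre-trained bidirectional encoder plus one new output layer suffices for many tasks. Below is a minimal sketch of that recipe, assuming the Hugging Face transformers library and the released bert-base-uncased checkpoint (neither is part of this record, and the paper's original implementation used TensorFlow); the three-way label set and the sentence pair are illustrative placeholders, not the paper's experimental setup.

# Sketch: sentence-pair classification (e.g. NLI) with a pre-trained
# BERT encoder plus one newly initialized classification layer.
# Assumption: Hugging Face transformers + bert-base-uncased checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels adds a randomly initialized linear output layer on top of
# the encoder; every other weight comes from pre-training.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

# Sentence pairs are packed into one input, [CLS] premise [SEP] hypothesis [SEP],
# so the encoder attends across both sentences in all layers.
inputs = tokenizer(
    "A soccer game with multiple males playing.",   # placeholder premise
    "Some men are playing a sport.",                # placeholder hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 3); untrained head, so scores are not yet meaningful

Fine-tuning then trains all weights end-to-end on labeled pairs (e.g. MultiNLI) for a few epochs; only the linear classification head is new, which is what "just one additional output layer" refers to.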
@inproceedings{devlin-etal-2019-bert,
address = {Minneapolis, Minnesota},
author = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
doi = {10.18653/v1/N19-1423},
month = jun,
pages = {4171--4186},
publisher = {Association for Computational Linguistics},
title = {{BERT}: Pre-training of Deep Bidirectional Transformers for Language Understanding},
url = {https://www.aclweb.org/anthology/N19-1423},
year = 2019
}