We present SpanBERT, a pre-training method that is designed to better
represent and predict spans of text. Our approach extends BERT by (1) masking
contiguous random spans, rather than random tokens, and (2) training the span
boundary representations to predict the entire content of the masked span,
without relying on the individual token representations within it. SpanBERT
consistently outperforms BERT and our better-tuned baselines, with substantial
gains on span selection tasks such as question answering and coreference
resolution. In particular, with the same training data and model size as
BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0,
respectively. We also achieve a new state of the art on the OntoNotes
coreference resolution task (79.6% F1), strong performance on the TACRED
relation extraction benchmark, and even show gains on GLUE.
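
As a concrete illustration of the two ideas above, here is a minimal PyTorch sketch of (1) sampling contiguous spans to mask instead of individual tokens and (2) a span-boundary head that predicts every token inside a masked span from the encoder states of the two tokens just outside its boundaries plus a relative position embedding. The helper names (sample_spans, SpanBoundaryHead) are illustrative, not the authors' released code; the span-length parameters (geometric with p = 0.2, clipped at 10, roughly a 15% masking budget) and the two-layer feed-forward head follow the setup reported in the paper, but treat the details as assumptions.

# Sketch only -- illustrative names and parameters, not the authors' code.
import random
import torch
import torch.nn as nn


def sample_spans(seq_len, mask_budget=0.15, p=0.2, max_span=10):
    """Pick non-overlapping contiguous spans (start, end inclusive) until
    roughly mask_budget of the tokens are covered."""
    budget = int(seq_len * mask_budget)
    masked, spans = set(), []
    while len(masked) < budget:
        # Clipped geometric span length (>= 1), as reported in the paper.
        length = min(int(torch.distributions.Geometric(p).sample()) + 1, max_span)
        # Keep one unmasked token on each side so boundary representations exist.
        start = random.randrange(1, seq_len - length)
        span = range(start, start + length)
        if any(i in masked for i in span):
            continue
        masked.update(span)
        spans.append((start, start + length - 1))
    return spans


class SpanBoundaryHead(nn.Module):
    """Predict each masked token from the encoder states just outside its span
    plus an embedding of its relative position inside the span (SBO)."""

    def __init__(self, hidden, vocab_size, max_span=10):
        super().__init__()
        self.pos_emb = nn.Embedding(max_span, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
            nn.Linear(hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
        )
        self.decoder = nn.Linear(hidden, vocab_size)

    def forward(self, enc, spans):
        # enc: (seq_len, hidden) encoder output for one sequence.
        logits, positions = [], []
        for start, end in spans:
            left, right = enc[start - 1], enc[end + 1]   # external boundary tokens
            for i in range(start, end + 1):
                rel = self.pos_emb(torch.tensor(i - start))
                h = self.mlp(torch.cat([left, right, rel], dim=-1))
                logits.append(self.decoder(h))
                positions.append(i)
        return torch.stack(logits), positions


# Toy usage: compute the SBO loss for one sequence of random encoder states.
hidden, vocab, seq_len = 16, 100, 32
enc = torch.randn(seq_len, hidden)
token_ids = torch.randint(0, vocab, (seq_len,))
spans = sample_spans(seq_len)
head = SpanBoundaryHead(hidden, vocab)
logits, positions = head(enc, spans)
sbo_loss = nn.functional.cross_entropy(logits, token_ids[positions])

Because the head never sees the token representations inside the span, the boundary tokens are pushed to summarize the span's content, which is what the abstract means by predicting the span without relying on the individual token representations within it.
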
%0 Generic
%1 joshi2019spanbert
%A Joshi, Mandar
%A Chen, Danqi
%A Liu, Yinhan
%A Weld, Daniel S.
%A Zettlemoyer, Luke
%A Levy, Omer
%D 2019
%K bert span spanbert
%T SpanBERT: Improving Pre-training by Representing and Predicting Spans
%U http://arxiv.org/abs/1907.10529
@misc{joshi2019spanbert,
author = {Joshi, Mandar and Chen, Danqi and Liu, Yinhan and Weld, Daniel S. and Zettlemoyer, Luke and Levy, Omer},
keywords = {bert span spanbert},
note = {arXiv:1907.10529. Accepted at TACL},
title = {SpanBERT: Improving Pre-training by Representing and Predicting Spans},
url = {http://arxiv.org/abs/1907.10529},
year = 2019
}