Which *BERT? A Survey Organizing Contextualized Encoders
P. Xia, S. Wu, and B. Van Durme. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7516--7533. Association for Computational Linguistics, (November 2020)
DOI: 10.18653/v1/2020.emnlp-main.608
Abstract
Pretrained contextualized text encoders are now a staple of the NLP community. We present a survey on language representation learning with the aim of consolidating a series of shared lessons learned across a variety of recent efforts. While significant advancements continue at a rapid pace, we find that enough has now been discovered, in different directions, that we can begin to organize advances according to common themes. Through this organization, we highlight important considerations when interpreting recent contributions and choosing which model to use.
Description
Which *BERT? A Survey Organizing Contextualized Encoders - ACL Anthology
%0 Conference Paper
%1 xia2020which
%A Xia, Patrick
%A Wu, Shijie
%A Van Durme, Benjamin
%B Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
%D 2020
%I Association for Computational Linguistics
%K bert deep dnn language learning model network neural nlp
%P 7516--7533
%R 10.18653/v1/2020.emnlp-main.608
%T Which *BERT? A Survey Organizing Contextualized Encoders
%U https://www.aclweb.org/anthology/2020.emnlp-main.608
%X Pretrained contextualized text encoders are now a staple of the NLP community. We present a survey on language representation learning with the aim of consolidating a series of shared lessons learned across a variety of recent efforts. While significant advancements continue at a rapid pace, we find that enough has now been discovered, in different directions, that we can begin to organize advances according to common themes. Through this organization, we highlight important considerations when interpreting recent contributions and choosing which model to use.
@inproceedings{xia2020which,
abstract = {Pretrained contextualized text encoders are now a staple of the NLP community. We present a survey on language representation learning with the aim of consolidating a series of shared lessons learned across a variety of recent efforts. While significant advancements continue at a rapid pace, we find that enough has now been discovered, in different directions, that we can begin to organize advances according to common themes. Through this organization, we highlight important considerations when interpreting recent contributions and choosing which model to use.},
author = {Xia, Patrick and Wu, Shijie and Van Durme, Benjamin},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
description = {Which *BERT? A Survey Organizing Contextualized Encoders - ACL Anthology},
doi = {10.18653/v1/2020.emnlp-main.608},
keywords = {bert deep dnn language learning model network neural nlp},
month = nov,
pages = {7516--7533},
publisher = {Association for Computational Linguistics},
title = {Which *{BERT}? {A} Survey Organizing Contextualized Encoders},
url = {https://www.aclweb.org/anthology/2020.emnlp-main.608},
year = 2020
}