S. Jain and B. C. Wallace. Attention is not Explanation. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543--3556. Minneapolis, Minnesota, Association for Computational Linguistics, (June 2019)
DOI: 10.18653/v1/N19-1357
Abstract
Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful "explanations" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do.
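
The abstract's central check, whether attention weights agree with gradient-based measures of feature importance, can be illustrated with a small sketch. This is not the authors' code: the toy model, hyperparameters, and the use of Kendall's tau rank correlation on a single random input are illustrative assumptions (the paper runs such correlation analyses at scale across datasets and model variants).

# Illustrative sketch (assumed setup, not the paper's implementation):
# correlate attention weights with a gradient-based importance measure
# for one input to a toy attention classifier.
import torch
import torch.nn as nn
from scipy.stats import kendalltau

torch.manual_seed(0)

class ToyAttentionClassifier(nn.Module):
    """Embed tokens, attend over positions, classify from the attended summary."""
    def __init__(self, vocab_size=100, emb_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.attn_scorer = nn.Linear(emb_dim, 1)   # simplified attention scorer
        self.out = nn.Linear(emb_dim, 2)

    def forward(self, token_ids):
        h = self.emb(token_ids)                     # (seq_len, emb_dim)
        h.retain_grad()                             # keep gradients w.r.t. embeddings
        alpha = torch.softmax(self.attn_scorer(h).squeeze(-1), dim=0)  # attention weights
        summary = (alpha.unsqueeze(-1) * h).sum(dim=0)
        return self.out(summary), alpha, h

model = ToyAttentionClassifier()
tokens = torch.randint(0, 100, (12,))
logits, alpha, h = model(tokens)

# Gradient-based importance: norm of d(predicted-class logit)/d(token embedding).
logits[logits.argmax()].backward()
grad_importance = h.grad.norm(dim=-1)

# Rank correlation between the two candidate "explanations" for this input.
tau, _ = kendalltau(alpha.detach().numpy(), grad_importance.numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f}")

A low or unstable tau across many inputs is the kind of evidence the paper uses to argue that attention weights are a poor proxy for feature importance.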
%0 Conference Paper
%1 jain-wallace-2019-attention
%A Jain, Sarthak
%A Wallace, Byron C.
%B Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
%C Minneapolis, Minnesota
%D 2019
%I Association for Computational Linguistics
%K attention explanation interpretability neuralnet nlp
%P 3543--3556
%R 10.18653/v1/N19-1357
%T Attention is not Explanation
%U https://aclanthology.org/N19-1357
%X Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful ``explanations'' for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do.
@inproceedings{jain-wallace-2019-attention,
abstract = {Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful {``}explanations{''} for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do.},
added-at = {2022-02-07T16:00:04.000+0100},
address = {Minneapolis, Minnesota},
author = {Jain, Sarthak and Wallace, Byron C.},
biburl = {https://www.bibsonomy.org/bibtex/2b65fedb2ca5c7f972dee88dc7f36242a/albinzehe},
booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
doi = {10.18653/v1/N19-1357},
interhash = {8b147bce6c9083a2945bb5266e60f34b},
intrahash = {b65fedb2ca5c7f972dee88dc7f36242a},
keywords = {attention explanation interpretability neuralnet nlp},
month = jun,
pages = {3543--3556},
publisher = {Association for Computational Linguistics},
timestamp = {2022-02-07T16:00:04.000+0100},
title = {{A}ttention is not {E}xplanation},
url = {https://aclanthology.org/N19-1357},
year = 2019
}