Y. Liu and M. Lapata. (2017). Learning Structured Text Representations. arXiv:1705.09207 (comment: change to one-based indexing). Published in Transactions of the Association for Computational Linguistics (TACL), https://transacl.org/ojs/index.php/tacl/article/view/1185/280.
Abstract
In this paper, we focus on learning structure-aware document representations
from data without recourse to a discourse parser or additional annotations.
Drawing inspiration from recent efforts to empower neural networks with a
structural bias, we propose a model that can encode a document while
automatically inducing rich structural dependencies. Specifically, we embed a
differentiable non-projective parsing algorithm into a neural model and use
attention mechanisms to incorporate the structural biases. Experimental
evaluation across different tasks and datasets shows that the proposed model
achieves state-of-the-art results on document modeling tasks while inducing
intermediate structures which are both interpretable and meaningful.
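The "differentiable non-projective parsing algorithm" mentioned in the abstract is, in this line of structured-attention work, typically realized with Kirchhoff's Matrix-Tree Theorem (Koo et al., 2007): marginal arc probabilities over all non-projective dependency trees can be read off the gradient of a log-determinant, which keeps the tree-structured attention weights differentiable end to end. The sketch below illustrates only that computation; it is not the authors' code, and the function and variable names are hypothetical.

    import torch

    def matrix_tree_marginals(arc_scores, root_scores):
        """Marginal arc probabilities over non-projective dependency
        trees via the Matrix-Tree Theorem (illustrative sketch).

        arc_scores:  (n, n) log-potentials; arc_scores[i, j] scores
                     unit i as the parent of unit j
        root_scores: (n,) log-potentials for each unit being the root
        Both inputs must have requires_grad=True.
        """
        n = arc_scores.size(0)
        A = torch.exp(arc_scores) * (1.0 - torch.eye(n))  # no self-loops
        r = torch.exp(root_scores)
        # Laplacian: L[j, j] = sum_i A[i, j]; L[i, j] = -A[i, j] for i != j
        L = torch.diag(A.sum(dim=0)) - A
        # Replace the first row with the root potentials (Koo et al., 2007)
        L = torch.cat([r.unsqueeze(0), L[1:]], dim=0)
        # det(L) is the partition function over trees; the gradient of
        # log Z w.r.t. the log-potentials gives exactly the marginals
        log_Z = torch.logdet(L)
        return torch.autograd.grad(
            log_Z, (arc_scores, root_scores), create_graph=True)

    # Example: structural attention over 5 units of a document
    torch.manual_seed(0)
    arc_scores = torch.randn(5, 5, requires_grad=True)
    root_scores = torch.randn(5, requires_grad=True)
    arc_p, root_p = matrix_tree_marginals(arc_scores, root_scores)
    print(arc_p.sum(dim=0) + root_p)  # each unit has one parent: ~all ones

Reading the marginals off the gradient of log Z (with create_graph=True so second-order gradients flow during training) avoids hand-deriving inverse-Laplacian formulas while keeping the attention distribution exactly normalized over trees.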
@misc{liu2017learning,
abstract = {In this paper, we focus on learning structure-aware document representations
from data without recourse to a discourse parser or additional annotations.
Drawing inspiration from recent efforts to empower neural networks with a
structural bias, we propose a model that can encode a document while
automatically inducing rich structural dependencies. Specifically, we embed a
differentiable non-projective parsing algorithm into a neural model and use
attention mechanisms to incorporate the structural biases. Experimental
evaluation across different tasks and datasets shows that the proposed model
achieves state-of-the-art results on document modeling tasks while inducing
intermediate structures which are both interpretable and meaningful.},
author = {Liu, Yang and Lapata, Mirella},
keywords = {modelling naacl2018 neuralnet rnn session2 structure},
note = {arXiv:1705.09207. Comment: change to one-based indexing. Published in Transactions of the Association for Computational Linguistics (TACL), https://transacl.org/ojs/index.php/tacl/article/view/1185/280},
title = {Learning Structured Text Representations},
url = {http://arxiv.org/abs/1705.09207},
year = 2017
}