A Graph-to-Sequence Model for AMR-to-Text Generation
L. Song, Y. Zhang, Z. Wang, and D. Gildea. (2018). arXiv:1805.02473. Comment: ACL 2018 camera-ready, Proceedings of ACL 2018.
Abstract
The problem of AMR-to-text generation is to recover a text representing the
same meaning as an input AMR graph. The current state-of-the-art method uses a
sequence-to-sequence model, leveraging LSTM for encoding a linearized AMR
structure. Although it can model non-local semantic information, a sequence
LSTM can lose information from the AMR graph structure and thus faces
challenges with large graphs, which result in long sequences. We introduce a
neural graph-to-sequence model, using a novel LSTM structure for directly
encoding graph-level semantics. On a standard benchmark, our model shows
superior results to existing methods in the literature.
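
For readers who want the mechanism rather than just the claim, the sketch below illustrates the graph-state recurrence the abstract alludes to: every AMR node keeps a hidden state, and the states are refined over a fixed number of steps by exchanging gated messages with graph neighbours, so information flows along the graph itself instead of through a lossy linearization. This is a minimal NumPy sketch; the variable names, dimensions, and the simplified GRU-style gate (a stand-in for the paper's LSTM gates) are illustrative assumptions, not the authors' exact equations.

    # Minimal sketch of a graph-state recurrence over an AMR graph.
    # All names, sizes, and the simplified gating are illustrative
    # assumptions, not the paper's exact formulation.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 8        # hidden size (assumption)
    STEPS = 4    # number of state-update iterations (assumption)

    # Toy AMR: (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))
    # A seq2seq baseline would consume a linearization of this string;
    # here we keep the graph structure explicit instead.
    nodes = ["want-01", "boy", "go-01"]
    edges = [(0, 1), (0, 2), (2, 1)]   # directed edges: (source, target)

    # Node embeddings and shared, randomly initialised parameters.
    x = {i: rng.normal(size=D) for i in range(len(nodes))}
    W_gate, W_cand = (rng.normal(scale=0.1, size=(D, 2 * D)) for _ in range(2))

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    h = {i: np.zeros(D) for i in range(len(nodes))}   # graph state, step 0

    for _ in range(STEPS):
        new_h = {}
        for i in range(len(nodes)):
            # Aggregate messages from incoming and outgoing neighbours,
            # mirroring the paper's sensitivity to edge direction.
            m_in = sum((h[s] for s, t in edges if t == i), np.zeros(D))
            m_out = sum((h[t] for s, t in edges if s == i), np.zeros(D))
            z = np.concatenate([x[i], m_in + m_out])
            # Simplified GRU-like update (stand-in for the paper's LSTM gates).
            gate = sigmoid(W_gate @ z)
            cand = np.tanh(W_cand @ z)
            new_h[i] = gate * cand + (1.0 - gate) * h[i]
        h = new_h

    print({nodes[i]: np.round(h[i][:3], 3) for i in range(len(nodes))})

After the final step, each node state summarizes its multi-hop neighbourhood; the paper then generates the output text from these states with an attention-based LSTM decoder.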
@misc{song2018graphtosequence,
  author   = {Song, Linfeng and Zhang, Yue and Wang, Zhiguo and Gildea, Daniel},
  title    = {A Graph-to-Sequence Model for AMR-to-Text Generation},
  year     = {2018},
  url      = {http://arxiv.org/abs/1805.02473},
  keywords = {deepgeneration acl2018 neuralnet rnn},
  note     = {arXiv:1805.02473. ACL 2018 camera-ready, Proceedings of ACL 2018}
}