Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and
ConceptNet) poses unique challenges compared to much-studied conventional
knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form
text to represent nodes, resulting in orders of magnitude more nodes than
conventional KBs (18x more nodes in ATOMIC than in Freebase (FB15K-237)).
Importantly, this implies significantly sparser graph structures, a major
challenge for existing KB completion methods that assume densely connected
graphs over a relatively small set of nodes. In this paper, we present novel
KB completion models that address these challenges by exploiting the
structural and semantic context of nodes. Specifically, we investigate two key
ideas: (1) learning from local graph structure, using graph convolutional
networks and automatic graph densification, and (2) transfer learning from
pre-trained language models to knowledge graphs for enhanced contextual
representation of knowledge. We describe our method for incorporating
information from both of these sources in a joint model and provide the first
empirical results for KB completion on ATOMIC and evaluation with ranking
metrics on ConceptNet. Our results demonstrate the effectiveness of language
model representations in boosting link prediction performance and the
advantages of learning from local graph structure (+1.5 points in MRR for
ConceptNet) when training on subgraphs for computational efficiency. Further
analysis of model predictions sheds light on the types of commonsense
knowledge that language models capture well.
Description: Commonsense Knowledge Base Completion with Structural and Semantic Context
@inproceedings{malaviya2019commonsense,
abstract = {Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and
ConceptNet) poses unique challenges compared to much-studied conventional
knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form
text to represent nodes, resulting in orders of magnitude more nodes than
conventional KBs (18x more nodes in ATOMIC than in Freebase (FB15K-237)).
Importantly, this implies significantly sparser graph structures, a major
challenge for existing KB completion methods that assume densely connected
graphs over a relatively small set of nodes. In this paper, we present novel
KB completion models that address these challenges by exploiting the
structural and semantic context of nodes. Specifically, we investigate two key
ideas: (1) learning from local graph structure, using graph convolutional
networks and automatic graph densification, and (2) transfer learning from
pre-trained language models to knowledge graphs for enhanced contextual
representation of knowledge. We describe our method for incorporating
information from both of these sources in a joint model and provide the first
empirical results for KB completion on ATOMIC and evaluation with ranking
metrics on ConceptNet. Our results demonstrate the effectiveness of language
model representations in boosting link prediction performance and the
advantages of learning from local graph structure (+1.5 points in MRR for
ConceptNet) when training on subgraphs for computational efficiency. Further
analysis of model predictions sheds light on the types of commonsense
knowledge that language models capture well.},
added-at = {2021-07-20T14:29:16.000+0200},
author = {Malaviya, Chaitanya and Bhagavatula, Chandra and Bosselut, Antoine and Choi, Yejin},
biburl = {https://www.bibsonomy.org/bibtex/256ce8933eb9acb4d310eb5ca6495df17/jannaom},
booktitle = {Proceedings of AAAI},
description = {Commonsense Knowledge Base Completion with Structural and Semantic Context},
interhash = {556048a6e3012da61564be448658bca8},
intrahash = {56ce8933eb9acb4d310eb5ca6495df17},
keywords = {graph kg knowledge language model},
timestamp = {2021-10-07T10:16:33.000+0200},
title = {Commonsense Knowledge Base Completion with Structural and Semantic Context},
url = {http://arxiv.org/abs/1910.02915},
year = 2020
}