PreAdapter: Pre-training Language Models on Knowledge Graphs
J. Omeliyanenko, A. Hotho, and D. Schlör. The Semantic Web -- ISWC 2024, pages 210--226. Cham, Springer Nature Switzerland, 2025.
Abstract
Pre-trained language models have demonstrated state-of-the-art performance in various downstream tasks such as summarization, sentiment classification, and question answering. Leveraging vast amounts of textual data during training, these models inherently hold a certain amount of factual knowledge, which is particularly beneficial for knowledge-driven tasks such as question answering. However, the knowledge implicitly contained within the language models is not complete. Consequently, many studies incorporate additional knowledge from Semantic Web resources such as knowledge graphs, which provide an explicit representation of knowledge in the form of triples.
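The abstract mentions that knowledge graphs make knowledge explicit as triples. As a rough, illustrative sketch only (not the PreAdapter method itself), the snippet below shows how (subject, relation, object) triples could be verbalized into short sentences that a language model might consume as additional pre-training text; the example triples, templates, and function names are assumptions made for illustration.

# Illustrative sketch: verbalizing knowledge-graph triples into text.
# This is NOT the PreAdapter approach from the paper; all names, triples,
# and templates below are hypothetical examples.

triples = [
    ("Berlin", "capital_of", "Germany"),
    ("Marie Curie", "won_award", "Nobel Prize in Physics"),
    ("Python", "created_by", "Guido van Rossum"),
]

# Hypothetical relation-to-sentence templates used only for this sketch.
templates = {
    "capital_of": "{s} is the capital of {o}.",
    "won_award": "{s} won the {o}.",
    "created_by": "{s} was created by {o}.",
}

def verbalize(subject: str, relation: str, obj: str) -> str:
    """Turn a single (subject, relation, object) triple into a sentence."""
    template = templates.get(relation, "{s} {r} {o}.")
    return template.format(s=subject, r=relation.replace("_", " "), o=obj)

if __name__ == "__main__":
    for s, r, o in triples:
        print(verbalize(s, r, o))
    # The resulting sentences could then be used as extra training text for a
    # standard language-model pre-training objective (e.g. masked-token prediction).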
@inproceedings{10.1007/978-3-031-77850-6_12,
abstract = {Pre-trained language models have demonstrated state-of-the-art performance in various downstream tasks such as summarization, sentiment classification, and question answering. Leveraging vast amounts of textual data during training, these models inherently hold a certain amount of factual knowledge, which is particularly beneficial for knowledge-driven tasks such as question answering. However, the knowledge implicitly contained within the language models is not complete. Consequently, many studies incorporate additional knowledge from Semantic Web resources such as knowledge graphs, which provide an explicit representation of knowledge in the form of triples.},
added-at = {2025-01-20T03:32:45.000+0100},
address = {Cham},
author = {Omeliyanenko, Janna and Hotho, Andreas and Schl{\"o}r, Daniel},
biburl = {https://www.bibsonomy.org/bibtex/2b0b69c4d5f48b6fb6c8c9caef6bf85dc/dmir},
booktitle = {The Semantic Web -- ISWC 2024},
editor = {Demartini, Gianluca and Hose, Katja and Acosta, Maribel and Palmonari, Matteo and Cheng, Gong and Skaf-Molli, Hala and Ferranti, Nicolas and Hern{\'a}ndez, Daniel and Hogan, Aidan},
interhash = {62289724f7151574ed37c5d9a429d83b},
intrahash = {b0b69c4d5f48b6fb6c8c9caef6bf85dc},
isbn = {978-3-031-77850-6},
keywords = {myown 2024 kg knowledge LLM graph from:hotho},
pages = {210--226},
publisher = {Springer Nature Switzerland},
timestamp = {2025-01-20T03:32:45.000+0100},
title = {PreAdapter: Pre-training Language Models on Knowledge Graphs},
url = {https://link.springer.com/chapter/10.1007/978-3-031-77850-6_12},
year = 2025
}