PreAdapter: Pre-training Language Models on Knowledge Graphs

The Semantic Web -- ISWC 2024, pages 210--226. Cham, Springer Nature Switzerland, 2025.

Abstract

Pre-trained language models have demonstrated state-of-the-art performance in various downstream tasks such as summarization, sentiment classification, and question answering. Leveraging vast amounts of textual data during training, these models inherently hold a certain amount of factual knowledge, which is particularly beneficial for knowledge-driven tasks such as question answering. However, the knowledge implicitly contained in language models is incomplete. Consequently, many studies incorporate additional knowledge from Semantic Web resources such as knowledge graphs, which provide an explicit representation of knowledge in the form of triples.
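The page does not describe the paper's method itself, but the abstract's premise -- that knowledge graphs express facts as (subject, predicate, object) triples which can be fed to a language model -- can be illustrated with a small sketch. The Python snippet below is a hypothetical example, not the PreAdapter approach: it verbalizes a few made-up triples into sentences and masks the object, producing the kind of cloze-style input a masked language model could be pre-trained or adapted on. The triples, the template, and the [MASK] token convention are assumptions chosen purely for illustration.

    # Hypothetical sketch: verbalize knowledge-graph triples and mask the object
    # to create cloze-style pre-training examples for a masked language model.
    # The triples and template are illustrative assumptions, not data or code
    # from the PreAdapter paper.

    MASK = "[MASK]"  # mask token convention used by BERT-style models

    # Example triples in (subject, predicate, object) form.
    triples = [
        ("Albert Einstein", "was born in", "Ulm"),
        ("Ulm", "is located in", "Germany"),
    ]

    def verbalize(subject, predicate, obj):
        """Turn a triple into a plain-text sentence."""
        return f"{subject} {predicate} {obj}."

    def mask_object(subject, predicate, obj):
        """Create a cloze sentence where the object is hidden behind the mask token."""
        return f"{subject} {predicate} {MASK}.", obj

    for s, p, o in triples:
        full_sentence = verbalize(s, p, o)        # full fact as text
        masked_sentence, label = mask_object(s, p, o)  # training input and target
        print(masked_sentence, "->", label)
        # e.g. "Albert Einstein was born in [MASK]. -> Ulm"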

Tags

community
