Abstract
Relational machine learning studies methods for the statistical analysis of
relational, or graph-structured, data. In this paper, we provide a review of
how such statistical models can be "trained" on large knowledge graphs, and
then used to predict new facts about the world (which is equivalent to
predicting new edges in the graph). In particular, we discuss two fundamentally
different kinds of statistical relational models, both of which can scale to
massive datasets. The first is based on latent feature models such as tensor
factorization and multiway neural networks. The second is based on mining
observable patterns in the graph. We also show how to combine these latent and
observable models to get improved modeling power at decreased computational
cost. Finally, we discuss how such statistical models of graphs can be combined
with text-based information extraction methods for automatically constructing
knowledge graphs from the Web. To this end, we also discuss Google's Knowledge
Vault project as an example of such a combination.
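To make the first model family concrete, the following is a minimal sketch of a bilinear latent feature model in the spirit of tensor factorization (the names, dimensions, and random data are illustrative assumptions, not the paper's implementation): each entity gets an embedding vector, each relation a matrix, and a candidate edge is scored by a bilinear form over the embeddings.

```python
import numpy as np

# Toy illustration (hypothetical setup, not from the paper): score a
# candidate triple (subject, relation, object) with a bilinear latent
# feature model, f(s, r, o) = e_s^T W_r e_o.

rng = np.random.default_rng(0)
n_entities, n_relations, d = 5, 2, 4

E = rng.normal(size=(n_entities, d))       # one d-dim embedding per entity
W = rng.normal(size=(n_relations, d, d))   # one d x d matrix per relation

def score(s, r, o):
    """Bilinear score of the triple (s, r, o)."""
    return E[s] @ W[r] @ E[o]

def prob(s, r, o):
    """Squash the score into a plausibility in (0, 1) for link prediction."""
    return 1.0 / (1.0 + np.exp(-score(s, r, o)))

# Plausibility of the edge (entity 0, relation 1, entity 3):
p = prob(0, 1, 3)
```

In a trained model, `E` and `W` would be learned from the observed edges rather than drawn at random; ranking candidate triples by `prob` then amounts to predicting new edges in the graph.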