Abstract
In recent years, the Semantic Web has been growing steadily. Today, more than 10,000 datasets are available online following Semantic Web standards.
Nevertheless, many applications, such as data integration, search, and interlinking, may not be able to take full advantage of the data without a priori statistical information about its internal structure and coverage.
In fact, a number of tools already offer such statistics, providing basic information about RDF datasets and vocabularies.
However, they usually show severe performance deficiencies once the dataset size grows beyond the capabilities of a single machine.
In this paper, we introduce a software component for computing statistics over large RDF datasets that scales out to clusters of machines.
More specifically, we describe the first distributed in-memory approach for computing 32 different statistical criteria for RDF datasets using Apache Spark.
The preliminary results show that our distributed approach improves upon a previous centralized approach we compare against and provides approximately linear horizontal scale-up.
The set of criteria is extensible beyond the 32 defaults, and the component is integrated into the larger SANSA framework and employed in at least four major usage scenarios beyond the SANSA community.
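
To make the kind of computation described above concrete, here is a minimal Spark sketch in Scala, not taken from the paper, that computes one simple statistical criterion, triple counts per distinct predicate, as a distributed in-memory job. The input path and the whitespace-based N-Triples splitting are illustrative assumptions; a real pipeline such as SANSA would use a proper RDF parser.

```scala
import org.apache.spark.sql.SparkSession

object PredicateStats {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RDF predicate statistics")
      .getOrCreate()

    // Read an N-Triples file: one triple per line (hypothetical path).
    // Naive split into subject, predicate, rest; skips blank/comment lines.
    val triples = spark.sparkContext
      .textFile("hdfs:///data/dataset.nt")
      .filter(line => line.trim.nonEmpty && !line.startsWith("#"))
      .map(_.split("\\s+", 3))
      .filter(_.length == 3)

    // Criterion: number of triples per distinct predicate, computed
    // as a distributed map/reduce across the cluster's partitions.
    val usage = triples
      .map(parts => (parts(1), 1L))
      .reduceByKey(_ + _)

    usage.take(10).foreach { case (p, n) => println(s"$p\t$n") }

    spark.stop()
  }
}
```

Other criteria follow the same pattern: a per-triple extraction (map) followed by a distributed aggregation (reduce), which is what allows the approach to scale roughly linearly with the number of machines.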