@conference{noauthororeditor,
abstract = {SPARQL is the standard language for querying RDF data. A variety
of SPARQL query evaluation systems exist, implementing different architectures
for the distribution of data and computations. Differences in architectures,
coupled with specific optimizations such as preprocessing and indexing, make
these systems incomparable from a purely theoretical perspective. As a result,
many implementations solve the SPARQL query evaluation problem while exhibiting
very different behaviors, and not all of them are suited to every context. We
provide a new perspective on distributed SPARQL evaluators, based on
multi-criteria experimental rankings. Our suggested set of five features
(namely velocity, immediacy, dynamicity, parsimony, and resiliency) provides a
more comprehensive description of the behaviors of distributed evaluators than
traditional runtime performance metrics. We show how these features help in
more accurately evaluating the extent to which a given system is appropriate
for a given use case. For this purpose, we systematically benchmarked a panel
of 10 state-of-the-art implementations. We ranked them using a reading grid
that helps pinpoint the advantages and limitations of current technologies for
the distributed evaluation of SPARQL queries.},
author = {Graux, Damien and Jachiet, Louis and Genevès, Pierre and Layaïda, Nabil},
booktitle = {Proceedings of the 2018 IEEE International Conference on Big Data (Big Data 2018)},
eventdate = {Dec 2018},
eventtitle = {IEEE Big Data 2018},
keywords = {projecthobbit},
organization = {IEEE},
pages = {1--10},
title = {A Multi-Criteria Experimental Ranking of Distributed SPARQL Evaluators},
url = {https://2018.eswc-conferences.org/wp-content/uploads/2018/02/ESWC2018_paper_149.pdf},
venue = {Seattle, United States},
year = 2018
}