Inproceedings

Expert Sourcing to Support the Identification of Model Elements in System Descriptions

SWQD 2018: Software Quality: Methods and Tools for Better Software and Systems, pages 83--99. (2018)
DOI: 10.1007/978-3-319-71440-0_5

Abstract

Context: Expert sourcing is a novel approach to model quality assurance: it relies on methods and tooling from crowdsourcing research to split the task of model quality assurance and to parallelize its execution across several expert users. Concretely, given a text-based system description and a corresponding model such as an EER diagram, experts are guided towards inspecting the model based on so-called expected model elements (EMEs). EMEs are entities, attributes and relations that appear in the text and are reflected by the corresponding model. EMEs therefore play a crucial role in splitting the model quality assurance task among experts.

Objective & Method: In this paper, we investigate the effectiveness of identifying the EMEs themselves through expert sourcing. To that end, we perform a feasibility study in which we compare EMEs identified through expert sourcing with EMEs provided by a task owner who has deep knowledge of the entire system specification text.

Conclusions: The data analysis shows that the effectiveness of crowdsourcing-style EME acquisition is influenced by the complexity of the EMEs: entity EMEs can be harvested with high recall and precision, whereas the lexical and semantic variations of attribute EMEs hamper their automatic aggregation and the reaching of consensus (these EMEs are harvested with high precision but limited recall). Based on these lessons learned, we propose a new task design for expert sourcing EMEs.
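The evaluation described in the abstract compares expert-sourced EMEs against a task owner's reference EMEs in terms of precision and recall. The sketch below is illustrative only (it is not the paper's tooling); the EME data structure, the precision_recall helper, and the library-system example labels are all hypothetical. It shows how such a set-based comparison could be computed and why purely lexical variation of attribute EMEs lowers recall under exact matching.

from dataclasses import dataclass

@dataclass(frozen=True)
class EME:
    kind: str   # "entity", "attribute", or "relation"
    label: str  # normalized surface form taken from the system description

def precision_recall(harvested, reference):
    # precision = |harvested ∩ reference| / |harvested|
    # recall    = |harvested ∩ reference| / |reference|
    matched = harvested & reference
    precision = len(matched) / len(harvested) if harvested else 0.0
    recall = len(matched) / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical reference EMEs a task owner might derive for a library system.
reference = {
    EME("entity", "book"),
    EME("entity", "member"),
    EME("attribute", "book.isbn"),
    EME("relation", "member-borrows-book"),
}

# Hypothetical EMEs harvested via expert sourcing; the attribute differs only
# lexically from the reference one, so exact matching misses it.
harvested = {
    EME("entity", "book"),
    EME("entity", "member"),
    EME("attribute", "book.isbn_number"),
}

p, r = precision_recall(harvested, reference)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.50

Exact set intersection is the simplest possible aggregation; the abstract's point about lexical and semantic variation of attribute EMEs suggests that a fuzzier matching or normalization step would be needed before consensus can be reached on those elements.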

Users

  • @ekaputra
  • @dblp
