Abstract

Reproducibility and repeatability are key properties of benchmarks. However, achieving reproducibility can be difficult. We faced this while applying the micro-benchmark MooBench to the resource monitoring framework SPASS-meter. In this paper, we discuss problems that arose while attempting to reproduce previous benchmarking results. In the course of this reproduction, we extended MooBench and improved the performance of SPASS-meter. We conclude with lessons learned for reproducing (micro-)benchmarks.
