Software performance testing is an essential quality assurance mechanism that can identify optimization opportunities. Automating this process requires strong tool support, especially under Continuous Integration (CI), where tests must run fully automatically and developers should receive actionable feedback. A lack of existing tools means that performance testing is normally left out of the scope of CI. In this paper, we propose a toolchain, PerfCI, that paves the way for developers to easily set up and carry out automated performance testing under CI. Our toolchain allows users to (1) specify performance testing tasks, (2) analyze unit tests on a variety of Python projects, ranging from scripts to full-blown Flask-based web services, by extending a performance analysis framework (VyPR), and (3) evaluate performance data to get feedback on the code. We demonstrate the feasibility of our toolchain by using it on a web service running at the Compact Muon Solenoid (CMS) experiment at CERN, the world's largest particle physics laboratory. Package: source code, examples, and documentation of PerfCI are available at https://gitlab.cern.ch/omjaved/PerfCI. A tool demonstration can be viewed on YouTube: https://youtu.be/RDmXMKAlv7g. We also provide the data set used in the analysis: https://gitlab.cern.ch/omjaved/PerfCI-dataset.
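To make concrete the kind of performance testing task such a toolchain automates, the sketch below hand-rolls a minimal CI-friendly performance check for a Python unit test: it times repeated runs of a function and fails the build when the median run time exceeds a declared budget. This is a minimal sketch using only the standard library; it does not show PerfCI's or VyPR's actual interface, and the names used (parse_records, PERF_BUDGET_S) are hypothetical.

# Minimal sketch of a CI performance check (hypothetical; not PerfCI's API).
# Times repeated runs of a function under test and fails when the median
# run time exceeds a declared budget, so the CI job turns red on regressions.
import statistics
import time
import unittest

PERF_BUDGET_S = 0.05  # hypothetical per-call time budget in seconds


def parse_records(n=10_000):
    """Stand-in for the code under test (hypothetical workload)."""
    return [int(x) for x in map(str, range(n))]


class ParseRecordsPerfTest(unittest.TestCase):
    def test_parse_records_within_budget(self):
        samples = []
        for _ in range(10):  # repeat to smooth out CI runner noise
            start = time.perf_counter()
            parse_records()
            samples.append(time.perf_counter() - start)
        median = statistics.median(samples)
        self.assertLessEqual(
            median, PERF_BUDGET_S,
            f"median run time {median:.4f}s exceeds budget {PERF_BUDGET_S}s",
        )


if __name__ == "__main__":
    unittest.main()

A check of this shape runs under any CI system that executes the project's unit tests; a toolchain like PerfCI additionally handles specifying such tasks, instrumenting the tests, and evaluating the collected performance data.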
@inproceedings{Javed:2020:PerfCI,
author = {Javed, Omar and Dawes, Joshua Heneage and Han, Marta and Franzoni, Giovanni and Pfeiffer, Andreas and Reger, Giles and Binder, Walter},
booktitle = {2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)},
issn = {2643-1572},
keywords = {Benchmarking Continuous Performance Tracking UnitTesting},
month = {Sep.},
pages = {1344-1348},
title = {PerfCI: A Toolchain for Automated Performance Testing during Continuous Integration of Python Projects},
url = {https://ieeexplore.ieee.org/document/9286019},
year = {2020}
}