Test them all, is it worth it? Assessing configuration sampling on the JHipster Web development stack

Axel Halin, Alexandre Nuttinck, Mathieu Acher, Xavier Devroey, Gilles Perrouin, Benoit Baudry

Research output: Contribution to journal › Article

Abstract

Many approaches for testing configurable software systems start from the same assumption: it is impossible to test all configurations. This has motivated the definition of variability-aware abstractions and sampling techniques to cope with large configuration spaces. Yet, there is no theoretical barrier preventing the exhaustive testing of all configurations by simply enumerating them, provided the effort required to do so remains acceptable. Moreover, we believe there is a lot to be learned by systematically and exhaustively testing a configurable system. In this case study, we report on the first ever endeavour to test all possible configurations of JHipster, an industry-strength, open-source configurable software system and a popular code generator for web applications. We built a testing scaffold for the 26,000+ configurations of JHipster using a cluster of 80 machines over 4 nights, for a total of 4,376 hours (182 days) of CPU time. We find that 35.70% of the configurations fail, and we identify the feature interactions that cause the errors. We show that sampling strategies (such as dissimilarity and 2-wise sampling): (1) are more effective at finding faults than the 12 default configurations used in the JHipster continuous integration; (2) can be too costly and exceed the available testing budget. We complement this quantitative analysis with a qualitative assessment by JHipster’s lead developers.
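
The abstract mentions 2-wise (pairwise) sampling as one of the strategies compared against exhaustive testing. The sketch below is purely illustrative: the option names, values, and the greedy algorithm are assumptions chosen for a toy example loosely inspired by JHipster's generator questions, not the paper's tooling or JHipster's real feature model. It shows the core idea of 2-wise sampling: select configurations until every pair of option values appears in at least one selected configuration.

    # Illustrative sketch only -- assumed option names/values, generic greedy pairwise covering.
    from itertools import combinations, product

    options = {
        "authenticationType": ["jwt", "session", "oauth2"],
        "databaseType": ["sql", "mongodb", "cassandra"],
        "buildTool": ["maven", "gradle"],
    }

    def uncovered_pairs(sample, options):
        # All (option, value) pairs across two distinct options not yet covered by `sample`.
        pairs = set()
        for o1, o2 in combinations(options, 2):
            for v1, v2 in product(options[o1], options[o2]):
                pairs.add(((o1, v1), (o2, v2)))
        for cfg in sample:
            for o1, o2 in combinations(cfg, 2):
                pairs.discard(((o1, cfg[o1]), (o2, cfg[o2])))
        return pairs

    def greedy_pairwise_sample(options):
        # Greedily add the configuration that covers the most remaining pairs until none are left.
        sample = []
        while uncovered_pairs(sample, options):
            candidates = [dict(zip(options, values)) for values in product(*options.values())]
            best = max(candidates,
                       key=lambda cfg: -len(uncovered_pairs(sample + [cfg], options)))
            sample.append(best)
        return sample

    sample = greedy_pairwise_sample(options)
    print(len(sample), "configurations instead of", 3 * 3 * 2)  # pairwise needs far fewer than 18

In this toy space a pairwise sample needs at least 3 × 3 = 9 configurations (one per pair of values of the two three-valued options) instead of the 18 exhaustive ones; the same principle is what makes 2-wise sampling far cheaper than exhaustively testing the 26,000+ JHipster configurations mentioned in the abstract.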

Original language: English
Pages (from-to): 1-44
Number of pages: 44
Journal: Empirical Software Engineering
DOI: 10.1007/s10664-018-9635-4
Publication status: Published - 17 Jul 2018

Keywords

  • Case study
  • Configuration sampling
  • JHipster
  • Software testing
  • Variability-intensive system

Cite this

@article{22b795a17d2544d2a23ae7ec2456885e,
title = "Test them all, is it worth it? Assessing configuration sampling on the JHipster Web development stack",
author = "Axel Halin and Alexandre Nuttinck and Mathieu Acher and Xavier Devroey and Gilles Perrouin and Benoit Baudry",
year = "2018",
month = "7",
day = "17",
doi = "10.1007/s10664-018-9635-4",
language = "English",
pages = "1--44",
journal = "Empirical Software Engineering",
issn = "1382-3256",
publisher = "Springer",

}
