JUGE: An infrastructure for benchmarking Java unit test generators

Xavier Devroey, Alessio Gambi, Juan Pablo Galeotti, René Just, Fitsum Kifetew, Annibale Panichella, Sebastiano Panichella

Research output: Contribution to journal › Article › Peer-reviewed


Abstract

Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and various platforms (e.g., desktop, web, or mobile applications). The generators exhibit varying effectiveness and efficiency, depending on the testing goals they aim to satisfy (e.g., unit-testing of libraries versus system-testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the one best suited to their requirements, while researchers seek to identify future research directions. This can be achieved by systematically executing large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to select appropriate benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this Software Note, we present our JUnit Generation Benchmarking Infrastructure (JUGE) supporting generators (search-based, random-based, symbolic execution, etc.) that seek to automate the production of unit tests for various purposes (validation, regression testing, fault localization, etc.). The primary goal is to reduce the overall benchmarking effort, ease the comparison of several generators, and enhance the knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, several editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place, during which JUGE was used and evolved. As a result, an increasing number of tools (over 10) from academia and industry have been evaluated on JUGE and have matured over the years, enabling the identification of future research directions. Based on the experience gained from the competitions, we discuss the expected impact of JUGE in improving the knowledge transfer on tools and approaches for test generation between academia and industry. Indeed, the JUGE infrastructure has demonstrated an implementation design that is flexible enough to enable the integration of additional unit test generation tools, which is practical for developers and allows researchers to experiment with new and advanced unit testing tools and approaches.
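To illustrate the kind of tool integration described in the abstract, the sketch below shows a minimal, hypothetical Java adapter that wraps a unit test generator behind a simple contract: fully qualified names of the classes under test arrive on standard input, and generated tests are written to an output directory within a time budget. The contract, class name, and method names here are illustrative assumptions only and do not reproduce JUGE's actual interface.

    // Hypothetical adapter: wraps a unit test generator for a JUGE-style
    // benchmark run. The contract below (class names on stdin, a time budget,
    // an output directory) is an illustrative assumption, not JUGE's API.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class GeneratorAdapter {
        public static void main(String[] args) throws Exception {
            Path outputDir = Path.of(args.length > 0 ? args[0] : "generated-tests");
            long timeBudgetSeconds = args.length > 1 ? Long.parseLong(args[1]) : 60;
            Files.createDirectories(outputDir);

            // Read one fully qualified class name per line from standard input.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
                String className;
                while ((className = in.readLine()) != null && !className.isBlank()) {
                    generateTestsFor(className.trim(), outputDir, timeBudgetSeconds);
                }
            }
        }

        // Placeholder: a real adapter would invoke the generator's API or CLI
        // here and write the resulting JUnit test classes into outputDir.
        private static void generateTestsFor(String className, Path outputDir, long budget) {
            System.out.printf("Generating tests for %s (budget: %ds) into %s%n",
                    className, budget, outputDir);
        }
    }

A benchmark harness could then drive any generator wrapped this way uniformly, which is the design property (uniform, tool-agnostic integration) the abstract attributes to JUGE.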

Original language: English
Article number: e1838
Pages (from-to): e1838
Journal: Software Testing, Verification and Reliability
Volume: 33
Issue number: 3
Publication status: Published - May 2023

Fingerprint

Explore the research topics of "JUGE: An infrastructure for benchmarking Java unit test generators". Together they form a unique fingerprint.
  • Basic Block Coverage for Unit Test Generation at the SBST 2022 Tool Competition

    Derakhshanfar, P. & Devroey, X., 9 May 2022, Proceedings - 15th Search-Based Software Testing Workshop, SBST 2022. ACM Press, p. 37-38.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Open Access
  • JUGE: JUnit Generation Benchmarking Infrastructure

    Devroey, X., Gambi, A., Galeotti, J. P., Just, R., Kifetew, F. M., Panichella, A. & Panichella, S., June 2021

    Research output: Non-textual form › Software

    Open Access
  • Java Unit Testing Tool Competition - Eighth Round

    Devroey, X., Panichella, S. & Gambi, A., 27 June 2020, Proceedings - 2020 IEEE/ACM 42nd International Conference on Software Engineering Workshops, ICSEW 2020. ACM Press, p. 545-548.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Cite this