
Ceccato, M., Marchetto, A., Mariani, L., Nguyen, C., Tonella, P. (2012). An Empirical Study about the Effectiveness of Debugging when Random Test Cases Are Used. In Proceedings of the International Conference on Software Engineering (ICSE) (pp.452-462). Piscataway, NJ : IEEE Press [10.1109/ICSE.2012.6227170].

An Empirical Study about the Effectiveness of Debugging when Random Test Cases Are Used

MARIANI, LEONARDO;
2012

Abstract

Automatically generated test cases are usually evaluated in terms of their fault-revealing or coverage capability. Besides these two aspects, test cases are also the major source of information for fault localization and fixing. The impact of automatically generated test cases on the debugging activity, compared to the use of manually written test cases, has never been studied before. In this paper we report the results obtained from two controlled experiments with human subjects performing debugging tasks using automatically generated or manually written test cases. We investigate whether the features of the former type of test cases, which make them less readable and understandable (e.g., unclear test scenarios, meaningless identifiers), have an impact on the accuracy and efficiency of debugging. The empirical study is aimed at investigating whether, despite the lack of readability in automatically generated test cases, subjects can still take advantage of them during debugging. © 2012 IEEE.
paper
Test case generation, random testing, human study, debugging, automatic tests
English
International Conference on Software Engineering (ICSE)
2012
Glinz, M; Murphy, G; Pezzè, M
Proceedings of the International Conference on Software Engineering (ICSE)
978-1-4673-1067-3
2012
452
462
open
Files in this item:
File Size Format
Ceccato-EmpStudy-ICSE-2012.pdf

open access

Attachment type: Other attachments
Size: 113.19 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/42653
Citations
  • Scopus 19
  • Web of Science (ISI) 15