Optimizing the readability of tests generated by symbolic execution


Testing takes up about half of development time and remains the most common method of software quality control; its shortcomings can lead to financial losses. Under a systematic approach, a test suite is considered complete when it achieves a specified level of code coverage. There are currently many systematic test generators aimed at finding standard errors. Such tools produce a huge number of hard-to-read tests that require expensive human verification. The method presented in this paper improves the readability of tests generated automatically by symbolic execution, substantially reducing the cost of verification. Experimental studies of a test generator that includes this method as its final phase were conducted on 12 string functions from the Linux repository. The readability scores of the strings contained in the optimized tests are comparable to those of natural-language words, which has a positive effect on human verification of test results.
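The abstract describes choosing, among the many inputs that satisfy the same symbolic path constraints, those that read like natural-language words. A minimal sketch of this idea, assuming a character bigram language model trained on an English corpus (the corpus, function names, and candidate strings below are illustrative, not taken from the paper):

```python
# Hypothetical sketch: among inputs that satisfy the same path constraints,
# prefer the one a character bigram model scores as most "word-like".
import math
from collections import Counter

# Tiny stand-in corpus; a real system would use a large natural-language corpus.
CORPUS = "the quick brown fox jumps over the lazy dog testing readable strings"

def bigram_model(text):
    pairs = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    total = sum(unigrams.values())

    def logprob(s):
        if not s:
            return float("-inf")
        score = math.log(unigrams.get(s[0], 0.5) / total)
        for a, b in zip(s, s[1:]):
            # add-0.5 smoothing: unseen bigrams are penalized, not forbidden
            score += math.log((pairs.get((a, b), 0) + 0.5) /
                              (unigrams.get(a, 0) + 0.5 * 27))
        return score

    return logprob

def most_readable(candidates, logprob):
    # All candidates are assumed to satisfy the same path constraints,
    # so readability optimization is free to pick any of them.
    return max(candidates, key=lambda s: logprob(s) / len(s))

logprob = bigram_model(CORPUS)
# e.g. a solver returns several 4-byte solutions for the same execution path
candidates = ["t\x07~q", "tzqx", "test"]
print(most_readable(candidates, logprob))  # "test"
```

Arbitrary solver models such as `"t\x07~q"` cover the path equally well, but the language-model score steers the generator toward strings a human reviewer can recognize at a glance.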


Dynamic symbolic execution, natural language model, the problem of test verification by humans

Short URL: https://sciup.org/148321894

IDR: 148321894   |   DOI: 10.31772/2587-6066-2019-20-1-35-39

Research article