Evaluating and comparing size, complexity and coupling metrics as web applications vulnerabilities predictors
Authors: Mohammed Zagane, Mustapha Kamel Abdi
Journal: International Journal of Information Technology and Computer Science @ijitcs
Article in issue: 7, Vol. 11, 2019.
Free access
Most security and privacy issues in software stem from exploitable code vulnerabilities. Many studies have tried to find correlations between software characteristics (complexity, coupling, etc.), quantified by the corresponding code metrics, and the presence of vulnerabilities, and to propose automatic prediction models that help developers locate vulnerable components and thereby reduce maintenance costs. The results of these studies cannot be applied directly to web applications, because a web application differs from a non-web application in many ways (development, use, etc.), so those conclusions must be re-evaluated in the web context. The purpose of this study is to evaluate and compare the vulnerability prediction power of three types of code metrics (size, complexity and coupling) in web applications. A few similar studies have targeted non-web applications and, to the best of our knowledge, no similar study has targeted web applications. The results show that, unlike in non-web applications where complexity metrics have the better vulnerability prediction power, in web applications the metrics that give the better prediction are the coupling metrics, with high recall (>75%) and low inspection cost (<25%).
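To make the kind of evaluation described above concrete, the sketch below shows one common way a metric-based vulnerability predictor can be trained and scored by recall and inspection cost. It is an illustrative assumption, not the authors' actual pipeline: the dataset file, metric column names ("fan_in", "fan_out", "coupling_between_objects") and label column are hypothetical, and a random forest stands in for whatever learner a given study uses.

```python
# Minimal sketch: score coupling metrics as vulnerability predictors.
# All file/column names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Each row = one source file; metric columns plus a 0/1 "vulnerable" label.
data = pd.read_csv("web_app_metrics.csv")  # hypothetical dataset
X = data[["fan_in", "fan_out", "coupling_between_objects"]]  # assumed coupling metrics
y = data["vulnerable"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
predicted = model.predict(X_test)

recall = recall_score(y_test, predicted)   # share of vulnerable files actually found
inspection_cost = predicted.mean()         # share of files flagged for manual review
print(f"recall = {recall:.1%}, inspection cost = {inspection_cost:.1%}")
```

Under this setup, a result in the spirit of the abstract would be a recall above 75% while flagging fewer than 25% of the files for inspection.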
Keywords: Software Vulnerability, Web Application Security, Information Privacy, Code Metrics, Prediction Models, Machine Learning, Software Engineering
Short address: https://sciup.org/15016372
IDR: 15016372 | DOI: 10.5815/ijitcs.2019.07.05