Technical debt in the software development lifecycle: code smells
https://doi.org/10.15514/ISPRAS-2021-33(6)-7
Abstract
This paper reviews the most popular code smells, one of the components of technical debt, as well as methods and tools for their detection. We conduct a comparative analysis of several tools: DesigniteJava, PMD, and SonarQube. We apply these tools to a set of open-source projects to estimate their detection precision and the coherence of their results. We highlight the strengths and weaknesses of the approach used in these tools, which is based on metrics computation and threshold filtering. Manual inspection of the code smells reported by the tools shows a low percentage of true positives (10% for God Class and 20% for Complex Method). We survey papers that suggest enhancements of the standard approach, as well as alternative approaches that do not involve metrics. To assess the potential of the alternative methods, we present our Long Method detection prototype with a false-positive filtering system based on machine learning.
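The metrics-and-thresholds scheme evaluated in the paper can be sketched as follows. This is a minimal illustration in the spirit of Lanza and Marinescu's God Class detection strategy; the metric names (WMC, ATFD, TCC), the threshold values, and the sample classes are illustrative assumptions, not taken from any of the compared tools.

```python
# Sketch of metrics computation plus threshold filtering for God Class
# detection. Thresholds and sample data are illustrative only.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int    # Weighted Methods per Class (summed method complexity)
    atfd: int   # Access To Foreign Data (foreign attributes used directly)
    tcc: float  # Tight Class Cohesion, in [0, 1]

# Illustrative thresholds in the style of a detection strategy.
WMC_VERY_HIGH = 47
ATFD_FEW = 5
TCC_ONE_THIRD = 1 / 3

def is_god_class(m: ClassMetrics) -> bool:
    """Flag a class that is complex, uses much foreign data,
    and has low cohesion -- all three conditions must hold."""
    return m.wmc >= WMC_VERY_HIGH and m.atfd > ATFD_FEW and m.tcc < TCC_ONE_THIRD

suspects = [
    ClassMetrics("OrderProcessor", wmc=52, atfd=9, tcc=0.12),
    ClassMetrics("Money", wmc=8, atfd=1, tcc=0.9),
]
flagged = [m.name for m in suspects if is_god_class(m)]
print(flagged)  # only OrderProcessor crosses all three thresholds
```

The conjunction of fixed thresholds is exactly what produces the false positives discussed in the paper: a large but cohesive and well-factored class can still trip all three conditions.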
About the Authors
Vladimir Vladimirovich KACHANOV
Russian Federation
PhD Student at MIPT, Programmer at ISP RAS
Mikhail Kirillovich ERMAKOV
Russian Federation
Candidate of Technical Sciences, Researcher
Georgiy Alexandrovich PANKRATENKO
Russian Federation
Intern Researcher
Alexander Vyacheslavovich SPIRIDONOV
Russian Federation
Programmer
Alexander Sergeevich VOLKOV
Russian Federation
Researcher
Sergei Igorevich MARKOV
Russian Federation
Researcher
References
1. Fowler M. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999, 431 p.
2. Lanza M., Marinescu R. Object-Oriented Metrics in Practice: Using Software Metrics to Characterize, Evaluate, and Improve the Design of Object-Oriented Systems. Springer, 2010, 231 p.
3. van Emden E., Moonen L. Java quality assurance by detecting code smells. In Proc. of the Ninth Working Conference on Reverse Engineering, 2002. pp. 97-106.
4. Khomh F., Di Penta M., Gueheneuc Y.G. An exploratory study of the impact of code smells on software change-proneness. In Proc. of the 16th Working Conference on Reverse Engineering, 2009, pp. 75-84.
5. Langelier G., Sahraoui H., Poulin P. Visualization-based analysis of quality for large-scale software systems. In Proc. of the 20th IEEE/ACM International Conference on Automated Software Engineering, 2005, pp. 214-223.
6. Olbrich S., Cruzes D., Sjøberg D. Are all code smells harmful? A study of God Classes and Brain Classes in the evolution of three open source systems. In Proc. of the IEEE International Conference on Software Maintenance, 2010, pp. 1-10.
7. Paiva T., Damasceno A. et al. On the evaluation of code smells and detection tools. Journal of Software Engineering Research and Development, vol. 5, no. 7, 2017, article no. 7.
8. Palomba F., Bavota G. et al. Do they really smell bad? a study on developers’ perception of bad code smells. In Proc. of the IEEE International Conference on Software Maintenance and Evolution, 2014, pp. 101-110.
9. Tufano M., Palomba F. et al. When and why your code starts to smell bad. In Proc. of the IEEE/ACM 37th IEEE International Conference on Software Engineering, 2015, pp. 403-414.
10. Fenton N.E., Neil M. Software metrics: successes, failures and new directions. Journal of Systems and Software, vol. 47, no. 2, 1999, pp. 149-157.
11. PMD. URL: https://pmd.github.io/latest/index.html, accessed: 25/10/2021.
12. DesigniteJava. URL: https://github.com/tushartushar/DesigniteJava, accessed: 25/10/2021.
13. SonarQube. URL: https://www.sonarqube.org/, accessed: 25/10/2021.
14. Ferme V. JCodeOdor: A Software Quality Advisor Through Design Flaws Detection. Master’s thesis, Università degli Studi di Milano-Bicocca, Italy, 2013.
15. Arcelli Fontana F., Mantyla M., Zanoni M., Marino A. Comparing and experimenting machine learning techniques for code smell detection. Empirical Software Engineering, vol. 21, issue 3, 2016, pp. 1143-1191.
16. Arcelli Fontana F., Zanoni M. Code smell severity classification using machine learning techniques. Knowledge-Based Systems, vol. 128, 2017, pp. 43-58.
17. Qualitas Corpus. URL: http://qualitascorpus.com/docs/history/20120401.html, accessed: 25/10/2021.
18. Hamdy A., Tazy M. Deep hybrid features for code smells detection. Journal of Theoretical and Applied Information Technology, vol. 98, 2020, pp. 2684-2696.
19. Bank D., Koenigstein N., Giryes R. Autoencoders. arXiv:2003.05991, 2021.
20. Barbez A., Khomh F., Guéheneuc Y.G. A machine-learning based ensemble method for anti-patterns detection. arXiv:1903.01899, 2019.
21. Zhang P. Neural networks for classification: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, 2000, pp. 451-462.
22. Sharma T., Efstathiou V., Louridas P., Spinellis D. On the feasibility of transfer-learning code smells using deep learning. arXiv:1904.03031, 2019.
23. Madeyski L., Lewowski T. MLCQ: Industry-relevant code smell data set. In Proc. of the International Conference on Evaluation and Assessment in Software Engineering, 2020, pp. 342-347.
For citations:
KACHANOV V.V., ERMAKOV M.K., PANKRATENKO G.A., SPIRIDONOV A.V., VOLKOV A.S., MARKOV S.I. Technical debt in the software development lifecycle: code smells. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2021;33(6):95-110. (In Russ.) https://doi.org/10.15514/ISPRAS-2021-33(6)-7