Code review

Code review (sometimes referred to as peer review) is a software quality assurance activity in which one or more people check a program, mainly by viewing and reading parts of its source code, either after implementation or as an interruption of implementation. At least one of the persons must not have authored the code. The persons performing the checking, excluding the author, are called "reviewers".[1][2]

Although direct discovery of quality problems is often the main goal,[3] code reviews are usually performed to reach a combination of goals:[4][5]

  • Better code quality  Improve internal code quality and maintainability (such as readability, uniformity, and understandability)
  • Finding defects  Improve quality regarding external aspects, especially correctness, but also find issues such as performance problems, security vulnerabilities, and injected malware
  • Learning/Knowledge transfer  Help transfer codebase knowledge, solution approaches, and quality expectations, both to the reviewers and the author
  • Increase sense of mutual responsibility  Increase a sense of collective code ownership and solidarity
  • Finding better solutions  Generate ideas for new and better solutions that transcend the specific code at hand
  • Complying with QA guidelines, ISO/IEC standards  Code reviews are mandatory in some contexts, such as air traffic software and safety-critical software

This definition of code review distinguishes it from related software quality assurance techniques, such as static code analysis, self-checks, testing, and pair programming. In static code analysis, the main checking is performed by an automated program; in self-checks, only the author checks the code; in testing, the execution of the code is an integral part; and pair programming is performed continuously during implementation rather than as a separate step.[1]

Review types

There are many variations of code review processes, some of which are detailed below. Additional review types are part of IEEE 1028.

IEEE 1028-2008 lists the following review types:[6]

  • Management reviews
  • Technical reviews
  • Inspections
  • Walk-throughs
  • Audits

Inspection (formal)

Historically, the first code review process that was studied and described in detail was called "Inspection" by its inventor, Michael Fagan.[7] This Fagan inspection is a formal process which involves a careful and detailed execution with multiple participants and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review.[7]

Regular change-based code review (Walk-throughs)

In recent years, many industry teams have introduced a more lightweight type of code review in which the scope of each review is based on the changes to the codebase performed in a ticket, user story, commit, or some other unit of work.[8][3] Furthermore, rules or conventions embed the review task into the development process (e.g., "every ticket must be reviewed"), commonly as part of a pull request, instead of each review being planned explicitly. Such a review process is called "regular, change-based code review".[1] There are many variations of this basic process. A 2017 survey of 240 development teams found that 90% of the teams that use reviews at all base them on changes, and 60% use regular, change-based code review.[3] Most large software corporations, such as Microsoft,[9] Google,[10] and Facebook, likewise follow a change-based code review process.
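As an illustration of such a convention, the following is a minimal sketch, in Python, of the rule that a change may only be merged once someone other than its author has approved it. The names (Change, can_merge) are hypothetical and do not correspond to any particular review tool's API.

    from dataclasses import dataclass, field

    @dataclass
    class Change:
        """A unit of work under review, e.g. the commits behind a pull request."""
        author: str
        approvals: set[str] = field(default_factory=set)

    def can_merge(change: Change) -> bool:
        """Enforce the convention "every change must be reviewed":
        at least one approval must come from a non-author reviewer."""
        return any(reviewer != change.author for reviewer in change.approvals)

    change = Change(author="alice")
    change.approvals.add("alice")   # self-approval does not count as a review
    assert not can_merge(change)
    change.approvals.add("bob")     # a reviewer other than the author approves
    assert can_merge(change)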

Efficiency and effectiveness of reviews

Capers Jones' ongoing analysis of over 12,000 software development projects showed that the latent defect discovery rate of formal inspection is in the 60-65% range, while for informal inspection the figure is less than 50%; the latent defect discovery rate for most forms of testing is about 30%.[11][12] A code review case study published in the book Best Kept Secrets of Peer Code Review contradicted the Capers Jones study,[11] finding that lightweight reviews can uncover as many bugs as formal reviews while being faster and more cost-effective.[13]

The types of defects detected in code reviews have also been studied. Empirical studies provide evidence that up to 75% of code review defects affect software evolvability and maintainability rather than functionality,[14][15][4][16] suggesting that code reviews are an excellent tool for software companies with long product or system life cycles.[17] Consistent with this, an empirical study at Microsoft found that fewer than 15% of the issues discussed in code reviews are related to bugs.[18]

Guidelines

The effectiveness of code review has been found to depend on the speed of reviewing. Code review rates should be between 200 and 400 lines of code per hour.[19][20][21][22] Inspecting and reviewing more than a few hundred lines of code per hour for critical software (such as safety-critical embedded software) may be too fast to find errors.[19][23]
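As a worked example of what this rate implies, the following short Python sketch converts a change size into a review-time estimate; the 1,000-line change is a made-up figure for illustration.

    def review_time_hours(lines_of_code, min_rate=200, max_rate=400):
        """Time range implied by the recommended 200-400 lines-per-hour review rate."""
        return lines_of_code / max_rate, lines_of_code / min_rate

    # A hypothetical 1,000-line change:
    fastest, slowest = review_time_hours(1000)
    print(f"expected review time: {fastest:.1f} to {slowest:.1f} hours")
    # prints: expected review time: 2.5 to 5.0 hours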

Supporting tools

Static code analysis software reduces the burden on developers of reviewing large chunks of code by systematically checking source code for known vulnerabilities and defect types.[24] A 2012 study by VDC Research reported that 17.6% of the embedded software engineers surveyed were using automated tools to support peer code review and 23.7% expected to use them within two years.[25]
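As a simplified illustration of this kind of automated support (not a substitute for a real static analysis tool), the following Python sketch uses the standard ast module to flag a single well-known defect type, calls to eval(), so that human reviewers can concentrate on issues a machine cannot find. The checked source snippet is invented for the example.

    import ast

    # Example source to be checked before human review (a made-up snippet).
    SOURCE = "def load_config(text):\n    return eval(text)\n"

    def find_eval_calls(source):
        """Return line numbers of calls to eval(), one defect type that
        static analysis tools commonly flag for reviewers."""
        return [node.lineno
                for node in ast.walk(ast.parse(source))
                if isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"]

    for lineno in find_eval_calls(SOURCE):
        print(f"line {lineno}: call to eval() flagged for reviewer attention")
    # prints: line 2: call to eval() flagged for reviewer attention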

References

  1. Baum, Tobias; Liskin, Olga; Niklas, Kai; Schneider, Kurt (2016). "A Faceted Classification Scheme for Change-Based Industrial Code Review Processes". 2016 IEEE International Conference on Software Quality, Reliability and Security (QRS). pp. 74–85. doi:10.1109/QRS.2016.19. ISBN 978-1-5090-4127-5. S2CID 9569007.
  2. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 260. ISBN 978-0-470-04212-0.
  3. Baum, Tobias; Leßmann, Hendrik; Schneider, Kurt (2017). "The Choice of Code Review Process: A Survey on the State of the Practice". Product-Focused Software Process Improvement. Lecture Notes in Computer Science. Vol. 10611. pp. 111–127. doi:10.1007/978-3-319-69926-4_9. ISBN 978-3-319-69925-7.
  4. Bacchelli, A; Bird, C (May 2013). "Expectations, outcomes, and challenges of modern code review" (PDF). Proceedings of the 35th IEEE/ACM International Conference On Software Engineering (ICSE 2013). Retrieved 2015-09-02.
  5. Baum, Tobias; Liskin, Olga; Niklas, Kai; Schneider, Kurt (2016). "Factors Influencing Code Review Processes in Industry". Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering - FSE 2016. pp. 85–96. doi:10.1145/2950290.2950323. ISBN 9781450342186. S2CID 15467294.
  6. IEEE Standard for Software Reviews and Audits. IEEE STD 1028-2008. August 2008. pp. 1–53. doi:10.1109/ieeestd.2008.4601584. ISBN 978-0-7381-5768-9.
  7. Fagan, Michael (1976). "Design and code inspections to reduce errors in program development". IBM Systems Journal. 15 (3): 182–211. doi:10.1147/sj.153.0182.
  8. Rigby, Peter; Bird, Christian (2013). "Convergent contemporary software peer review practices". Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. pp. 202–212. CiteSeerX 10.1.1.641.1046. doi:10.1145/2491411.2491444. ISBN 9781450322379. S2CID 11163811.
  9. MacLeod, Laura; Greiler, Michaela; Storey, Margaret-Anne; Bird, Christian; Czerwonka, Jacek (2017). "Code Reviewing in the Trenches: Challenges and Best Practices" (PDF). IEEE Software. 35 (4): 34. doi:10.1109/MS.2017.265100500. S2CID 49651487. Retrieved 2020-11-28.
  10. Sadowski, Caitlin; Söderberg, Emma; Church, Luke; Sipko, Michal; Bacchelli, Alberto (2018). "Modern code review: A case study at Google". Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice. pp. 181–190. doi:10.1145/3183519.3183525. ISBN 9781450356596. S2CID 49217999.
  11. Jones, Capers (June 2008). "Measuring Defect Potentials and Defect Removal Efficiency" (PDF). Crosstalk, The Journal of Defense Software Engineering. Archived from the original (PDF) on 2012-08-06. Retrieved 2010-10-05.
  12. Jones, Capers; Ebert, Christof (April 2009). "Embedded Software: Facts, Figures, and Future". Computer. 42 (4): 42–52. doi:10.1109/MC.2009.118. S2CID 14008049.
  13. Jason Cohen (2006). Best Kept Secrets of Peer Code Review (Modern Approach. Practical Advice.). Smart Bear Inc. ISBN 978-1-59916-067-2.
  14. Czerwonka, Jacek; Greiler, Michaela; Tilford, Jack (2015). "Code Reviews do Not Find Bugs. How the Current Code Review Best Practice Slows Us Down". 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (PDF). Vol. 2. pp. 27–28. doi:10.1109/ICSE.2015.131. ISBN 978-1-4799-1934-5. S2CID 29074469. Retrieved 2020-11-28.
  15. Mantyla, M.V.; Lassenius, C. (2009). "What Types of Defects Are Really Discovered in Code Reviews?" (PDF). IEEE Transactions on Software Engineering. 35 (3): 430–448. CiteSeerX 10.1.1.188.5757. doi:10.1109/TSE.2008.71. S2CID 17570489. Retrieved 2012-03-21.
  16. Beller, M; Bacchelli, A; Zaidman, A; Juergens, E (May 2014). "Modern code reviews in open-source projects: which problems do they fix?" (PDF). Proceedings of the 11th Working Conference on Mining Software Repositories (MSR 2014). Retrieved 2015-09-02.
  17. Siy, Harvey; Votta, Lawrence (2004-12-01). "Does the Modern Code Inspection Have Value?" (PDF). unomaha.edu. Archived from the original (PDF) on 2015-04-28. Retrieved 2015-02-17.
  18. Bosu, Amiangshu; Greiler, Michaela; Bird, Chris (May 2015). "Characteristics of Useful Code Reviews: An Empirical Study at Microsoft" (PDF). 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. Retrieved 2020-11-28.
  19. Kemerer, C.F.; Paulk, M.C. (2009-04-17). "The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data". IEEE Transactions on Software Engineering. 35 (4): 534–550. doi:10.1109/TSE.2009.27. hdl:11059/14085. S2CID 14432409.
  20. "Code Review Metrics". Open Web Application Security Project. Archived from the original on 2015-10-09. Retrieved 9 October 2015.
  21. "Best Practices for Peer Code Review". Smart Bear. Smart Bear Software. Archived from the original on 2015-10-09. Retrieved 9 October 2015.
  22. Bisant, David B. (October 1989). "A Two-Person Inspection Method to Improve Programming Productivity". IEEE Transactions on Software Engineering. 15 (10): 1294–1304. doi:10.1109/TSE.1989.559782. S2CID 14921429. Retrieved 9 October 2015.
  23. Ganssle, Jack (February 2010). "A Guide to Code Inspections" (PDF). The Ganssle Group. Retrieved 2010-10-05.
  24. Balachandran, Vipin (2013). "Reducing human effort and improving quality in peer code reviews using automatic static analysis and reviewer recommendation". 2013 35th International Conference on Software Engineering (ICSE). pp. 931–940. doi:10.1109/ICSE.2013.6606642. ISBN 978-1-4673-3076-3. S2CID 15823436.
  25. VDC Research (2012-02-01). "Automated Defect Prevention for Embedded Software Quality". VDC Research. Retrieved 2012-04-10.