Volumes have been written on the subject of code reviews and the various processes for conducting them. Most processes describe particular code review roles, such as reader and moderator, and formats for conducting the code review meeting. These roles are certainly important for keeping the meeting on track and preventing the review from degenerating into religious wars over commenting styles and magic numbers. However, I have witnessed many code reviews that had brilliant moderators and readers but failed to find any significant defects. It wasn't that the code under review was defect free; serious defects were often found later during testing. At one point a team was seriously considering scrapping code reviews because of the low return on a huge effort. So what went wrong?
I believe there were two general problems. The first was that the areas under review were rather specialized, involving complex design and code, and most of the reviewers were familiar with neither the code nor the details of the design prior to the review. The second was that in many cases the reviewers did not have enough experience in the area under review to be considered experts in it. On average, more than 50% of the reviewers were new developers ramping up on the technology area, and in some cases even on the programming language itself. The code reviews were used as a ramp-up task for these developers, and while the reviews certainly gave them an opportunity to ask questions about various parts of the code, they were not in a position to review the code critically.
So what can we learn from this? My key takeaways are:
- Ensure all reviewers are sufficiently familiar with the design and the code prior to the review. A one-hour presentation on the design is usually not enough to provide this familiarity.
- Ensure a sufficient number of experts are involved in the review.
- Using code reviews as part of a newbie’s ramp-up is a good idea, but newbies shouldn’t be relied upon to provide the critical feedback for the review. As a guideline, newbies should comprise no more than 25% of the reviewers.
- If any of the above conditions cannot be satisfied, postpone the review until they can be.