In “If code reviews take too long, do this first”, I said that the first step was to gather some data on your code reviews to identify pull requests that take longer than your goal time. One common reason code reviews take too long is an excess of small, unimportant suggestions.
Not all small and unimportant suggestions are bad. The problem happens when a suggestion is just an opinion and the code isn’t objectively wrong. For example, a typo in a code comment is worth pointing out because the author can fix it without further questions, and the comment is improved. Another example is a clear violation of the style guide, like using a variable with underscore separators when your standard is camelCase. In these cases, there is no argument that the original code is correct, and it’s trivial to fix and recheck.
But comments like “I don’t like this variable name,” “you could use map() instead of iterating the array,” or “this function is too long” are, in most cases, just opinions. It’s not clear what to do about them, and they tend to spawn more conversation. The author’s second try might also not satisfy the reviewer. If the code were clearly wrong, that back-and-forth would be worthwhile, but it’s not worth the extended time to resolve a difference of opinion.
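To make this concrete, here is a hypothetical sketch of the map() suggestion above (the function names are invented for illustration). Both versions are correct and produce the same result, which is exactly why the comment is a matter of taste rather than a defect:

```python
# Two equivalent ways to double a list of prices.
# A reviewer might prefer one style, but neither version is defective.

def double_prices_loop(prices):
    """Double each price using an explicit loop."""
    doubled = []
    for price in prices:
        doubled.append(price * 2)
    return doubled

def double_prices_map(prices):
    """Double each price using map()."""
    return list(map(lambda price: price * 2, prices))

print(double_prices_loop([1, 2, 3]))  # [2, 4, 6]
print(double_prices_map([1, 2, 3]))   # [2, 4, 6]
```

Asking the author to switch from one to the other doesn’t fix anything observable; it just restates the reviewer’s preference.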
To address the problem of excessive nitpick suggestions, the team should adopt a standard: a PR comment should point out a defect. Here is a short list of some obvious defects:
- An off-by-one bug in a loop
- The screen doesn’t match the specification
- An error case is not checked
- Typos in variables, function names, or comments
- Using tabs when team standard says to use spaces (or vice versa)
- Putting business logic in a view when the team requires an MVVM pattern
- The code reimplements a function when it should use the one in our shared library
This is not an exhaustive list, but the pattern should be clear. A comment points out a defect when it compares the code to some agreed-upon requirement or standard. It’s also clear what to do about it, and that it must be done.
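For instance, the off-by-one case from the list above qualifies as a defect because the code is objectively wrong against its own stated requirement. A hypothetical sketch (the function names are invented for illustration):

```python
def sum_first_n_buggy(values, n):
    """Intended to sum the first n items, but has an off-by-one defect:
    range(n + 1) reads one element too many."""
    total = 0
    for i in range(n + 1):
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    """Correctly sums exactly the first n items."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

print(sum_first_n_fixed([10, 20, 30, 40], 2))  # 30
print(sum_first_n_buggy([10, 20, 30, 40], 2))  # 60 -- wrong
```

A review comment flagging this can point at the requirement (“sum the first n items”), and there is only one reasonable fix, so no debate follows.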
So, if you find in your data that you have a lot of nitpick comments, collect them, then go through each one and categorize it:
- OK: If the comment is pointing out a clear defect against an agreed upon requirement, then that’s a good comment, and the problem is not the comment.
- To document: If the comment is not a defect, but you think it’s a good comment and there is a pattern of this kind of comment, then it may belong in your coding standard guide, so that it’s clear the code is defective and exactly what to do about it. Once it’s documented, future comments of this nature are OK.
- To eliminate: If the comment is not a defect and you don’t even think it should be a standard, then this comment should never have been made.
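The triage above can be sketched as a tiny script. This is only an illustration, assuming a hypothetical list of review comments that have already been tagged with whether they cite an agreed-upon standard and whether the team thinks they’re worth documenting as a new one:

```python
# Hypothetical triage helper for nitpick comments gathered from review data.
# The comment fields and example texts are invented for illustration.

def categorize(comment):
    """Return 'OK', 'to document', or 'to eliminate' for a review comment."""
    if comment["cites_standard"]:
        return "OK"            # points out a defect against an agreed standard
    if comment["worth_documenting"]:
        return "to document"   # a recurring, valuable comment: add it to the guide
    return "to eliminate"      # pure opinion: should not have been made

comments = [
    {"text": "Off-by-one in this loop", "cites_standard": True, "worth_documenting": False},
    {"text": "Prefer guard clauses here", "cites_standard": False, "worth_documenting": True},
    {"text": "I don't like this name", "cites_standard": False, "worth_documenting": False},
]

for c in comments:
    print(categorize(c), "-", c["text"])
```

The point of the exercise isn’t the script; it’s that every comment lands in exactly one bucket, which forces a decision about each one.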
After this step, most future comments should fall into the OK category. Occasionally a comment will come up where it’s unclear whether it should become a new standard or be considered unacceptable; cases like that can be hashed out using whatever decision-making process your team already has.
Even so, it could still be the case that there are simply too many comments, and that remains the main reason code reviews take too long. In the next post, I will explain what to do when you have too many good comments on your code reviews.