Reviews and Tests

Usually, a Review is understood as looking at something again to find any mistakes and correct them, or to improve what might already be right. A Test can likewise be understood as checking something to find mistakes, and then accepting, rejecting, or reworking it. Is there any difference between a Test and a Review?

Looking for synonyms of the word ‘Review’, we find analysis, audit, check, inspection, report, revision, scrutiny, survey, test, and a few others. ‘Test’ is among them, and it will be fun exploring the similarities and differences between the two. Incidentally, there are many kinds of reviews, as many as 14 different types (see: https://guides.mclibrary.duke.edu/sysreview/types).

Reviews are certainly done as a second (or further) look at something, but the interesting fact about a review is that it cannot be effective unless the reviewer has prepared another, relevant view, and then does a re-view. It is important for the reviewer to have an independent view of their own about the ‘what’ and ‘how’ of what they review before their review can effectively add value.

Ideally, the reviewer should have used the same inputs and prepared the same output as the work they are reviewing, and then compared the received output or document with what they themselves arrived at, and how. As reviewers mature in experience, they may not need to go through the entire process every time, but they still need their own view of what the output should be and how it should be made. Unless they have that view before they look at the submission, they will not be able to hold on to their own (different) view, and in all likelihood they will get ‘absorbed’ into the submission’s approach to ‘what’ and ‘how’. In the absence of this preparation, most reviews contain only one or two ‘what’ comments, while the rest are about how the work appears: the layout, cosmetics, language, and so on.

Reviews must examine the decisions made and principles used by the developer, while Tests compare functionality against the requirements. This also explains why Designs need to be reviewed, but Code/Output needs to be tested to verify adherence to design and fulfilment of requirements. Of course, sometimes the code also needs to be reviewed, but that is usually to provide developmental inputs to the developer. Tests focus on the specifications and requirements, but what should Reviews focus on? Let’s look at some phenomena found in reviews.

If you look at the kinds of comments reported in a review, you may find that most relate to what that particular reviewer habitually looks for. For instance, when I did this analysis on my own reviews, I found that I was reporting mainly language errors (grammar, punctuation) in the storyboards I reviewed, and of these, 90% were about the use of commas. It was evident that ‘language’ was something I was focused on. Similarly, someone else may favor sentence length, or voice, or you-name-it.

Another realization that emerged as I watched reviews across various projects was that, as a team, we ended up spending more project time reviewing and correcting what we were good at than reviewing and correcting what we were weaker at. Reviews tended to run into multiple rounds of reviewing and fixing, with new corrections surfacing at every iteration. This was not because earlier reviews were incomplete, but because ‘lower level’ hygiene issues had to be resolved first before a focused ‘higher order’ review became possible.

I also realized that, in general, reviews were not planned around specific aspects for reviewers to focus on. Different reviewers gained reputations for their own unique strengths and were fed deliverables for review without any focus defined, on the assumption that they would catch issues of the kind they were known to avoid in their own output. In hindsight, this was a quality plan ‘taken for granted’, which ignored the waste of overlapping reviews and the risk of reviews failing to focus on the objectives required by the customer.

Tests, on the other hand, are usually conducted with a Test Plan, with Test Cases and expected behavior. Compared to a review, a test is more defined, focused, and consistent in its outcome when performed by different testers on the same output. Creativity is best reviewed; execution is best tested.
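This consistency comes from the test case itself pinning down both the input and the expected behavior, so any tester running it reaches the same verdict. A minimal sketch of the idea (the function under test and the values are purely hypothetical, chosen only for illustration):

```python
# Illustration: a test case fixes the input and the expected behavior,
# so different testers running the same cases get the same findings.

def word_count(text):
    # The hypothetical "output" under test.
    return len(text.split())

# Each test case: (defined input, expected behavior).
test_cases = [
    ("reviews add insight", 3),
    ("", 0),
    ("one", 1),
]

def run_tests():
    # Compare actual behavior against expected behavior for every case.
    return [word_count(text) == expected for text, expected in test_cases]

print(run_tests())  # same outcome for every tester: [True, True, True]
```

A review of the same `word_count` function, by contrast, might question the decision to split on whitespace at all, which is exactly the kind of insight no test case can surface.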

I finally learned to plan reviews through multiple iterations, each time focusing on the set of aspects I found most efficient and effective to review together. The aspects Reviews should focus on are those defined in the Quality Assurance Plan, weighted by their relative importance as needed, expected, and required by the customer. This taught me how to build a development and review plan that assured quality. I called it a Quality Plan, or QPlan, which was published as a paper at the QAI Conference on Quality in 2001.
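The idea of planning review iterations, each with a defined focus and with hygiene issues resolved before higher-order concerns, can be sketched as data. This is only an illustration of the concept; the aspect names and the three-pass structure are my own assumptions, not taken from the QPlan paper:

```python
# Illustrative sketch only: a review plan where each iteration has a
# defined focus, ordered so 'lower level' hygiene aspects come first.
# Aspect names and pass structure are hypothetical.

review_plan = [
    {"iteration": 1, "focus": ["spelling", "grammar", "layout"]},  # hygiene first
    {"iteration": 2, "focus": ["structure", "flow", "terminology"]},
    {"iteration": 3, "focus": ["decisions", "principles", "customer objectives"]},
]

def aspects_for(iteration):
    """Return the aspects a reviewer should focus on in a given pass."""
    for step in review_plan:
        if step["iteration"] == iteration:
            return step["focus"]
    return []

print(aspects_for(3))  # ['decisions', 'principles', 'customer objectives']
```

Making the plan explicit like this is what prevents the ‘taken for granted’ pattern described earlier: overlapping reviews become visible, and no customer objective is left without a pass that covers it.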

It is interesting to observe that REVIEW is a backronym…
R.E.V.I.E.W.: Reconsidered Effective Verification of the Integrity and Excellence of Work.
T.E.S.T.: Truth Evaluated through Systematic Trial.

Here, WORK is also a backronym…
W.O.R.K.: When Outcomes Result from Knowledge.

It is interesting to note that if work doesn’t lead to any outcome, it is a waste of time. Also, if the work you do does not use your knowledge, then it is someone else’s work you are doing, usually as an assignment. This ties in well with the understanding of ‘karma’ in the Indian perspective.

Comparing Reviews with Tests:

Review: Focuses on the appropriateness of decisions and principles used in the process.
Test: Focuses on the excellence of the product in meeting requirements and specifications.

Review: Focuses on process.
Test: Focuses on product.

Review: When done by different reviewers, can result in diversely valuable insights.
Test: When done by different testers, will likely result in the same findings.

— O —