6.2. The Principles of Reviews
| Learning objectives |
| --- |
| Recall of content only |
Let's look at some review principles
that were explained in the Foundation syllabus. First, as mentioned a moment
ago, a review is a type of static test. The object being reviewed is not
executed or run during the review. Like any test activity, reviews can have
various objectives. One common objective is finding defects. Others, typical of
all testing, are building confidence that we can proceed with the item under
review, reducing risks associated with the item under review, and generating
information for management. Unique to reviews is the addition of another common
objective, that of ensuring uniform understanding of the document—and its
implications for the project—and building consensus around the statements in the
document.
Reviews usually precede dynamic
tests, and they should complement them. Because the cost of a defect
increases the longer that defect remains in the system, reviews should happen as soon as
possible. However, because not all defects are easy to find in reviews, dynamic
tests should still occur.
audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify (1) the form or content of the products to be produced, (2) the process by which the products shall be produced, and (3) how compliance to standards or guidelines shall be measured.
inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g., violations of development standards and nonconformance to higher-level documentation. The most formal review technique and therefore always based on a documented procedure.
management review: A systematic evaluation of software acquisition, supply, development, operation, or maintenance process performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.
technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
walk-through: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.
Woody Allen, the New York film
director, is reported to have once said that "80 percent of success is showing
up." That might be true in the film business, but Woody Allen would not be a
useful review participant. Reviews require adequate preparation. If you spend no
time preparing for a review, expect to add little value during the review
meeting.
In fact, you can easily remove value
by asking dumb questions that you could have answered on your own had you read
the document thoroughly before showing up. You might think that's a harsh
statement, especially in light of the management platitude that "there are no
dumb questions." Well, sorry, there are plenty of dumb questions. Any question
that someone asks in a meeting because of their own failure to prepare,
resulting in a whole roomful of people having to watch someone else spend their
time educating the ill-prepared attendee on something he should have known when
he came in the room, qualifies as a dumb question. In fact, to me showing up for
a review meeting unprepared qualifies as rude behavior, disrespectful of the
time of the others in the room.
Because reviews are so effective when done
properly, organizations should review all important documents. That includes
test documents: test plans, test cases, quality risk analyses, bug reports, test
status reports, you name it. My rule of thumb is that anything that matters is not
done until it's been looked at by at least two pairs of eyes. You don't have to
review documents that don't matter, but here's a question for you: Why would you
be writing a document that didn't matter?
So, what can happen after a review?
There are three possible outcomes. The ideal case is that the document is okay
as is or with minor changes. Another possibility is that the document requires
some changes but not a re-review. The most costly outcome—in terms of both
effort and schedule time—is that the document requires extensive changes and a
re-review. Now, when that happens, keep in mind that, while this is a costly
outcome, it's less costly than simply ignoring the serious problems and then
dealing with them during component, integration, system, or—worse yet—acceptance
testing.
In an informal review, there are no
defined rules, no defined roles, no defined responsibilities, so you can
approach these however you please. Of course, keep in mind that Capers Jones has
reported that informal reviews typically find only around 20 percent of defects,
while very formal reviews like inspections can find up to 85 percent of
defects. If
something is important, you probably want to have a formal review—unless you
think that you and your team are so smart that no one is going to make any
mistakes.
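To put those percentages in perspective, here is a minimal sketch, in Python, of how many defects each kind of review would leave for dynamic testing to catch. The starting count of 1,000 defects is purely an assumed, illustrative figure; only the 20 percent and 85 percent removal rates come from the Capers Jones numbers cited above.

```python
# Hypothetical illustration of the review-effectiveness figures cited above:
# informal reviews find roughly 20 percent of defects, inspections up to 85 percent.
# The starting count of 1,000 defects is an assumption, chosen only for illustration.
initial_defects = 1000

for review_type, removal_rate in [("informal review", 0.20), ("inspection", 0.85)]:
    escaped = round(initial_defects * (1 - removal_rate))
    print(f"{review_type}: about {escaped} of {initial_defects} defects "
          f"left for dynamic testing")
```

Under those assumptions, an informal review would leave roughly 800 defects for later test levels to find, while an inspection would leave closer to 150, which is why formality tends to pay off for important documents.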
During a formal review,
there are some essential roles and responsibilities:
The manager: The manager allocates
resources, schedules reviews, and the like. However, depending on the review
type, the manager might not be allowed to attend.
The moderator or leader: This is the
chair of the review meeting.
The author: This is the person who
wrote the item under review. A review meeting, done properly, should not be a
sad or humiliating experience for the author.
The reviewers: These are the people who
review the item under review, possibly finding defects in it. Reviewers can play
specialized roles, based on their expertise or based on some type of defect they
should target.
The scribe or secretary or recorder:
This is the person who writes down the findings.
informal review: A review not based on a formal (documented) procedure.
inspection leader (or moderator): The leader and main person responsible for an inspection or other review process.
Now, in some types of reviews, roles
can be combined. For example, the author, moderator, and secretary can be the
same person. In fact, as test manager, when I've had the test team review my
test plans, I've often been the manager, the author, the moderator, and the
secretary.
Some additional roles might be involved,
depending on the review. We might involve decision makers or project
stakeholders. This is especially true if an ancillary or even primary objective
of the review is to build consensus or to disseminate information. In some
cases, the stakeholders involved can be customer or user representatives. We
recently did a review of mock-ups for our new website with our marketing team,
our outsource web development team, and our company executives. Because we had
hired the web development team, for them we were the customers and, for some
features, the users.
Certain very formal review types also
use a reader, who is the person responsible for reading, paraphrasing, and
explaining the item under review.
You can use more than one review type in
an organization. For some documents, time is more important than perfection. For
example, on our test teams, we apply the "two pairs of eyes" rule to mean that a
tester must read another tester's bug report before it can be filed. However,
for more visible documents like test plans, we use a walk-through with the
entire test team. For critical documents, you can use more than one review type
on a single item. For example, when writing this book, we had an informal review
with a core set of RBCS associates and partners followed by a broader, more
formal review as the final book came together.