Friday, November 13, 2009

Selecting metrics


5.4 Selecting metrics



The goals of the SQS lead to questions about defects and their effects. Those questions, in turn, lead to metrics that may provide defect information. The metrics then lead to the measures that must be collected.


In some organizations, a record is kept even of defects that cannot be repeated. A suspected defect that could not be reproduced on one project may occur again on another, where its root cause can eventually be determined.




5.4.1 Available metrics


It is possible to create any number of metrics. In the most trivial sense, every measure can be combined with every other measure to create a metric. Metrics, in and of themselves, are not the goal of a beneficial SQS. Just because one organization uses a metric does not mean that metric will have meaning for another organization. An organization that does not use CICS will not be interested in a metric expressing CICS availability. One that uses LOC as its size measure will not want to compute function points per person-month.


Other metrics may be too advanced for some organizations. Organizations just starting to develop metrics will likely be ready to monitor open and closed problem reports. They are not likely to use metrics that attempt to express the validity of the software system development estimation algorithm until they have been using metrics for some time. In the case of estimation, a new metrics-using organization may not even have a repeatable method of estimating the size and cost of software systems development.
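As an illustration, a minimal sketch of that starting point might simply tally problem reports opened and closed per reporting period. The record layout, report IDs, and counts below are invented for the example:

    from collections import Counter

    # Hypothetical problem-report records: (id, period opened, period closed);
    # the closed period is None while the report is still open.
    reports = [
        ("PR-001", "2009-09", "2009-10"),
        ("PR-002", "2009-09", None),
        ("PR-003", "2009-10", "2009-10"),
        ("PR-004", "2009-10", None),
    ]

    opened = Counter(o for _, o, _ in reports)
    closed = Counter(c for _, _, c in reports if c is not None)

    for period in sorted(set(opened) | set(closed)):
        # A report is part of the backlog if it was opened by this period
        # and not yet closed at the end of it.
        backlog = sum(1 for _, o, c in reports
                      if o <= period and (c is None or c > period))
        print(f"{period}: opened={opened[period]} closed={closed[period]} backlog={backlog}")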


The selection of specific metrics to develop is a function of the goals of the SQS. This book does not mean to imply that there is a certain set of metrics that all organizations should use. This section merely introduces a few sample metrics that some organizations have found useful; a more detailed discussion of specific metrics is given in Section 5.4.3. The metrics identified here are by no means all-inclusive. They are intended to give the new defect analysis or SQS implementer ideas about which metrics to use and which questions to ask.






5.4.2 Applicable metrics


Most metrics are either product-oriented or process-oriented.




5.4.2.1 Product-oriented metrics


Of significant interest to software quality assurance practitioners is the product defect experience of the current software development project and its predecessors. For a given software product, defect experience can serve as an indication of its progress toward readiness for implementation. Some metrics can lead to identification of error-prone modules or subsystems. Others indicate the reduction in defect detection as defects are found and removed.
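For example, comparing defect density across modules is one simple way to surface error-prone candidates. This is only a sketch: the module names, defect counts, KLOC sizes, and the 1.5x-of-average flagging threshold are all assumptions made for illustration.

    # Hypothetical per-module defect counts and sizes in KLOC.
    modules = {
        "billing":   {"defects": 42, "kloc": 12.5},
        "reporting": {"defects": 7,  "kloc": 9.8},
        "ui":        {"defects": 19, "kloc": 15.2},
    }

    densities = {name: m["defects"] / m["kloc"] for name, m in modules.items()}
    average = sum(densities.values()) / len(densities)

    # Flag modules well above the product average as review candidates.
    for name, density in sorted(densities.items(), key=lambda kv: -kv[1]):
        flag = "  <-- error-prone candidate" if density > 1.5 * average else ""
        print(f"{name}: {density:.2f} defects/KLOC{flag}")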


Rework is generally considered to be any work that must be redone because it was not done correctly the first time. The most frequent causes of rework are corrections needed to resolve defects and noncompliance with standards. Monitoring rework metrics can help the software quality assurance practitioner demonstrate the advisability of better project planning and closer attention to requirements.






5.4.2.2 Process-oriented metrics


Of historical interest is much the same set of metrics, but applied to the body of software products already completed rather than just to the current project or products. Long-term defect experience helps us understand the development process: its stability, its predictability, and how much room it has for improvement. The software quality assurance practitioner will track trends in defect detection and correction as indicators of process maturity.


Productivity metrics give indications of process effectiveness, quality of estimation techniques, quality of defect detection techniques, and the like. Some are based on defects, some on nondefect data.


Trend analysis is the long-term comparison of measures and metrics to determine how a process is behaving. Statistical process control, error detection rates, output over time, cost of quality, and help line usage are all examples of measures and metrics that can be studied over a period of time. Such study looks for trends that will describe the development process or its reaction to intentional change. The use of process control charts for software can help describe the behavior of the development process. The charts can also help identify process behavior changes in response to intentional changes made to the process.
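As a sketch of the control-chart idea, the following derives a center line and three-sigma limits from a baseline of weekly defect-detection counts and checks later weeks against them. The counts are invented, and an individuals chart is assumed; a c-chart or another chart type may suit count data better.

    import statistics

    # Hypothetical defects detected per week; the first eight weeks are the
    # baseline from which the control limits are derived.
    baseline = [14, 11, 16, 12, 15, 13, 12, 14]
    recent = [10, 29, 13]

    mean = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    ucl = mean + 3 * sigma          # upper control limit
    lcl = max(mean - 3 * sigma, 0)  # lower control limit (counts cannot be negative)

    print(f"center line={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
    for week, count in enumerate(recent, start=len(baseline) + 1):
        status = "out of control" if not lcl <= count <= ucl else "in control"
        print(f"week {week}: {count} defects ({status})")

A week that falls outside the limits is a signal to investigate, not proof of a process change; the same charting can be used to watch how the limits shift after an intentional process improvement.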






5.4.2.3 Cost of quality


The cost of quality (COQ) is often used to measure the value of the SQS. Combining the costs of the resources expended to prevent errors from happening and to appraise the quality of a product gives the cost of achieving quality (COA). To this is added the cost of the resources expended because quality was not achieved, known as the cost of failure (COF). The sum of COA and COF is the COQ.


Prevention costs include such items as training, the purchase of a methodology, the purchase of automated tools, planning, standards development, and other similar items. These are costs incurred to reduce the likelihood that an error will be made in the first place.


Appraisal costs result, for the most part, from reviews and testing. These are costs incurred to look for errors once the product has been produced.


Failure costs are incurred when a product manifests an error. It is important to recognize that only the first review of a product, or the first test of a piece of code, counts as an appraisal cost. Any rereviewing or retesting required because a defect has been found and corrected is a COF; it is a cost that would not have been incurred had the task been done correctly the first time.


Failure costs include the cost of rework, penalties, overtime, reruns of applications that fail in production, lawsuits, lost customers, lost revenues, and a myriad of other costs. In most companies, the COF is found to account for between one-half and three-quarters of every dollar spent on the overall COQ.
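A back-of-the-envelope sketch of this arithmetic, using invented cost figures, might look like the following:

    # Hypothetical cost figures for one project, in dollars.
    prevention = {"training": 20_000, "standards development": 8_000, "planning": 12_000}
    appraisal = {"first reviews": 15_000, "first test runs": 25_000}
    failure = {"rework": 60_000, "rereviews and retests": 30_000, "production reruns": 10_000}

    coa = sum(prevention.values()) + sum(appraisal.values())  # cost of achieving quality
    cof = sum(failure.values())                               # cost of failure
    coq = coa + cof                                           # cost of quality

    print(f"COA=${coa:,}  COF=${cof:,}  COQ=${coq:,}")
    print(f"failure share of COQ: {cof / coq:.0%}")

With these made-up figures, failure consumes roughly 56 cents of every COQ dollar, within the range cited above.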



Table 5.2 presents some typical components of the COQ.

Table 5.2: Quality Cost Contributors

COQ Category   Representative Contributing Costs
------------   ----------------------------------
Prevention     Training and education
               CM
               Planning
               Standards

Appraisal      Reviews (until defect found)
               Testing (until defect found)

Failure        Rework
               Rewriting specifications and plans
               Rereviewing (after defect correction)
               Retesting (after defect correction)
               Scrap
               Customer complaint handling
               Lost customers
               Missed opportunities









5.4.3 SQS goal-oriented metrics


There is a seemingly unending list of metrics that can be developed. It is important to understand the goals of the organization and of the SQS before choosing metrics; otherwise they will be selected and applied haphazardly. The metric types mentioned so far are reasonable candidates for organizations just beginning to use metrics in their SQSs. As each organization grows in experience with its SQS, additional goals, questions, and metrics will become useful, and more advanced metrics will come to light as the SQS is applied over time.



Table 5.3 suggests possible goals of the SQS and some representative metrics that could apply or be beneficial in reaching those goals.

Table 5.3: Goals and Metrics

SQS Goal                           Applicable Metric
--------                           -----------------
Improved defect management         COQ changes/SQS implementation schedule
                                   Cost of rejected software (scrap)/total project cost
                                   Cost of defect corrections/cost of defect detection
                                   Defect density/software product
                                   Defect density/life-cycle phase
                                   Defects found by reviews/defects found by testing
                                   User-detected defects/developer-detected defects
                                   STRs closed/total STRs opened
                                   STRs remaining open/STRs closed
                                   STRs open and closed/time period
                                   Mean time between failures
                                   Software product reliability
                                   Help line calls/software product

Improved requirements              Changed requirements/total requirements
                                   Implemented requirements/total requirements
                                   Requirements errors/total errors

Improved defect detection          Tests run successfully/total tests planned
                                   Defects found by reviews/defects found by testing
                                   Defect density/software product
                                   Defects inserted by life-cycle phase/defects detected by life-cycle phase
                                   User-detected defects/developer-detected defects

Improved developer productivity    KLOC or function points/person-month
                                   Schedule or budget actuals/estimates
                                   Budget expenditures/schedule status
                                   Mean time to repair a defect
                                   Defects incorrectly corrected/total defects
                                   Software product defects/software product complexity

Improved estimation techniques     Schedule or budget actuals/estimates
                                   Mean time to repair a defect
                                   Budget expenditures/schedule status

Increased data center throughput   Incorrect corrections/total corrections
                                   Mean time to repair a defect
                                   User-detected defects/developer-detected defects
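To make the table concrete, the sketch below computes a few of the ratio metrics from raw counts; all of the counts are invented:

    # Hypothetical raw measures collected for one project.
    measures = {
        "strs_opened": 120,
        "strs_closed": 104,
        "defects_found_by_reviews": 65,
        "defects_found_by_testing": 48,
        "changed_requirements": 22,
        "total_requirements": 180,
    }

    metrics = {
        "STRs closed/total STRs opened":
            measures["strs_closed"] / measures["strs_opened"],
        "Defects found by reviews/defects found by testing":
            measures["defects_found_by_reviews"] / measures["defects_found_by_testing"],
        "Changed requirements/total requirements":
            measures["changed_requirements"] / measures["total_requirements"],
    }

    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")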



None of the metrics suggested in Table 5.3 should be construed as required, or even desirable, for all organizations. No text on metrics could cover the vast array of potential metrics available to the developer of an SQS, much less guess the exact set of metrics that applies to every possible organization. The intention of this chapter is to identify typical metrics so that the implementing organization can see the types of concerns its SQS could address.