Software Quality Metrics

We manage best what we can measure. Measurement enables an organization to improve its software process; to plan, track, and control its software projects; and to assess the quality of the software it produces.

Software metrics are computed from measures of specific attributes of the process, project, and product. Once analyzed, these metrics give management a dashboard view of the overall health of the process, project, and product. Validating a metric is generally a continuous activity that spans multiple projects.

The metrics employed generally indicate whether quality requirements have been achieved, or are likely to be achieved, during the software development process. As part of quality assurance, a metric should be revalidated every time it is used. Two leading firms, IBM and Hewlett-Packard, have placed a great deal of importance on software quality.

IBM measures user satisfaction and software acceptability along eight dimensions: capability (functionality), usability, performance, reliability, installability, maintainability, documentation, and availability. For its software quality metrics, Hewlett-Packard follows Juran's five quality parameters: functionality, usability, reliability, performance, and serviceability.

In most software quality assurance systems, a common set of software metrics is tracked for improvement. These include:

  • Bugs per line of code
  • Code coverage
  • Cohesion
  • Coupling
  • Cyclomatic complexity (see the sketch after this list)
  • Function point analysis
  • Number of classes and interfaces
  • Number of lines of customer requirements
  • Order of growth
  • Source lines of code
  • Robert Cecil Martin’s software package metrics
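
Cyclomatic complexity, one of the metrics above, counts the number of linearly independent paths through a piece of code. The Python sketch below approximates it for a single function by counting branching constructs in the abstract syntax tree and adding one; the helper name and the choice of AST node types are illustrative assumptions, not part of any standard tool.

    import ast
    import inspect

    # Branching constructs that add an independent path through the code.
    _DECISION_NODES = (ast.If, ast.For, ast.While,
                       ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(func):
        """Approximate McCabe's V(G) as (number of decision points) + 1."""
        tree = ast.parse(inspect.getsource(func))
        decisions = sum(isinstance(node, _DECISION_NODES)
                        for node in ast.walk(tree))
        return decisions + 1

    def sample(x):
        if 0 < x < 10:           # one 'if' -> one decision point
            return "small"
        return "other"

    print(cyclomatic_complexity(sample))  # 2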

Software quality metrics focus on the process, project, and product. By analyzing them, an organization can take corrective action to fix the areas of the process, project, or product that cause software defects.

The de facto definition of software quality rests on two major attributes: intrinsic product quality and user acceptability. A software quality metric encapsulates both, addressing the mean time to failure and the defect density within the software components, and assessing whether the software meets user requirements and is acceptable to its users.

The intrinsic quality of a software product is generally measured by the number of functional defects in the software (often referred to as bugs), or by testing the software at run time to probe its inherent vulnerabilities and determine the scenarios in which it "crashes". In operational terms, these two perspectives are expressed by two metrics: defect density (rate) and mean time to failure (MTTF).

Although there are many measures of software quality, four provide particularly useful insight: correctness, maintainability, integrity, and usability.

Correctness

A program must operate correctly. Correctness is the degree to which the software performs its required functions accurately. One of the most common measures is defects per KLOC, where KLOC (thousands of lines of code) measures the size of a program by counting its lines of source code.
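
As a minimal sketch of the arithmetic (the function name and sample figures are invented for illustration):

    def defects_per_kloc(defect_count, lines_of_code):
        """Normalize a raw defect count by program size in thousands of lines."""
        return defect_count / (lines_of_code / 1000)

    # Example: 25 confirmed defects in a 50,000-line program.
    print(defects_per_kloc(25, 50_000))  # 0.5 defects per KLOC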

Maintainability

Maintainability is the ease with which a program can be corrected when an error occurs. Since there is no direct way of measuring this, it is measured indirectly. Mean time to change (MTTC) is one such measure: when an error is found, it measures how long it takes to analyze the change, design the modification, implement it, and test it.
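
A minimal sketch of how MTTC could be tracked, assuming each change record stores the hours spent analyzing, designing, implementing, and testing a fix (the record layout and figures are hypothetical):

    from statistics import mean

    # Hypothetical change records: hours spent to analyze, design,
    # implement, and test each fix.
    changes = [
        {"analyze": 2.0, "design": 3.0, "implement": 4.0, "test": 1.5},
        {"analyze": 1.0, "design": 1.5, "implement": 2.0, "test": 1.0},
    ]

    def mean_time_to_change(records):
        """MTTC: average total time to turn a reported error into a tested fix."""
        return mean(sum(r.values()) for r in records)

    print(mean_time_to_change(changes))  # 8.0 (hours)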

Integrity

Integrity measures a system's ability to withstand attacks on its security. To measure it, two additional attributes must be defined for each type of attack:

  • Threat – the probability that an attack of a given type will occur within a given time.
  • Security – the probability that an attack of a given type will be repelled.

Integrity is then defined as:

Integrity = Σ [(1 − threat) × (1 − security)]
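
A minimal sketch of the integrity calculation as defined above, summed over attack types; the threat and security probabilities are invented sample values:

    # Invented attack profiles: per attack type, the probability it occurs
    # (threat) and the probability it is repelled (security).
    attacks = [
        {"threat": 0.25, "security": 0.95},
        {"threat": 0.10, "security": 0.80},
    ]

    def integrity(attack_profiles):
        """Integrity = sum of (1 - threat) * (1 - security) over attack types."""
        return sum((1 - a["threat"]) * (1 - a["security"])
                   for a in attack_profiles)

    print(integrity(attacks))  # 0.2175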

Usability

How usable is your software application? This important characteristic is measured in terms of the following:

  • Physical or intellectual skill required to learn the system
  • Time required to become moderately efficient with the system
  • The net increase in productivity from use of the new system
  • Subjective assessment (usually a questionnaire on the new system)

Standard for Software Evaluation

In the context of software quality metrics, one popular standard that addresses the quality model, external metrics, internal metrics, and quality-in-use metrics for the software development process is ISO 9126.

Defect Removal Efficiency

Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if DRE is low during analysis and design, it means you should spend time improving the way you conduct formal technical reviews.

DRE = E / ( E + D ) 

where E = the number of errors found before delivery of the software, and D = the number of defects found after delivery of the software.
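
A minimal sketch of the calculation, with invented defect counts:

    def defect_removal_efficiency(errors_before_delivery, defects_after_delivery):
        """DRE = E / (E + D): the fraction of all defects caught before release."""
        e, d = errors_before_delivery, defects_after_delivery
        return e / (e + d)

    # Example: 90 defects caught in-house, 10 reported after delivery.
    print(defect_removal_efficiency(90, 10))  # 0.9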

Ideally, DRE should be 1, meaning no defects were found after delivery. A low DRE score means you need to re-examine your existing process. In essence, DRE indicates the filtering ability of quality control and quality assurance activities, and it encourages the team to find as many defects as possible before the work product is passed to the next stage. Some testing-related metrics are listed below:

  • Test coverage = number of units (KLOC/FP) tested / total size of the system
  • Number of tests per unit size = number of test cases per KLOC/FP
  • Defects per size = defects detected / system size
  • Cost to locate a defect = cost of testing / number of defects located
  • Defects detected in testing = defects detected in testing / total system defects
  • Defects detected in production = defects detected in production / system size
  • Quality of testing = number of defects found during testing / (number of defects found during testing + number of acceptance defects found after delivery) × 100
  • System complaints = number of third-party complaints / number of transactions processed
  • Effort productivity (test planning) = number of test cases designed / actual effort for design and documentation
  • Effort productivity (test execution) = number of test cycles executed / actual effort for testing
  • Test efficiency = number of tests required / number of system errors
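
As a worked sketch, here are a few of the ratios above computed with invented project figures:

    # Invented project figures; units follow the definitions above.
    kloc_tested = 40.0        # KLOC exercised by the test suite
    system_size = 50.0        # total system size in KLOC
    defects_in_testing = 180
    acceptance_defects = 20   # defects found after delivery

    test_coverage = kloc_tested / system_size
    defects_per_size = defects_in_testing / system_size
    quality_of_testing = (defects_in_testing /
                          (defects_in_testing + acceptance_defects)) * 100

    print(test_coverage)       # 0.8
    print(defects_per_size)    # 3.6
    print(quality_of_testing)  # 90.0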

Each quality measure below is paired with the metrics used to track it:

1. Customer satisfaction index
  • Number of system enhancement requests per year
  • Number of maintenance fix requests per year
  • User friendliness: call volume to the customer service hotline
  • User friendliness: training time per new user
  • Number of product recalls or fix releases (software vendors)
  • Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
  • Normalized per function point (or per LOC)
  • At product delivery (first 3 months or first year of operation)
  • Ongoing (per year of operation)
  • By level of severity
  • By category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
  • Turnaround time for defect fixes, by level of severity
  • Time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first year after product delivery

7. Complexity of delivered product
  • McCabe's cyclomatic complexity counts across the system
  • Halstead's measure
  • Card's design complexity measures
  • Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
  • Breadth of functional coverage
  • Percentage of paths, branches or conditions that were actually tested
  • Percentage by criticality level: perceived level of risk of paths
  • Ratio of the number of detected faults to the number of predicted faults

9. Cost of defects
  • Business losses per defect that occurs during operation
  • Business interruption costs; costs of work-arounds
  • Lost sales and lost goodwill
  • Litigation costs resulting from defects
  • Annual maintenance cost (per function point)
  • Annual operating cost (per function point)
  • Measurable damage to your boss's career

10. Costs of quality activities
  • Costs of reviews, inspections and preventive measures
  • Costs of test planning and preparation
  • Costs of test execution, defect tracking, version and change control
  • Costs of diagnostics, debugging and fixing
  • Costs of tools and tool support
  • Costs of test case library maintenance
  • Costs of testing & QA education associated with the product
  • Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work
  • Re-work effort (hours, as a percentage of the original coding hours)
  • Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
  • Re-worked software components (as a percentage of the total delivered components)

12. Reliability (illustrated in the sketch below)
  • Availability (percentage of time a system is available, versus the time the system is needed to be available)
  • Mean time between failures (MTBF)
  • Mean time to repair (MTTR)
  • Reliability ratio (MTBF / MTTR)
  • Number of product recalls or fix releases
  • Number of production re-runs as a ratio of production runs
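
A minimal sketch of the reliability measures in row 12, using invented operating data:

    # Invented operating data for one reporting period.
    uptime_hours = 990.0      # time the system was running
    repair_hours = 10.0       # total time spent repairing failures
    failure_count = 5

    mtbf = uptime_hours / failure_count    # mean time between failures
    mttr = repair_hours / failure_count    # mean time to repair
    availability = uptime_hours / (uptime_hours + repair_hours)
    reliability_ratio = mtbf / mttr

    print(mtbf, mttr)          # 198.0 2.0
    print(availability)        # 0.99
    print(reliability_ratio)   # 99.0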

Source: http://it.toolbox.com/wiki/index.php/Software_Quality_Metrics
