Software Test Implementation Measurements

Software defects need to be recorded and tracked until they are resolved. Such defect handling is normally considered part of the software testing process.

However, the handling of defects identified during software testing does not differ substantially from that of defects identified during other quality assurance activities.

Detailed defect tracking data are required for problem diagnosis and defect correction. Such data can be collected when the defects are observed and recorded, when they are corrected, or even afterward.
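Such tracking data can be captured in a simple record that is updated as a defect moves from observation to resolution. The sketch below is illustrative only; the field names and status values are assumptions, not part of any standard defect-tracking schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical defect record; field names are illustrative assumptions.
@dataclass
class DefectRecord:
    defect_id: str
    description: str
    observed_at: datetime              # when the defect was observed and recorded
    status: str = "open"               # e.g. "open", "in-analysis", "resolved"
    resolved_at: Optional[datetime] = None

    def resolve(self, when: datetime) -> None:
        """Mark the defect resolved, completing its tracking data."""
        self.status = "resolved"
        self.resolved_at = when

d = DefectRecord("D-101", "crash on empty input", datetime(2024, 1, 5, 9, 30))
d.resolve(datetime(2024, 1, 8, 14, 0))
print(d.status)  # resolved
```

Because both the observation and resolution times are kept, the same record supports later analysis, such as computing defect turnaround time.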

When no defects are observed, measurements of the test runs can be used to demonstrate product reliability or proper handling of input and dynamic situations.

Various other measurements can be taken during test implementation for later analysis and follow-up actions. Successful executions also need to be recorded for various purposes, such as documenting test activities and possible use as an oracle to check future execution results. This is particularly important for regression testing and for legacy products that will be modified and evolved over the entire product lifespan.
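The oracle use mentioned above can be sketched as a simple comparison of newly produced outputs against the recorded outputs of earlier successful runs. This is a minimal illustration; the function and test identifiers are hypothetical.

```python
# Recorded outputs from earlier successful runs serve as the oracle
# against which later (e.g. regression) run outputs are checked.
def check_against_oracle(recorded: dict, new_results: dict) -> list:
    """Return the ids of runs whose new output differs from the oracle."""
    return [run_id for run_id, output in new_results.items()
            if recorded.get(run_id) != output]

oracle = {"t1": "OK", "t2": "sum=42"}   # outputs recorded from past runs
rerun  = {"t1": "OK", "t2": "sum=41"}   # regression introduced in t2
print(check_against_oracle(oracle, rerun))  # ['t2']
```

Any run id reported by the check indicates a deviation from previously accepted behavior and warrants follow-up analysis.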

In addition to the “on-line” measurement of the dynamic test runs and the related defect data, the corresponding static test cases can be measured “off-line” to avoid interference with test execution.

Much other data can also be collected, covering, for example, the testing team, environment, configuration, and test object.

The following is a sample template for test execution measurements collected for a software product during system testing:

rid      run identification:
  sc     scenario class
  sn     scenario number
  vn     variation number
  an     attempt number for the specific scenario variation
tester   software tester who attempted the test run
timing   start time t0 and end time t1
trans    transactions handled by the test run
result   output of the test run

*- required fields
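The template above can be sketched as a small record type, with the run identification composed from the scenario class, scenario number, variation number, and attempt number. This is a sketch under assumptions: the field types and the rid format are illustrative, not prescribed by the template.

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch of the sample measurement template; types are assumptions.
@dataclass
class TestRunRecord:
    sc: str           # scenario class
    sn: int           # scenario number
    vn: int           # variation number
    an: int           # attempt number for this scenario variation
    tester: str       # software tester who attempted the run
    t0: datetime      # start time
    t1: datetime      # end time
    trans: int        # transactions handled by the test run
    result: str       # output of the test run

    @property
    def rid(self) -> str:
        """Run identification composed of sc, sn, vn, and an."""
        return f"{self.sc}-{self.sn}.{self.vn}.{self.an}"

r = TestRunRecord("acid", 7, 2, 1, "jdoe",
                  datetime(2024, 1, 5, 9, 0), datetime(2024, 1, 5, 9, 12),
                  130, "pass")
print(r.rid)  # acid-7.2.1
```

Keeping t0 and t1 as timestamps rather than a single duration preserves the raw data, so durations or schedules can still be derived later during analysis.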