The main goal of regression testing is to exercise code paths as completely as possible and to verify that the software continues to behave as expected. The tests act as an insurance policy, helping to identify unexpected changes in behavior.
To achieve this, regression tests must include regular checkpoints at which the current state can be compared with the expected state and any discrepancy reported immediately.
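As a minimal sketch of such a checkpoint, the example below compares the current result with a previously recorded expected value and reports any discrepancy at once. The function compute_order_total and the baseline figure are hypothetical stand-ins, not part of any particular product.

```python
def compute_order_total(items):
    # Stand-in for the production code under test (hypothetical).
    return sum(price * qty for price, qty in items)

def test_order_total_checkpoint():
    # Checkpoint: compare the current state with the recorded expected state.
    current = compute_order_total([(9.99, 2), (4.50, 1)])
    expected = 24.48  # previously verified result kept as the baseline
    assert abs(current - expected) < 0.01, (
        f"Checkpoint failed: expected {expected}, got {current}"
    )
```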
Not all functionality can be tied to visible display elements. It is therefore very helpful if features can be enabled during testing that produce additional trace output, which can then be used in snapshot comparisons.
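One way this might look in practice is sketched below: an environment flag switches on extra trace output while tests run, and the collected trace is compared with a stored snapshot. The flag name, the trace hook, and process_batch are assumptions for illustration only.

```python
import os

# Extra trace output is produced only when testing enables it (assumed flag).
TRACE_ENABLED = os.environ.get("ENABLE_TEST_TRACE") == "1"
trace_lines = []

def trace(message):
    if TRACE_ENABLED:
        trace_lines.append(message)

def process_batch(records):
    # Hypothetical piece of functionality with no visible display element.
    for record in records:
        trace(f"processing {record}")
    trace(f"processed {len(records)} records")

def test_trace_snapshot():
    process_batch(["a", "b", "c"])
    snapshot = "\n".join(trace_lines)
    expected = "processing a\nprocessing b\nprocessing c\nprocessed 3 records"
    assert snapshot == expected, "Trace output differs from the recorded snapshot"
```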
For some components it is practical to perform isolated testing using test harness applications, while database tools may provide facilities for extracting data for comparison.
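A small harness along these lines might extract the data of interest through the database's own query facilities and compare it with a baseline extract recorded earlier. The use of sqlite3, the table schema, and the file path below are assumptions, not a prescribed setup.

```python
import sqlite3

def extract_customer_balances(db_path):
    # Use the database's query facilities to pull out the data under test.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT id, balance FROM customers ORDER BY id"
        ).fetchall()

def test_balances_match_baseline():
    current = extract_customer_balances("test.db")   # hypothetical test database
    baseline = [(1, 100.0), (2, 250.5)]              # extract recorded earlier
    assert current == baseline, f"Discrepancy: {current!r} != {baseline!r}"
```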
The greater the number of checkpoint comparisons, the more likely the regression tests are to detect discrepancies. Tests must be designed carefully so that they minimize unnecessary maintenance of test scripts whenever the software is modified, yet remain effective at detecting discrepancies.
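One common way to strike this balance, sketched below under assumed field names, is to normalize volatile values such as timestamps or run identifiers before comparison, so that incidental changes do not force script updates while genuine discrepancies still fail the test.

```python
def normalize(report):
    # Drop fields that legitimately change on every run (assumed names),
    # keeping only the values the checkpoint is meant to protect.
    stable = dict(report)
    stable.pop("generated_at", None)
    stable.pop("run_id", None)
    return stable

def test_report_checkpoint():
    current = {"total": 42, "status": "ok", "generated_at": "2024-05-01T10:00"}
    expected = {"total": 42, "status": "ok"}
    assert normalize(current) == expected
```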
Failure to detect a discrepancy means that the impact of a modification may go unnoticed, potentially leading to software failures downstream. Regression tests around software integration points must be created to ensure that any impact on external systems is identified at an early stage.
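A regression check at an integration point might, for example, capture the payload that would be sent to an external system and compare it with the agreed contract, as in the sketch below. The function build_invoice_payload, the field names, and the tax rule are illustrative assumptions.

```python
def build_invoice_payload(order):
    # Hypothetical payload destined for an external billing system.
    return {
        "order_id": order["id"],
        "amount": round(order["net"] * 1.2, 2),  # assumed 20% tax rule
        "currency": "GBP",
    }

def test_invoice_payload_contract():
    payload = build_invoice_payload({"id": 7, "net": 100.0})
    expected = {"order_id": 7, "amount": 120.0, "currency": "GBP"}
    assert payload == expected, "Integration payload drifted from the agreed contract"
```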