Do I Need to Test the Automated Tests?

Automated software testing takes different forms in terms of objectives, approaches, and implementation. The main point, however, is this: automated tests are software modules that verify the behavior of the application under test for compliance with requirements, or at least provide enough information to carry out such verification (performance tests, for example, may simply produce statistics that a person then analyzes).

The key point is that automated tests are essentially the same kind of software as the application under test, which means they may contain implementation errors of their own. It is therefore necessary to check their correctness periodically, or at least to establish some means of control, because tests are no less sensitive to changes in the application under test than any other software module affected by those changes.

The probability of error in automated tests is partly reduced by their simplicity. White-box tests, in particular, often have a simple structure: they call the module under test, catch exceptions, and/or process the return code, so they largely fit into a template. Functional tests, in most cases, are linear scripts.
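To illustrate, a white-box test of this kind fits a simple template. The following 4Test sketch assumes a hypothetical module under test, Divide; the do...except block and Verify are standard 4Test constructs:

```
// Hypothetical module under test
INTEGER Divide (INTEGER iA, INTEGER iB)
	return iA / iB

testcase TestDivide () appstate none
	do
		// Call the module under test and check its return value
		Verify (Divide (10, 2), 5, "Unexpected quotient")
	except
		// Catch and report exceptions raised by the module under test
		ExceptPrint ()
```

The test does nothing beyond call, catch, and compare, which is exactly why such tests rarely contain errors of their own.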

How Do Errors Appear in Automated Tests?

Nevertheless, automated tests may rely on assistive solutions and components: the test engine itself, additional functions and methods, and window declarations, if we are talking about GUI testing. Any of these may one day fail due to changes in the application under test, the environment, or other external factors. It would therefore be very useful to isolate a problem to the place where it originated.

Now let us examine what we can test automatically, and how:

  • Automated tests themselves - these components are verified dynamically, during test runs: executing the tests does not take the automation solution out of the context of the automated testing for which it was designed. So the most effective way to test the automated tests is to run them.
  • Utility classes / functions / methods - since these components are ordinary program code, nothing prevents us from applying traditional white-box testing practices to them.
  • Window objects - this type of component is specific to automated testing, in particular to GUI-level testing. To test these components, you should develop a workflow script that touches all (or at least a large majority) of the window objects.

How to Implement Testing of Automated Tests?

How can this be implemented? It is best done at the development stage. For example, when developing helper classes for automated tests, you can apply the practice of TDD to control the quality of the produced components. This yields a set of tests that can be run immediately before the main automated tests - in effect, unit testing of the automation solution.
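In 4Test, such a unit test might look like the following sketch (the helper NormalizeCaption and its expected behavior are hypothetical; Trim is a built-in 4Test function):

```
// Hypothetical helper used by the automated tests:
// strips surrounding whitespace from a captured window caption
STRING NormalizeCaption (STRING sCaption)
	return Trim (sCaption)

// Unit test written first, TDD-style, and run before the main suite
testcase UnitTest_NormalizeCaption () appstate none
	Verify (NormalizeCaption ("  Login  "), "Login")
	Verify (NormalizeCaption (""), "")
```

Because these tests run in seconds, they can be prepended to every run of the main suite at essentially no cost.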

For window objects, you can create a separate test (or set of tests) that pays attention primarily to navigation, one of the most important factors in the stability of automated tests. In addition, verify that the desired windows open during the tests. Then you can verify that all declared elements of each window are present. This is done gradually, as the window descriptions grow.
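A navigation check of this kind can be sketched in 4Test as follows (MainWin, its File / Open menu, and OpenDlg are hypothetical window declarations; Pick, Exists, and Click are standard 4Test methods):

```
testcase CheckNavigation () appstate DefaultBaseState
	// Drive the navigation path under test
	MainWin.File.Open.Pick ()
	// Verify that the desired window opens, waiting up to 5 seconds
	Verify (OpenDlg.Exists (5), TRUE, "Open dialog did not appear")
	// Return the application to its original state
	OpenDlg.Cancel.Click ()
```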

Example of Smoke Test in SilkTest

For example, suppose you have the task of describing a window. After the first version of the description, open the script that tests the window declarations and add a single workflow that opens the window under test, verifies its existence, and then returns the system to its original state. To verify the existence of the window, you can also write a helper function that recursively pings all declared child elements of the window under test and checks their existence. In SilkTest, an implementation of such a function can look like this:
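A sketch of such a recursive function in SilkTest's 4Test language (the function name and failure message are illustrative; Exists and GetChildren are standard 4Test window methods):

```
// Recursively verifies that a window and all of its
// declared child objects exist in the running application
VOID VerifyWindowExists (WINDOW wTest)
	WINDOW wChild

	Verify (wTest.Exists (), TRUE, "Window '{wTest}' was not found")
	for each wChild in wTest.GetChildren ()
		VerifyWindowExists (wChild)
```

Calling this function on each top-level declared window gives a quick existence check of the whole window hierarchy.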

Such a test, which checks all the windows, can then be used as a smoke test and run before the main package of automated tests, to ensure that the automation resources are adapted to the current version of the application under test.

All of the approaches listed above show that the process of test automation is in many ways similar to the development process; at the production stage, these processes are essentially identical. Accordingly, the practices used in development to improve the quality of software code are applicable to automated testing as well.