Knowledge Center

Wasting Your Time By Not Writing Tests

Automation is the key to successful testing. This post explains why non-automated ways of verifying software are a waste of your time.

If You Don't Test Then ...?

Even if you don’t write tests, you surely perform some other operations to verify that your code works properly. To make sure the code does what you expect it to do and to find bugs, you can:

  • have debugging sessions (with the help of your great IDE),
  • add a lot of log messages so you can browse the log files later,
  • click through the user interface of your application,
  • perform frequent code reviews.

All of the above techniques have their legitimate uses. Visual inspection is useful. A debugger and logs can sometimes save your life. Clicking through the GUI will help you feel like your user. Code reviews will help you find various weaknesses in your code. But if these techniques are the only ways you verify that your code works properly, then you are doing it wrong.

Time Is Money

The main problem is that all these actions are very time-consuming. Even if your experience allows you to use a debugger very effectively, or you have strong Linux shell skills that make finding stuff in tons of logs trivial, it still takes time. Clicking through the GUI can’t really be accelerated - you wait for the application to fetch data, for the browser to render it, and for your brain to locate the right place to click. And a decent code review can’t be done in two minutes.

The time factor is crucial. It simply means that you will not have time to repeat the process. You will check once that it works and voila! - finished. When you go back to this code later (e.g. to add or update functionality) you will skip the already-tested part, because it simply hurts to think that you have to do the log browsing all over again.

Remember, your time and skills are too precious to waste them on simple, repeatable tasks that can be done more effectively by machines.
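To make this concrete, here is a minimal sketch of what handing such a repeatable check over to the machine looks like, using Python’s built-in unittest module. The function under test, `apply_discount`, is a hypothetical stand-in for whatever behavior you would otherwise verify by hand:

```python
import unittest

# Hypothetical function under test - a stand-in for whatever
# behavior you would otherwise verify manually, click by click.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Once written, these checks run in milliseconds, every time, in exactly the same way - no debugging session, no log browsing, no clicking required.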

Your Brain Is Not Good Enough, Sorry

The second problem is that you are trusting your senses, your judgement and your honesty here. This brings a few problems:

  • you can overlook something (e.g. skip a line in the log files, forget to check everything that should be checked, etc.),
  • you can misunderstand or forget the criteria of verification and accept a failed test as passed,
  • you can convince yourself that it works even when it doesn’t.

Yes I know, you don’t make mistakes, so you can’t possibly miss a single log line, and of course you are 100% honest with yourself… Well, if you don’t make mistakes, then how did the bug you are looking for happen in the first place? And about honesty… I can speak only for myself, but it happens that I see only the things I want to see and ignore any signals that contradict my wishful thinking. Has it ever happened to you? No? Good for you!

What makes it even more painful is that clicking through the GUI again and again, or browsing log files for the n-th time, is so boring! Your mind will scream "get me out of here, I want to do some coding!" and will likely tell you that "everything works fine" just so it can move on to more interesting tasks.

Some Conclusions

In short, verification methods that are not automated suffer from the following:

  • they are time-consuming, and as such, they are the first candidates to be abandoned when a deadline is getting near,
  • the criteria of verification might not be clear, and the result of verification can be skewed by human error,
  • they are boring, which makes people do them sloppily or avoid them altogether,
  • they might be hard to repeat in exactly the same way (it is easy to omit some steps in the configuration or execution phase of a test),
  • it might be hard to deduce from log files where the source of a bug is, and sometimes a long investigation is required to find it,
  • they are usually not part of the build process and are run some time after new features or changes were introduced, which makes the feedback they give less valuable (in other words: it costs much more to repair damaged parts that were not discovered right after they were damaged).

You Need A Safety Net Of Automated Tests

My experience is that most errors occur when code is changed, not when it is written for the first time. When you implement a new functionality, the list of requirements is usually clear. You simply implement them one by one, making sure that everything works fine. This is easy. More problems emerge when you are asked to introduce changes.

Oops, the original list of requirements is long gone, the people who wrote the original code are not around anymore, and your manager assumes that "adding this small piece of functionality shouldn’t take long, should it?". And then, if you don’t have a safety net of automated tests, you are in trouble. While adding new functionality or changing existing code, you are likely to break something that used to work fine. These are called regression bugs. Automated tests make it much harder for them to creep in.
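One way to build such a safety net is to pin down the current behavior of existing code in tests before you touch it. A minimal sketch, again with Python’s unittest and a hypothetical legacy function `normalize_username` standing in for code whose original requirements are long gone:

```python
import unittest

# Hypothetical legacy function - a stand-in for old code whose
# original requirements document is long gone.
def normalize_username(raw):
    """Trim surrounding whitespace and lower-case a username."""
    return raw.strip().lower()

class NormalizeUsernameRegressionTest(unittest.TestCase):
    """Pins down the current behavior so that a future change
    cannot silently break it - any regression fails loudly."""

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_username("bob"), "bob")

if __name__ == "__main__":
    unittest.main(exit=False)
```

With tests like these in place, the next person who "just adds this small piece of functionality" gets an immediate red bar if they break what used to work, instead of a bug report weeks later.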