Tuesday, 26 July 2016

The case for manual regression testing.

In our team, manual regression testing is part of our release process. We maintain a rather long regression suite document consisting of sections and bulleted action points. As a quality assurance step before each release, we split the document between members of the team and go through all the action points manually in our staging environment, hunting for regressions along the way and making sure reality agrees with the document.
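As an aside, the splitting step itself is just a round-robin deal. The sketch below is purely illustrative (the section and member names are invented; in practice the split happens in a shared document, not in code):

```javascript
// Illustrative sketch only: round-robin assignment of regression suite
// sections to team members. All names below are invented for the example.
function assignSections(sections, members) {
  var assignments = {};
  members.forEach(function (member) {
    assignments[member] = [];
  });
  sections.forEach(function (section, i) {
    // Deal sections out like cards so everyone gets a similar share.
    assignments[members[i % members.length]].push(section);
  });
  return assignments;
}

var suite = ['Login', 'Search', 'Checkout', 'Settings', 'Notifications'];
var team = ['Alice', 'Bob', 'Carol'];
console.log(assignSections(suite, team));
```

The point is not the code but the discipline: every section ends up with exactly one owner, so nothing in the document is silently skipped.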

Very often, members of other teams who see us doing this are quite sceptical about its benefits. The standard argument against manual regression testing goes like this: "We are developers; this should be automated. Look at phantom.js, Selenium, and <insert any cool testing framework here>".

These critics have a point. Automated testing is good. And in fact we do have quite extensive phantom.js automated tests. They make sure we don't break the most critical and essential features of our product's frontend (we use Ember.js). I will not dwell on Selenium, as over time I have never seen it prove genuinely useful for anyone. Tell me I am wrong in the comments.

However, human-driven manual regression testing is very useful. Here are some reasons why:

Spreads product knowledge

As a positive side effect of manual regression testing, you will spread product knowledge across your team. This is highly beneficial for new joiners, as it gives them a regular occasion to look in detail at parts of the product they have never worked on. Also, having fresh pairs of eyes on your regression document will quickly expose its flaws and inaccuracies.

Makes the team understand software development better

It's not always easy to explain the concept of a regression to someone who is not accustomed to software development. Usually, when a developer finds a bug, the first natural reflex is to fix it, and, if you're lucky, to open a ticket describing it properly. But under the pressure of releasing on time, it is the perfect moment to learn to say: "If it is broken on live, it's not a regression. Open a ticket and we will fix it later. Don't let it block our release."

A good occasion to bug chase

When was the last time you had the occasion to go systematically through your whole product and find bugs proactively? Manual regression time is perfect for that. Don't wait for issues to come in from customers: catch bugs before they suffer from them and stay ahead of the game.

Not all regressions are equal

A lot of things can go wrong when it comes to software. Automated testing can protect you against the biggest mistakes, by testing things you know about in advance. That is the key point here. You should also be able to capture the regressions you cannot know about in advance, or that are very difficult to test automatically.

Some examples: the scroll bar of a dynamic drop-down is covered by another element when the page is halfway scrolled down; a new design element's colour looks positively ugly against a specific background.

The list goes on and on. And only humans can detect unknown issues; automated tests can only detect what they were programmed for. That is good, but certainly not enough if your customers are human.

Makes the team work as a team

As developers, we tend to try to shine as lone stars, doing complex things in isolation from others. Going through a simple bullet-point list of rather boring things as a team, coordinating and reporting towards the common goal of releasing, is to my mind a very useful exercise. Experience shows that such a thing is not so trivial to coordinate.


Yes, we are developers, and in our wildest fantasies we live in a world where machines do all the work for us, even the work of being users of our products. But if you are developing products whose users are living creatures, you definitely want to be in their shoes regularly. What we techies see as "not a bug" can have a huge impact on the perception and experience of users. On the other hand, what we see as a bug is sometimes just fine for users to live with, and can be fixed later.

Check that you are providing not only a "machine perfect" system, but also something that makes sense to mere human beings. Make manual regression testing part of your release process.