You are starting a new project with a new team and a new website. Everyone agrees on practices (such as agile, unit testing, test-driven development, co-locating the team for better communication, etc.). The team is happy and working together and things are great.
Then the big questions come up.
“Who should own the automated regression suite? Who should write it?”
This tricky question around developing and maintaining automated regression tests can split a team that is otherwise on the same page. Given the tests are code, the obvious option looks like the devs! Yet the tests are there to make QAs’ jobs easier by removing the need to do tedious manual regression! Before you know it, you have a lively debate on your hands around how to handle these tests.
There is a tendency in the industry to espouse the idea that automated regression tests are the Holy Grail of QA and will fix all the problems introduced by manual testing. After all, no one wants to sit there and hit the same buttons in the same sequence over and over again.
Yet, while automated regression tests are useful, there is a large caveat we rarely talk about.
They are a tool, just like any other tool we use. There are good times and bad times to use them. When the tool fits the job, it works well and most people will agree it is useful. When things are not as clear cut, the tool can cause dissonance in a team. Splits form between those who like it and those who do not. When the tool does not fit, teams tend to dislike the tool itself for being a complete pain to work with and vow to never use such a thing again. So, how do we avoid falling into this trap and use the tool appropriately?
First, we need to define what automated regression tests are. A lot of teams will call these tests functional, UI, etc., but those names are also used for different sorts of tests and can lead to much confusion (see What is in a Name). For this article, we are talking about UI-driven full stack tests which aim to replicate a user going through the app and ensure various scenarios are possible.
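To make that definition concrete, here is a minimal sketch of what a UI-driven full stack test can look like. The `FakeBrowser` class, the element names, and the login flow are all invented for illustration; in a real suite the same calls would go through a browser automation library such as Selenium or Playwright, against a deployed app.

```python
# A minimal sketch of a UI-driven regression test.
# FakeBrowser stands in for a real driver (Selenium, Playwright, etc.);
# the login flow and element names are hypothetical.

class FakeBrowser:
    """Simulates a browser driving a tiny login app end to end."""
    def __init__(self):
        self.page = "login"
        self.fields = {}

    def visit(self, page):
        self.page = page

    def fill(self, field, value):
        self.fields[field] = value

    def click(self, button):
        # The app logic sits behind the UI; the test only sees the UI surface.
        if self.page == "login" and button == "submit":
            if (self.fields.get("username") == "alice"
                    and self.fields.get("password") == "s3cret"):
                self.page = "dashboard"
            else:
                self.page = "login-error"

def test_user_can_log_in():
    """Replicates a user logging in, asserting only on what the user sees."""
    browser = FakeBrowser()
    browser.visit("login")
    browser.fill("username", "alice")
    browser.fill("password", "s3cret")
    browser.click("submit")
    assert browser.page == "dashboard"

test_user_can_log_in()
```

Note the test never reaches into the application’s internals: it drives the UI and asserts on what the user would see, which is exactly what distinguishes this layer from unit tests.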
The second thing that makes these tests difficult to maintain is the question of who the tests are for, and thus who should have ultimate responsibility over them. Are they for devs, to act as higher level tests? Are they for QAs, to avoid doing repetitive manual testing? I know a lot of people would argue they are for both, or that the intended audience is the development team as a whole, but in my opinion serving two masters/purposes weakens how much of an effect something can have.
Perhaps this is actually the wrong question. Instead of asking who they are for, let us focus on what the intention is. So, what is the intention of an automated regression suite? In the typical testing pyramid, these tests occupy the top of the pyramid, indicating there should be few of them. There should be few of them because they are usually slow and the technologies powering them tend to be flaky, whereas unit tests tend to be quick and stable.
This gives us a guideline on how many, but not what they are or what their purpose is. Still following the testing pyramid, you usually try to push a test down the pyramid; this indicates the tests on the pyramid are for the same purpose (making sure code changes do not break prior functionality).
Despite this, we also tout these tests as a replacement for rote verification testing. The thing is, the bottom two layers of the testing pyramid are all about whether some piece of logic works - does the code work. QAs test the user interactions - does the code work from the user’s point of view. These are two different sets of tests, also known as white box and black box testing, respectively. Automated regression tests are black box tests driven from the UI, with the only internal interaction happening at the beginning to set up data.
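To illustrate the contrast, here is a small hypothetical example of the same piece of logic checked both ways. The validation function and the commented browser calls are invented for illustration; the point is that the white box test knows the code’s internals, while the black box test only knows the UI.

```python
# The same validation logic, tested two ways. All names are hypothetical.

def is_valid_email(address):
    """The unit under test: a crude email check."""
    return "@" in address and "." in address.split("@")[-1]

# White box unit test: calls the function directly, knows its name and
# signature, and would break if the code were refactored.
assert is_valid_email("a@b.com")
assert not is_valid_email("not-an-email")

# Black box regression test: in a real suite this would instead drive the
# signup form in a browser and assert on what the user sees, e.g.:
#   browser.fill("email", "not-an-email")
#   browser.click("sign-up")
#   assert "Please enter a valid email" in browser.text("error")
# It knows nothing about is_valid_email - only the UI behaviour.
```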
I argue the fact these tests are black box tests indicates they should be kept separate from your lower level tests. Instead of focusing on code coverage and changes, these tests should focus on User Flows.
Ideally, a QA and a BA will work together to identify the critical business user flows to automate. The primary reason to limit the number is that automated regression tests tend to be slow and flaky, so the more of them that exist, the less useful they are due to maintenance cost. If you have stable and quick tests, there is nothing wrong with adding more scenarios (going down the priority ladder).
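As a sketch of how a team might walk down that priority ladder, here is one hypothetical way of tagging flows with priorities and selecting which ones a given run covers. The flow names and priority numbers are invented; the idea is simply that the critical flows run everywhere, and lower priorities are added only when the suite can afford them.

```python
# Hypothetical catalogue of user flows, tagged with a business priority.
# Priority 1 = most critical; higher numbers descend the priority ladder.
FLOWS = [
    ("checkout", 1),
    ("login", 1),
    ("password-reset", 2),
    ("profile-edit", 3),
]

def select_flows(flows, max_priority):
    """Keep only the flows at or above the given priority cutoff."""
    return [name for name, priority in flows if priority <= max_priority]

# A slow, flaky suite might run only priority-1 flows on every commit...
critical = select_flows(FLOWS, 1)

# ...and descend the ladder in a nightly run once the tests prove stable.
nightly = select_flows(FLOWS, 2)
```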
As a result, I believe these tests are the responsibility of the team as a whole, but QAs should have full ownership of what the suite tests. It is a tool for them, to aid in making their regression testing as automated as possible.