When I think of automated testing, I think of the old proverb: “The best time to plant a tree was 20 years ago. The second best time is now.” Automated regression testing can be your greatest timesaver: more than good architecture, more than good requirements, and more than hiring overpriced genius talent. It reduces downtime, shrinks cycle time, and gives you the comfort of knowing your cell phone won’t ring at dinner tonight. And it doesn’t take significantly longer to write an automated test than it does to perform the same manual test once or twice. Writing automated regression tests is like buying a tool for $20 once instead of renting it every month for $15. Even that analogy undersells it, because the benefits compound: the same tests pay off again on every normal release and every critical one.
Let’s take a case where a team of 5 QA engineers performs manual regression testing on an application, and the full testing cycle takes two weeks. They receive a release candidate and start work on Monday with a goal of releasing the software the following Friday. Sounds simple, right? On the 4th day of testing (Thursday), a significant bug is found. The QA team sends the bug back to the development team, who deliver a fix in 2 days (Monday morning). The fix touches significant portions of a core workflow — the database, the application, and even the tests themselves, because the fix slightly changed the workflow.
- What is the QA team supposed to do after the bug is submitted? Do they keep testing other parts of the application? At that point, they don’t know which parts of the application will be affected by the bug-fix code changes.
- The QA team gets the fixed release. Does the team need to start testing from scratch? To perform a full regression and make sure the bug fix didn’t inadvertently break something they already tested, they do. Any other choice creates unknowns, and unknowns are where downtime lives. Some may argue that the first 2 days of testing were on completely unrelated modules; however, if a developer accidentally checked in something wrong, or a configuration change didn’t get redeployed with the newer version, the opportunity for failure is there.
- Testing starts again on the newly delivered software, and on day 4 (Thursday), two more significant bugs are found. The software that should ship tomorrow now won’t even start its full 2-week regression until at least the next Monday (based on developer estimates), so the release is already two weeks late, and any further bugs could cause similar delays. At this point, the business users are forced to make a decision: release the software with incomplete testing and hope the changes made during the bug fixes don’t affect previously tested modules, or keep slipping. Hoping for best cases is a terrible place to live.
Now let’s take the case of a critical bug fix. A recently released upgrade is broken due to a bug in the order processing workflow for your largest client. Your largest client cannot perform their business! The ticket is opened and the development team works 20-hour days for 3 days to fix the issue. By the time the fix is provided, it has been coded by developers who haven’t slept and who have been given permission by management to do whatever it takes. Overworked, under intense pressure, and with code commits coming from 4 different developers, the development team delivers a new release that they say “should fix the problem” even though they only tested it once or twice. The QA team receives the new release of the software.
- Are they supposed to perform a full regression test — two more weeks until your largest client can process their orders? Do you want to lose them?
- What part of it are they supposed to test? Obviously they would verify that the affected client can process their orders, but do they test other clients? How many other clients? Which processes were touched by the code changes? Once again, the business users are forced to make a decision: release the software with incomplete testing and hope the changes made during the bug fixes don’t affect previously tested modules. Hoping for best cases is a terrible place to live, and eventually, hope does not survive.
How to Start?
First, make sure there is emphasis on both parts of the term: “automated” and “regression”. If the process is not complete, consistent, and idempotent — meaning the tests cover all functionality and produce the same results given the same inputs every time — and able to be run in a short amount of time, then it’s not going to be of value. Most of the time, these automated regression tests can be linked to an existing build or release process, but don’t get bogged down automating your entire release flow: even an automated regression suite that requires a user to point it at an environment and click a button is better than the alternative.
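To make “consistent and idempotent” concrete, here is a minimal sketch in Python. The `calculate_order_total` function and its recorded cases are hypothetical stand-ins for your real business logic; the point is the shape of the test — fixed inputs, fixed expected outputs, runnable with one command against any build.

```python
# A minimal deterministic regression test sketch (hypothetical order-total logic).
# Same inputs always produce the same outputs, so it can run on every build.

def calculate_order_total(items, tax_rate):
    """Hypothetical logic under test: sum (qty, price) line items and apply tax."""
    subtotal = sum(qty * price for qty, price in items)
    return round(subtotal * (1 + tax_rate), 2)

# Recorded input/expected-output pairs act as the regression baseline.
REGRESSION_CASES = [
    {"items": [(2, 9.99), (1, 5.00)], "tax_rate": 0.07, "expected": 26.73},
    {"items": [],                     "tax_rate": 0.07, "expected": 0.0},
    {"items": [(3, 1.50)],            "tax_rate": 0.0,  "expected": 4.5},
]

def run_regression():
    """Run every recorded case; return the list of failures (empty means pass)."""
    failures = []
    for case in REGRESSION_CASES:
        actual = calculate_order_total(case["items"], case["tax_rate"])
        if actual != case["expected"]:
            failures.append((case, actual))
    return failures

if __name__ == "__main__":
    failed = run_regression()
    print("PASS" if not failed else f"FAIL: {failed}")
```

When the bug fix in the scenario above “slightly changed the workflow,” a suite like this fails loudly on exactly the cases that changed, telling QA precisely where to look instead of forcing a full manual restart.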
If you are starting a greenfield application, it’s really easy to get this done. However, most of the time the application that needs automated regression the most is the lumbering technical-debt wasteland of your most antiquated legacy systems. The goal should be a kaizen approach: every time a developer commits code, add a new regression test. At 5 regression tests a week, you will have about 65 regression tests within a quarter, and after a year, with 250+ regression tests, you should have significant coverage. Don’t agonize over where you start; just recognize the value and get it done. Even if you can only automate half of the regression tests needed to release, you get back roughly a week of every two-week cycle — over a year of releases, that adds up to something like 24 weeks of slack your QA team can spend writing even more regression tests.
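The arithmetic behind the kaizen pace is easy to check — a quick sketch, assuming 13 weeks per quarter and 52 per year:

```python
# Cumulative regression-test count at a steady kaizen pace of 5 tests per week.
TESTS_PER_WEEK = 5

after_quarter = TESTS_PER_WEEK * 13  # ~13 weeks in a quarter
after_year = TESTS_PER_WEEK * 52     # 52 weeks in a year

print(after_quarter, after_year)  # 65 260
```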
Where to Start?
Dave Ramsey. The guy is a genius not because his Total Money Makeover is complex, but because it is simple. Take all your debts (in this case, a full list of everything you need to get tested) and line them up in terms of value. What one test could you write today that would let you sleep better at night after a deployment because you know that piece works? That’s what you start with. Make a team goal to write 5 tests a week, and before you know it, you’ll sleep like a baby. The added benefit is the snowball effect this brings to the team: more automated regression means fewer critical bugs, which means more throughput for the team, which means more time allocated to writing regression tests.
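The debt-snowball idea above can be sketched as a simple ranking: list everything that needs a test, score each item by how much pain a regression there would cause, and always write the highest-value tests next. The backlog areas and scores below are hypothetical examples, not a recommended scoring scheme.

```python
# Hypothetical test backlog, ranked debt-snowball style: highest value first.

test_backlog = [
    {"area": "order processing", "value": 10},  # your largest client's workflow
    {"area": "report exports",   "value": 3},
    {"area": "user login",       "value": 8},
    {"area": "admin settings",   "value": 2},
]

def next_tests(backlog, per_week=5):
    """Return the areas to cover this week, highest business value first."""
    ranked = sorted(backlog, key=lambda t: t["value"], reverse=True)
    return [t["area"] for t in ranked[:per_week]]

print(next_tests(test_backlog, per_week=2))  # order processing, then user login
```

The ordering work is trivial; the hard and valuable part is the honest scoring conversation with the business about which regressions actually hurt.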
A small note for the business: if you are always wondering why it takes so long to release software, this is usually the bottleneck. If bugs are consistently being released, or the same bug reappears after it was supposedly fixed in a previous release, then go ahead and throw automated regression tests into your backlog.