Has Your Test Automation Become A Big Ball Of Mess?
By Mary Thorn, Director, Ipreo
There are several reasons why this happens, in my experience. It usually starts when the automation framework is created and the first tests are written. The easiest tests to automate are usually at the user interface, which makes them slower and more brittle. The testers who write these first tests are also typically inexperienced at writing them, so the tests end up inefficient, and now the foundation of your framework is neither solid nor scalable. Some automation strategies say that when you start automating, you should just let the team start writing tests to get used to writing them. In my opinion, however, you need to have a strategy and doggedly pursue it. Mike Cohn’s test automation pyramid is my preferred strategy: at the base of the pyramid are automated unit tests, where the majority of your tests should be written; in the middle are your API/services tests, of which you have quite a few but not as many as unit tests; at the top are UI tests, which make up only about 10 percent of your automation.
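To make the base of the pyramid concrete, here is a minimal sketch of checking a business rule with a fast unit test instead of driving it through the UI. The pricing rule and all names here are hypothetical, not from the article:

```python
# Hypothetical business rule, verified at the unit level -- the base of
# the pyramid: no browser, no server, runs in milliseconds.
def discounted_price(unit_cents, quantity):
    """Total price in cents, with a 10% discount on orders of 10+ units."""
    total = unit_cents * quantity
    if quantity >= 10:
        total -= total // 10  # integer cents keep the arithmetic exact
    return total

# Unit-level checks; a UI test of the same rule would be far slower
# and more brittle.
assert discounted_price(500, 10) == 4500
assert discounted_price(500, 1) == 500
```

The same rule could also be exercised once through an API test in the middle layer; only a thin slice of end-to-end behavior needs UI coverage at the top.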
While implementing this strategy, first train the whole team on the automation framework, so that both testers and developers know how to write tests in it. Second, agree on basic coding standards: just as application code is treated as production level, your test automation code should be too, and the same kind of coding standards should be defined so that when you do code reviews you have standards to review against. Last, discuss at what level each automated test should be written: UI, API, or unit.
Every sprint, there should be a conversation in which the whole team agrees on which tests will be written at which level, and confirms that those tests are not duplicated in the framework.
Another reason test automation becomes a big ball of mess is that teams never get into the “stop the line” mindset when a test fails. Without this culture, more and more tests get put into an “ignored” state, or are just plain ignored. This accrues debt, because team members have to spend time analyzing those failed tests instead of fixing tests or fixing code. I have seen instances where analyzing all the failures from a test automation run took two weeks; in that case, just throw out the tests and start over. If your automation is in this state today, you can first move all your “unstable” tests out of your regression run and put them in their own automation run. Then negotiate time with the business to refactor those tests, if you find they still have value. The goal is that once you put focus on getting these tests passing, the team moves to a culture in which, if a test fails, you stop everything and fix it. The second option is to throw them out completely, which in my experience is an equally valid option.
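One way to carry out that first step is to tag unstable tests and partition the suite, so the main regression run contains only tests the team will stop the line for. A minimal sketch, with illustrative tag and test names; in a real framework this would be a tag or marker filter (for example, pytest’s `-m "not unstable"` selection):

```python
# Sketch: split a suite into a stable regression run and a quarantined
# "unstable" run. The suite data here is illustrative.
def partition_suite(tests):
    """Return (regression, quarantine) lists based on an 'unstable' tag."""
    regression = [t for t in tests if "unstable" not in t["tags"]]
    quarantine = [t for t in tests if "unstable" in t["tags"]]
    return regression, quarantine

suite = [
    {"name": "test_login", "tags": set()},
    {"name": "test_flaky_search", "tags": {"unstable"}},
    {"name": "test_checkout", "tags": set()},
]

regression, quarantine = partition_suite(suite)
# The regression run stays green-or-stop-the-line; the quarantine run
# gets refactored (or deleted) on its own negotiated schedule.
```

The design point is that a failure in the regression run is always actionable, so nobody learns to shrug at red builds.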
Last, and the main reason tests become inefficient: we don’t spend time refactoring them. Tests, like code, can become brittle, obsolete, slow, and no longer valuable. Quarterly, you should put effort into reviewing your slowest test steps, slowest test cases, and slowest test suites. Spend time refactoring them to speed them up, or delete the tests. Yes, I just said delete the tests. If a test has never failed in five years of running, maybe it is not a good test. Spend time reviewing the tests that have never failed: make sure they are still good tests, or see whether each one is duplicated at a lower level of automation, where a failure would be caught before your test run.
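The quarterly review can start from recorded run times. Here is a small sketch of ranking tests by duration to pick refactoring or deletion candidates; the timings and names are made up, though tools such as pytest’s `--durations` flag report real equivalents:

```python
# Sketch: rank tests by recorded duration (in seconds) so the slowest
# ones get refactored or deleted first. The data is illustrative.
durations = {
    "test_full_report_export_ui": 95.5,
    "test_login_ui": 42.0,
    "test_price_api": 0.8,
    "test_discount_unit": 0.01,
}

slowest_first = sorted(durations.items(), key=lambda kv: kv[1], reverse=True)

# Flag anything over an agreed budget (30 seconds here) for the review.
refactor_candidates = [name for name, secs in slowest_first if secs > 30.0]
```

A duration budget like this also makes the “delete the test” conversation concrete: a 95-second UI test duplicating a unit-level check is an easy cut.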
Maintaining the test automation strategy has been one of the biggest challenges to date in agile software development. This is mainly because most frameworks are maintained by a few people, and those people leave after a few years. They attributed their leaving to “maintaining” old tests rather than “creating” new ones. One team member even went on to say he was on the maintenance team, because the test automation was so complex and extensive that the only thing to do was maintain it and modify a few scenarios here and there when new functionality was added.
Here is where the cyclical problem occurs, and why your test automation can get out of control. When a new team member comes on board, if we don’t review the automation strategy, don’t train them on the framework, don’t focus on the importance of the tests passing, and don’t emphasize the importance of refactoring their tests, we end up in this big ball of mess. What’s interesting is that we usually apply that level of diligence when on-boarding people onto our production code. Just imagine: if we treated our test code with the same level of craftsmanship as our production code, we might never have gotten into this mess in the first place.