How We Sharpened the Edge of Our Approach to QA and Testing
Tanvi Gupta, Director of QA, Green Dot Corporation
First, we must understand our strengths and weaknesses; only then can a team be built with the right skills to manage the fast pace of Agile. At Green Dot, we quickly realized that a centralized automation team was needed to avoid fragmentation of the code used to test frontend and backend features. Without it, the delivery teams were spending a large amount of time maintaining the framework. This team's focus is to manage the frameworks and upgrades, and to have the bandwidth, from time to time, to convert testcases from manual to automated in order to give the delivery teams a boost. It has few ties to the scrum teams that test the new or updated requirements for features.
Epics, by their nature in Agile, can and do span multiple teams. The first step from a quality standpoint in managing any requirement is to identify the requirement and the workflows needed for test coverage. Here, the delivery teams analyze concurrency scenarios, dependencies that could be impacted, risk, and finally positive as well as negative scenarios. With multiple teams, it is easy for each team to go off and do its own thing, which in turn becomes harder to maintain long-term. Using Jira, it is vital that each team create the same workflow for how a requirement makes its way to production. This comes in the form of subtasks: several subtasks must be completed before a requirement can be delivered to production. The power of having the same tasks across multiple teams is that if an RCA (root cause analysis) is ever called for on an issue, we can trace back exactly what coverage the team provided.
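As an illustration, that standardized workflow could be encoded as a simple checklist that every team's stories are validated against before delivery. This is a minimal sketch; the subtask names here are hypothetical stand-ins, not Green Dot's actual Jira configuration.

```python
# Hypothetical sketch: verify that every story carries the same
# standard subtasks before it can be marked ready for production.
STANDARD_SUBTASKS = [
    "Testcase Creation",
    "Testcase Review",
    "Automation",
    "Execution",
]

def missing_subtasks(story_subtasks):
    """Return the standard subtasks a story is still missing."""
    present = set(story_subtasks)
    return [s for s in STANDARD_SUBTASKS if s not in present]

# Example: a story that skipped the review step is flagged.
story = ["Testcase Creation", "Automation", "Execution"]
print(missing_subtasks(story))  # -> ['Testcase Review']
```

Because every team uses the identical subtask names, the same check works across all scrum teams, which is what makes the RCA trace-back possible.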
With a dedicated centralized team working on frameworks, the benefits have been standards for framework code, stability of code check-ins, and an increase in automation coverage.
The first subtask, testcase creation, requires the tester to sit and think through the grouping of the tests, from end-to-end, positive, and negative scenarios to boundary testing and exploratory testing. Once the tester has a good set, the tests are prioritized, from those that are necessary to run all the time down to those that only need to run now. This classification is important because it lends itself to figuring out, in a time-boxed manner, which testcases need to be automated first and which can be shelved to be worked on at a later time.
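One way to make that classification actionable is to tag each testcase with a priority and partition the set into an "automate first" group and a shelved backlog. The following is a minimal sketch; the priority scale and testcase names are assumptions for illustration, not Green Dot's actual taxonomy.

```python
# Hypothetical sketch: partition testcases by priority so the
# highest-value cases are automated within the time box first.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    priority: int  # 1 = run all the time ... 3 = run only for this release

def plan_automation(cases, automate_threshold=1):
    """Split cases into those to automate now and those to shelve."""
    automate_now = [c for c in cases if c.priority <= automate_threshold]
    shelved = [c for c in cases if c.priority > automate_threshold]
    # Work the always-run cases first, in priority order.
    automate_now.sort(key=lambda c: c.priority)
    return automate_now, shelved

cases = [
    TestCase("login_happy_path", 1),
    TestCase("boundary_max_amount", 2),
    TestCase("exploratory_new_ui", 3),
]
now, later = plan_automation(cases)
print([c.name for c in now])    # -> ['login_happy_path']
print([c.name for c in later])  # -> ['boundary_max_amount', 'exploratory_new_ui']
```

Raising the threshold when more time-box capacity is available pulls the next tier of shelved cases into the automation queue.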
The second subtask, testcase review, is broken down into a review with a peer, with the product owner, and finally with the developer. This step reconciles the various interpretations that can arise from the acceptance criteria, and it gives the team time to confirm the requirements prior to release.
The most important subtask listed is the need to automate the feature quickly in the limited time given. Here, the grouping and prioritization done at testcase creation play a major role: the higher the priority of a testcase, the more attention is paid to managing its automation cycle and its maintenance thereafter. It is the responsibility of each delivery team to be confident in its ability to create smoke, sanity, and robust regression suites that are automated end to end.
This all leads to the final task: execution on multiple internal test environments. Because of the framework and the attention paid to it, the automated tests can run on multiple test environments at the same time. This frees the tester to continue adding to the automation or to manually execute the remaining testcases that are lower priority for conversion to automation.
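The parallel execution step can be sketched with a thread pool that fans the same automated suite out to each internal environment concurrently. The environment names and the run_suite stub below are illustrative assumptions, not the actual framework.

```python
# Hypothetical sketch: run the same automated suite against several
# internal test environments at the same time.
from concurrent.futures import ThreadPoolExecutor

ENVIRONMENTS = ["qa1", "qa2", "staging"]

def run_suite(env):
    """Stand-in for invoking the automation framework against one environment."""
    # In practice this would drive the real frontend/backend tests.
    return (env, "PASSED")

def run_everywhere(envs):
    """Execute the suite on all environments concurrently."""
    with ThreadPoolExecutor(max_workers=len(envs)) as pool:
        return dict(pool.map(run_suite, envs))

results = run_everywhere(ENVIRONMENTS)
print(results)  # one pass/fail result per environment
```

Because the suites are independent per environment, a thread pool is enough here; a framework that shares fixtures or test data across environments would need coordination beyond this sketch.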
The final version is then delivered to production: a feature with quality.