During the last six years at Sears Israel, we have kept shortening the release cycle. For the first two years we deployed a version every two months (give or take a week or two). Later, sometime in 2010, we opted for a two-week release cycle. This posed many challenges to all teams involved in the process, but we all adapted, and the classic roles of the teams did not change much: the Product team wrote the spec, R&D wrote the code, QA tested and approved the quality, and the Operations team deployed it. When we decided to deploy the code "when the feature is done", things had to change. We could no longer afford to have the QA team standing as a gatekeeper, approving every version and thus slowing the process down. Responsibility for the quality of the features was officially shifted from the QA team to the developers, and "QA approval" was no longer a mandatory step in order to deploy to Production.
This shift posed a big challenge to the QA team. If we're not the gatekeepers, who are we? If developers are not required to get QA approval, why would they go through all the hassle of "version to QA -> bugs -> fix -> version to QA" ad nauseam? Looking for answers on the web was not very encouraging, as we quickly found out that Facebook doesn't have a QA team and Google says Test is Dead, so what hope did our little team have? Many people were concerned: mostly QA engineers failing to understand their future role (some probably fearing for their jobs), and developers not too happy about losing their testers and having to dirty their hands with the undignified work of manual testing.
The journey took almost two years. From a team of concerned and doubtful QA engineers and managers, we shaped a team of professionals who know exactly why they are needed in the release cycle even though they are not mandatory. The major shift in mindset was from acting as a mandatory link between R&D and Operations to being service providers, catering to the needs of other groups in the organization. We're not the gatekeepers anymore, but we don't want to be.
We are the testing experts. Not manual testing, mind you (though we will lend a hand if a developer asks nicely), but other, specialized forms of testing: performance testing, subjective quality (search results quality, recommendations quality, newsfeed quality, etc.), mobile testing, fault and recovery testing, and more.
We provide risk analysis and help developers and product managers see risks in integration with other interfaces. Over time, our application has grown more complex, with a lot of moving parts interacting with each other. Because the QA team is small, sharing information is relatively easy for us, and we invest time and resources in keeping our QDOs informed on components they may not have day-to-day experience with. This knowledge allows each QDO to identify risks not only in their own domain and feature but also in the interaction of new features with other interfaces and domains.
We are the reporters and intelligence officers. When you deploy a version every few weeks, there's plenty of time to investigate and reflect on what went wrong and what went right, what the current problems are, and where we should go next. This task gets more complex when you deploy a version several times per day. When, exactly, are you supposed to stop and reflect? One of the services our QA team provides is weekly reports in which we connect the dots, so to speak, and tell interesting and important stories reflecting the quality in each domain. Not an automatically generated bug list, mind you, but thoughtful analysis of the current quality in a domain, usage pattern analysis, recommendations on where we should invest development resources, and more.
These are the main services we provide, but there are numerous others as well: we manage bugs reported by support and internal users, we run usability and functionality analysis, we champion specific bugs that need to be fixed but fall through the cracks, we manage interfaces with other QA teams in the organization, and we go over domain-specific bugs with the relevant domain stakeholders and assist in prioritizing them.
What about automation, I hear people ask. What about what used to be the holy grail of QA's future? While we have some automation capabilities, we believe that since the developers own the quality of their code, they should also be the ones responsible for automating the tests. We assist and have our own automation solutions, but the majority of automation in our organization lies on the R&D side, and that makes perfect sense to everyone.
The software industry has changed dramatically over the last 10 years, and QA needs to change and adapt. We were once the gatekeepers, but we evolved, offering unique services no other team in the organization can: testing expertise, analysis, connecting the quality dots, and providing quality-related services to whoever needs them. If your organization faces the same challenges, you had better think in advance about where you want the QA team to be and what unique value it can bring to the game. As the old saying goes: evolve or die.