Tuesday, October 4, 2011

The role of QA in a Continuous Development environment

One of the challenges we, as a QA organization, face in a Continuous Development environment is that the "old" definitions of QA are no longer valid for us. We are not the "gatekeepers". We do not give "green light" or "go/no-go" signals. Why? Because if we want to deploy a new version with new features to production on a daily basis, we need to move fast, and repetitive dev/QA cycles are a huge overhead.

In the following bullets I have summarized the challenges our R&D organization faces and the way we in the QA team should tackle them.

  • What do we want to achieve?

o We want to release features as soon as they are ready, regardless of other features’ readiness.

o We want to keep the time between the moment a developer starts coding and the moment the feature is deployed in Production to a minimum.

  • How are we going to keep the development cycle short while releasing on a daily basis?

o Reduce the number of “links” in the release chain – only one person (the FO, the development Feature Owner) will be needed to take a version to Production (no dev->QA->dev->QA->OPS->Prod, but instead only dev->Prod)

o Reduce the overhead of Dev/QA interaction. This overhead is bearable when releasing a version every two weeks (if the QFO, a.k.a. the tester in some organizations, is stuck, she can do something else), but it becomes unbearable when releasing on a daily basis (if the QFO is stuck, no QA work is done and as a result the whole process is stuck).

  • How are we going to reduce the number of links in the chain and reduce dev/QA friction?

o QFOs (testers) will no longer be a “required phase” for releasing the version. Instead, QFOs will now specialize in providing tools for the development Feature Owners to verify that the released version is of high quality. If the QFOs provide high-quality tools and the FO uses them properly, the version will reach production in high quality (a minimal sketch of this flow appears right after this list).

  • What tools will the QA provide for the FOs?

o Critical review of Spec and Design

o Test cases

o Automatic tests (where the QFO is able to code automation)

o Exploratory tests once the feature reaches production

  • Who does the actual testing? It depends:

o Automation can be written either by the development Feature Owner or QA Feature Owner, depending on availability and proficiency in automation coding.

o High-priority manual tests are usually performed by the FO; the QFO assists when needed.

o Low-priority tests are the QFO's domain.

o Once a feature is deployed in production, QA runs exploratory tests on it to see if we missed anything.

  • Who owns the quality?

o The FO owns the quality of each release. The FO bases her assessment of the release quality on the tools provided by QA.

§ Why can’t QA own the quality of each release? Because if QA is to own it, we will need time to test it, and that is exactly the overhead we’re trying to cut out.

o QA owns the general quality of the application. QA is required to reflect the quality of the application to decision makers and help them focus the bug-fixing resources on the most painful areas.
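
As a concrete illustration of the flow above, here is a minimal sketch, in Python, of the kind of release script a QFO might hand to the FO: it runs the QFO-provided automated checks and promotes the build to Production only if they pass. The test-module name ("release_checks") and the deploy command are hypothetical placeholders for illustration, not our actual pipeline.

    # Minimal sketch: the FO alone takes a version to Production by first
    # running the checks the QFO provided. "release_checks" and "./deploy.sh"
    # are hypothetical placeholders, not real project artifacts.
    import subprocess
    import sys

    def run_qfo_checks():
        """Run the automated test suite the QFO handed to the FO."""
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "release_checks", "-v"]
        )
        return result.returncode == 0

    def deploy_to_production():
        """Placeholder for the actual dev -> Prod deployment step."""
        subprocess.run(["./deploy.sh", "--env", "production"], check=True)

    if __name__ == "__main__":
        if run_qfo_checks():
            deploy_to_production()
        else:
            print("QFO checks failed; fix the feature before deploying.")
            sys.exit(1)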


Our CEO, having seen the above bullets in an email thread, challenged me and asked: "So if you guys are not doing the testing and are not the owners of quality, why do we need QA?" A fair question, to which I gave the following reply:

Why do we need QA?

  • (Most) developers don’t write good tests.

o Developers tend to think positively instead of negatively. The tests that they write will make sure that the feature works when used properly, but will not find where it breaks (see the testing sketch after this list).

o QFOs usually have a broader view of the system and see interactions between components that developers tend to neglect when designing tests.

  • QA still owns the quality of the application end to end.

o QA is responsible for examining all the existing problems in the system and generating relevant heat maps that help decision makers balance resources between developing new features and fixing bugs (see the heat-map sketch after this list).

o Even if developers test their features perfectly without QA assistance, QA is needed to reflect whether the experience is good. Are the response times reasonable? Is it easy (not only possible) to register? Is the newsfeed interesting (instead of merely working)?

o QA is still the team to ask “what is the quality of our offering?”

  • I believe that the QFO can also take ownership of automating the tests, as we already see in some cases. Developers will code the feature, and QFOs will code the tests and hand them to the developers.
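
To make the "positive vs. negative" point concrete, here is a toy sketch in Python. The parse_age function is a made-up stand-in for a real feature; the contrast between the two styles of tests, not the function itself, is the point.

    # Toy sketch of positive vs. negative testing. parse_age is a made-up
    # stand-in for a real feature under test.
    import unittest

    def parse_age(value):
        """Stand-in feature: convert user input to an age between 0 and 120."""
        age = int(value)
        if not 0 <= age <= 120:
            raise ValueError("age out of range")
        return age

    class DeveloperStyleTests(unittest.TestCase):
        """Positive thinking: confirms the feature works when used properly."""

        def test_valid_age(self):
            self.assertEqual(parse_age("42"), 42)

    class QfoStyleTests(unittest.TestCase):
        """Negative thinking: probes where the feature breaks."""

        def test_non_numeric_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_age("forty-two")

        def test_negative_age_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_age("-1")

        def test_absurdly_large_age_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_age("1000")

    if __name__ == "__main__":
        unittest.main()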
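
And here is a toy sketch of the "heat map" idea: aggregating open bugs by component and severity so decision makers can see where bug-fixing effort is most needed. The bug list is made-up sample data, not real numbers from our tracker.

    # Toy heat map: count open bugs per (component, severity) pair.
    # The bug list is fabricated sample data for illustration only.
    from collections import Counter

    open_bugs = [
        {"component": "newsfeed", "severity": "high"},
        {"component": "newsfeed", "severity": "high"},
        {"component": "newsfeed", "severity": "low"},
        {"component": "registration", "severity": "medium"},
        {"component": "search", "severity": "low"},
    ]

    heat = Counter((bug["component"], bug["severity"]) for bug in open_bugs)

    print(f"{'component':<14}{'severity':<10}{'open bugs':>10}")
    for (component, severity), count in heat.most_common():
        print(f"{component:<14}{severity:<10}{count:>10}")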

3 comments:

ripper234 said...

Interesting evolution in the time of Continuous Deployment.

I do think that the bullet "QA is needed to reflect whether the experience is good" doesn't really belong here. The bullet is true, of course, but the "experience is good" is as much the FOs responsibility as the feature's functionality.

The FO is responsible for his feature being correct, performant, and relevant - so if the newsfeed isn't interesting, the FO can't say "well I just implemented the spec", and hand this off to QA. It's good that QA verify these parameters, but the primary responsibility still lies with the FO.

Omri Lapidot said...

Testing the *experience* of a feature, or its subjective quality, is often an ongoing, prolonged and complicated task which can (and often should) be performed after the feature is released. While the development teams live on a feature-by-feature life cycle, QA teams are more suited for continuous and repetitive tasks such as testing the subjective quality of features.

Innovapath said...

Excellent Post. Also visit http://whiteboxqa.com/.