Friday, May 7, 2010

Automatic tests in Delver: past and present

Our automation efforts started in May/June 2009 and have evolved rapidly to their current form. In this post I'll review the process, what we are currently doing and where we want to go.

We started with Greybox testing. After reading a product's spec, we developed code that tested each method from a QA point of view, going deeper and broader than the developers' unit and integration tests. We were quite happy for a while: we found bugs early, the bugs were fixed before the final version reached QA, and we did not need to wait for the final version to start testing. Cool, huh?
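To give a feel for what these Greybox tests looked like, here is a minimal sketch in JUnit style. The SearchService class and its methods are hypothetical, invented for illustration; the point is that the test calls a backend method directly and probes edge cases beyond what a developer's unit test typically covers.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical Greybox test: exercises a backend service method directly,
// probing edge cases beyond what the developers' unit tests covered.
public class SearchServiceGreyboxTest {

    private final SearchService service = new SearchService(); // hypothetical class

    @Test
    public void emptyQueryReturnsNoResults() {
        assertTrue(service.search("").isEmpty());
    }

    @Test
    public void queryIsTrimmedBeforeSearching() {
        assertEquals(service.search("delver"), service.search("  delver  "));
    }
}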

Not really. In July we had three testers writing code and one manual tester. When we compared the number of bugs found via automated testing with the number found manually, it quickly became apparent that ~80% of the bugs were detected by the one manual tester after the version reached QA, while the remaining 20% were detected by the three testers writing automated tests. What was the problem? Many of the bugs were the result of missing components (the button wasn't there), broken components (the button was there but it didn't work) and other UI shenanigans. Yes, the backend was working quite efficiently, but the application was buggy as hell. If hell has bugs, that is.

Another problem we found with Greybox testing is that we realized we were writing tests the developers were supposed to write. The developers liked it, of course, but we felt we were not doing our job.

The next step was to switch to automatic UI tests, or AUI tests. We reviewed several solutions and finally settled on Selenium, which we still use today and will probably continue using for the foreseeable future.
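For reference, here is the general shape of a Selenium-backed AUI test. This is a sketch using the modern Selenium WebDriver Java API rather than the Selenium RC API we used back then; the URL and element IDs are hypothetical.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginButtonTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Hypothetical URL and element IDs, for illustration only.
            driver.get("http://example.com/login");
            driver.findElement(By.id("username")).sendKeys("qa-user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            // A UI-level check a Greybox test would have missed:
            // the button must exist and actually lead somewhere.
            if (!driver.getCurrentUrl().contains("/home")) {
                throw new AssertionError("Login did not reach the home page");
            }
        } finally {
            driver.quit();
        }
    }
}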

We use AUI tests as an extension of our QA Feature Owners (QFOs for short). AUI tests are written by the QFOs and automated by our Automation Engineers (AEs) based on priority (set by the QFO) and complexity of automation (estimated by the AEs). Tests which are both high priority and low complexity are automated. It is important to note that after tests are automated, their execution remains the responsibility of the QFO. QFOs are the driving force of Delver's QA team, and AUI tests are tools at their disposal.

One of the greatest challenges we faced when we started automating UI tests was converting manual tests into tests that are easy for our AEs to automate. In a way, manual tests are the complete opposite of automated tests:
  • It is effective to write long manual tests which cover several aspects/features of the application in one flow. If this approach is used in AUI tests, a failure gives no clear indication of what exactly broke. For example, if you mean to test feature A but pass through features B and C on your way there, and the AUI test fails, how can you be sure whether it failed at A, B or C?
  • In manual tests there is no problem with copying and pasting blocks of instruction text ("Login, Search, click on button A") and modifying them per test ("Login, Search, click on button B"). This is a major headache for AEs, who have to pinpoint the minuscule differences between tests (button A in the first example, button B in the second).
After understanding the problems AEs face, we changed the way tests are written. QFOs now write tests which are short, specific and very focused. For the sake of automation, we encourage writing multiple short tests over fewer, longer ones (see the sketch below). This change wasn't easy for the QFOs, but they adapted rapidly. The alternative, I explained to them, was manually running their tests every two weeks.
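To make the contrast concrete, here is a sketch of the same coverage written as two short, focused tests instead of one long flow. Test names, URL and element IDs are hypothetical; a shared setup method performs the login/search steps, so each test asserts exactly one thing and a failure points straight at the feature that broke.

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical example: two short, focused tests sharing setup,
// instead of one long flow covering buttons A and B together.
public class ResultPageButtonsTest {

    private WebDriver driver;

    @Before
    public void reachResultsPage() {
        driver = new FirefoxDriver();
        driver.get("http://example.com"); // hypothetical URL
        driver.findElement(By.id("search-box")).sendKeys("delver");
        driver.findElement(By.id("search-button")).click();
    }

    @Test
    public void buttonAIsClickable() {
        assertTrue(driver.findElement(By.id("button-a")).isDisplayed());
        driver.findElement(By.id("button-a")).click();
    }

    @Test
    public void buttonBIsClickable() {
        assertTrue(driver.findElement(By.id("button-b")).isDisplayed());
        driver.findElement(By.id("button-b")).click();
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}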

After a test is written by a QFO and we decide to automate it, the QFO and AE meet to make sure the AE understands what the test does and how it tests it. The fact that all of the AEs are required to run some manual tests on each version also helps in that regard: the application is not "the QFO's problem" but a living entity they interact with on a regular basis.

After an AE is done writing the test, she is required to hold two reviews. The first is a code review with a fellow AE, going over the effectiveness of the code in the same manner every code-writing member of Delver R&D is required to hold code reviews. The second is a test review, in which the QFO verifies that the code actually tests what it is supposed to. In the past, test reviews were painful because they revealed large gaps between the QFO's intention and the AE's execution, but as QFOs began writing more automation-oriented tests and AEs became more familiar with the application and its testing focus, the reviews have become shorter and easier.

Here is the procedure we use now for QFO/AE interaction:
  • QFO writes a test.
  • If we decide to automate the test, QFO and AE meet in order to briefly go over the test and make sure the AE understands it completely. This meeting is not mandatory.
  • AE writes the test. This typically takes one day.
  • AE reviews the code with another AE.
  • QFO reviews the final product.
To get where we are today, QFOs needed to adjust the way they write tests to one which is not optimal for manual testing, and AEs were required to make sure their code reflects the intention of the QFOs' tests. We have come a long way in less than a year, and I am currently happy with the results. If the QFOs are the heart of the QA team, automated tests are their cybernetic legs and arms, allowing them to move and test faster.

I just wish I could find a metaphor for QFOs shooting lasers out of their eyes; that would have been so cool.

1 comment:

פשה said...

Please mention what's a "QFO" and an "AE".
