June 21, 2010

Mozilla WebQA Automation and You: a Primer

I’ve alluded to (in small pieces) the automation work we’re doing here in Mozilla Web QA, but I’ve been meaning to give a more thorough overview (rather than merely a progress report) of what we’re doing, and how we’re doing it.

What are we doing?

For the top three Mozilla websites (AMO, SUMO, Mozilla.com), we’re automating tests for key functionality; tests that, were they to break, would indicate serious regressions -- things that would block a release.

How are we doing it?

We’re using a few open-source tools (it’s in our DNA) to accomplish this (I’ll explain each in more detail):

Selenium RC provides us with the Selenium Core runtime, and the abstraction layer that our tests (written in Python) run through.
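To make that concrete, here’s a minimal sketch of what such a test looks like -- not one of our actual tests; the host, port, and title assertion are just illustrative -- using the Selenium RC Python client with unittest:

    import unittest
    from selenium import selenium  # the Selenium RC Python client


    class TestAmoHomePage(unittest.TestCase):

        def setUp(self):
            # Host, port, and browser string are illustrative defaults,
            # not our actual configuration.
            self.selenium = selenium("localhost", 4444, "*firefox",
                                     "https://addons.mozilla.org/")
            self.selenium.start()

        def tearDown(self):
            self.selenium.stop()

        def test_home_page_title(self):
            self.selenium.open("/")
            self.selenium.wait_for_page_to_load("30000")
            # The expected title text is just an example assertion.
            self.assertTrue("Add-ons" in self.selenium.get_title())


    if __name__ == "__main__":
        unittest.main()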

Selenium Grid is responsible for the distributed scheduling and execution of Selenium RC-run tests; it generally does a good job of firing up and tearing down Selenium Remote Control instances (there are occasions, however, when a browser hangs and a manual restart is required on our end).
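From the test’s point of view, the Grid hub looks just like a single Remote Control: point at the hub’s host and port, ask for a browser/environment string, and the hub hands the session to a matching RC. Here’s a rough sketch of how a test might pick up those settings -- the environment-variable names are an assumption for illustration, not our actual configuration:

    import os
    from selenium import selenium

    # Talk to the Grid hub instead of an individual Remote Control; the
    # hub forwards the session to an RC matching the requested browser.
    # These variable names and defaults are illustrative only.
    HUB_HOST = os.environ.get("SELENIUM_HOST", "localhost")
    HUB_PORT = int(os.environ.get("SELENIUM_PORT", "4444"))
    BROWSER = os.environ.get("SELENIUM_BROWSER", "*firefox")

    sel = selenium(HUB_HOST, HUB_PORT, BROWSER, "https://support.mozilla.com/")
    sel.start()   # the hub allocates an RC and launches the browser
    sel.open("/")
    sel.stop()    # ends the session and frees the RC for the next test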

Hudson is our continuous-integration server; it interfaces with Selenium Grid by picking up changes from our test repositories in SVN (like a cron job, it polls for additions on a schedule), issuing commands to Selenium Grid to start new builds, and handling a whole host of other important things, such as providing pass/fail status for builds, histograms of results over time, etc.
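Under the hood, each Hudson job just runs a build step that invokes a test runner and checks its exit status. A simplified sketch of that kind of entry point (the test module name here is hypothetical):

    import sys
    import unittest

    import test_amo_homepage  # hypothetical test module name

    if __name__ == "__main__":
        # Hudson runs this script as a build step; a non-zero exit status
        # is what turns the build red.
        suite = unittest.defaultTestLoader.loadTestsFromModule(test_amo_homepage)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)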

Python is our language of choice for many reasons, not the least of which is that our awesome WebDev team has largely standardized on it for their own unit tests (they also use Hudson, and are the reason we are, too). In addition to being a robust language, Python has a tremendous community (and the amazing Django framework, which WebDev also uses for both AMO and SUMO development, in the Zamboni and Kitsune projects, respectively).

When a build passes, its light goes green in Hudson; when it fails, we see red and get both email and IRC notifications. For failures, we get the Python traceback (which comes to us directly from Selenium), and we’re usually able to troubleshoot pretty quickly and either fix the problematic/incorrect test or file a bug on development, as appropriate. Maintaining tests can be a big part of automation, so it’s important to write them to be both flexible and granular enough -- which is always a balancing act.
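As a simplified illustration of that balancing act: keeping locators in one place and keeping each test focused on a single feature makes maintenance much cheaper when a page changes. The page wrapper, URL, and locator below are made up for the example, not taken from our real tests:

    import unittest
    from selenium import selenium


    class PersonasPage(object):
        """Thin page wrapper so tests don't hard-code locators.

        The URL and locator here are placeholders, not the real ones.
        """
        URL = "/en-US/firefox/personas"
        FEATURED_LIST = "css=div.featured-personas"

        def __init__(self, sel):
            self.sel = sel

        def open(self):
            self.sel.open(self.URL)
            self.sel.wait_for_page_to_load("30000")

        def has_featured_personas(self):
            return self.sel.is_element_present(self.FEATURED_LIST)


    class TestPersonas(unittest.TestCase):

        def setUp(self):
            self.selenium = selenium("localhost", 4444, "*firefox",
                                     "https://addons.mozilla.org/")
            self.selenium.start()

        def tearDown(self):
            self.selenium.stop()

        def test_featured_personas_are_listed(self):
            # One focused check: if it fails, the traceback points straight
            # at the feature that regressed, and only the locator in the
            # page wrapper needs updating when the markup changes.
            page = PersonasPage(self.selenium)
            page.open()
            self.assertTrue(page.has_featured_personas())

If the test breaks, a glance at the test name and the single assertion usually tells us whether the page really regressed or the test itself needs updating.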

Where are we today, with regards to automated test coverage?

AMO - 25 tests; that might not sound like much, but it actually covers quite a bit of functionality, as most of the tests come pretty close to covering a particular feature (e.g. Personas, category landing pages, etc.). The AMO tests are modular, too; they use:

SUMO - 61 tests; like its big brother (AMO), SUMO has started down the path of the same setup:

Mozilla.com - 3 tests; we’re working on ensuring that the Mozilla.com download buttons and browser-specific redirects (Opera/Safari/Chrome, IE, current Firefox version, old Firefox versions, etc.) work.
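A rough sketch of the kind of check involved follows -- the locator and link check are placeholders, and the browser-specific coverage comes from running the same test under different Grid environments (*firefox, *iexplore, *safari, and so on):

    from selenium import selenium

    # Placeholder host, port, and locators; not our actual test code.
    sel = selenium("localhost", 4444, "*firefox", "http://www.mozilla.com/")
    sel.start()
    try:
        sel.open("/en-US/firefox/")
        sel.wait_for_page_to_load("30000")
        assert sel.is_element_present("css=div.download a"), "download button missing"
        href = sel.get_attribute("css=div.download a@href")
        assert "firefox" in href.lower(), "unexpected download link: %s" % href
    finally:
        sel.stop()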

While a few of the AMO tests were converted from our old Selenium IDE tests, most were written from scratch; in the case of SUMO, most were converted from the IDE and cleaned up/fixed. We’re still ironing out and refining our framework and test setup(s), as well as continually sharing best practices across the automation projects; if you’re interested in helping out by writing Python tests, please take a look at the projects and dive in. We’re reachable at mozwebqa@mozilla.org.

Soon, I’ll be rewriting our Contribute page to better organize, solicit, and engage test-automation efforts (as well as end-user testing). Stay tuned!

Thanks!

Posted by stephend at June 21, 2010 9:28 PM