June 2010 Archives

Building on the momentum of the SUMO 2.1 release, which converted the old Tiki-Wiki-based Contributors/Off Topic/Knowledge Base forums to Python (code-named Kitsune), we're redesigning (and reimplementing from scratch) the more prominent and feature-rich Firefox Support forums.

It's starting to take shape in our 2.2 milestone.

As is always the case with Mozilla projects, we value and need your input and your help reporting issues and feedback along the way. How can you contribute?

Simple: take a look at the mockups (and their discussion) in bug 556287 and bug 556285, and start testing on http://support-stage-new.mozilla.com/en-US/questions, taking note of the open bugs for unimplemented feature work and known issues.

As you can see from the mockups, we're introducing more crowd-sourcing, placing solutions more up-front and marking them more clearly, and generally cleaning things up and optimizing both the question-asking and answering workflows. A lot remains to be finished, but the SUMO team is working as quickly as it can to wrap up a ton of features in the next couple of weeks.

I'm excited to see what they can do with your help! Although our previous 2.1 release took a little time getting out of the gate, it's a solid release, and we're still largely on track with the development timeline.

If you're looking for testing tips, drop by the Mozilla Web QA channel, #mozwebqa on irc.mozilla.org, or, if you already know what you're doing, please file bugs in the SUMO product, Forum component.

I’ve alluded (in small pieces) to the automation work we’re doing here in Mozilla Web QA, but I’ve been meaning to give a more thorough overview (rather than merely a progress report) of what we’re doing and how we’re doing it.

What are we doing?

For the top three Mozilla websites (AMO, SUMO, Mozilla.com), we’re automating tests for key functionality: tests that, were they to break, would indicate serious regressions, the kind of thing that would block a release.

How are we doing it?

We’re using a few open-source tools (it’s in our DNA) to accomplish this (I’ll explain each in more detail):

Selenium RC provides us with the Selenium Core runtime and the abstraction layer that our tests (written in Python) run through.

Selenium Grid is responsible for the distributed scheduling and execution of Selenium RC-run tests; it generally does a good job of firing up and tearing down Selenium Remote Control instances (there are occasions, however, when a browser hangs and a manual restart is required on our end).

Hudson is our continuous-integration server; it interfaces with Selenium Grid by picking up changes from our test repositories in SVN (like a cron job, it polls for additions on a schedule) and issuing commands to Selenium Grid to start new builds, and it handles a whole host of other important things, such as providing pass/fail status for builds, histograms of results over time, etc.
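Conceptually, the cycle Hudson runs for us looks like the sketch below. This is an illustration of the poll/build/notify workflow, not Hudson's actual code; every class and name here is made up, with small stubs standing in for SVN, the grid, and the notifications.

```python
# Conceptual sketch of the poll -> build -> notify cycle: check SVN
# for a new revision, kick off a build on the grid when something
# changed, and report the result. All names here are illustrative.
class FakeSCM:
    """Stands in for polling an SVN repository."""
    def __init__(self, head):
        self.head = head

    def head_revision(self):
        return self.head


class FakeBuilder:
    """Stands in for dispatching a build to Selenium Grid."""
    def run(self, revision):
        return "SUCCESS"  # a "FAILURE" here is what turns a build red


class FakeNotifier:
    """Stands in for the email/IRC notifications."""
    def __init__(self):
        self.reports = []

    def report(self, revision, result):
        self.reports.append((revision, result))


def poll_once(scm, builder, notifier, last_built):
    """One polling tick: build only if the repository moved."""
    head = scm.head_revision()
    if head != last_built:
        notifier.report(head, builder.run(head))
        return head
    return last_built
```

The key property is that an unchanged repository triggers no build, while each new revision triggers exactly one build and one notification.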

Python is our language of choice for many reasons, not the least of which is that our awesome WebDev team has largely standardized on it for their own unit tests (they also use Hudson, and are the reason we do, too). In addition to the Python language being robust, it has a tremendous community (and the amazing Django framework, which WebDev also uses for both AMO and SUMO development, in the Zamboni and Kitsune projects, respectively).

When a build passes, its light goes green in Hudson; when it fails, we see red and get both email and IRC notifications. For failures, we get the Python traceback (which comes to us directly from Selenium), and we’re usually able to troubleshoot pretty quickly and either fix the problematic/incorrect test or file a bug on development (as appropriate); maintaining tests can be a big part of automation, so it’s important to write them to be both flexible and granular enough, which is always a balancing act.
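One way to strike that flexible-versus-granular balance can be sketched like this: keep a page's locators in one class, so a markup change is a one-line fix, while each test stays narrow enough that a red build points at one feature. The locator strings are made up, and the `FakeSelenium` stub just records calls so the example stands alone.

```python
# Sketch of the flexibility-vs-granularity balance described above:
# locators live in one page class (a markup change is fixed in one
# place), while tests call page methods and assert one narrow thing.
# Locator strings are assumptions; FakeSelenium is a recording stub.
class FakeSelenium:
    def __init__(self):
        self.calls = []

    def type(self, locator, text):
        self.calls.append(("type", locator, text))

    def click(self, locator):
        self.calls.append(("click", locator))


class QuestionsPage:
    # All locators for the page, centralized in one spot.
    SEARCH_BOX = "id=search-q"            # assumed locator
    SEARCH_BUTTON = "css=button.search"   # assumed locator

    def __init__(self, sel):
        self.sel = sel

    def search_for(self, term):
        self.sel.type(self.SEARCH_BOX, term)
        self.sel.click(self.SEARCH_BUTTON)
```

A test then reads as intent ("search for crash") rather than as a list of locators, which is what keeps maintenance manageable as the UI changes.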

Where are we today, with regards to automated test coverage?

AMO - 25 tests; that might not sound like much, but it actually covers quite a bit of functionality, as most of the tests come pretty close to covering a particular feature (Personas, category landing pages, etc.). The AMO tests are modular, too; they use:

SUMO - 61 tests; like its big brother (AMO), SUMO has started down the path of the same setup:

Mozilla.com - 3 tests; we’re working on ensuring that the Mozilla.com download buttons and browser-specific redirects (Opera/Safari/Chrome, IE, current Firefox version, old Firefox versions, etc.) work.
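The gist of those redirect checks can be sketched as a User-Agent classifier. The mapping below is illustrative, not Mozilla.com's actual redirect rules (the return values are made-up labels), but it shows the ordering traps that real User-Agent strings impose.

```python
# Illustrative sketch of the browser detection behind those redirect
# tests; return values are made-up labels, not Mozilla.com's real
# redirect targets. With real UA strings, the order of checks matters.
def expected_redirect(user_agent):
    ua = user_agent.lower()
    if "opera" in ua:      # first: old Opera UAs also claim "MSIE"
        return "opera-landing"
    if "msie" in ua:
        return "ie-landing"
    if "chrome" in ua:     # before Safari: Chrome's UA contains "Safari"
        return "chrome-landing"
    if "safari" in ua:
        return "safari-landing"
    if "firefox" in ua:
        return "firefox-download"
    return "default-download"
```

The actual tests then just assert that hitting the download button with each browser lands on the page this mapping predicts.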

While a few of the AMO tests were converted from our old Selenium IDE tests, most were written from scratch; in SUMO's case, most were converted from the IDE and cleaned up/fixed. We’re still ironing out and refining our framework and test setup(s), as well as continually sharing best practices across the automation projects; if you’re interested in helping out by writing Python tests, please take a look at the projects and dive in. We’re reachable at mozwebqa@mozilla.org.

Soon, I’ll be rewriting our Contribute page to better organize, solicit, and engage test-automation efforts (as well as end-user testing). Stay tuned!

