Recently in testing Category

In a previous post, I wrote about Sauce Labs and detailed how their Selenium-in-the-cloud service ("OnDemand")--and its really nice video support in particular--helped Web QA, and yours truly especially, quickly make sense of a perceived performance regression. It turns out that we keep finding reasons to look to them for additional capacity and capabilities, and, thanks to a great working business relationship, they're helping us meet those needs.

The problem: as stated in our Q2 goals, our team has to "Plan for and support a high-quality launch of http://persona.org". The goal goes on to state that we'll support desktop on a variety of browsers, as well as mobile.

Our supported, in-house browser and operating-system environments are Mac, Windows 7, and Windows Vista. Until recently, we didn't have in-house Opera or Chrome support, so calling out to Sauce for those browsers has been really beneficial. (When we launch a new site, we always cover all major browsers manually, but given our Grid infrastructure, the need to run tests frequently, and the maintenance cost, we try to keep our automation to the essentials; other teams have different, more-stringent browser requirements, so we needed to add both Chrome and Opera to our automation.)

The solution: it's often been said that companies should "focus on their core competencies" (I'll admit there are counter-points aplenty), and it's a view I happen to subscribe to, especially given the demands on our team. To that end, instead of planning for, provisioning, and then maintaining a morass of browser environments on a plethora of operating systems, we increased the scope and frequency of our use of Sauce Labs: we keep covering what we reasonably can in-house, while immediately spinning up test coverage for the rest--and making our developers and the Identity/Services QA team we support very, very happy.

The here and now: as of this moment, we've been able to bring up 42 test jobs against myfavoritebeer.org (dev/beta) and 123done.org (dev/beta), covering the most-critical paths of login and logout; we're closing in on new-user registration and changing passwords, too.

The future: as our testing needs increase, so, likely, will our usage of Sauce, and it'll be as easy as cloning a job in Jenkins and changing a single parameter, thanks to the built-in flexibility that Dave Hunt's pytest-mozwebqa plugin provides.
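
For the curious, here's a minimal sketch of what a test looks like under that plugin (written for this post; the fixture attributes reflect my understanding of pytest-mozwebqa at the time, and the command-line options are illustrative): the plugin hands each test a mozwebqa object wrapping the configured WebDriver session and base URL, so pointing the same suite at dev, beta, or Sauce OnDemand comes down to the options a Jenkins job passes on the command line.

    # Minimal sketch, not production code: a test written against the
    # pytest-mozwebqa plugin's "mozwebqa" funcarg, which carries the
    # WebDriver session and the --baseurl the Jenkins job was given.
    def test_user_can_log_in_and_out(mozwebqa):
        driver = mozwebqa.selenium            # the live WebDriver session
        driver.get(mozwebqa.base_url)         # dev or beta -- whatever --baseurl says
        # ... page-object calls to log in, assert on state, and log out ...

    # Cloning a Jenkins job and "changing a single parameter" then amounts to
    # changing one command-line option, e.g. (values and options approximate):
    #   py.test --baseurl=https://myfavoritebeer.org tests/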

So, thanks again, Sauce; this is just another example of how you're helping the Selenium [1] and Open Source communities, and, in particular, Mozilla!

[1] Sauce Labs powers the critical Selenium-core, continuous-integration tests for the project: http://sci.illicitonion.com:8080/

While we support many Mozilla projects on the Web QA team, I'd like to highlight one that I've been working on for the last couple releases now: Mozilla Reps.

We'll do a testday on it soon; in the meantime, I just wanted to encourage anyone interested to start looking at it now :-)

How can you help? Like all Mozilla projects, there's always a list of fixed bugs/features to verify, as well as a good amount of exploratory testing. For examples of the kinds of things we'd test for, check out a couple blog posts by yours truly: part 1, part 2.

What makes testing this particular site a bit unique is that Mozilla Reps is run by a community committee, so most of the functionality depends on being a bona fide member; for testing purposes, we can bypass the usual application form by making you a "vetted 'Reps' member" (though, of course, we encourage you to apply for real!)

The best way to begin is to visit the team in #remo-dev on irc.mozilla.org and let anyone in the channel know that you're interested in helping test!

Here are some more general resources to help get you familiar:

* Mozilla Reps wiki page, for general information/an overview
* Mozilla Reps "Website" page, for specific website-release information
* a list of open bugs, so you don't file duplicates
* all Resolved FIXED bugs needing verification

Eventually--I'm told--the plan is to incorporate elements (or the site wholesale) into Mozillians (https://wiki.mozilla.org/Mozillians), so look for that in the future!

And, lastly, a link back to the Web QA homepage, in case you have any further questions about this or any other web-testing projects at Mozilla!

In addition to co-hosting/sponsoring many Selenium Meetups, we've been really grateful to have a generous and accommodating business relationship with Sauce Labs, which allows us to augment and complement our own custom-built Selenium Grid testing infrastructure with theirs. For our most recent Socorro release, I and the whole team (dev + QA) really came to appreciate one feature in particular--video--for two reasons.

To get you familiar with Mozilla's Selenium Grid setup, let me back up just a second; Dave Hunt has previously highlighted his pytest-mozwebqa plugin (still in beta) for our WebQA team, which--among all the many other awesome things it does--allows us to very easily use Sauce Labs' API to integrate with their OnDemand service.

Inspired by the wealth of build-reporting information we get from Sauce, Dave set out some time ago to incorporate some of it--particularly capturing screenshots--as well as to write out HTML reports that link us to a Sauce Labs job # (if the job was Sauce-run), complete with video for each test. We looked at incorporating video into our own infrastructure a bit (as did David Burns in Lights! Camera! Action! Video Recording of Selenium), but soon realized that--given all the other challenges of shipping, on average, 7 websites per week, and dealing with our scale of 6 Mac Minis, archiving, disk space, etc.--it wasn't something we wanted to pursue, for now.
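
To give a flavor of how that kind of integration works (a hand-rolled sketch for this post, not the plugin's actual code; the endpoint and fields reflect my reading of Sauce's public REST API): reporting a result back to a Sauce job is essentially one authenticated REST call keyed on the job ID, and the report's link to the job page--video included--is just a URL built from that same ID.

    # Hand-rolled sketch (not pytest-mozwebqa's implementation): report a
    # test's outcome back to its Sauce Labs job and return the job's URL.
    # Endpoint and field names are taken from Sauce's public REST API as I
    # understand it; treat them as illustrative.
    import json
    import requests  # third-party HTTP library, assumed available

    def report_result(username, access_key, job_id, name, passed):
        url = 'https://saucelabs.com/rest/v1/%s/jobs/%s' % (username, job_id)
        body = json.dumps({'name': name, 'passed': passed})
        requests.put(url, data=body, auth=(username, access_key))
        # The job page -- annotated commands, screenshots, and video -- lives here:
        return 'https://saucelabs.com/jobs/%s' % job_id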

For this particular Socorro release, we thought we had--immediately after we deployed--a huge performance regression. See bug 718218 for the gory details, and all the fun that ensued (over the weekend!).

[Screenshot: the Sauce Labs test run]

Typically, video is a "nice to have" feature for us, as our team is quite familiar with debugging failing Selenium tests locally, and our HTML reports and screenshots go a long way, too. For this release, though, I was playing back-up for Matt Brandt, and, as the requisite pointy-hair, I couldn't code or run a Selenium test to save my life (not quite true, but close!). When I began investigating why our Socorro test suite suddenly ran ~20 minutes longer than it ever conceivably should, I discovered the culprit was one of the search tests, test_that_filter_for_browser_results, which we later realized had an erroneous |while| clause whose condition never eval'd to |true|.
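
To give a sense of the failure mode (the snippet below is a simplified, hypothetical reconstruction written for this post, not the actual test code): a polling-style loop was waiting on a condition that never became true, so the test spent its entire, generous timeout sleeping instead of finishing in a second or two--which lines up with the ~20 extra minutes we saw.

    # Hypothetical reconstruction of the bug's shape, not our real code: a
    # wait loop whose exit condition never becomes true, so each run burns
    # the full timeout instead of returning promptly.
    import time

    def wait_until(condition, timeout=1200, interval=2):
        end = time.time() + timeout
        while not condition():              # never evaluates to True...
            if time.time() > end:
                raise AssertionError('timed out waiting for filtered results')
            time.sleep(interval)            # ...so we sleep all the way to the timeout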

I first looked at our own Grid instance, to try to watch (via VNC) our browsers running this particular test, but Grid's distributed test-execution nature meant the test never predictably ran on the same node each time. (In retrospect, I could've used pytest's keyword (-k) flag from a job in Jenkins to run the suspect test in isolation, but I digress.) I needed a way to quickly show our dev team the individual test, without having them or me set up a test environment and fire off that test. Luckily, since Dave had made our Sauce Labs jobs public by default, all it took was a single run of the socorro.prod job via Sauce: I found the individual test, copied the link--which has nice annotated timestamps for each Selenium command--and quickly pasted it in #breakpad.
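
For the record, isolating a single test that way is straightforward with pytest's keyword matching; here's a hypothetical sketch (options and values are illustrative, not our actual Jenkins configuration):

    # Illustrative only -- running just the suspect test by keyword, either
    # from a shell/Jenkins job:
    #   py.test -k test_that_filter_for_browser_results --baseurl=<environment under test>
    # or programmatically, via pytest's own entry point:
    import pytest
    pytest.main(['-k', 'test_that_filter_for_browser_results'])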

Earlier, I mentioned that their video support is great for two reasons: in addition to the above (the ability to view archived videos from past-run jobs), we also took advantage of the ability to watch the individual test--as it ran in real-time--and correlate the currently-issued Selenium command with the hold-up/"perf regression."

We quickly isolated it to the aforementioned, bogus |while| loop, commented it and the accompanying assert out, and certified the release after more careful checking. Because it was the weekend, and folks were available at different times, once each member popped onto IRC, we simply showed them the "before and after" videos from Sauce Labs, and the team regained its confidence in the push.

So, thanks, Sauce, for such a useful feature, even if it's (thankfully) not necessary most of the time!

Mozilla, and WebQA in particular, looks forward to continuing to work with Sauce Labs and the Selenium project, exploring ways in which we can work together to further and bolster Selenium and Firefox compatibility.

(Did the title get your attention? Good! Keep reading...)

My awesome coworker Matt Brandt has highlighted the great work and contributions that Rajeev Bharshetty is doing for, and with, Mozilla WebQA.

I'd like to take this opportunity to call out another great contributor to (and with) WebQA: Sergey Tupchiy, who we nominated for Friend of the Tree (along with Rajeev) in this past Monday's meeting.


Sergey has been instrumental: he helped us turn around a Q4 goal of having test templates custom-tailored for brand-new Engagement/Marketing websites (work which, quite frankly, has applications and implications for our standard test templates going forward).

His flurry of awesome GitHub activity should be self-evident :-)

In 2012, and kicking it off this quarter, Mozilla WebQA (and the rest of Mozilla QA) will be taking a hard look at, and hopefully making many inroads into, a better community on-boarding story; we're blessed to have the great contributors we already do, but need to grow and scale to meet Mozilla's ever-increasing challenges.

We're looking for contributors across all areas of expertise, skill levels, and project interests; if you're interested, or just want to find out more, please contact us via whichever method below works best for you, and we'll happily help you get started!

IRC: #mozwebqa on irc.mozilla.org
QMO homepage: https://quality.mozilla.org/teams/web-qa/
Email: mozwebqa@mozilla.org
Team Wiki: https://wiki.mozilla.org/QA/Execution/Web_Testing

(And a huge thanks, again, to Sergey, and other awesome contributors like Rajeev!)

Catchy title, no? Well, you're reading this...

Today, our legacy (and pretty comprehensive) Selenium test suites for the Mozilla Add-ons website (both written in Python) were forced into retirement. The reason? Our previously-blogged-about AMO redesign--code-named "Impala"--went live for the homepage and the add-on detail pages.

New AMO homepage

Tests of yore:

To be sure, the tests served us well for the past 14 months: when originally written, they covered nearly everything about the end-user experience (except for downloading an add-on, which we couldn't do with Selenium's original API). While not an exhaustive list by any means, we had tests for:


  • navigation, including breadcrumbs, the header/footer

  • locale/language picker options

  • switching application types (Firefox, SeaMonkey, Thunderbird, Sunbird, Mobile)

  • categories (appropriate for each application type)

  • login/logout

  • sorting and presence/correct DOM structure of add-ons (featured, most popular, top-rated)

  • reviews (posting, author attribution, cross-checking star ratings, as we found bugs in updating these across the various site aspects, via a cron)

  • collections (creation, tagging, deletion, etc.)

  • personas (again, different sort criteria were also covered)

  • search (popular, top-rated add-ons, substrings, negative tests, presence of personas/collections in search results, properly, etc.)

  • proper inclusion of Webtrends tagging

  • ?src= attributes on download links, to help ensure we didn't break our metrics/download-source tracking

  • ...and much, much more


...and then the testcase maintenance started:

  • we went through a few small redesigns, here and there (one of which was the header/footer, which for a brief time period we shared with Mozilla.com, around the launch of Firefox 4)

  • we changed where and when categories showed up, and their styling

  • we kept running into edge-cases with individual add-ons' metadata (titles, various attributes -- some valid bugs, some that our tests' routines just couldn't anticipate or deal well with, like Unicode)

  • A/B testing hit us, and we had to deal with sporadic failures we couldn't easily work around

  • ...and so many more aggravating failures, and subsequently, difficult refactoring attempts (most of which were abandoned)


Lessons learned (the hard way):


  • don't try to test everything, or even close to everything -- you really have to factor in a lot of time for testcase maintenance, even if your tests are well-written

  • keep tests as small as possible - test one thing at a time

  • by the same token, don't try to iterate over all data, unless you're trying to ascertain which cases to cover -- it'll just cost you test-execution time, and --again-- testcase maintenance (eventually, you'll either cut down the test iterations, or you'll give up and rewrite it)

  • separate your positive and negative tests: reading long-winded if/else branches or crazy, unnecessary loops when you're already annoyed at a failing test just makes it worse

  • don't couple things that really shouldn't have an impact on each other; for instance, don't repeat search tests while logged in and logged out, unless you have a good reason

  • same goes for user types: while it's seemingly noble to try to cover different user types, don't test everything again with those -- just the cases where functionality differs

  • safeguard your tests against slow, problematic staging servers, but don't go so far as to mask problems with large timeout values

  • fail early in your tests -- if you have a problem early on, bail/assert ASAP, with useful error messages; don't keep executing a bunch of tests that will just fail spectacularly

  • go Page Object Model [1] or go home, if you're writing tests for a website with many common, shared elements (see the sketch just below for what we mean)
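
Here's a minimal sketch of what we mean by that (written for this post -- the class, locators, and assertions are invented, not lifted from our suites): the page object owns the locators and the waiting, and the test reads as a couple of intention-revealing calls plus one early, descriptive assert.

    # Minimal, invented Page Object sketch (not taken from our real suites):
    # the page object owns locators and waits; tests stay short and readable.
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    class HomePage(object):

        _search_box = (By.NAME, 'q')
        _results_locator = (By.CSS_SELECTOR, '.result')

        def __init__(self, driver, base_url):
            self.driver = driver
            self.base_url = base_url

        def open(self):
            self.driver.get(self.base_url)
            return self

        def search_for(self, term):
            self.driver.find_element(*self._search_box).send_keys(term + '\n')
            WebDriverWait(self.driver, 10).until(
                lambda d: d.find_elements(*self._results_locator))
            return self.driver.find_elements(*self._results_locator)

    def test_search_returns_results(mozwebqa):
        home = HomePage(mozwebqa.selenium, mozwebqa.base_url).open()
        results = home.search_for('firefox')
        # fail early, with a message a human can act on:
        assert len(results) > 0, 'Expected at least one search result for "firefox"'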

What's next?


  • a strong, well-designed replacement suite we've been running in parallel: https://github.com/mozilla/Addon-Tests

  • an eye toward Continuous Deployment (thanks for the inspiration, Etsy!), including, but not limited to:

  • better system-wide monitoring, via Graphite (see the sketch just after this list)

  • to that end, much closer collaboration with our awesome dev team on unit vs. Selenium-level coverage balance
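
To make the Graphite bullet concrete (again a hand-rolled sketch; the host, port default, and metric name are placeholders rather than our actual setup): Graphite's carbon listener accepts plaintext "metric value timestamp" lines over a socket, so pushing, say, a suite's run time from a Jenkins job takes only a few lines of Python.

    # Hypothetical example of feeding a metric (e.g. suite duration) to
    # Graphite via carbon's plaintext protocol; host/port/metric name are
    # placeholders, not our real configuration.
    import socket
    import time

    def push_metric(name, value, host='graphite.example.com', port=2003):
        line = '%s %f %d\n' % (name, value, int(time.time()))
        sock = socket.create_connection((host, port))
        try:
            sock.sendall(line.encode('ascii'))
        finally:
            sock.close()

    # e.g. push_metric('webqa.amo.suite_duration_seconds', 512.3)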


Phew!

Ironically, at the same time our legacy test suites started failing, so did a large portion of our rewrite (which also began before the Impala redesign officially landed); the beauty, though, is that it's immeasurably easier to fix and extend this time around, and we aim to keep it that way.

[1] Huge shout-out to Wade Catron, from LinkedIn, who runs a tight Ruby-based test framework and who helped us develop a great initial POM, which we've refined over the past months (video below).

[Embedded video]
