The Testing Taxonomy: How to Make Sure QA Doesn't Fall Through the Cracks

Josh Korr, Former Product Strategy Director

Article Category: #Process

There comes a time in every PM's life when you're this close to a project launch and suddenly you and/or your colleagues have a moment:

You've forgotten testing. Or more charitably, testing has fallen through the cracks.

Last-minute scrambling ensues; frustration abounds; everyone feels terrible.

Nobody wants to feel like that. So how can we make sure testing stops falling through the cracks?

First we need to diagnose the real problem: I think testing frequently gets lost because everyone means something different when they say "testing" or "QA" or "UAT" or "end-to-end testing" or "smoke testing."

This makes it tough to pin down what needs to be done, who is going to do it, and when.

The solution? I give you: A Testing Taxonomy!

By breaking down the fuzzy notion of "testing" into concrete types of testing, we can talk in specifics about what work actually needs to be done.

Now that you're hopefully fired up, let's dig into ... testing taxonomy nomenclature! Woo! Party!

Functional Testing

Functional testing asks: Does the site function — literally, and in an aesthetically- and UX-agnostic way, do things work?

Functional testing covers things like:

  • Workflows and logic
  • CRUD functionality
  • Permissions
  • Visible system behaviors
  • Wired-up data
  • JavaScript interactions and behaviors (carousels, modals, toggles, drag-and-drop)
  • Third-party integrations (videos, maps, share buttons)

Common functional testing questions include:

  • Do database or server errors occur when a user does a thing or goes to a page?
  • Does a URL 404?
  • When a user clicks the "Submit" button on the "New Foo" form with no validation errors, is the form submitted? If so, is the foo created? (See the sketch after this list.)
  • If a given user type is not permissioned to view a given URL, does that URL 404 or redirect as expected for a user of that type?
  • Is all correct data displaying on a wired-up, CMS-driven URL?
  • For a carousel:
    • Do the images display as expected?
    • When a user clicks to advance a carousel in a supported browser, does the carousel advance as expected?
  • Does a third-party video load as expected?
  • Is a "share" modal prepopulated with the correct data?
  • Does clicking a toggle button toggle the UI as expected?
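
Many of these questions can be expressed directly as automated checks. As an illustration, here's the "New Foo" form question above written as a Capybara-style integration test. This is a sketch only; Foo, its routes, and the field names are hypothetical:

# Sketch of the "New Foo" question as a Capybara integration test.
# Foo, the /foos/new route, and the field names are hypothetical.
visit "/foos/new"
fill_in "Name", with: "My first foo"
click_button "Submit"

expect(page).to have_content "My first foo" # the foo is displayed...
expect(Foo.count).to eq(1)                  # ...and was persisted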

Automated Functional Testing

Even while PMs (or other team members or client users) are manually doing the above functional testing, ideally developers are also doing automated testing. (If they aren't, bug them until they do.) At Viget, for example, our Rails apps have 100% automated test coverage. 

Quoth Viget's internal glossary: "An automated test is a bit of code that verifies a specific piece of code in the web application. Tests are used to ensure that a site’s features work correctly and continue to work correctly as changes are made to the code. Every time the application is changed, the test suite is automatically run and developers are notified of any failures. By using automated tests, we can better guarantee that the application works as expected and catch many problems before they make it to the live site."

There are two common types of automated testing, described here by developer David Eisinger:

Unit tests are small, focused tests concerned with individual methods in your code: given this input, I expect to receive that output. A name method that concatenates a user's first and last names might have a unit test like this:

assert_equal "Josh Korr", User.new(first_name: "Josh", last_name: "Korr").name

We put a major emphasis on keeping our unit tests 1) small, 2) fast, and 3) contained (they shouldn't require too much setup).
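
For reference, the name method being exercised could be as simple as this (a minimal sketch; first_name and last_name are assumed to be attributes on the User model):

class User
  # Assumes first_name and last_name attributes (e.g. database columns)
  def name
    "#{first_name} #{last_name}"
  end
end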

Whereas unit tests exercise the components of the application in isolation, integration tests ensure that they work properly together. Often, these take the form of simulated browser interactions:

visit "/account/settings"
fill_in "Email", with: "email"
click_button "Save"
page.should have_content "There was an error updating your account."

Integration testing can take other forms, though — testing an API, for example, requires making code-based API requests and then asserting things about the response. The overarching principle is that they exercise your app end-to-end, from the routing of requests to the authentication layer, to the business logic, to the data store, and all the way back to the user.
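
That API case might look something like this as a Rails request spec. Again, just a sketch; the Foo model and the /api/foos routes are hypothetical:

# Sketch of an API integration test as a Rails request spec.
# The Foo model and /api/foos routes are hypothetical.
describe "Foos API", type: :request do
  it "returns a foo as JSON" do
    foo = Foo.create!(title: "Hello")

    get "/api/foos/#{foo.id}", headers: { "Accept" => "application/json" }

    expect(response).to have_http_status(:ok)
    expect(JSON.parse(response.body)["title"]).to eq("Hello")
  end
end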

Interface Testing

Interface testing asks: Is the UI consistent and usable?

Interface testing covers:

  • User flows
  • UI elements: Forms, buttons, images, interactions, patterns

Common interface testing questions include:

  • Can users achieve their goals?
  • Does the signup flow make sense?
  • Is the same button style used for similar actions across the site?
  • Is form UI (validations, field label placement, field types) consistent across the site?
  • For a carousel:
    • Is there an appropriately sized click area on the controls?
    • Is the slide-change speed appropriate?
    • Does the carousel pause on hover (if that's what was expected)?

Interface testing either encompasses or is a sibling to usability testing, depending on your point of view. I think of usability testing as a type of interface testing that's generally done by people outside the project team, and a bit outside the function-focused testing context.

Visual Testing

Visual testing asks: Is the site visually/aesthetically consistent and polished?

Visual testing covers:

  • Built-out HTML/CSS

Common visual testing questions include:

  • Are font-size, colors, padding, etc. what the designer intended?
  • Is overall fidelity to comps (or to designer's imagination) acceptable?
  • Unscientifically, does the site look good?

Multi-Screen Testing: Cross-Browser, Responsive, and Touch Testing

Multi-screen testing crosses over among functional, interface, and visual testing. But it's useful to think of cross-browser, responsive, and touch-device testing as distinct types to ensure they don't fall through the cracks.

Cross-Browser/Cross-OS Testing

Covers:

  • Interface, visual, and front-end functional testing in different browsers/operating systems (including on mobile devices)
  • Typically less about dev functional testing (e.g. if a URL throws a 500 error in one browser, it'll throw the 500 in all browsers)

Common questions:

  • Visual: Is overall build-out consistent across browsers/OSes?
  • Functional (front-end): Do CSS (rounded corners, animations, hover states) and JS interactions behave as expected in standard browsers and degrade acceptably (as defined by team and client) for non-standard browsers?
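
Some of this can be automated by running the same integration suite against more than one browser. With Capybara and Selenium, for instance, registering an extra driver is a few lines (a sketch; the driver names are arbitrary):

# Register Selenium-backed drivers so the same Capybara suite
# can run against different browsers. Driver names are arbitrary.
Capybara.register_driver :firefox do |app|
  Capybara::Selenium::Driver.new(app, browser: :firefox)
end

Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.javascript_driver = :chrome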

Responsive Testing

Covers:

  • Primarily interface and visual testing at different break points/browser widths
  • May include front-end functional testing, e.g. if JavaScript or form behaviors change at certain break points
  • Typically doesn't include dev functional testing

Common questions:

  • Do expected changes occur at a given break point?
    • Image changes, e.g. loading smaller images and/or images of different dimensions
    • Stacking or other layout changes
    • Content hidden or revealed
    • If functionality is revealed, does it work?
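
Breakpoint checks like these can also be scripted. A sketch, assuming a JavaScript-capable Selenium driver and a hypothetical 375px-wide small breakpoint with a mobile-only nav toggle:

# Resize the browser below a small breakpoint and check that the
# mobile-only UI appears. The 375px breakpoint and the .nav-toggle
# selector are assumptions.
page.driver.browser.manage.window.resize_to(375, 667)
visit "/"
expect(page).to have_css(".nav-toggle")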

Touch Testing

Covers:

  • Functional testing of touch-device-specific UI and features.

Analytics Testing

Covers:

  • Implemented analytics code (GA, ClickTale, Omniture)

Common questions:

  • Is all analytics code implemented as defined by the team and client?
  • Are all events firing?
  • Is all data successfully being tracked as expected?
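
Much of this is verified in the analytics tools themselves, but even a cheap automated check can catch a missing snippet. A Capybara sketch (the tracking ID is a placeholder):

# Cheap smoke check that the Google Analytics snippet made it into
# the page. "UA-XXXXXXXX-1" is a placeholder tracking ID.
visit "/"
expect(page.html).to include("UA-XXXXXXXX-1")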

Content Testing

Covers:

  • Hard-coded and admin-entered content

Common questions:

  • Are spelling and grammar correct? (i.e. editing and proofreading)
  • Does the content meet the client's editorial expectations? (i.e. content strategy)
  • Does the real content fit the designs as expected?

Performance Testing

Covers:

  • Snappiness (super-technical term)
  • Concurrent use (aka load testing)

Common questions:

  • Are page load times and page scrolling acceptable, especially on pages with images, video, third-party widgets/JS, or CPU-heavy CSS animations/transitions?
  • Is the site experience generally smooth and speedy?
  • Does the site function, and is it acceptably snappy, at various levels of concurrent use?
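
Dedicated tools are the right way to do load testing, but even a rough script can flag glaring problems early. A hand-rolled Ruby sketch (the staging URL is a placeholder):

# Crude concurrency smoke test: 20 threads x 10 requests each.
# The staging URL is a placeholder; use a real load-testing tool
# for anything serious.
require "net/http"

url = URI("https://staging.example.com/")

started = Time.now
threads = 20.times.map do
  Thread.new { 10.times { Net::HTTP.get_response(url) } }
end
threads.each(&:join)

puts "200 requests in #{(Time.now - started).round(1)}s"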

Wishlist Creation != Testing

A huge caveat: clients and internal stakeholders love to mix actual testing with wishlist creation. When they finally get their hands on the site, they start thinking about all the things from the two-month-old designs that they now want to change, and come up with a ton of new ideas while playing with their new toy.

Both of those things are natural and fine. But don't confuse the subsequent change and feature requests with functional bugs.


A Couple Notes About Process

The testing taxonomy is intentionally process-agnostic. Every company will have its own names, tools, and rituals for its overarching testing process. But any testing process fundamentally exists to do some combination of the above types of testing.

That said, I want to introduce one process concept as part of the testing taxonomy:

  • Testing is most effective when there is an ongoing testing phase as well as a polish testing phase.

At Viget, we historically talked about "QA" as a two-week thing that happens at the end of projects. This is cray, as most projects have way too much to effectively test in such a short timeframe. So now we explicitly talk about — and budget for — ongoing testing as well as an inevitable polish testing phase once everything is fully implemented.

Calling All Taxonomists

I'd love to hear your feedback and thoughts on the taxonomy. This is all just made up, after all. (Special thanks to my colleague Kevin Vigneault for helping make up the functional/interface/visual/multi-screen framework.)
