Testing and QA: Make it Happen
I’ve heard you should spend one hour testing for every development hour you put into a site. Holy cats, friend. That’s a ton of work. But in many web shops, there’s not always a person assigned to plan and tackle testing. So whose responsibility is it?
Most of the time QA (quality assurance) ends up being everyone’s job. But when a task is on everyone’s to-do list, it gets pushed aside or rushed. No one owns the process, so it doesn’t happen because, hey, someone else will do it, right?
Although QA might not be in our formal job descriptions, web strategists and web content experts are particularly poised to pitch in on QA, and perhaps even lead the effort, because we’re not directly coding and building functionality. If there’s no one in charge of QA, it’s our job to make it happen.
QA Isn’t Usability Testing
When I talk about QA, I don’t mean usability testing that shows us how an audience uses a website. You might spot opportunities to improve usability in the QA process, but usability research isn’t the goal. The testing I’m talking about is the final testing you do to make sure you meet your specs and deliver something functional. Usability research, in contrast, is typically done earlier in the process when you’re defining your requirements and trying out options.
My insights here come from experience in interactive production and CMS management. There are lots of ways to QA and test, and I’m not formally schooled in the craft. But I’ve learned it doesn’t really matter what methods you use as long as you have them, they work for your team, and they catch major issues before you go live.
Identifying What to Test
It can be daunting to know where to start testing, especially if you’re rolling out a major upgrade or new website. To make QA more manageable, I group it into phases. This gives me a roadmap and ensures I cover the bulk of what needs testing.
Basic Functionality
I typically start by testing the basic functionality of a new website or update. No need for a scientific process here; I just start interacting. Does the form submit? Are major interactions still working after a template change? Does the navigation work? Because it doesn’t really matter if we’re missing a navigation item when I can’t even get the nav to expand or drop. First things first. It’s gotta work.
Begin this testing as soon as your development team has something they think is functional. The goal here is to uncover and resolve big issues before releasing it to others for review. Think of this step as a sort of failure or destructive testing. Try to break it. Test areas of the site that have nothing to do with the update to make sure they’re still humming. Interact with your site in ways you think no one possibly would because, believe me, they will. Report anything that is a must-fix issue.
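If your team scripts any of its testing, a few of these first-things-first checks can be automated so they’re easy to rerun after every fix. Here’s a minimal smoke-test sketch using Playwright (just one option, and an assumption on my part); the staging URL and the selectors are hypothetical, so swap in whatever matches your site.

```ts
// smoke.spec.ts — a minimal smoke-test sketch (Playwright assumed; URL and selectors are hypothetical)
import { test, expect } from '@playwright/test';

const BASE = 'https://staging.example.com'; // swap in your development or staging site

test('homepage loads and the nav expands', async ({ page }) => {
  await page.goto(BASE);
  await expect(page).toHaveTitle(/.+/);                  // the page rendered a title at all
  await page.locator('nav button.menu-toggle').click();  // hypothetical nav toggle
  await expect(page.locator('nav ul')).toBeVisible();    // the menu actually opens
});

test('contact form submits', async ({ page }) => {
  await page.goto(`${BASE}/contact`);
  await page.locator('input[name="email"]').fill('qa@example.com');
  await page.locator('button[type="submit"]').click();
  await expect(page.locator('.confirmation')).toBeVisible(); // hypothetical success message
});
```

Even a handful of checks like these gives you a quick is-it-still-humming pass you can run after each round of fixes, which frees your human testers to focus on breaking things.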
This initial testing is usually best performed by people who can partner closely with your development team to troubleshoot and communicate issues. Your developers will problem solve on the fly and try different approaches to make a fix, so they’ll need testers who have availability and can articulate clearly what they’re experiencing.
If your testing time is tight, ask the development team to identify the key areas they’re worried about or that were most affected by their work. They usually have a good sense of what they’ve tested well and where potential issues lurk.
Look and Feel
After you’ve resolved major bugs and confirmed that core functionality works, you can request more subtle tweaks and updates.
Check the aesthetics and appeal to ensure what you’re building meets brand standards. Does it match the approved design or fit the overall feel of the site? Are you using the right logo? You’ll likely want to pull in your designer (if they’re not coding the project), brand manager, or whoever can sign off on the looks.
You can also work with your development team on other enhancements to improve the experience. Is there a wonky interaction? Does it meet the web standards you want to enforce? Work with your programmers to prioritize these items (must-fix versus nice-to-have) because, depending on complexity, they might not all get resolved before go-live.
Specs and Scenarios
After your site is in pretty good shape, it’s time to do a thorough run-through of your specs and key user scenarios so you can account for all promised deliverables.
Track down any spec docs, wireframes, and user stories to create a master list of all requirements that need testing and confirmation. For example, if you’ve updated the site registration and login process, map out all the steps:
Site registration button
Site registration form with email and first name
Confirmation and password setup email
Login button
Logout button
Update account preferences
Password reset/recovery process
Etc.
Remember to think about processes that aren’t necessarily visible to the user. If you’re collecting data in a database, include this in your specs.
You can also choose to coordinate final content proofing as part of your QA process. Create a separate list to track pages that need proofing or use CMS workflow tools if they're part of your system.
Once you have your list, buzz through and add any additional instructions or links so it’s easy to distribute testing. Then, share it out in a systematic way with folks on your team who can help test. I’ve used Google Sheets, GitHub issues, and Basecamp docs to assign and track what’s been tested. They each have quirks and benefits, and there are certainly other collaborative tools available.
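If a spreadsheet feels too loose, the same master list can live as a small structured file alongside your project. Here’s a sketch of what a couple of entries might look like; the field names and values are purely illustrative, not a required format.

```ts
// qa-checklist.ts — a sketch of a shareable testing checklist (fields and values are illustrative)
type ScenarioStatus = 'not started' | 'in progress' | 'passed' | 'failed';

interface Scenario {
  id: string;
  description: string; // the requirement, pulled from spec docs, wireframes, or user stories
  steps: string[];     // additional instructions or links for whoever picks it up
  assignee?: string;
  status: ScenarioStatus;
}

export const registrationScenarios: Scenario[] = [
  {
    id: 'reg-01',
    description: 'Site registration form accepts email and first name',
    steps: ['Go to /register', 'Submit with a valid email', 'Confirm the welcome email arrives'],
    assignee: 'sam',
    status: 'not started',
  },
  {
    id: 'reg-02',
    description: 'Password reset/recovery process works end to end',
    steps: ['Request a reset from the login page', 'Follow the emailed link', 'Log in with the new password'],
    status: 'not started',
  },
];
```

Whatever format you pick, the point is the same: one shared list, one owner per item, and a clear status.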
Devices and Browsers
Throughout all testing, you should be looking at your web product on multiple browsers and platforms. If you can’t access your development site on mobile devices, use your browser’s developer tools to preview your pages at other screen sizes until you’ve rolled out your site to a more accessible environment. These built-in tools aren’t perfect, but they’ll help you catch major issues early.
When you’re assigning scenarios and specs for testing, you can also assign browsers and devices.
First, you’ll want to determine how to prioritize your devices and browsers based on what makes the most sense for your project and resources. Then, create A and B lists because testing takes time and you want to make sure you hit all your major environments first.
I recommend testing in actual environments when possible instead of emulators and simulators, especially for your A list. Old school, I know, but it’s the real deal so I like it. If actual environments aren't available, services like BrowserStack or CrossBrowserTesting give you quick access to a full suite of browsers and mobile emulators. Work with your web and IT team ahead of time to come up with your in-house list of testing platforms so they're available when you need them.
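If you do script any checks (like the smoke tests sketched earlier), most testing tools also let you encode your A list directly in their configuration so every run covers your priority browsers plus a couple of emulated devices. A sketch with Playwright, purely to illustrate the idea:

```ts
// playwright.config.ts — a sketch of an A-list browser/device matrix (Playwright assumed)
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'desktop-chrome',  use: { ...devices['Desktop Chrome'] } },
    { name: 'desktop-firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'desktop-safari',  use: { ...devices['Desktop Safari'] } },
    { name: 'iphone',          use: { ...devices['iPhone 13'] } }, // emulated, not a real device
    { name: 'android',         use: { ...devices['Pixel 5'] } },   // emulated, not a real device
  ],
});
```

Emulation still isn’t the real deal, so keep your hands-on devices in rotation for anything that depends on touch or performance.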
Analytics and Tracking
Don’t forget to test your analytics. As someone who doesn’t live and breathe analytics, I know this can be tricky. One of my programming friends recommends using browser extensions to verify analytics are running as expected.
If you’re using Google Analytics, you can install these Chrome extensions to help you test:
Tag Assistant - This handy extension quickly identifies if GA is running on the page and what accounts are connected.
Google Analytics Debugger - When you open your developer console, GA Debugger logs the specifics of your analytics events so you can see how they’re labeled and what’s tracking.
Both extensions also identify potential errors if too many accounts are running on the page or if analytics isn’t firing properly.
If you’re running Site Catalyst or other analytics software, look through your browser extensions to find similar tools to help you spot check your tracking.
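If your site loads Google Analytics through gtag.js or Google Tag Manager (an assumption here), you can also spot check tracking from a scripted test by confirming that the dataLayer those tags push events into actually exists. A rough sketch:

```ts
// analytics.spec.ts — a sketch that checks the GA/GTM dataLayer after page load
// Assumes gtag.js or Google Tag Manager; the staging URL is hypothetical.
import { test, expect } from '@playwright/test';

test('analytics dataLayer is present and populated', async ({ page }) => {
  await page.goto('https://staging.example.com');
  // Count the entries pushed so far; the initial config push should already be there.
  const entryCount = await page.evaluate(() => ((window as any).dataLayer ?? []).length);
  expect(entryCount).toBeGreaterThan(0);
});
```

This won’t tell you whether events are labeled correctly (that’s still a job for the debugger extensions above), but it will catch a page where analytics never loaded at all.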
Also, use analytics testing as another opportunity to check any databases collecting user info. Make sure the data you're storing is exactly what your analytics team or clients need to track and support user engagement.
Reporting Issues
Now that we've got a handle on what to test, we need to figure out how to report bugs and coordinate updates with our development team.
To make testing easy to manage, it’s best to use an issue or bug tracking system to log all issues or updates for your programmers. Your tool doesn’t need to be fancy. There are plenty to choose from. I’ve used GitHub, BugTracker, and Google Sheets, or a combination of these tools, for various projects. All have free options. I’ve used paid systems, too. They work.
The important thing is having a system to track the following (there’s a quick sketch of these fields after the list):
What the issue is
Status (open, in progress, done, ready for testing, etc.)
Assignee
Who reported it
Priority
Detailed description of the issue
Attachment or link for screenshots
Link to webpage
Device and browser
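However you store them, it helps to treat these fields as a fixed record so every report arrives in the same shape. A quick sketch (names and values are illustrative):

```ts
// issue-record.ts — a sketch of the fields worth capturing for every bug report (names are illustrative)
interface IssueReport {
  title: string;                                                  // short, sortable summary of the issue
  status: 'open' | 'in progress' | 'ready for testing' | 'done';
  assignee: string;
  reportedBy: string;
  priority: 'must-fix' | 'nice-to-have';
  description: string;                                            // detailed description of the issue
  screenshotUrl?: string;                                         // attachment or link for screenshots
  pageUrl: string;                                                // link to the affected webpage
  environment: string;                                            // device and browser, e.g. "iPhone 13, Safari"
}
```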
When you’re reporting issues, don’t shove a bunch of bugs into the same ticket or record. If you do, your programmers will have to spend valuable time sorting out the details and updates will get missed.
Instead, create a separate item for each issue and be clear in your labeling and tagging. When I open a new issue, I try to give programmers as much context as possible in the title so they can organize, sort, and gauge complexity without having to click through and read lengthier descriptions.
For example, I title each new ticket with the area of the site/template, the functionality, and the issue:
Homepage - slideshow - tiles not rotating
Post pages - author info - make author names bold
Footer - registration button - not clickable
From here, developers can easily group issues together, be it by area of the site or function, and decide how they’re going to tackle their list. Some bug tracking tools allow you to set up tags or categories. In this case, you can create a tagging system instead of loading the title with the details.
Then, it’s important to use your tools to move issues along until they're resolved. When a programmer has made a fix, they can assign the issue back to the tester for more QA and review. If the programmer needs some help, they can pass the issue to another developer for insight. The issue should go back and forth until a tester has verified it is resolved. Then, mark it as done so it drops off the queue.
This might sound like common sense, but it’s easy to get lazy and start sending emails, showing up at colleagues’ desks, or sending chat messages instead of opening a new issue and providing all the details. Not that these communication methods are bad; they just don’t allow you to easily distribute the work and track what’s happening.
Advocating for QA
When we don’t have dedicated testers on staff, it’s our responsibility to advocate and make a plan. We also have to build QA into our project timelines. Too often I see web teams getting crushed by mismanaged deadlines and sending out code before thorough testing. We have to expect functionality to break. We have to expect problems and plan time to solve them correctly rather than slapping a band-aid on something that will cause issues down the road.
If QA is everyone’s job, that means it’s ours. So let’s roll up our sleeves, exterminate some bugs, and ship our work with confidence.