Testing ... 1, 2, 3

A good testing plan for new systems will minimize issues, maximize results

By Laura Haight
Originally published as the Digital Maven by Upstate Business Journal

New year, new stuff.

Many companies steer clear of major system upgrades in the fourth quarter. So depending upon your industry, it’s very likely you’ve got an implementation coming up. To that end, I want to talk to you about one of the two most overlooked aspects of adding new technology: Testing.

From a form to a website to a whole new customer relationship management or enterprise resource planning system, good testing is key to maximizing benefits and minimizing problems - both internal and external.

“Well of course, we’re going to test,” I hear you saying. Like most things, there’s doing it and then there’s doing it right.

Businesses can save a lot of time and money, and prevent some reputational hits, by doing a better job of testing on the back end.

Who tests?

Often there’s a system implementation team involved in any new product or system deployment. Too often, those folks are the only testers. Naturally, the people who purchased, designed, or developed the system will test to be sure it works the way they expected it to. But that’s not the end of testing.

For internal systems, a group of users who are familiar with the functions the system should manage, but who were not part of the system design or purchase, should be enlisted to test it. This is where you’ll find out which critical functions have been left out - from the people who do those jobs every day. If the system is public facing - like a website - enlist some “regular users” to test. People who have used your system or website before will be the best barometer of how easy the new one is to navigate, and of whether the functions your actual customers rely on are easy to find and use.

What to test?

Don’t assume your testers know how to test. Make sure they have a detailed testing checklist and that they turn it back in to you when they’re done. It’s a good idea to have multiple testing plans so different groups test different things. Asking a tester to do too much may result in skipped steps.
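If your team keeps any of its checks in code rather than on paper, the same idea translates directly: the checklist lives in one place, and each group gets its own set of steps. Here is a minimal sketch in Python - the groups, steps, and expected results are invented for illustration, not pulled from any particular system:

# A minimal sketch of a testing checklist kept as data, so each group
# gets its own steps and nothing depends on testers improvising.
# The groups and steps below are hypothetical examples.

CHECKLISTS = {
    "implementation team": [
        ("Create a new customer record", "Record saves and appears in search"),
        ("Run the month-end report", "Totals match the old system"),
    ],
    "everyday users": [
        ("Enter an order with a discontinued item", "Clear error message, order not saved"),
        ("Find the contact form from the home page", "Reached in two clicks or fewer"),
    ],
}

def print_checklist(group: str) -> None:
    """Print one group's checklist with room to record the result of each step."""
    for step, expected in CHECKLISTS[group]:
        print(f"[ ] {step}")
        print(f"    Expected: {expected}")
        print( "    Actual:   ______________________")

if __name__ == "__main__":
    for group in CHECKLISTS:
        print(f"\n--- Checklist for {group} ---")
        print_checklist(group)

Printed out or pasted into a document, each group gets a checklist it can work through and hand back, which is the whole point: nothing tested, nothing returned, nothing assumed.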

Why do you test?

Test for failure, not success. This is the biggest mistake most businesses make in testing. If we test to make sure it all works, the most likely result will be that it all works. Consider this example: When testing a new phone system, internal employees call the main number, enter their work extension, listen to the recording, and leave a message. All good. But when a customer tests, they dial the main number, don’t know the right extension, get the company directory, misspell the person’s name or mistype it on the keypad, and end up in an endless loop of recordings. Not good.

Make sure part of your test plan focuses on trying things that should not work - and on confirming that the tester knows the function failed and that the system gives guidance on what to do next.
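For teams that write automated tests, this is the difference between a happy-path test and a negative test. A minimal sketch using pytest - the lookup_extension function is a toy stand-in for whatever your phone system or application actually exposes, not a real API:

import pytest

def lookup_extension(name: str) -> str:
    """Toy directory lookup standing in for the real phone-system API (hypothetical)."""
    directory = {"haight": "x204", "smith": "x110"}
    try:
        return directory[name.strip().lower()]
    except KeyError:
        raise ValueError(f"No extension found for '{name}'. Press 0 for the operator.")

def test_known_name_resolves():
    # Testing for success: the happy path most teams already cover.
    assert lookup_extension("Smith") == "x110"

def test_misspelled_name_gives_guidance():
    # Testing for failure: a typo should produce clear guidance on what
    # to do next, not an endless loop of recordings.
    with pytest.raises(ValueError, match="Press 0"):
        lookup_extension("Smyth")

The second test is the one most plans skip - and it’s the one that catches the endless loop of recordings before a customer does.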

Where to test?

To give the truest result, test in a number of different environments. Test on the worst computer you have, not the best. Test on spotty networks, not your rock-steady internal network. If the system is public facing, test on every device possible: smartphones of different flavors, different browsers, and tablets.
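If any of that checking is automated, the same test can be run once per environment instead of being rewritten for each one. A minimal sketch using pytest parametrization - the page_loads_under helper is a hypothetical placeholder for however you actually drive a browser or device, and the combinations are illustrative, not a recommended matrix:

import pytest

# Each entry is one environment to test in: browser, device, network.
ENVIRONMENTS = [
    ("Chrome", "new desktop", "office network"),
    ("Safari", "older iPhone", "4G"),
    ("Firefox", "five-year-old laptop", "coffee-shop Wi-Fi"),
]

def page_loads_under(browser: str, device: str, network: str) -> bool:
    """Hypothetical placeholder; swap in a real browser/device driver here."""
    return True  # always passes so the sketch runs on its own

@pytest.mark.parametrize("browser,device,network", ENVIRONMENTS)
def test_home_page_loads(browser, device, network):
    assert page_loads_under(browser, device, network), (
        f"Home page failed on {browser} / {device} over {network}"
    )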

No matter how hard you try, there will still be things you didn’t anticipate, that testing didn’t catch, and that didn’t get fixed before launch. But a thoughtful, organized testing plan spread across a number of groups of users and internal/external customers will improve the results and minimize the amount of time you spend fixing and apologizing after you launch.
