When A Little Bit Of Testing Is Better Than….

…None At All!

I think most of us are familiar with the following legal saying:

“A man who is his own lawyer has a fool for his client”

Fool? Yes, a little severe when applied to other professions, but it’s nevertheless a principle that encourages further review. As a QA Consultant/Tester, I’ve been in contact with many agencies that tell me “we’re going to test it ourselves”, often as a money-saving initiative. “Oh, ha ha, we already KNOW a thing or two about QA!” Then why are so many flawed sites being launched?

There was an agency I engaged with that decided to test a web site on their own, dividing the chore between a developer and a project manager. By giving up the use of a professional in this area, they compromised the opportunity to leverage experience, methodology, and context, and perhaps most importantly the chance to EXCEED their client’s expectations. And how is that last part done? By having a professional review your site with the attention of a stakeholder who has the client’s business goals in mind. “Bad” QA testing simply “pushes buttons to see if things make go good. ugg.” whereas “good” QA always has its sights set not only on functional response, but on ensuring the best possible user experience is delivered for the client’s goals and intentions.

A lot of this is pre-determined, of course, by the efforts of team members versed in creating requirements and wireframes, and by attention to a clear and concise copy deck. However, the testing phase is the first opportunity to SEE these disciplines come together and gauge whether the experience is what you intended, what you communicated to your client, and whether it will fulfill your client’s goals. Also important is the opportunity for self-assessment: are we finding true “bugs”, or have we uncovered an issue with a step PRIOR to development (developers, do you hate the term “bugs” as much as I do because it’s often so inaccurate?)? Was any information missing from a wireframe? Did a copy deck carry an “implied” functionality that didn’t make it into the build (this happens more often than you might think)? Responsible QA is a profound opportunity to influence HOW an agency works. And if self-improvement isn’t on your agenda….

That said, and I’ll touch on “risk” more in the future, there’s still the weighty concern of budget. “This is only a small build, so we didn’t really have a budget to bring in a full QA cycle.” Let me get back to the aforementioned agency that decided to handle the testing themselves. When they announced their launch with all the appropriate “Rah Rah! Go team!” hype right here on LinkedIn, I thought I’d take a quick look; after all, I was also interested in the site’s subject matter. Within 15 minutes, I found 5 variances ranging in severity from minor to critical (they actually had pages redirecting to a development server. Ouch). Because I’m so nice (ask around!), I advised the agency of the variances and they were off and running with their post-launch fixes. I received a “thank you so much!” and a note that “we’ll call you in the future!” by email. That’s the last I ever heard from them; perhaps “punishment” for sucking the fun out of things!

This raises the question, “Would SOME professional testing have been better than none at all?” Perhaps budgetary concerns can be addressed by not asking, “How much will a full QA cycle cost?” but rather, “We have $XXX.XX – what kind of coverage will that buy us?”, at least until that expertise is applied to estimating future projects with such a critical step in mind.

“Of course, YOU’RE going to say all this, Claudio! You’re a QA Tester, after all!” Sure, the above serves yours truly well, but I think it’s safe to say it serves everyone.

Especially your client!

– Claudio Sossi