Thursday, September 15, 2011

Manual testing will never go away

What do you do for a living? Test?

When people ask “So, what do you do?” is that what you tell them? “Test” is the generally used professional term, but someone outside the profession probably has a greatly simplified view of what that means, so it sounds like a very simple job. The inquirer might think “oh, that sounds easy, I could do that” but be too polite to say anything.

I have a bit of “dev envy” because everybody and their boss knows what developers do – they make computers do stuff. That sounds intimidating to someone who doesn’t work with computers, as it should – devs have a very challenging job. Devs tend to be smaa-art people, and others know it!

Testers OTOH, they just try a few things and see if it works, right? How hard can that be?

Since you’ve read this far, I presume you know that testers have very challenging jobs too. Sometimes I think a tester’s job requires even more smarts than a dev’s: good testers have to know almost as much as the devs do, they keep up on many different aspects of product development and quality, they tend at times to be much more interrupt-driven than devs, and they have to interface with team members in many different roles as well as represent the customers’ interests. I’ve done both dev and test, and IMO test is more challenging.

At a large software company in Redmond WA, testers are expected to automate all of their tests (at least, that was my experience), which becomes even more challenging after the automated tests have been running for a while, because they tend to fail. James Bach offers an interesting perspective on this phenomenon in his article “Test Automation Snake Oil” (http://www.satisfice.com/articles/test_automation_snake_oil.pdf). For example, Bach notes that automated tests don’t catch bugs by default; manual testers do.

Adam Yuret (his blog is here http://contextdrivenagility.com ) reminded me of a great perspective on the process of automating tests: if testers define a test “case” exercising the product, a good tester executing the test case will find bugs by default. People are generally smart and observant, and they’ll spot things that are out of whack.

Common testing wisdom says that automating that test case amplifies its benefit, because the test a) is run more often, b) runs more quickly, and c) runs more reliably. Unfortunately, that’s not generally the case; as Yuret points out, a manual test yields more measurable information about product quality than the same test run as an automated test, especially for graphical user interface (GUI) tests. It gets worse: an automated test has to measure a result somehow, and by default it doesn’t. Your automated test has zero or more “checks” of product behavior.

If the automated test has zero checks, the result of the automated test will always be PASS, no matter what happens. The value of the automated test case is now negative, because the product team is operating on information that the test case is being “covered” by the test part of the team, when in fact it is not.
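To make the zero-checks failure mode concrete, here is a minimal sketch. The function names and the stand-in “product” are my own hypothetical examples, not anything from a real test suite: the test drives the product but never verifies anything, so it reports PASS even when the product does nothing at all.

```python
# Hypothetical sketch of an automated test with zero checks.

def launch_product_dialog():
    """Stand-in for driving the real product UI.
    Imagine the dialog silently fails to open."""
    return None  # nothing useful happened

def test_dialog_opens():
    launch_product_dialog()  # no check of the result follows...
    return "PASS"            # ...so the reported result is always PASS

print(test_dialog_opens())  # → PASS, regardless of product behavior
```

The test suite dashboard shows green, and the team believes the dialog is covered, but no product behavior was ever measured.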

Bach notes an example of this in his article (linked above): “The dBase team at Borland once discovered that about 3,000 tests in their suite were hard-coded to report success…” I’ve seen even worse: I once reviewed an automated test only to discover that its result was always success, even though the product wasn’t driven to do anything at all.

The challenge is not just to make sure that the checks of product behavior are in there – i.e., that the test fails if the product does not behave as expected – but that in case of failure, the automated test result is descriptive. A Boolean result of FAIL with no other information is not very useful, other than as an indication that the test team has work to do on that automated test.

The value of manual test never goes away, at any point in the software development lifecycle (SDLC). Automation can be a powerful tool, but it’s not at all easy to do well. Testers have a very difficult and challenging job.

Next time (tomorrow), I’ll post about the value of automation, and then next week, move into metaautomation.
