One of the very basic values of MetaAutomation is knowing
what the manual testing role is good at, and where it is, in fact, indispensable.
By “manual testing role” I mean any person on the team who
ever does anything with the software system under test (SUT) and is in a
position to notice something awry: some issue where the software behavior (or
even a non-functional quality attribute) does not meet the requirements or
somebody’s expectations. That person can then characterize and record the issue
(i.e., “bug” it) for the team, so the issue can be considered for a potential
action item to fix it. So, this does not necessarily require a person who is
committed full-time to a test role, or to a manual test role.
People are smart. People are clever, innovative, flexible,
and observant. People notice things and can communicate them to other people
(or record them for their own reference). Quality automation, that is, the
automation that makes and communicates quality measurements, is very good at
measuring and reporting on performance, on the steps of driving the product,
and on regressing functional requirements. But there is a lot it can’t see at
all, because many things about the SUT are difficult, risky, or impossible to
measure.
For example, consider a web page layout: is the page attractive,
readable, and usable? Quality automation won’t tell you; you need the manual
testing role to follow up. Fortunately, you’re doing this anyway, even if
nobody on the team thinks of themselves as a “tester,” assuming somebody
checks over the product before it goes live.
A poor understanding of the boundary between what manual testing
is better suited for, and what quality automation should be written to do, is
expensive in terms of product cost and risk.
For example, I’ve seen too much put inappropriately on the
manual testing role. On a credit union web app, giving manual testers ownership
of verifying correct bank balances is expensive and risky: that aspect of the
product is very high priority, yet manual testers might not verify it reliably
for any number of reasons, and in any case doing it manually is slow. The
result is extra cost and risk.
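That kind of high-priority, deterministic verification is exactly what quality automation does well. As a rough flavor (a minimal Python sketch with a hypothetical in-memory stand-in for the account service, not code from any real banking product), such a check might look like this:

```python
from decimal import Decimal  # exact arithmetic; never use floats for money

# Hypothetical, minimal stand-in for the SUT's account service. A real check
# would drive the credit union web app or its service API instead.
class AccountService:
    def __init__(self):
        self._balances: dict[str, Decimal] = {}

    def create_test_account(self, initial: Decimal) -> str:
        account_id = f"acct-{len(self._balances) + 1}"
        self._balances[account_id] = initial
        return account_id

    def post_deposit(self, account_id: str, amount: Decimal) -> None:
        self._balances[account_id] += amount

    def get_balance(self, account_id: str) -> Decimal:
        return self._balances[account_id]

def check_deposit_reflected_in_balance(service: AccountService) -> None:
    """One check, one verification: a posted deposit shows up exactly."""
    account_id = service.create_test_account(Decimal("100.00"))
    service.post_deposit(account_id, Decimal("25.50"))
    actual = service.get_balance(account_id)
    expected = Decimal("125.50")
    assert actual == expected, f"balance {actual} != expected {expected}"

check_deposit_reflected_in_balance(AccountService())
```

An automated check like this can run on every build in a fraction of a second, which manual verification can never match.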
I’ve also seen too much put on quality automation: trying to
verify many low-priority and relatively superficial aspects of the product,
which can be tricky to write, costly to maintain, and flaky to run. For
example, is a control on the screen the correct color? Unless there’s a
high-priority functional requirement at stake, it’s better to skip that in
quality automation. Checking such properties makes checks that are too
complicated, slow, or flaky, or it multiplies checks that verify only
low-priority aspects of product quality.
Quality automation and manual testing have (or should have) a
clear relationship: quality automation checks the high-priority, reliably
measurable behaviors, and manual testing verifies the rest, notices odd stuff,
and does exploratory testing. The relationship depends on knowing where the
team has decided the boundaries and limits are; if the manual testing role
doesn’t know what has already been checked for a given product version and
build, there will be missed stuff and duplicated work.
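One lightweight way to keep that boundary visible (a hypothetical sketch, not part of MetaAutomation itself) is to publish, with every build, a simple machine-readable summary of which checks ran and passed, so the manual testing role can see what is already covered:

```python
import json
from datetime import datetime, timezone

def publish_check_summary(build_id: str, results: dict[str, bool]) -> str:
    """Write a per-build summary of which checks ran and whether they
    passed, so manual testers know what is already verified."""
    summary = {
        "build": build_id,
        "generated": datetime.now(timezone.utc).isoformat(),
        "checks": [{"name": name, "passed": passed}
                   for name, passed in sorted(results.items())],
    }
    path = f"check_summary_{build_id}.json"
    with open(path, "w") as f:
        json.dump(summary, f, indent=2)
    return path

# Example: two checks ran against a hypothetical build 1.2.3.4
publish_check_summary("1.2.3.4", {
    "DepositReflectedInBalance": True,
    "LoginWithValidCredentials": True,
})
```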
Keep checks simple and well documented, so that what is verified
is clearly understood. The Atomic Check pattern of MetaAutomation describes how
to make checks “atomic,” indivisible really, so that a check can’t be
simplified by breaking it up into smaller checks.
Documentation is good, but it can be expensive, and it risks
falling out of date with even minor changes. Atomic Check describes the ideal
solution here: self-documentation of the checks! Even better, naturally
hierarchical check steps self-document both the business-facing logic of the
check, which is easily displayed, and the atomic technology-facing check steps,
at the same time.
How does that work?
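Here is a rough flavor of the idea as a minimal Python sketch; the hypothetical CheckSteps helper below is only an illustration, not MetaAutomation’s actual API. Each named step documents itself as it runs, and nesting the steps produces the hierarchy, so the outer, business-facing steps and the inner, technology-facing steps document themselves together:

```python
import contextlib

class CheckSteps:
    """Sketch of hierarchical, self-documenting check steps: each step
    prints its own name as it runs, indented by its nesting depth."""
    def __init__(self):
        self.depth = 0

    @contextlib.contextmanager
    def step(self, name: str):
        print("  " * self.depth + name)  # the step documents itself here
        self.depth += 1
        try:
            yield
        finally:
            self.depth -= 1

steps = CheckSteps()
with steps.step("Verify a deposit is reflected in the balance"):
    with steps.step("Log in as the test user"):
        with steps.step("Enter credentials"):
            pass  # drive the SUT here
        with steps.step("Submit the login form"):
            pass  # drive the SUT here
    with steps.step("Post a deposit of $25.50"):
        pass  # drive the SUT here
    with steps.step("Check that the balance shows $125.50"):
        pass  # verification goes here
```

Running the sketch prints the indented step hierarchy, which doubles as documentation of what the check does and can’t fall out of date, because the check itself generates it.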
Download and run one or both samples from http://MetaAutomation.net to see this in
action. Step through the code, make changes, and even implement your own atomic
checks with hierarchical self-documenting steps.
If that is too much change for your team right now, here’s a
takeaway you can use immediately: knowing the boundary between quality
automation and manual testing reduces risk and effort, and it ensures that no
aspect of product quality is unintentionally missed.
This page is #6 in a series of six, with the original post
here.