The world of software automation for quality is riddled with failures. When the people writing automation for the software under test (SUT) fail to create reliably-running tests, or it becomes clear that the effort takes more time than first estimated, management and the rest of the team lose confidence. Driving the SUT through scenarios is too often seen as a risky, low-value afterthought. After all, the developers can test the product themselves and learn the things they thought the test team was going to tell them anyway, but for some reason can't reliably deliver.
In any case, the conventional approach to software automation for quality creates a losing situation for the people doing the work.
If they are told that the highest-value automation is end-to-end automation of the product, including the web page or GUI, they are likely doomed to write a system that creates many false positives, i.e., test failures that have nothing to do with product quality, which in turn create more work for them: they must follow up with a debug session just to discover whether there is an actionable piece of information for the rest of the team.
The broader team pays little attention to the results from the checks because they know that:
1. False positives are common, and if there really were a product bug, the authors of the check would discover it in a debug session and tell them.
2. The checks don't measure what they are designed to measure, because they can't possibly match the perception and smarts of a human testing the SUT directly.
With the correct focus on verifying and regressing the business requirements of the SUT, rather than on automating the SUT to make it do stuff, the false-positive problem and the what-is-the-check-verifying problem go away. I created MetaAutomation to describe the optimal approach to solving these problems, and to create many other benefits along the way:
·       The focus is on prioritized business requirements, not manual tests or scenarios
·       Checks run faster and scale better with resources
·       Check steps are detailed and self-documenting, with pass, fail, or blocked status recorded in the results
·       Check artifacts are pure data, enabling robust analysis of results within and across check runs (see the sketch after this list)
·       The quality measurement results are transparent and available for sophisticated queries and flexible presentations across the whole team
·       Performance data is recorded in line with the check step results
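To make the list above concrete, here is a minimal sketch, in Python, of what a self-documenting, pure-data check artifact might look like: hierarchical steps, each with a name, a pass/fail/blocked status, and timing recorded in line, serialized as plain data for later queries. The class and field names are hypothetical illustrations only, not MetaAutomation's actual schema or tooling.

# A minimal sketch of a "pure data" check artifact: every step records its own
# name, pass/fail/blocked status, and timing, and child steps nest inside
# parent steps. Names here are hypothetical, not MetaAutomation's schema.
import json
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class CheckStep:
    name: str
    status: str = "blocked"        # "pass", "fail", or "blocked"
    elapsed_ms: float = 0.0
    children: List["CheckStep"] = field(default_factory=list)

    def run(self, action):
        """Time the action and record pass/fail; an exception marks the step failed."""
        start = time.perf_counter()
        try:
            action()
            self.status = "pass"
        except Exception:
            self.status = "fail"
            raise
        finally:
            self.elapsed_ms = (time.perf_counter() - start) * 1000.0

    def to_dict(self) -> dict:
        return {
            "name": self.name,
            "status": self.status,
            "elapsed_ms": round(self.elapsed_ms, 2),
            "steps": [child.to_dict() for child in self.children],
        }

# Usage: build the step hierarchy, run the leaf actions, and emit the artifact
# as plain JSON so any tool on the team can query or present the results.
root = CheckStep("Verify customer can place an order")
login = CheckStep("Log in as test customer")
root.children.append(login)
login.run(lambda: None)            # the real SUT-driving code would go here
print(json.dumps(root.to_dict(), indent=2))

Because the artifact is nothing but data, a blocked child step, a failing step, or a slow step is visible to the whole team without anyone re-running the check or opening a debugger.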
With MetaAutomation, the test and QA role can produce speedy, comprehensive, detailed, and transparent quality information that ensures functional quality always gets better.
If you want to give the test/QA role the importance and
respect it deserves, try MetaAutomation!