If you’ve written automation for software quality, there’s a good chance you did it wrong. Don’t feel bad; we’ve all been doing it wrong. It’s not our fault. We were led astray by the word “automation.”
Automation is about automatically driving human-accessible tools to accomplish specific tasks. When people started programmatically driving their software system under test (SUT) for quality purposes, overloading the word “automation” seemed like the best way to describe the new practice, but the word actually applies poorly to software quality. “Automating” your SUT means making it do stuff, and that’s usually how management measures output. Quality verifications are added as an afterthought; they have little to do with the “automation” objective, so they tend to be poorly planned and documented.
The word “automation” for software quality distracts people from the business value of the activity: measuring quality.
The misunderstanding that automation for software quality is just doing what humans do (i.e. manual testing), but doing it faster and more often, causes business risk:
· Unless you’re very clear and specific on what is being measured, the quality measure is incomplete, and manual testers must verify it anyway.
· Manual tests are designed by people to be run by people. They do not make ideal automated measurements because they tend to be long, complicated, and burdened with too many or too few verifications.
Automated verifications of services and APIs tend to be more effective, but by the definition above this isn’t “automation” either.
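To make that concrete, a service-level check can name one business requirement and make focused, precise measurements against it. Here is a minimal Python sketch; the endpoint, account ID, and 500 ms threshold are hypothetical stand-ins, not taken from any real system:

import requests

# Hypothetical business requirement: the balance lookup service answers
# a well-formed request correctly and within 500 milliseconds.
BASE_URL = "https://example.test/api"  # stand-in for the SUT's endpoint

def verify_balance_lookup():
    response = requests.get(f"{BASE_URL}/accounts/12345/balance", timeout=5)

    # Verification 1: the service honors its contract.
    assert response.status_code == 200, f"Unexpected status {response.status_code}"

    # Verification 2: the measured response time meets the requirement.
    latency_ms = response.elapsed.total_seconds() * 1000
    assert latency_ms <= 500, f"Too slow: {latency_ms:.0f} ms"

    # Verification 3: the payload carries the required field.
    assert "balance" in response.json(), "Missing 'balance' field"

if __name__ == "__main__":
    verify_balance_lookup()
    print("Pass: balance lookup meets the stated requirement")

Each verification is explicit and named, so when one fails, the result says exactly which measurement of quality fell short.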
At the core of the paradigm shift is an important dichotomy:

People are very good at
· Finding bugs
· Working around issues
· Perceiving and judging quality

But, they’re poor at
· Quickly and reliably repeating steps many times
· Making accurate measurements
· Keeping track of details

Computers driving the SUT are very good at
· Quickly and reliably repeating steps many times
· Keeping track of details
· Making accurate measurements

But, computers are poor at
· Perceiving and judging quality
· Working around issues
· Finding bugs
Conventional automation for software quality misses these distinctions, and therefore comes with significant opportunity costs.
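To see the computer’s side of the dichotomy in code, here is a small Python sketch that plays to those strengths: it repeats one step hundreds of times, measures each run precisely, and keeps track of every detail, a workload that would be slow and error-prone for a person. The step itself is a placeholder; substitute any real scripted action against the SUT:

import statistics
import time

def drive_sut_step():
    # Stand-in for one scripted action against the SUT,
    # e.g. submitting a form or calling a service method.
    time.sleep(0.001)

def repeat_and_measure(step, runs=500):
    """Repeat a step many times, recording a precise duration for each run."""
    durations_ms = []
    for run in range(runs):
        start = time.perf_counter()
        step()
        durations_ms.append((time.perf_counter() - start) * 1000)
    # The computer keeps every detail; a person would lose track quickly.
    return {
        "runs": runs,
        "min_ms": min(durations_ms),
        "median_ms": statistics.median(durations_ms),
        "max_ms": max(durations_ms),
    }

if __name__ == "__main__":
    print(repeat_and_measure(drive_sut_step))

Note what the sketch does not attempt: it makes no judgment about whether the SUT “feels” right, because that strength belongs to people.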
To create software that matters and be effective and efficient at measuring quality, your team must move away from conventional, misguided “automation” and toward a more value-oriented paradigm. To describe this value, I created MetaAutomation.
MetaAutomation is a pattern language of five patterns. It’s a guide to measuring and regressing software quality quickly, reliably, and scalably, with a focus on business requirements for the SUT. The “Meta” addresses the big-picture reasons for the effort, the nature of what automated actions can do, and the post-measurement action items and efficient, robust communication where “automation” by itself fails.
MetaAutomation shows how to maximize quality measurement, knowledge, communication, and productivity. Imagine, for example, self-documenting hierarchical steps, automatically tracked and marked “Pass,” “Fail,” or “Blocked.” Imagine a robust solution to the flaky-test problem!
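To suggest what that might look like, here is a minimal Python sketch of hierarchical, self-documenting steps; the class and method names are my own illustration, not an API from the MetaAutomation book:

class StepLog:
    """A minimal sketch of self-documenting hierarchical steps.

    Each step is recorded and marked Pass, Fail, or Blocked. Once a step
    fails, later steps are marked Blocked and skipped, so a single root
    cause never shows up as a cascade of false failures.
    """

    def __init__(self):
        self.records = []      # mutable [indent, step name, status] entries
        self._depth = 0
        self._failed = False

    def step(self, name, action):
        """Run one named step; 'action' may call step() again for substeps."""
        if self._failed:
            self.records.append([self._depth, name, "Blocked"])
            return
        entry = [self._depth, name, "Pass"]
        self.records.append(entry)
        self._depth += 1
        try:
            action()
            if self._failed:           # a child step failed underneath us
                entry[2] = "Fail (in child step)"
        except Exception as exc:
            self._failed = True
            entry[2] = f"Fail: {exc}"
        finally:
            self._depth -= 1

    def report(self):
        for depth, name, status in self.records:
            print("  " * depth + f"{name}: {status}")

if __name__ == "__main__":
    log = StepLog()
    log.step("Open the app", lambda: None)

    def place_order():
        log.step("Add item to cart", lambda: None)
        log.step("Check out", lambda: 1 / 0)   # simulated defect in the SUT
        log.step("Confirm receipt", lambda: None)

    log.step("Place an order", place_order)
    log.step("Log out", lambda: None)
    log.report()

Because a failed step marks everything downstream “Blocked” rather than letting it run and fail for the wrong reason, one root cause yields exactly one “Fail,” which is one way hierarchical steps attack the flaky, noisy results that plague conventional automation.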
For quality results that are trustworthy, prioritized, fast, scalable, clear, and presentable around the whole team, try MetaAutomation!
The other posts on this blog clarify the MetaAutomation pattern language, but the topic is much too big for a blog post, so see also the definitive book on the topic here: http://www.amazon.com/MetaAutomation-Accelerating-Automation-Communication-Actionable/dp/0986270407/ref=sr_1_1