The phrase “Test Automation” is a historical accident, and
not a victimless one either.
“Automation” started in the 1950s with industrial automation: building things faster, better, and cheaper. It continues to grow today, with more industrial robots deployed every year.
DevOps is growing too, using automation to move software through the development process so it ships faster, better, and cheaper. DevOps is true to the meaning of “automation” because the focus is on the end product: shipped software.
“Test” is the traditional word for measuring software quality and finding issues (bugs). When people started driving the system under test (SUT) automatically to help with this, “test automation” was an obvious way to describe it, but it was, and is, a poor fit: unlike industrial automation or DevOps, the end product has zero value, because the software is an SUT (or, at the least, it is being driven with fake test users and fake test data); nobody cares about that product. So instead of the end product of the SUT, people focused on the pass-or-fail result of running a bit of “test automation.”
“Test automation” encourages the perception that what people in the manual testing role do, and what everyone involved in software development does at least part of the time, can be automated away. Wrong! People are smart, observant, flexible, and perceptive. Automated measurements of software quality can be very powerful for the business (as I describe below), but they can’t do what people can do.
It got worse with another historical “oops”: in 1979, Glenford Myers wrote what turned out to be a highly influential book on software testing, and in this book he insisted, repeatedly and emphatically, that the whole point of testing is to find bugs. This reinforced the perception that if “test automation” passed, nothing else matters; it didn’t find a bug, so we don’t care about any other details. Although in 1979 this was a fairly good approximation, the conceptual mistake remained and has gradually become much more significant over the 37 years since, and it will keep growing.
In 2016, software does some critically important things, from online banking through web portals to self-driving cars and passenger airplanes. Software flaws could ruin a person financially, or kill her, and the magnitude of software’s impact on our lives grows every year.
We can no longer afford to be distracted by the misleading phrase “test automation” or the idea that test is only about finding bugs. We must pay attention to how we drive the product and how it responds. We must know immediately, and in detail, about flaws, and even about cases where some part of the product is not measurable due to some other failure. We can’t afford to wait for some person in quality assurance (QA) to debug through a problem, especially not at night or across a geographically dispersed team. We can’t afford to keep dropping important, actionable quality information, both functional and performance information, on the floor, hoping that somebody on the team might dig it out later.
For software that matters, we must record that information, store it, and act on it where appropriate, with automation in real time. We must also make it relevant and queryable to anybody on the team who cares to look for information on functional and performance quality, including near-term events and long-term trends.
Replace “test automation” with quality automation. “Test automation” only really works for the QA team, and not very well at that. Quality automation avoids the misconceptions of “test automation” and, by focusing on what automation does and does well, delivers transparency and business value across the larger software team.
Log statements in the automation do add some value, but any structure in the actions or log statements is lost: the result is a list of loosely formatted statements that drops information, is not very queryable, and is not friendly to automated processing after the measurements are complete.
BDD and Gherkin offer a way of logging business-facing steps with check runs, but this requires a tool to be installed and distributed, and interpreted keywords to be implemented. Information on the technology-facing steps is lost. Keyword implementations tend to drift when reuse is required (I know; I have been there and done that), and that information is lost too, buried in the relative obscurity of the keyword implementation source code, never to be exposed outside QA.
MetaAutomation shows a better way, starting with the Atomic Check pattern: don’t interpret or implement keywords; instead, drive the product with self-documenting steps. Put the steps in a natural hierarchy, like top-down modelling in business process modelling, and have them document themselves in that hierarchy.
Now the business-facing steps, and every technology-facing step that drives the product, are self-documenting, in compact and highly queryable valid XML, in a hierarchy that reflects the process of driving the SUT for every check. Every node in the hierarchy records how many milliseconds that step took to complete.
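Here is a minimal sketch of what such self-documenting steps might look like, in C# to match the Visual Studio samples mentioned below. The Step helper and the element and attribute names are hypothetical illustrations of the idea, not the MetaAutomation sample API: each step times itself and writes an XML node, and nesting the calls produces the hierarchy.

using System;
using System.Diagnostics;
using System.Xml.Linq;

static class SelfDocumentingSteps
{
    // Hypothetical helper: runs nested steps, times itself, and returns
    // an XML node that documents the step and its children.
    static XElement Step(string name, Func<XElement[]> children)
    {
        var stopwatch = Stopwatch.StartNew();
        XElement[] childNodes = children();   // running the lambda runs the nested steps
        stopwatch.Stop();
        return new XElement("Step",
            new XAttribute("name", name),
            new XAttribute("milliseconds", stopwatch.ElapsedMilliseconds),
            childNodes);
    }

    static void Main()
    {
        // Business-facing step at the root; technology-facing steps nested below.
        // A real step would also drive the SUT before returning.
        XElement checkResult = Step("Log in as test user", () => new[]
        {
            Step("Navigate to login page", () => new XElement[0]),
            Step("Enter credentials and submit", () => new XElement[0]),
            Step("Verify landing page", () => new XElement[0])
        });
        Console.WriteLine(checkResult);   // compact, queryable, valid XML
    }
}

The point of the sketch: the documentation is a side effect of running the step, so it can never drift out of date the way keyword implementations or log statements can.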
No interpreter needed! No keywords to implement, no Gherkin language to adapt and learn! No third-party tools to install, deploy, or update!
This is just the start for quality automation, though: the detailed self-documentation of the checks supports the Smart Retry pattern. This is the answer to checks that can fail intermittently for a variety of reasons: now that a check result documents how the product was driven, in the context of driving the product (and, potentially, with additional instrumentation data placed in the structured data result of the check), the root cause is nicely recorded. An implementation of Smart Retry can then answer these questions, and take action, in real time (see the sketch after this list):
· Was the failure due to an external dependency?
· Should I retry the check, or not?
· Did I just reproduce the failure?
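A minimal sketch of how a Smart Retry implementation might consume the structured check result, continuing the hypothetical XML shape from the earlier example; the “status” attribute and the list of steps that touch external dependencies are assumptions for illustration.

using System;
using System.Linq;
using System.Xml.Linq;

static class SmartRetry
{
    // Assumed: these step names are known to touch external dependencies.
    static readonly string[] ExternalSteps = { "Call payment gateway", "Query partner API" };

    // Find the first failing step in the structured check result, if any.
    static XElement FailedStep(XElement checkResult)
    {
        return checkResult.Descendants("Step")
                          .FirstOrDefault(s => (string)s.Attribute("status") == "fail");
    }

    static void Main()
    {
        XElement firstRun = RunCheck();            // stand-in for running the atomic check
        XElement failed = FailedStep(firstRun);
        if (failed == null) return;                // the check passed; nothing to do

        string stepName = (string)failed.Attribute("name");

        // Was the failure due to an external dependency?
        if (ExternalSteps.Contains(stepName))
        {
            // Yes: a retry is justified. Did the retry reproduce the failure?
            XElement retryFailed = FailedStep(RunCheck());
            bool reproduced = retryFailed != null &&
                              (string)retryFailed.Attribute("name") == stepName;
            Console.WriteLine(reproduced
                ? "Reproduced failure at '" + stepName + "'; flag the dependency."
                : "Retry passed; record both results for trend analysis.");
        }
        else
        {
            // No: this looks like a product failure, so don't retry; escalate.
            Console.WriteLine("Product failure at '" + stepName + "'; escalating.");
        }
    }

    static XElement RunCheck()
    {
        // Stub: a real implementation would drive the SUT and return its
        // self-documenting result, as in the earlier sketch.
        return new XElement("Check",
            new XElement("Step",
                new XAttribute("name", "Call payment gateway"),
                new XAttribute("status", "fail")));
    }
}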
All the data is recorded. Flaky tests are a thing of the past: no need to mark tests as “untrustworthy,” and no need to interrupt an engineer’s workflow with an impromptu debug session.
Atomic Check also ensures that the data is there so that Automated Triage will work. Emails sent to long distribution lists to beg a group of busy engineers “will somebody please follow up on this? There might be a problem here” are a thing of the past. Email folders filled with what is commonly viewed as annoying spam are no more.
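To make that concrete, here is a brief, hypothetical sketch of the kind of routing the structured results enable: because the failing step is identified in queryable XML, a failure can be sent to exactly one owner instead of a whole distribution list. The owner mapping and addresses are invented for illustration.

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class AutomatedTriage
{
    // Assumed mapping from a failing step to the engineer or team that owns it.
    static readonly Dictionary<string, string> Owners = new Dictionary<string, string>
    {
        { "Enter credentials and submit", "identity-team@example.com" },
        { "Call payment gateway", "payments-team@example.com" }
    };

    // Route a failure to exactly one owner, based on which step failed.
    public static string RouteFailure(XElement checkResult)
    {
        XElement failed = checkResult.Descendants("Step")
                                     .FirstOrDefault(s => (string)s.Attribute("status") == "fail");
        if (failed == null)
            return null;   // the check passed; nobody needs an email

        string owner;
        return Owners.TryGetValue((string)failed.Attribute("name"), out owner)
            ? owner                              // notify one owner, with full context
            : "quality-oncall@example.com";      // unknown step: fall back to on-call
    }
}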
http://MetaAutomation.net has more information and two working
samples to illustrate:
The first is easy to set up, run, modify, and reuse, and runs across processes on one machine; a single check can span any number of processes.
The second is more work to set up, but will run checks
across any number of machines or VMs. Running a single check across multiple
machines is how one does end-to-end checks for the Internet of Things.
Both samples use a free version of Visual Studio 2015, and
come with instructions.
Recognize the “oops!!” of “test automation” and of “test is just about finding bugs.” Grieve for a moment (we’re all entitled to that!) and then move on to something much better.
Quality automation is inevitable. MetaAutomation describes a
language-independent and platform-independent way to get there. Why a pattern
language of six patterns? Because that’s by far the best way I can describe it.
Vastly greater productivity, transparency, and team
happiness await.