Tuesday, July 14, 2015

Automation Gets No Respect


The world of software automation for quality is riddled with failures. When the people automating the software under test (SUT) fail to create reliably-running tests, or the effort clearly takes more time than first estimated, management and the rest of the team lose confidence. Driving the SUT through scenarios is too often seen as a risky, low-value afterthought. After all, the developers can test the product themselves and learn the things they expected the test team to tell them anyway, but that the test team for some reason can’t reliably deliver.

In any case, the conventional approach to software automation for quality creates a losing situation for the people doing the work.

If they are told that the highest-value automation is end-to-end automation of the product, including the web page or GUI, they are likely doomed to write a system that creates many false positives, i.e., test failures that have nothing to do with product quality. Those failures in turn create more work for them, because each one demands a debug session just to discover whether there is an actionable piece of information for the rest of the team.

The broader team pays little attention to the results from the checks because they know:

1.       False positives are common, and if there really was a product bug, the authors of the check would discover that in a debug session and tell them.

2.       The checks can’t measure what they’re designed to measure, because they can’t possibly match the perception and smarts of a human testing the SUT directly.

With the correct focus on verifying and regressing the business requirements of the SUT, rather than on automating the SUT to make it do stuff, the false-positive problem and the what-is-the-check-verifying problem go away. I created MetaAutomation to describe the optimal approach to solving these problems, which creates many other benefits along the way:

·         The focus is on prioritized business requirements, not manual tests or scenarios

·         Checks run faster and scale better with resources

·         Check steps are detailed and self-documenting, with pass, fail, or blocked status recorded in the results (see the sketch after this list)

·         Check artifacts are pure data, to enable robust analysis of results both within and across check runs

·         The quality measurement results are transparent and available for sophisticated queries and flexible presentations across the whole team

·         Performance data is recorded in line with the check step results
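
To make the self-documenting, pure-data idea concrete, here is a minimal sketch in Python. It is only an illustration under my own assumptions, not code from the MetaAutomation book, and every name in it is hypothetical: a hierarchical check step records its name, its pass/fail/blocked status, and its elapsed milliseconds, and the whole artifact serializes to plain JSON.

    import json
    import time
    from contextlib import contextmanager

    # Hypothetical sketch of a self-documenting check artifact. Each step
    # records its name, status, and timing as pure data, so results can be
    # queried and compared across check runs.
    class CheckArtifact:
        def __init__(self):
            self.steps = []
            self._stack = [self.steps]  # tracks the current nesting level

        @contextmanager
        def step(self, name):
            record = {"name": name, "status": "Blocked", "ms": None, "steps": []}
            self._stack[-1].append(record)
            self._stack.append(record["steps"])
            start = time.perf_counter()
            try:
                yield
                record["status"] = "Pass"
            except AssertionError:
                record["status"] = "Fail"  # failure propagates up the hierarchy
                raise
            finally:
                # A non-assertion exception leaves the step "Blocked."
                record["ms"] = round((time.perf_counter() - start) * 1000, 1)
                self._stack.pop()

        def to_json(self):
            return json.dumps({"steps": self.steps}, indent=2)

    # Usage: nested steps document themselves, pass or fail, as pure data.
    artifact = CheckArtifact()
    try:
        with artifact.step("Verify greeting requirement"):
            with artifact.step("Start a session"):
                pass  # drive the SUT here
            with artifact.step("Check the greeting text"):
                assert "Welcome" in "Welcome, Pat"
    except AssertionError:
        pass  # the artifact has already recorded the failure
    print(artifact.to_json())  # pure data: query it, diff it, chart it

Because the artifact is plain data rather than a log narrative, results can be analyzed and compared across runs without a debug session.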

With MetaAutomation, the test and QA role can produce the speedy, comprehensive, detailed, and transparent quality information needed to ensure that functional quality always gets better.

If you want to give the test/QA role the importance and respect it deserves, try MetaAutomation!

Monday, July 13, 2015

The Word “Automation” Has Led Us Astray

If you’ve written automation for software quality, there’s a good chance you did it wrong. Don’t feel bad; we’ve all been doing it wrong. It’s not our fault.
We were led astray by the word “automation.”
Automation is about automatically driving human-accessible tools to accomplish specific tasks.
When people started programmatically driving their software system under test (SUT) for quality purposes, overloading the word “automation” seemed like the best way to describe the new practice, but the word actually applies poorly to software quality. “Automating” your SUT means making it do stuff, and that’s usually how management measures output. Quality verifications are added as an afterthought, and because they have little to do with the “automation” objective, they tend to be poorly planned and documented.
The word “automation” for software quality distracts people from the business value of the activity: measuring quality.
The misunderstanding that automation for software quality just does what humans do (i.e., manual testing), only faster and more often, creates business risk:
·         Unless you’re very clear and specific on what is being measured, the quality measure is incomplete and manual testers must verify it anyway.
·         Manual tests are designed by people to be run by people. They do not make ideal automated measurements because they tend to be long, complicated, and burdened with too many or too few verifications.
Automated verifications of services and APIs tend to be more effective, but by the definition above, this isn’t really “automation” either.
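To make the contrast concrete, here is a minimal sketch in Python of a service-level verification; the endpoint, fields, and values are invented for illustration, not taken from any real SUT. The check drives the SUT through its API and asserts directly on structured data, with no browser or GUI in the loop.

    import json
    from urllib.request import urlopen

    # Hypothetical service-level check: verify a business requirement
    # directly against the API's structured response.
    def check_order_service(base_url):
        with urlopen(base_url + "/api/orders/42") as response:
            assert response.status == 200, "unexpected HTTP status"
            order = json.load(response)
        assert order["id"] == 42
        assert order["status"] in {"pending", "shipped", "delivered"}

A check like this is fast and precise about what it measures, and far less prone to the false positives that plague GUI-level automation.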
At the core of the paradigm shift is an important dichotomy:
People are very good at
·         Finding bugs
·         Working around issues
·         Perceiving and judging quality
But, they’re poor at
·         Quickly and reliably repeating steps many times
·         Making accurate measurements
·         Keeping track of details
Computers driving the SUT are very good at
·         Quickly and reliably repeating steps many times
·         Keeping track of details
·         Making accurate measurements
But, computers are poor at
·         Perceiving and judging quality
·         Working around issues
·         Finding bugs
Conventional automation for software quality misses these distinctions, and therefore comes with significant opportunity costs.
To create software that matters, and to be effective and efficient at measuring its quality, your team must move away from conventional, misguided “automation” and toward a more value-oriented paradigm. To describe this value, I created MetaAutomation.
MetaAutomation is a pattern language of five patterns. It’s a guide to measuring and regressing software quality quickly, reliably, and scalably, with a focus on business requirements for the SUT. The “Meta” addresses what “automation” by itself misses: the big-picture reasons for the effort, the nature of what automated actions can do, the post-measurement action items, and efficient, robust communication.
MetaAutomation shows how to maximize quality measurement, knowledge, communication, and productivity. Imagine, for example, self-documenting hierarchical steps automatically tracked and marked with “Pass,” “Fail,” or “Blocked.” Imagine a robust solution to the flaky-test problem!
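As a minimal, hypothetical illustration of one way to treat flakiness as data (a sketch under my own assumptions, not the book’s implementation), this Python function retries a failed check and records every attempt, so that over many runs the team can distinguish a flaky check from a genuine product regression:

    import time

    # Hypothetical sketch: retry a failed check, but record every attempt
    # rather than silently masking the first failure.
    def run_with_retry(check, name, max_attempts=2):
        attempts = []
        for n in range(1, max_attempts + 1):
            start = time.perf_counter()
            try:
                check()
                status = "Pass"
            except AssertionError:
                status = "Fail"
            attempts.append({
                "check": name,
                "attempt": n,
                "status": status,
                "ms": round((time.perf_counter() - start) * 1000, 1),
            })
            if status == "Pass":
                break
        return attempts  # pure data: persist it and query it across runs

A check that fails once and then passes shows up as exactly that in the data, instead of as noise that erodes the team’s trust.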
For trustworthy, prioritized, fast, scalable quality results that are clear and presentable across the whole team, try MetaAutomation!
The other posts on this blog clarify the MetaAutomation pattern language, but the topic is much too big for a blog post, so see also the definitive book on the topic here: http://www.amazon.com/MetaAutomation-Accelerating-Automation-Communication-Actionable/dp/0986270407/ref=sr_1_1