Tuesday, August 4, 2015
Stronger Quality with MetaAutomation: Part 1 of 3, Fast Quality
Manual testing and programmatically driven quality measurements are both very important for software quality; however, they are each good at very different things.
Please notice that I’m avoiding the word “automation” here; there’s good reason for that. I write more on this topic here http://metaautomation.blogspot.com/2015/07/the-word-automation-has-led-us-astray.html.
Freedom from the word “automation,” and liberation from the assumption that programmatically driven quality measurements should be like manual testing in any way, have similar benefits: they open up new frontiers in productivity for managing quality and risk on your software project.
Imagine focusing on prioritized business requirements, at the software layer closest (if at all possible) to where those business items are implemented. Writing just one check – that is, one programmed verification – per business requirement makes for simpler, faster checks.
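As a minimal sketch of the idea (my own illustration, not from the MetaAutomation book – the `Cart` class and requirement are hypothetical stand-ins for your SUT), one check targets exactly one business requirement with a single target verification:

```python
# Hypothetical stand-in for the system under test (SUT).
class Cart:
    def __init__(self):
        self._items = {}

    def add(self, sku, qty):
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())


def check_add_item_increases_count():
    """Business requirement: adding an item increases the cart count.
    One check, one target verification; the setup steps exist only
    to get the SUT to the point of that verification."""
    cart = Cart()            # setup step
    cart.add("SKU-1", 2)     # setup step
    assert cart.total_items() == 2   # the single target verification


check_add_item_increases_count()
print("check passed")
```

Because the check verifies one thing at the layer where it is implemented, a failure points directly at the requirement at risk.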
This is one of the reasons I created MetaAutomation: to highlight more effective techniques to programmatically measure and report on the quality of the SUT.
The least-dependent pattern of the MetaAutomation pattern language is Atomic Check. This pattern shows specific techniques for creating a focused verification, or verification cluster, on an item of business logic. There are no verifications other than the target verification – the target of the check – plus whatever check steps are needed to get to that target.
Simple, focused checks run faster and contain fewer points of failure. Atomic Check also describes how to make the check independent of all other checks, so that running your set of checks will scale across resources and your selected set of checks can be as fast as you need them to be.
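To illustrate the independence property (again a sketch of my own, assuming hypothetical checks – not code from the pattern itself): because each check creates its own state and depends on no other check, any selected subset can be distributed across workers in any order.

```python
from concurrent.futures import ThreadPoolExecutor

# Each check builds its own fixture and shares no state with the others,
# so they can run in parallel or in any order.

def check_a():
    data = [1, 2, 3]                 # check_a's own fixture
    assert sum(data) == 6            # target verification
    return "check_a: pass"

def check_b():
    name = "meta" + "automation"     # check_b's own fixture
    assert name == "metaautomation"  # target verification
    return "check_b: pass"


checks = [check_a, check_b]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda check: check(), checks))
print(results)
```

Scaling the run is then just a matter of adding workers or machines; no check ever waits on another check's side effects.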
Atomic Check also creates an artifact of the check run in strongly-typed and pure data, e.g., a grammar of XML. This has many benefits, including that it enables another very useful pattern, called Smart Retry, which I’ll write more about in part 2, and addresses visibility, quality analysis, and SOX compliance, which I’ll discuss in part 3.
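To give a flavor of such an artifact (the element and attribute names below are my own invention, not the actual MetaAutomation grammar), each check step can be recorded as structured data rather than free-form log text:

```python
import xml.etree.ElementTree as ET

# Record the check run as pure, structured data: one element per step,
# with timing, rather than prose in a log file.
root = ET.Element("check", name="AddItemToCart", result="pass")
for step_name, ms in [("LaunchSut", 40), ("AddItem", 12), ("VerifyCount", 3)]:
    ET.SubElement(root, "step", name=step_name, milliseconds=str(ms))

artifact = ET.tostring(root, encoding="unicode")
print(artifact)
```

Because the artifact is pure data with a known grammar, tools can parse it reliably – which is what makes follow-on patterns like Smart Retry possible.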