Thursday, August 6, 2015

Stronger Quality with MetaAutomation: Part 3 of 3, Quality through the SDLC


Good, efficient communication is an important asset to the team. MetaAutomation shows how to achieve this over the short and long term.

Imagine a test/QA team that creates a quality data store for the whole team with these properties:

·         Compact records

·         Validated data structures

·         Performance data in-line with check steps and results

·         Check steps marked pass, fail, or blocked

For example, from an intranet site, any team member can query and analyze the data store directly. The pure, structured data shows exactly which checks were run and when, how long each executed step took in milliseconds, and how long the check took as a whole, even if the check itself was distributed across machines or tiers for multi-tier steps or verifications.
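
As a rough sketch of the kind of analysis any team member could run against such a store, the Python below aggregates per-step timings across a directory of check artifacts. The element and attribute names (CheckRun, Step, name, status, milliseconds) and the "artifacts" directory are hypothetical stand-ins for illustration, not the actual MetaAutomation schema.

import glob
import xml.etree.ElementTree as ET
from collections import defaultdict

def step_timings(artifact_dir):
    """Aggregate per-step timings (ms) across all stored check artifacts."""
    timings = defaultdict(list)
    for path in glob.glob(artifact_dir + "/*.xml"):
        root = ET.parse(path).getroot()      # one <CheckRun> element per artifact
        for step in root.iter("Step"):       # steps form a hierarchy under the root
            timings[step.get("name")].append(int(step.get("milliseconds", 0)))
    return timings

for name, ms in sorted(step_timings("artifacts").items()):
    print(name, "runs:", len(ms), "avg ms:", round(sum(ms) / len(ms)), "max ms:", max(ms))

A query like this answers "which steps are slowest, and how variable are they?" directly from the pure data, with no log-scraping.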

This level of quality detail starts when the system is up and running, and extends throughout the SDLC or through all iterations, depending on the process the team uses to develop and ship their product. Since the check steps are self-documenting, the steps laid out in the check artifact are as stable as the code that runs the check.

With focused, pure, and structured data on product quality, including all of the self-documenting steps of a given check, the whole team knows exactly what is working and what the verifications are. Trust and communication improve greatly, between geographically-distributed teams and between the test/QA team, developers, program managers, and leadership.

The quality of the company’s software assets, i.e., the product under development, is clearly expressed in great detail. This helps with SOX compliance as well, so the managers and investors are happy.

The test/QA team gets the exposure and respect it deserves.

Successes and failures against the business requirements of the product are archived in great detail over time, so improvement in product quality is evident. Product owners and leaders have a powerful asset to help them manage risk, and they know that failures in core behaviors of the product are fixed quickly, both in theory and in practice. It’s all there in the quality record.

By addressing the business value of programmatically-driven verifications of software product quality, for developing software that matters, MetaAutomation radically increases the value of the quality role to the business. Open-source software will be available this summer to demonstrate scalable, distributed checks and show how to do the same for your team.

Wednesday, August 5, 2015

Stronger Quality with MetaAutomation: Part 2 of 3, Handling a Check Failure


What happens on your team when a check (what some call “automated test”) fails?

If you follow the common practice of automating manual test cases, then the authors of the automation are expected to follow up and diagnose the problem. Usually the failure is in the automation itself; everybody on the team knows that, so the follow-up doesn’t get much priority or respect. Even when it does, resolving the failure is time-consuming and labor-intensive.

Alternatively, the author of the automation watches the automation run to see whether something goes wrong. That shortens the communication chain, but it’s very time-consuming, expensive, and doesn’t scale at all.

Regression tests or checks that are effective at managing quality risk must be capable of sending action items outside the test/QA team quickly. False positives, i.e., messages on quality issues that ultimately turn out not to concern the product at all, are wasteful and corrode trust in the test/QA team. Therefore, quality communications must be quick and trustworthy for test/QA to be effective.

On check failure, people can look at flight-recorder logs leading up to the point of failure, but logs tend to be uneven in quality, verbose, and not workable for automated parsing. A person has to study the logs for them to have any value, so the onus is on test/QA again to follow up. Bottom-up testing, i.e., testing at the service or API layer, helps, but the problem of uneven log quality remains. Mixing presentation with the data, e.g., English grammar or HTML, bloats the logs.

Imagine, instead, an artifact of pure structured data, dense and succinct, whether the check passes or not. Steps are self-documenting in a hierarchy that reflects the code, whether they pass, fail, or are blocked by an earlier failure.

MetaAutomation puts all of this information in efficient, pure data with a schema, even if the check needs to be run across multiple machines or application layers.
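
The Python sketch below shows one way such an artifact could be produced: self-documenting steps recorded as a hierarchy, each marked pass, fail, or blocked, with per-step timing in milliseconds. The class, method, and element names here are invented for illustration; this is not the MetaAutomation implementation.

import time
import xml.etree.ElementTree as ET

class CheckArtifact:
    """Records self-documenting, hierarchical check steps as pure data."""

    def __init__(self, check_name):
        self.root = ET.Element("CheckRun", name=check_name)
        self._stack = [self.root]
        self._blocked = False

    def step(self, name, action):
        """Run a named step (a callable); nested calls build the hierarchy."""
        elem = ET.SubElement(self._stack[-1], "Step", name=name)
        if self._blocked:
            elem.set("status", "blocked")    # an earlier failure blocks this step
            return
        self._stack.append(elem)
        start = time.monotonic()
        try:
            action()
            # a failure inside a nested step also marks this enclosing step failed
            elem.set("status", "fail" if self._blocked else "pass")
        except Exception as exc:
            elem.set("status", "fail")
            ET.SubElement(elem, "Failure").text = repr(exc)
            self._blocked = True             # later steps will be marked blocked
        finally:
            elem.set("milliseconds", str(int((time.monotonic() - start) * 1000)))
            self._stack.pop()

    def to_xml(self):
        return ET.tostring(self.root, encoding="unicode")

# Hypothetical usage, where post_credentials and verify_order_total are the
# team's own step implementations:
#   artifact = CheckArtifact("PlaceOrder")
#   artifact.step("Authenticate", post_credentials)
#   artifact.step("VerifyOrderTotal", verify_order_total)   # the target verification
#   print(artifact.to_xml())

Because the artifact is the same whether the check passes or fails, a failing run is just as queryable as a passing one.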

A failed check can be retried immediately, and on a second failure, the second result compared in detail with the first. Transient failures are filtered out, and persistent failures are reproduced. Automated analysis can determine whether the failure is internal or external to the project, and can even identify a responsible developer in the product or test role as needed.

If so configured, a product developer would receive an email if a) the exact failure was reproduced, and b) the check step, stack trace, and any other data added by check code indicate ownership.
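
In that spirit, here is a minimal Python sketch of the retry-and-compare logic (the Smart Retry idea as described above, not its actual implementation). run_check, find_owner, and send_email are hypothetical stand-ins for a team’s own infrastructure, and failure comparison is reduced here to matching failed step names and failure text from the artifact.

import xml.etree.ElementTree as ET

def failure_signature(artifact_xml):
    """Reduce a check artifact to its failed steps, in order, for comparison."""
    root = ET.fromstring(artifact_xml)
    return [(step.get("name"), step.findtext("Failure"))
            for step in root.iter("Step") if step.get("status") == "fail"]

def smart_retry(run_check, find_owner, send_email):
    first = run_check()                      # returns the artifact XML for one run
    if not failure_signature(first):
        return "pass"
    second = run_check()                     # retry the failed check immediately
    if not failure_signature(second):
        return "transient"                   # no action item leaves test/QA
    if failure_signature(first) == failure_signature(second):
        owner = find_owner(second)           # e.g., from step names and stack trace
        send_email(owner, second)            # reproduced failure: route to the owner
        return "reproduced"
    return "inconsistent"                    # a different failure; needs a human look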

Atomic Check shows how to run end-to-end regression tests so fast and reliably that they can be run as check-in gates in large numbers. Check failures are recorded in such detail that the probability a problem needs to be reproduced is small.

With MetaAutomation, communications outside test/QA are both quick and trustworthy. See parts 1 and 3 of this series for more information.

Tuesday, August 4, 2015

Stronger Quality with MetaAutomation: Part 1 of 3, Fast Quality


Manual testing and programmatically-driven quality measurements are both very important for software quality; however, they are good at very different things.

Please notice that I’m avoiding the word “automation” here; there’s good reason for that. I write more on this topic here: http://metaautomation.blogspot.com/2015/07/the-word-automation-has-led-us-astray.html.

Freedom from the word “automation,” and liberation from the assumption that programmatically-driven quality measurements should be like manual testing in any way, have similar benefits: they open up new frontiers in productivity for managing quality and risk for your software project.

Imagine focusing on prioritized business requirements, at the software layer closest (if at all possible) to where those business items are implemented. Writing just one check – that is, one programmed verification – per business requirement makes for simpler, faster checks.

This is one of the reasons I created MetaAutomation: to highlight more effective techniques to programmatically measure and report on the quality of the system under test (SUT).

The least-dependent pattern of the MetaAutomation pattern language is Atomic Check. This pattern shows specific techniques for creating a focused verification, or verification cluster, on an item of business logic. There are no verifications other than the target verification – the reason the check exists – plus whatever check steps are needed to reach that target.

Simple, focused checks run faster and contain fewer points of failure. Atomic Check also describes how to make each check independent of all other checks, so that running your set of checks scales across resources and your selected set of checks can be as fast as you need it to be.
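
To make the pattern concrete, here is a minimal Python sketch of what a check in the spirit of Atomic Check might look like at the API layer: a couple of steps to reach the target, exactly one target verification, and no dependence on any other check. The service URL, endpoints, and payloads are made up for illustration.

import json
import urllib.request

BASE = "https://example.test/api"            # hypothetical service under test

def atomic_check_place_order():
    """Business requirement: a signed-in customer can place an order."""
    # Step: authenticate, only because the target requires it (not verified itself)
    req = urllib.request.Request(BASE + "/login",
                                 data=b'{"user":"qa","pw":"secret"}',
                                 headers={"Content-Type": "application/json"})
    token = json.load(urllib.request.urlopen(req))["token"]

    # Step: place the order, the behavior that carries the business value
    req = urllib.request.Request(BASE + "/orders",
                                 data=b'{"sku":"A1","qty":1}',
                                 headers={"Authorization": "Bearer " + token,
                                          "Content-Type": "application/json"})
    order = json.load(urllib.request.urlopen(req))

    # The one target verification; nothing else is asserted in this check
    assert order["status"] == "accepted", "order not accepted: " + repr(order)

atomic_check_place_order()

Because the check creates and uses only its own data, it can run in parallel with any number of other checks on whatever resources are available.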

Atomic Check also creates an artifact of the check run in strongly-typed, pure data, e.g., XML that conforms to a grammar. This has many benefits: it enables another very useful pattern, called Smart Retry, which I’ll write more about in part 2, and it addresses visibility, quality analysis, and SOX compliance, which I’ll discuss in part 3.
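
Because the artifact is pure data with a grammar, validating it takes only a few lines with an ordinary XML toolchain. Here is a sketch using the third-party lxml library (my choice for illustration, not something MetaAutomation prescribes), with hypothetical file names:

from lxml import etree

schema = etree.XMLSchema(etree.parse("check_artifact.xsd"))   # the grammar
artifact = etree.parse("artifact.xml")                        # one check run
print("valid:", schema.validate(artifact))                    # details in schema.error_log

Any artifact that does not conform to the grammar is caught immediately, which keeps the quality data store trustworthy for the retry and analysis scenarios described in parts 2 and 3.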