MetaAutomation starts with making automation failures actionable, which maximizes the value of automation results, and continues by automating triage. MetaAutomation reduces the cost of maintaining existing automation and ensures that automation helps your quality measurement and improvement, rather than hindering it.
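To make the idea of "actionable failures" concrete, here is a minimal sketch, not any official MetaAutomation library: the class name `StepRecorder` and its structure are illustrative assumptions. The point is that when each check runs as a named step and failures are captured as structured data, a failure report pinpoints exactly which step broke, and the machine-readable artifact is what makes automated triage possible.

```python
import json
import traceback

class StepRecorder:
    """Record named test steps so a failure pinpoints the exact step.

    Hypothetical sketch for illustration only; names and structure are
    assumptions, not part of any published MetaAutomation API.
    """
    def __init__(self):
        self.steps = []

    def step(self, name, action):
        record = {"step": name, "status": "pass"}
        try:
            result = action()
            self.steps.append(record)
            return result
        except Exception as exc:
            # Capture enough context that triage needs no re-run or debugger.
            record["status"] = "fail"
            record["error"] = str(exc)
            record["trace"] = traceback.format_exc()
            self.steps.append(record)
            raise

    def artifact(self):
        # Machine-readable result: a tool (not a person) can bucket failures.
        return json.dumps({"steps": self.steps}, indent=2)

# Usage: each check is a named step; the artifact shows where it failed.
rec = StepRecorder()
try:
    rec.step("open session", lambda: "session")
    rec.step("validate response", lambda: 1 / 0)  # simulated failure
except ZeroDivisionError:
    pass
print(rec.artifact())
```

Because the artifact is structured data rather than a log message, a triage job can group failures by step name and error type before a person ever looks at them.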
Given that the quality bar for software is getting higher (OK, except maybe for some social media applications and games about exploding birds) and our reliance on IT is increasing, we really need to create a reasonably complete picture of product quality before releasing it to the end user. Actually, we must do it even earlier than that: to manage risk, we need some handle on quality while developing the product.
Quality is defined with input from the customer (“I want the software to do this…”), but quality is also complex and difficult, and it wouldn’t work to lean on the customer for every aspect of it, especially non-functional quality. So the team has to define quality in all its gory detail: the product owner, developers, and test/QA people collaborate to flesh out a quality document or documents. The definition of quality can’t be 100% complete; it will (or should!) evolve before product release as the product develops and the market context changes, and it must include things like scale, performance, failure scenarios, security issues, discoverability, and other concerns the end user can’t be bothered with.
Notice that “some person that matters” is still, in the end, the end-user customer, but the software development team members are now proxies for that customer. In agile methodology, the product owner (PO) is formally the “voice of the customer,” but really, all team members take on variations of that role in different ways. Robust quality requires leadership from the PO, and it requires contributions from the other roles as well.
Test isn’t just about finding bugs. It has to measure quality throughout the whole process, and bugs are just part of the total quality picture. The rest of the team depends on this. Shipping without a complete picture of quality creates a high risk of the end user finding a high-impact bug (i.e., a bug the team would never have let out the door had they known about it), which could be damaging to the business.
For a high-impact application – say, medical or financial software – if the measurement of quality at any point in the SDLC isn’t reasonably complete, the extent of that incompleteness represents unacceptable risk and is comparable to a lack of quality itself. The risk from unmeasured quality can grow during the SDLC due to runtime dependencies or design dependencies.
A quick takeaway: test early and often, and measure Test productivity by the breadth and detail of the quality picture, not just by the number of bugs found or fixed!
Friday: this post starts a series on MetaAutomation.