Wednesday, September 14, 2011


How does one define “quality”?

I like the definition used by Adam Goucher, Jerry Weinberg and others: “Quality is value to some person that matters.” On the other hand, this definition needs more detail to be meaningful for complex or high-impact software projects.

Given that the quality bar for software is getting higher (OK, except maybe for some social media applications and games about exploding birds) and our reliance on IT is increasing, we really need to create a reasonably complete picture of product quality before releasing it to the end-user. Actually, we must do it even earlier than that: to manage risk, we need some handle on quality while developing the product.

Quality is defined with input from the customer (“I want the software to do this…”), but it’s also complex and difficult, and it wouldn’t work to lean on the customer for every aspect of what constitutes quality, especially non-functional quality. So the team has to define quality in all its gory detail: the product owner, developers, and test/QA people collaborate to flesh out a quality document or documents. The definition of quality can’t be 100% complete; it will (or should!) evolve before product release as the product develops and the market context changes, and it must include things like scale, performance, failure scenarios, security issues, discoverability, and other concerns the end-user can’t be bothered with.

Notice that “some person that matters” is still, in the end, the end-user customer, but the software development team members now act as proxies for that customer. In agile methodology, the product owner (PO) is formally the “voice of the customer,” but in practice, all team members take on variations of that role in different ways. Robust quality requires leadership from the PO, and it requires contributions from the other roles as well.

Test isn’t just about finding bugs. It has to measure quality through the whole process; bugs are just part of the total quality picture. The rest of the team depends on this. Shipping without a complete picture of quality creates a high risk of the end-user finding a high-impact bug (i.e., a bug that the team would never have let out the door had they known about it), which could be damaging to the business.

For a high-impact application – say, medical or financial software – if the measurement of quality at any point in the SDLC isn’t reasonably complete, the extent of the incompleteness represents unacceptable risk and amounts to a lack of quality. The risk from unmeasured quality can grow during the SDLC due to runtime dependencies or design dependencies.

A quick takeaway: test early and often, and measure Test productivity by the breadth and detail of the quality picture, not just by the number of bugs found or fixed!

Tomorrow: Test
Friday: the start of a series of posts on MetaAutomation


  1. Hello,

    What do you mean with "measure quality"?
    What measures does that include?
    How do you measure quality?

    I glanced through your blog but could not find a description of this.

    1. Software quality is a very open-ended characterization of the product.

      Quality is value to some person (from Jerry Weinberg), or I could put it like this: quality is how well the product meets customer expectations, without any rude surprises or deceptions. For example, you could download some free product bundled with a malware Trojan and be happy with the product, but if your identity gets stolen, that impacts the quality of the product even if you're not aware of it right away.

      How to measure quality? It must be done in many ways, but I'm focused on automated verifications, because these can be done much more effectively than with conventional practices.

    2. This comment has been removed by the author.

    3. Thx for the reply.
      (Wanted to edit a typo but had to delete the reply)

      Yes, software quality is very open-ended. That is why I am curious about what you mean, and how you do it.

      Can you give me an example of your quality measurements?

    4. It's important to measure quality in many ways, but one measure that's very powerful – my focus, and the reason for MetaAutomation – is automated verifications.

      Suppose you're creating a web site for a credit union. One of the business requirements is that the customer (end-user) can make a transfer from one account to another, and that the resulting balances be reported correctly on the site.

      So, using the Atomic Check pattern of MetaAutomation, you'd automate all the preliminary steps (log in, select balance transfer, enter the amount, etc.) and then do the target verification – or, in this case, probably a target verification cluster: both balances, origin and destination, are reported correctly. That's your atomic check.
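      As a rough illustration of that structure, here is a minimal sketch in Python. Everything here is a stand-in assumption – `BankSite`, `log_in`, `transfer`, and `reported_balance` are hypothetical names, and the in-memory account dictionary replaces a real credit union site – but it shows the shape of an atomic check: preliminary steps first, then the target verification cluster at the end.

      ```python
      # Hypothetical sketch of an Atomic Check. The class and method names
      # are illustrative assumptions, not a real site or framework API.

      class BankSite:
          """Minimal in-memory stand-in for the credit union web site."""

          def __init__(self):
              self.accounts = {"checking": 500.00, "savings": 100.00}
              self.logged_in = False

          def log_in(self, user, password):
              # A failure here is a setup failure, not the check failing.
              self.logged_in = True

          def transfer(self, origin, destination, amount):
              assert self.logged_in, "must log in before transferring"
              self.accounts[origin] -= amount
              self.accounts[destination] += amount

          def reported_balance(self, account):
              return self.accounts[account]


      def atomic_check_transfer():
          site = BankSite()

          # Preliminary steps: get the product into position for the
          # target verification (log in, perform the transfer).
          site.log_in("alice", "secret")
          site.transfer("checking", "savings", 50.00)

          # Target verification cluster: both resulting balances,
          # origin and destination, must be reported correctly.
          assert site.reported_balance("checking") == 450.00
          assert site.reported_balance("savings") == 150.00
          return True
      ```

      The point of the structure is that the check verifies exactly one business requirement at the end; everything before the final assertions is just getting the product into position.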

      For more information, see my blog or book.

    5. Yes, I will start with your blog. One last question:
      Am I totally off in assuming that your use of the term "measure quality" is synonymous with "testing"?
