Tuesday, April 30, 2013

The Software Quality Process, part 1 of 3: Creating docs and bugs


This is post 1 of a series on software QA and Test, seen from the process perspective. Links to parts 2 and 3 will be added here as I finish writing and posting those parts. I use the term “Test” with a capital T to mean the test and QA org, the person or people responsible for measuring and communicating quality.

Software is about telling computers what to do with information. The scope of these posts is the pure information part of that, so I'm skipping over hardware-related issues, but the methods described here could be applied to hardware-plus-software systems as well, e.g. in the mobile-device business.

Early in the software development life cycle (SDLC), Test needs to be involved: to listen and learn, but also to influence. Important questions for Test to address include: Is the product testable? Can the team arrive at a good-enough measure of quality soon enough to ship? Where are the biggest risks? Where are the likely customer pain points, and can they be mitigated with design decisions made early in the SDLC?

One product of these meetings is the Test Plan. The Test Plan needs to include either links to a readable, maintainable, high-level view of the product (probably graphical, for efficiency) or a high-level product description itself – but not both! The goal is to have enough information that a new team member can quickly figure out what's going on without being a burden, and that people have something to refer to quickly, while minimizing duplication of information, staying as agile as makes sense for the product space, and not spending too much time documenting or making pretty diagrams.

The Test Plan would continue with a strategy for characterizing the quality of the product with sufficient confidence that it becomes OK to ship. There's much more to this than just finding bugs: the Test Plan must address ALL aspects of the product that can affect quality in any way, including security, integration, installation, deployment, scale, stress, usability, discoverability, and so on. More on this "characterizing quality" theme in part 3 of this series.

The Test Plan should contain scenarios to describe the product from the perspective of the end-user, or the client business, or services on which it depends, etc. It could contain test cases, too, but a more agile approach is to have the test cases exist as self-documenting entities in the source code. Modern build systems can generate simple docs from the code that are accessible to decision makers on the team even if they don’t have (or choose to build for themselves) access to the actual Test source code.
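
Here's a minimal sketch of what self-documenting test cases might look like, assuming a Python test suite; the product, class, and method names are hypothetical. The scenario description lives in the docstrings, which a doc-generation step in the build can publish for the rest of the team.

```python
# A minimal sketch of self-documenting test cases, assuming a Python test
# suite. The product name, class, and methods are hypothetical. The scenario
# description lives in the docstrings, so a doc-generation step in the build
# (pydoc, Sphinx, or a small script) can publish it for decision makers
# without requiring access to the Test source code.
import unittest


class CheckoutScenarioTests(unittest.TestCase):
    """End-to-end scenarios for a hypothetical checkout service."""

    def test_guest_checkout_with_valid_card(self):
        """A guest user can complete checkout with a valid credit card."""
        # ...drive the product and verify the result here...
        self.assertTrue(True)  # placeholder assertion for the sketch

    def test_checkout_rejects_expired_card(self):
        """Checkout is rejected, with a clear message, for an expired card."""
        self.assertTrue(True)  # placeholder assertion for the sketch


if __name__ == "__main__":
    unittest.main()
```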

The Test Plan is generally the first important product from Test for communicating to the rest of the software team what Test is up to, and it provides a framework for the characterization of product quality that I'll address in part 3. The rest of this post is about bugs…

Bugs are created as necessary to communicate quality issues around the team. Here are some qualities of good bugs:

·         The title is succinct, descriptive, and includes keywords (to be searchable)

·         The bug is atomic, so it addresses just one fix in one code change or change set

·         The bug is clear and has enough detail for the intended audiences, primarily program managers and developers, but also other people in Test and executives

·         The bug has screenshots if they help make the bug understandable, e.g.

o   A screenshot of a GUI as seen by product end-user

o   A screenshot of a section of an XML document, formatted and colorized per the IDE used in the developer team, if the bug relates to that XML

o   A screenshot of some specific code as seen in the IDE

·         Links to related, dependent bugs, or bugs on which this bug depends

·         Links to test cases and/or test case failures if those are maintained in the same database

·         Links to documents

That could potentially add up to a lot of work from Test, and very big and detailed bugs. Watch out for too much detail, though; one risk of creating these bugs is that a bug is too specific, when the problem it reports is really part of a larger problem. Addressing the bug as a sub-problem while the bigger issue goes unrecognized risks losing track of that issue, which could create rework and/or product quality risk.

Bugs are work items that have state, e.g.

·         New

·         Active

·         In progress

·         Fixed

·         Postponed

·         Duplicate

·         Won’t fix

·         Closed

Bugs change state as they "bounce" around the group, handled by team members asynchronously, i.e. when the time is most efficient for each person. They also create a searchable record of product quality issues, a record of which areas of the product Test has been working on, and they inform the decision of when to ship the product.
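
As a minimal sketch, the states and qualities listed above might be modeled like this; Python is used here for illustration, and the class and field names are hypothetical, not any particular bug tracker's schema.

```python
# A minimal sketch of a bug as a work item with state. The state names come
# from the list above; the class and field names are hypothetical and not
# any particular bug tracker's schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class BugState(Enum):
    NEW = "New"
    ACTIVE = "Active"
    IN_PROGRESS = "In progress"
    FIXED = "Fixed"
    POSTPONED = "Postponed"
    DUPLICATE = "Duplicate"
    WONT_FIX = "Won't fix"
    CLOSED = "Closed"


@dataclass
class Bug:
    title: str                              # succinct, descriptive, searchable
    description: str                        # enough detail for PMs, devs, Test, execs
    state: BugState = BugState.NEW
    screenshot_paths: List[str] = field(default_factory=list)
    related_bug_ids: List[int] = field(default_factory=list)
    test_case_links: List[str] = field(default_factory=list)
    document_links: List[str] = field(default_factory=list)
```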

The product will ship with unfixed bugs! (If not, it hasn’t been tested, and it probably shouldn’t ship at all.) This will be addressed in the following posts.

There are two more posts to this series:

The Software Quality Process, part 2 of 3: Triaging bugs, and bug flow
http://metaautomation.blogspot.com/2013/05/the-software-quality-process-part-2-of.html
The Software Quality Process, part 3 of 3: Quality Characterization, Bugs, and When to Ship
http://metaautomation.blogspot.com/2013/05/the-software-quality-process-part-3-of.html
 

Wednesday, April 10, 2013

For Your Quality Customers, Add Value with Every Change


Who are your customers?

Of course, you're developing software for the end user. But the other members of your team are the first customers.

I’ve written about many techniques of advanced software quality that can reduce risk and strengthen your quality story. This is how the Test team can best add value and make the Devs more productive, meaning that you can ship faster and with lower risk!

To inspire trust and reduce risk to the team, every change set that goes into the product must add quality value, that is, information about the quality of the product.

The problem is, very few software projects are starting anew. Most have some quality infrastructure, maybe some copy/pasted scripts, maybe a set of test cases that are run manually. The team members are customers of this existing infrastructure.

So, existing quality assets such as these must be maintained or replaced. With every change to documents or code, value is added, never taken away.

This is important for the same reason that failures in test code, or failures that are perceived as being due to the tests, must be fixed ASAP: the quality knowledge of the product must always advance and improve. If it does not advance, because of dropped coverage from "old" test infrastructure or tests that fail so often they're perceived as not worth fixing, then parts of the product are no longer tested, and knowledge and stability of the product are lost. This kind of project rot must be avoided.

Every change and every addition to the product quality infrastructure, no matter how sophisticated, agile, actionable, self-reporting, etc., must add to existing knowledge of the product.
This makes a strong and productive team: mutual respect, and attention to keeping quality moving forward.

Tuesday, April 9, 2013

An Organization and Structure for Data-Driven Testing


This post follows up on yesterday's post, The Power of Data-Driven Testing.


So, data-driven testing is the way to go for a huge return on finding and regressing product issues and measuring the whole quality picture. How to start?

I like XML files to do this. Here are some reasons:

1.       Your favorite text editor will work for reviewing, editing, and extending your test set.

2.       If given values for a test are optional and you provide defaults as needed, the XML can be “sparse” and even easier to read and edit. The data that drives the test also expresses the focus and reason for the test, in the data itself!

3.       You can be as constrained or as loose with the data schema (the layout of the XML data) as you want.

4.       Extending your data engine can be as simple as allowing and parsing different values. For example, for testing with objects that include pointers or references, you can put “null” as a value in your XML and have your engine parse and use that for the test, in the context as defined in the XML.

There are many engines that help with data-driven tests, or with some time and skill, you can write your own.
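
As a minimal sketch of the roll-your-own option, here is a tiny XML-driven runner in Python; the schema, attribute names, and the toy addition "product" are all hypothetical. It applies defaults for missing attributes (point 2 above) and translates the literal "null" (point 4 above).

```python
# A minimal sketch of a hand-rolled, XML-driven test engine in Python. The
# schema, attribute names, and the toy addition "product" are hypothetical.
# Missing attributes fall back to defaults (point 2 above), and the literal
# string "null" is translated to None (point 4 above).
import xml.etree.ElementTree as ET

TEST_DATA = """
<tests>
  <test name="add_small_numbers" a="2" b="3" expected="5"/>
  <test name="add_with_default_b" a="7" expected="7"/> <!-- b defaults to 0 -->
  <test name="reject_null_input" a="null" b="1" expected="error"/>
</tests>
"""

DEFAULTS = {"a": "0", "b": "0"}


def parse_value(raw):
    """Translate the readable token 'null' to None; otherwise parse an int."""
    return None if raw == "null" else int(raw)


def run_tests(xml_text):
    for test in ET.fromstring(xml_text).iter("test"):
        name = test.get("name")
        a = parse_value(test.get("a", DEFAULTS["a"]))
        b = parse_value(test.get("b", DEFAULTS["b"]))
        expected = test.get("expected")
        try:
            result = a + b  # stand-in for a call into the product under test
            passed = str(result) == expected
        except TypeError:  # e.g. None + 1, the "null" case
            passed = expected == "error"
        print(f"{name}: {'PASS' if passed else 'FAIL'}")


if __name__ == "__main__":
    run_tests(TEST_DATA)
```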

To make the tests more readable and extensible, use different XML files to drive different kinds of tests – e.g. positive vs. negative tests, or scenario A vs. scenario B vs. scenario C. With appropriate labels, comments, error messages and bug numbers inline with the data for the individual test, all your tests can be self-documenting and even self-reporting, freeing you from maintaining documents with details about the tests and removing that source of errors and potential conflicts.

A relational database is a more powerful way of handling large amounts of structured data. This would be a better choice for example if you were doing fuzz testing by generating large numbers of tests, according to your randomization scheme, and then saving to and executing from a SQL database. Even with fuzz testing, it’s very important that tests be as repeatable as possible!
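
Here is a minimal sketch, assuming SQLite and a fixed random seed; the table, columns, and the stand-in for the product call are hypothetical. Generating the cases first, storing them, and only then executing them is what keeps the fuzz run repeatable.

```python
# A minimal sketch of generating fuzz cases, saving them to a relational
# database, and then executing from the stored rows, assuming SQLite. The
# table, columns, and the stand-in product call are hypothetical. The fixed
# seed plus the stored rows make the fuzz run repeatable.
import random
import sqlite3

random.seed(12345)  # fixed seed so the generated set can be reproduced

conn = sqlite3.connect("fuzz_tests.db")
conn.execute("""CREATE TABLE IF NOT EXISTS fuzz_case (
                    id INTEGER PRIMARY KEY,
                    input_text TEXT,
                    expected TEXT)""")

# Generate and store the cases first...
for _ in range(1000):
    text = "".join(random.choice("abc<>&\"'") for _ in range(random.randint(0, 20)))
    conn.execute("INSERT INTO fuzz_case (input_text, expected) VALUES (?, ?)",
                 (text, "no_crash"))
conn.commit()

# ...then execute from the stored rows, so any failure can be re-run exactly.
for case_id, input_text, expected in conn.execute(
        "SELECT id, input_text, expected FROM fuzz_case"):
    outcome = "no_crash"  # replace with a call into the product under test
    if outcome != expected:
        print(f"fuzz_case {case_id} failed for input {input_text!r}")

conn.close()
```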

 

Monday, April 8, 2013

The Power of Data-Driven Testing


This post assumes a focus on integration and end-to-end testing of the less-dependent parts of a product, where the greatest quality risks are found: in the business logic, data or cloud layers. See this post for a discussion of why this is most effective for a product that has important information: http://metaautomation.blogspot.com/2011/10/automate-business-logic-first.html

Automated testing usually involves some inline code in a class method. A common pattern is to copy and paste code, or to create test libraries with some shared operations and call those libraries from the test method. The tests correspond to the methods 1:1, so 50 automated tests look like 50 methods on a class, with minor hard-coded variations between repeated patterns in the code.

For repeated patterns like this, there’s a much better way: data-driven testing.

Data-driven tests use a data source to drive the tests. Within the limits of a pattern of testing as defined by the capabilities of the system reading the data to drive the test, each set of data for the pattern drives an individual test. The set of data for each test could be a row in a relational database table or view, or an XML element of a certain type in an XML document or file.
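
As a minimal sketch of the pattern, in Python: one test method, driven by data, replaces many near-identical hard-coded methods. The discount rule, names, and data rows are hypothetical stand-ins for rows from a table or elements from an XML file.

```python
# A minimal sketch of the data-driven pattern in Python. Each tuple stands in
# for one row of a database table or one XML element; the discount rule and
# the names are hypothetical. One test method, driven by the data, replaces
# many near-identical hard-coded methods.
import unittest

DISCOUNT_CASES = [
    # (label,           order_total, expected_discount)
    ("below_threshold",  99.0,        0.0),
    ("at_threshold",    100.0,       10.0),
    ("above_threshold", 250.0,       25.0),
]


def compute_discount(order_total):
    """Stand-in for the product code under test."""
    return order_total * 0.10 if order_total >= 100.0 else 0.0


class DiscountTests(unittest.TestCase):
    def test_discount_cases(self):
        for label, order_total, expected in DISCOUNT_CASES:
            with self.subTest(label):
                self.assertAlmostEqual(compute_discount(order_total), expected)


if __name__ == "__main__":
    unittest.main()
```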

Why is this better?

For one, agility. The test set can be modified to fit product changes with changes in the test-driving data, at very low risk. It can also be extended as far as you want, within limits described by how the data is read.

Along with agility comes readability, meaning that it's easy for anyone to see what is tested and what is not for a given test set. It's easy to verify that the equivalence classes you want covered are represented, that the pairwise sets are there, that boundaries are checked with positive and negative tests, and so on.

To help readability, you can put readable terms into your test-driving data. Containers can have “null” or an integer count or something else. Enumerated types can be a label used in the type, say “Green,” “Red” or “Blue”, or the integer -1 or 4 for negative limit tests.
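
A minimal sketch of translating readable tokens into typed values, assuming a hypothetical Color type in the product: labels map to the enumerated values, while out-of-range integers like -1 or 4 pass through as integers for negative limit tests.

```python
# A minimal sketch of translating readable tokens from the test data into
# typed values, assuming a hypothetical Color type in the product. Labels map
# to the enumerated values; out-of-range integers like -1 or 4 are passed
# through as integers for negative limit tests.
from enum import Enum


class Color(Enum):  # hypothetical product type
    GREEN = 1
    RED = 2
    BLUE = 3


def parse_color_token(token):
    try:
        return Color[token.upper()]  # "Green" -> Color.GREEN
    except KeyError:
        return int(token)            # "-1" or "4" for negative limit tests


print(parse_color_token("Green"))  # Color.GREEN
print(parse_color_token("-1"))     # -1
```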

Best of all, failure of a specific test can be tracked with a bug number or a note, for example, “Fernando is following up on whether this behavior is by-design” or “Bug 12345” or a direct link to the bug as viewed in a browser. When a test with a failure note like this fails, the test artifacts will include a note, bug number, link or other vector that can significantly speed triage and resolution.
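
As a minimal sketch, a note or bug number can ride along with the data for a specific case so it appears directly in the failure output; the data layout, the stand-in product function, and the note text are hypothetical.

```python
# A minimal sketch of carrying a triage note or bug number with the test data
# so it shows up directly in the failure output. The data layout, stand-in
# product function, and note text are hypothetical.
cases = [
    {"input": 4,  "expected": 2,    "note": ""},
    {"input": -1, "expected": None,
     "note": "Bug 12345 - Fernando is following up on whether this is by design"},
]


def sqrt_under_test(value):
    """Stand-in for the product code under test (deliberately imperfect)."""
    return value ** 0.5 if value >= 0 else 0


for case in cases:
    actual = sqrt_under_test(case["input"])
    if actual != case["expected"]:
        # The note rides along with the failure, which can speed triage.
        print(f"FAIL: input={case['input']} expected={case['expected']} "
              f"actual={actual}  {case['note']}")
```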

The next post has some notes on organization, structure and design for data-driven tests.