
Tuesday, April 9, 2013

An Organization and Structure for Data-Driven Testing


This post follows up on yesterday's post, The Power of Data-Driven Testing (below):


So, data-driven testing is the way to go for a huge return on finding and regression-testing product issues and measuring the whole quality picture. How to start?

I like XML files to do this. Here are some reasons:

1. Your favorite text editor will work for reviewing, editing, and extending your test set.

2. If some values for a test are optional and you provide defaults as needed, the XML can be "sparse" and even easier to read and edit. The data that drives the test also expresses the focus and reason for the test, in the data itself!

3. You can be as constrained or as loose with the data schema (the layout of the XML data) as you want.

4. Extending your data engine can be as simple as allowing and parsing new values. For example, for testing with objects that include pointers or references, you can put "null" as a value in your XML and have your engine parse and use that for the test, in the context defined by the XML; see the sketch just after this list.
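
To make this concrete, here is a minimal sketch in Python, using the standard library's xml.etree.ElementTree (the post doesn't prescribe a language or an engine, and the test data and names here are made up), of reading sparse XML, filling in defaults, and parsing "null":

```python
import xml.etree.ElementTree as ET

# Hypothetical test data: "b" is optional and defaults to 0, and "null"
# stands in for a null reference passed to the product call.
TEST_DATA = """
<tests>
  <test name="basic add" a="2" b="3" expected="5"/>
  <test name="default b" a="7" expected="7"/>
  <test name="null input" a="null" expected="error"/>
</tests>
"""

DEFAULTS = {"b": "0"}

def parse_value(raw):
    """Map an XML token to a test value; 'null' becomes Python's None."""
    if raw == "null":
        return None
    return int(raw) if raw.lstrip("-").isdigit() else raw

for element in ET.fromstring(TEST_DATA).iter("test"):
    attrs = {**DEFAULTS, **element.attrib}   # sparse data falls back to defaults
    a, b = parse_value(attrs["a"]), parse_value(attrs["b"])
    print(attrs["name"], "->", a, b, "expecting", attrs["expected"])
```

Note how the sparse "default b" element reads as a statement of what that test is about.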

There are many engines that help with data-driven tests, or, with some time and skill, you can write your own.

To make the tests more readable and extensible, use different XML files to drive different kinds of tests, e.g., positive vs. negative tests, or scenario A vs. scenario B vs. scenario C. With appropriate labels, comments, error messages, and bug numbers inline with the data for each individual test, all your tests can be self-documenting and even self-reporting, freeing you from maintaining separate documents with details about the tests and removing that source of errors and potential conflicts.

A relational database is a more powerful way of handling large amounts of structured data. This would be a better choice if, for example, you were doing fuzz testing by generating large numbers of tests according to your randomization scheme, then saving them to and executing them from a SQL database. Even with fuzz testing, it's very important that tests be as repeatable as possible!
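
As an illustration of that workflow, here is a sketch in Python with the standard library's sqlite3 module (the system_under_test function and schema are made up): tests are generated from a fixed random seed and saved first, then executed from the database, so any failing case can be replayed by its id.

```python
import random
import sqlite3

def system_under_test(text):
    """Stand-in for the real product call (hypothetical)."""
    return text.upper()

random.seed(12345)  # a fixed seed makes the generated test set repeatable

db = sqlite3.connect(":memory:")  # in practice, a persistent SQL database
db.execute("CREATE TABLE fuzz_tests (id INTEGER PRIMARY KEY, payload TEXT)")

# Generate and save the randomized tests first...
for _ in range(1000):
    payload = "".join(random.choice("abc<>&\"' ") for _ in range(20))
    db.execute("INSERT INTO fuzz_tests (payload) VALUES (?)", (payload,))
db.commit()

# ...then execute from the database, not from the generator, so every
# recorded test runs exactly as saved and can be looked up by id.
for test_id, payload in db.execute("SELECT id, payload FROM fuzz_tests"):
    try:
        system_under_test(payload)
    except Exception as error:
        print(f"test {test_id} failed on {payload!r}: {error}")
```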


Monday, April 8, 2013

The Power of Data-Driven Testing


This post assumes a focus on integration and end-to-end testing of the less-dependent parts of a product, where the greatest quality risks are found: in the business logic, data or cloud layers. See this post for a discussion of why this is most effective for a product that has important information: http://metaautomation.blogspot.com/2011/10/automate-business-logic-first.html

Automated testing usually involves some inline code in a class method. A common pattern is to copy and paste code, or to create test libraries with shared operations and call them from the test method. The tests correspond to the methods 1:1, so 50 automated tests look like 50 methods on a class, with minor hard-coded variations between repeated patterns in the code.

For repeated patterns like this, there’s a much better way: data-driven testing.

Data-driven tests use a data source to drive the tests. The system that reads the data defines a pattern of testing; within that pattern, each set of data drives an individual test. The set of data for each test could be a row in a relational database table or view, or an XML element of a given type in an XML document or file.
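
A minimal illustration of the idea, using Python's unittest (the rows and names are invented; in practice they would come from a database table or an XML file): one test method expresses the pattern, and each data row drives an individual test.

```python
import unittest

# Illustrative data rows: label, input text, expected word count.
ROWS = [
    ("empty string", "", 0),
    ("single word", "data", 1),
    ("two words", "data driven", 2),
]

class WordCountTests(unittest.TestCase):
    def test_word_count(self):
        # One method, one pattern of testing; each row is an individual test.
        for label, text, expected in ROWS:
            with self.subTest(label):
                self.assertEqual(len(text.split()), expected)

if __name__ == "__main__":
    unittest.main()
```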

Why is this better?

For one, agility. The test set can be modified to fit product changes just by changing the test-driving data, at very low risk. It can also be extended as far as you want, within the limits defined by how the data is read.

With agility comes readability: it's easy for anyone to see what is tested and what is not for a given test set. It's easy to verify that the equivalence classes you want covered are represented, that the pairwise sets are there, that boundaries are checked with positive and negative tests, and so on.

To help readability, you can put readable terms into your test-driving data. Containers can be specified as "null," an integer count, or something else. Enumerated types can be given as a label from the type, say "Green," "Red," or "Blue," or as an out-of-range integer like -1 or 4 for negative limit tests.
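
For instance, a token parser along these lines (a sketch; the Color type and the token rules are hypothetical) can map readable terms to test values:

```python
from enum import Enum

class Color(Enum):
    RED = "Red"
    GREEN = "Green"
    BLUE = "Blue"

def parse_color_token(token):
    """Map a readable data token to a test value."""
    if token == "null":
        return None                # null reference case
    try:
        return Color(token)        # "Green" -> Color.GREEN
    except ValueError:
        return int(token)          # "-1" or "4": out-of-range for negative tests
```

Here parse_color_token("Green") returns Color.GREEN, while parse_color_token("4") returns an integer outside the valid range, ready for a negative limit test.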

Best of all, failure of a specific test can be tracked with a bug number or a note, for example, “Fernando is following up on whether this behavior is by-design” or “Bug 12345” or a direct link to the bug as viewed in a browser. When a test with a failure note like this fails, the test artifacts will include a note, bug number, link or other vector that can significantly speed triage and resolution.
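
A sketch of what that self-reporting can look like, reusing the notes quoted above (the data layout and the in_range function are hypothetical):

```python
# Illustrative test data carrying a triage note alongside each test.
TESTS = [
    {"name": "boundary high", "input": 100, "expected": True,
     "note": "Bug 12345"},
    {"name": "boundary low", "input": -1, "expected": False,
     "note": "Fernando is following up on whether this behavior is by-design"},
]

def in_range(value):
    """Stand-in for the product behavior under test (hypothetical)."""
    return 0 <= value <= 99

for test in TESTS:
    if in_range(test["input"]) != test["expected"]:
        # The failure artifact carries the note straight from the test data.
        print(f"FAIL {test['name']}: {test['note']}")
```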

The next post has some notes on organization, structure, and design for data-driven tests.