Monday, July 14, 2014

Time to retire the phrase "Automated Testing" and use "Checking" instead


UPDATE to this post, April 18th, 2015: for purposes of automated testing, I'd like to define "check" as a specialization or subclass of "test" in which the verifications are limited to those specifically coded or otherwise determined in advance. The usefulness of "check" is limited to what people also call "automated test," and it has a business justification: it avoids both confusion and risk. It avoids the confusion of assuming that, once a manual test involving a GUI or web page is automated, a human no longer needs to run the manual test, and it avoids the quality and business risk that would follow from losing the corresponding measurements of product quality.

***

It’s time to retire the phrase “automated testing.”

Given that software testing is about measuring, communicating, and promoting quality, leadership often sees automation (that is, making a software product do things automatically) as a way of doing all of the above faster. Unfortunately, it does not work that way.

People are smart, but computers and computing power are not. People running user stories or test cases, or doing exploratory testing, are very good at finding large numbers of bugs, within the limits of attention, fatigue, and boredom. People are great at spotting things that are not as they should be: a flicker in an icon over here, a misalignment of a table over there, or a problem of discoverability.

When automated product testing is done well, it has huge value: it is excellent at catching regressions in product quality quickly, repeatedly, and tirelessly. Automation does not get tired or bored. Computers are very good at processing numbers and repeating procedures, at doing both fast and reliably, and at doing them at, say, 3 AM local time while your people are home sleeping.

However, automation is not good at finding product bugs or anomalous issues like a flickering icon. You need good human testers for that.

Instead of “automated testing,” it’s time to use a term proposed by James Bach and Michael Bolton (see their post http://www.satisfice.com/blog/archives/856 ) for automation that drives tests: checking. A single automated procedure that measures a defined aspect of quality for the system under test (SUT) is a “check.” The term “check” applies where a professional in the space might more commonly say “automated test,” but since testing is an intelligent activity done by humans, “automated test” is an oxymoron; once a test is automated, it is no longer a test in the same sense. Done well, it is fast, reliable, tireless, and highly repeatable, but its value is very different from that of the same procedure run by a testing professional.
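
To make the idea concrete, here is a minimal sketch of a check in Python. The sut_client module, the LoginPage class, and its methods are hypothetical stand-ins for whatever drives your SUT; the point is the shape: one procedure, one pre-specified verification.

```python
from sut_client import LoginPage  # hypothetical SUT driver


def check_login_succeeds_with_valid_credentials():
    """One check, one verification: valid credentials land on the dashboard."""
    page = LoginPage.open()
    page.submit(username="alice", password="correct-horse")
    # The single, pre-specified verification this check performs:
    assert page.current_view == "dashboard", "expected dashboard after valid login"


if __name__ == "__main__":
    check_login_succeeds_with_valid_credentials()
    print("check passed")
```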

A skilled and experienced tester running a manual test can discover, characterize, and describe as a bug any of a broad range of issues. The range of potential issues has few limits; it is driven by the intelligence, creativity, and observational skill of the tester. Taking that manual test, automating it, and then running it offline (without human observation or intervention) or in the lab severely restricts the range of discoverable issues. An automated test can flag issues that block the procedure of the test, and issues that are the subject of explicit verifications or metrics coded into the test or the test harness, but it measures nothing else about the product. The automated test will usually run faster than the manual test, and a well-written automated test will run more reliably and repeatably than the manual test, but it does not replace it. If the automated version runs and passes a thousand times, running the manual version once could still find important issues.
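
A sketch of that limitation, again with a hypothetical SUT driver: the check below can report a problem in exactly two ways, and anything outside those two paths goes unobserved.

```python
from sut_client import SettingsPage  # hypothetical SUT driver


def check_default_timeout_is_30_seconds():
    # Failure mode 1 -- a blocking issue: if the page cannot open,
    # the driver raises an exception and the check stops right here.
    page = SettingsPage.open()

    # Failure mode 2 -- an explicit, coded-in verification:
    assert page.timeout_seconds == 30, "default timeout should be 30 seconds"

    # Anything not coded above -- a flickering icon, a misaligned table,
    # a garbled label -- passes through unnoticed.
```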

The team therefore still needs the manual test, and in the context of measuring product quality, there is benefit in tracking the manual test and its results separately from the automated test and the automation results. The manual and automated versions of the test each have their own value for quality, and one does not replace the other.

It’s time for the industry to use “check” because the term emphasizes that automating a test is neither the same as running the test faster nor a replacement for the manual version of the test. The team will always need manual testers; however, a well-designed and frequently run set of checks can make manual testing faster, more effective, and more fun, because it means less manual repetition of measurements that automation already verifies, and more exploration around the manual test.

In addition, the best-written manual tests differ significantly from ideal checks. Manual (or quasi-manual) tests tend to focus on scenarios or mini-scenarios, because that reflects natural end-user usage and gives testers the most opportunities to find issues and characterize them as bugs for the team to consider. Checks focus on specific verifications and are ideally as short as possible.
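
For example, where a manual test might walk a whole purchase scenario end to end, the equivalent checks each make one short, specific verification. A sketch, with a hypothetical Store driver:

```python
from sut_client import Store  # hypothetical SUT driver


def check_item_added_to_cart():
    store = Store.open()
    store.add_to_cart("sku-123")
    assert store.cart_count == 1


def check_cart_total_matches_item_price():
    store = Store.open()
    store.add_to_cart("sku-123")
    assert store.cart_total == store.price_of("sku-123")
```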

“Check” means that the verifications are strictly limited to what is specified in advance, whether by verifications coded into the check itself, verifications for the test group (if that is implemented), or verifications applied by the test harness as a whole. This specification might also be written down in prose, but it always lives in the code. That suits an automated test very well, because it is important to be completely consistent across test runs about what is and is not verified about the SUT.
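
A sketch of what a harness-level verification might look like, with a hypothetical Sut driver: every check in the group gets the same pre-specified verification on every run, in addition to its own coded-in assertions.

```python
from sut_client import Sut  # hypothetical SUT driver


def run_check(check):
    """Run one check, then apply the harness-wide verification."""
    sut = Sut.start()
    try:
        check(sut)  # the check's own coded-in verifications
        # Harness-wide verification, applied identically to every check:
        assert not sut.error_log(), "SUT logged errors during the check"
        return "pass"
    except AssertionError as failure:
        return f"fail: {failure}"
    finally:
        sut.stop()
```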


This post is based on an excerpt from Matt Griscom’s forthcoming book, MetaAutomation.

1 comment:

  1. Maybe if we ban the word testing entirely we can avoid all the problems created by it.

