Monday, February 23, 2015

Don't Drive Around in a Horseless Carriage, and Don't Automate Manual Tests!


One of the memes doing the rounds these days holds that automated testing is no different from manual testing, and that, in fact, you should simplify by thinking of these two modes of quality measurement in the same way.

That way of thinking is very self-limiting and therefore expensive. I will illustrate this with an analogy.

Back in the 1890’s, this was a stylish way to get around:


Then, this happened:

And, people could ride this:


Notice some things about this gasoline-powered car:

The rear wheels are bigger than the front.

One steers the thing with a tiller.

It looks a lot like the horse-drawn buggy that woman is driving, doesn’t it?

But when people say you should think of automated tests just like manual tests, or even that the word “automation” should be dropped as a meaningless identifier, they are well-intentioned; the real consequence of what they’re saying, though, is that they want you to get around town in something like this motorcar from a century ago.

Meanwhile, what you really want to be driving is something like this Tesla Model S:




Over the last century, people have figured out that, with modern infrastructure, it doesn’t make sense to drive around in something that looks like a horse belongs in front of it.

I submit that it’s time to do test automation that’s optimized for automated processes, and not approach it as if it were the same as manual testing.

The Atomic Check pattern, the least dependent pattern in the MetaAutomation pattern language, does exactly this. To learn more, check out other entries in this blog or see my book on MetaAutomation on Amazon.


Monday, February 9, 2015

Automation is key, but don't automate manual tests!


A common pattern in software quality these days is to treat automation as if it were the equivalent of manual testing. With this view, the question of “what to automate” is easy: the team proceeds to automate the existing test cases. These were written as manual test cases, and probably run occasionally, or maybe just once as part of developing them. The thinking is that the manual test cases no longer have to be a burden on the manual testers, because once they are automated, the quality measurements that are the subject of those test cases are taken care of. But is that a safe assumption?


An important difference is that testers notice things; they’re smart and observant. Automated tests, on the other hand, only “notice” what is explicitly coded into them, aside from the application crashing or some confusing implicit verification, e.g. a null reference exception.
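To make this concrete, here is a minimal sketch in Python of how an automated check can pass while the page is obviously broken to any human tester. The automated_check function and the page dictionary are invented stand-ins for illustration; a real check would drive a browser or an API.

```python
# Sketch: an automated check only "notices" what it explicitly asserts.
# The page dictionary below is an invented stand-in for a rendered page.

def automated_check(page):
    # The only thing this check verifies is the title text; anything
    # else wrong with the page goes unnoticed by the automation.
    assert page["title"] == "Welcome"
    return "pass"

# A page that a human tester would instantly flag as broken...
broken_page = {
    "title": "Welcome",    # the one thing the check looks at
    "body": "",            # missing content: unnoticed
    "css_loaded": False,   # garbled layout: unnoticed
}

print(automated_check(broken_page))  # → pass, despite the obvious defects
```

The check reports success because its single explicit verification holds, which is exactly the gap between what automation measures and what a human observer would report.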


Manual tests of a GUI or a web page can be automated, but the value of running such a test as automation is very different from running it manually. The automated test might be faster, but it misses all sorts of details that would be obvious to a human tester, and it is prone to a brittleness that the manual test does not suffer. An experienced team would run the “automated” manual test periodically anyway, to confirm all the qualities that the automation probably doesn’t cover (or, if it did cover them, would become too brittle to be useful).


It doesn’t make sense to automate manual tests. But, quick regression is important to manage the risk around product churn. So, what to automate?


A simple and deterministic approach is to verify the elements of business behavior, and to verify intermediate steps only as needed. This way, the automation is faster and more robust, and it is clear about what it verifies and what it does not.
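As a rough sketch of this idea, the check below verifies exactly one element of business behavior, with the intermediate step performed but not treated as a verification target. The ShopClient class is an invented stand-in for a real system under test, not an API from the MetaAutomation book.

```python
# Sketch of a check in the spirit of Atomic Check: one check, one element
# of business behavior verified. ShopClient is an invented stand-in for
# the system under test.

class ShopClient:
    """Invented stand-in for a real shopping application."""
    def __init__(self):
        self._logged_in = False
        self._cart = []

    def log_in(self, user):
        # Intermediate step: needed to reach the behavior under check.
        self._logged_in = True

    def add_to_cart(self, item):
        if not self._logged_in:
            raise RuntimeError("must be logged in")
        self._cart.append(item)

    def cart_contents(self):
        return list(self._cart)


def atomic_check_add_to_cart():
    """Verify exactly one business behavior: adding an item to the cart."""
    client = ShopClient()
    client.log_in("test-user")    # performed, not verified
    client.add_to_cart("widget")  # the one behavior this check targets
    # The single verification for this check:
    assert client.cart_contents() == ["widget"]
    return "pass"

print(atomic_check_add_to_cart())  # → pass
```

Because the login step is performed rather than verified, a failure in this check points unambiguously at the add-to-cart behavior, which is what keeps such checks fast and their results clear.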


This is the first of several important requirements of the Atomic Check pattern, the least dependent pattern of MetaAutomation. There are other wonderful properties of Atomic Check, but this one is summarized in an activity diagram on page 37 of the book on MetaAutomation (and shown below).