Wednesday, October 28, 2015

Analytics and MetaAutomation


Analytics is a powerful and rapidly developing technology, and your team needs automated verifications of business requirements as well (e.g., using the Atomic Check pattern of MetaAutomation and its dependent patterns).

Why both?

Atomic checks are fast and reliable enough to verify requirements quickly from an end-user perspective, and a set of them is even fast and scalable enough to gate check-ins (especially with the Precondition Pool, Parallel Run, and Smart Retry patterns of MetaAutomation). Implementing these patterns can ensure that quality is always moving forward, which is especially important for software that matters.
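
As a rough illustration of why that independence and scalability matter (a minimal sketch in Python, not MetaAutomation itself; the check functions, results, and retry policy are invented placeholders), independent atomic checks can be fanned out across worker threads, and a failed check can be retried in isolation:

# Minimal sketch: because atomic checks are independent, they can run in
# parallel, and a failed check can be retried on its own. The check
# functions below are invented placeholders.
from concurrent.futures import ThreadPoolExecutor

def check_sign_in():       # placeholder atomic check
    return True            # pass

def check_search():        # placeholder atomic check
    return True            # pass

def check_checkout():      # placeholder atomic check
    return False           # fail, to show the retry path

CHECKS = [check_sign_in, check_search, check_checkout]

def run_with_retry(check, retries=1):
    """Run one independent check; retry on failure. This is a naive
    stand-in for Smart Retry, which also compares failure details."""
    for _ in range(retries + 1):
        if check():
            return "pass"
    return "fail"

if __name__ == "__main__":
    # Independence means order and placement don't matter, so checks can
    # be distributed across threads (or machines) to gate a check-in.
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(run_with_retry, CHECKS))
    for check, result in zip(CHECKS, results):
        print(check.__name__, result)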

Analytics can give the team an idea of what the end users are experiencing. This is awesome, because otherwise you’d have little or no idea of what’s going on for your end users. This is very valuable for, e.g., A/B testing. On the other hand, it’s somewhat like the shadows of Plato’s cave: in some cases you have to reconstruct and guess at what’s really going on, e.g., why users abandon sessions on a site.

Analytics need product instrumentation, a.k.a. telemetry. Good news: there’s synergy here! Automated verifications can use that instrumentation as well. Analytics benefit from product instrumentation through logs, and automated verifications can use the same outputs, synchronized for inclusion in the artifact of a check run. The valid XML document created by an atomic check is ideal for this.
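
As a rough sketch of that synergy (the event names, sink classes, and data are invented for illustration; this is not a published API), a single instrumentation call in the product can feed both the analytics pipeline and the structured artifact of whatever check happens to be running:

# Sketch: one product instrumentation point feeding two consumers. The
# analytics sink and the artifact recorder are invented placeholders.
import json
import time

class AnalyticsSink:
    """Stand-in for whatever telemetry pipeline the product already uses."""
    def send(self, event):
        print("analytics:", json.dumps(event))

class CheckArtifactRecorder:
    """Stand-in for the recorder that builds a check-run artifact; events
    are kept in order so they can be serialized into the artifact's XML."""
    def __init__(self):
        self.events = []
    def record(self, event):
        self.events.append(event)

analytics = AnalyticsSink()
current_check_artifact = CheckArtifactRecorder()  # set while a check runs

def instrument(name, **data):
    """The product's single instrumentation call."""
    event = {"name": name, "time_ms": int(time.time() * 1000), **data}
    analytics.send(event)                     # analytics always gets it
    if current_check_artifact is not None:    # a running check keeps it too
        current_check_artifact.record(event)

instrument("cart.item_added", sku="ABC-123", quantity=2)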

 

This diagram gives a view of how product instrumentation can benefit both analytics and automated verifications (checks).

Tuesday, October 27, 2015

Automated Verifications Deserve Better than Flight-Recorder Logs

This post is a follow-up to this one:
http://metaautomation.blogspot.com/2015/10/linear-check-steps-are-good-but.html

"Automated Test" is an oxymoron, because you're not automating what a manual tester could do with the script, and unlike true automation (industrial, operations or IT automation) you don't care about the product output (unless you're doing your quality measurement that way explicitly).

Automation for software quality has some value at authoring time, because it's a good way of finding bugs, but its real business value comes from the verifications being done, most of which are coded explicitly (preliminary verifications, plus a target verification or verification cluster, per MetaAutomation).

So, call it what it is: automated verifications! That understanding opens up tremendous business value for software quality! If done right, it can help manual testers too by making their jobs more interesting and more relevant.

According to tradition, and to the origins of "test automation" in industrial automation, the output of automating the software system under test (SUT) is flight-recorder logs and, maybe, a stack trace. But there's no structure there; it's just a stream of information, some of it relevant, but most of it not relevant or interesting.

It's time to leave that tradition behind! Automated verifications have well-defined beginnings and ends. They're bounded in time! Use that fact to give structure to the artifacts, i.e., the information that flows from an automated verification.

XML is ideal for this, especially with an XSD schema. With an XML document, there are boundaries and a hierarchical structure, and if the presentation is left out (as it should be) then it's pure data; it's compact and strongly-typed, available for robust and efficient analysis, and completely flexible for presenting the data.
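
Here's a minimal sketch of such an artifact, built with Python's standard xml.etree.ElementTree; the element names, attribute names, and values are invented for illustration, and a real artifact would be validated against an XSD schema:

# Sketch: a pure-data, hierarchical check artifact. All names and values
# are illustrative; a real artifact would be validated against an XSD.
import xml.etree.ElementTree as ET

check = ET.Element("check", name="PlaceOrder", result="fail")

sign_in = ET.SubElement(check, "step", name="Sign in", result="pass",
                        milliseconds="412")
ET.SubElement(sign_in, "step", name="Enter credentials", result="pass",
              milliseconds="180")
ET.SubElement(sign_in, "step", name="Submit", result="pass",
              milliseconds="232")

target = ET.SubElement(check, "step", name="Place order", result="fail",
                       milliseconds="95")
ET.SubElement(target, "data", name="httpStatus", value="500")

print(ET.tostring(check, encoding="unicode"))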

Even better: the least-dependent pattern of MetaAutomation, Atomic Check, shows how to write self-documenting hierarchical steps for your checks (i.e., automated verifications), which means you'll never have to choose between using your team's domain-specific language (DSL) or ubiquitous language, and recording the (sometimes very important) granular details of what the check is doing with your product or where and why it failed. You can do both!

With self-documenting hierarchical steps, once a given check has succeeded all the way through at least once, a later failure leaves every check step recorded as "pass," "fail," or "blocked." This gives another view of what the check was trying to do, and tells you exactly which parts of product quality are blocked or left unmeasured because of the failure.
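
A small sketch of that bookkeeping (flattened to a linear list of steps for brevity; the step names and the run data are invented): once a passing run has established the full set of steps, any step not reached on a failing run can be marked "blocked":

# Sketch: after a check has passed at least once, the full list of steps
# is known; on a failed run, steps not reached are marked "blocked".
KNOWN_STEPS = ["Sign in", "Add item to cart", "Place order", "Verify receipt"]

def annotate_run(executed):
    """executed: list of (step name, "pass" or "fail"), in the order run."""
    observed = dict(executed)
    annotated = []
    failed = False
    for name in KNOWN_STEPS:
        status = "blocked" if failed else observed.get(name, "blocked")
        annotated.append((name, status))
        failed = failed or status == "fail"
    return annotated

print(annotate_run([("Sign in", "pass"), ("Add item to cart", "fail")]))
# [('Sign in', 'pass'), ('Add item to cart', 'fail'),
#  ('Place order', 'blocked'), ('Verify receipt', 'blocked')]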

Self-documenting hierarchical steps also give you information that's detailed and precise enough to know whether a specific failure has happened before! See the Smart Retry pattern of MetaAutomation for how to do this and how to use it.
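
One way to use that precision (a sketch only; this is not a reference implementation of Smart Retry) is to reduce a failure to a signature, here the path of hierarchical step names down to the first failing step, and compare it against failures already seen. The artifact structure assumed here matches the illustrative XML sketch above:

# Sketch: decide whether a failure has been seen before by comparing a
# signature built from the hierarchical path to the failing step. The
# artifact structure matches the illustrative XML above.
import xml.etree.ElementTree as ET

def failure_signature(artifact_xml):
    """Return the tuple of step names leading to the first failing step."""
    node = ET.fromstring(artifact_xml)
    path = []
    while True:
        failing = next((s for s in node.findall("step")
                        if s.get("result") == "fail"), None)
        if failing is None:
            return tuple(path)
        path.append(failing.get("name"))
        node = failing

known_failures = set()

def is_repeat_failure(artifact_xml):
    """True if this exact failure path has been recorded before."""
    signature = failure_signature(artifact_xml)
    repeat = signature in known_failures
    known_failures.add(signature)
    return repeat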

Even more: using a structure like XML for your check artifact means you have a ready placeholder for perf data; the number of milliseconds each step took is recorded, for every step in the hierarchy. Perf testing doesn't need a separate infrastructure or focus, and perf data is available for robust analysis over the SDLC.

See the samples on MetaAutomation.net for how this works.

 

Tuesday, October 20, 2015

Linear Check Steps are Good, but Hierarchical Steps are The Way of the Future


A check (what some call an “automated test”) has steps. Typically, these steps are linear: first step 1, then step 2, and so on. That’s useful, because now you know how the check does its thing, assuming you know what the words describing the check steps mean.

If your software development team values communication, maybe they’ve defined a Ubiquitous Language (UL) of terms that are meaningful to the problem space that the product addresses and meaningful to all team members.

The steps of the check document what it does, so define simple steps for the check using terms from your UL. If the steps are too granular for your UL terms, then lump the linear steps together until they are expressed in your UL.

Now the check steps use UL, and that’s good, but what happens inside each of those steps? If you lumped steps together to make a linear list of steps, each of which has a meaning in your UL, the steps themselves aren’t trivial. What goes on inside a step? It’s documented in code somewhere, but not generally available to the team. What if a step changes subtly, so that the steps work together in a sequence for different test cases? This happens; been there, done that.

Hierarchical check steps avoid these problems, and greatly improve communication around the team. With MetaAutomation, the check steps are self-documenting in the original hierarchy with results “pass,” “fail,” or “blocked.” The customer of the check results does not always want to drill down to the child steps, but (s)he always has that option, and it would be quick and easy if needed.

I’ll make a more concrete example of the value of hierarchical steps and drill-down capability:

I live in north Seattle. Suppose I want to visit my brother near Boston. Suppose further I’m afraid of flying, and I want to drive a car.

I go to Bing maps, type in the addresses, and presto! Driving directions.

But there are 40 linear steps to get there. I presume they’re all correct, but most of them are not even close to the domain language of driving a car that distance. For example, step 8 is “Take ramp right and follow signs for I-5 South.” If someone asked me “How do I drive to Boston?” and my reply included “Take ramp right and follow signs for I-5 South,” the person would make excuses and go ask someone else.

No, the domain language for driving to my brother’s house includes terms like this:

1.       Do you know how to get to I-5? OK, then

2.       Take I-5 South to I-90

3.       Take I-90 East to eastern Massachusetts

4.       I-95 North to Melrose

5.       …(several more steps)

There are 7 linear steps in total. But there’s a problem with step 3: are we really taking the shortest, most optimized route? If so, the route takes I-94 through North Dakota; the more obvious route stays on I-90 through South Dakota. Which is it? This could be important if the car breaks down and help is needed.

If the driving steps are hierarchical, the customer of that information (the person consuming the directions) has choices: the default view of the steps shows the 7 top-level steps, the same steps as if they were just linear. But, all the information about driving to Boston is there, not just the broad steps.

With hierarchical information, the customer can drill down at step 3 to discover that the most optimized route means getting off I-90 at Billings, Montana to take I-94 instead, through Custer, Bighorn, etc., staying on I-94 all the way east to Tomah, Wisconsin. If you were driving the route, or if you got the call that the driver is lost or broken down, that would be important information.
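
To make the drill-down idea concrete in code (a sketch; the route data is just the example above, abbreviated, and the structure is invented for illustration), hierarchical steps are simply nested structures, and the consumer chooses how deep to look:

# Sketch: hierarchical steps as nested data; the consumer decides how far
# to drill down. The route details are abbreviated from the example above.
route = {
    "name": "Drive from north Seattle to Melrose, MA",
    "steps": [
        {"name": "Get to I-5", "steps": []},
        {"name": "Take I-5 South to I-90", "steps": []},
        {"name": "Take I-90 East to eastern Massachusetts", "steps": [
            {"name": "At Billings, MT, exit onto I-94 East", "steps": []},
            {"name": "Follow I-94 East to Tomah, WI", "steps": []},
            {"name": "Rejoin I-90 East", "steps": []},
        ]},
        {"name": "Take I-95 North to Melrose", "steps": []},
    ],
}

def show(step, depth=0, max_depth=1):
    """Print steps down to max_depth; deeper detail stays available."""
    print("  " * depth + step["name"])
    if depth < max_depth:
        for child in step["steps"]:
            show(child, depth + 1, max_depth)

show(route, max_depth=1)   # the broad, top-level view
show(route, max_depth=2)   # drill down into the cross-country leg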

Back to software quality:

If a check fails, having the hierarchical information could make the difference between needing to reproduce the problem and debug through it (if that’s even possible, which in some cases it’s not), and already having all the information one needs to enter a bug, fix the problem, or write it off as an external, transient timing issue (a race condition). The hierarchical information makes Smart Retry (a pattern of MetaAutomation) possible, because it makes clear whether a problem is reproduced or unique, external or owned by the developer team.

Hierarchical check steps give the team the flexibility to use UL terms whenever possible, so the whole team benefits from knowing what’s going on with the product, and at the same time to have child steps that are finely granular, reflecting exactly how the product is being driven, exactly what the value of a parameter is, or what the result is from some instrumentation, etc.

Hierarchical check steps also record performance data for all of the steps, so if a step is running slow, the customer can always drill down to discover which child step is running long.

Performance data in line with the most granular steps is recorded every time those steps are run, so analysis over time can uncover performance limits and trends.
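
Here’s a sketch of that kind of analysis over stored artifacts (the directory layout and the element and attribute names, such as “step” and “milliseconds”, are assumptions for illustration):

# Sketch: aggregate per-step timings across many stored check artifacts to
# watch for performance limits and trends. The directory layout and the
# element/attribute names are assumed for illustration.
import glob
import xml.etree.ElementTree as ET
from collections import defaultdict
from statistics import mean

def step_times(artifact_file):
    """Yield (step name, milliseconds) for every step in one artifact."""
    root = ET.parse(artifact_file).getroot()
    for step in root.iter("step"):
        yield step.get("name"), int(step.get("milliseconds", "0"))

timings = defaultdict(list)
for path in sorted(glob.glob("artifacts/*.xml")):   # assumed layout
    for name, ms in step_times(path):
        timings[name].append(ms)

for name, samples in timings.items():
    print(f"{name}: mean {mean(samples):.0f} ms over {len(samples)} runs")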

As software gets ever more important in our lives, and more complex at the same time, a better solution for recording and communicating quality is inevitable.

Edit October 27th, 2015:
See also this post on hierarchical check artifacts
http://metaautomation.blogspot.com/2015/10/automated-verifications-deserve-better.html
 

Friday, October 9, 2015

What is MetaAutomation? A Very Brief Technical Overview


What, again, is MetaAutomation all about?

Here is a metaphorical view of the potential of MetaAutomation.

This post is about a technical view.

The least dependent pattern of MetaAutomation is called Atomic Check. There are three main aspects that Atomic Check specifies, and they’re somewhat bound together:

First, the check must be atomic, i.e., as short and as simple as possible while still measuring the targeted business requirement for the software product. The result of a check run is, of course, a Boolean: pass or fail.

Second, no check has a dependency on any other check. They can be run in any order and on any number of clients, to scale with available resources.

Third, every check run creates a detailed result, which I call an artifact. Whether the check passes or fails, the artifact has all of the following characteristics (a sketch in code follows the list below):

1.       It’s pure data. (I use valid XML with an XSD schema.)

2.       All the individual steps of the check are self-documented in the artifact.

3.       Check steps are hierarchical, for better clarity, stability, and robustness in analysis.

4.       Check steps, and the check as a whole, can have additional data in name/value pairs.

5.       Each check step element in the XML has perf data: the number of milliseconds that step took to complete (including child steps, if there are any).
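
Putting the three aspects together, here’s a minimal sketch in Python (invented names throughout; this is not the MetaAutomation implementation) of an atomic, independent check whose run yields both a Boolean result and a hierarchical, pure-data artifact with per-step timing:

# Sketch: an atomic, independent check that yields a Boolean result plus a
# hierarchical artifact with per-step timing. All names are illustrative.
import time
import xml.etree.ElementTree as ET

class Step:
    """Records one self-documenting step, with its duration and result."""
    def __init__(self, parent, name):
        self.element = ET.SubElement(parent, "step", name=name, result="pass")
    def __enter__(self):
        self.start = time.monotonic()
        return self.element       # child steps or data can be added here
    def __exit__(self, exc_type, exc, tb):
        ms = int((time.monotonic() - self.start) * 1000)
        self.element.set("milliseconds", str(ms))
        if exc_type is not None:
            self.element.set("result", "fail")
        return False              # let a failure propagate to the runner

def atomic_check_sign_in():
    """Shortest path to verifying one requirement: a user can sign in."""
    artifact = ET.Element("check", name="SignIn")
    try:
        with Step(artifact, "Navigate to the sign-in page"):
            pass                            # drive the product here
        with Step(artifact, "Submit valid credentials") as step:
            ET.SubElement(step, "data", name="user", value="test-account")
        with Step(artifact, "Verify the landing page greets the user"):
            assert True                     # the target verification
        artifact.set("result", "pass")
        return True, artifact
    except Exception:
        artifact.set("result", "fail")
        return False, artifact

passed, artifact = atomic_check_sign_in()
print(passed, ET.tostring(artifact, encoding="unicode"))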

The third aspect is the most radical. What happened to your flight-recorder log? It can still be there if you want that help at development time, but that’s the only setting where it’s useful. Afterward, it’s discarded, because the pure-data artifact is more compact, more precise and accurate, and much better for subsequent analysis and other cool stuff.

Why would I ask you to do something radically different? Because it helps you do very cool and powerful things, like communicate with all roles on the team in as much or as little detail as they want! It enables analysis of product behavior over the entire SDLC, greatly simplifies and normalizes perf information, makes for better information on software asset value for Sarbanes-Oxley (SOX) …

The third aspect described above, in combination with the first two, enables the dependent patterns of MetaAutomation to work and to deliver quality value to the whole team that’s an order of magnitude better than conventional practice. For software that matters to people, MetaAutomation or an equivalent practice is inevitable.

Please see here for more information on MetaAutomation.

What is MetaAutomation? A Brief Metaphorical View


What is MetaAutomation all about?

MetaAutomation opens a conceptual door, and through that door, it shows how to create powerful quality and communication value for a software team: faster and better, more complete, more robust, more reliable, and more satisfying.

That door is currently being held closed by at least several factors:

Conventional wisdom and practice maintain that “automated testing” is just like “manual testing”; in fact, according to this meme, the words “automated” and “manual” can be dropped from the software quality profession. There are good intentions here… but it comes with business risk, and it’s pulling the door closed.

There are tool vendors and open-source projects that support “automating” tests originally designed as manual tests, i.e., designed for a person to execute in order to observe quality information about a software product under development. There are good intentions here too, but it’s also risky, and it’s pulling the door closed.

The “Context-Driven School” view of testing discourages software quality professionals from seeing general approaches to software quality. This pulls the door closed.

Keyword-driven testing is represented by many tools and techniques (BDD, Cucumber, and Gherkin, for example), but it requires linear runtime interpretation of an invented language of keywords or phrases. This limits both the precision and the accuracy of the artifacts of a check run, which in turn limits the value of the automated verifications for communication and for quality measurement. The concept is very popular these days, and it does have value, but it still looks a lot like automating manual tests, with all the risks and limitations that go along with that, and the artifacts aren’t much better. This movement, too, pulls the door closed.

Open the door! Automated verification is in many ways a distinct discipline with unique powers relative to other software quality patterns and practices. For repeatable regression checks, for performance data, for managing business risk and shipping software faster with better quality, and for quality communication around all team roles and across distributed teams, there’s huge value on the other side of that door.

That’s what MetaAutomation is all about.
 
Please see here
http://metaautomation.blogspot.com/2015/10/what-is-metaautomation-very-brief.html
for a very brief technical overview.
 (edited 10.12.2015)

Thursday, October 8, 2015

Automated Verifications are Special, and Why This Is Important


MetaAutomation enables automated verifications to be much more powerful than they are with existing practices. However, to realize that value requires, first, an important paradigm shift.

Conventional wisdom about automation, according to current practice, is that there is no significant distinction separating automated verifications from other aspects of software quality, e.g., manual testing or human-led testing that uses tools (in addition to the SUT, which is itself a kind of tool).

This view can simplify software quality planning because it approaches “test automation” as if it were natural, rather than the contradiction in terms that it really is, and treats automation as simply an extension of the manual testing effort.

The simplification is an attractive model for understanding the very complex and difficult problem space around software quality. One could argue that this even follows the philosophical principle of Occam’s razor, i.e., that the simpler model that explains the observations is the one more likely to be correct or useful.

However, fortunately or unfortunately (depending on your perspective), understanding automation as an extension of the manual testing effort does not explain experiences or capabilities in this space. People with experience in the various software quality roles know well that:

People are very good at

·               Finding bugs

·               Working around issues and exploring

·               Perceiving and judging quality

·               Finding and characterizing bugs

But, they’re poor at

·               Quickly and reliably repeating steps many times

·               Making precise or accurate measurements

·               Keeping track of or recording details

Computers driving the product (system under test, or SUT) are very good at

·               Quickly and reliably repeating steps many times

·               Keeping track of and recording details

·               Making precise and accurate measurements

But, computers are poor at

·               Perceiving and judging quality

·               Working around issues or exploring

·               Finding or characterizing bugs

There’s a big divergence between what people are good at and what computers are good at. This is just one set of reasons that “test automation” doesn’t work: most of what people do can’t be automated. Besides, the term “automation” as it comes from industrial automation (building things faster, more accurately, and with fewer people), automation in an airplane (handling and presenting information, and assisting the pilots in flying the plane), or automation in an operations department at a company (allocating network resources through scripts) does not apply to software quality. Automation is about output, and if a software product is under development, we generally don’t care about the product output, aside from quality measurements. The “automation” focus of making or doing things does not apply to software quality generally.

There is one thing that can be automated, however: verifications of the software product’s behavior. Given the above lists of what computers do well, if we program a computer to do a quality measurement, we’re limited to what eventually amounts to a Boolean quantity: pass or fail.
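
For example (a trivial sketch; the product call and the values are invented placeholders), however much data is recorded along the way, the automated quality measurement itself reduces to pass or fail:

# Sketch: an automated verification ultimately reduces to pass or fail.
# get_order_total() is a placeholder for driving the real product.
def get_order_total(sku, quantity):
    return 2 * 19.99                # placeholder for real product behavior

def verify_order_total():
    expected = 39.98
    actual = get_order_total("ABC-123", quantity=2)
    return abs(actual - expected) < 0.005    # the Boolean measurement

print("pass" if verify_order_total() else "fail")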

Note that this is different from using a tool or tools to measure the product so that a person makes the quality decision. People are smart (see above), and they have observational and emotional powers that computers do not. That’s not “automated testing,” either, because all you’re doing is applying a tool (in addition to the software product itself) to make a quality decision. Using a tool could be “automation” in the true meaning of the word (i.e., producing things, or modifying or presenting information), but by itself, applying that tool has nothing to do with quality. What the person does with the tool might be related to quality, though, depending on the person’s role and purpose.

I describe the new paradigm like this:

1.       Characterizing software product quality is a vast and open-ended pursuit.

2.       The observational, emotional, and adaptive powers of people are indispensable to software quality. (I call this “manual test,” for lack of a better term, to emphasize that it’s a person making quality decisions from product behavior.)

3.       The only part of “automation” that honestly applies to software quality is automated verifications.

4.       Manual test and automated verifications are powerful and important in very different ways.

5.       Recognizing the truth of #4 above opens the team up to vast increases in quality measurement efficiency, communication, reporting and analysis.

My last point (#5) is the reason I invented MetaAutomation.

Suppose the quality assurance (QA) team has a Rodney Dangerfield problem (no respect!). MetaAutomation can get them the respect they deserve by improving speed, quality, and transparency: showing what the software product is doing, exactly what is being measured, and what is not. Their achievements will be visible across the whole software team, and the whole team will be grateful.

Suppose the accounting department is preparing for a Sarbanes-Oxley (SOX) audit of the company. They want to know about the value of software developed (or, under development) in the company: what works? How reliably does it work? How fast is it? MetaAutomation answers that need, too, with unprecedented precision and accuracy.

But MetaAutomation requires point #4 above; it requires people to recognize that automated verifications for quality are very different from manual (or human-decided) testing.

Once you accept that automated verifications have a special difference and a distinct potential, you open your team to new horizons in productivity and value.