Thursday, August 30, 2012

The MetaAutomation Meme


The word “Meme” was coined by British evolutionary biologist Richard Dawkins to describe the spread of ideas and cultural phenomena, including cultural patterns and technologies.

Metaautomation describes a set of techniques and technologies that enable a view of software quality both deeper and broader than is possible with traditional software automation alone. Given sufficient investment, it can be taken further, to smart automated test retries and even to automated triage and pattern detection that would not be possible with traditional techniques.

For the more advanced metaautomation concepts, the investment and risk are greater, but the potential reward in terms of team productivity is much greater still. So, I’m dividing the meme into two parts:

·         First-order metaautomation: making test failures actionable, and minimizing the chances that a debugging session is necessary to find out what happened (a minimal sketch follows this list)

·         Second-order metaautomation: creating some degree of automated triage, automated failure resolution, and automated smart test retry
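To make the first-order idea concrete, here is a minimal sketch in Python; the StepRecorder class and its methods are hypothetical names, not part of any existing library. Each step of a test is recorded as structured data, so that when a check fails, the run’s output already says what was attempted, with what data, and what was observed.

```python
import json
import time


class StepRecorder:
    """Records each test step as structured data so that a failure is
    actionable from the run output alone (a hypothetical sketch)."""

    def __init__(self, test_name):
        self.test_name = test_name
        self.steps = []

    def step(self, description, **details):
        # Record what the test is about to do, and the data it used.
        self.steps.append({"time": time.time(),
                           "description": description,
                           "details": details})

    def check(self, description, condition, **details):
        self.step(description, outcome="pass" if condition else "FAIL", **details)
        if not condition:
            # The failure message carries the whole step history, so resolving
            # it to an action item rarely needs a debugging session.
            raise AssertionError(json.dumps({"test": self.test_name,
                                             "failed_check": description,
                                             "steps": self.steps}, indent=2))


# Usage sketch: a test that leaves an actionable trail behind it.
def test_checkout_total():
    recorder = StepRecorder("test_checkout_total")
    recorder.step("add item to cart", sku="ABC-123", quantity=2)
    observed_total = 2 * 9.99  # stand-in for the product behavior under test
    recorder.check("cart total is correct",
                   abs(observed_total - 19.98) < 0.01,
                   observed=observed_total)
```

The point is not this particular helper, but that the failure itself carries enough context to be actionable without reproducing the run under a debugger.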

 

Metaautomation is an innovation analogous to the simple structural arch: before arches, the span and strength of bridges were limited by the tensile strength (the resistance to bending) of the spanning material. A famous example of this technology is the North Bridge in Concord, Massachusetts.


But with arches, the span and strength are limited by the compressive strength of the material used. This works well with another common building material, stone, so the technology allows much more impressive and long-lasting results, for example the Alcantara Bridge in Spain.


The techniques of metaautomation did not originate with me, but in defining the term and establishing a meme for the pattern, I hope to make such techniques easier to understand and communicate, easier to weigh in terms of cost and benefit to the software process, and therefore more common.

The first order of metaautomation will become very commonly used as its value is more widely understood. The second order of metaautomation is a good fit for large, distributed, and long-lived projects, or for projects where the data has high impact, e.g. health care or aviation systems.

Wednesday, August 29, 2012

Managing MetaAutomation


“If you can’t measure it, you can’t manage it.”

This quote has been attributed to Peter Drucker, Andy Grove, Robert Kaplan, and who knows who else. Oh, and me. I said it, so put me down on the list too.

The common measurement of automation is the number of test cases automated. Since what management measures is what management gets, one result of this practice can be an antipattern:


a product scenario is exercised, probably to completion, but confidence in that completion can be elusive. In case of any kind of failure, a very significant investment is required of the test developers to follow up and resolve the failure to an action item. Because that follow-up is not what is being measured, team members tend to procrastinate on resolving the failure, the behaviors addressed by the failing automated tests get ignored for a time, and project risk grows because the product quality measurement provided by test automation is disrupted.

How does one encourage the behaviors that produce robust automation with strong, scalable value for measuring and regression-testing product quality, and measure team members’ behaviors in a positive way at the same time? I’m talking about metaautomation, of course, and about how to encourage progress toward metaautomation in the team’s output. Here are some thoughts on useful performance metrics toward that end.

Some goals for your team:

·         advance the effectiveness of test automation to achieve quick and effective regression detection

·         achieve quicker and more accurate triage to keep needless work off people’s plates

·         reduce wasted time for everybody on poorly-defined failures

(that is first-order metaautomation, the topic of a future post)

… and beyond that, where a deeper investment in quality is warranted, look forward to

·         smart automated test retry

·         some degree of automated triage

(this is the second order of metaautomation, also to be covered in more detail in a future post; a minimal sketch follows)
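To give a rough idea of what the second order can look like in practice, here is a minimal sketch, assuming the simplest possible automated triage: failure text is classified as either “transient” (environmental, worth an automatic retry) or “product” (routed to a person). The pattern list, function names, and retry policy below are illustrative assumptions, not a prescribed design.

```python
import re

# Hypothetical signatures of environmental/infrastructure failures that are
# worth an automatic retry; anything else is treated as a product failure.
TRANSIENT_PATTERNS = [
    re.compile(r"connection (reset|refused|timed out)", re.IGNORECASE),
    re.compile(r"deployment in progress", re.IGNORECASE),
]


def classify_failure(failure_text):
    """A tiny automated-triage step: label a failure 'transient' or 'product'."""
    for pattern in TRANSIENT_PATTERNS:
        if pattern.search(failure_text):
            return "transient"
    return "product"


def run_with_smart_retry(test_fn, max_retries=2):
    """Retry only failures that triage classifies as transient; report the rest."""
    attempts = []
    for attempt in range(1 + max_retries):
        try:
            test_fn()
            return {"result": "pass", "attempts": attempts}
        except AssertionError as failure:
            label = classify_failure(str(failure))
            attempts.append({"attempt": attempt + 1,
                             "label": label,
                             "failure": str(failure)})
            if label != "transient":
                break  # a real product failure: stop retrying and report it
    return {"result": "fail", "attempts": attempts}
```

The record of attempts, with each failure already labeled, is the beginning of automated triage: a person only needs to look at the failures the automation could not explain away.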

I think improving team spirit and cohesion, and improving technical learning in your individual contributors, can be achieved at the same time. To get there, measurement of performance in these areas must be combined with the other management metrics used for assessing individual performance.

Metaautomation-friendly practices accelerate the rate of test automation over the course of the project, as classes, coding patterns, and other supporting structures are put into place. For example, given two projects, one doing simple minimal automation (call it project A) and the other doing metaautomation to the first order (project B): project A will start out faster, but over time it will suffer from failed tests that are either neglected, causing blind spots in software quality, or that take significant investment to get working again. Project B will eventually overtake project A in the rate of successfully running automation, and probably eventually in the raw number of tests automated. In project B, the quality value of running tests is much greater because the team won’t perceive test failures as time-sucking noise. I covered this topic pretty well in previous posts. All team members need to understand this foundational concept.

So, how do we make metaautomation qualities (in performance of test team members) measurable at test automation time?

First, you can bring the team up to agreed-on code standards. Most projects have preexisting code, so defining and implementing the standards is probably going to be an iterative process.

This can also be a team-strengthening collaborative process. For a large project, have everybody read the existing code standards (if they exist) and propose additions or changes, offline, to save time. At a minimum, everyone will learn the code standards; better still, they gain some ownership of improving the standards through an email thread or wiki. This shouldn’t take a lot of time, and it’s a great opportunity for team members to learn team practices and show their ability to contribute to the team, while learning how to write more effective, readable, maintainable, metaautomation-friendly code themselves. In Test, this allows them to feel more ownership than testers normally have AND emphasizes team contribution and learning.
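One way to keep an agreed-on standard from fading into memory is to encode it as an automated check. The sketch below assumes a hypothetical team rule that test code should not use bare assert statements, on the theory that checks go through a shared step-recording helper instead; both the rule and the script are illustrative assumptions, not part of any established tool.

```python
import ast
import sys


def bare_asserts(path):
    """Return the line numbers of bare `assert` statements in a Python test
    file, per the (hypothetical) team standard that checks record their steps."""
    with open(path, encoding="utf-8") as source:
        tree = ast.parse(source.read(), filename=path)
    return [node.lineno for node in ast.walk(tree) if isinstance(node, ast.Assert)]


if __name__ == "__main__":
    # Usage sketch: python check_standards.py test_checkout.py test_login.py
    violations = False
    for test_file in sys.argv[1:]:
        for line in bare_asserts(test_file):
            print(f"{test_file}:{line}: bare assert; use the shared step recorder")
            violations = True
    sys.exit(1 if violations else 0)
```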

Peer code reviews are an even better opportunity for team members to communicate, learn from and influence each other with respect to these coding practices and standards. Just as it’s important for testers to learn the whole project, they benefit from learning the whole team as well, and I advocate that everybody get chances to review others’ work as an optional or required reviewer. This is another opportunity to bring out team players, bring the team together, and give introverts opportunities to reach out with two-way communication and learning. Testers should be encouraged to push for testability in the product code, and qualities of metaautomation – per the earlier team agreement – in test code. Suggestions must be followed up on, not necessarily in the code itself, but it’s important for everybody on the team to recognize that they are all learning and teaching at the same time. No cowboy code allowed!

For example, when discussing a topic about which developer Foo is much more knowledgeable than developer Bar, Foo is expected to provide some educational assist to Bar, e.g. a link and some context. Foo and Bar will both benefit from a respectful transfer of information: Foo from the greater understanding that comes through the teaching process (however minimal), Bar from the learning, and both of them from team cohesion.

See what testers can come up with for techniques to improve visibility into the root cause of any one failure. In other words, if a test fails for some specific reason, is it easy to find the root cause just by inspecting the output, i.e. the artifacts of the failed test case run?
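One such technique, sketched here as a hypothetical example (the directory layout and field names are assumptions, not a required format), is to give every failed run its own artifact folder containing the structured step history and the stack trace, so the root cause can usually be read directly from the output of the run.

```python
import json
import pathlib
import traceback
from datetime import datetime, timezone


def save_failure_artifacts(test_name, error, steps, root="test-artifacts"):
    """Write the artifacts of a failed test run to a folder named for the test
    and the time of the run. Layout and field names are illustrative only."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    run_dir = pathlib.Path(root) / f"{test_name}-{stamp}"
    run_dir.mkdir(parents=True, exist_ok=True)

    # The structured step history: what the test did, with what data.
    (run_dir / "failure.json").write_text(
        json.dumps({"test": test_name, "error": repr(error), "steps": steps},
                   indent=2),
        encoding="utf-8")

    # The raw stack trace, for cases where code inspection is still needed.
    (run_dir / "stack.txt").write_text(
        "".join(traceback.format_exception(type(error), error, error.__traceback__)),
        encoding="utf-8")
    return run_dir
```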

Encouraging everybody to communicate with each other in terms of the code will accelerate learning and improvement all around, and if done right, will improve team cohesion as well. It will also bring out the value of the individual contributors as team players, and since team members will all figure out that this is one thing that management is noticing, they’ll do their best to help each other out and not default to isolation.

I think this is a great opportunity for positive reinforcement from the test lead or manager: not singling out an individual for praise, which can have negative effects on morale, but rather noting and raising the visibility of ways in which the team can achieve things through teamwork that none of the individuals on the team could achieve alone. Positive reinforcement is appropriate here because the encouraged behaviors are associated with learning, collaboration, and innovation.

Here are summary steps to strengthen your team using principles of metaautomation:

1.      Establish that the pro-metaautomation behaviors described here are expected

2.      Encourage and give positive reinforcement at a team level

3.      Make measurements of contributions and integrate these measurements with other metrics and expectations used in evaluating performance (one small sketch of such a measurement follows this list)
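As one illustration of step 3, hedged as a sketch only (the record fields are assumptions), a team could track the fraction of automation failures that were resolved to an action item from the run artifacts alone, without a debugging session:

```python
def actionability_rate(failure_records):
    """Fraction of failures resolved to an action item from artifacts alone,
    i.e. without a debugging session. Field names are illustrative assumptions."""
    if not failure_records:
        return None
    actionable = sum(1 for record in failure_records
                     if record.get("resolved_from_artifacts", False))
    return actionable / len(failure_records)


# Usage sketch
records = [
    {"test": "test_checkout_total", "resolved_from_artifacts": True},
    {"test": "test_login", "resolved_from_artifacts": False},
]
print(f"Failures actionable without debugging: {actionability_rate(records):.0%}")
```

Tracked over time and alongside the other metrics already in use, a number like this rewards exactly the behaviors that make automation failures cheap to act on.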

Using these as a guide, you can make metaautomation manageable, and lead your team to new strengths in promoting a quality software product.