Thursday, August 30, 2012

The MetaAutomation Meme


The word “Meme” was coined by British evolutionary biologist Richard Dawkins to describe the spread of ideas and cultural phenomena, including cultural patterns and technologies.

Metaautomation describes a set of techniques and technologies that enable a view of software quality that is both deeper and broader than is possible with traditional test automation alone. Given sufficient investment, it can be taken further, to smart automated test retries and even automated triage and pattern detection that would not be possible with traditional techniques.

For the more advanced metaautomation concepts, the investment and risk are greater, but the potential reward in terms of team productivity is much greater. So, I’m dividing the meme into two parts:

·         First-order metaautomation: making test failures actionable, and minimizing the chances that a debugging session is necessary to find out what happened

·         Second-order metaautomation: creating some degree of automated triage, automated failure resolution, and automated smart test retry

 

Metaautomation is an innovation analogous to the simple structural arch: before arches, the span and strength of bridges were limited by the tensile strength of the spanning material (a loaded beam bends, putting its underside in tension). A famous example of this older technology is North Bridge in Concord, Massachusetts.


But with arches, the span and strength are limited by the compressive strength of the material used. This works well with another common building material – stone – so the technology allows much more impressive and long-lasting results, for example the Alcantara Bridge in Spain.


The techniques of metaautomation did not originate with me, but by defining the term and establishing a meme for the pattern, I hope to make such techniques more understandable and easier to communicate, easier to estimate in costs and benefits for the software process, and therefore more common.

The first order of metaautomation will become very commonly used as the value is more widely understood. The second order of metaautomation is good for large, distributed and long-lived projects, or where data has high impact e.g. health care or aviation systems.

Wednesday, August 29, 2012

Managing MetaAutomation


“If you can’t measure it, you can’t manage it.”

This quote has been attributed to Peter Drucker, Andy Grove, Robert Kaplan, and who knows who else. Oh, and me. I said it, so put me down on the list too.

The common measurement of automation is the number of test cases automated. Since what management measures is what management gets, one result of this practice can be an antipattern:


a product scenario is exercised, probably to completion, but confidence about that completion can be elusive. In case of any kind of failure, a very significant investment is required of the test developers to follow up and resolve the failure to an action item. Because that follow-up is not what’s being measured, team members procrastinate on resolving the failure, and the behaviors addressed by the failing automated tests get ignored for a time. This in turn creates project risk, because the product quality measurement provided by test automation is disrupted.

How does one encourage the correct behaviors to get robust automation with strong, scalable value towards measuring and regressing product quality – and positively measure the team members’ behaviors, too? I’m talking about metaautomation, of course, and how to encourage progress towards metaautomation in the team’s output. Here are some thoughts on useful performance metrics towards that end.

Some goals for your team:

·         advance the effectiveness of test automation to achieve quick and effective regression detection

·         achieve quicker and more accurate triage to keep needless work off people’s plates

·         reduce wasted time for everybody on poorly-defined failures

(that is first order metaautomation, the topic of a future post)

… and beyond that, where a deeper investment in quality is warranted, look forward to

·         smart automated test retry

·         some degree of automated triage

(this is the second order of metaautomation, to be covered in more detail, also in a future post)
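To make "smart automated test retry" concrete, here is a minimal sketch of the idea, written in Python for brevity (in practice, per later posts, you would write this in the product's own language). The names `FailureKind`, `run_with_smart_retry`, and `classify` are hypothetical, not part of any real framework: the point is that only failures classified as environmental are retried, while product failures surface immediately and stay actionable.

```python
# Hypothetical sketch of second-order metaautomation: smart test retry.
# Retry only environmental failures (timeouts, dependencies down); never
# mask a product regression by retrying it into a pass.
from enum import Enum

class FailureKind(Enum):
    NONE = "none"                 # test passed
    ENVIRONMENT = "environment"   # timeout, dependency down: worth a retry
    PRODUCT = "product"           # product regression: fail fast, file a bug

def run_with_smart_retry(run_test, classify, max_retries=2):
    """Run the test, classify each outcome, and retry only when the
    failure is environmental. Returns a verdict plus the attempt log."""
    attempts = []
    for attempt in range(1 + max_retries):
        result = run_test()
        kind = classify(result)
        attempts.append((attempt, kind))
        if kind is FailureKind.NONE:
            return ("pass", attempts)
        if kind is FailureKind.PRODUCT:
            return ("fail: product issue", attempts)
        # FailureKind.ENVIRONMENT: loop and retry
    return ("fail: environment (retries exhausted)", attempts)
```

The attempt log is itself a triage artifact: a test that passes only on retry is flagged as flaky rather than silently green.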

I think you can improve team spirit and cohesion, and the technical learning of your individual contributors, at the same time. To get there, measurement of performance in these areas must be combined with the other management metrics used for assessing individual performance.

Metaautomation-friendly practices accelerate the test automation rate during the automation project as classes, coding patterns and other structures are put into place. For example: Given two projects, one doing simple minimal automation (call it project A) and the other doing metaautomation to the first order (project B), project A will start out faster but will suffer over time from failed tests that are either neglected, causing blind spots in software quality, or failed tests that take significant investment to get them working again. Project B will eventually overtake project A in rate of successfully running automation, and probably eventually in raw numbers of tests automated. In project B, the quality value of running tests is much greater because the test failures won’t be perceived by the team as time-sucking noise. I covered this topic pretty well in previous posts. All team members need to understand this foundational concept.

So, how do we make metaautomation qualities (in performance of test team members) measurable at test automation time?

First, you can bring the team up to agreed-on code standards. Most projects have preexisting code, so defining and implementing the standards is probably going to be an iterative process.

This can also be a team-strengthening collaborative process. For a large project, have everybody read the existing code standards (if they exist) and propose additions or changes - offline, to save time. Minimally, everyone will learn the code standards, but better still, they gain some ownership in improving the standards, through an email thread or wiki. This shouldn’t take a lot of time, and is a great opportunity for team members to learn team practices and show their ability to contribute to the team while learning how to write more effective, readable, maintainable, metaautomation-friendly code themselves. In Test, this allows them to feel more ownership than testers normally have AND emphasizes team contribution and learning.

Peer code reviews are an even better opportunity for team members to communicate, learn from and influence each other with respect to these coding practices and standards. Just as it’s important for testers to learn the whole project, they benefit from learning the whole team as well, and I advocate that everybody get chances to review others’ work as an optional or required reviewer. This is another opportunity to bring out team players, bring the team together, and give introverts opportunities to reach out with two-way communication and learning. Testers should be encouraged to push for testability in the product code, and qualities of metaautomation – per the earlier team agreement – in test code. Suggestions must be followed up on, not necessarily in the code itself, but it’s important for everybody on the team to recognize that they are all learning and teaching at the same time. No cowboy code allowed!

For example: when discussing a topic for which developer Foo is much more knowledgeable than developer Bar, developer Foo is expected to provide some educational assist to Bar, e.g. a link and some context. Foo and Bar will both benefit from a respectful transfer of information: Foo from the greater understanding that comes through the teaching process (however minimal), Bar from the learning, and both of them from team cohesion.

See what testers can come up with to improve visibility into the root cause of any one failure: if a test fails due to some specific issue, is it easy to find the root cause just by inspecting the output, i.e. the artifacts of the failed test case run?
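One way to make that question concrete is to record a structured artifact for every step of a test run, capturing the step name, outcome, error, and stack at the moment of failure. The sketch below is illustrative only (Python for brevity, and `run_step`, `submit_order`, and the simulated `ValueError` are all hypothetical names, not from any real harness):

```python
# Hypothetical sketch of first-order metaautomation: capture enough
# context at failure time that root cause is visible from the artifact
# alone, without a debugging session.
import json
import time
import traceback

def run_step(artifact, name, action):
    """Run one test step; on failure, record the step name, exception,
    and stack into the artifact before re-raising."""
    step = {"step": name, "start": time.time()}
    try:
        action()
        step["outcome"] = "pass"
    except Exception as exc:
        step["outcome"] = "fail"
        step["error"] = repr(exc)
        step["stack"] = traceback.format_exc()
        raise
    finally:
        artifact["steps"].append(step)

def connect():
    pass  # stand-in for a real setup step

def submit_order():
    raise ValueError("price missing")  # simulated product failure

artifact = {"test": "PlaceOrder", "steps": []}
try:
    run_step(artifact, "connect to service", connect)
    run_step(artifact, "submit order", submit_order)
except ValueError:
    pass  # the harness would record the run as failed here

print(json.dumps(artifact, indent=2))
```

Reading the printed artifact tells you which step failed, with what error, and where, which is exactly the "actionable without a debugger" quality this post is after.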

Encouraging everybody to communicate with each other in terms of the code will accelerate learning and improvement all around, and if done right, will improve team cohesion as well. It will also bring out the value of the individual contributors as team players, and since team members will all figure out that this is one thing that management is noticing, they’ll do their best to help each other out and not default to isolation.

I think this is a great opportunity for positive reinforcement from the test lead or manager; not singling out an individual for praise, which can have negative effects on morale, but rather noting and raising the visibility of ways in which the team can achieve things through teamwork, that none of the individuals on the team could achieve. Positive reinforcement is appropriate here because the encouraged behaviors are associated with learning, collaboration, and innovation.

Here are summary steps to strengthen your team using principles of metaautomation:

1.      Establish that the pro-metaautomation behaviors described here are expected

2.      Encourage and give positive reinforcement at a team level

3.      Make measurements of contributions and integrate these measurements with other metrics and expectations used in evaluating performance

Using these as a guide, you can make metaautomation manageable, and lead your team to new strengths in promoting a quality software product.

Thursday, July 19, 2012

The Legacy of Stephen R. Covey: testing with Character, not Personality


Stephen R. Covey, author of the hugely successful book "The 7 Habits of Highly Effective People," died July 16th 2012.

I picked up his book because I realized that I need to nurture my own leadership skills in order to promote product quality at the next level. Working as an individual contributor doesn't cut it anymore; I need to influence the big picture, not patch up automation projects that are already established according to the patterns of software quality as it's generally practiced today.

Serendipity! There it is, in the first full chapter of his book. Covey pithily describes his journey of discovery through American self-help literature as a tension between two approaches toward being personally effective: The "Character Ethic" and the "Personality Ethic."

The Character Ethic is deep and foundational, but not always obvious on the surface behaviors of a person. The Personality Ethic displays on the surface, but doesn’t necessarily have any depth.

Covey grants that the Personality Ethic approach to increasing personal effectiveness can be beneficial in some settings, but dismisses it as weak in the long term and perhaps even damaging, whereas the Character Ethic flows from a person's basic values and practices.

He describes his own paradigm shift, triggered by a parenting challenge with one of his sons. Covey used the personality ethic as taught in his day to guide his parenting style, but he transitioned to the character ethic (as described by Benjamin Franklin) with good effect. The Character Ethic requires taking responsibility, and Covey shows that his paradigm shift is complete when he takes responsibility for his actions regarding his son.

I've seen this happen on many software teams: people do software quality by emphasizing a basic automation of the product. The result is N test cases automated, and M of them pass. What happens with the N-M automated test cases that do not pass is not considered important, except that the team recognizes the need to get these test cases running again at some point. The highest priority - I've seen this priority placed even higher than fixing "broken" test cases - is automating more test cases. The count of test cases automated is a superficial measure of success and productivity in measuring the quality of the product, but it's what teams are measured on. Management needs a metric, and this is the best or most usable one they know of.

Covey's shift from personality ethic to character ethic is a pretty good analogy to the shift away from the automate-as-many-test-cases-as-you-can focus, towards the deep automation that really tests the product, and attempts to maximize the actionability of test failures. (The first approach might use a minimal positive-result verification like this: the automated test didn't throw an exception, therefore it's good.)

Focusing on automated test count – that is, the traditional way of automating the product – has value in the near term, because it really does make the product do stuff, and you can find a lot of product bugs through the process of creating automation. But in the longer term with this approach, the inevitable failures of these tests are costly to follow up on, and the team often gets an inaccurate picture of how completely the quality of the product has been measured, because the automated test might hide failures or even be testing the wrong thing. I've even seen a very expensive 3rd-party test harness report success on a test when, on detailed follow-up, I found that the test didn't do anything meaningful.
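The "didn't throw an exception, therefore it's good" verification mentioned above can be shown in a few lines. This is an illustrative sketch (Python for brevity; `discount` is a made-up stand-in for real business logic):

```python
# The business logic under test (a hypothetical example).
def discount(price, percent):
    return price - price * percent / 100

# Antipattern: the test "passes" as long as nothing throws.
# Nothing is verified; this passes even if the math is wrong.
def weak_test():
    discount(100, 10)
    return "pass"

# Better: explicitly verify the result against the expected value.
# This is a real check, and its failure message is actionable.
def strong_test():
    result = discount(100, 10)
    assert result == 90, f"expected 90, got {result}"
    return "pass"
```

Both tests report "pass" today, but only the second one would catch a regression in `discount`, which is the whole difference between exercising the product and measuring its quality.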

Test automation with attention to metaautomation takes some up-front test harness design and more careful test automation with attention to patterns of failure reporting, but in the longer run a better and more complete measurement of product quality happens; test failures due to product issues are more actionable and are likely to be followed up on quickly. Trust in test automation code is higher, which enables the team to be more productive. Most importantly: failures due to regressions in the product are fixed more quickly, which keeps quality moving forward and reduces risk associated with product changes.

A test lead might ask these questions of someone automating a test: In how many different ways might this automated test fail, and what happens when it does fail?

Covey's character ethic, applied to software quality, makes for stronger quality and stronger product character. 

Here is a related post on the need for test code quality: http://metaautomation.blogspot.com/2011/09/if-product-quality-is-important-test.html 

Wednesday, February 1, 2012

The Risk of Agile, and How to Mitigate

The agile software development process is all the rage these days, and for good reason: it manages the complexity inherent in software projects, minimizes WIP (work in progress), and delivers a bunch of other great values outlined here


I’d like to address the principles individually in a following post, but for now, I’ve experienced some real risks that follow directly from the agile process as commonly conceived and realized. I will address some of those risks here.

I’ve seen this in multiple workplaces: the project has been divided into close-knit teams, and the teams work in sprints of e.g. two weeks.  The sprint goals and costs are agreed on at the beginning of each sprint. Each team member takes on the sprint goals as they have bandwidth, and at the end of the sprint, the team does some simple demo to show the team’s work.

The general problem is, although these teams are ultimately highly interdependent - they all fit together to make the product! – they each end up developing local solutions to common problems.

Class types, XSD schemas, logging solutions, configuration files – either these are ultimately shared or they should be. If the former, software development risk is pushed back to the end of the cycle when everybody has to figure out how to integrate and work together. If the latter, there’s duplicated work with resulting cost and risk. (Such risk could show up late in the cycle: “I thought you added the streaming log option!” “I did, but over here. You guys developed your own logger, so you’re on your own.”)

The solution: Share resources early. Sorry, agilists, this requires a bit of planning. Decide early on what the shared resources are, and be aware of when the need might arise for a resource that might be shared. Put shared resources in a common location…
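Taking the logging example from above: a shared resource can be as simple as one agreed-on factory function that every team imports, instead of each sprint team rolling its own logger. This is a minimal sketch (Python for illustration; the function name and format string are assumptions, not a prescription):

```python
# Hypothetical shared resource, owned in one common location: every
# team gets its logger from here, so format and options stay consistent.
import io
import logging

def get_team_logger(component, stream=None):
    """Return the one logger for a component, configured once with the
    team-wide format. Repeat calls return the same configured logger."""
    logger = logging.getLogger(component)
    if not logger.handlers:  # configure only on first request
        handler = logging.StreamHandler(stream)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# Two teams, one logging solution: same format, same options.
buffer = io.StringIO()
payments_log = get_team_logger("payments", buffer)
payments_log.info("order placed")
```

When the "streaming log option" lands in this one module, every team gets it, and the "you developed your own logger, so you're on your own" conversation never happens.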

I’ve seen cases where this is not done, and it’s very expensive. Agile dislikes planning, but without a plan, on a project that involves engineers in the double digits, people eventually don’t know what’s going on, so they must guess, and often guess incorrectly. Confused and frustrated engineers can create an amusing setting (for those with a sense of humor about it), but it doesn’t ship quality software on schedule.

And remember: Duplication of information or resources is anathema to quality!

Tuesday, January 31, 2012

How to Ship Software Faster

Software is risky because it’s so difficult. For those outside the business, it’s hard to comprehend how difficult it is – computers are our information slaves, right? Tell them what to do, and they do it. Yes, and although the tools are getting better with time, computers are profoundly stupid. They will very rapidly, decisively, and often irreversibly do the dumb thing if you don’t very carefully tell them not to, and this in turn takes immense attention to detail.
It takes human intelligence to make computers solve a business problem, and humans make mistakes: they overlook stuff, make typos, make assumptions that aren’t always correct, and run into trouble communicating to others.

If a “bug” is an actionable item about the quality of the software, then people who tell computers what to do create bugs constantly, mostly un-characterized and un-measured bugs that will likely haunt the software project at some point in the future.

Teams need testers on the product early on: people who measure the quality of the product in a repeatable way and can report on issues found, and people who focus on the quality of the product rather than on getting the product to do stuff.

Software development is so difficult, it’s become a highly specialized and focused skill. Developers are focused on getting the product to work and meeting deadlines, because they have to be. Asking devs to focus on quality is like asking a race car driver in a 1911 Grand Prix auto race to drive without a riding mechanic. (Keeping a race car of that era running well took a lot of attention to the fuel mix etc.)

If you’re writing a phone game or a perpetual “beta” app, you can do like Google and outsource quality to the end-users, but if you’re doing something other than selling advertising, that won’t work. See this post:  http://metaautomation.blogspot.com/2011/10/no-you-cant-outsource-quality-detour.html.

The quality of what a company ships is very important to a company’s reputation and goodwill. Judging from the typical software product lifetime, that reputation is on the line for 5-10 years, meaning that quality issues have very significant impact on a business.

The solution is simple. To manage risk and protect company value, it’s a good investment to have software testers on board. Manual testers are important but not sufficient by themselves due to time and human error; a project needs regular, frequent, and highly repeatable measures of quality with highly actionable artifacts… you guessed it, once again I’m linking back to here to accentuate the increased value that non-GUI automation can bring to a company. http://metaautomation.blogspot.com/2011/10/automate-business-logic-first.html.

If the goal is to ship quality software on time, a good software quality person helps by providing measures of quality and risk for components of the product and the product as a whole, minimizing schedule risk by finding quality issues early or heading them off with good bug reports, and creating better data on the work backlog, e.g. necessary refactorings and bug fixes. Smart testers can find business-logic issues, characterize them, and even suggest fixes. This saves the devs time and helps them stay focused on the larger picture of implementing the product, which in turn helps ship on time.

Software quality has tremendous value to an organization. Companies can’t afford to short themselves here.

Monday, October 17, 2011

Automate Business Logic First

At PNSQC 2011 last week, I met some very interesting and smart characters. One of them was Douglas Hoffman, current president of the Association for Software Testing (some links: http://www.softwarequalitymethods.com/).

One of Douglas’ observations is that API-level E2E testing is 10 times as fast as graphical user-interface (GUI) level testing. He knows this from a very impressive array of industry experience.

Alan Page writes: “For 95% of all software applications, automating the GUI is a waste of time.” http://blogs.msdn.com//b/alanpa/archive/2008/09/18/gui-schmooey.aspx

I agree. For a software product in production or a project under development, assuming that there is some degree of separation between the GUI and the business logic it depends on, it’s quicker and more effective to automate the business logic, not the GUI. Some would call this automating the application programming interface (API).

I currently have the privilege of working on a team that does this right: the GUI layer is thin in terms of logic. The business happens below the API.
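To show what automating below the GUI looks like, here is a minimal sketch. It's illustrative only (Python for brevity, and `Cart` is a made-up stand-in for a real product's business layer behind its API):

```python
# Hypothetical business layer: what the thin GUI calls into.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, sku, price, qty=1):
        if qty < 1:
            raise ValueError("quantity must be positive")
        self._items.append((sku, price, qty))

    def total(self):
        return sum(price * qty for _sku, price, qty in self._items)

# API-level test: no browser, no display, no GUI timing issues.
# It exercises the same logic the GUI would, directly and repeatably.
def test_cart_total():
    cart = Cart()
    cart.add("widget", 2.50, 4)
    cart.add("gadget", 10.00)
    assert cart.total() == 20.00
```

A GUI redesign leaves this test untouched, which is exactly why it's cheaper to maintain than a test that drives the screen.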

Here are some things that make automation that includes the GUI expensive:

·         GUI design can change often, because it’s what is displayed to end users. GUI is complex and laden with personal values. Whenever there’s a change in the GUI, any automation that depends on it must be fixed.

·         GUIs must be localized, and this usually means much more than just changing the displayed strings, introducing an additional level of instability to GUI automation.

·         GUI automation is rife with race conditions due to the nature of display.

·         Brian Marick: “Graphical user interfaces are notoriously unstable.” http://www.stickyminds.com/getfile.asp?ot=XML&id=2010&fn=XDD2010filelistfilename1%2Edoc

·         Automating the GUI takes many more dependencies on the product than automating the API, because the GUI is much more complex than the API. A result of this is that the GUI automation is much less atomic than API automation (see an earlier post http://metaautomation.blogspot.com/2011/09/atomic-tests.html), therefore riskier.

·         GUI automation requires an entire display mechanism, at least in software, even if the window is hidden. This involves significant overhead.

I’ve never seen stable GUI automation. What I’ve seen instead is that it’s expensive to keep going with GUI automation, if the team cares about keeping it running.

I’ve seen this many times: automation which includes the GUI fails, and it’s understood – usually correctly, but not always – that it’s just a GUI problem causing instability. This can have the effect of hiding deeper product problems, which can introduce a lot of risk to a project.

Here are some reasons to automate the business logic instead:

·         API automation is simpler and more transparent to write

·         API automation is much more stable

·         API automation runs 10 times as fast as GUI automation (from Douglas Hoffman again) (although, in my experience the difference is even larger)

·         There is no display overhead

·         With API automation, the compiler is your friend (assuming you’re using a strongly-typed language, which you should be. See this post http://metaautomation.blogspot.com/2011/10/patterns-and-antipatterns-in-automation_03.html)

·         Failures in API automation are much more likely to be actionable and important …

This last point is huge: if API automation fails for some reason other than a dependency failure, timeout, or race condition (e.g. as mentioned here http://metaautomation.blogspot.com/2011/09/intro-to-metaautomation.html ), then it’s due to some change in the business layers of the product, and this instantly becomes a quality issue. It might be due to some intentional design change that the developers made without first telling Test, but just as often it’s a significant quality issue that is worth filing a bug – and in that case, the team just found a quality failure essentially instantly, so it can be fixed very quickly and downstream risk is minimized. If it’s your automation that found the bug, you’re a hero!

Here’s another reason to focus on the business logic:

I’ve heard it said that it’s better to automate the GUI, because then you’re automating the whole product. At a shallow level, this has the ring of truth, but consider this perspective instead: suppose your team focuses on automating business logic, and therefore gets much more quality information, quicker regression, better coverage etc. Then, a GUI bug is found, and this bug is found later in the software development life cycle (SDLC) than otherwise, but no worries: the risk of deferring a GUI bug fix is very low, because if the application is at all well-designed, none of the business logic depends on the GUI flow.

Manual test will never go away, and people are smart and able to spot all sorts of GUI issues that automation can’t catch without huge investment in automated GUI checks. Therefore, the GUI bugs are likely to be found by manual testers anyway, and they’re still relatively low risk because the rest of the product doesn’t depend on the GUI.

This is why I’m happy to focus on business logic automation.

Wednesday, October 12, 2011

No, you can't outsource quality (detour from antipatterns topic)

Due to illness and travel and the desire to put more attention into this, I'm not ready to continue the series of posts on antipatterns at the moment.

Twitter (140 characters?) and my available hardware didn't allow posting at the time, and besides, I was paying attention rather than multi-tasking, so here's my discussion two days after the fact.

It was satisfying to skewer Julian Harty in the auditorium this morning, though, if a little bit scary (... do people really believe what he's talking about?).

Harty's theme was "The Death of Testing." To be fair, I think the title and theme may have been influenced by simple business considerations of the PNSQC conference at which this took place, and they're trying to attract people who do software quality professionally to the PNSQC conference by scaring them into fearing for their jobs. If so, it worked, and attendance was high.

I want to give due to Harty's presentation skills; he's very good at engaging the audience.

The main thesis of his talk seemed sincere. He was talking about Google practices, and honestly qualified his comments by pointing out that he left Google in June of last year. (hmm, wonder how that happened...)

The idea is that "testing" in the broad sense of measuring and monitoring the overall quality of the product can be outsourced for free. Google does this with the "Give us feedback" functionality on their sites. The idea is that each of the many, many end-users of Google's products has the opportunity to tell somebody on the appropriate internal team that there's some problem, and to communicate with some individual at Google about the process of fixing it.

This works rarely, but often enough given that there are so many Google users.

Harty's thesis: this is free for Google, the quality is better because there are more eyeballs, and Google appears to respect customers and strengthen loyalty. Google has successfully outsourced quality.

... Yeah?  Copious steaming bovine excrement.

If I find a good bug this way, and go through the Google-prescribed process of getting it fixed, I could receive a cash prize of a few grand (according to Harty).

Now, suppose this is a security flaw. (There will always, always be security flaws, known or unknown.) Suppose this involves personally identifiable information (PII) i.e. most of Google functionality. Suppose I'm the first to find and characterize it. Suppose it's exploitable, e.g. I can use it to see the PII of anybody I want. Suppose I'm not the most ethical person...

I have a choice: do I report it to Google as they would like me to do, and chance getting a few grand as a reward? Or, do I report it to blackhats, and try to get a few million dollars?

Of course I'd go to the blackhats! When I do this, all users of Google are exposed to the risk of identity theft. Identity theft is the worst thing that can happen to you on the internet.

Meanwhile, Google thinks that it has successfully outsourced product quality! Great deal, huh? The stockholders love it. Conference speakers talking about latest trends LOVE it. But the end result is identity theft for large numbers of Google customers.

Outsourcing quality can't possibly work for a company in the long run.

Testing is not dead.

Monday, October 3, 2011

Patterns and antipatterns in automation (part 2 of 3-4 parts)

For a product with significant impact, a pattern is writing test automation in the same language as the product (e.g. Java) or the same framework as the product (e.g. .NET). An antipattern is scripting in some other, lighter-weight language.

I know many perceive that a scripting language, e.g. Python, Perl or JavaScript, is better suited to writing test automation because it may be quicker. But with a strongly-typed compiled language, the compiler is your friend, finding all sorts of errors early. With a decent IDE, intellisense is your friend as well, hinting at what you can do and backing that up with the compiler.

If test automation is written in a different language than the product, then testers are distanced from the product code enough that they don’t have anything to say about it; it’s not their domain. But product code is usually a good example to follow when writing good, robust, extensible test code. Today at work I filed two significant bugs against product code that I wouldn’t have been able to find if I weren’t continually current with the languages used in the product (a grammar of XML, and C#).

Another reason for having the test code in the same language (or framework) as the product is that you know that no information is lost with remote calls or exceptions rolling up the stack: the stack is the same through product and test code, and the test code can handle product exceptions if that’s called for in the test.
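The "no information is lost" point can be sketched in a few lines. This example is illustrative only (Python for brevity, though the post's argument favors the product's own compiled language; `ProductError` and `product_operation` are hypothetical names standing in for real product code):

```python
# Hypothetical product code: a custom exception type and an operation.
class ProductError(Exception):
    """Exception type defined in the (hypothetical) product code."""

def product_operation(config):
    if "endpoint" not in config:
        raise ProductError("missing required 'endpoint' setting")
    return "connected"

# Test code in the same stack: the product exception arrives intact,
# as the real product type with its full message, and the test can
# catch it and assert on it directly. No marshaling, nothing dropped.
def test_missing_endpoint_reports_cleanly():
    try:
        product_operation({})
    except ProductError as exc:
        assert "endpoint" in str(exc)
        return "pass"
    raise AssertionError("expected ProductError")
```

When the test harness lives in a different language, that `ProductError` would typically be flattened into a string or an exit code somewhere at the boundary, and the detail that makes the failure actionable is exactly what gets lost.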

The barrier to entry with C# is actually quite low; a dev can help, and testers don’t need to get fancy with lambda expressions, delegates, and partial class declarations. And if a tester does want to create robust, extensible, OO code, a powerful language is needed anyway.

I’ve seen many problems with interfacing a script language or GUI test framework with a compiled OO language: awkwardness of error handling or verification, dropped information, limited design choices…

What do you think? Do you have a favorite language or framework for test automation, that’s different than the product language?

Topic to be continued tomorrow …

Patterns and antipatterns in automation (part 1 of 2)

Patterns are common ways of solving frequent problems in software.

From the introduction to the famous “Gang of Four” patterns book “Design Patterns” (Gamma, Helm, Johnson, Vlissides): Design Patterns make it easier to reuse successful designs and architectures.

In Test, patterns are used to follow well-established best practices. Antipatterns are patterns of action “ … that may be commonly used but is ineffective and/or counterproductive in practice” http://en.wikipedia.org/wiki/Anti-pattern.

For example, one pattern of action is to automate as many test cases as quickly as possible, because that’s the performance metric that management is using, and we all want to look productive for the business. When the test cases start failing, ignore them and continue automating test cases, because that’s the metric that management is using.

This is actually an antipattern because it’s a counterproductive practice. The business value of automating the product (or parts of the product) is to exercise and regress behavior (make sure it’s still working per design) for parts of the product on a regular schedule. If that can’t be done (because the automation broke), the value of those cases goes away, or even worse: the quality team proceeds as if quality were being measured for that part of the product, when it’s not, because those relatively high-priority test cases are failing.

There’s a closely associated antipattern which I’ve addressed in practice for years, but for which I credit my friend Adam Yuret (his blog is here http://contextdrivenagility.com/) with helping me crystallize it: test cases are mostly written as if they were manual test cases, and their value comes from a human tester running through the steps and observing the results with human perception and intelligence. When such a manual test case is automated, the verifications (aka “checks”) are typically many fewer than what a human would perceive, and they might even be zero, i.e. nothing is getting verified and the value of running that automated test is also zero.

Adam maintains that what was a (manual) test case is no longer a test case, because the entire flow of the user experience (UX) is no longer being measured; whatever checks remain must be explicitly written, coded, and tested by the person doing the test automation. By default, there are none.

I disagree with the “test case is no longer a test case” judgment, but this does point out an important truth about quality practice: the value of manual testing never goes away (unless perhaps the product has no graphical user interface, aka GUI).

The antipattern here shows up in two common and related team management practices: the idea that automating a GUI test case means that a certain part of the product never has to be visited manually again, or that the practice of manual testing from the GUI (and the manual testers) can simply be automated away.

I’ll continue this post tomorrow with more patterns vs. antipatterns…

Saturday, October 1, 2011

Duplication of information is anathema to quality

I drive an electric car: a Nissan Leaf. It's fun, reliable, zippy, quiet and very comfortable, there's no stink and I never buy gas. I have few complaints, but here's one:

When I'm driving, I often want to know the time. I can look above the steering wheel, to see a digital clock, or at the navigation system, to see another digital clock. Unfortunately, these times are almost never the same! They disagree by a minute or more. So, what time is it?


The software development life cycle (SDLC) gets very complicated and involves many engineer-hours of work, plus other investments. There's a lot of information involved.

Yesterday I attended an excellent seminar by Jim Benson (personal blog here http://ourfounder.typepad.com/ ) about Personal Kanban (Jim's book of that title is e.g. here http://www.amazon.com/Personal-Kanban-Mapping-Work-Navigating/dp/1453802266/ref=as_li_ss_mfw) and it was very enjoyable, but I was bothered by the reliance on sticky notes for designing and communicating information. Those sticky notes show up across all sorts of agile methodologies. I can see the advantages: it's very social to be collaborative with your team in a room with the sticky notes, and the muscle memory and physicality of moving the stickies around helps communicate and absorb information.

But, I asked (OK, pestered, but Jim was a good sport about it) several questions along these lines: do we really need the sticky notes? There's cost and risk in relying on people to manually translate information from plans or some virtual document to the stickies on the board; then, after the meeting with the stickies or at some other time (depending on how the team does it), it all has to be carried over to the IT workspaces or documents. There are many potential points of failure, and many potential points of information duplication.

The problem with duplication of information is the same as with the two clocks in my car: they can easily get out of sync, and then which do you believe? Information can get lost too, if I reorganize the stickies thinking that someone has already done the maintenance step of writing the information to the appropriate document, when actually that hasn't happened for some reason.

I predict that in a few years, the stickies will be gone because there will be a sufficient hardware and software solution to solve all of the problems that the stickies do, without the cost and risk. Team communication around work items will be more robust and efficient. There won't be any stickies to flutter to the floor (as a few did, during Jim's talk).



Worse than the clocks in my car, and even worse than the stickies, are superfluous docs that get out of sync. By "superfluous" I mean docs that people ignore, so they get out of sync and/or out of date, and so they can cause confusion; for example, a doc that lists test cases that are also in a database. The test cases are probably going to change, there's a very good chance that the doc will get out of sync, and there's also a good chance that someone will rely on the doc when it represents incorrect information.

Better to limit a document to information that doesn't exist elsewhere, and link docs to other docs and resources (databases, source code repositories) so it's clear where to get and where to update information.


Even worse than all of the above: duplicated logic in code, or duplicated code. See posts here http://metaautomation.blogspot.com/2011/09/beware-duplicated-code.html and here http://metaautomation.blogspot.com/2011/09/test-code-quality-readability-and.html .

Use team best practices when writing, and don't be afraid to clean up unused code! Duplicated logic can haunt the team and the product quality.
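As a minimal sketch of why duplicated logic haunts a team, here's an invented example in Python (the username rule is hypothetical, purely for illustration). When the same rule is pasted into two places, a change to one copy can silently leave the other behind; extracting a shared helper keeps the logic in exactly one place.

```python
# Before: the same username rule is duplicated in two functions.
# If the rule changes, it's easy to update one copy and miss the other,
# and the two call sites silently drift out of sync.
def register_user_duplicated(name):
    if not (3 <= len(name) <= 20) or not name.isalnum():
        raise ValueError("invalid username")
    return {"user": name}

def rename_user_duplicated(account, name):
    if not (3 <= len(name) <= 20) or not name.isalnum():
        raise ValueError("invalid username")
    account["user"] = name
    return account

# After: one shared helper, so the rule lives in exactly one place.
def validate_username(name):
    if not (3 <= len(name) <= 20) or not name.isalnum():
        raise ValueError("invalid username")

def register_user(name):
    validate_username(name)
    return {"user": name}

def rename_user(account, name):
    validate_username(name)
    account["user"] = name
    return account
```

The refactored version behaves identically, but now a change to the rule is made once and takes effect everywhere, which is exactly the same out-of-sync hazard the clocks and the stickies illustrate.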

There are times when information duplication is by design and there's a robust system to keep it in sync, e.g. databases that are de-normalized or replicated. Diagrams that visually represent a coded system are OK too, so long as the diagrams are frequently used and updated by stakeholders on the team.

Beware the hazard of things getting out of sync. If the clocks disagree, they both look wrong!