Tuesday, October 27, 2015

Automated Verifications Deserve Better than Flight-Recorder Logs

This post is a follow-up to this one:
http://metaautomation.blogspot.com/2015/10/linear-check-steps-are-good-but.html

"Automated Test" is an oxymoron, because you're not automating what a manual tester could do with the script, and unlike true automation (industrial, operations or IT automation) you don't care about the product output (unless you're doing your quality measurement that way explicitly).

Automation for software quality has some value at authoring time, because writing it is a good way to find bugs, but its ongoing business value comes from the verifications it performs, most of which are coded explicitly (preliminary verifications, plus a target verification or verification cluster, per MetaAutomation).

So, call it what it is: automated verifications! That understanding opens up tremendous business value for software quality! If done right, it can help manual testers too by making their jobs more interesting and more relevant.

According to tradition, and according to the origins of "test automation" in industrial automation, the output of automating the software system under test (SUT) is flight-recorder logs and maybe a stack trace. But there's no structure there; it's just a stream of information, some of it relevant, most of it not.

It's time to leave that tradition behind! Automated verifications have well-defined beginnings and ends. They're bounded in time! Use that fact to make some structure out of the artifacts, i.e., the information that flows from an automated verification.

XML is ideal for this, especially with an XSD schema. With an XML document, there are boundaries and a hierarchical structure, and if presentation is left out (as it should be), then it's pure data: compact, strongly typed via the schema, available for robust and efficient analysis, and completely flexible for presenting however you like.
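To make that concrete, here is a minimal sketch (in Python, using xml.etree.ElementTree) of what a hierarchical check artifact could look like. The element and attribute names are illustrative only, not the MetaAutomation schema, and in practice you'd validate the document against your XSD.

    import xml.etree.ElementTree as ET

    # Build a small, presentation-free artifact for one check run.
    # "CheckRun", "Step", "result", and "ms" are hypothetical names.
    artifact = ET.Element("CheckRun", name="PlaceOrder", result="fail")
    ET.SubElement(artifact, "Step", name="Log in", result="pass", ms="212")
    order = ET.SubElement(artifact, "Step", name="Place order", result="fail", ms="3487")
    ET.SubElement(order, "Step", name="Open order page", result="pass", ms="640")
    ET.SubElement(order, "Step", name="Submit order", result="fail", ms="2847")

    # Pure data: compact, hierarchical, ready for analysis or any presentation layer.
    print(ET.tostring(artifact, encoding="unicode"))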

Even better: the least-dependent pattern of MetaAutomation, Atomic Check, shows how to write self-documenting hierarchical steps for your checks (i.e., automated verifications), which means you'll never have to choose between using your team's domain-specific language (DSL) or ubiquitous language and using the (sometimes very important) granular details of what the check is doing with your product or where and why it failed. You can do both!

With self-documenting hierarchical steps, once a given check has succeeded all the way through at least once, any later failure records every check step as "pass," "fail," or "blocked." That gives you another view of what the check was trying to do, and tells you exactly which parts of product quality are blocked or not measured due to the failure.
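As a rough illustration of the pass/fail/blocked idea, here's a minimal sketch in Python. Real check steps are hierarchical; a flat list of (name, action) pairs is used here only to keep it short, and the names are hypothetical.

    def run_steps(steps):
        """Run (name, action) pairs; after the first failure, later steps are 'blocked'."""
        results = []
        failed = False
        for name, action in steps:
            if failed:
                # Never attempted, so this part of product quality was not measured.
                results.append((name, "blocked"))
                continue
            try:
                action()
                results.append((name, "pass"))
            except Exception:
                results.append((name, "fail"))
                failed = True
        return results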

Self-documenting hierarchical steps also give you information that's detailed and precise enough to know whether a specific failure has happened before! See the Smart Retry pattern of MetaAutomation for how to do this and put it to use.
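The matching part of that idea could be as simple as keying each failure on the failed step's path in the hierarchy plus the failure type. This is a sketch only, with illustrative names; the Smart Retry pattern itself covers when and how to retry.

    known_failures = {}  # failure key -> list of run IDs where it was seen

    def failure_key(step_path, error_type):
        # e.g. (("Place order", "Submit order"), "TimeoutError")
        return (tuple(step_path), error_type)

    def record_failure(run_id, step_path, error_type):
        """Record this failure; return True if the exact same failure was seen before."""
        key = failure_key(step_path, error_type)
        seen_before = key in known_failures
        known_failures.setdefault(key, []).append(run_id)
        return seen_before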

Even more: using a structure like XML for your check artifact gives you a ready placeholder for perf data; the artifact records how many milliseconds each step in the hierarchy took. Perf testing doesn't need a separate infrastructure or focus, and perf data is available for robust analysis across the SDLC.
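Recording that timing could look something like this sketch (again with illustrative names), where every step writes its elapsed milliseconds into the artifact as it runs:

    import time
    import xml.etree.ElementTree as ET

    def timed_step(parent, name, action):
        """Run one step; record pass/fail and elapsed milliseconds on the artifact."""
        step = ET.SubElement(parent, "Step", name=name)
        start = time.perf_counter()
        try:
            action()
            step.set("result", "pass")
        except Exception:
            step.set("result", "fail")
            raise
        finally:
            step.set("ms", str(round((time.perf_counter() - start) * 1000)))
        return step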

See the samples on MetaAutomation.net for how this works.

 

4 comments:

  1. OK Matt, now I am interested: smart retry - that is something we do in system integration or environment building when we use Chef or Puppet. So the retry mechanism will let me actually verify whether a test point would succeed if I had waited a bit or just tried again, and tell me instantly that the bug I'm looking for is likely a data initialization bug.

    Replies
    1. The artifact (result) of a check run will tell you if something external timed out, but won't by itself tell you if the operation (or check) would have succeeded if the timeout were set to be longer.

      A check failure might point to a data initialization issue, which you could correlate to an existing data initialization bug.

      The thing about self-documenting hierarchical check steps is that the point of failure is indicated in the check result with accuracy and precision, and structures are already in place to add more context data if you want, e.g. from test or product instrumentation.
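      For example, attaching that extra context to the failed step could be as simple as this sketch (Python with xml.etree.ElementTree; the element names are illustrative, not the MetaAutomation schema):

          import xml.etree.ElementTree as ET

          def add_context(failed_step, key, value):
              # Attach a key/value pair, e.g. from product or test instrumentation,
              # to the failed step element in the check artifact.
              ctx = failed_step.find("Context")
              if ctx is None:
                  ctx = ET.SubElement(failed_step, "Context")
              ET.SubElement(ctx, "Item", key=key).text = str(value)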

  2. I still need to download your demo and look deeper to really get your drift. My search for a great testing framework that solves each problem in an extensible way is on. A key problem I am looking into is a test point that fails simply because the product under test fails on first use.
    Example: a TCP name lookup takes long the first time and thus fails, but always works thereafter, so a test might always fail yet pass for a manual tester, because the manual tester might first open some other app that triggers the name lookup, like always doing a ping check. So my automated test failure then points to a "first-time" type fault. It might take me ages to diagnose, but a test tool that just does a retry might save me a load of time and characterise my bug report as:
    "open a document menu always fails the first time!"

    It gets expensive to put retries in all the tests, especially if your tests are really long-running - and that is where the idea of a dependency graph of tests makes sense to me: test A2 is dependent on test A1 passing, that sort of support. This kind of support would save a lot of running time and cut triage time?

    Anyway, I'd better download and try the demo this weekend.

    Replies
    1. From what you write here, if the root cause of a timeout (the first TCP lookup takes a long time) is external to your product, then just set the timeout longer for that check step. Don't bug it as "Open a document menu always fails the first time!" because it's not a problem from the end-user point of view, and it's not actionable by the product team.
      Here's a better idea: do bottom-up testing. If your product architecture supports it (which it should, for a well-crafted product!), test "below" the GUI first, at the business-logic layer, by calling services or libraries to put the product through its paces (there's a sketch at the end of this reply). There are many advantages to this approach: 1. The checks run much faster and more reliably this way. 2. On failure, you get much more specific information, closer to the root cause. 3. You can put off testing the GUI layer a bit with low risk, because code issues that exist ONLY in the GUI layer are easy and low-risk to fix.
      With quality automation (often called “test automation” but I have good reason for eschewing that phrase), never do “test chaining,” even if it is a pattern defined in “xUnit Test Patterns.” It’s an anti-pattern; it will make your automation slower, less scalable with resources, poorer at regressing correct behavior, and more work for you.
      See the Atomic Check pattern of MetaAutomation.
      Download sample 1 from MetaAutomation.net. It's pretty easy to set up and run, or at least it should be, and it demonstrates some very valuable stuff. Feedback welcome.
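      As a sketch of what the bottom-up, atomic approach can look like in Python (OrderService and its members are hypothetical stand-ins for whatever services or libraries your product exposes), each check stands alone with its own setup and makes its verifications directly against the business-logic layer, with no dependency on any other check:

          def check_place_order(order_service):
              # Setup plus preliminary verification: fail fast if the precondition doesn't hold.
              account = order_service.create_test_account()
              assert account.is_active, "precondition failed: test account is not active"
              # Target verification: the one behavior this check exists to measure.
              confirmation = order_service.place_order(account.id, item="sku-123", quantity=1)
              assert confirmation.status == "accepted"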

