This and the next six or so posts will focus on metaautomation, as defined in the http://metaautomation.blogspot.com/2011/09/intro-to-metaautomation.html post.
The most basic part of metaautomation is maximizing the chances that an automation failure is actionable, without necessarily even running the automated test again. The post linked above lists some example cases of this.
There will always be cases where deeper analysis requires loading up the test code and the SUT, but that's time-consuming (expensive!), and metaautomation wouldn't help much there either, except possibly for automated triage (which I'll address in a week).
At some point in a failed test, a failure condition occurs. The sooner that failure condition can be measured, the better: (assuming a single thread of execution) information at that point that is important for analysis might be lost if the thread rolls up the stack with an exception, or if execution continues in vain.
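A minimal sketch of that idea, with hypothetical names: snapshot whatever cheap diagnostic state is available at the exact moment a check fails, and persist it before the exception unwinds the stack and the context is gone.

```python
import datetime
import json


def check(condition, description, context):
    """Verify a condition; on failure, record failure-point details
    immediately, then raise.

    'context' is whatever SUT state is cheap to capture right now
    (e.g. last response, page URL, process state). Once the exception
    rolls up the stack, that information may be lost.
    """
    if condition:
        return
    artifact = {
        "check": description,
        "time": datetime.datetime.now().isoformat(),
        "context": context,
    }
    # Persist the artifact before raising, so the failure is actionable
    # from the log alone, without re-running the test. A real harness
    # would write to its log store; printing stands in for that here.
    print(json.dumps(artifact))
    raise AssertionError(description)
```

For example, `check(balance == expected, "balance updated", {"account_id": 42, "balance": balance})` leaves a self-describing record in the artifacts at the instant the verification failed.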
Also, the faster the test ends after that point, the better, because
a) (depending on the SUT) any further steps probably won't add useful information to the artifacts, and
b) resources to continue the test, including time, probably aren't free and might be limited.
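The fail-fast behavior above can be sketched with a hypothetical step runner: once a step fails, the remaining steps are skipped rather than run in vain.

```python
def run_steps(steps):
    """Run (name, callable) steps in order; stop at the first failure.

    Returns a list of (name, outcome) pairs. Steps after the failing
    one never run, since they rarely add information and consume
    time and resources.
    """
    results = []
    for name, step in steps:
        try:
            step()
            results.append((name, "pass"))
        except Exception as exc:
            results.append((name, f"fail: {exc}"))
            break  # fail fast: skip the remaining steps
    return results
```

Running `run_steps([("setup", ...), ("act", ...), ("verify", ...)])` where the "act" step raises yields a pass for "setup", a failure record for "act", and no entry at all for "verify".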
Tomorrow, I'll write about strategies to maximize relevant information and hence minimize the chances that someone on the test team will have to load everything up with source code and step through it to see what's going on.