Thursday, September 22, 2011
Persisting the Artifacts from Automated Tests
Storage space is cheap. Scratch that, storage space is very cheap, and getting cheaper all the time. Something like Moore’s law reigns here. I see a 1TB internal hard drive for $55, for example, at my favorite online retailer of computer components. Next year I’ll probably buy a bunch of 4TB drives for personal use, even though my family’s data only amounts to 1.4TB so far (mostly digital photos).
Given a reasonably stable product with detailed logging and/or other reporting, businesses have a great incentive to save that output for a while, within the constraints of confidentiality, privacy, and personally identifiable information (PII) handling. Anybody looking at quality will eventually want to ask: has our product seen this behavior before? Is there a pattern? Patterns of behavior are great because they help characterize and prioritize product quality issues.
Test infrastructure that exercises the product has a strong causal relationship with these product artifacts, and helps illuminate them as well: “initiate process A, start transaction B …” stands in a temporal relationship with one or more threads, processes, or cloud deployments of the product. In a controlled test environment in a lab (which could even be in the cloud), certain scenarios can be exercised repeatedly on a schedule, which makes correlation between artifacts even more valuable. Given that some things stay the same, e.g. the way the product is exercised or even the starting-point data, whatever changes becomes more significant, and more likely actionable.
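To make that temporal relationship concrete, here is a minimal sketch of interleaving a test log with a product log by timestamp, so that cause (a test action) and effect (product behavior) appear side by side. The log lines and their format are hypothetical; it assumes both the test harness and the product stamp each line with a leading ISO-8601 timestamp.

```python
from datetime import datetime

# Hypothetical log lines; assumes both the test harness and the product
# write an ISO-8601 timestamp as the first whitespace-separated field.
test_log = [
    "2011-09-22T10:00:01 TEST initiate process A",
    "2011-09-22T10:00:05 TEST start transaction B",
]
product_log = [
    "2011-09-22T10:00:02 PROD process A started",
    "2011-09-22T10:00:06 PROD transaction B committed",
]

def timestamp(line):
    """Parse the leading ISO-8601 timestamp from a log line."""
    return datetime.strptime(line.split()[0], "%Y-%m-%dT%H:%M:%S")

# Interleave the two logs chronologically: each test action is followed
# by the product behavior it provoked.
merged = sorted(test_log + product_log, key=timestamp)

for line in merged:
    print(line)
```

With stable, scheduled test runs, a merged view like this makes it easy to spot when the product side of the interleaving starts to drift from one run to the next.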
Long-term management of product-behavior artifacts works well in conjunction with the artifacts generated by automated tests. Saving product artifacts (logs, errors, etc.) alongside the automated test artifacts (logs, exceptions, etc.) simplifies both storage and analysis.
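One simple way to keep the two kinds of artifacts together is a per-run archive directory keyed by a run ID. The helper below is a hypothetical sketch, not a prescribed layout: it copies the test artifacts and the product artifacts into sibling subdirectories of one run directory, so later analysis reads from a single place.

```python
import shutil
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def archive_run(archive_root, test_artifacts, product_artifacts, run_id=None):
    """Copy test and product artifacts into one per-run directory.

    Hypothetical helper: the run directory (keyed by a UTC timestamp by
    default) holds a "test" subdirectory and a "product" subdirectory.
    """
    run_id = run_id or datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    run_dir = Path(archive_root) / run_id
    for subdir, files in (("test", test_artifacts), ("product", product_artifacts)):
        dest = run_dir / subdir
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, dest)  # copy2 preserves file timestamps
    return run_dir

# Usage with throwaway files in a temporary directory:
tmp = Path(tempfile.mkdtemp())
(tmp / "test.log").write_text("TEST initiate process A\n")
(tmp / "product.log").write_text("PROD process A started\n")
run_dir = archive_run(tmp / "archive", [tmp / "test.log"], [tmp / "product.log"],
                      run_id="run1")
```

Keeping the original file timestamps (via `copy2`) matters here, since the timestamps are what tie the product artifacts back to the test run that produced them.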
Tomorrow: atomic tests. Next week: more on automated parsing of logs, and optimizing logs for automated parsing!