"Automated Test" is an oxymoron, because you're not automating what a manual tester could do with the script, and unlike true automation (industrial, operations or IT automation) you don't care about the product output (unless you're doing your quality measurement that way explicitly).
Automation for software quality has some value at authoring time, because writing it is a good way of finding bugs, but its business value comes from the verifications being done, most of which are coded explicitly (preliminary verifications, plus a target verification or verification cluster, per MetaAutomation).
So, call it what it is: automated verifications! That understanding opens up tremendous business value for software quality! If done right, it can help manual testers too by making their jobs more interesting and more relevant.
According to tradition, and according to the origins of "test automation" in industrial automation, the output of automating the software system under test (SUT) is flight-recorder logs and, maybe, a stack trace. But there's no structure there; it's just a stream of information, some of it relevant, most of it not.
It's time to leave that tradition behind! Automated verifications have well-defined beginnings and ends. They're bounded in time! Use that fact to give structure to the artifacts, i.e., the information that flows from an automated verification.
XML is ideal for this, especially with an XSD schema. An XML document has boundaries and a hierarchical structure, and if presentation is left out (as it should be) then it's pure data: compact, strongly typed, available for robust and efficient analysis, and completely flexible for however you later present it.
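For illustration, here's a minimal sketch of what such a check artifact could look like; the element and attribute names (CheckRun, Step, status, ms) are hypothetical, not a prescribed schema:

```xml
<!-- Hypothetical check artifact; element and attribute names are illustrative only -->
<CheckRun name="Place order with saved payment method" result="pass" totalMs="1840">
  <Step name="Sign in as returning customer" status="pass" ms="420">
    <Step name="Load sign-in page" status="pass" ms="180"/>
    <Step name="Submit credentials" status="pass" ms="240"/>
  </Step>
  <Step name="Add item to cart" status="pass" ms="610"/>
  <Step name="Check out with saved payment method" status="pass" ms="810">
    <Step name="Verify order confirmation number is shown" status="pass" ms="95"/>
  </Step>
</CheckRun>
```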
Even better: the least-dependent pattern of MetaAutomation, Atomic Check, shows how to write self-documenting hierarchical steps for your checks (i.e., automated verifications). That means you never have to choose between using your team's domain-specific language (DSL) or ubiquitous language, and recording the (sometimes very important) granular details of what the check is doing with your product or where and why it failed. You can do both!
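Here's a minimal sketch, in Python, of one way self-documenting hierarchical steps might be coded; the names (CheckArtifact, step) are illustrative, not the MetaAutomation sample API:

```python
import time
import xml.etree.ElementTree as ET
from contextlib import contextmanager


class CheckArtifact:
    """Collects named, nested steps into an XML artifact (illustrative sketch only)."""

    def __init__(self, check_name: str):
        self.root = ET.Element("CheckRun", name=check_name)
        self._stack = [self.root]

    @contextmanager
    def step(self, name: str):
        """Name each step in the team's own language; nesting the blocks gives the hierarchy."""
        element = ET.SubElement(self._stack[-1], "Step", name=name)
        self._stack.append(element)
        start = time.perf_counter()
        try:
            yield
            element.set("status", "pass")
        except Exception:
            element.set("status", "fail")
            raise
        finally:
            element.set("ms", str(round((time.perf_counter() - start) * 1000)))
            self._stack.pop()

    def to_xml(self) -> str:
        return ET.tostring(self.root, encoding="unicode")


# Usage: business-language steps at the top, granular details in nested steps.
artifact = CheckArtifact("Place order with saved payment method")
with artifact.step("Sign in as returning customer"):
    with artifact.step("Submit credentials"):
        pass  # drive the product under test here
print(artifact.to_xml())
```

Each with-block is a step named in the team's own language; nesting the blocks produces the hierarchy, and the artifact comes out as structured data like the XML sketch above.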
With self-documenting hierarchical steps, once a given check has succeeded all the way through at least once, any later failure records every step as "pass," "fail," or "blocked." This gives another view of what the check was trying to do, and tells you exactly which parts of product quality are blocked or not measured because of the failure.
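Continuing the hypothetical artifact format from above, a failed run might look like this; the step that failed and the steps blocked behind it are explicit:

```xml
<CheckRun name="Place order with saved payment method" result="fail">
  <Step name="Sign in as returning customer" status="pass" ms="430"/>
  <Step name="Add item to cart" status="fail" ms="2010"/>
  <Step name="Check out with saved payment method" status="blocked"/>
</CheckRun>
```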
Self-documenting hierarchical steps also give you information that's detailed and precise enough to know whether a specific failure has happened before! See the Smart Retry pattern of MetaAutomation for how to do this and how to use it.
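One simple way to use that precision (a sketch against the hypothetical artifact format above, not the Smart Retry pattern itself) is to treat the failing step's path through the hierarchy as a failure signature and compare it against earlier runs:

```python
import xml.etree.ElementTree as ET


def failure_path(artifact_xml: str) -> tuple:
    """Path of step names from the root down to the innermost failing step."""
    node = ET.fromstring(artifact_xml)
    path = []
    while True:
        failed = next((s for s in node.findall("Step") if s.get("status") == "fail"), None)
        if failed is None:
            return tuple(path)
        path.append(failed.get("name"))
        node = failed


def seen_before(new_artifact: str, previous_artifacts: list) -> bool:
    """A failure 'matches' if an earlier run failed at the same step in the hierarchy."""
    new_signature = failure_path(new_artifact)
    return any(failure_path(old) == new_signature for old in previous_artifacts)
```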
Even more: using a structure like XML for your check artifact gives you a ready placeholder for performance data; every step in the hierarchy records how many milliseconds it took. Performance testing doesn't need a separate infrastructure or focus, and performance data is available for robust analysis across the SDLC.
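A short sketch of pulling those per-step timings out of the artifacts for analysis, again assuming the hypothetical format above:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict
from statistics import mean


def step_timings(artifact_xml: str) -> dict:
    """Map each step name to its duration in milliseconds for one check run."""
    root = ET.fromstring(artifact_xml)
    return {s.get("name"): int(s.get("ms")) for s in root.iter("Step") if s.get("ms")}


def mean_timings_over_runs(artifact_xmls: list) -> dict:
    """Aggregate per-step durations across many runs, e.g. over the SDLC."""
    by_step = defaultdict(list)
    for xml_doc in artifact_xmls:
        for name, ms in step_timings(xml_doc).items():
            by_step[name].append(ms)
    return {name: mean(values) for name, values in by_step.items()}
```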
The samples on MetaAutomation.net show how all of this works.