This coming July (2015), the New Horizons spacecraft
will fly by Pluto. Along with that fantastic human achievement, the controversy-that-refuses-to-die
will be rejuvenated and kicked around ad nauseam: Is Pluto a planet or not?
The irony here is that since the objective is pure science, the labels that people use for the object don’t matter at all, as long as communication is served. It’s no more than semantics and politics. We can call Pluto a planet, a minor planet, a dwarf planet, a Kuiper Belt object, or a great big sleeping comet, and it makes no difference. Planet or Kuiper Belt object or whatever, Pluto is Pluto.
In the world of software quality, there’s a similar controversy around automated tests: aren’t these just the same thing as manual tests? “Manual test” and “automated test” mean the same thing, right, so wouldn’t it be more efficient and correct to just call them both “tests”?
No, automated tests and manual tests are not the same. The
objective of a test is pure information, or more precisely, technical value in
quality measurement. The difference between “manual” and “automated” is real
and valuable, because the value that manual tests and automated tests deliver to the software project is generally very different.
Generally, manual tests have these values:
· Whether scripted or exploratory, they benefit from human intelligence and powers of observation
· People can be flexible when running them, to be more efficient
· The test results benefit from tester smarts
And, these downsides:
· They can get repetitive or boring
· Humans make mistakes
· Humans get tired, have to sleep, and must do something else sometimes
· Test repeatability isn’t always good, due to the above factors
Automated tests have these values:
· They can be fast or extremely fast
· If well-written, they’re very repeatable
· If well-designed, they’re very scalable
· Geography and time of day no longer matter
· Automatons don’t tire or get bored
· Automatons are great for processing large amounts of information efficiently and accurately
And, these downsides:
· Automatons are idiots, and depending on how the automation is written, they can make huge, immensely stupid mistakes repeatedly
· An automated test will miss major issues that are both important and obvious to a tester
· Human testers/programmers have to tell the automaton exactly what to do, but given the chance, the automaton will mess up anyway
· Poorly written automation can fail and drop important root-cause information on the floor, requiring a tester or developer to follow up and figure out what happened
To create a scripted test means writing out the steps. Almost all scripted tests are run manually at some point, but note that by “manual” I mean that it’s a human who determines the initial result of the test as pass, or as blocked or fail with additional information about the blockage or failure. It’s a manual test even if the tester uses a web browser to call a REST API with XML over HTTP, because it’s the tester who completes the test with a result.
At some point, the steps of the script may be automated, and it becomes an automated test; from then on it’s an automated process, not a tester, that determines the initial test result.
Given the above differences between automated and manual tests, is the test the same now that it’s automated? Faster and cheaper, maybe. But the same value? Not at all.
This is why I use the word “check” to describe the automated
test: the difference is important, and not understanding the difference could
introduce significant business risk. (More on the “risk” below.) Calling it a “check”
rather than a “test” is much less of an invitation to the business owners to
introduce poorly understood risk. A check is still a kind of test, but one with well-defined and well-understood verifications. It’s like driving around at night when the passenger says to the driver, “Hey, check that the lights are on.”
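In code, a check boils down to exactly that kind of narrow, fully specified verification. Here is a minimal sketch (Python; the endpoint and field name are hypothetical, not from any real system):

```python
# A minimal sketch of a "check": one fully specified verification with a
# known expected outcome. The URL and the "balance" field are hypothetical.
import requests

def check_checking_account_endpoint():
    response = requests.get("https://creditunion.example.com/api/accounts/checking")
    # These assertions are the entire check -- nothing else about the
    # response (formatting, readability, anything a human might notice)
    # is observed or verified.
    assert response.status_code == 200
    assert "balance" in response.json()
```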
The risk of mistaking an automated procedure with defined
steps for the same steps done by a manual tester is that the tester can see so
much more. Testers are smart and observant; automatons are not. Automating the test makes the team blind to things it used to be able to see.
For example, here’s a manual test for a credit union web site:
1. Go to credit union site
2. Log in
3. Go to checking page
4. Check balance
5. Go to “Account Transfer” page
6. Select $100
7. Select “From” as “Savings”
8. Select “To” as “Checking”
9. Click “Transfer Funds” button
10. Confirm
11. Go to checking page
12. Verify that the transfer happened
13. Verify that the correct balances are reflected
14. Log out
A tester running this test notices “hey, the nav bar looks out of whack…”, takes and annotates a screenshot, and enters a bug.
The bug is fixed and regressed by the tester with a note “looks good now.” The test above is also automated, but the navigation bar is not the top priority, and one can’t automate a check for “looks good,” so that part never gets automated.
Assuming that the automation is written and handled well, it regresses the functionality of the steps and makes the verifications often. But the presentation of the navigation bar isn’t verified anymore. The business owners think the web site and the transfer functionality are regressed entirely, but that isn’t the case: the automation runs and the verifications in steps 12 and 13 happen, but whether the page is readable to a human is simply not tested when the automation runs.
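To make that blind spot concrete, here is a rough sketch of what the automated check for this scenario might look like (Python with Selenium; the URL, element IDs, and credentials are hypothetical placeholders, not a real implementation). Notice which of the fourteen steps show up as verifications in the code, and which observations never do:

```python
# Sketch of an automated check for the transfer scenario above.
# The URL, element IDs, and credentials are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select


def read_balance(driver):
    # Parse a displayed balance such as "$1,234.56" into a number
    text = driver.find_element(By.ID, "balance").text
    return float(text.replace("$", "").replace(",", ""))


def test_transfer_100_from_savings_to_checking():
    driver = webdriver.Chrome()
    try:
        # Steps 1-2: go to the credit union site and log in
        driver.get("https://creditunion.example.com")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "login").click()

        # Steps 3-4: note the starting checking balance
        driver.get("https://creditunion.example.com/checking")
        checking_before = read_balance(driver)

        # Steps 5-10: transfer $100 from Savings to Checking and confirm
        driver.get("https://creditunion.example.com/transfer")
        driver.find_element(By.ID, "amount").send_keys("100")
        Select(driver.find_element(By.ID, "from-account")).select_by_visible_text("Savings")
        Select(driver.find_element(By.ID, "to-account")).select_by_visible_text("Checking")
        driver.find_element(By.ID, "transfer-funds").click()
        driver.find_element(By.ID, "confirm").click()

        # Steps 11-13: the only verifications this check ever makes
        driver.get("https://creditunion.example.com/checking")
        assert round(read_balance(driver) - checking_before, 2) == 100.00

        # Note what is absent: no verification of the navigation bar, the page
        # layout, or anything a human would judge as "looks good."
    finally:
        driver.quit()
```

Every functional step is covered, but the “nav bar looks out of whack” observation from the manual run has no counterpart here.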
That discrepancy creates the risk that comes with misunderstanding what automation does for you, which in turn stems from the common mistake of treating automated tests and manual tests as the same thing.
Clarity is important here: automated tests and manual tests are so different that they need to be considered differently to avoid the business risk described above.
As for Pluto, whether it’s a planet or not isn’t important. But I sure am looking forward to what New Horizons can tell us about it.