The attached PDF document provides a list of evaluation criteria
which have proven useful to me when evaluating automated test
tools like Mercury Interactive's WinRunner and Segue's Silk over
the last several years for a variety of clients.
Hopefully some readers will find this information useful, such
that it reduces the time required to perform this type of
evaluation. I would also ask readers who feel I have neglected
evaluation criteria they have found useful and important to
post their suggestions as replies to this posting.
1) Don't mention specific tools in your writeup. IMO it demonstrates an inherent bias, even when used as an example.
2) Technical support should be one of the first items mentioned. Placing it later in the list implies that it is less important.
3) For reports, can the tool export its data, so you can create custom reports?
4) What additional QA services can the vendor you are working with offer?
A point of clarification: the list of possible evaluation criteria is unordered and unweighted. I leave this important task up to the evaluator based on his/her (or their client's) needs and requirements.
But I agree with Garbage Man (boy, what a handle!) that support ranks highly on my personal list of important criteria to be evaluated.
Thanks also Garbage Man for your other evaluation ideas. I encourage others to add to the list.
As in "pick up the garbage" that development produces? Yah, there's a bit of that in it. It's kind of a double entendre (or multiple entendre <G>)... I also like to point out commentary that I think meets that criterion (expose the garbage).
Or, one could say that I simply produce my own garbage.
1. The current title is:
"Automated Test Tools Evaluation Criteria".
It should be changed to:
"GUI Automated Test Tools Evaluation Criteria", because the document doesn't discuss other types of automated test tools.
2. You concentrate on saving testing man-hours (paragraph 5 on page 2).
This is a limited approach.
Sometimes it is worthwhile to compress test time even at increased test cost!
The ultimate goal is to save money and speed up the overall development effort (including testing).