We just shipped a release, and my team (well, it's really just the two of us) had a discussion about what went well and what was not so good. What came out of the discussion was that we do not always do well at tracking failures in the Test Matrix, where all of our Test Cases reside. Sometimes we record a failure, and unless we get on it right away it may sit for a day or two until we check the matrix again and remember it.
So a couple of ideas got thrown around:
* Color coding passes/failures/running cases
* Weighting certain test cases to represent a higher priority in the test schedule
* Adding an effort column to the matrix, alongside our existing pass/fail/untested status, to see whether Y% of completion actually represents X% of the effort.
We more or less dismissed the color coding as unnecessary: the columns work fine for us as they are, we can read the text in the cells easily, and at a glance we can tell a short word like "pass" from a longer one like "untested".
The weighting is one I can sort of see, in terms of being able to convey how serious a particular failure is, although playing Devil's Advocate I can also see why it is unnecessary. The pass/fail numbers are rolled up to the Managers, but the weighting would not roll up with them; it just adjusts the number. Failures are something we talk about at our level and work to resolve, so at that point it is a work priority, not something I can see easily adding to a metric.
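To make that Devil's Advocate point concrete, here is a minimal sketch (the case names, statuses, and weights are entirely made up, not our actual matrix) of how a weighted pass rate diverges from the raw one that gets rolled up:

```python
# Hypothetical test-matrix rows: (case name, status, weight).
# Weights are invented for illustration; higher = more critical.
cases = [
    ("login",   "pass", 3),
    ("export",  "fail", 3),
    ("tooltip", "fail", 1),
    ("search",  "pass", 2),
]

# Raw pass rate: what a simple roll-up to Management would show.
passed = sum(1 for _, status, _ in cases if status == "pass")
raw_pass_rate = passed / len(cases)

# Weighted pass rate: each case counts proportionally to its weight.
weighted_passed = sum(w for _, status, w in cases if status == "pass")
total_weight = sum(w for _, _, w in cases)
weighted_pass_rate = weighted_passed / total_weight

print(f"raw: {raw_pass_rate:.0%}, weighted: {weighted_pass_rate:.0%}")
```

The two figures can differ noticeably, which is exactly the roll-up problem: the weighted number is just a different number, and without visibility into the weights it would be hard for anyone above us to interpret.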
Effort I can understand tracking from a project perspective, but as part of a metric I don't actually see the value. Especially since within the team we run a set of cases across multiple platforms (about 14 or so), and some tests are so long that we only start them and check results later; they are not a constant effort. Plus, what does 25% of effort mean to Management, who don't have visibility into the specifics of the matrix itself? I also don't see an easy way to roll this up as a meaningful metric number on a summary sheet, unless I just have not encountered one.
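For what it's worth, the "Y% of completion represents X% of the effort" idea can be sketched in a few lines (again with invented cases and effort hours, purely hypothetical) to show how the two percentages can tell very different stories:

```python
# Hypothetical rows: (case name, done?, estimated effort in hours).
# One long soak test dominates the effort but counts as one case.
cases = [
    ("smoke",      True,  1),
    ("soak",       False, 40),  # long-running: start it, check back later
    ("regression", True,  8),
    ("install",    True,  2),
]

# Completion by case count: what the matrix roll-up shows today.
done = sum(1 for _, is_done, _ in cases if is_done)
completion = done / len(cases)

# Completion by effort: how much of the estimated work is finished.
effort_done = sum(hours for _, is_done, hours in cases if is_done)
total_effort = sum(hours for _, _, hours in cases)
effort_completion = effort_done / total_effort

print(f"cases done: {completion:.0%}, effort done: {effort_completion:.0%}")
```

Here three of four cases are done, yet most of the effort remains, which illustrates both the potential insight and the confusion: without seeing the underlying estimates, an effort-based percentage on a summary sheet would raise more questions than it answers.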
Writing this out has helped me think it through a little more, but discussing it helps too. Has anyone tracked this sort of data before? It's new to me, and if you have, I am curious how you did it and what the value was at both the project level (the teams doing the work) and the business level.
Nothing teaches better than experience.
"So as I struggle with this issue I am confronted with the reality that noting is perfect."