I am wondering if anyone has a metric that they use to track testing efficiency. I have found DRE (Defect Removal Efficiency) helpful. Just wondering what you may be using. I find that the method I am currently using for measurement doesn't fit our development/testing process.
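For anyone unfamiliar with it, DRE is usually computed as defects found before release divided by the total defects eventually found (before and after release). A minimal sketch in Python; the function name and the zero-by-convention fallback are my own choices, not part of any standard:

```python
def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """DRE = defects removed before release / total defects found (pre + post release).
    Returns a value in [0, 1]; multiply by 100 for a percentage."""
    total = found_before_release + found_after_release
    if total == 0:
        return 0.0  # no defects reported at all; DRE is undefined, report 0 by convention
    return found_before_release / total

# Example: 90 defects caught in test, 10 escaped to production -> DRE = 0.9
print(defect_removal_efficiency(90, 10))
```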
Every measure should start with the goal! Don’t cheat yourself – measuring itself is never the goal. Your goal may be: to prove to upper management that you do your job well; to prove that you are improving; or to _actually_ improve something that is measurable (but not necessarily meaningful), etc.
For example, I used measures to improve defect discovery stability. The goal was to discover a steady average number of defects per week instead of having a peak just before release (something testers were previously blamed for, making them guilty of late shipment). I was quite successful at it, because we really do ship much closer to deadlines now (although mostly thanks to better requirements management… that was the conclusion from my measures).
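To make the idea concrete, here is a rough sketch of how such weekly discovery stability could be tracked. This is not Ainars' actual measure, just one plausible reading of it: bucket defects by the ISO week they were discovered in, then compare spread to mean (a flatter weekly curve gives a lower coefficient of variation):

```python
from collections import Counter
from datetime import date

def weekly_discovery_counts(discovery_dates: list[date]) -> dict[tuple[int, int], int]:
    """Count defects per ISO (year, week) from their discovery dates."""
    counts = Counter()
    for d in discovery_dates:
        iso = d.isocalendar()
        counts[(iso[0], iso[1])] += 1
    return dict(counts)

def coefficient_of_variation(counts: dict) -> float:
    """Standard deviation divided by mean of the weekly counts;
    lower means a flatter, more stable discovery curve."""
    values = list(counts.values())
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return (variance ** 0.5) / mean

# Hypothetical defect log: a spike in the final (release) week shows up as a high CV
dates = [date(2023, 5, 1), date(2023, 5, 2), date(2023, 5, 15),
         date(2023, 5, 29), date(2023, 5, 30), date(2023, 5, 31)]
counts = weekly_discovery_counts(dates)
print(counts, coefficient_of_variation(counts))
```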
I could send some details on my measurements in private. They are too complicated to be posted here.
Robert, you may hate the use of "METRIC", but I hate people who try to measure individual testing efficiency! It is sooo subjective and there are sooo many variables that should be considered. Plus, there are sooo many ways of measuring an individual's proficiency and work ethic that the number of bugs found should be insignificant.
100% agree. Ainars' example is odd. It assumes that the number of defects introduced every week is constant. What if the code they write is very robust? What if the developers do a sloppy job the next week? These measures are meaningless outside a controlled experiment, which, in the commercial world, just doesn't happen.
It would be interesting to give a code base to several different test groups and see what happens.
Robert, you are right - there is a lot of context behind my measure (that's why it's too huge to publish): the development culture and the product were both built up over many years… On the other hand, measures make no sense unless you have several years of statistics collected in a stable engineering culture.
Still, my measure doesn't assume constant bug injection; it assumes that there is always a pool of bugs not yet discovered: we can't retest everything each week.
I am very suspicious of any metric in software development. Any measure based on an assumption I would regard as useless.
Also, by the time you have collected your several years of stats, the staff has probably completely changed over, a new platform technology is being rolled out, and the company could even have gone through a corporate takeover with a new management regime. One place I worked regarded 6 re-orgs a year as a low number. There are so many factors that can impact measuring organisational performance, let alone software development and/or testing efficiency.
The only metric that really counts is the Cash Flow.
Unfortunately, Cash Flow doesn't measure testing efficiency, which was the initial question. I just wanted to say that testing efficiency is not measurable. Fault discovery rate is (directly) measurable, although you are rarely interested in that measure, because it describes neither testing efficiency nor development quality.
I had one very specific case where we were interested in improving something that is directly measurable, for a very specific reason. That is the only case in which I believe measures are worth using. Well, there is the second case I described: to “prove” to management that you are good (although that is actually cheating them, they still tend to believe it; even CMM level 5 supposes this to be the right process…).
Defect Conversion ratio has some merit, but it would need a reasonably sized and reasonably constant sample (I sound like a stats man) to be relevant. If only a small number of defects are being raised, it loses merit.
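Assuming "Defect Conversion ratio" here means the fraction of raised defect reports that are confirmed as valid (one common reading; the post doesn't define it), a toy sketch shows why small samples make it unstable:

```python
def defect_conversion_ratio(raised: int, confirmed_valid: int) -> float:
    """Fraction of raised defect reports confirmed as real defects
    (as opposed to rejected as duplicates, 'works as designed', etc.)."""
    if raised == 0:
        return 0.0  # nothing raised; ratio undefined, report 0 by convention
    return confirmed_valid / raised

# With a decent sample, one rejection barely moves the number...
print(defect_conversion_ratio(40, 30))   # 0.75
# ...but with a handful of reports, a single rejection swings it wildly,
# which is the small-sample problem warned about above.
print(defect_conversion_ratio(4, 3))     # 0.75 -> one more rejection makes it 0.5
```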