Defect Severity Index
I want to monitor software quality using the KPI "Defect Severity Index".
I know the formula and its goal: it indicates the quality of the product under test and at the time of release, on the basis of which one can make a decision about releasing the product.
What I would like to know is: what threshold, as a percentage, tells me when the software is of poor quality?
The project has 100 defects.
20% of the defects have severity > High.
Does this percentage indicate poor quality or not?
What is the minimum percentage that tells me the software has poor quality?
Thank you in advance for your help.
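For reference, the Defect Severity Index is commonly computed as a weighted average of defect counts by severity. Here is a minimal sketch in Python; the numeric weights (Critical=4 down to Low=1) are an assumption for illustration, not something defined in this thread:

```python
# Minimal sketch of the Defect Severity Index (DSI).
# Assumption: numeric weights per severity level (a common convention,
# not specified in this thread).
SEVERITY_WEIGHTS = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}

def defect_severity_index(defect_counts):
    """Weighted average severity: sum(weight * count) / total defects."""
    total = sum(defect_counts.values())
    if total == 0:
        return 0.0
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in defect_counts.items())
    return weighted / total

# Example resembling the question: 100 defects, 20 of them High severity.
counts = {"Critical": 0, "High": 20, "Medium": 50, "Low": 30}
print(defect_severity_index(counts))  # 1.9
```

Note that the index alone says nothing about whether 1.9 is "good" or "bad"; that judgment is exactly what the replies below address.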
You probably don't want to hear this answer, but there is no "minimum percentage" that indicates that software has poor quality. Why might you not want to hear this? I'm taking a guess from your statement that you want to use this to make a decision about releasing the product.
Given your 100 defects:
What would you do if there were 20 defects and although they were High Severity they were Low Priority?
What would you do if there was only 1 High Severity defect, but it was a defect that bricked every user's phone?
What would you do if there were 0 High Severity defects, but 100 Medium or Low defects?
Instead of using metrics to make your release decisions, try talking to your stakeholders. Learn what your customers expect in quality in the system that is being delivered to them; find out from your testers what they have tested and what issues they have found; chat to your developers to understand the impact of the open defects and the estimated resolution times; go back to your customers to see if the release is acceptable and agree what you need to action to make it acceptable.
Good wording, Meridian.
I try to avoid using metrics to decide if a product is ready to ship.
Demos and asking users questions are a better choice.
When in Florida, Don't Tampa with the code. I made this up.
To begin, I'm going to say this is easier to say in theory than to do in practice.
That said, I think the best way to measure quality is by economic value added / taken away in dollars and cents. A release should never happen if economic value added is less than economic value taken away.
How do you measure economic value added/subtracted?
The first thing to do is gather lots of surveys. Get baseline stats on how many people join a service because of new features, and how many people leave because of a defect. Categorize these responses by quality category (usability, functionality, performance, security, reliability) and classify by size/severity. Then classify your change requests and bug reports by these same categories.
Then when you do the math, you can say: this new release has 2 minor features, and you know the last major feature release brought in $1M, so you expect the 2-minor-feature release to be 1/10th the economic value of a major release, which would mean the economic value added is $100k. Staying within that same release, there's 1 minor bug, and you know the last major bug caused $500k in lost revenue, so you might say that minor bug carries $100k in disincentive. You can then say this feature is not yet ready, as there is not enough of an ROI until that bug is fixed.
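The arithmetic above can be sketched as a simple release gate. The dollar figures and the 1/10th and 1/5th scaling ratios are the example numbers from this post, not real data:

```python
# Sketch of the economic-value release gate described above.
# All dollar figures are the illustrative numbers from the post.
MAJOR_FEATURE_VALUE = 1_000_000   # last major feature release brought in $1M
MAJOR_BUG_COST = 500_000          # last major bug caused $500k in lost revenue

def release_is_worthwhile(value_added, value_taken):
    """Ship only if economic value added exceeds economic value taken away."""
    return value_added > value_taken

# Two minor features, estimated together at 1/10th of a major release.
value_added = MAJOR_FEATURE_VALUE / 10   # $100k
# One minor bug, estimated at 1/5th of the last major bug's cost.
value_taken = MAJOR_BUG_COST / 5         # $100k
print(release_is_worthwhile(value_added, value_taken))  # False: not enough ROI yet
```

In this example the gate returns False because the added value does not exceed the disincentive, matching the post's conclusion that the release should wait until the bug is fixed.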
You can also use economic value to measure the relative quality of your application. Is it currently generating more economic value based on released features vs open bugs than it was the month before?
Unless you define what you mean by "poor quality", then this metric doesn't lead you to any conclusions about that attribute.
You might as well ask "This percentage indicates the software is red or blue?"