I was asked to come up with some metrics that could be useful in determining the state of the product in a given QA phase and assist in making go/no-go decisions. What are some of the standards if any that you've used? How many known bugs can be "released" into production? Etc...
I've found mostly that it depends on the criticality of the project. A medical device, for example, might have a defect tolerance a whole lot lower than a video game.
My thought is that the 'bar' should be defined up front in the Test Plan or QA Plan for the project or the company to give a baseline of expectations.
Generally the acceptable level of defects comes down to a discussion of salability and price vs. cost.
For example, if it costs $1000 to fix a defect, and fixing it only lets you raise the price by $10, is it worth it? It depends on volume: if you sell over a million units, then $10 x a million, less the $1000 fix cost, makes it clearly worthwhile.
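That cost/benefit trade-off is easy to sketch in a few lines. This is just a back-of-the-envelope calculation using the hypothetical numbers above ($1000 fix, $10 price increase, a million units), not any standard model:

```python
# Hypothetical numbers from the example above.
fix_cost = 1_000          # one-time cost to fix the defect
price_increase = 10       # extra revenue per unit if fixed
units_sold = 1_000_000    # projected sales volume

extra_revenue = price_increase * units_sold
net_value = extra_revenue - fix_cost
print(f"Net value of fixing: ${net_value:,}")  # $9,999,000

# Break-even volume: the fix pays for itself above this many units.
break_even_units = fix_cost / price_increase
print(f"Break-even at {break_even_units:.0f} units")  # 100 units
```

The interesting output is the break-even point: even a small per-unit gain pays for an expensive fix once volume is high enough, which is why the same defect can be a "ship it" in one market and a "fix it" in another.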
There is the Six Sigma process, which has a lot of calculations that can be used, but in my opinion the software industry isn't ready for that level of perfection.
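For anyone curious what those calculations look like, the core Six Sigma metric is DPMO (defects per million opportunities). Here's a minimal sketch of that standard formula; the defect counts and opportunity numbers are made up for illustration:

```python
# Standard Six Sigma DPMO calculation with made-up inputs.
defects = 120            # defects found during inspection
units = 400              # units (builds, modules, etc.) inspected
opportunities = 50       # assumed defect opportunities per unit

dpmo = defects / (units * opportunities) * 1_000_000
print(f"DPMO: {dpmo:.0f}")  # 6000

# Six Sigma quality corresponds to roughly 3.4 DPMO, which shows
# how far a typical software process is from that bar.
print("Meets Six Sigma?", dpmo <= 3.4)  # False
```

A DPMO in the thousands is normal for software, which is the point above: the 3.4-DPMO bar is a manufacturing-grade standard, not something most software shops can (or need to) hit.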
I believe Deming also had a statistical quality control model that has been used for measuring defects, but he too allowed the decision to account for different acceptable levels of quality.
Have I spoken in enough circles yet?
I too am working on some metrics for project artifacts and timing, and I'm trying to define the baseline as well. For example: if you say an artifact was received "late", what is the definition of "late", and what would have been the ideal time to have received it? Believe it or not, I'm having trouble getting an answer to that question.