Thanks for the quick reply. I'll give you more info.
I need to use features from an old release in order to predict the new one.
We are starting work on the new RC (due in a year or so), and I need to find a statistical method to predict the quality of the new release.
They want me to take the old features, count the number of critical bugs, showstoppers, rejected defects, etc., and "compare" that with how the features will fare in the RC.
It's pretty hard to explain, but I'd appreciate any help.
If you need more info, please let me know.
First you need to find a baseline for the statistics - something that can be counted the same way for both the original release and the new release. The three possibilities I can think of (and I am sure there are more) are lines of code, number of requirements, or estimated project hours.
Using one of these for the original release, calculate the ratio of defects to that base.
Then apply that ratio to the same measure in the new release to calculate the number of projected defects.
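The ratio step above can be sketched in a few lines. This is a minimal illustration, not a real model - the base (lines of code here) and all the numbers are invented:

```python
# Minimal sketch of the ratio-based projection described above.
# Base unit = lines of code; the figures are made up for illustration.

def project_defects(old_base, old_defects, new_base):
    """Project defects in the new release from the old release's
    defects-per-unit ratio (unit = LOC, requirements, or hours)."""
    ratio = old_defects / old_base
    return ratio * new_base

# Example: old release had 120 defects across 40,000 LOC;
# the new release is estimated at 55,000 LOC.
projected = project_defects(40_000, 120, 55_000)
print(round(projected))  # 120/40000 * 55000 = 165
```

The same function works unchanged whichever base you pick, as long as you measure it identically in both releases.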
This does not predict or measure quality - nothing can directly predict or measure quality - but it can predict defect counts.
Even the prediction of defects will probably be inaccurate, because you have to allow for variables such as:
Developers:
- has their style changed?
- are they the same people?
- has additional experience made their coding and unit testing better or worse?
Business analysts:
- have they put the same detail into gathering requirements?
- does more experience with the business mean they are more accurate or detailed in gathering requirements?
- do you have a new business analyst who is still learning the business/application?
Testers:
- do you have experienced testers who know the application?
- do you have new testers who do not know the application?
- will familiarity with the application cause defects to be missed?
It all comes down to this: the predictions are only what could be. Over time they may show a trend, but especially at the start of an effort like this you have a better chance of being wrong than of being right.
Clearly, you'll need a number of prior releases. More is better.
You'll need to:
- select the metrics you wish to use as "measures of release quality"
- select the metrics you hypothesize are predictors of that quality
- over a set of prior releases, demonstrate a strong, statistically significant correlation between the predictors and the outcomes
- propose your new prediction process
- measure your current project and assess your predictions
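The correlation step can be sketched as follows. This is a hypothetical example - the choice of requirement count as the predictor and every number in it are invented, and with only a handful of releases a high r is suggestive, not statistically significant:

```python
# Hypothetical sketch of the correlation step: check whether a candidate
# predictor (here, requirement count) tracks the outcome (defect count)
# across prior releases. All numbers are invented for illustration.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One entry per prior release: requirement count and defects found.
requirements = [80, 95, 110, 130, 150]
defects      = [40, 52, 60, 75, 88]

r = pearson_r(requirements, defects)
print(f"r = {r:.3f}")  # near 1.0 suggests a strong linear relationship
```

Only once a predictor shows a consistently strong correlation over enough releases is it worth building the prediction process on top of it.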