Software Product Metrics
I would like to know the standard kinds of metrics that are collected for a software product: the metrics that need to be collected during various stages, including in-house development and testing, and also after releasing the product to the client.
I would appreciate any suggestions or thoughts based on the different metrics collected across various organizations.
Last edited by qaexp; 11-11-2014 at 12:57 AM.
There are no standard metrics. Every software development project is different and has different needs - the metrics collected (if any are collected) vary according to the needs of the project and the organization.
Some of the more common metrics I've seen used are:
- tester-reported issues vs customer-reported issues - which is not as simple as it seems, because before that ratio can be calculated, someone needs to filter out duplicate issue reports, non-issues, and feature requests disguised as bug reports.
- unresolved issue counts grouped by severity and priority - which is also not as simple as it seems because priority shifts depending on how close a project is to release, perceived customer needs, and many other factors.
- Issues resolved in some time frame - this tends not to be terribly useful in my opinion, and if it's used to evaluate the tester (or developer) it encourages fixing the obvious problem without investigating further.
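As a sketch of how the first two of these might be computed, here is a small Python example. The issue records and field names are hypothetical, not tied to any particular tracker, and this assumes duplicates, non-issues, and disguised feature requests have already been filtered out upstream:

```python
from collections import Counter

# Hypothetical issue records; real data would come from an issue tracker.
issues = [
    {"source": "tester", "severity": "high", "priority": "P1", "resolved": False},
    {"source": "customer", "severity": "low", "priority": "P3", "resolved": True},
    {"source": "tester", "severity": "medium", "priority": "P2", "resolved": False},
    {"source": "tester", "severity": "high", "priority": "P1", "resolved": True},
]

# Tester-reported vs customer-reported ratio.
by_source = Counter(i["source"] for i in issues)
tester_customer_ratio = by_source["tester"] / by_source["customer"]

# Unresolved issue counts grouped by severity and priority.
unresolved = Counter(
    (i["severity"], i["priority"]) for i in issues if not i["resolved"]
)

print(tester_customer_ratio)        # 3.0
print(unresolved[("high", "P1")])   # 1
```

The arithmetic itself is trivial; as noted above, the hard part is the upstream filtering and the fact that priorities shift over time, which no calculation can fix.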
The useful information that I've seen used includes things like:
- Modules of the product that get the most issue reports. This usually indicates high complexity, a lack of systematic regression testing, or both.
- Analyzing customer-reported issues to see if there is a way to prevent similar issues being released in future (often the answer is "no")
- Building listings of features/functionality that can be impacted by changes in other parts of the system. Depending on how old the software is and how actively it's developed, these can be quite bizarre and obscure. I've worked on an application that still had the core code from 20 years earlier which was such a big ball of spaghetti-code it was impossible to tell what would be impacted by a change to the core code.
There are many different kinds. Off the top of my head the ones I find most helpful are:
Cycle time - how long does it take you to complete a minimum viable product from conception to delivery.
% Complete and Accurate - how often can your development team sit down and get work done straight through to completion without having to contact a different group.
Productivity - I like function points per developer. Or test cases per QA.
Error rate - # of errors compared to some productivity measure.
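To make the definitions concrete, here is a minimal Python sketch of these four calculations. The work-item records, field names, and figures are invented for illustration; in practice the inputs would come from your tracking tools:

```python
from datetime import date

# Hypothetical work items: start date, delivery date, and whether the
# team finished without having to go back to another group.
items = [
    {"start": date(2014, 1, 6),  "done": date(2014, 1, 20), "clean_handoff": True},
    {"start": date(2014, 1, 8),  "done": date(2014, 1, 30), "clean_handoff": False},
    {"start": date(2014, 1, 13), "done": date(2014, 1, 27), "clean_handoff": True},
]

# Cycle time: average days from conception to delivery.
cycle_time = sum((i["done"] - i["start"]).days for i in items) / len(items)

# % Complete and Accurate: share of items completed straight through.
pct_ca = 100 * sum(i["clean_handoff"] for i in items) / len(items)

# Productivity (function points per developer) and error rate
# (errors per function point), using made-up totals.
function_points, developers, errors = 120, 4, 18
productivity = function_points / developers   # 30.0 FP per developer
error_rate = errors / function_points         # 0.15 errors per FP
```

The same shape works for test cases per QA or any other productivity measure; only the numerator and denominator change.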
If you do a full Value Stream analysis or even a process map, you'll come up with others. Measuring items coming out of a SIPOC (Suppliers, Inputs, Process, Outputs, Customers) diagram is helpful, too.