I am interested in hearing in what ways people have presented QA data to upper management in order to quantify how the company is doing in terms of improving the quality of their software.
We currently have charts showing outstanding defects in different versions, as well as charts tracking how the defect list for a particular version evolves over the year. Other metrics show the percent of coverage per module and the success rate of the weekly runs of those tests.
Any other metrics people find useful to show whether quality is or is not improving?
At the company where I'm working now, management is more interested in other kinds of metrics, ones where they can see improvement:
- the effort and cost spent on bug fixing (absolute and relative) are considered more relevant than the number of bugs
(metrics: scrap work; scrap work / total work)
- the same for rework (effort generated by change requests / requirements stability and completeness)
- final delays and costs against the planned schedule and budget
- maintenance and support cost
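The ratios in the list above are simple to compute once effort is tracked. Here is a minimal sketch; all figures and names are hypothetical, just to show the arithmetic:

```python
# Hypothetical monthly effort figures in hours; all numbers are illustrative.
effort = {
    "total_work": 1600,
    "bug_fixing": 240,       # scrap work
    "change_requests": 120,  # rework driven by change requests
}

# Scrap work and rework as shares of total effort.
scrap_ratio = effort["bug_fixing"] / effort["total_work"]
rework_ratio = effort["change_requests"] / effort["total_work"]

# Cost against the planned budget (assumed planned/actual values).
planned_cost, actual_cost = 100_000, 112_000
cost_overrun_pct = (actual_cost - planned_cost) / planned_cost * 100

print(f"scrap work ratio: {scrap_ratio:.1%}")   # share of effort lost to bug fixing
print(f"rework ratio:     {rework_ratio:.1%}")  # share of effort lost to rework
print(f"cost overrun:     {cost_overrun_pct:+.1f}%")
```

Tracked month over month, a falling scrap-work ratio is exactly the kind of improvement curve management tends to respond to.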
Another thing that makes management happy is customer satisfaction, but that's not easy to measure.
IMO, it comes down to what the management teams are willing to look at. It sounds like you have a grasp on the basic metrics.
One area where I have been successful is in showing how improved quality has led to fewer frontline and backline support calls, especially when the software release focused on a particular area of weakness in the overall system. This is a direct ROI indicator that most 'bottom-liners' can relate to.
Secondly, as you move toward general availability, the defect trends should follow an acceptable line. If your defect trends reflect your testing efforts rather than the 'maturity' of the software, then you are in trouble! Defects should show a declining trend with the same amount of testing (or more) applied. In the later stages of a project, you should not be able to spot the day a build was released to QA by a spike in defects found. Does that make sense? Maybe I'm stating the obvious here...
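The trend check described above is easy to automate. A minimal sketch, assuming hypothetical weekly defect counts gathered under roughly constant testing effort:

```python
# Hypothetical counts of new defects found per week in the late project stage,
# with roughly the same testing effort applied each week. Numbers are made up.
weekly_defects = [42, 35, 28, 19, 12, 7]

# A healthy late-stage trend: no week finds more defects than the one before.
declining = all(later <= earlier
                for earlier, later in zip(weekly_defects, weekly_defects[1:]))

print("declining trend:", declining)
```

If `declining` flips to false right after a build drop, the trend is tracking the builds rather than the maturity of the software, which is the warning sign described above.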
Lastly, part of showing improvement in quality is using customer feedback during beta and early release cycles to validate your test scenarios and discover why certain items went undetected.
'imagination is more important than knowledge'
Well, my company uses a set of criteria to present the efficiency of the testing process to upper management. In this regard, I would also like suggestions on how we can actually represent our effort to top management.
The number of bugs can vary with the quality and expertise of the development team. If there are fewer bugs, then maybe the development team was good; it doesn't mean the testing effort was poor.
So, any suggestions? I am quite inexperienced, so please do suggest; I really want to improve myself.
You may mail me any metrics samples or anything else at firstname.lastname@example.org.
[This message has been edited by eirij (edited 05-07-2002).]