I am interested in knowing how your organization measures the success of its QA efforts. We have set up a fairly extensive QA function for our system (requirements and change management, automated testing, and so on), but when it comes to quantifying QA's success in terms of an improved application, some in our organization question the resources expended. I suppose the most valuable measure would be to capture the number of bugs found, but it's not always easy to show a direct relationship. Any thoughts?
"When you find a big kettle of crazy, it's best not to stir it" - Scott Adams
Graphs, graphs, graphs. That's how I did it the last time. The raw count of bugs caught is not always a good way to tell the story.
You can have graphs for the following (a rough scripted sketch follows the list):
1. Severity of defects found per week (and the cost saved by not letting the customer catch them)
2. Test coverage (also show progress against planned tests, and explain why some tests could not be executed: showstoppers, defect turnaround time, etc.)
3. # of post-production defects (or the lack thereof)
4. Issues found by the QA team across the various stages of the software development cycle (and the costs saved by catching defects early, which makes the case for involving QA from the start)
5. % of defects found by automated vs. manual testing
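If your bug tracker can export to CSV, the first graph takes only a few lines to produce. Here is a minimal sketch in Python with pandas and matplotlib; the file name (bugs.csv) and column names (found_date, severity, environment) are assumptions, so map them to whatever your tracker actually exports:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV export from the bug tracker; column names are assumptions.
bugs = pd.read_csv("bugs.csv", parse_dates=["found_date"])

# Graph 1: defects found per week during testing, broken out by severity.
weekly = (
    bugs[bugs["environment"] == "test"]
    .groupby([pd.Grouper(key="found_date", freq="W"), "severity"])
    .size()
    .unstack(fill_value=0)
)

weekly.plot(kind="bar", stacked=True)
plt.title("Defects found per week, by severity")
plt.xlabel("Week")
plt.ylabel("Defect count")
plt.tight_layout()
plt.savefig("defects_per_week.png")
```

The other metrics fall out of the same pattern: group by a phase column instead of severity for item 4, or by a detection column (automated vs. manual) for item 5, assuming your tracker records those fields.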