We have a new QA manager and he wants an idea of what sort of metrics should be pulled from Quality Center on an ongoing basis to evaluate a project's progress and to improve the QA process. I am the Quality Center admin, but have not been responsible for this sort of information before. Anyone have any suggestions?
Re: QA Metrics
I'd suggest you go through a formal process of requirements gathering to clearly define your metrics. You'll also need your QA team to use QC in a consistent manner so you don't skew the metrics.
Are you using QC 9 or 10? In QC 9 the dashboard is a royal pain. In version 10 it's much easier, as you can write SQL to get the reports you need rather than messing with KPIs.
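Just as a sketch of what I mean: if you can reach the project schema over ODBC, a few lines of Python will pull a report for you. The DSN name here is a placeholder, and the BUG table with its BG_SEVERITY/BG_STATUS columns is the typical QC schema, so verify the names against your own project database before relying on this.

```python
# Minimal sketch: defect count by severity, pulled straight from the QC
# project database. "qc_project" is a placeholder ODBC DSN; BUG,
# BG_SEVERITY and BG_STATUS are typical QC schema names -- verify them
# against your own project's schema.
import pyodbc

conn = pyodbc.connect("DSN=qc_project")  # placeholder connection string
cursor = conn.cursor()
cursor.execute("""
    SELECT BG_SEVERITY, COUNT(*) AS defect_count
    FROM BUG
    WHERE BG_STATUS <> 'Closed'
    GROUP BY BG_SEVERITY
    ORDER BY BG_SEVERITY
""")
for severity, count in cursor.fetchall():
    print(f"{severity}: {count} open defects")
conn.close()
```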
Re: QA Metrics
This question has two sides:
1. What metrics can you collect?
2. What metrics does management require?
I would like to answer Point number 2 first.
The objective of a QA metric is to reveal the product's quality. If a metric is not able to demonstrate something about the quality of the product, then there is no sense in gathering it.
I would recommend that you work with your management to find out what sort of metrics they want.
The metrics they require should assist them in making business decisions.
There are many types of metrics you can work on. In fact, Cem Kaner, in his article "Software Negligence and Testing Coverage", has listed more than 80 different types of metrics.
The above-mentioned thought comes from Global Software Test Automation: A Discussion of Software Testing for Executives.
Once you have understood the kind of metrics your management requires, you can work on preparing those metrics.
Additionally, the metrics which work for my organization or business line may or may not be of any use to your organization or business line.
My thoughts on Point #1
In terms of team management and testing progress, there are some metrics you may consider.
Test case related
1. How many test cases were designed against each of your n requirements?
This would assist you in understanding your test coverage.
You may consider combining Risk-Based Testing with it. (Check out Rex Black's studies and thoughts for a better understanding of Risk-Based Testing.)
2. How many test cases were planned and how many were executed each day?
This would assist you in working out an average rate of test execution progress. (A rough sketch of both calculations follows this list.)
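To make the two metrics above concrete, here is a minimal Python sketch of both calculations. The sample numbers are invented for illustration; in practice you would pull them from QC (for example from the REQ, TEST and RUN tables, though check your own schema).

```python
# Illustrative sketch of the two test-case metrics above. The sample
# data is invented; in practice these numbers would come out of QC
# (e.g. the REQ, TEST and RUN tables).

# Metric 1: requirements coverage -- how many requirements have at
# least one test case designed against them.
tests_per_requirement = {"REQ-1": 4, "REQ-2": 0, "REQ-3": 2, "REQ-4": 1}
covered = sum(1 for n in tests_per_requirement.values() if n > 0)
coverage_pct = 100.0 * covered / len(tests_per_requirement)
print(f"Requirements coverage: {coverage_pct:.1f}%")  # 75.0%

# Metric 2: average daily execution rate against the plan.
planned_per_day = [20, 20, 20, 20, 20]
executed_per_day = [18, 22, 15, 19, 21]
avg_executed = sum(executed_per_day) / len(executed_per_day)
avg_planned = sum(planned_per_day) / len(planned_per_day)
print(f"Average executed per day: {avg_executed:.1f} "
      f"(planned: {avg_planned:.1f})")
```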
Defect related
1. How many defects were raised?
2. What is the breakdown of these defects when filtered by severity and priority? (One way to compute this is sketched after this list.)
3. How many of these high-severity/high-priority defects were detected late in the testing cycle? Why were they detected late rather than early? Is there a problem with the way the test cases are executed? Are we executing the test cases in a random order, or are we covering the high-risk requirements first?
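Purely as a sketch, here is one way to compute that breakdown and the late-detection count from a defect export. The field names mirror typical QC BUG-table columns (BG_SEVERITY, BG_PRIORITY, BG_DETECTION_DATE), and the sample records and cycle dates are invented, so treat all the names as assumptions to check against your schema.

```python
# Sketch: severity/priority breakdown and late-detection count from a
# defect export. Field names mirror typical QC BUG-table columns
# (BG_SEVERITY, BG_PRIORITY, BG_DETECTION_DATE); the sample records and
# the cycle dates are invented for illustration.
from collections import Counter
from datetime import date

defects = [
    {"BG_SEVERITY": "1-Critical", "BG_PRIORITY": "1-High",
     "BG_DETECTION_DATE": date(2010, 3, 25)},
    {"BG_SEVERITY": "3-Minor", "BG_PRIORITY": "3-Low",
     "BG_DETECTION_DATE": date(2010, 3, 5)},
    {"BG_SEVERITY": "1-Critical", "BG_PRIORITY": "1-High",
     "BG_DETECTION_DATE": date(2010, 3, 2)},
]

# Metric 2: breakdown by (severity, priority).
breakdown = Counter((d["BG_SEVERITY"], d["BG_PRIORITY"]) for d in defects)
for (sev, pri), count in sorted(breakdown.items()):
    print(f"{sev} / {pri}: {count}")

# Metric 3: high-severity defects found in the last week of the cycle.
cycle_end = date(2010, 3, 31)
late = [d for d in defects
        if d["BG_SEVERITY"].startswith("1")
        and (cycle_end - d["BG_DETECTION_DATE"]).days <= 7]
print(f"High-severity defects detected late: {len(late)}")
```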
Requirement related
1. A traceability matrix would assist you in understanding how well your test cases do justice to the requirements. (A toy example follows this list.)
2. How many positive and how many negative test cases were created? (We should always execute the positive test cases first.)
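To show the idea, here is a toy traceability matrix as a simple mapping from requirements to test cases. All the IDs are invented; QC's Requirements module can produce the real thing for you via requirement-to-test coverage, so this is just to illustrate what the matrix tells you.

```python
# Toy traceability matrix: requirements mapped to the test cases that
# cover them. All IDs are invented; QC's Requirements module can
# generate the real matrix via requirement-to-test coverage.
traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],  # uncovered requirement -- a coverage gap
}

for req, tests in traceability.items():
    status = ", ".join(tests) if tests else "NOT COVERED"
    print(f"{req}: {status}")

uncovered = [req for req, tests in traceability.items() if not tests]
print(f"Uncovered requirements: {len(uncovered)} of {len(traceability)}")
```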
An article to explain some of my above thoughts!
I hope these thoughts help!