It all depends on what you, your QA department, and the development department are interested in. I tend to use a combination of defect metrics to feed into process improvement. Generally these are:
1 - The root cause of the fault
2 - Which phase of the SDLC the fault was introduced in
3 - Which phase of the SDLC the fault was detected in
4 - The application area
Item 1 helps to determine what area of our process is failing. Items 2 and 3 help to identify the phases of the SDLC that may need improving and also the "cost" of a fault.
Item 4 helps me to determine which areas of the application are least robust and may need more testing.
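To make this concrete, here is a minimal sketch (in Python, with made-up field names and values purely for illustration) of recording those four attributes per defect and tallying by any one of them:

```
from collections import Counter
from dataclasses import dataclass

# Hypothetical defect record capturing the four attributes above.
@dataclass
class Defect:
    root_cause: str        # 1 - root cause, e.g. "missing unit test"
    phase_introduced: str  # 2 - phase introduced, e.g. "requirements", "coding"
    phase_detected: str    # 3 - phase detected, e.g. "system test", "production"
    area: str              # 4 - application area, e.g. "billing"

def tally(defects, attribute):
    """Count defects grouped by one of the four attributes."""
    return Counter(getattr(d, attribute) for d in defects)

defects = [
    Defect("missing unit test", "coding", "system test", "billing"),
    Defect("ambiguous requirement", "requirements", "UAT", "billing"),
    Defect("missing unit test", "coding", "production", "reporting"),
]
print(tally(defects, "area"))            # least robust areas (item 4)
print(tally(defects, "phase_detected"))  # how late faults are being found (item 3)
```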
In terms of suggesting improvements, you need to collect and analyse the data, discuss it with the teams, and agree on changes (e.g. code reviews, more unit testing, continuous integration, etc.). Once these are in place, the metrics are collected again and compared against the previous metrics.
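That comparison against the previous metrics can be as simple as a per-cause delta; a rough sketch, with invented numbers purely for illustration:

```
def compare_counts(previous, current):
    """Change in defect count per root cause between two sets of metrics."""
    causes = set(previous) | set(current)
    return {c: current.get(c, 0) - previous.get(c, 0) for c in sorted(causes)}

# Invented counts per root cause, before and after introducing code reviews.
previous = {"missing unit test": 12, "ambiguous requirement": 7, "coding error": 20}
current  = {"missing unit test": 9,  "ambiguous requirement": 8, "coding error": 11}

print(compare_counts(previous, current))
# {'ambiguous requirement': 1, 'coding error': -9, 'missing unit test': -3}
```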
Analyzing the bug is an important part before filing any bug. For example:
There is a functionality for adding a user to the application.
After adding the user details, the application says success, but the data is not populated into the database. Then the first level to check is the server log, to see what error it has logged, and proceed based on that error message.
We give the steps to reproduce the bugs, not suggestions to developers on how to prevent the bugs.
Thanks to Mr. Matthews and Mr. Lidor for giving good information. Thank you, Mr. Vinod. After finding the root cause, what standards and metrics can we define and give to the developers? I don't want similar bugs to repeat again and again.
If you are interested in root cause, you can...
- produce a raw count of the number of defect reports attributed to each root cause
- produce a weighted count of the number of defects attributed to each root cause (e.g. a critical defect counts as 3, a major defect counts as 2, a minor defect counts as 1).
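As a rough illustration of the weighted count, assuming the 3/2/1 weights above and defects recorded as (root cause, severity) pairs:

```
from collections import Counter

# The 3/2/1 weighting suggested above.
WEIGHTS = {"critical": 3, "major": 2, "minor": 1}

def weighted_count(defects):
    """Sum severity weights per root cause; defects are (root cause, severity) pairs."""
    totals = Counter()
    for root_cause, severity in defects:
        totals[root_cause] += WEIGHTS[severity]
    return totals

defects = [
    ("missing unit test", "critical"),
    ("missing unit test", "minor"),
    ("ambiguous requirement", "major"),
]
print(weighted_count(defects))
# Counter({'missing unit test': 4, 'ambiguous requirement': 2})
```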
However, collecting and producing the metrics is only part of the story. You need to define the process by which these metrics will be used to improve the situation.
I usually suggest that a review of the metrics from previous projects is given as part of the kick-off meeting. This tends to focus people's minds on where improvements are required and what metrics are being collected.
I then include a summary of the metrics as part of my test report for each cycle. If there are any major concerns, I flag these up to the team leaders and project managers.
At the end of a project I arrange a meeting with project teams to review the metrics for the project and outline where we have improved and where we are still lacking. We then focus on the top 2-3 root causes and as a group discuss the practices and changes required to improve this performance on the next project.
The key is to focus on small improvements rather than attempt to change everything.
Additionally, you seem to have singled out the developers as the source of defects; I tend to find that errors are also introduced by the system designers, the customer (incorrect requirements), and even by us testers (e.g. mistakes in the scripts, errors when executing tests, etc.). It's vital that the root causes selected do not focus on any particular team.