Combining multiple defects in reports?
I've always been told that it is bad practice to combine more than one defect in a report, and as a tester I can see why. It makes the report very unwieldy to try to track more than one issue in a single report, and trackers are designed for a single issue per report. However, apparently this means more work for our developers, and our project manager is leaning toward taking their side on the issue because they want to limit the number of times our developers touch the code. So they think that QA should know when defects are related and clump them together.
References like Kaner kind of gloss over this and seem to assume everyone will agree that defect reports should not be clumped together, so they don't offer much in the way of help in explaining why this is important. How can I explain to my PM that this is important for everyone to go by and isn't just a matter of making QA's lives easier at the expense of Dev? Thanks.
For us, we're using a Jira workflow and do some pre-delivery testing.
In our Dev phase, we file our bugs as one-liner subtasks to the Jira story. Devs can check them off as they go along. The reason for this is that the code is nowhere close to release-ready, and there's a lot of activity we don't want to slow down.
But once the package has a formal version number and is delivered to QA, we write full bug reports for every issue. There are a number of reasons why we do this:
* Any unfixed issues need to be scheduled into a future sprint.
* Documentation of open bugs is used by our support teams.
* Any bug fixes past the official delivery need to be code reviewed and go through due process to ensure correctness.
Ask your project manager what you should do if you write a bug report with 12 issues in it, try to verify the fixes, and find that 3 are fixed, but 9 are not.
Should you just fail the entire bug report? Should you create a new bug report with the 9 unfixed issues?
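To make the bookkeeping problem concrete, here is a minimal sketch (the class and issue names are illustrative, not from any real tracker) of why a combined report can't express "3 of 12 fixed" the way single-issue reports can:

```python
from dataclasses import dataclass


@dataclass
class Report:
    """One tracker report containing one or more issues."""
    issues: dict  # issue name -> True if the fix verified, False otherwise

    def status(self):
        # A report can only close when every issue in it verifies.
        return "closed" if all(self.issues.values()) else "open"


# One combined report: 3 of 12 issues verify as fixed.
combined = Report(issues={f"issue-{n}": n < 3 for n in range(12)})
print(combined.status())  # "open" -- the 3 real fixes are invisible

# Twelve single-issue reports: each fix closes its own report.
singles = [Report(issues={f"issue-{n}": n < 3}) for n in range(12)]
print(sum(r.status() == "closed" for r in singles))  # 3 closed, 9 still open
```

The combined report stays "open" no matter how many fixes land, so the tracker can't show the partial progress that twelve individual reports show for free.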
It all seems foolish to me and more likely to generate confusion than to save anyone time, but I suppose anything could work (if not very well).
1 bug = 1 failed test-case objective = n test cases potentially blocked = 1 priority and severity.
Yes, Joe, she thinks we should. And if QA finds any new bugs when testing the fixes, those should get added to the existing report.
I know it's a bad idea, you know it's a bad idea, but I need to convince my PM of that. Any help?
How high up is this decision coming from? Most PMs like increased visibility and tracking. Related issues can usually be linked together easily in most bug tracking systems via a labeling/tagging or component field.
Ask her one question. "Do you want to take personal responsibility for issues that slip through the cracks?" Because that's exactly what happens to issues that are not explicitly tracked. If the developers want to review the bugs and say, "Yes, I can confirm these 2 bugs are caused by the same module.", then link the 2 issues. That would be better.
There's a ton of things that are "more work". But what's important is the end product over time, not per release. The same argument can be applied to code reviews, refactoring, and creating automated tests (both unit and end-to-end). The whole idea of having a formalized process is to do some "more work" upfront to save headaches down the line.
Sometimes the best way to convince a person is to do it her way, let it fail, then offer a better way.
Like Joe indicated, unless you have standards/guidelines that say otherwise, I'd recommend getting agreement with the team (dev lead, pm, etc.) on the protocol.
In my world, commercial software and SaaS, we make these kinds of judgements based on efficiency. If there are multiple issues that are naturally grouped, and assigned to the same person, then it's usually more efficient to group them. For example, multiple layout issues on the same page could be grouped together - it's likely the same developer, and it would be more effective to handle them all as a group.
One other way to convince a PM that it is a bad idea to assimilate many issues into one bug report is to keep the bug 'alive' / 'open' until all the issues listed are fixed. There is nothing more annoying to a PM than an 'Open' issue which refuses to go into the 'Verified and Closed' state.
Been there, done that - I have worked on a project where we (testers, developers, PM) agreed to put multiple bugs into a single report. There was a combination of reasons, but the two that stand out were that from the PM's and developers' sides they didn't want to see the defect count get out of hand, and from the testers' side we had moments where you only needed to open a screen and you saw multiple GUI bugs. It was easier to write a report saying, "This is an issue. And this. And this. And here's a single screenshot highlighting them all." So we were convinced to raise a smaller number of defects with multiple issues in them.
The result was exactly as Joe describes above: 5 issues in a bug report, only 4 fixed. So the bug report stays open. And open. And open. Eventually we ended up breaking the reports back into single-issue reports, and implemented a filtering system so that the bugs could be grouped by functionality and deliverable work packages. This then provided the developers and the PM with the result that you want - to be able to group like defects so that the developers can work on functional areas, and the PM can allocate discrete buckets of work to the devs.
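The filtering approach described above can be sketched roughly like this (the bug IDs and component names are hypothetical, and real trackers do this with a label or component field rather than dicts):

```python
from collections import defaultdict

# Hypothetical single-issue bug records, each tagged with a component.
bugs = [
    {"id": "BUG-101", "component": "login-page", "summary": "Label overlaps field"},
    {"id": "BUG-102", "component": "login-page", "summary": "Button misaligned"},
    {"id": "BUG-103", "component": "reports",    "summary": "Export times out"},
]

# Group by component so devs and the PM see buckets of related work,
# while each bug still tracks (and closes) independently.
buckets = defaultdict(list)
for bug in bugs:
    buckets[bug["component"]].append(bug["id"])

for component, ids in sorted(buckets.items()):
    print(f"{component}: {', '.join(ids)}")
# login-page: BUG-101, BUG-102
# reports: BUG-103
```

The point is that grouping is a view over single-issue reports, not a property baked into the reports themselves, so the PM gets the buckets without losing per-bug status.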
Lesson learned, and will never again group defects!