Estimated Times to Fix Bugs - Guidelines
In my organization, I'm the software QA/testing "go-to" person who writes test cases, executes them, writes up bug reports in Bugzilla, and verifies the fixes for those reports. I have a general sense of how much time passes from when a bug is reported (or reopened, if the fix failed) to when it is ready for test (or retest). Since I use Bugzilla, each reported bug is assigned both a severity level (Blocker, Critical, Major, Normal, Minor, Trivial, or Enhancement) and a priority level (P1 through P5). Based on both severity and priority guidelines, I would like to hear from others in the profession, particularly QA management: how many days should a developer take to fix a bug?
For example, a Blocker/P1 bug would require the bug to be fixed yesterday (OK yesterday ended last night, so we'll say ASAP), while a Normal/P3 bug would require approximately 3-5 days (business days, that is) under normal guidelines. Generally, the number of days required to complete a bug fix would increase as the severity and/or priority levels decrease.
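The kind of guideline described above can be sketched as a simple lookup table. This is a hypothetical illustration, not a recommended policy; the combinations and day counts (other than the Blocker/P1 and Normal/P3 examples from the question) are made-up placeholders that each team would need to calibrate for itself.

```python
# Hypothetical guideline table: (severity, priority) -> (min, max) target
# fix window in business days. Only Blocker/P1 and Normal/P3 come from the
# question; the other numbers are illustrative assumptions.
TARGET_BUSINESS_DAYS = {
    ("Blocker", "P1"): (0, 0),    # ASAP: same day
    ("Critical", "P1"): (1, 2),
    ("Major", "P2"): (2, 3),
    ("Normal", "P3"): (3, 5),     # the 3-5 business day example above
    ("Minor", "P4"): (5, 10),
    ("Trivial", "P5"): (10, 20),
}

def target_window(severity: str, priority: str) -> tuple[int, int]:
    """Return the (min, max) target fix window in business days.

    Combinations not in the table fall back to the Normal/P3 window.
    """
    return TARGET_BUSINESS_DAYS.get((severity, priority), (3, 5))
```

As the answers below argue, though, a table like this describes a target turnaround, not a prediction of how long any individual fix will actually take.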
I greatly appreciate your inputs. Thank you.
As well as the developer's workload (if he's got 5 Blocker/P1 bugs on his plate, it's going to take him longer to get all of them done), there's the complexity of the code the bug is in, how long it takes the developer to find the code that causes the bug, and whether the bug is something simple to fix or not.
I've seen blocking top priority bugs take days to fix because the actual problem was horribly complex and needed a lot of work. I've also seen trivial items be fixed within a day because the developer happened to be in that area of code for something else and fixed it while he was there.
The best you can do is ensure that bugs at the highest priority/severity levels are _assigned_ to a developer within a day of being written up. Anything beyond that depends on too many factors to be useful for estimation.
You don't know what you don't know. At the point a bug is filed, the developer doesn't yet know what went wrong, how long it will take to isolate the problem, or how long it will take to come up with a robust solution.
Developers also vary hugely, as in any research-intensive occupation. For example, take two developers on a team. One of them happens to have come from a company where he was on a team that implemented authentication. Now a bug shows up in your project that involves authentication. The difference in fix time between those two developers could be more than a week.
An analogy: would you ask a paralegal how long it would take to put together a defense for a client? You would hope that the person leading the firm understands what is important to that client's case and guides the paralegals and lawyers in prioritizing the possible defenses.
My thoughts are these:
* QA should make it clear they are not project management. They do not deal with timelines. It's up to the company as a whole to collect metrics and give project management objective tools for making their estimates.
* Development should not worry about how long a project or bug fix takes. Development is a research job, and knowing exactly how long something will take usually requires building a prototype of it. It's the team's job, with the product owner at the helm, to keep things prioritized. It doesn't matter how long something takes: if it's the most important thing the company needs to do, it needs to be done. If blocking factors are halting progress, it's up to the team to communicate that accurately and get the help they need. Whether a bug takes one day or one week, if fixing it is more important than the next feature in the queue, then it gets done. Simple as that.
* It's up to product management to understand some of the technical details and the relative complexity of different systems and issues. They don't need to be experts at code, but they do need to understand the difference between fixing a flawed underlying assumption in the data model and fixing a presentation-layer UI glitch, and the various levels in between. Through metrics and analytics, they will come to understand the expected timings better. Hopefully the organization is collecting statistics such as how long a ticket involving module X stays open, the code churn and volatility of certain modules, and the competency levels of team members. It's the project manager's job to know these things and let Dev and QA focus on their work.
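One of the statistics suggested above, average time-in-open per module, is easy to compute once tickets are exported from the tracker. This is a minimal sketch with made-up field names and sample data; a real report would pull the records from the bug tracker's database or API.

```python
from collections import defaultdict
from datetime import date

# Hypothetical exported ticket records: (module, date opened, date closed).
# The modules and dates below are invented purely for illustration.
tickets = [
    ("auth", date(2023, 1, 2), date(2023, 1, 9)),
    ("auth", date(2023, 2, 1), date(2023, 2, 4)),
    ("ui",   date(2023, 1, 5), date(2023, 1, 6)),
]

def mean_days_open(records):
    """Average calendar days that each module's tickets stayed open."""
    days_by_module = defaultdict(list)
    for module, opened, closed in records:
        days_by_module[module].append((closed - opened).days)
    return {m: sum(d) / len(d) for m, d in days_by_module.items()}
```

Tracked over time, numbers like these give project management an evidence base for the "guesstimates" mentioned earlier, instead of asking QA or Dev to commit to fixed day counts per severity level.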