I'm a Test manager on a big project. Together with the PM, I'm considering asking development to estimate the time it will take to fix each bug assigned to them, so we can track the time they spend on bug fixing.
One of my worries is that this can be very difficult to estimate, so that in the end the estimated time for fixing x number of bugs is very different from the actual time spent.
What's your experience with asking development to estimate bugs? Any methodologies to make the estimates more accurate?
One of our alternatives is to simply allocate a 'bucket' of time within each iteration to tackle bugs. After each iteration we would adjust the time.
Here's the issue with this model. If I say I'm going to fix bug X in an hour, but it turns out it's a HUGE problem, what then?
I remember working in eCommerce on a digital product delivery module. The app was basically on fire. Files were NOT being delivered and we couldn't figure out why.
Now, I'd be lying if I said I remembered the EXACT issue or EXACT reason, but the ASP code looked something like
<font class="small">Code:</font><hr /><pre>
' We need to fix this before it becomes a problem
orderNum = LEFT(orderNumber, 8)
</pre>
So, sure enough, once the order numbers went over 8 digits, everything blew up.
So, when placed under pressure to deliver, will they do the right thing and just say it can't be done on time? Or will we band-aid a solution with the best intentions to go back and fix it later?
The problem is that we often won't go back to it later.
I think it might be beneficial to ask something like, "Can you fix all of the highest-priority issues in this iteration?" I'm not sure I see an advantage to it, though, other than holding someone accountable for not fixing them.
I think that just having a bucket is a good idea. As long as you have a good prioritization system for bugs, then telling dev how much time you want to spend on bugs could be the best way.
9 out of 10 people I prove wrong agree that I'm right. The other person is my wife.
I think you should first track how much time the developers spend on different bugs. Then you can learn which developers are faster, which features tend to contain more complicated bugs, and what kinds of bugs require more time.
This way you would be able to estimate in the future how much time it should take to fix bugs.
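The tracking idea above can be sketched in a few lines. This is a minimal, hypothetical sketch (the record shape and numbers are made up, not from the thread): given past records of who fixed what and how long it took, compute average fix time per developer and per component.

```python
from collections import defaultdict

# Hypothetical historical records: (developer, component, hours spent fixing).
fixed_bugs = [
    ("alice", "checkout", 2.0),
    ("alice", "delivery", 6.5),
    ("bob", "checkout", 3.0),
    ("bob", "delivery", 8.0),
    ("alice", "checkout", 1.5),
]

def average_fix_time(records, key_index):
    """Average hours per group, where the group is one field of the record."""
    totals = defaultdict(lambda: [0.0, 0])  # group -> [sum of hours, count]
    for record in records:
        bucket = totals[record[key_index]]
        bucket[0] += record[2]
        bucket[1] += 1
    return {group: total / count for group, (total, count) in totals.items()}

by_developer = average_fix_time(fixed_bugs, 0)   # e.g. is alice faster than bob?
by_component = average_fix_time(fixed_bugs, 1)   # e.g. is 'delivery' the hot spot?
```

Even this crude grouping makes the pattern visible: in the toy data, 'delivery' bugs take several times longer than 'checkout' bugs, which is exactly the kind of signal you'd want before estimating.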
I think Christian's last method is the most effective way of doing it: allocating a bucket of time for each iteration and adjusting or fine-tuning it over time. I have used this method and it worked better than other approaches.
The issue with asking development to commit to a time per bug is that the time depends on many factors, such as the phase of the project and the maturity of the product and the team. In most cases there is more than one way to fix a bug: you can do a quick 'hack', or fix it 'solid' so that it does not introduce new, hard-to-find defects. When a defect is being fixed, especially at an early stage of a project, development also spends time making general improvements to the code, and that time is not exactly spent on fixing that bug.
However, when a product matures and you have a stable team (both dev and QA), you can get a general idea of how long a bug takes to fix by analyzing past data. In my experience, at least, this turns out to be reasonably accurate. That time can then be used to allocate the defect 'bucket' for each phase or iteration.
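Sizing the bucket from past data can be as simple as multiplying the historical mean fix time by the expected bug inflow, with a safety margin for the occasional bug that turns out to be a huge problem. A minimal sketch with made-up numbers (all figures here are assumptions for illustration):

```python
# Hypothetical past data: hours actually spent on each closed bug.
past_fix_hours = [1.5, 3.0, 8.0, 2.0, 4.5, 2.5]

# Expected number of new bugs in the next iteration, e.g. from recent inflow.
expected_bugs = 10

# Historical mean fix time.
mean_hours = sum(past_fix_hours) / len(past_fix_hours)

# Pad the bucket so one unexpectedly hard bug doesn't blow the iteration.
safety_margin = 1.25

bucket_hours = expected_bugs * mean_hours * safety_margin
```

The point is not the exact formula but that the bucket is derived from measured history and re-tuned after each iteration, rather than from per-bug guesses made up front.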