Capturing time in testing
Here is my problem.
At the end of every project that we do testing and QA for, I want to show the actual time that people spent on testing it compared with what was estimated.
For example, we squeezed 5.5 weeks' worth of testing into 4 weeks by working overtime and weekends. What DEV leads see is that testing is done, but not what effort it took. And that's a problem.
Can anyone share from experience how to capture the actual time we spend testing versus how it's perceived we are doing it?
It's good to use a test case management system that supports timing. Most do; for those that don't, you can create a custom field in the test results to record the time used.
That way you can add up the hours spent running test cases, then add some padding to account for time between tests, writing tests, and administrative items.
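If your tool can export results to a spreadsheet or CSV, a few lines of script will do the roll-up for you. A rough sketch (the file and column names here are just an example - adjust to whatever your export actually contains):

# Rough sketch: roll up actual execution time from a test-tool export.
# "tester" and "duration_minutes" are example column names, not a real
# export format.
import csv
from collections import defaultdict

PADDING = 0.20  # allowance for setup, admin and gaps between tests

hours_by_tester = defaultdict(float)
with open("test_results_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        hours_by_tester[row["tester"]] += float(row["duration_minutes"]) / 60

for tester, hours in sorted(hours_by_tester.items()):
    print(f"{tester}: {hours:.1f}h logged, {hours * (1 + PADDING):.1f}h with padding")

total = sum(hours_by_tester.values())
print(f"Team total: {total:.1f}h logged, {total * (1 + PADDING):.1f}h with padding")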
If you've squeezed 5.5 weeks' worth of testing into 4 weeks then that's a fantastic achievement, but perhaps not one that I would want to emulate with my team for every release; it doesn't sound like particularly good planning.
Why is Dev's perception a problem? What are your test planning and prep processes (including estimating the time to test) leading up to the execution phase? If you want to compare estimated time to actual, is it _your_ estimate that you're comparing, or did someone else do the estimate for you (e.g. Dev, PM, etc.)? Why is the estimate so badly wrong? Is the estimate based on a risk assessment, perhaps using a MoSCoW analysis (Must test, Should test, Could test, Won't test)? Have you included enough contingency in your estimates for expected and unexpected delays - writing defects, defect fixes, retesting, late changes, system downtime, etc.? Did you clarify the scope of your test schedule with your stakeholders beforehand, including the risks of what you won't be testing in that release?
As mentioned above by @dlai, a proper test management tool, e.g. Quality Center/ALM, should be able to provide you that information. Moreover, the information is stored in the tool for future reference as well. In the absence of such a tool, there is little option but to rely on your own notes of when the testing started and when it concluded.
I second/third the above. Most test management tools or ALM solutions have this built in, or it can be added with simple customization.
I will be candid and suggest you see if PractiTest (SaaS Test Management Tool and QA Management Tools) can help you out, as I work with them.
You can use anything that tracks time to capture the information. There are some half-decent freeware tools out there, or you could customize a test tracking tool to include time spent. Depending on how complex your situation is, you could simply go the Excel spreadsheet route, too.
The key here I think is the *reason* you're having to squeeze 5.5 weeks of testing into 4. What you probably want to capture for that is:
a) Who provided the initial test estimate - particularly whether this was a tester, how experienced/familiar the person was with the area being tested, how experienced/familiar they were with testing similar kinds of projects.
b) Whether developer milestones were reached or whether your team found itself squeezed between late code delivery and an inflexible release date.
c) Whether the release date/completion date shifted for reasons outside your team's control, leaving you short of time.
d) What (if any) unexpected problems occurred during the project. This can be anything: a developer breaks his leg and is off work for a week; a tester has a baby; there's a bad stomach virus going around and half the building is out sick; there's a series of storms that close the office; a key supplier goes bankrupt... (I've dealt with most of these. Not the bankruptcy, but I have had third parties I need to interact with take forever to get back to me, delaying the project)
e) Whether there was any allowance made for regression impact on the product. This gets really "interesting" with large, complex enterprise applications (been there, living that). Nothing is as simple as it seems - so much so that the rule of thumb I use in this kind of application is to start with my best serious estimate, then double it to account for the fact that the system is so complex there will be that many things I miss. If there's a third party it needs to interface with, double my estimate again. This *usually* ends up being in the same order of magnitude as my actual time testing. I live in hope that I'll find myself in a situation where I can refine the estimates a bit more based on experience and have the refined estimate respected.
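To put rough numbers on that rule of thumb (purely illustrative, not a formula to rely on):

# Purely illustrative numbers for the doubling heuristic above.
base_estimate_days = 10                  # best serious estimate for the work
estimate_days = base_estimate_days * 2   # double: things the complexity will hide
has_third_party_interface = True
if has_third_party_interface:
    estimate_days *= 2                   # double again for the third-party interface
print(f"Planning figure: {estimate_days} days (from a base estimate of {base_estimate_days})")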
Thanks everyone for the ideas. I was thinking initially of tracking it in QC, but then how do you account for downtime, meetings, test case writing, investigation of prod defects, etc.? While I can certainly capture who did what and when during execution, padding is not really a good way to give accurate information when it comes to effort spent.
meridian_05, katepaulk - The perception problem is that when we give original estimates, there is always a tug of war between us and dev to get it done faster than we planned. Of course that is only natural, but it gets to a point where we are at the bare minimum we need to test and start asking to strip functionality out of the release. At that point DEV and the business agree with us on the date. This date usually includes time to write test cases, execution, and 20% padding for unforeseen circumstances.
What happens during execution is that DEV delivers late or in parts, environment downtime hurts, data is not set up, functionality is not properly understood, etc., but the QA end date does not move. I know it's a problem, but the perception from DEV leads is that we are constantly complaining and that we are never on time.
We do a PIR where we highlight 100+ hours of environment downtime in 5.5 weeks of testing, 50+ drops during that time, the fact that QA sometimes knows the functionality better than DEV or the BA, and so on. They never look at it as a team effort; all we hear is that QA did not do it well.
No one wants to push the delivery date and tell the business that the release is slipping, so everyone has to 'do their part', which usually means QA working overtime to compensate.
Looking at what is available, I think something like a timesheet would work. We do pretty good estimation, but it goes out of the window within 2-2.5 weeks of starting execution.
On the face of it, it sounds like you're doing extremely well. How is the PIR being received by the PM? Is s/he supportive of Test and the effort that you're putting in to keep the schedule on track?
You might need a combination of trackers: you've got your execution time tracked in QC, and individual timesheets (even if kept informally in an Excel sheet by each tester) should show the non-execution time they use. You'll also need to keep system downtime separate - unless your testers are twiddling their thumbs during downtime.
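If you do go the informal timesheet route, even a simple roll-up by category is enough to show where the time went. A toy example (the columns and category names below are only an assumption, use whatever split suits your team):

# Toy roll-up of an informal timesheet kept as CSV.
# Assumed columns: date, tester, category, hours
# with category values like execution, test_writing, defect_investigation,
# meetings, blocked, downtime - keeping downtime as its own bucket.
import csv
from collections import defaultdict

totals = defaultdict(float)
with open("qa_timesheet.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["category"]] += float(row["hours"])

for category, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:>20}: {hours:6.1f}h")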
You could also consider some tester love for the Dev team. Is there anything they do (data, system config, etc.) that your team could take off their hands and do earlier? This isn't just about buying additional time for testing during your execution phase, but a start to getting them onside perception-wise.
Or, if you want to play hardball, agree with Dev and the business upfront about how much system downtime is being allowed for in your plans, and make it clear that every hour over that will result in a test being removed from your scope. Create a web dashboard for your execution phase that updates system downtime in real time and publish it on your intranet...
You absolutely need to track time spent writing test cases, downtime, blocked time, meeting time, time spent investigating defects and so forth. How you do it doesn't matter.
meridian_05's idea of a dashboard is a good one (it doesn't have to be web-based, either). If you've got a big whiteboard, use that to show a kind of burndown with the total QA time allocated to the project and where that time is going. System downtime counts towards it, as does blocked time (where a problem with the application prevents further testing), meetings, test case writing... any activity spent on the project by QA team members contributes to the chart showing how much time has gone into the project, how much time is left before the release date, and how much extra time you're going to have to find somewhere to make the release date.
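The arithmetic behind that chart is simple enough to keep in a spreadsheet or a few lines of script. Something like this (every number here is invented, just to show the idea):

# Rough arithmetic behind the whiteboard burndown - all numbers invented.
budgeted_hours = 220        # total QA time allocated to the project
hours_spent = 180           # execution + test writing + meetings + downtime + blocked
work_remaining = 90         # testing hours still to do, from the test plan
days_to_release = 4
team_hours_per_day = 3 * 8  # e.g. three testers at eight hours a day

capacity_left = days_to_release * team_hours_per_day
projected_total = hours_spent + work_remaining
overrun = max(0, projected_total - budgeted_hours)
overtime_needed = max(0, work_remaining - capacity_left)

print(f"Projected total effort: {projected_total}h against a budget of {budgeted_hours}h")
print(f"Overrun to explain: {overrun}h; extra hours to find before release: {overtime_needed}h")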
QA getting squeezed is pretty common, as is QA being the "invisible man" unless something goes wrong, in which case it's all QA's fault. Your only real option to combat this is to provide lots of information *without* complaining. So, when DEV is late delivering, you tell all concerned: "Okay, we can handle this. Here's the list of what we're planning to test, by priority. We're going to have to drop some of the lower-priority tests to make the release date; these are the risks involved in dropping them. Is everyone good with these risks?" If the answer is "no", you negotiate. The key is that you provide the information as early as you can and you keep providing it.
The tone you use is crucial: always keep it to "just the facts, ma'am". The other big thing with this tactic is to provide information early as well as often. Before your team starts testing, have your estimated time out there, with planned start and end dates. Use a simple presentation method. On some of the uglier projects I dealt with, I'd report status starting with an overview that said "Testing for Project X is RED/YELLOW/GREEN", where RED meant there were serious problems and the project was likely to miss the targeted due date, YELLOW meant there were problems and the project could miss the due date, and GREEN meant that everything was going well. After that I'd list the details: anything that was blocking testing, anything that was late getting to me, any problem areas I knew were coming up, any bugs I'd reported against the project and how severe they were, any regression issues caused by the project, and any cases where I needed information and hadn't been able to get it. Problems were listed with the most severe first, together with a statement of probable impact.
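For example, an update might look something like this (all of the details are invented, just to show the shape of it):

Testing for Project X is YELLOW
Blocked: payment screens untestable since Tuesday - test environment down
Late: reporting module delivered 3 days behind schedule
Upcoming risk: performance testing not yet started
Defects: 2 severity-1 and 5 severity-2 open against this release
Regression: order entry affected by the new tax calculation
Awaiting info: no response yet from the BA on requirement 4.2 (blocking 6 tests)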
It's a bit more work on your end to pull these status updates together, but they tend to make what you're doing rather more visible to the rest of the team - which in turn means you're seen less as complaining all the time and more as valuable team members.
They appreciate it for about a minute, then say 'well done doing your job' and then go back to their modus operandi. Nothing really changes.
meridian_05 - not going to happen. There is always downtime, but we can't move the dates. The DEV perception is that when there is downtime, we are sitting on our hands. I am at the point where I am going to ask my dev director to allocate the DEV lead's time to QA for 2 days so he could see what actually happens during release testing. I also took my problem to the dev director and asked him to help drive the point home for dev that QA is not a service provider to DEV but a teammate.