Testing productivity benchmark
Hi. Can anyone help me understand what the expected metric values are for Test Case Preparation productivity and Test Case Execution productivity? What would be the industry best-practice values for those?
Please help me with numbers.
Best practice? Or Good Practice within the context of your company, domain, project? (It's a rhetorical question...)
There's no realistic answer that anyone can give you. There are so many variables in what you seek that any numbers will be meaningless:
- What type of software? Enterprise software is a different kettle of fish to an app
- What test methodology? Agile, waterfall, etc.?
- What do you want to compare Test Case Preparation productivity to? Hours? Lines of code?
- How large/detailed are your test cases? Are they being written for the "man off the street" or for an experienced test team?
- Test Case Execution productivity - are you simply executing a test script, or are you including exploratory testing?
- What's your estimated time for raising defects and retesting, and is it included?
Short answer: there's no such thing.
Longer answer: There's no such thing because every organization is different. Within each organization every project is different, every developer is different, and every tester is different. The test cases and test execution needed for a complex back end service integration written by expert developers will bear no resemblance to the test cases and test execution needed for a heavily display-based web site, and so forth. On top of that, you have the question of who has done a better job: the tester who finds a handful of minor problems (which are then fixed) in a project that releases and has no customer-reported issues, or the tester who finds one big show stopper (which is then fixed) in a project that releases and has no customer-reported issues.
What you're asking for is a convenient system for your testers to game. You'll get that, and your testers will probably collect all the rewards built into it - but it won't tell you anything about how effective your testers are.
Stop trying to compare apples and plastic bananas, and focus on listening to your testers and interacting with them.
I spend a lot of time in the world of metrics/KPIs, and these productivity indicators don't really make sense for one simple reason: they only monitor the current situation, but they won't help you or your manager change that situation.
I would look instead at the number of repeat problems as a metric.
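To illustrate, a "repeat problem" can be counted as any defect signature that is reported more than once. A minimal sketch, assuming a hypothetical record format of (component, root_cause) pairs pulled from your defect tracker; adapt the fields to whatever your tracker actually stores:

```python
from collections import Counter

def repeat_problem_count(defects):
    """Count distinct defect signatures reported more than once.

    `defects` is a list of (component, root_cause) tuples -- a
    hypothetical format for illustration, not any tracker's real schema.
    """
    counts = Counter(defects)
    return sum(1 for n in counts.values() if n > 1)

# Example: the (login, validation) problem came back after a "fix",
# so it counts as one repeat problem.
defects = [
    ("login", "validation"),
    ("checkout", "timeout"),
    ("login", "validation"),
]
print(repeat_problem_count(defects))  # prints 1
```

Unlike a preparation-speed number, a rising repeat count points at something actionable: fixes that don't stick, or regression coverage gaps.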