Industry Standard for Testing Metrics on Maintenance Project
I need industry benchmarks for the testing metrics below. Can you please help with the data, or point me to a reference that has the details? I know it depends on the project, but I still need some reference for my project as soon as possible.
1. Defect Quality = (Valid defects / Total defects) x 100%
2. Test case design Productivity
3. Test Execution Productivity
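For clarity, here is how these three metrics are typically calculated. This is a minimal sketch; the productivity definitions assume "test cases per hour of effort", which is a common convention but varies by organization:

```python
def defect_quality(valid_defects, total_defects):
    """Defect Quality %: share of reported defects that turned out to be valid."""
    if total_defects == 0:
        return 0.0
    return valid_defects / total_defects * 100.0

def design_productivity(cases_designed, design_hours):
    """Test case design productivity: cases designed per hour of effort."""
    return cases_designed / design_hours

def execution_productivity(cases_executed, execution_hours):
    """Test execution productivity: cases executed per hour of effort."""
    return cases_executed / execution_hours

print(defect_quality(45, 50))           # 90.0 (% of defects that were valid)
print(design_productivity(40, 16))      # 2.5 cases designed per hour
print(execution_productivity(120, 24))  # 5.0 cases executed per hour
```

The example figures (45 valid of 50 defects, etc.) are made up for illustration, not benchmarks.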
Which Industry? Manufacturing, finance, retail, oil & gas, etc? Web design, app development, ERP, big data, etc?
What type of maintenance project? CI, weekly/monthly/quarterly releases?
Defect quality %? Surely this depends on the experience of your team, the way that you define defects, your defect management flow, etc?
Test case design? TDD, BDD, Agile, exploratory, checklists, unit/integration/UAT tests, etc?
Test execution productivity? See above - TDD, BDD, Agile, etc.
It looks like you're trying to use metrics to measure the performance of your test team, but if you look at the sheer number of permutations of SDLC and test strategy, you will see that any sort of "standard" is impossible, undesirable, and misleading for your context.
If anyone finds any, it would be good to know. I've tried searching academic catalogs and case studies in my B-school's online library, but couldn't find any good data from anything recent. It would certainly help us consultants justify our bills if we could demonstrate true improvement over an industry average.
Like meridian mentioned, industries, technologies, development processes, and even the skill levels of developers vary all over the place. You could take a mean, but the standard deviation would be so great that the number wouldn't be statistically meaningful.
Most of the published metrics out there were gathered back in the mid-80s, and the playing field is a lot different these days. For example, the often-cited statistic that a bug found in production costs up to 1000x more to fix than one found during requirements dates from 1988. Back then we didn't have powerful frameworks such as today's MVC frameworks, so developers were writing every single line of code by hand. QA had a lot more time for test planning and test case design, and the luxury of gathering detailed stats. These days, I hardly have enough time to write a simple, detailed bug report.
Thank you, Meridian. I am completely in line with your thoughts. But I need this data to make sure we are not at the wrong end of the scale, and to avoid committing to the wrong expectations with the stakeholders. The numbers would be for reference only, to safeguard the team from unrealistic expectations.
Please find the details below. Let me know if they can help us arrive at some baseline.
Which industry? - Retail, predominantly desktop applications with a few web applications
What type of maintenance project? - Monthly releases
Test case design? - Functional, integration, and regression testing are in scope
Test execution productivity? - Functional, integration, and regression testing
What would really help is knowing the design and execution productivity for simple and complex test cases.
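One way to get a productivity split by complexity is to build a baseline from your own historical release data rather than an industry figure. A minimal sketch; the release records below are entirely hypothetical placeholders for your own past data:

```python
# Hypothetical historical records: (complexity band, test cases, effort hours)
history = [
    ("simple", 120, 30),
    ("complex", 40, 25),
    ("simple", 90, 20),
    ("complex", 30, 20),
]

def baseline_per_complexity(records):
    """Cases-per-hour baseline, computed separately for each complexity band."""
    totals = {}
    for complexity, cases, hours in records:
        c, h = totals.get(complexity, (0, 0))
        totals[complexity] = (c + cases, h + hours)
    return {band: round(c / h, 2) for band, (c, h) in totals.items()}

print(baseline_per_complexity(history))
# {'simple': 4.2, 'complex': 1.56}
```

Tracking the two bands separately avoids the averaging problem raised earlier in the thread: a single blended number hides the fact that complex cases take several times longer than simple ones.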
Well, let's start at the beginning then - what were your statistics/data from your last project that was exactly the same as this one?