# Thread: Calculation of Test Efficiency

1. ## Calculation of Test Efficiency

Hi all,

Does anyone have an idea of how to calculate Test Efficiency after the completion of a project?

I do not know the exact definition or formula, but I searched some websites and found it defined as the ratio of the number of bugs found by QA during the project to the number of bugs missed (i.e. found by the client in UAT).

Of course, it is not a simple ratio of the two counts: each bug is assigned points based on its severity, and the calculation is based on those points.

But I have no idea of the exact formula for the efficiency. Any ideas in this regard?
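To illustrate what I mean, here is a sketch only; the weights and names below are made up, not from any standard:

```python
# Hypothetical severity weights; real values would come from your QA process
WEIGHTS = {"critical": 8, "major": 4, "minor": 2, "trivial": 1}

def weighted_points(bugs):
    """Sum severity points over a list of (bug_id, severity) tuples."""
    return sum(WEIGHTS[severity] for _, severity in bugs)

qa_bugs = [(1, "critical"), (2, "minor"), (3, "major")]   # found by QA
uat_bugs = [(4, "major")]                                  # missed, found in UAT

efficiency = weighted_points(qa_bugs) / (
    weighted_points(qa_bugs) + weighted_points(uat_bugs)
)
# 14 / (14 + 4), roughly 0.78
```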

Pragathi

2. ## Re: Calculation of Test Efficiency

Is this the efficiency of testing, or its effectiveness? The formula you describe reads more like a measure of effectiveness, judged by defects found in later phases. That is closely related to DDP (Defect Detection Percentage), which gives a simplistic model of the relative effectiveness of each preceding stage, assuming no other sources of variance.
Efficiency and effectiveness are not always interchangeable.
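For what it's worth, a minimal sketch of DDP under that simple model (the function name and counts are illustrative, not a standard API):

```python
def ddp(found_in_phase, found_later):
    """Defect Detection Percentage: share of all eventually-known
    defects that this phase caught, as a percentage."""
    total = found_in_phase + found_later
    if total == 0:
        return 0.0
    return 100.0 * found_in_phase / total

# e.g. QA found 40 defects and the client found 8 more in UAT:
# ddp(40, 8) -> roughly 83.3
```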

3. ## Re: Calculation of Test Efficiency

I ran 100 tests and found 40 errors during system testing. The error rate was 40%.

In production, they found 8 errors. The error rate in production was 8%.

The # of test cases run is used for both so you are comparing apples to apples.

We generally expect a 10% error rate in production or less. Right now, it runs at about 3%. We look at every error to figure out why we missed it!

This is a very simple method, but it will give you some useful statistics.
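In code, that works out to something like this (names are illustrative only):

```python
def error_rate(errors_found, test_cases_run):
    """Errors found per test case run, as a percentage."""
    return 100.0 * errors_found / test_cases_run

system_test_rate = error_rate(40, 100)   # 40.0%
production_rate = error_rate(8, 100)     # 8.0%

# Flag when production exceeds the expected ceiling (10% in my example)
needs_review = production_rate > 10.0    # False here
```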

- Linda

4. ## Re: Calculation of Test Efficiency

To add to Neill's point, I would highlight a few issues: bugs found by the client are only a subset of the bugs missed; moreover, you do not yet know which bugs the customer will find at the completion of the project unless you include a support phase in the project; not to mention that not all bugs reported by the client were even detectable by a tester.

Now, about the approach itself: I would suggest also calculating fixed vs. deferred bug statistics, to evaluate the quality, not only the quantity, of the bugs reported.

Also a warning: if you use defect statistics to measure each person's contribution, it may lead to degraded morale, gaming of the measurements, etc. You should also understand that different testers/projects test code of different complexity, work from requirements/designs of different quality, and face different customer expectations of quality.
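A sketch of the fixed vs. deferred split (the field names are hypothetical, not from any tracker's API):

```python
from collections import Counter

# Hypothetical bug records exported from a tracker
bugs = [
    {"id": 1, "status": "fixed"},
    {"id": 2, "status": "deferred"},
    {"id": 3, "status": "fixed"},
    {"id": 4, "status": "fixed"},
]

counts = Counter(b["status"] for b in bugs)
fix_rate = 100.0 * counts["fixed"] / len(bugs)  # 75.0%
```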

5. ## Re: Calculation of Test Efficiency

Originally posted by ljeanwilkin:
I ran 100 tests and found 40 errors during system testing. The error rate was 40%.

In production, they found 8 errors. The error rate in production was 8%.

The # of test cases run is used for both so you are comparing apples to apples.

We generally expect a 10% error rate in production or less. Right now, it runs at about 3%. We look at every error to figure out why we missed it!

This is a very simple method, but it will give you some useful statistics.

- Linda
Linda,

I'm still a bit confused...

Are you saying that you ran the same 100 Test Cases a single time, after the system went live?

And that 8% of those test cases still failed?

And that you expect 10% or less to fail?

6. ## Re: Calculation of Test Efficiency

Linda,

How would you count the errors found in production outside of the 100 Test Cases?
