  1. #1
    Member
    Join Date
    Mar 2002
    Posts
    97

    Test Cases vs. Defects Reported Stats?


    In general, are there any stats on how many defects per test script are typically reported for a new project? Is this a good measure of whether the development team did their unit testing properly before handing the code over to QA? For example, we had a total of 88 test cases, and the total number of defects reported was 105.
    Thanks

  2. #2
    Moderator Joe Strazzere's Avatar
    Join Date
    May 2000
    Location
    USA
    Posts
    13,170

    Re: Test Cases vs. Defects Reported Stats?

    [ QUOTE ]
    In general, are there any stats on how many defects per test script are typically reported for a new project?

    [/ QUOTE ]

    42?

    But seriously, wouldn't it depend on the project?
    Wouldn't it depend on how you define and go about crafting test scripts?

    [ QUOTE ]
    Is this a good measure of whether the development team did their unit testing properly before handing the code over to QA?

    [/ QUOTE ]

    I don't think it is.

    I could imagine cases where "defects per test script" was high or low without that implying development did a poor or a good job.

    [ QUOTE ]
    For example, we had a total of 88 test cases, and the total number of defects reported was 105.

    [/ QUOTE ]

    So is "105 per 88" a good thing or a bad thing?

    What if the test cases were 1 line each?
    What if the test cases were 10000 lines each?
    What if the duration of the test period was 1 day?
    What if the duration of the test period was 3 years?

    Would any of those change the assessment?
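    The point above can be illustrated with a quick numeric sketch (the script sizes and durations are invented for illustration, not from the thread): the same raw "105 per 88" ratio produces wildly different defect densities once you normalize by script size and test-period length.

    ```python
    # Sketch: the same raw ratio of defects to test cases yields very
    # different defect densities once normalized by script size and
    # test-period length. All normalizing numbers are hypothetical.

    def defect_density(defects, test_cases, lines_per_case=1, days=1):
        """Defects per line of test script per day of testing."""
        return defects / (test_cases * lines_per_case) / days

    # Same "105 per 88" raw ratio in both scenarios:
    one_liner_scripts = defect_density(105, 88, lines_per_case=1, days=1)
    huge_scripts = defect_density(105, 88, lines_per_case=10000, days=3 * 365)

    print(f"{one_liner_scripts:.4f}")  # ~1.19 defects per script-line per day
    print(f"{huge_scripts:.10f}")      # vanishingly small, for the same raw ratio
    ```

    The raw ratio is identical in both cases; only the context tells you whether it is alarming.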
    Joe Strazzere
    Visit my website: AllThingsQuality.com to learn more about quality, testing, and QA!

  3. #3
    Advanced Member
    Join Date
    Jun 2008
    Location
    Israel
    Posts
    594

    Re: Test Cases vs. Defects Reported Stats?

    Joe's comments are correct: without context or a comparison benchmark, you cannot hand out grades to your development process.

    One thing we used to do at a company I worked for was to keep track of different metrics over time and then compare projects with their predecessors, or with parallel projects in the company that shared their main characteristics (complexity, timelines, team experience, etc.).

    The whole idea was to make sure we were comparing apples to apples, not apples to oranges.

    This approach was still not flawless, since there were always extra characteristics to factor in, but it was our starting point.

    There is more about the metrics processes and practices we implemented, and still implement today, that I wrote about here, here and here; maybe you will find something useful there too.
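    The peer-comparison approach described above can be sketched roughly as follows (all project data and grouping keys here are invented for illustration):

    ```python
    # Sketch of "apples to apples" benchmarking: compare a project's
    # defects-per-test-case ratio only against peer projects that share
    # its main characteristics. The data below is hypothetical.

    from collections import defaultdict

    projects = [
        {"name": "A", "complexity": "high", "team": "senior", "defects": 105, "cases": 88},
        {"name": "B", "complexity": "high", "team": "senior", "defects": 140, "cases": 120},
        {"name": "C", "complexity": "low",  "team": "junior", "defects": 30,  "cases": 90},
    ]

    # Group projects by the characteristics they share.
    peer_groups = defaultdict(list)
    for p in projects:
        peer_groups[(p["complexity"], p["team"])].append(p)

    # Within each peer group, compare every project against the group average.
    for key, peers in peer_groups.items():
        ratios = {p["name"]: p["defects"] / p["cases"] for p in peers}
        baseline = sum(ratios.values()) / len(ratios)
        for name, ratio in ratios.items():
            print(f"{key}: {name} = {ratio:.2f} vs peer average {baseline:.2f}")
    ```

    A project is only flagged when it deviates from projects that genuinely resemble it, not from the whole portfolio.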
    -joel
    9 times out of 10, less is actually more

    PractiTest - QA and Test Management Tool
    QABlog - QA Intelligence Blog

  4. #4
    Moderator
    Join Date
    Sep 2001
    Location
    Yankee Land
    Posts
    4,055

    Re: Test Cases vs. Defects Reported Stats?

    I'd also like to add that unless you can differentiate bugs that come from the test scripts themselves from bugs in the application you are testing, any metrics you produce are going to be skewed.
    - M

    Nothing learns better than experience.

    "So as I struggle with this issue I am confronted with the reality that nothing is perfect."
    - Unknown

    Now wasting blog space at QAForums Blogs - The Lookout

  5. #5
    Member
    Join Date
    Feb 2009
    Posts
    38

    Re: Test Cases vs. Defects Reported Stats?

    Also, the total number of bugs discovered in a revision can simply be a weak indicator. What proportion of those bugs were "severe"? What proportion were frivolous, very low priority, or really suggestions disguised as bugs?

    Let's say I get a build. Step 1 is "launch the application within a pristine XP environment". When I launch the app, it destroys the registry completely and slaps my momma just for good measure. That's a pretty decent (albeit not perfect) indicator that unit testing was not done. Yet that's just one bug, and you're done testing that revision.

    There could be problems between you and the developer in interpreting the specs. There could be software packages development has installed that you don't have. There could be a unit-test flag dev forgot to flip before turning the code back over. Just looking at raw bugs generated per test script will not help you figure any of this out.
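    The severity point above can be made concrete with a small sketch: weight defects by severity before comparing counts. The labels and weights here are invented for illustration, not a standard scheme.

    ```python
    # Sketch of severity-weighting: 100 trivial reports shouldn't
    # outweigh one crash. Weights and labels are hypothetical.

    SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 1, "suggestion": 0}

    def weighted_defect_count(severities):
        """Sum severity weights instead of counting raw defect totals."""
        return sum(SEVERITY_WEIGHT[s] for s in severities)

    release_a = ["critical"] + ["suggestion"] * 99   # 100 raw defects
    release_b = ["minor"] * 20                       # 20 raw defects

    print(weighted_defect_count(release_a))  # 10 - one severe bug dominates
    print(weighted_defect_count(release_b))  # 20 - raw counts alone would mislead
    ```

    By raw count release A looks five times worse than release B; weighted, the picture flips, which is exactly why raw totals are a weak indicator.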

  6. #6
    SQA Knight
    Join Date
    Aug 2000
    Location
    Elanora Heights, NSW, Australia
    Posts
    3,271

    Re: Test Cases vs. Defects Reported Stats?

    Metrics are just stats.

    You know the saying: "Lies, damned lies, and statistics."

    Without a framework or context, metrics are at best a localised yardstick for the fascination of your management.

    It's like giving a ball of wool to a kitten.
    Robert Tehve
    rtehve@bigpond.com

  7. #7
    Senior Member
    Join Date
    Oct 2001
    Location
    Cambridge, MA, USA
    Posts
    263

    Re: Test Cases vs. Defects Reported Stats?

    [ QUOTE ]

    Is this a good measure of whether the development team did their unit testing properly before handing the code over to QA?

    [/ QUOTE ]

    Is this the question that you are really trying to answer? Are you trying to "prove" that development didn't do their job?

    Perhaps you should look at the world through a different set of glasses. Maybe ask the question in a different way: How effective was unit testing?

    You do realize, I hope, that unit tests can catch certain kinds of bugs, but not necessarily the same kinds of bugs that functional/integration/system testing can. If you look at the stats for the number of bugs found as a result of writing and executing the unit tests, you might find that a large number of bugs were found and fixed well before the code ever got to QA, so your 105 may actually be quite low.

    If you really want to show that unit testing was insufficient, I think you'll need to review your QA tests and the reported bugs to determine whether they could really have been found as part of unit testing.
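    The review suggested above can be sketched as a simple tagging exercise: mark each reported bug with a (manual, judgment-based) verdict on whether a unit test could plausibly have caught it, then report the proportion. The bug data below is invented for illustration.

    ```python
    # Sketch: tag each QA-reported bug with whether a unit test could
    # plausibly have caught it, then report the share. Data is hypothetical;
    # the tagging itself requires human judgment per bug.

    bugs = [
        {"id": 1, "summary": "null pointer in parser",   "unit_testable": True},
        {"id": 2, "summary": "layout broken at 800x600", "unit_testable": False},
        {"id": 3, "summary": "off-by-one in pagination", "unit_testable": True},
    ]

    unit_testable = [b for b in bugs if b["unit_testable"]]
    share = len(unit_testable) / len(bugs)
    print(f"{len(unit_testable)}/{len(bugs)} bugs ({share:.0%}) could plausibly "
          "have been caught by unit tests")
    ```

    Only the unit-testable share says anything about unit-testing rigor; the rest of the count is noise for that question.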

    Derek

 

 
