Exit criteria with % of test cases run or failed?
I have a test plan issue... What is the point of an exit criterion for the test period that states that X% of test cases must have been run before the period ends, or that X% of the test cases must have passed? At my job the teams simply go in and remove test cases that have not been run, so the first exit criterion is always met, and the test plan also states that full regression testing will not be done. And is the number of test cases passed or failed really relevant? Isn't it the nature of the remaining bugs that counts?
I would like to state that 100% of new functionality for a given release will be tested. I'm not sure whether there should also be a percentage for the number of regression tests to be performed.
I hope I get a lot of input about my little issue.
Re: Exit criteria with % of test cases run or failed?
If your team actually removes test cases in order to meet your exit criteria, then there is no point in having that sort of exit criteria in the first place.
Sadly, this isn't unique. While many shops set up formal exit criteria, the real exit criterion is often "stop testing when the scheduled end date is reached".
If that's the case in your shop, it's important that you know it ahead of time so that you can plan your testing accordingly. Make sure that when the end date is reached, you have tested the most important features to the best of your ability. Order your testing so that the most risky features are tested earlier, and the least impactful features are left until the end.
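That ordering idea can be made concrete. Here's a minimal sketch of risk-based test prioritization; the feature names and the 1-5 likelihood/impact scores are hypothetical, and scoring risk as likelihood × impact is just one common convention, not the only one:

```python
def prioritize(features):
    """Return features sorted so the riskiest are tested first."""
    return sorted(features,
                  key=lambda f: f["likelihood"] * f["impact"],
                  reverse=True)

# Hypothetical features scored on a 1-5 scale for how likely they
# are to break and how badly a failure would hurt users.
features = [
    {"name": "report export",  "likelihood": 2, "impact": 2},
    {"name": "login",          "likelihood": 4, "impact": 5},
    {"name": "search filters", "likelihood": 3, "impact": 3},
]

for f in prioritize(features):
    print(f["name"])
```

With this ordering, "login" (risk 20) is tested first and "report export" (risk 4) last, so if the end date arrives early, what's left untested is what matters least.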