Systematic approach to designing test cases involving combinations
If a particular functionality in an application can be exercised in many ways/combinations (through multiple screens, multiple conditions, etc.), the pairwise testing concept can be used to reduce the number of test cases, and there are open-source tools for doing this. But how can we prove that the excluded combinations will not contain a defect? Can anybody suggest a process or systematic approach that can be followed for designing test cases in such a situation?
For example, if I list all the combinations first and run them through a pairwise tool, it reduces the total number of test cases from, say, 1024 to 28. But how can we say that the remaining 900+ combinations are defect free?
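To make the reduction concrete, here is a minimal sketch of a greedy pairwise generator in Python. All parameter names and values are invented for illustration, and the model is sized to give exactly 1024 full combinations like the example above; real tools (PICT, allpairspy, etc.) use more refined algorithms, but the idea is the same: keep adding the test case that covers the most not-yet-covered value pairs until every pair appears at least once.

```python
from itertools import combinations, product

# Hypothetical parameter model: 4 * 4 * 4 * 4 * 2 * 2 = 1024 full combinations.
parameters = {
    "screen":  ["home", "search", "wizard", "import"],
    "role":    ["admin", "editor", "viewer", "guest"],
    "browser": ["chrome", "firefox", "safari", "edge"],
    "locale":  ["en", "de", "fr", "ja"],
    "network": ["online", "offline"],
    "theme":   ["light", "dark"],
}
names = list(parameters)

def pairs_of(case):
    """All (parameter, value) pairs exercised by a single test case."""
    items = list(zip(names, case))
    return {frozenset(p) for p in combinations(items, 2)}

# Every 2-way value combination that must appear in at least one test case.
uncovered = {frozenset([(n1, a), (n2, b)])
             for (n1, d1), (n2, d2) in combinations(parameters.items(), 2)
             for a, b in product(d1, d2)}

all_cases = list(product(*parameters.values()))
suite = []
while uncovered:
    # Greedy step: add the candidate that covers the most still-uncovered pairs.
    best = max(all_cases, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"full cross product: {len(all_cases)} test cases")  # 1024
print(f"pairwise suite:     {len(suite)} test cases")      # a few dozen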
Pairwise testing relies on the idea that the large majority of bugs (empirical studies often cite figures above 90%) can be triggered by an interaction between just two parameter values. Built into this is the assumption that most bugs are relatively simple in nature.
There is no way to prove that the combinations not covered by the pairwise suite are defect free. It is just that defects triggered only by those higher-order combinations are less probable than defects you can already find by covering every unique pair of values.
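One way to see exactly what pairwise does and does not guarantee is to measure t-way coverage. In the toy sketch below (three invented boolean parameters and a classic 4-case pairwise suite), every 2-way combination is covered, but only half of the 3-way combinations are; defects that need three or more specific values to line up live precisely in that gap.

```python
from itertools import combinations, product

parameters = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}   # toy model
# A classic 4-case pairwise suite for three boolean parameters.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def t_way_coverage(suite, parameters, t):
    """Fraction of all t-way value combinations hit by the suite."""
    names = list(parameters)
    needed = {frozenset(zip(group, values))
              for group in combinations(names, t)
              for values in product(*(parameters[n] for n in group))}
    hit = {frozenset(combo)
           for case in suite
           for combo in combinations(list(zip(names, case)), t)}
    return len(needed & hit) / len(needed)

print(f"2-way coverage: {t_way_coverage(suite, parameters, 2):.0%}")  # 100%
print(f"3-way coverage: {t_way_coverage(suite, parameters, 3):.0%}")  # 50%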
It depends on how the AUT has been designed. A cross-check with the technical team may reveal components that are shared between, say, the UI screens or interfaces. In such cases, executing all the remaining combinations (the 900+ mentioned above) is actually a waste of time, since the tester would not uncover any new issues there: the shared component has already been exercised by the 28 designed test cases.
Hence, test cases should always be optimized on two fronts:
A) Doing an impact analysis from a business perspective &
B) Doing an impact analysis from a technical perspective.
Apart from these, a regression analysis along with prioritization (based on business risk) yields the right set of tests that should be executed.
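As a hypothetical illustration of that prioritization step, one common scheme scores each candidate test as business impact times failure likelihood and executes the ranked list from the top until the budget runs out. All names and weights below are invented:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    impact: int      # business impact if this path fails, 1 (low) .. 5 (high)
    likelihood: int  # estimated chance of failure, 1 (stable) .. 5 (new/changed code)

    @property
    def risk(self) -> int:
        return self.impact * self.likelihood

# Invented examples: pairwise-selected cases plus extra high-risk combinations.
candidates = [
    Candidate("checkout_guest_offline", impact=5, likelihood=4),
    Candidate("search_admin_ja_locale", impact=2, likelihood=2),
    Candidate("import_wizard_dark_theme", impact=3, likelihood=5),
]

# Execute the highest-risk cases first; cut the list where the budget runs out.
for c in sorted(candidates, key=lambda c: c.risk, reverse=True):
    print(f"{c.risk:>2}  {c.name}")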