Test case traceability and test case management
This is likely a dumb question, but I am not proud, so I'll just put it right out there.
I recently did a bunch of reading and attended a course on black box testing "science". There we talked about equivalence class partitions, boundary value analysis, decision tables, and so on. These seem like pretty industry-standard techniques, and I can see how they are going to be hugely useful for figuring out what to test in a more meaningful way than I have been doing previously. So off I go to design test cases.
I have a test management tool. I trace from requirements to the tests. That is really all that my manager cares about. That is fine with me.
But for ME, I would like to know which test techniques are yielding the best ROI in terms of developing tests that find defects. So let's say I have a requirement for a filter screen where I can type in and select various values. I am going to have some number of test cases that were arrived at from invalid boundary values, some that were arrived at using decision tables, and some more using pair-wise combinations. In my test management software, all I have is the requirement. But the requirement is going to lead to bunches of tests which will have originated from different techniques. Does anyone attempt to trace through the various techniques that arrived at the case?
Please feel free to tell me why NOT and why this is not particularly interesting.
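To make that concrete, here is roughly the split I have in mind for the filter screen; the requirement ID, field names, and ranges are all invented for illustration:

```python
# Rough sketch: one requirement (REQ-42, "filter screen") expands into tests
# that came from different design techniques. Everything here is made up.

# Suppose the filter has an "age" field that is valid from 18 to 65.
AGE_MIN, AGE_MAX = 18, 65

# Boundary value analysis: test values just inside and just outside each edge.
boundary_tests = [("REQ-42", "boundary", v)
                  for v in (AGE_MIN - 1, AGE_MIN, AGE_MAX, AGE_MAX + 1)]

# Combination testing over two other filter inputs. (With only two parameters
# the full cross product and "all pairs" coincide; a real pairwise tool earns
# its keep once there are three or more parameters.)
statuses = ["active", "inactive"]
regions = ["EU", "US", "APAC"]
combination_tests = [("REQ-42", "pairwise", (s, r))
                     for s in statuses for r in regions]

print(len(boundary_tests), "boundary tests;",
      len(combination_tests), "combination tests")
```

Both groups trace back to the same requirement in the tool, which is exactly the information I lose.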
Most test management software allows for custom fields.
I like to use labels/tags to identify the reason why a particular test exists. I'm sure you can use a similar approach to labeling why the test case is designed the way it was.
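For example, and this is only a sketch with made-up field names rather than any particular tool's schema: once every test carries a technique tag and defects link back to the tests that found them, the per-technique yield you're after becomes a simple group-by.

```python
# Hypothetical records: each test has a "technique" custom field and a count of
# defects attributed to it. Field names and data are invented for illustration.
from collections import Counter

tests = [
    {"id": "TC-1", "technique": "boundary",       "defects_found": 3},
    {"id": "TC-2", "technique": "decision-table", "defects_found": 1},
    {"id": "TC-3", "technique": "boundary",       "defects_found": 0},
    {"id": "TC-4", "technique": "pairwise",       "defects_found": 2},
]

defects = Counter()
count = Counter()
for t in tests:
    defects[t["technique"]] += t["defects_found"]
    count[t["technique"]] += 1

for technique in count:
    print(f"{technique}: {defects[technique] / count[technique]:.2f} defects per test")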
Overall, I find the idea of risk analysis as a whole a pretty interesting topic. I'd like to see test management tools that integrate better with the trinity of Defect Tracking, Change Management, and Requirements Management, forming a tightly integrated suite.
If I were to design an ideal test management / risk management ecosystem, I would have each system feed information to the others, providing a continuous-improvement feedback loop.
* Code changes are tagged with their bug ID and the requirement ID of the affected requirement, which signals to those systems that new risk has been introduced.
* Bugs submitted signal to test case management that more test cases are needed around certain requirements (change management is the link between a bug and the affected requirement).
* Conversely, if a requirement sees a lot of changes but bug management does not see new bugs for that feature, it signals to test management that certain tests can be deprioritized to save time, since they are not very effective at finding bugs.
* As bugs are submitted, requirements (features) are ranked. A test case can then carry an inherent risk score based on the ranks of the requirements it covers, which allows a test manager to prioritize the tests that cover the most risk first (see the sketch after this list).
* As requirements are updated, risk scores can be recalculated automatically by querying past bugs that reference the requirement. By also referencing change management, the score can factor in a development risk component based on code churn from previous changes.
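A minimal sketch of that scoring idea, assuming the score is the sum of requirement rank times churn over the requirements a test covers; every ID, rank, and churn number below is invented purely for illustration.

```python
# Requirement rank, e.g. driven by how many bugs have referenced it.
requirement_rank = {"REQ-1": 5, "REQ-2": 2, "REQ-3": 8}

# Churn factor per requirement from change management (1.0 = stable code).
churn_factor = {"REQ-1": 1.0, "REQ-2": 1.5, "REQ-3": 2.0}

# Which requirements each test covers, from the traceability data.
test_coverage = {
    "TC-10": ["REQ-1", "REQ-3"],
    "TC-11": ["REQ-2"],
    "TC-12": ["REQ-1", "REQ-2", "REQ-3"],
}

def risk_score(test_id: str) -> float:
    """Sum of rank * churn over every requirement the test covers."""
    return sum(requirement_rank[r] * churn_factor[r] for r in test_coverage[test_id])

# Run the highest-risk tests first.
for test_id in sorted(test_coverage, key=risk_score, reverse=True):
    print(test_id, risk_score(test_id))
```

As bugs and code changes come in, only the rank and churn tables need updating; the scores and the resulting priority order follow automatically.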
No, I don't try to trace from technique used to test case derived from it.
"But the requirement is going to lead to bunches of tests which will have originated from different techniques. Does anyone attempt to trace through the various techniques that arrived at the case? Please feel free to tell me why NOT and why this is not particularly interesting."
Requirements traceability helps determine if you have covered all the Requirements. And it helps you assess impact when a Requirement changes. If you determined that 50% of your test cases happened to be imagined during Equivalence Class Analysis, and that none came from Decision Tables, would that matter?
I expect testers on my team to use all the tools and techniques at their disposal to derive appropriate test cases and drive their testing. I don't particularly care which technique was used as long as the end result (the testing) covers the business need.
When I test (or when I start to derive test cases) I don't even consciously think "Now I am analyzing boundaries, now I am considering equivalence classes, etc." I just test. I don't want my testers agonizing over "Is this new test case from a boundary analysis session or an equivalence class session?" I see too many of those questions here, and I think they are mostly a waste of time.
That's just me. Your mileage may vary.
Different techniques for different bugs
It is important to realize that each of those black-box techniques finds different types of bugs. For example, equivalence classes will tend to find single-input defects (those in which a single input value causes a failure by itself). A boundary value will find an off-by-one. A decision table will often find missing requirements, or combinatorial defects that require two or more different inputs. There is no real reason to trace these to different techniques. In other words, it is fine to have multiple tests trace back to the same item.
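As a tiny illustration of the off-by-one case (the function and values here are made up):

```python
# Toy example: a deliberate off-by-one defect in a validity check.
def is_adult(age: int) -> bool:
    return age > 18   # bug: should be >= 18, so exactly 18 is rejected

# An equivalence-class test picks a representative interior value and passes,
# so the defect stays hidden.
print("interior value 30:", is_adult(30))   # True

# A boundary-value test hits the edge and exposes the off-by-one.
print("boundary value 18:", is_adult(18))   # False, revealing the defect
```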