How do you manage regression tests against multiple releases?
Curious to get opinions about the process for managing regression tests across multiple successive releases. More specifically... TCM tools like TestLink often have a "Project" or "Domain" dropdown which, when selected, places the user in a testing context that includes the requirements & test cases for a given release of a product.
Typically... I create a TestLink project like "Release A" and enter the requirements (1.0, 1.1, 2.0, etc.) for that release/iteration. Next... I create test cases to satisfy those requirements, which, when linked to the requirements, provide "requirements traceability" so we know that all requirements are covered by test cases.
Once testing is done for "Release A" and it's time to work on "Release B"... TestLink specifically prevents you from adding duplicate requirement IDs (1.0, 1.1, etc.) into any given project. That forces me to create a new project in order to capture the requirements for "Release B". Of course once I am in that new project context... any regression test cases I had created for "Release A" are still back in that old project.
You might say "just export the Release A regression tests and import them into the Release B project". I have 2 problems with that:
- I don't want to bloat the TCM DB with duplicate test cases
- I think TestLink has an import character limit, so if I wanted to transfer 1K+ test cases from Release A to Release B... I could export fine, but it would choke trying to import (unless I split the export first; see the sketch below)
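If it helps, one possible work-around for the import limit would be to split the export into smaller files before importing. A rough sketch in Python, assuming the export is an XML file whose root element holds repeated test case entries (worth checking against a real TestLink export first, since I'm not certain of the exact format):

```python
# Work-around sketch: split a large test case export into smaller XML
# files so each one stays under the import size limit. Assumes a root
# element (e.g. <testcases>) containing repeated per-case children.
import xml.etree.ElementTree as ET

def split_export(path, chunk_size=200):
    tree = ET.parse(path)
    root = tree.getroot()
    cases = list(root)
    for i in range(0, len(cases), chunk_size):
        # Clone the root tag/attributes so each part imports like a
        # normal, smaller export file.
        part = ET.Element(root.tag, root.attrib)
        part.extend(cases[i:i + chunk_size])
        out = f"{path}.part{i // chunk_size + 1}.xml"
        ET.ElementTree(part).write(out, encoding="utf-8",
                                   xml_declaration=True)
        print(f"wrote {out} ({len(part)} test cases)")

split_export("release_a_regression.xml")  # hypothetical file name
```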
So how do YOU manage your reusable regression tests from release to release for any given product? Am I not understanding how to use TestLink correctly? Is there a tool that does this better?
What I'm seeing there is that requirement numbering shouldn't start over for each project.
Requirements should be scoped at the product level rather than at the project level. For example (see the sketch after this list):
PM designs system with requirements 1-200
Project 1 implements requirements 1-50 and 80-85. Other requirements flagged as "future"
Project 2 implements requirements 50-80 and 150-190.
New requirements introduced 200-220
Project 3 implements 200-220.
Requirements 80-120 retired
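Here's a rough model of that scoping, using the numbers from the example above (illustrative only, not tied to TestLink or any particular TCM tool):

```python
# Product-level requirement scoping: the requirement list (and each
# requirement's lifecycle status) lives once, at the product level;
# projects only reference subsets of it.
product_reqs = {n: "active" for n in range(1, 201)}  # PM's requirements 1-200

projects = {
    "Project 1": set(range(1, 51)) | set(range(80, 86)),     # 1-50, 80-85
    "Project 2": set(range(50, 81)) | set(range(150, 191)),  # 50-80, 150-190
}

# New requirements 200-220 are added once at the product level,
# then picked up by Project 3.
product_reqs.update({n: "active" for n in range(200, 221)})
projects["Project 3"] = set(range(200, 221))

# Retiring 80-120 is a status change at the product level; no project
# (and no test case traced to these IDs) needs renumbering.
for n in range(80, 121):
    product_reqs[n] = "retired"

implemented = set().union(*projects.values())
future = sorted(n for n, status in product_reqs.items()
                if status == "active" and n not in implemented)
print(f"requirements still flagged 'future': {future}")
```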
Thanks. In many companies I've seen requirements done in MS Word using built-in auto-numbering such as:
1.0 This is feature A
1.1 Aspect 1 of this feature
1.2 Aspect 2 of this feature
2.0 This is feature B
2.1 Aspect 1 of this feature
2.2 Aspect 2 of this feature
2.3 Aspect 3 of this feature
So let's say it takes a couple "projects" or "releases" to complete the above list of requirements, and the testing team has created 1 or more test cases for each requirement & aspect number. For the sake of requirements traceability, they have copied & pasted those requirement numbers into a "Requirement" column of their test case sheet.
6 months later... the business team decides they want to enhance the product and add 3 more features. Let's say new "Feature C" is very closely related to Feature A, so everyone agrees that the requirements document will simply "read better" if they insert Feature C right after Feature A. Clearly that auto-numbered insertion will displace "Feature B" from 2.0 to 3.0, which would cause the testers to have to change their "Requirements" column contents for the tests pertaining to Feature B.
What are the best practices to avoid this? Do we always place Feature C at the END of the requirements document to avoid displacing existing requirement numbers (which causes test:requirement trace re-work), or do we use a different tool which does not auto-number... and just use random requirement numbers even though there is a clear sequence relationship to the requirements?
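To make that second option concrete: the idea would be to give every requirement a permanent key that never changes, and treat the 1.0/2.0 outline numbering as pure presentation. A small illustrative sketch (the REQ-/TC- naming here is made up, not from any particular tool):

```python
# Stable-key alternative: test cases trace to a permanent requirement
# key, while the document's outline number is recomputed for display.
from dataclasses import dataclass

@dataclass
class Requirement:
    key: str    # permanent identifier; this is what test cases reference
    title: str

doc = [
    Requirement("REQ-0001", "Feature A"),
    Requirement("REQ-0002", "Feature B"),
]

# Traceability points at keys, not outline numbers, so it survives inserts.
traceability = {"TC-101": "REQ-0001", "TC-201": "REQ-0002"}

# Insert Feature C right after Feature A: Feature B's rendered outline
# number shifts from 2.0 to 3.0, but no traceability row changes.
doc.insert(1, Requirement("REQ-0003", "Feature C"))

for outline, req in enumerate(doc, start=1):
    print(f"{outline}.0  [{req.key}] {req.title}")
```

With the keys as the join point, inserting Feature C wherever it "reads best" only changes the rendered outline, not the testers' "Requirement" column.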