Looking for advice on best way to organize testing for varying configurations
We are using ALM 11.0 with the BPT feature enabled, but we have not used that feature before.
The product we are developing has to be tested on multiple OS configurations:
- Windows Server 2003 English
- Windows Server 2008 32-bit English
- Windows Server 2008 64-bit English
- Windows Server 2008 64-bit French
Besides those OS configurations, the product has to be tested with various combinations of the following secondary configuration elements (a subset, listed here for illustration):
- Database server software installed/not installed
- Backup management software installed/not installed
- System is/is not a Domain Controller
- Previous version of Product-Under-Test (PUT) is/is not installed
- Product under test is installed through method A/method B.
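To get a feel for the size of the combination space, here is a quick sketch (Python, with hypothetical element names that are not part of any ALM entity) that enumerates every possible combination of the five binary secondary elements for a single OS configuration:

```python
from itertools import product

# Each secondary configuration element is binary (hypothetical labels).
elements = {
    "db_server": ["installed", "not installed"],
    "backup_sw": ["installed", "not installed"],
    "domain_controller": ["yes", "no"],
    "previous_put": ["installed", "not installed"],
    "install_method": ["A", "B"],
}

# Cross product of all element values: every distinct secondary configuration.
combinations = [dict(zip(elements, values)) for values in product(*elements.values())]
print(len(combinations))  # 2^5 = 32 distinct configurations per OS
```

With four OS configurations that is 4 × 32 = 128 full configurations, which is why exhaustive coverage is impractical and recording the exact combination used in each test matters.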
Each of the above secondary configuration elements has to be implemented on at least one of the OS configurations during testing of the PUT, but does not need to be implemented on all of them. From one round of testing to the next (or one version of the PUT to the next), the OS configuration on which a secondary element is implemented can change.
What we have done to date is call out each of the OS configurations as a Test Lab Folder. In the Test Plan we have called out each of the secondary configuration elements as a (manual) Test Case. In the Test Lab under each of the OS folders we have created a single Test Set. In that test set we add the test cases for the secondary elements that get covered during our testing of PUT.
This works fine for ensuring that each secondary element is covered during at least one test of the PUT, but it does not document which combination of secondary elements was in place during any one test. For instance, I will know that:
- On Windows Server 2008 32-bit English we tested PUT with
- Backup management software is not installed
- Backup management software is installed
- System is not a Domain Controller
- System is a Domain Controller
- Previous version of Product-Under-Test (PUT) is installed
- Product under test is installed through method A
But that does not conclusively tell me which specific combination of backup software and Domain Controller was used during testing. It could be any of:
- Backup is installed and system is a Domain Controller
- Backup is installed and system is not a Domain Controller
- Backup is not installed and system is a Domain Controller
- Backup is not installed and system is not a Domain Controller
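One way to see the problem is that two different test histories can produce identical per-element summaries. A small sketch (Python, with hypothetical run data) demonstrating why recording each run as a complete configuration, rather than tallying elements individually, is needed:

```python
# Two hypothetical test histories on the same OS configuration.
runs_a = [
    {"backup": "installed", "dc": "yes"},
    {"backup": "not installed", "dc": "no"},
]
runs_b = [
    {"backup": "installed", "dc": "no"},
    {"backup": "not installed", "dc": "yes"},
]

def per_element_summary(runs):
    """Collapse runs into the set of values seen for each element."""
    summary = {}
    for run in runs:
        for element, value in run.items():
            summary.setdefault(element, set()).add(value)
    return summary

# The element-level view is identical, yet the actual combinations differ.
print(per_element_summary(runs_a) == per_element_summary(runs_b))  # True
print(runs_a == runs_b)  # False
```

Keeping the full per-run record (for example, as a test-set name or a user-defined field that encodes the whole combination) preserves exactly the information the per-element view loses.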
So, I'm looking for a way to organize such data so that I can review historic test information and know exactly what the configuration of the test system was during any given test. These configurations are not cast in stone, and should not always be reused exactly in each round of testing. It is sort of "guided exploratory testing". The next time we test, we might combine secondary elements in a completely different way.
Another requirement of the solution is that I can report on which elements were tested on which OS configurations. I don't need to report each of the specific configurations of secondary elements. Currently I produce a report using the Dashboard Excel Report. See attachment for an example. I managed this by defining Release Cycles named for the OS configurations, then assigning those Release Cycles to the corresponding Test Lab Folder.
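If each test run is stored with its OS configuration and full set of secondary elements, the element-by-OS report falls out of the raw records. A minimal sketch (Python, with hypothetical record shapes; in practice the data would be pulled from ALM):

```python
# Hypothetical test records: (OS configuration, secondary elements exercised).
records = [
    ("Win2008 32-bit EN", {"backup": "installed", "dc": "yes"}),
    ("Win2008 32-bit EN", {"backup": "not installed", "dc": "no"}),
    ("Win2003 EN", {"backup": "installed", "dc": "no"}),
]

# Build an (element, value) -> set-of-OS coverage map for reporting.
coverage = {}
for os_name, config in records:
    for element, value in config.items():
        coverage.setdefault((element, value), set()).add(os_name)

for (element, value), oses in sorted(coverage.items()):
    print(f"{element}={value}: {', '.join(sorted(oses))}")
```

The same records answer both questions: the coverage map gives the element-by-OS report, while the individual records preserve the exact combination used in any given test.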
I considered creating a test set for each configuration, in real time, as I decide which elements to combine for the next test.
I am also wondering if I could manage this using BPT in some way. I have only the vaguest idea of how the BPT entities relate to one another, and I plan to study that next. These scenarios are not exactly a business process "flow"; I just thought the structure of BPT might lend itself to this well.
If anybody has any suggestions I would be glad to hear them.
I recommend that you get an ALM subject-matter expert to help you with your planning and ALM project setup; it is better to get everything set up properly before you start. Off the top of my head, and having only skimmed your post, I think you can accomplish the separation by using some user-defined fields to manage the different applications and environments. Use Releases and Cycles for iterations of testing, and Test Configurations for the data differences. This will give you a multi-dimensional view of the testing activities.