Using Quality Center in a manual requirements process
Thanks in advance for reading my somewhat lengthy post, but I think it may spark some good discussions (hopefully)!
Currently, my organization is using Quality Center 10.0 (QC) on a limited basis -- only the Defects module, and only on a pilot project. One of my goals is to roll out QC to the entire IT group. I would like it to become the focal point for managing change and linking dependencies between requirements, tests and defects.
My challenge is that I am trying to train users on, and build upon, an existing manual-process-based method. I started reviewing how requirements are created and reviewed, and I ran into some basic methodology concerns. I've attached an excerpt of what is being used (please see attachment).
We also have a QC license shortage, so I've been exploring ways to leverage the import/export plug-in features, and I have that working just fine in Excel. (The Word plug-in requires all those individual formatting codes, so I think Excel is easier at this time, but I'm open to suggestions / feedback on this.) However, in these discussions, what is being raised as a blocking point is the conceptual process, not how to use QC. Let me explain the current process:
1. Very general capabilities would be entered into the attached Feature Requirements section (ex. – the application should have the ability to process credit cards).
2. Use cases would be entered into the attached Use Cases section (they use narrative descriptions, no UML – ex. Process a payment on a credit card, with test steps to show how to proceed, but no expected results).
3. I asked where the requirements would actually be handled, and I was advised that the requirements are the use cases. To me, a requirement is a subset of a use case (ex. – there can be a login use case that requires you to enter a user name and password and click OK; that use case would have at least 3 separate requirements – for the user name, the password and the OK button).
4. I also explained that, in my understanding, use cases correlate more closely to test cases, which would be handled in the Test Plan module and run in the Test Lab. I'm getting resistance to having QC open just to run a manual test case, and there was a request to import not only the test but also the test results / outcome into QC. (I think this can be done with an Actual Results field, but my initial impression is that this is not a great idea, because of other dependencies.)
5. I'm getting requests to add new tabs to Quality Center to track use cases (either in the Requirements or the Test Plan modules), but I don't even know if it is possible to add a new screen tab (vs. a user field) in QC.
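To make the distinction in step 3 concrete, here is a minimal sketch (plain Python, nothing QC-specific) of how I picture a use case decomposing into individually traceable requirements. The login example again; all names are purely illustrative:

```python
# A use case is a container; its requirements are the individually
# testable (and individually traceable) units underneath it.
login_use_case = {
    "name": "Login",
    "requirements": [
        "User can enter a user name",
        "User can enter a password",
        "Clicking OK submits the credentials",
    ],
}

# Tracing at requirement granularity means each test or defect links to
# one requirement, not to the whole use case.
for req in login_use_case["requirements"]:
    print(f'{login_use_case["name"]} -> {req}')
```

In QC terms, I picture the use case as a parent requirement in the Requirements tree and each of these as a child requirement that tests can cover individually.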
Overall, the feedback I'm getting from project management is that they think using QC will cause more work, which concerns me. My response has been to show the benefits of tracing requirements to tests to defects, as well as the Management and Dashboard modules for tracking project progress. Their current points are:
1. The desire to import the linked relationships between requirements, tests and defects (I don't see where this is possible).
2. The ability to have QC use a use-case (vs. requirements) model for tracking items.
3. A reluctance to accept that use cases are not equal to requirements, even though I tried to explain that without sufficient granularity, a single failing requirement causes the whole use case to fail without showing which part broke.
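The granularity argument in point 3 can be shown with a toy status roll-up (a sketch with made-up data, not actual QC behavior): tracking only at the use-case level tells you something failed; per-requirement status also tells you what.

```python
# Per-requirement statuses after a test run (illustrative data only).
statuses = {
    "User can enter a user name": "Passed",
    "User can enter a password": "Failed",
    "Clicking OK submits the credentials": "Passed",
}

# Coarse view: the whole "Login" use case simply reports Failed.
use_case_status = "Failed" if "Failed" in statuses.values() else "Passed"

# Granular view: we also know exactly which requirement failed.
failed = [name for name, status in statuses.items() if status == "Failed"]

print(use_case_status)
print(failed)
```

This is the coverage-analysis behavior I would expect from linking tests to child requirements rather than to one monolithic use-case record.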
Am I totally off base here? I was advocating for using the import features to initially get requirements, tests, etc. set up, but then using the QC tool itself to manage links between items and actually run the tests. I'm interested in everyone else's understanding of these concepts.
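For what it's worth, on the import side, here is roughly how I stage requirements before mapping columns in the Excel add-in's import wizard. The column names and folder paths here are just my working layout, not headers the add-in requires (the actual field mapping is chosen during the import step):

```python
import csv

# Hypothetical staging rows for a QC requirements import.
# Columns and the parent-folder paths are placeholders -- the real
# mapping to QC fields happens in the add-in's mapping wizard.
requirements = [
    # (name, parent_path, req_type, description)
    ("Process credit cards", "Requirements\\Payments", "Functional",
     "The application should have the ability to process credit cards."),
    ("Process a payment on a credit card", "Requirements\\Payments", "Functional",
     "Narrative use case covering the payment flow."),
]

with open("requirements_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "Parent", "Type", "Description"])
    writer.writerows(requirements)
```

Staging in a flat file like this is also how I'd keep non-licensed authors contributing: they edit the spreadsheet, and a licensed user runs the import.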
Thanks again, and happy holidays!