Laying out requirements for multiple integrated products
The systems we have here are an eclectic group of programs that are kludged together and may be delivered as a single system, in parts, or with some of them standalone. I'm wondering if anyone has suggestions on how I might lay this out in TestDirector.
We receive a set of applications from another group that we customize for our own use. We also add a bunch of functionality onto that. There are a couple of applications we have that can run standalone or integrated into the above-mentioned system(s). At times, data moves between these systems in various convoluted ways: sometimes it flows in one direction, and sometimes both ways. There is also one application that provides a single sign-on for accessing all the applications and keeps you on a single subject when switching between them. Not to mention, the total group of applications may be deployed with some of it not available to the users.
In case you're wondering, these applications are for tracking patient records in a military hospital. Read: ancient legacy programs coupled with some really new stuff. We not only have to track the primary military members' medical information but also their dependents'. We also track information about events in the hospital AND those in the field where the military member is deployed.
So, we've got this group of apps that need to be tested either separately or as a group, where some of it is our responsibility and some of it isn't under our control; some of it's new and some is really old; and some of it may or may not be available to the users (depending on the hospital where the software is deployed, rather than on the individual's account). Did I forget to mention that there are different views of the application depending upon what the user's job is?
Any input would be helpful. Maybe links to white papers or the like.
If you're a slave to your free associations, does it automatically become something else?
Re: Laying out requirements for multiple integrated products
Well, if I were faced with such a project, I think it would be advisable to split the participating systems each into its own project in TestDirector. That would make it easier to set up your requirements, and also for future updates, compared to throwing it all together into one project, where you can end up with a requirements list a mile long. It might also be advisable to create some sort of process flow of the way your data flows, noting at which points in the system there is an interaction with another system or component. If a problem occurs somewhere, knowing where it happened in the process makes it easier to identify the error.
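As a rough illustration of that mapping exercise (all system names here are invented for the example, not taken from your setup), you could record the flows as a small directed graph, so every cross-system interaction point is explicit and you get a checklist of the places an integration test is needed:

```python
# Hypothetical sketch: model the systems and their data flows as a
# directed graph. System names below are made up for illustration.
from collections import defaultdict

flows = defaultdict(set)  # source system -> set of destination systems

def add_flow(src, dst, bidirectional=False):
    """Record that data moves from src to dst (and back, if bidirectional)."""
    flows[src].add(dst)
    if bidirectional:
        flows[dst].add(src)

# Example wiring (invented): a records app, a scheduling app, and the
# single-sign-on shell that fronts both.
add_flow("PatientRecords", "Scheduling", bidirectional=True)
add_flow("SignOnShell", "PatientRecords")
add_flow("SignOnShell", "Scheduling")

def interaction_points(graph):
    """List every (source, destination) pair -- each one is a place where
    systems interact and a cross-system defect could hide."""
    return sorted((s, d) for s, dsts in graph.items() for d in dsts)

print(interaction_points(flows))
```

Each pair in the output corresponds to one interface you would want a requirement (and a test) for, which also tells you which TestDirector project the requirement belongs to.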
On the website www.itpapers.com, search for "test requirements". There is an article called "An Early Start to Testing: How to Test Requirements".
It helps to know that your requirements are correct before you start testing.