I am interested in hearing the different options for capturing complicated data scenarios to test once the application is complete. When and where do you document use cases: before requirements? Who writes them up, or do they get written up at all? I include them in testing, but in some cases, if they are not evident from reviewing the requirements, they are not tested. I find myself discovering valid scenarios that fail after the application is written, with no source to check whether they were accounted for prior to release. I would like to identify as many as possible before the application is written. Where do you put these, and at what step?
(Assuming you don't have a tool like TestDirector or TestMgr.)
Store them in just about any medium that allows you to track what they are, when they were last executed, their status, last-modified date, traceability information, etc.
It is good to include a brief description that sets them apart from other scenarios that may be similar.
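If you end up tracking scenarios without a dedicated tool, the fields above can be captured in something as simple as a spreadsheet or a small record type. A minimal sketch in Python (the class name, field names, and status strings are my own illustrative choices, not from any standard tool):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TestScenario:
    """Minimal record for tracking a test scenario outside a dedicated tool."""
    scenario_id: str                      # unique identifier for the scenario
    description: str                      # brief text that sets it apart from similar scenarios
    source: str                           # traceability: the use-case or requirement it came from
    status: str = "not run"               # e.g. "not run", "passed", "failed"
    last_executed: Optional[date] = None  # when it was last run
    last_modified: date = field(default_factory=date.today)

    def record_run(self, passed: bool, run_date: Optional[date] = None) -> None:
        """Update status and execution date after a test run."""
        self.status = "passed" if passed else "failed"
        self.last_executed = run_date or date.today()

# Usage: a scenario traced back to a (hypothetical) use-case "UC-12"
s = TestScenario("UC-12-alt-3", "Duplicate submission within 5 minutes", source="UC-12")
s.record_run(passed=False)
```

The `source` field is what gives you something to check after release: every scenario points back to the use case or requirement it came from.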
Ideally, use-cases are developed by a BA or SA. Use-cases are scenarios of usage. Sometimes it may be easier to clone them for test purposes and adapt them for testing, if you can afford the maintenance overhead.
As soon as you get an approved use-case or related functional spec, you can write the scenarios, cases, scripts, etc.
Sorry, but you lost me here:
What are BA and SA? (Software Architect and Business Analyst.) I am trying to derive the scenarios from the data now, but there are some very complicated yet valid situations I can't intuit from the submission requirements, and I know the developers were aware of them during development.