The absolute best way is to generate a requirements-vs-tests matrix, so that you can see which requirements are not covered and which are covered more than once. However, with 7000 tests it would take you a long time to fill in and evaluate all of them. Maybe next time you can get started in the right direction.
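The matrix check itself is mechanical once the mapping exists. Here is a minimal sketch in Python; the requirement IDs, test IDs, and the coverage mapping are all hypothetical — in practice you would load them from your test-management tool or a spreadsheet export:

```python
from collections import Counter

# Hypothetical requirement and test IDs for illustration only.
requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# Which requirements each test claims to cover.
test_coverage = {
    "TC-0001": {"REQ-001"},
    "TC-0002": {"REQ-001", "REQ-002"},
    "TC-0003": {"REQ-002"},
}

# Requirements touched by at least one test.
covered = set().union(*test_coverage.values())
uncovered = requirements - covered

# Count how many tests touch each requirement to spot duplication.
hits = Counter(req for reqs in test_coverage.values() for req in reqs)
duplicated = {req for req, n in hits.items() if n > 1}

print("Not covered:", sorted(uncovered))
print("Covered more than once:", sorted(duplicated))
```

Even a rough version of this over a spreadsheet export answers both questions from the first paragraph: what is missing, and what is redundant.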
Another, simpler way is to note which requirements are covered in the heading of each test. But again, you need to start this at the beginning.
It would be extremely difficult to categorize that many tests at this point, let alone automate them. You would just end up with really fast redundant tests.
Success is the ability to go from one failure to another with no loss of enthusiasm.
~ Winston Churchill ~
Thanks for your suggestion, but I am not clear on what you are proposing here.
I want a solution for refactoring them so that they are easy enough to automate (since automating 7000 test cases is a long-term effort) and, in the meantime, take less effort to execute manually.
Consider the Gmail documents as a module. How do I refactor them?
Option 1: Combine the adding, modifying, deleting, and updating of spreadsheet-related test cases into a single test case (100 test cases refactored into 10).
Advantages:
1. The time to execute multiple test cases becomes the time to execute a single test case.
Disadvantages:
1. Effort estimation varies.
2. There are chances of deviation.
Option 2: Keep the existing test cases as they are.
Advantages:
1. The test cases stay small and independent.
Disadvantages:
1. It takes a lot of time to execute the test cases.
2. It is hard to execute all of them.
3. A lot of effort is required to automate simple test cases.
4. Many precondition steps are required to execute some test cases.
Option 3:
1. Identify the non-mandatory fields / low-priority test cases and combine them.
2. Identify the mandatory / high-priority test cases and either combine them or leave them as they are.
Disadvantages:
1. Since there are ~7000 test cases, it takes a lot of effort.
2. Some resources need to be dedicated.
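The first option (combining add/update/delete into one flow) can be sketched as a single end-to-end test, where each step leaves behind the state the next step needs, so the separate precondition setup disappears. The `FakeSpreadsheet` class below is a stand-in for the real application client, which I am assuming for illustration:

```python
class FakeSpreadsheet:
    """Hypothetical stand-in for the spreadsheet module under test."""

    def __init__(self):
        self.rows = {}

    def add(self, key, value):
        self.rows[key] = value

    def update(self, key, value):
        if key not in self.rows:
            raise KeyError(key)
        self.rows[key] = value

    def delete(self, key):
        del self.rows[key]


def test_spreadsheet_crud_flow():
    # One combined flow replacing separate add/update/delete cases:
    # the add step is the precondition for update, which is the
    # precondition for delete.
    sheet = FakeSpreadsheet()
    sheet.add("A1", "draft")
    assert sheet.rows["A1"] == "draft"
    sheet.update("A1", "final")
    assert sheet.rows["A1"] == "final"
    sheet.delete("A1")
    assert "A1" not in sheet.rows


if __name__ == "__main__":
    test_spreadsheet_crud_flow()
    print("combined CRUD flow passed")
```

The trade-off matches the disadvantages listed above: if the update step fails, the delete step never runs, so one defect can hide the result of the later steps.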
Can anyone propose a solution that keeps most of the advantages while avoiding the disadvantages?
When the going gets tough, the tough get going.
Since these test cases are coming up for regression, ask the people who executed them before which parts had the most errors, or, if you have the test results file from a previous run, you can learn it from that. You can focus on those parts.
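Mining a previous results file for failure-heavy areas is easy to script. A minimal sketch, assuming the results are a CSV with `test_id`, `module`, and `result` columns (the layout and data here are made up):

```python
import csv
import io
from collections import Counter

# Hypothetical export of a previous regression run; in practice
# you would open the real results file instead of this sample.
previous_run = io.StringIO(
    "test_id,module,result\n"
    "TC-0001,login,pass\n"
    "TC-0002,spreadsheet,fail\n"
    "TC-0003,spreadsheet,fail\n"
    "TC-0004,mail,pass\n"
)

# Count historical failures per module.
failures = Counter(
    row["module"]
    for row in csv.DictReader(previous_run)
    if row["result"] == "fail"
)

# Modules with the most failures come first -- run those parts first.
for module, count in failures.most_common():
    print(module, count)
```

Sorting your 7000 test cases by their module's historical failure count gives you a priority order without reclassifying anything by hand.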
1. Which parts of the application will be used most? You can focus more on those and ignore the parts users are least likely to use.
2. You can remove those test cases where the user is least likely to make a mistake.
3. You can also remove those test cases where, even if the user makes a mistake, he can recognize it by common sense.
For example, if your application has a DOB validation message saying 'DOB cannot be a future date', you can avoid testing it, because even in the worst case, if it fails, the user can figure it out by common sense.
4. You can also reduce the count based on your results while testing (particularly for validation messages). Say you have 15 validation messages for a particular part of the application: check 6-7 key validation messages, and if they are displayed, the other validations are also most likely to fire.
Since you don't have time, I suggest deciding which test cases to run at execution time itself, by reading their descriptions.