What is the best way to compare the functionality covered by manual testing against the functionality covered by automated testing? This comparison will show how many automated tests still need to be written to increase regression coverage.
In short: what is the best way to analyze which manual tests have not yet been automated? (There are thousands of manual tests.)
Yeah, I think you really have to maintain the mapping manually. We have a manual tester sign off on each manual test, stating that our automation covers it, and we track all of this in a simple spreadsheet. The result is a matrix showing each automated script and the manual tests it covers, and new scripts are always added to it. It is painful until the matrix is up to date; after that the maintenance burden is small, and the matrix is very useful.
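Once you have that matrix exported (e.g. as CSV), finding the not-yet-automated manual tests is just a set difference. A minimal sketch in Python, assuming hypothetical column names `manual_id` and `automated_script` and made-up test IDs; adapt these to whatever your spreadsheet actually uses:

```python
import csv
import io

# Hypothetical data: the full manual-test inventory, and the coverage
# matrix mapping each automated script to the manual tests it covers.
manual_tests_csv = """manual_id
MT-001
MT-002
MT-003
MT-004
"""

coverage_csv = """automated_script,manual_id
login_smoke.py,MT-001
checkout_flow.py,MT-003
"""

# Build one set of all manual test IDs, and one set of those covered
# by at least one automated script.
manual_ids = {row["manual_id"]
              for row in csv.DictReader(io.StringIO(manual_tests_csv))}
covered_ids = {row["manual_id"]
               for row in csv.DictReader(io.StringIO(coverage_csv))}

# The set difference is the automation backlog.
not_automated = sorted(manual_ids - covered_ids)
print(f"{len(covered_ids)}/{len(manual_ids)} manual tests automated")
print("Still manual-only:", ", ".join(not_automated))
# → 2/4 manual tests automated
# → Still manual-only: MT-002, MT-004
```

With real data you would read the two CSV exports with `open(...)` instead of `io.StringIO`, but the set logic stays the same, and the size of `not_automated` is exactly the "how many automated tests still need to be written" number the question asks for.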
If you are interested in generic coverage metrics, there are tools like NCover that you can run while your automation executes to report what percentage of your code base is exercised by your tests.
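NCover is a .NET tool, but the idea is language-agnostic: a tracer records which lines of the code base execute while the test suite runs. As a hedged illustration (not NCover itself), Python's standard-library `trace` module does the same thing; the two functions below are a stand-in for a real code base:

```python
import trace

# Toy "code base": only one of these functions is exercised by the test.
def covered_feature(a, b):
    return a + b

def uncovered_feature(a):
    return a * 2

# Count which lines execute while the "automated test" runs.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(covered_feature, 2, 3)

# counts maps (filename, line_number) -> hit count; lines of
# uncovered_feature never appear in it.
counts = tracer.results().counts
hit_lines = {lineno for (_, lineno) in counts}
print("lines executed during the test:", sorted(hit_lines))
```

A real coverage tool aggregates these hit counts across the whole suite and divides by the total number of executable lines to produce the percentage figure.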