If test cases are written for one version and there is no further development until the next version, then a person who knows the complete functionality of the application can give training. Based on the documents, one can also write test cases that may not be 100% valid but should be fairly good, at least 80% valid. Then starts the process of verifying the test cases and making changes, until the person (or group of people) who knows the functionality best and has been involved with the application from the start finally approves them. But there is one catch: you can't say that the person who has worked on the project from the beginning knows more than someone who joins the project later. It all depends on knowledge, intelligence, and how you think, and that changes from person to person.
Just run the darned things and see if they pass or fail. If they fail, determine whether the failure is caused by the test case or by the application. If they pass, do a quick analysis to validate that they should have passed, as determined by the requirements.
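The triage described above can be sketched as a tiny decision function. This is only my own illustration of the idea (the `TestResult` shape and the labels are invented, not from any real test framework):

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    matches_requirement: bool  # did the observed behavior match the spec?

def triage(result: TestResult) -> str:
    """Classify a test run the way the post describes:
    failures get a root-cause check (test case vs. application),
    and passes still get a quick sanity check against requirements."""
    if not result.passed:
        # Failure: investigate whether the test case or the app is at fault.
        return "investigate: test case or application defect"
    if result.matches_requirement:
        return "valid pass"
    # Passed, but the behavior contradicts the requirement:
    # the test case itself is probably inadequate.
    return "false pass: review test case against requirements"
```

The key point the sketch makes is the third branch: a green result is not automatically a good result; it still has to be checked against what the requirements say should happen.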
OK. Subtle difference in viewpoint. I was referring to the adequacy of each individual test case: whether it tests to the depth required for its particular area.
Example: We have many hundreds of test cases. In the light of experience, it turns out that some of those do not do enough to prove the validity of the area(s) they are testing. Cumulative adequacy doesn't come into the picture.
And I believe that speaks more directly to the original question.