How can we make sure that the test cases have covered all the requirements and that testing is complete?
In a design context: this could conceivably be done in a review.
In an execution context: tracking coverage via the pass/fail results of the test steps is a reasonable method; if you have some kind of traceability/test management tool, it is fairly easy to get this linkage.
In an audit context: here the test traceability matrix, which demonstrates what we covered, when, by whom, at what stage in a project, and in which iterations, is invaluable.
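For illustration, a minimal traceability matrix might look like the following (the requirement IDs, test case names, and other details are invented):

Requirement   Test case   Stage         Iteration   Tester     Result
REQ-001       TC-01       System test   2           J. Smith   Pass
REQ-002       TC-02       System test   2           J. Smith   Fail
REQ-003       (none)      -             -           -          Not covered

A row with no test case against it is an immediate signal that the requirement is not yet covered.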
You must read and understand the requirement and how to implement the test to satisfy it. Suppose the requirement says something like: "When the run button is activated the application will initialize and shall create a new applog.txt file in the system directory. The applog.txt shall contain an initialization time and date stamp and an inventory of all system files."
Then in your test you should click on the RUN button (test all other means of activating the button as well, but perhaps in subsequent test cases) and verify that applog.txt is created and contains the correct data. Once it does, the test case is complete and can be used to validate the requirement.
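As a sketch of how such a check might be automated in Python (the click_run_button() stub, the system directory path, and the timestamp format are all assumptions to be replaced by your actual UI driver and environment):

import os
import re

SYSTEM_DIR = r"C:\system"                       # assumption: your app's system directory
APPLOG = os.path.join(SYSTEM_DIR, "applog.txt")

def click_run_button():
    """Placeholder: replace with your UI driver's call that activates RUN."""
    raise NotImplementedError

def test_run_button_creates_applog():
    click_run_button()
    # Requirement: applog.txt is created in the system directory.
    assert os.path.exists(APPLOG), "applog.txt was not created"
    with open(APPLOG) as f:
        contents = f.read()
    # Requirement: it contains an initialization time and date stamp
    # (format assumed here to be YYYY-MM-DD HH:MM:SS).
    assert re.search(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", contents), \
        "no initialization time/date stamp found"
    # Requirement: it contains an inventory of all system files; here we
    # only check for the section header, whereas a real test would compare
    # the listed names against an actual directory listing.
    assert "inventory" in contents.lower(), "no system-file inventory found"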
Neill is correct in saying that you need some method of comparing the test cases to the requirements covered. A matrix is usually the most appropriate method.
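As a rough illustration of that comparison, here is a minimal sketch (the requirement IDs and the test-case mapping are invented) that flags requirements with no test case against them:

# Map each test case to the requirements it exercises (hypothetical data).
requirements = {"REQ-001", "REQ-002", "REQ-003"}
coverage = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-001", "REQ-002"},
}

covered = set().union(*coverage.values())
print("Uncovered requirements:", sorted(requirements - covered))   # -> ['REQ-003']

Any requirement that prints out here needs a new test case before the suite can be called complete.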
In short, your test cases are said to be complete only when the requirements are entirely covered, without leaving out a single piece of functionality. This can be verified and validated by peer reviews, which are the most significant approach one can suggest. Though an engineer may feel that he has covered the entire functionality of a module or screen, it can be confirmed only when the test cases are peer reviewed.