I am working on a project where we had to create test cases based on the business requirement documents and system use case documents.
The developer implemented the requirements with a different design than the one envisaged in the system use case documents.
The functionality works, but the test cases that cover the non-functional implementation details are all failing, making the delivery a total fail.
What would be the best approach to make these test cases match the eventual design implementation? Should we create test cases after the high-profile prototype has been delivered for review, or should we base test case creation on the system and business use cases?
The first thing you should do is find out why the development team implemented a different design. There will be some reason for it. If they were working from the correct document, does that mean the test team did not have the updated design document/use case document?
High-level test cases should be based on the agreed-upon requirements/use cases. The design documents can give you more technical detail about the system, which will help you come up with your more detailed, drill-down test cases.
The initial system and business requirements have been reviewed and signed off. The design documents were documented to show the technical aspects of the system's functionality.
The scenario here is that you have a system use case document that says a page will provide three work sections, and you write test cases to verify that three work sections are provided.
The implementation instead comes with a list box in place of the three work sections.
Does this warrant a design fail even though the functionality is provided?
If yes, then most of my test cases based on the documentation will fail and will need re-engineering to cater for the different, undocumented implementation.
Would it be appropriate to ask the developer to revise the system use case documents to reflect the intended implementation, and then create test cases from those documents, so as to avoid a large number of failures?
Personally, I'd fail my tests and raise bugs. At the root of this, you are comparing documented expected behaviour (the system use cases) against the actual behaviour of the system. At the time of testing you do not know whether the documentation or the system is incorrect; subsequent analysis will show this.
I'd raise the issue with your manager as well. Keeping good documentation will aid future releases, not just this one, and you are probably wasting a lot of your time dealing with these issues.