If a product was unit tested by developers before being handed over to QA, it should supposedly be free of major bugs, and presumably that should save QA human resources and time. Does anybody know whether any research has been done on this, and where I can find it? Any help would be greatly appreciated.
Recently I asked Les Hatton of the University of Kent this question; he has done a lot of research in computer science. He told me that there has been little research on this topic, because projects of this kind are quite expensive to run.
But all the testers I have spoken to who were involved in projects where programmers unit tested their own code said it had a major impact on code quality. And of course this is my experience too.
“None of us is as smart as all of us” - Gerald Weinberg
Testers and developers usually have different skills, and they think differently when they design their tests.
Our company writes unit tests and measures their coverage periodically to keep unit testing at a stable level. The quality of the application is really good: it is very stable and doesn't leak resources or memory. But to verify functional requirements we still need testers. A large number of bugs live in the user interface modules, which are hard to test from a unit perspective.
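As a rough sketch of what that looks like in practice (assuming Python with pytest and pytest-cov, which the thread itself doesn't specify, and a made-up module name discount.py), a developer-written unit test plus a coverage run might look like this:

    # discount.py - the unit under test (hypothetical example)
    def apply_discount(price, percent):
        """Return price reduced by percent; reject out-of-range input."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # test_discount.py - developer-written unit tests
    import pytest
    from discount import apply_discount

    def test_typical_discount():
        assert apply_discount(200.0, 10) == 180.0

    def test_zero_discount_leaves_price_unchanged():
        assert apply_discount(99.99, 0) == 99.99

    def test_invalid_percent_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(50.0, 150)

    # Run the tests and report line coverage (requires pytest and pytest-cov):
    #   pytest --cov=discount --cov-report=term-missing

One way to keep coverage at a "stable level" as described above is to fail the build when it drops, e.g. with pytest-cov's --cov-fail-under=80 option. Note that tests like these only exercise the logic in isolation; they say nothing about whether the UI behaves correctly, which is exactly the gap testers still have to fill.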
I agree with all of you and feel that unit testing is very important: it can catch a lot of trivial issues that the QC team then doesn't need to spend time on.
Also, if the Unit Test Plan is reviewed by QC prior to coding, that could help a lot in reaching a better and more correct understanding of the requirements.