Different results when missing out tests or running in isolation
I am starting to script some tests in TestComplete 3 for an application and rerun them. I have got to the stage where all tests run as expected if they are run in the order they were created in: as long as none of the tests in the suite are missed out, there is no problem and they all pass. However, when I skip a few tests in the middle, the tests after the skipped ones all fall over because they cannot find the object in question. Performing the same actions manually, outside TestComplete, causes no problems. I only get problems when I run individual tests on the fly or miss out tests in the middle of a suite run. Looking at what is happening in the Object Browser, I believe that object references are being generated on the fly and are dependent on the order of execution; the generated object names bear no resemblance to the application source code.
The application under test is a Windows client using Infragistics controls.
Would any of the experts care to comment on this behaviour and suggest solutions that would allow individual tests to be run in isolation, as is possible with manual testing?
Re: Different results when missing out tests or running in isolation
Automated test cases should not depend on the results of previous test cases if you want to be able to run the tests in an arbitrary order.
It looks like your test cases depend on each other and that is why disabling one of them breaks some of the others.
If test case 1 starts working with the tested application when the app is in state A, then ideally the test case should revert the application back to that exact same state at the end of its execution. This makes it possible to change the order of the test cases, or skip some of them, without problems.
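The state-restoration idea can be sketched in a generic way. The example below uses Python's unittest rather than TestComplete's own scripting languages, purely to illustrate the pattern; the `App` class and its `login`/`open_window`/`reset` methods are hypothetical stand-ins for the application under test:

```python
import unittest

# Hypothetical stand-in for the application under test: a simple
# stateful object whose tests would break if they relied on the
# state left behind by earlier tests.
class App:
    def __init__(self):
        self.logged_in = False
        self.open_windows = []

    def login(self):
        self.logged_in = True

    def open_window(self, name):
        self.open_windows.append(name)

    def reset(self):
        # Revert to the known starting state ("state A").
        self.logged_in = False
        self.open_windows.clear()

app = App()

class IndependentTests(unittest.TestCase):
    def setUp(self):
        # Every test starts from the same known state, regardless
        # of which tests ran (or were skipped) before it.
        app.reset()
        app.login()

    def test_open_orders_window(self):
        app.open_window("Orders")
        self.assertIn("Orders", app.open_windows)

    def test_open_reports_window(self):
        # Does not rely on test_open_orders_window having run first.
        app.open_window("Reports")
        self.assertEqual(app.open_windows, ["Reports"])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because each test builds its starting state in `setUp` (and could tear it down afterwards), any subset of the tests can be run, in any order, and still pass.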