Timing issues or Windows memory leak?
QC 9.0 running on a 3GHz Xeon processor (server), 400GB HD, 3.2GB RAM
Machine1 running QTP 8.2, 2.7GHz proc, 120GB HD, 2GB RAM (Test Set 1)
Machine2 running QTP 9.2, 2.7GHz proc, 120GB HD, 2GB RAM (Test Set 2)
Both Test Sets contain 64 scripts. Each script varies in length from a few hundred lines to a few thousand. The suites generally run for about 6 1/2 hours each before completing.
I am testing against an application that presents an interactive GUI (containing over 60 individually specialized applications) and a web site, and that pulls/writes data from/to an Oracle database.
If I break the Test Sets up into smaller subsets (eight subsets of 8 scripts each per test set), I never get any failures in the Status column of QC. For example:
Test Set 1a - 8 Scripts
Test Set 1b - 8 Scripts
Test Set 1c - 8 Scripts
- etc.
Why would the exact same scripts pass when broken up into smaller test sets? When running a large number of scripts in a suite, is it best to break them into smaller subsets because of Windows and its memory leaks? Not to mention that some of the applications I test regularly exceed 1GB of memory on the test machines.
I simply don't understand what additional overhead would make the systems involved more "stressed" when running all the scripts in a single test set versus running them in smaller subsets.
Re: Timing issues or Windows memory leak?
QTP 8.2, and I suspect 9.2 as well, has memory leaks when controlling a browser over long test runs. You can search the QTP forum for many threads covering variations on this problem. The workaround is to terminate and restart the browser every so many minutes, or at an appropriate iteration point in the script. Give it a try and see if it helps.
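Here is a minimal sketch of that workaround in QTP's VBScript, assuming the AUT runs in Internet Explorer; the URL, restart interval, and iteration count below are placeholders to adapt to your own suite:

    Const RESTART_EVERY = 10          ' placeholder: recycle the browser every 10 iterations
    Const APP_URL = "http://yourapp/" ' placeholder: your application's start page

    Sub RestartBrowser()
        ' Close all IE instances, give Windows a moment to reclaim the memory,
        ' then launch a fresh browser at the start page and wait for it to load.
        SystemUtil.CloseProcessByName "iexplore.exe"
        Wait 5
        SystemUtil.Run "iexplore.exe", APP_URL
        Browser("CreationTime:=0").Page("title:=.*").Sync
    End Sub

    Dim i
    For i = 1 To 50                   ' placeholder iteration count
        ' ... your normal test steps for this iteration ...
        If i Mod RESTART_EVERY = 0 Then RestartBrowser
    Next

Note that killing the whole process (rather than just closing the browser window) is what matters here, since leaked memory generally isn't released until the process exits.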