We are executing QTP scripts from Quality Center. The scripts run fine until iteration 40; after that, CPU usage hits 100% and the scripts fail at the first line of code. If we execute the same script from the local machine, it runs through all iterations successfully. Please suggest a solution, as we need to execute at least 80 iterations for a single script.
I guess I could add a little more to this discussion... we had a problem where one of the testers would have QTP crash on her after executing scripts through Quality Center for a couple of hours.
The thing is, though: her machine is a fairly old (3+ years) single-core Centrino laptop with only 1 GB of RAM. Once she restarted her machine, everything would play together nicely again.
So in her case, her machine exceeds both QTP's and QC's system requirements (I think both only require 512 MB of RAM). But because of the workload she's putting on the machine and its limitations, her system tends to fail after 2-3 hours of testing. So if the machine running QTP through QC falls into this category, maybe see about upgrading its RAM or getting a beefier machine to execute these tests. I never experienced the issue because my laptop is newer, faster, and has double the memory hers has. However, I've never run scripts back to back for hours on end.
And here's another thought... maybe the AUT or QC/QTP themselves have memory leaks? A memory leak is a condition where system resources (memory) used by an application aren't released when the processes using them no longer need them. Maybe such a condition exists within your AUT? Or maybe there's a problem with QTP and QC themselves? If that's the case, QTP 10 has some local-system monitoring tools that might help here. I have no idea how to use them because I just upgraded to 10 myself, but I know the tools are there.
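If you want to check for a leak yourself without waiting on QTP 10's monitoring tools, a few lines of VBScript against WMI can log a process's working set once per iteration. Here's a rough sketch — the process name ("QTPro.exe", or your AUT's executable) and the log path are assumptions you'd adjust for your setup:

Option Explicit

Const LOG_FILE = "C:\Temp\mem_log.txt"   ' assumed path - change as needed

' Logs the current working-set size of every process matching
' the given name, appending one timestamped line per process.
Sub LogProcessMemory(processName)
    Dim wmi, procs, proc, fso, log
    Set wmi = GetObject("winmgmts:\\.\root\cimv2")
    Set procs = wmi.ExecQuery( _
        "SELECT WorkingSetSize FROM Win32_Process " & _
        "WHERE Name = '" & processName & "'")
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set log = fso.OpenTextFile(LOG_FILE, 8, True)   ' 8 = ForAppending
    For Each proc In procs
        log.WriteLine Now & vbTab & processName & vbTab & _
            FormatNumber(proc.WorkingSetSize / 1024, 0) & " KB"
    Next
    log.Close
End Sub

' Call once per iteration, e.g. at the top of your QTP Action.
' "QTPro.exe" is the QTP process; swap in your AUT's executable
' to watch the application under test instead.
LogProcessMemory "QTPro.exe"

If the logged working set climbs steadily across iterations and never comes back down, that's the classic leak signature — and whichever process is growing (QTP or the AUT) tells you where to point the finger.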