We've been given a requirement to reduce the time it takes to complete our automated test scripts. I've noticed that the test suites are fairly CPU intensive. Will increasing the processing power on the QTP nodes help reduce overall testing time? Or is it advisable to scale outward and break the test suites into smaller chunks distributed over more, but less powerful, nodes?
"Will increasing the processing power on the QTP nodes help reduce overall testing time?"
I would guess it may increase run speed somewhat, but not drastically. This assumes you are not currently running with the CPU pegged for the entire test run.
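To see why faster CPUs alone give limited gains, it helps to run the Amdahl's-law arithmetic. The numbers below are purely illustrative assumptions, not measurements from your suite:

```python
# Illustrative only: assume 60% of the run is CPU-bound work and the
# upgraded nodes make that portion 2x faster. The non-CPU portion
# (app response waits, sync points, I/O) does not speed up at all.
cpu_fraction = 0.6      # assumed CPU-bound share of total runtime
cpu_speedup = 2.0       # assumed per-core improvement from the upgrade

# Amdahl's law: overall speedup is limited by the part you can't accelerate.
overall_speedup = 1 / ((1 - cpu_fraction) + cpu_fraction / cpu_speedup)
print(f"overall speedup: {overall_speedup:.2f}x")  # ~1.43x, not 2x
```

So even doubling CPU speed on these assumptions shaves well under half the runtime, which matches the "some, but not drastic" expectation.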
Breaking up tests to run more of them simultaneously does have the potential for a large overall runtime improvement, though it likely carries more overhead in environment setup, expense, etc. Even so, it is likely to give more improvement than the first option.
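If you do scale out, how you split the suites matters: naive splitting leaves one slow node dominating the total time. A simple longest-processing-time heuristic balances the chunks reasonably well. This is a generic sketch with made-up suite names and runtimes, not anything QTP-specific:

```python
# Hypothetical sketch: assign test suites to N nodes so each node gets a
# roughly equal share of estimated runtime (LPT / greedy bin-packing).

def partition_tests(tests, num_nodes):
    """tests: list of (name, estimated_minutes). Returns one bucket per node."""
    buckets = [{"tests": [], "minutes": 0} for _ in range(num_nodes)]
    # Place the longest suites first, each onto the currently lightest node.
    for name, minutes in sorted(tests, key=lambda t: -t[1]):
        lightest = min(buckets, key=lambda b: b["minutes"])
        lightest["tests"].append(name)
        lightest["minutes"] += minutes
    return buckets

# Invented example data: estimated minutes per suite.
suites = [("login", 30), ("checkout", 45), ("search", 20),
          ("reports", 60), ("admin", 15)]
for i, b in enumerate(partition_tests(suites, 3)):
    print(f"node {i}: {b['tests']} (~{b['minutes']} min)")
```

With these numbers, 170 minutes of serial testing compresses to roughly the longest single bucket (about 60 minutes on three nodes), which is where the big wins come from.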
I assume you are already running the QTP tests in 'Fast' run mode, which makes a huge difference.
Depending on what you are testing, modifying HOW you are testing can have a significant impact. Without getting into too much detail, I was able to improve the efficiency of some oft-used test functions, which ultimately reduced the overall test suite runtime by 20%. You may have similar opportunities.
Most test suites I've seen include very large amounts of redundancy. In some cases some of this could be curtailed to save time.
We also use the Adobe Flex-QTP plugin, and some speculative types say this may have an extremely negative performance impact. I'm not so convinced, as the library lives on the target test nodes and they seem fine. Do you have any thoughts on this?