Hello,

Almost always, something has to be developed for a performance test. It can be a simple script, a set of scripts, or a complicated solution with many modules responsible for things such as load generation, monitoring, and so on. Like any other product of software development, these artifacts are error-prone. The errors span a wide range:
- wrong test plan
* wrong application usage model from business owners or marketing
* wrong monitoring plan, with important metrics left out
- discrepancy between the tests (implementation) and the test plan (specification)
* wrong simulated flow that cannot be detected from the final results
* wrong load volume
* wrong set of monitored metrics
- infrastructure resource leakage or corruption caused by deviations between real users and the simulated load
- destruction of data consistency that cannot be easily identified
- metric gathering implemented incorrectly (mistakes in dimensions/units, or one metric taken for another)
- unexpected influence of the test itself on the environment under test

Even if such defects don't lead to dramatic consequences such as data loss or physical damage to assets, they force the team to redo the testing, which consumes a lot of resources, man-hours, and money. Everybody knows how long and expensive performance tests can be.


The question is: what kind of practices do you use to eliminate, or at least mitigate, the risk of such defects undermining load testing efficiency? Testing the tests :), reviews, preliminary runs with specific goals, or something else...
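
To make the "testing the tests" idea a bit more concrete, here is a minimal sketch of what I have in mind: an assertion-style check (Python) run against the log of a short preliminary run, before the long run starts. All names here (PLANNED_RPS, load_log.csv, the column names) are made up for illustration, not taken from any particular tool; the point is only to catch items from the list above, such as wrong load volume or mixed-up metric units, early.

[code]
# Hypothetical sanity checks applied to the log of a short "smoke" run.
# File name and column names (timestamp_s, latency_ms, status) are assumptions.
import csv

PLANNED_RPS = 50               # load volume taken from the test plan
TOLERANCE = 0.10               # accept +/-10% deviation on the smoke run
MAX_SANE_LATENCY_MS = 60_000   # anything above this hints at a unit mix-up

def check_smoke_run(log_path: str) -> None:
    """Compare a short preliminary run against the test plan before the long run."""
    with open(log_path, newline="") as f:
        rows = list(csv.DictReader(f))

    if not rows:
        raise AssertionError("smoke run produced no samples - load generator did not fire")

    duration_s = float(rows[-1]["timestamp_s"]) - float(rows[0]["timestamp_s"])
    achieved_rps = len(rows) / duration_s if duration_s > 0 else 0.0

    # 1. Wrong load volume: achieved rate should match the planned rate.
    assert abs(achieved_rps - PLANNED_RPS) <= PLANNED_RPS * TOLERANCE, (
        f"achieved {achieved_rps:.1f} rps, planned {PLANNED_RPS} rps")

    # 2. Metric dimension mix-up: implausible latencies suggest wrong units were logged.
    worst = max(float(r["latency_ms"]) for r in rows)
    assert worst < MAX_SANE_LATENCY_MS, (
        f"max latency {worst} ms looks like a unit/dimension error")

    # 3. Wrong simulated flow: failed requests in the smoke run make the long run pointless.
    failures = sum(1 for r in rows if r["status"] != "200")
    assert failures == 0, f"{failures} failed requests during smoke run"

if __name__ == "__main__":
    check_smoke_run("load_log.csv")
[/code]

The idea is that a few minutes of a smoke run plus checks like these are much cheaper than discovering the same defect after a multi-hour test.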


I am trying to find ways to improve the performance testing process in our organization and make testing cycles shorter.

So, thanks for any ideas and thoughts.

Regards,
Alexander