Can anyone say what types of performance tests should be included in an iterative regression run on each product version?
I would think:
1. See that response time and other key performance counters do not degrade under a predefined load.
How would you implement an automated performance regression suite of tests?
I am working on a product that is growing rapidly, with a lot of new functionality added in each version.
I need to verify the performance and stability of the main system scenarios with each new version.
I wanted to know if there is any standard approach to performance regression testing.
I would also like to automate this process and I'm working with LoadRunner.
Stability would be a good thing to go after. But if you think that response time will not change when you add new functionality, that is unlikely to match reality. I have rarely seen new functionality added that doesn't impact previously existing functionality in some way.
Adding to and extending Darrel's comment: the new features could make your old scripts obsolete, and even the scenarios themselves. There may be more or fewer screens users are expected to see, or worse, the scenario for the same business activity may be different. In that case you should work out a strategy for comparing the old and the new scenarios.
Do not overcomplicate it. There are no special standards. Just create scripts for your main scenarios and run the same tests against each build, then compare response times and resource utilization. Verify that the scripts still work against the new build: you will probably need to rewrite them from time to time. If you want to cover new functionality, first run the old set of tests (to compare with the previous build) and then the new set of tests (to establish the new baseline). A rough sketch of the comparison step is shown below.
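Here is a minimal sketch of that build-to-build comparison, not tied to any particular tool. The file names, the JSON layout, and the 10% tolerance are assumptions for illustration; you would export the transaction timings from your own results.
[ CODE ]
# Sketch: compare a build's measured response times against a stored
# baseline and flag regressions. baseline.json / current.json and the
# 10% tolerance are assumptions.
import json
import sys

TOLERANCE = 0.10  # fail if a transaction is >10% slower than baseline

def load(path):
    with open(path) as f:
        return json.load(f)  # e.g. {"login": 1.8, "search": 0.9} (seconds)

def compare(baseline, current):
    failures = []
    for txn, base_time in baseline.items():
        cur_time = current.get(txn)
        if cur_time is None:
            failures.append(f"{txn}: missing from current run (script obsolete?)")
        elif cur_time > base_time * (1 + TOLERANCE):
            failures.append(f"{txn}: {base_time:.2f}s -> {cur_time:.2f}s")
    return failures

if __name__ == "__main__":
    failures = compare(load("baseline.json"), load("current.json"))
    for f in failures:
        print("REGRESSION:", f)
    sys.exit(1 if failures else 0)  # non-zero exit fails the build
[/ CODE ]
The non-zero exit code is what lets a build system treat a performance regression the same way it treats a failed functional test.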
Yep. Alex hit it right.
Identify the critical functionality that you want to test each time.
Don't try to do more than you can reasonably maintain release to release.
Establish a baseline of performance and track those metrics going forward.
If new critical functionality is added, enhance or rebuild your scripts and scenarios to accommodate it. But first, make sure you understand the performance delta against the legacy setup. For running the same scenarios unattended on every build, see the sketch below.
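Since the original question mentioned LoadRunner, here is a hedged sketch of kicking off a Controller scenario from a script so it can be scheduled per build. The Wlrun.exe flags below come from LoadRunner's documented command-line interface, but verify them against your installed version; the install path, scenario file, and results folder are assumptions.
[ CODE ]
# Sketch: launch a LoadRunner Controller scenario unattended.
# Verify Wlrun.exe flags against your LoadRunner version's docs.
import subprocess

WLRUN = r"C:\Program Files\HP\LoadRunner\bin\Wlrun.exe"  # assumed install path

def run_scenario(scenario_lrs, results_dir):
    # -Run starts the scenario immediately; -TestPath names the .lrs file;
    # -ResultName sets the folder where results are written.
    cmd = [WLRUN, "-Run", "-TestPath", scenario_lrs, "-ResultName", results_dir]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    rc = run_scenario(r"C:\perf\main_scenarios.lrs", r"C:\perf\results\build_42")
    print("Controller exit code:", rc)
[/ CODE ]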
[ QUOTE ]
Identify the critical functionality that you want to test each time. Don't try to do more than you can reasonably maintain release to release.
[/ QUOTE ]
You want to come up with a performance benchmark test. It should be something that can be run repeatedly each time you get a new build. Come up with a small number of key performance indicators and metrics that you will use each time for comparison; one possible shape for that KPI set is sketched below.
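As an illustration of keeping the KPI set small and fixed, here is one way to encode it. The KPI names, directions, and limits are assumptions; the point is to pick a handful that matter for your product and keep them stable across releases so builds stay comparable.
[ CODE ]
# Sketch: a small, fixed KPI set checked the same way on every build.
# All names and limits below are illustrative assumptions.
KPIS = {
    # name: (direction, limit)
    "avg_response_time_s": ("max", 2.0),   # average transaction time
    "p90_response_time_s": ("max", 3.5),   # 90th percentile
    "errors_per_sec":      ("max", 0.1),
    "throughput_tps":      ("min", 50.0),  # transactions per second
}

def check(measured):
    """measured: dict of KPI name -> value from the benchmark run."""
    problems = []
    for name, (direction, limit) in KPIS.items():
        value = measured.get(name)
        if value is None:
            problems.append(f"{name}: not measured")
        elif direction == "max" and value > limit:
            problems.append(f"{name}: {value} exceeds {limit}")
        elif direction == "min" and value < limit:
            problems.append(f"{name}: {value} below {limit}")
    return problems

# Example:
# print(check({"avg_response_time_s": 1.7, "p90_response_time_s": 4.0,
#              "errors_per_sec": 0.0, "throughput_tps": 62.0}))
[/ CODE ]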