performance testing for complex environments
When working in an environment where multiple applications (batch and online) interface with each other, and where they share resources (running on the same machines, accessing common databases, etc.), what would be an appropriate strategy for performance testing?
For instance: say we have a new functional evolution in one of these interconnected applications. Would it be a good idea to isolate this specific part of the environment in a pre-production environment, run a test on it, and then try to interpret the results? Of course, by doing this you miss out on all the interference with other apps. It might also be necessary to use stubs to simulate the interaction with other parts of the infrastructure. Any best practices here?
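To make the stubbing idea concrete, here is a minimal sketch of a stub standing in for a downstream application, assuming (hypothetically) that the systems talk HTTP/JSON. The endpoint, payload, and the 50 ms latency figure are all made up; in practice you would set the stub's delay to the downstream system's measured production latency so the isolated test is not unrealistically fast.

```python
# Minimal HTTP stub for a downstream application, so one application can be
# performance-tested in isolation. Endpoint and payload are hypothetical.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)  # simulate the downstream system's typical latency
        body = json.dumps({"status": "OK", "source": "stub"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet during test runs

# Port 0 lets the OS pick a free port; the stub runs in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application under test would call the stub instead of the real system.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/orders") as resp:
    reply = json.loads(resp.read())
server.shutdown()
```

The obvious caveat, as noted above, is that a stub only reproduces the interface, not the contention (CPU, locks, I/O) the real neighbour would cause.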
You will probably tell me that it's better to take the complete infrastructure in the pre-prod environment and run the tests there. However, this is not possible for resource reasons. Even more important: it is not possible (I think) to perform this kind of testing, since there is so much data coming from all these different locations (different protocols) that it's practically not feasible to generate it all with load generators.
Any comments will be greatly appreciated.
Re: performance testing for complex environments
At this point it may be best to follow a process of identifying the test objectives. I recommend starting small, with one or two objectives.
Example objective: At what point does my system stop meeting the expected response time as concurrent users are added, with all the normal batch processes running?
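As a toy sketch of checking that objective (everything here is hypothetical: `call_system` stands in for whatever drives your real application, and the 0.5 s budget is an invented SLA), you would ramp concurrency and record a high-percentile response time at each level:

```python
# Ramp up concurrent "users" against a stand-in workload and record the
# 95th-percentile response time at each concurrency level.
import time
from concurrent.futures import ThreadPoolExecutor

def call_system():
    """Placeholder for one real request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # stands in for actual request latency
    return time.perf_counter() - start

results = {}
for users in (1, 5, 10):
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = sorted(pool.map(lambda _: call_system(), range(users * 5)))
    results[users] = timings[int(len(timings) * 0.95)]  # p95 at this level

sla = 0.5  # hypothetical response-time budget in seconds
# First concurrency level at which the objective is no longer met, if any.
breach = next((u for u, p95 in results.items() if p95 > sla), None)
```

Real tools (load generators) do this for you; the point is that the objective translates directly into a measurable pass/fail question per concurrency level.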
The objectives help you to focus precisely in an otherwise very large universe of performance test possibilities.
With the objectives laid out, and using the above example, you would need at least some of the following information in order to model scenarios:
1) Web and/or system/server/db-server usage logs from production. These give you a usage profile upon which to base your model(s).
2) If your objectives include scaling or looking into the future, you may need usage projections for other scenarios, or a good reason to stress test the system.
3) You need data that is either real or scrubbed, and it should be as voluminous as production. One of many reasons for having this data is to determine the performance of the database servers, while allowing you to vary transactions so that you are not limiting yourself to cached queries or data.
4) Sys Architect and/or SysAdmin, DBA, Network Engineer, or Developer input about suspected bottlenecks. This is useful for establishing more focused load tests that target specific areas like the database.
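On point 3, one simple way to avoid measuring only cached queries is to spread lookups across a production-sized key space rather than reusing a handful of test records. A sketch, where the table, column, and key-space size are all assumptions:

```python
# Randomize the bound value of a parameterized query across a key space
# as large as production, so repeated iterations don't keep hitting the
# same cached rows or query plans.
import random

KEY_SPACE = range(1, 1_000_001)  # assume ~1M customer ids, like production

def next_query():
    """Return (sql, params) for one test iteration; only the value varies."""
    customer_id = random.choice(KEY_SPACE)
    return ("SELECT * FROM customers WHERE id = %s", (customer_id,))

sql, params = next_query()
```

Keeping the SQL text constant and varying only the bound parameter is deliberate: it exercises the data cache realistically without defeating legitimate statement caching.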
You indicated a resource shortage. Does this mean that your test environment is not of the same scale as prod?
One other thing to consider in terms of helping to get the initial focus is:
What questions about system performance are you trying to answer?
Stay tuned here for more guidance from others...