I am curious to find out how long most volume tests are run. We generally run our tests for about an hour, sometimes an hour and a half. Some of our project managers want us to run volume tests for anywhere between 4 hours and several days. I have explained to them that this is not necessary, that we can test the system in a shorter period of time, but then I get the argument that the testing would not simulate real-life usage. How do you answer this argument? I have been trying, but I keep slipping up.
One approach is to look at any existing production statistics, if available. Here at my latest project we have built a profile around the existing legacy stats and business analysis.
Generally we know we had a total population of "y" that could use the system. From the stats, we see that the actual peak concurrent load equals "x". The application is used 99% of the time between "normal" work hours of 8 to 5. Generally there is a slow ramp up from 8 to 9, then a steady ramp up from 9 to 10. Maximum concurrent load lasts until about 12. Then the users head to lunch; afterwards there is a slow ramp up from 1 to 2, and from 3 to 4 there is another similar peak.
We then defended a two-and-a-half-hour test run simulating the ramp up, then 2 hours of peak load, then a pretty aggressive ramp down.
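The shape of that run can be sketched as a simple stepped load profile. This is a minimal illustration, not anyone's actual tool configuration: the peak user count and the exact ramp durations are hypothetical, chosen only to fit a 2.5-hour total (ramp up, 2 hours at peak, aggressive ramp down).

```python
def target_users(minutes_elapsed, peak_users=100):
    """Return the target concurrent-user count at a point in the test.

    Illustrative profile: 20-minute ramp up, 120 minutes at peak,
    10-minute aggressive ramp down (150 minutes total).
    """
    if minutes_elapsed < 0 or minutes_elapsed > 150:
        return 0
    if minutes_elapsed < 20:
        # Ramp up: scale linearly toward the peak
        return int(peak_users * minutes_elapsed / 20)
    if minutes_elapsed <= 140:
        # Hold at peak for 2 hours
        return peak_users
    # Aggressive ramp down over the final 10 minutes
    return int(peak_users * (150 - minutes_elapsed) / 10)

# Spot-check a few points in the profile:
print(target_users(10))   # mid ramp-up -> 50
print(target_users(60))   # at peak -> 100
print(target_users(145))  # mid ramp-down -> 50
```

Most load tools let you express this kind of profile directly (ramp period, hold period, ramp-down), so the sketch maps onto whatever scheduler your tool provides.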
Another thing I will do during pre-test strategy is discuss test-cycle turnaround. Depending on the app, there may be a need for quick turnaround to tune, etc., which can require multiple tests in one day or one week. By testing 4 hours or more, you may only be able to get one test in a day, or a week, or two. If you need multiple DB restores, server recycles, multiple code builds, or a long, cumbersome change-control process, then it may not be practical to have longer tests.
What players will be monitoring the app under test? What is their availability? If you can answer the same questions in 1 hour that you can in 4, then I think they will see the turnaround advantages of going shorter.
Hope this helps. (also that it is not difficult to decipher)
There really is no argument for not doing at least one long (8 hour +) soak test, to capture anything you might miss in a shorter test (a memory leak being the obvious example).
I once tested a system which performed fine for anything between 3 and 6 hours and then dropped off badly (this normally coincided with the tech support people having just gone home, saying "well, it looks OK now - nothing we can do"!)