Performance runs - take the whole sample, or just the core part of the execution?
The general practice at the shop I am at is to manipulate the results file to remove the ramp-up and ramp-down segments of a run before pulling together the analytics for the execution. I had not experienced this in the past, and I'm wondering whether you all consider it to be the norm, whether you have concerns that any manipulation of test data can skew results, or what other comments you might have.
I have mixed feelings on this personally; it both creates extra work and moves the documented result outside of Performance Center (and duplicates storage of the data). I'd appreciate other thoughts.
We practice the same.
We're interested in what we refer to as the "steady state" period. Our tests are designed to execute X iterations per hour, so if you include the ramp-in or ramp-out sections, those segments will generally show lower throughput than the target rate of the steady-state period.
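To put some (entirely made-up) numbers on that, here is a quick sketch of how ramp segments dilute the measured throughput of a run. The target rate, durations, and ramp rates below are assumptions for illustration, not figures from anyone's actual test:

```python
# Hypothetical numbers illustrating throughput dilution by ramp segments.
TARGET_PER_HOUR = 3600  # steady-state target: 1 iteration/second

# Suppose a 2-hour run: 15 min ramp-up at half rate, 90 min steady state,
# 15 min ramp-down at half rate.
ramp_up   = 0.25 * (TARGET_PER_HOUR / 2)  # 450 iterations
steady    = 1.50 * TARGET_PER_HOUR        # 5400 iterations
ramp_down = 0.25 * (TARGET_PER_HOUR / 2)  # 450 iterations

total_iterations = ramp_up + steady + ramp_down  # 6300
total_hours = 2.0
overall_rate = total_iterations / total_hours

print(overall_rate)  # 3150.0 -- well below the 3600/hour steady-state target
```

Reporting the whole-run rate would make it look like the test missed its target, even though the steady-state segment hit it exactly.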
In what ways does it create extra work?
We upload our analysis html report back into Performance Center so all of the results remain together within PC.
I see where it can be considered an extra step to download the analysis file and then filter out the ramp up and ramp down, but this is actually a very common approach to testing the steady state of an application.
What you can do is create a trending report that focuses on the steady state and use that for your reporting. That is, provided there is no need to dig deeper, as there would be if the test or application runs into issues.
In the multiple locations where I have done performance testing, I have not seen anyone use the ramp-up / ramp-down time as part of their test results (unless some run anomaly is observed), since the response times from ramp up / ramp down tend to skew the results due to the low volume levels.
Originally Posted by mholian
Lower load during ramp up/ramp down will lower average response times over the whole of the test, so it won't give an accurate result for the stable period.
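A small arithmetic sketch of that skew (the transaction counts and averages below are invented for illustration): the fast, lightly loaded ramp transactions pull the whole-run average below the true steady-state figure.

```python
# Invented sample: 200 ramp transactions averaging 0.5 s (light load),
# 1000 steady-state transactions averaging 2.0 s (full load).
ramp_count, ramp_avg = 200, 0.5
steady_count, steady_avg = 1000, 2.0

# Whole-run average is a count-weighted mean of both segments.
overall_avg = (ramp_count * ramp_avg + steady_count * steady_avg) / (
    ramp_count + steady_count
)

print(overall_avg)  # 1.75 -- understates the 2.0 s steady-state average
```

If an SLA were set at, say, 1.8 s, the whole-run number would pass while the steady-state number fails, which is exactly the kind of misleading result filtering is meant to avoid.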
I only ever report on the ramp up if there is an issue during it.
Agree with those advocating looking at the "steady state" only for most accurate results.
Any time you filter results, you are shorting your customer of the whole story, unless you have made a clear and not-to-be-confused statement about what has been filtered, why it has been filtered, and what the inherent risks are. Ramp-up transaction measurements are extremely important. Let us take a classic example. Say all your users are in the same time zone and all access the app, hit the site page, and log in within a 5-minute window. If those users are waiting for what, 20 or 30 seconds, or a minute or more, you might just have a performance issue that can be fixed easily. In summary, I think it very unwise to omit ramp-up transaction times, unless documented as I indicated in the opening statements above.
This should actually be moved to the Performance & Load Testing forum since this is a consideration for any performance testing using tools of this class.
Last edited by JakeBrake; 12-26-2014 at 03:32 AM.
One thing I like to do is present the entire duration of the test on the Avg Resp Time graph (merged with running vusers). This way, if there were any issues during ramp-up or ramp-down, they become obvious. As does any increase in response times "over time". I annotate the graph to point out anything interesting.
I typically present the response times for a filtered 1 hour under load, similar in format to the Summary Report. In addition, if there were other time periods that should be looked at more closely, I'll drill down into those (maybe we had a huge spike in the 3rd hour of the test that we'd like to explore further).
It all comes around to determining what is necessary to tell the story of the test. Even if you don't include ramp-up/ramp-down in your response times, I think that it is very valuable to have this in graph form and to discuss it in the commentary of the test.
SoCalGal - Defender of end user response times!
I'm not discounting that there's value in ramp-in periods, but for us, those are usually different tests.
When we filter down to the steady-state period, that particular test is generally only trying to prove that the application behaves acceptably at a certain rate.
We'll often run a separate "login" test if there are worries about logging in all of a system's users within a particular timeframe (power outage, server outage, etc.).