  1. #1
     Apprentice | Join Date: Mar 2013 | Posts: 14

    Performance runs - take the whole sample, or just the core part of the execution?

    Hello all,
    The general practice at the shop I am at is to manipulate the results file to remove the ramp-up and ramp-down segments of a run before pulling together the analytics for the execution. I had not experienced this in the past, and I'm wondering whether you all consider it the norm, whether you have concerns that any manipulation of test data can skew results, or any other comments you might have.

    I have mixed feelings about this personally: it creates a greater amount of work, and it moves the documented result outside of Performance Center (with duplicate storage of the data). I'd appreciate other thoughts.
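    For concreteness, the kind of post-processing being described can be sketched in a few lines. This is only an illustrative sketch with made-up numbers, not how Performance Center itself filters; it assumes raw results exported as (elapsed seconds, response time) pairs, and the window boundaries are hypothetical:

```python
# Sketch: clip a results set to its steady-state window before computing
# summary statistics. Sample values and window boundaries are invented;
# real Analysis exports look different.
from statistics import mean

def steady_state(samples, ramp_up_end, ramp_down_start):
    """Keep only samples whose timestamp falls inside the steady-state window."""
    return [(t, rt) for (t, rt) in samples if ramp_up_end <= t < ramp_down_start]

# (elapsed_seconds, response_time_seconds) pairs from a fictional 10-minute run
# with a 2-minute ramp-up and a 2-minute ramp-down.
samples = [(30, 0.4), (90, 0.5), (180, 1.2), (300, 1.3), (420, 1.1),
           (510, 0.6), (570, 0.5)]

clipped = steady_state(samples, ramp_up_end=120, ramp_down_start=480)
print(round(mean(rt for _, rt in samples), 2))   # whole run, diluted by light-load ramp samples
print(round(mean(rt for _, rt in clipped), 2))   # steady state only, noticeably higher
```

    The point the thread keeps coming back to is visible in the two averages: the lightly loaded ramp samples pull the whole-run number down.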

  2. #2
     Super Member SteveO | Join Date: Jul 2004 | Location: St. Louis, MO, USA | Posts: 1,236
    We practice the same.

    We're interested in what we refer to as the "steady state" period. Our tests are designed to execute X iterations per hour, so if you include the ramp-in or ramp-out sections, those segments of your test will generally show lower throughput than the target rate during the steady state.

    In what ways does it create extra work?

    We upload our Analysis HTML report back into Performance Center so all of the results remain together within PC.
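    The throughput dilution mentioned above is easy to demonstrate with invented numbers. Everything here (completion times, window sizes, the implied target rate) is hypothetical:

```python
# Sketch: why ramp periods drag measured throughput below a test's target
# rate. Timestamps are fictional iteration-completion times, in minutes,
# for a 30-minute run with a 10-minute steady-state window.
completions = [3, 7,                      # ramp-in: vusers still starting
               11, 13, 15, 16, 18, 19,   # steady state (minutes 10-20)
               23, 28]                    # ramp-out: vusers stopping

def rate_per_hour(events, start, end):
    """Completions per hour within the window [start, end), in minutes."""
    n = sum(start <= t < end for t in events)
    return n * 60 / (end - start)

print(rate_per_hour(completions, 0, 30))    # whole run: well below target
print(rate_per_hour(completions, 10, 20))   # steady state: the rate the test was designed for
```

    Averaged over the whole run, the ramps make the achieved rate look far lower than what the system actually sustained during the steady state.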

  3. #3
     Member | Join Date: Jan 2009 | Posts: 110
    I see how it can be considered an extra step to download the Analysis file and then filter out the ramp-up and ramp-down, but this is actually a very common approach to testing the steady state of an application.

    What you can do is create a trending report where you focus on the steady state and use that for your reporting. That is, assuming there is no need to dig deeper, as there would be if the test or application runs into issues.
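    A trending report of the kind suggested here can boil down to something like the following sketch. The run names, sample values, and the nearest-rank percentile helper are all invented for illustration:

```python
# Sketch: a bare-bones trending report built from the steady-state samples
# of several runs. All names and numbers are invented for illustration.
import math

def percentile(values, pct):
    """Nearest-rank percentile of `values` (pct in 1..100)."""
    ordered = sorted(values)
    k = min(len(ordered) - 1, max(0, math.ceil(pct / 100 * len(ordered)) - 1))
    return ordered[k]

# Steady-state response times (seconds) per build, hypothetical data.
runs = {
    "build_101": [0.8, 0.9, 1.0, 1.1, 1.2],
    "build_102": [0.9, 1.0, 1.1, 1.2, 1.4],
    "build_103": [1.3, 1.5, 1.6, 1.8, 2.1],   # a regression worth digging into
}

for name, samples in runs.items():
    print(f"{name}: p90={percentile(samples, 90):.2f}s")
```

    Comparing one steady-state percentile across builds is usually enough to spot when a deeper look at a specific run is warranted.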

  4. #4
     New Member | Join Date: Nov 2013 | Posts: 10
    Quote Originally Posted by mholian View Post
    The general practice at the shop I am at is to manipulate the results file to remove the ramp up and ramp down segments of a run before pulling together the analytics for the execution...
    In the multiple locations where I have done performance testing, I have not seen anyone use the ramp-up/ramp-down time as part of their test results (unless some run anomaly is observed), since the response times from ramp-up/ramp-down tend to skew the results due to the low volume levels.

  5. #5
     Moderator | Join Date: Feb 2010 | Location: Europe | Posts: 944
    Lower load during ramp-up/ramp-down will lower average response times across the whole of the test, so including it won't give an accurate result for the stable period.

    I only ever report on the ramp-up if there is an issue during it.

  6. #6
     Member | Join Date: Jan 2007 | Posts: 230
    Agree with those advocating looking only at the "steady state" for the most accurate results.
    Kevin Jackey

  7. #7
     Moderator JakeBrake | Join Date: Dec 2000 | Location: St. Louis - Year 2025 | Posts: 15,609
    Any time you filter results, you are shortchanging your customer of the whole story, unless you have made a clear and not-to-be-confused statement about what has been filtered, why it has been filtered, and what the inherent risks are. Ramp-up transaction measurements are extremely important. Let us take a classic example: say all your users are in the same time zone and all access the app, hit the site page, and log in within a 5-minute window. If those users are waiting 20 or 30 seconds, or a minute or more, you might just have a performance issue that can be fixed easily. In summary, I think it very unwise to omit ramp-up transaction times unless they are documented as I indicated in the opening statements above.

    This thread should actually be moved to the Performance & Load Testing forum, since this is a consideration for any performance testing using tools of this class.
    Last edited by JakeBrake; 12-26-2014 at 03:32 AM.

  8. #8
     Advanced Member LauraScharp | Join Date: Aug 2002 | Location: Huntington Beach, Ca. USA | Posts: 725
    One thing I like to do is present the entire duration of the test on the Avg Resp Time graph (merged with running vusers). This way, if there were any issues during ramp-up or ramp-down, they become obvious. As does any increase in response times "over time". I annotate the graph to point out anything interesting.

    I typically present the response times for a filtered 1 hour under load, similar in format to the Summary Report. In addition, if there were other time periods that should be looked at more closely, I'll drill down into those (maybe we had a huge spike in the 3rd hour of the test that we'd like to explore further).

    It all comes down to determining what is necessary to tell the story of the test. Even if you don't include ramp-up/ramp-down in your response times, I think it is very valuable to have this in graph form and to discuss it in the commentary of the test.
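    The "drill down into an anomalous window" step can be sketched roughly as follows, with invented sample data and an arbitrary 2x-baseline threshold standing in for a real investigation rule:

```python
# Sketch: flag time windows worth a closer look, in the spirit of drilling
# into a spiky hour. Sample data and the flagging threshold are invented.
from collections import defaultdict
from statistics import mean

# (elapsed_hours, response_time_seconds) samples from a fictional 4-hour run
samples = [(0.2, 1.0), (0.7, 1.1), (1.3, 1.0), (1.8, 1.2),
           (2.4, 3.9), (2.6, 4.2),            # spike in the third hour
           (3.1, 1.1), (3.8, 1.0)]

by_hour = defaultdict(list)
for t, rt in samples:
    by_hour[int(t)].append(rt)

baseline = mean(rt for _, rt in samples)
for hour in sorted(by_hour):
    avg = mean(by_hour[hour])
    flag = "  <-- investigate" if avg > 2 * baseline else ""
    print(f"hour {hour}: avg={avg:.2f}s{flag}")
```

    The full-duration graph described above serves the same purpose visually; a table like this just makes the anomalous window explicit in the write-up.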
    Laura Scharp
    SoCalGal - Defender of end user response times!

  9. #9
     Super Member SteveO | Join Date: Jul 2004 | Location: St. Louis, MO, USA | Posts: 1,236
    I'm not discounting that there's value in ramp-in periods, but for us, those are usually different tests.

    When we filter down to the steady-state period, that particular test is generally only trying to prove that the application behaves acceptably at a certain rate.

    We'll often run a separate "login" test if there are worries about logging in all of a system's users within a particular timeframe (power outage, server outage, etc.).
