  1. #1
    Apprentice
    Join Date
    Oct 2012
    Posts
    23

    How to extrapolate Load Test results

    Hi All,

    I have done a couple of load tests with incremental load, but I have limitations on running further tests. With the existing test results, how can I extrapolate the load test results? I would like to estimate the response time, throughput, CPU, and memory usage by extrapolating the existing results. Please let me know what approach you would suggest. Is there any free tool available for this purpose?

    Thanks in Advance

  2. #2
    Moderator Joe Strazzere's Avatar
    Join Date
    May 2000
    Location
    USA
    Posts
    13,170
    Be very careful when extrapolating.

    Conceptually, you are attempting to say "Even though we can't test further, if we could, we predict X would happen." Without a lot of data and evidence, those can be very risky predictions.

    For example, you might see response time stay constant as you go from 1 user to 10, to 100, to 1,000. But at some point, the system will no longer be able to handle the increased load. If you stop testing at 1,000, you don't know if that inflection point will come at 1,001 users or 100,000.

    On the other hand, your tests may already have hit an inflection point. If your response time increases dramatically at 100 users, you can usually predict that it won't get any better at 1,000. Still, you can't tell if it will get twice as bad, or go to infinity - unless you actually test at 1,000.
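
    As a toy illustration (hypothetical numbers and a naive straight-line fit, sketched in Python; not from any real test), here is how little the tested data can tell you about loads beyond the last tested point:

    import numpy as np

    users   = np.array([1, 10, 100, 1000])     # loads actually tested
    resp_ms = np.array([120, 121, 123, 125])   # response time looks nearly flat so far

    # A naive linear fit over the tested range
    slope, intercept = np.polyfit(users, resp_ms, 1)

    def predict(u):
        return slope * u + intercept

    # The fit still predicts well under 200 ms at 10,000 users. If the knee
    # is really at 2,000 users, the true value could be seconds, or timeouts -
    # nothing in the tested data can tell you which.
    print(f"Predicted at 10,000 users: {predict(10_000):.0f} ms")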

    It's sort of a truism in performance testing that there is always a bottleneck. Removing the "closest" bottleneck just exposes the next one. And it's often hard to guess where that next one might be.

    I try not to present predictions, but instead present my findings based on the data and results of the experiments I actually performed. Tread carefully when predicting. Unless your crystal ball is very, very shiny.
    Last edited by Joe Strazzere; 02-07-2013 at 04:15 AM.
    Joe Strazzere
    Visit my website: AllThingsQuality.com to learn more about quality, testing, and QA!

  3. #3
    Member
    Join Date
    Apr 2008
    Location
    India
    Posts
    244
    Hi,

    I agree, Joe, but what if the customer insists that we extrapolate results for higher user counts when further testing is limited by budget and time constraints? In such a case, how can we handle the situation?

    I would appreciate any good thoughts / suggestions. Thanks.

    Regards,
    Mahesh

  4. #4
    Member
    Join Date
    Jun 2001
    Location
    New York, NY USA
    Posts
    99
    It's a common question and a situation most of us in the industry have faced. The best thing to do is present the as-is findings using your quantitative data. If others want to extrapolate, you need to firmly make the audience aware of the assumptions being made and the risks associated with the "predictions", such as Joe illustrated above. Be wary of the semantics used, so that you avoid being held responsible for the "predictions".
    Matthew Adcock
    RTTS - The Software Quality Experts
    360 Lexington Avenue, 9th Floor
    New York, NY 10017
    LinkedIn: http://www.linkedin.com/in/matthewadcock/

  5. #5
    Moderator Joe Strazzere's Avatar
    Join Date
    May 2000
    Location
    USA
    Posts
    13,170
    Quote Originally Posted by saharinivas View Post
    I agree, Joe, but what if the customer insists that we extrapolate results for higher user counts when further testing is limited by budget and time constraints? In such a case, how can we handle the situation?
    There are lots of possible ways to handle the situation.

    You could say "We don't know any way to give you an extrapolation that doesn't include the risks Joe has pointed out. Thus, we cannot in good conscience give you what you are requesting without more budget and time. Perhaps we can sit down and discuss these risk associated with extrapolation, so you can understand why we are responding this way."

    You could say "Here is your extrapolation."

    You could say "Here is your extrapolation. And here are the underlying assumptions and interpretation risks you should take into account as you read it. We'll by happy to discuss these assumptions with you further at your convenience."

    Your choice of responses here is almost certainly a business decision, and not a technical decision.
    Last edited by Joe Strazzere; 02-08-2013 at 08:13 AM.
    Joe Strazzere
    Visit my website: AllThingsQuality.com to learn more about quality, testing, and QA!

  6. #6
    Member
    Join Date
    Apr 2008
    Location
    India
    Posts
    244
    Quote Originally Posted by Joe Strazzere View Post
    Your choice of responses here is almost certainly a business decision, and not a technical decision.
    Thanks, Joe, for your thoughts. This quote captures the right way to handle the situation.

    Regards,
    Mahesh

  7. #7
    Moderator
    Join Date
    Aug 2001
    Location
    NC
    Posts
    6,041
    See the modeling and simulation tools from Hyperformix and others. These allow you to run virtual performance tests (models) in a "what if" situation for higher numbers of users. Your models will never be perfect; a few assumptions will always creep in. As such, these vendors always recommend that a physical test be conducted to validate that the assumptions present in the model match direct observations in the target environment.

    Most of these modeling programs are built around restricted-resource modeling of CPU, DISK, RAM, and NETWORK, as these are the finite resources in the deployed physical architecture of the application environment. As the application scales, each new user consumes some finite amount of the resource pool. Keep adding users in the model and eventually you exhaust a particular resource, assuming you don't first hit an internal software restriction which inhibits the scalability of the application and prevents it from ever reaching the actual hardware limit...hence the recommendation to actually conduct tests to validate the models when possible.
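
    A rough sketch of that resource-pool idea (all numbers below are made up, and it assumes the per-user cost observed in the physical test stays linear, which is exactly what the validation tests are meant to check):

    measured_users = 1000            # highest load actually tested
    measured = {                     # consumption observed at that load
        "cpu_pct":   45.0,           # % of total CPU
        "disk_iops": 3200.0,
        "ram_mb":    24000.0,
        "net_mbps":  180.0,
    }
    capacity = {                     # total capacity of the environment
        "cpu_pct":   100.0,
        "disk_iops": 10000.0,
        "ram_mb":    64000.0,
        "net_mbps":  1000.0,
    }

    # Per-user consumption, assuming it scales linearly with load
    per_user = {k: v / measured_users for k, v in measured.items()}

    # Users supportable before each resource pool is exhausted
    limits = {k: capacity[k] / per_user[k] for k in capacity}

    for name, users in sorted(limits.items(), key=lambda kv: kv[1]):
        print(f"{name:>10}: ~{users:,.0f} users before exhaustion")
    # The smallest number is the predicted bottleneck - unless an internal
    # software restriction stops scaling before any hardware limit is reached.

    With those made-up figures CPU runs out first, at roughly 2,200 users; a real model would calibrate the per-user numbers from the measured tests.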
    James Pulley

    Replace ineffective offshore contracts, LoadRunnerByTheHour. Starting @ $19.95/hr USD.

    Put us to the test, skilled expertise is less expensive than you might imagine.

    Twitter: @LoadRunnerBTH @PerfBytes

  8. #8
    Member
    Join Date
    Apr 2008
    Location
    India
    Posts
    244
    Thanks James for your thoughts.

    Regards,
    Mahesh

  9. #9
    Member
    Join Date
    Sep 2001
    Location
    Sunnyvale, CA, Santa Clara
    Posts
    394
    Yes, James, you hit the nail on the head with "hit an internal software restriction which inhibits the scalability", which is always a risk with the assumptions behind extrapolating. I am now evolving a Six Sigma approach to load (performance) testing ("Load testing using Six Sigma") that starts with a clearly defined project goal, followed by an approach to satisfy that goal.

    Extrapolation might be acceptable if the software under test (SUT) is a simple calculation function (API) that returns a value. The original question did not indicate the scope of the SUT, but even with a simple API with no data access and CPU as the main consumed resource, there is still a risk in extrapolating. For any performance testing project I would lay out the basic question (project goals) being asked and the approach to be taken, as well as note the extent to which any model could satisfy the given project goals.
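
    For that simple CPU-bound API case, a back-of-the-envelope bound (a sketch with made-up numbers, using the utilization law, not a measurement) is about the most extrapolation I would be comfortable presenting:

    cores = 8                      # CPU cores available to the API
    cpu_s_per_call = 0.004         # CPU seconds per call, measured at low load

    # Utilization-law ceiling: throughput cannot exceed cores / CPU demand per call
    max_calls_per_sec = cores / cpu_s_per_call
    print(f"Upper bound: ~{max_calls_per_sec:,.0f} calls/sec "
          "(assumes perfect scaling and no internal software restriction)")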

  10. #10
    Member
    Join Date
    Feb 2007
    Location
    Virginia, USA
    Posts
    238
    James made a great suggestion. I would, however, urge a great deal of caution even when using tools such as Hyperformix; they require considerable effort and knowledge to obtain accurate results. Another option would be to leverage a third-party lab, such as Platform Lab, in order to scale up your test platform.
    -Troy
    Do or do not... there is no try. -Yoda

 

 
