  1. #1

    Analyzing/Comparing Different results files

    1. What version of LoadRunner (LR) or PerformanceCenter (PC) are you using? (specify which tool LR or PC)
    - LR 9.5

    2. What is the protocol you are recording?
    Web (HTTP/HTML) in most cases, SAPGUI in other cases

    5. Which LoadRunner/PerformanceCenter feature packs (FPs) or service packs are you using?

    6. VuGen Recording - are you using Old or New Recording Engine?

    7. You must list here the specific licensed Vuser type for your specific issue AND the license amount you have for this Vuser protocol, per the example below (Unlimited, Permanent, N/A, etc. are not options).

    Global/All Monitors/1000

    8. Is your support/maintenance contract current and active?

    9. What platform(s) (PCs) and Operating Systems (Windows-XP, etc.) are being used for load generators and controllers? Include version and service packs (SP1 or 2, etc.)
    Windows XP SP2

    10. If you have filed a service request with HP/Mercury, what have they told you at this point with respect to your issue?

    Hi All,

    I am running scenarios with various quantities of Vusers: 100, 500, 1000, etc. Currently each scenario is set up to create its own results file, which would be analyzed individually. In order to effectively analyze and report all the results in a meaningful way to interested parties, I was wondering whether there is a way to COMBINE results files so as to overlay the results from the various Vuser quantities.
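
    For instance, I could imagine exporting each run's transaction summary from Analysis to CSV and merging the files by hand into one table keyed by Vuser count. A rough sketch of what I mean (the file names and column headers below are made up and would need to match whatever the export actually produces):

    [ CODE ]
    import csv

    # One exported summary per scenario run (hypothetical file names).
    runs = {100: "results_100.csv", 500: "results_500.csv", 1000: "results_1000.csv"}

    combined = []
    for vusers, path in runs.items():
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                combined.append({
                    "vusers": vusers,
                    "transaction": row["Transaction Name"],  # assumed column header
                    "avg_resp_s": float(row["Average"]),     # assumed column header
                })

    # One file stakeholders can pivot or chart, instead of ten separate reports.
    with open("combined_results.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["vusers", "transaction", "avg_resp_s"])
        writer.writeheader()
        writer.writerows(combined)
    [/ CODE ]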

    Or maybe I'm just going about this entirely the wrong way. Is there a better way to evaluate disparate quantities of Vusers? Just don't want to go to the stakeholders with 10 separate reports.

    Thanks so much in advance for your assistance!


  2. #2

    Re: Analyzing/Comparing Different results files


    I'm not sure exactly what you are asking; James may be able to give you some ideas on combining your datasets.

    My question is this: Why are you trying to do this? If there are 10 stakeholders, why not get them to agree on what constitutes a load test?

    I have seen companies create two levels of test: Nominal (the normal number of Vusers) and Peak (a percentage increase over Nominal, e.g. +25%, +50%, or +100%).
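
    A toy sketch of how the Peak levels fall out of one agreed Nominal figure (the 400-Vuser Nominal value is invented for illustration):

    [ CODE ]
    # Derive Peak load levels from an agreed Nominal Vuser count.
    # The 400-Vuser Nominal figure is invented for illustration.
    nominal = 400
    for pct in (25, 50, 100):
        peak = nominal * (100 + pct) // 100
        print(f"Nominal {nominal} Vusers -> Peak +{pct}% = {peak} Vusers")
    [/ CODE ]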
    Unless you are writing a compiler, strtok is NOT the answer.
    See: http://www.sqaforums.com/showflat.ph...=541641#542222

    QAF is still an exercise in self-sufficiency! (Thank JB!)

  3. #3

    Re: Analyzing/Comparing Different results files

    That is the difficulty: getting them to agree on what constitutes a load test. They are very UNSURE about how many users they anticipate accessing the system, so they cannot give me an acceptable plan on which to build my Vuser base.

    As such, I am left with several different thresholds to "try" at their request. They'd like to see the results from various levels and evaluate performance and potential improvements around those results. I know, I know... BASS-ACKWARDS.

  4. #4
    Super Member SteveO
    Join Date: Jul 2004
    St. Louis, MO, USA

    Re: Analyzing/Comparing Different results files

    I thought 9.5's analysis module allows for comparison between results files?

  5. #5
    Moderator JakeBrake
    Join Date: Dec 2000
    St. Louis - Year 2025

    Re: Analyzing/Comparing Different results files

    [ QUOTE ]
    I thought 9.5's analysis module allows for comparison between results files?

    [/ QUOTE ]
    It does. So does 8.x.

  6. #6

    Re: Analyzing/Comparing Different results files

    Thank you very much for the assistance. I've only ever worked with single results files, as we had a clear definition of users to test, hence one result.

    Found the functionality under File > Cross With Result.

    Thanks all.

  7. #7

    Re: Analyzing/Comparing Different results files

    You have your deliverable without defined and agreed-upon requirements. This is a challenge that no tool can address.

    James Pulley

    Replace ineffective offshore contracts, LoadRunnerByTheHour. Starting @ $19.95/hr USD.

    Put us to the test, skilled expertise is less expensive than you might imagine.

    Twitter: @LoadRunnerBTH @PerfBytes

  8. #8

    Re: Analyzing/Comparing Different results files

    Agreed. Not much I can do about it. We are unfortunately a "results now, fill in the blanks later" organization at the moment.

    Same is true on the functional side. They think "agile" means developing/testing with little to no requirements. Not the same thing.

    But I digress. Thanks James!


