  1. #1 dgold, Member (Joined Sep 2002; HB, CA; 38 posts)

    Performance Testing in an iterative process

    Does anyone know of a good resource on the timing of Performance Testing in an iterative / RUP-like process? I am having issues justifying a Performance/Load Environment in the earlier iterations. I believe they understand the potential benefits (of testing as soon as there is enough functionality to create meaningful transactions), but there is a lot of room for debate in the cost vs. benefit area. I am looking for any “official” book(s) or white paper(s) that can help me push this testing as early as possible.

  2. #2 JakeBrake, Moderator (Joined Dec 2000; St. Louis - Year 2025; 15,609 posts)

    Re: Performance Testing in an iterative process

    I don’t know of a white paper that explicitly addresses your concerns, but I would like to toss a few questions and ideas your way.

    Please define “I am having issues justifying a Performance/Load Environment in the earlier iterations”. Does this mean you have an environment with limitations?

    Assuming you have a tool like RobotVU or LoadRunner (any for that matter), does it make sense to develop scripts in earlier iterations where the chances of script obsolescence are extreme and rework will probably be required? To me, that would be difficult to cost-justify. I can understand the room for debate.

    If it is your goal to catch performance issues upstream, before they become more costly, then you might want to consider the notion laid out here:
    http://perform-testing.com/listPlayers.html

    I contend that poor performance starts with poor specifications and poor capacity planning. Automated performance scripts cannot address that aspect in that phase of system specification and planning. This contention is independent of any development methodology.

    To me, then, the "push" should be one of education, so that all the players understand that they own a piece of, and can influence, performance.
    Does this help?

  3. #3 SteveO, Super Member (Joined Jul 2004; St. Louis, MO, USA; 1,236 posts)

    Re: Performance Testing in an iterative process

    You might also focus on the benefits of testing the infrastructure the proposed application will live on.

    You can flesh out design flaws before any application lives there by doing some simple tests related to concurrency, throughput, etc. We often end up tuning those components before the application gets dropped onto the system.
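
    For example, a quick-and-dirty concurrency/throughput probe against a bare endpoint on the new hardware might look something like this (a minimal Python sketch; the URL, thread count, and request count are placeholders, not from any real project):

        # Minimal concurrency/throughput probe against bare infrastructure.
        # The URL, thread count, and request count are placeholders; point it
        # at a static page or health-check endpoint on the new hardware.
        import statistics
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        TARGET_URL = "http://perf-env.example.com/health"   # placeholder endpoint
        CONCURRENCY = 25
        REQUESTS = 500

        def hit(_):
            start = time.time()
            with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
                resp.read()
            return time.time() - start

        wall_start = time.time()
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            latencies = sorted(pool.map(hit, range(REQUESTS)))
        elapsed = time.time() - wall_start

        print(f"throughput : {REQUESTS / elapsed:.1f} requests/sec")
        print(f"avg latency: {statistics.mean(latencies) * 1000:.0f} ms")
        print(f"p95 latency: {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")

    Numbers from a run like that won't predict production, but they will show whether the boxes, load balancer, and network behave sanely under concurrency before any application code arrives.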

  4. #4 Ainars, Senior Member (Joined Sep 2004; Riga, Latvia; 611 posts)

    Re: Performance Testing in an iterative process

    Perhaps Software Test & Performance Magazine (issues can be downloaded from www.stpmag.com), in particular Scott Barber's monthly Peak Performance columns, could give you something.
    You could probably start with the column "Investigation vs. Validation" (November 2005): the art of a constructive conflict perceived as a destructive diagnosis.
    Ainars

  5. #5 dgold, Member (Joined Sep 2002; HB, CA; 38 posts)

    Re: Performance Testing in an iterative process

    It means we work with whatever hardware someone is willing to give us. Sometimes we get Prod before go-live (then we are sometimes stuck with no environment after go-live), sometimes we get the Disaster Recovery site (although this also slips in priority, so not all projects have one before/at go-live), sometimes we get old Prod hardware (questionable results), sometimes we get a fraction of Prod (sometimes valid, depending on the components), … If the answer is none (or we have to wait), we do as much as possible in the functional environment (make sure the tool will work, script as much as possible, get some preliminary numbers, …).

    After years of hearing that our results are insignificant (depending on who we talk to) since Prod will have more power (CPUs, memory, …), and of juggling environments, I am starting to push for a dedicated Performance Environment (even if it is shared with the Disaster Recovery site) as early as possible on a project. The iterative part just makes it more complicated, since they can argue that the initial iterations aren’t worth the expense.

    Even with architecture and design reviews (upstream), do you think they can capture every WebSphere, Oracle, F5, … setting (not to mention the effects of possible poor coding)?

    I guess my concern is mostly that people I would consider RUP experts don’t seem to include Performance as a major concern by default. Inclusion in a “Supplemental Spec” seems like an afterthought to me.

    Thanks for the link to the article / magazine.

  6. #6 JakeBrake, Moderator (Joined Dec 2000; St. Louis - Year 2025; 15,609 posts)

    Re: Performance Testing in an iterative process

    Originally posted by dgold:
    It means we work with whatever hardware someone is willing to give us. Sometimes we get Prod before go-live (then we are sometimes stuck with no environment after go-live), sometimes we get the Disaster Recovery site (although this also slips in priority, so not all projects have one before/at go-live), sometimes we get old Prod hardware (questionable results), sometimes we get a fraction of Prod (sometimes valid, depending on the components), … If the answer is none (or we have to wait), we do as much as possible in the functional environment (make sure the tool will work, script as much as possible, get some preliminary numbers, …).
    Wow! It sounds like an unfun, ungood and unrelenting challenge! What would it take for these people to get serious? Education? Severe dollar-bleeding? Have you spoken with anyone to understand why there is no desire for a dedicated environment? Do you think you could get an audience to begin the education process? Would it make more sense for you to circulate your résumé?

    Originally posted by dgold:
    After years of hearing that our results are insignificant (depending on who we talk to) since Prod will have more power (CPUs, memory, …)
    Well, one could argue that even though it isn’t prod, the results are still significant – not that they would give you an accurate representation of prod, but they could certainly tell you where the bottlenecks currently live with a high degree of certainty. (Whoever is claiming that the results are insignificant needs an education.) Have you ever found a bottleneck in your previous tests and then had the same bottleneck show up in prod? If so, you have the evidence you would need to perk up at least one ear.

    Do you not have a code promotion path in terms of dev, system test, staging, etc.? (I think you already answered this)

    Originally posted by dgold:
    … The iterative part just makes it more complicated, since they can argue that the initial iterations aren’t worth the expense.
    I must agree with them on this and recall my initial response to this item:
    Assuming you have a tool like RobotVU or LoadRunner (any for that matter), does it make sense to develop scripts in earlier iterations where the chances of script obsolescence are extreme and rework will probably be required? To me, that would be difficult to cost-justify. I can understand the room for debate.


    Originally posted by dgold:

    Even with architecture and design reviews (upstream), do you think they can capture every WebSphere, Oracle, F5, … setting (not to mention the effects of possible poor coding)?
    No. However, the permutations and combinations of the thousands of settings make it impossible to address all of that in performance testing. It would take several thousand Ming dynasties to pull off that feat! :-) So everyone has to have faith that the default configurations of those components are in the ballpark, or that the implementers have tweaked them per vendor recommendations. Poor coding? That is a different domain, and at the same time it is a risk. Some of that can be caught in design reviews before the code is written. Some can be caught in code reviews. Some will go undetected. Hopefully the QC team will detect some of those unsavory leftovers. The rest are up to you.

    Originally posted by dgold:
    …I guess my concern is mostly that people I would consider RUP experts don’t seem to include Performance as a major concern by default. Inclusion in a “Supplemental Spec” seems like an afterthought to me.
    Change your perceptions of them and educate them! :-) I realize that is easier said than done! I would suggest binding yourself less tightly to RUP, since clearly in your case it is not advancing your cause. The important thing here has nothing to do with RUP. It has everything to do with a practical and responsible approach to performance testing and engineering. Sure, RUP can help, but it doesn’t sound like beatings with the RUP-club are working. Methodologies do not create systems and software that perform well. Methodologies only provide the opportunity. Good execution of the methodology and people-practices are the ingredients for performance.


  7. #7 dgold, Member (Joined Sep 2002; HB, CA; 38 posts)

    Re: Performance Testing in an iterative process

    Actually, the “ungood and unrelenting challenge” has forced us to improve our documentation and to keep researching for supporting materials. So, while it may be “unfun”, we are making sure it isn’t a wasted effort as far as our growth goes, and sometimes we even make progress.

    Yes, we do have a code promotion path (on most projects), although it is roughly Dev -> QA -> Prod, with the other steps being optional (based on the Program Director, schedule, …). As you mention, having stayed at the same company for a while, we do get the “I told you so”s often enough, but it doesn’t help that our stakeholders are different for different projects and also change periodically.

    Although the first few iterations may involve some throwaway work, an environment at this time (in the beginning) is clearly of least risk (especially if you count the environment lead time, the build-out tasks, and the conflicts with building Prod, DR, UAT, …). Back to my original point: who determines which iteration makes sense? Also, we have seen the iterations change (pushing Performance way up, prior to the Perf environment build-out). I would like to concentrate on finding issues rather than constantly debating the exact time at which the environment is 100% useful.

    The reason I brought in RUP is that it seems to be used to enforce a process. People know to ask for Use Cases, and they know they can push back if they don’t get them. I would like them to know to ask for Performance Criteria. I actually saw one implementation where Performance Criteria were added to each Use Case, but that didn’t fly here.
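
    For reference, that implementation amounted to one small block of measurable targets attached to each Use Case, roughly like the following (an illustrative mock-up; the field names and numbers are made up, not taken from that project):

        # Illustrative mock-up: performance criteria attached to one use case.
        # Field names and numbers are invented for the example.
        performance_criteria = {
            "use_case": "UC-12 Submit Order",
            "load_profile": {
                "concurrent_users": 200,          # expected peak concurrency
                "transactions_per_hour": 9000,    # expected peak throughput
            },
            "response_time_targets": {
                "p90_seconds": 3.0,               # 90% of submissions within 3 s at peak
                "max_seconds": 8.0,               # hard ceiling under peak load
            },
            "verified_in": ["QA", "Performance Environment"],
        }

    Having something that concrete next to each Use Case gives people the same lever they already have with Use Cases themselves: they know what to ask for, and they know when they haven’t received it.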

  8. #8 JakeBrake, Moderator (Joined Dec 2000; St. Louis - Year 2025; 15,609 posts)

    Re: Performance Testing in an iterative process

    Originally posted by dgold:
    Actually, the “ungood and unrelenting challenge” has forced us to improve our documentation and to keep researching for supporting materials. So, while it may be “unfun”, we are making sure it isn’t a wasted effort as far as our growth goes, and sometimes we even make progress.
    That is good news! :-)

    Originally posted by dgold:
    ... Back to my original point. Who determines which iteration makes sense?
    I would guess that you want Phase Entry and Exit Criteria along the lines of...

    1. code freeze
    2. regression testing complete
    3. issues/defects fixed or reasonable work-around exists
    4. data sufficiency
    etc.

  9. #9 Corey Goldberg, Moderator (Joined Sep 2001; Boston, MA; 4,348 posts)

    Re: Performance Testing in an iterative process

    Originally posted by dgold:
    I am starting to push for a dedicated Performance Environment (even if it is shared with the Disaster Recovery site)
    Where I am working now, we use our Disaster Recovery site as our performance environment. It is a full clone of the production environment at the hardware level.

    So rather than having a DR site waiting idly for disaster to strike, we re-image all the machines and have a monster perf lab. If disaster happens, we can image the machines back to a prod-like state and cut over. It is not a trivial process and we do DR drills, but it makes all the perf-environment issues go away.
    Corey Goldberg
    Homepage: goldb.org
    Twitter: twitter.com/cgoldberg
    Google+: gplus.to/cgoldberg

  10. #10 Scott Barber, Senior Member (Joined Jul 2002; Palm Bay, FL USA; 2,346 posts)

    Re: Performance Testing in an iterative process

    Originally posted by dgold:
    I guess my concern is mostly that people I would consider RUP experts don’t seem to include Performance as a major concern by default. Inclusion in a “Supplemental Spec” seems like an afterthought to me.
    That would be because the creators of RUP didn't really know what to do with performance testing. I've talked extensively with both Sam Guckenheimer and Paul Szymkowiak about that. Shortly before IBM bought Rational and both Sam and Paul left, I was in discussions with them about helping them flesh out that part of RUP - needless to say, that didn't happen.

    If nothing else, you can console yourself with the fact that RUP doesn't address usability, security or reliability any better.
    Scott Barber
    Chief Technologist, PerfTestPlus
    Executive Director, Association for Software Testing
    Co-Author, Performance Testing Guidance for Web Applications
    sbarber@perftestplus.com

    If you can see it in your mind...
    you will find it in your life.

 

 