  1. #1

    Risk Based Testing - measure success?

    I hope I'm not spawning a whole long thread of posts saying "what is risk-based testing?" or "I think risk-based testing is ..." or "please send me a template for risk-based testing." If you know what it is, you know what it is.

    At my instigation, we are just beginning to implement R-BT. I was asked a question which I honestly couldn't answer: "At the end of the project, how do we measure the success of the R-BT approach?" (Versus, of course, the seat-of-the-pants, try-to-test-everything approach we've been using in the past.)

    I can think of a few ideas that we probably can't use this time, because we don't have good metrics to compare them to. For example, the percentage reduction of post-release bugs. And the lack of some sort of prior baseline may scotch ANY successful measure this time around. But I'm open to suggestions.


  2. #2
    Moderator JakeBrake's Avatar
    Join Date
    Dec 2000
    St. Louis - Year 2025

    Re: Risk Based Testing - measure success?

    Off-the-cuff Peter...

    IMO, I would do a line-item table with a signature block to accept all listed:
    1) itemize the risks
    2) Create and associate a risk level with each - risk level not unlike defect severity.
    3) Risk mitigation options / actions for each.
    4) Action ultimately taken
    5) Who decided on 4)
    6) Probable expiration date of risk or an event that would eliminate the risk.
    7) Risk Re-evaluation cycle (adjust priority - change action, etc)
    8) Age of risk / history of it

    Example - aligned with above numbers:
    (And this is probably a growing reality due to Apple's recent successes)

    1) Context & Risk(s): Unable to test all browser combinations & permutations, despite releases being accompanied by documented statements of supporting only two recent versions of IE & Netscape. As more and more existing customers buy Macs, they are more inclined to use Apple's Safari browser. HTML Framesets may not display, or may not display properly, in Safari. This may cause us to erode our customer base.

    2) Severity = Major (of 5 normal levels)

    3) Possible mitigation for current planned release:
    - a) Bundle IE and encourage user to install.
    - b) Increase documentation of supported browsers and versions.
    - c) Delay the release and develop & test Safari support.

    4) Actions a) & b). Commit to c) for next release.

    5) Dilbert

    6) Date of next release

    7) Evaluate at weekly SCCB meeting (or other forum).

    8) Age: one year; originally a concern with release 2.5 of the app.

    Any/all of the above provide a basis for measurement. Was the risk realized via customer complaints? Were the mitigation actions taken, and in a timely fashion? Aging of risks over versions - how old? Is the risk increasing because of age and other factors?
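    To make the line-item table concrete, here is a minimal sketch of one entry as a data structure. The field names, the 1-5 severity scale, and the `RiskItem` type are my own invention for illustration, not any standard; the example values are taken from the Safari risk above.

    ```python
    from dataclasses import dataclass, field

    # One row of the hypothetical risk table; field numbers in the
    # comments match the 8 list items described in the post above.
    @dataclass
    class RiskItem:
        context: str            # 1) itemized risk
        severity: int           # 2) risk level, 1 (minor) .. 5 (critical)
        mitigations: list       # 3) mitigation options / actions
        action_taken: str       # 4) action ultimately taken
        decided_by: str         # 5) who decided on 4)
        expires: str            # 6) expiration date or retiring event
        review_cycle: str       # 7) re-evaluation forum / cadence
        history: list = field(default_factory=list)  # 8) age / history

    safari = RiskItem(
        context="HTML framesets may not render properly in Safari",
        severity=4,
        mitigations=["bundle IE and encourage install",
                     "document supported browsers and versions",
                     "delay release; develop & test Safari support"],
        action_taken="a) and b) now; commit to c) for next release",
        decided_by="Dilbert",
        expires="date of next release",
        review_cycle="weekly SCCB meeting",
        history=["raised against release 2.5; one year old"],
    )
    ```

    A table of such entries, plus the signature block, is the artifact you can audit at the end of the project.
    
    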

    And then cynically - did Dilbert actually sign off and take ownership since Dilbert was the one who shortchanged the testing phase?


  3. #3

    Re: Risk Based Testing - measure success?


    I'll have to study all that carefully before commenting. But unfortunately Dilbert isn't the problem - pointy-haired boss is. And since we are market-driven, backing p-h b into a corner is pretty much tantamount to signing one's own letter of resignation.

    In all seriousness, we in Testing need to understand the business, and not be anal about "well, if you let it out the door with all these bugs, I'm going to stand in the corner and pout." We just need to be professional, take the high ground (courteously) and move on.

    At Star*East last week, there was a humorous (and amateur) skit about the interaction between Test and Development. I learned a lot from it, and as a result I believe I came out ahead yesterday in a meeting (which I called) to present my plans to implement Risk-Based Testing. Development's hackles were up (and stayed up). But the Product Manager was very positive (as was the Division VP)

  4. #4

    Re: Risk Based Testing - measure success?

    A thought I had while reading this: depending on the risks taken in RBT, it is possible that there would be no improvement, maybe even a degradation. But if the risk was known and accepted, that outcome should itself be acceptable (though it probably will not be in the real world).

    "I have not failed. I've just found 10,000 ways that won't work." --Thomas Edison

  5. #5

    Re: Risk Based Testing - measure success?

    Just a sec, Lynn - trying to get my head around that ...

    OK. Yes - actually that is a goal of mine in this case (though not one that I can meaningfully measure). Last cycle, the lateness of the bugs found put everyone in a twitter. Had those areas been recognized as LOW (-ish) risk at the outset, the interested parties probably would have squirmed and even complained. But there would not have been the finger-pointing that there was.

    That said - it's not much of anything I can brag about when we release, saying "see? I told you RBT would bring benefits!"


  6. #6
    Join Date
    May 2001
    Michigan, USA

    Re: Risk Based Testing - measure success?

    Peter -

    About the only measure I can think of to "prove" that RBT was beneficial is comparing the number and severity of errors found in production to previous releases. If there are no major show-stoppers, but a flock of little things, the perception can easily be that it did not work. That perception may persist even after you demonstrate that previous releases had X number of serious or fatal errors during the same post-release timeframe.
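    That comparison can be made a little less impressionistic by weighting production escapes by severity before comparing releases. A minimal sketch, with made-up release names, counts, and weights (nothing here comes from the thread):

    ```python
    # Hypothetical post-release defect counts by severity, per release.
    post_release = {
        "r2.5 (pre-RBT)": {"critical": 3, "major": 5, "minor": 12},
        "r3.0 (RBT)":     {"critical": 0, "major": 2, "minor": 15},
    }

    # Weight serious escapes far more heavily than cosmetic ones,
    # so "a flock of little things" doesn't swamp the comparison.
    weights = {"critical": 10, "major": 3, "minor": 1}

    def escape_score(counts):
        """Severity-weighted count of post-release defects."""
        return sum(weights[sev] * n for sev, n in counts.items())

    for release, counts in sorted(post_release.items()):
        print(f"{release}: weighted escape score = {escape_score(counts)}")
    ```

    With these invented numbers, the RBT release scores 21 against 57 for its predecessor even though it shipped more minor bugs, which is exactly the argument the raw counts would obscure.
    
    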

    It could be my jaded personality (Jake, Darrel, no comments) however, I'd recommend caution - particularly if the pointy-haired-boss type doesn't comprehend what you're about.

    Good luck!

  7. #7

    Re: Risk Based Testing - measure success?

    Pointy-haired boss is way above that level. He runs the show, and his edicts regarding release dates are what we all have to back into. He doesn't care whether we use RBT or a witch doctor examining a sheep's entrails, so long as the product gets released on time and without glaring, embarrassing bugs (especially if they are only found after we've sent a gazillion CDs out to customers).

  8. #8

    Re: Risk Based Testing - measure success?


    I am going to make an assumption that, to get your company to try RBT, you documented benefits for them.

    Well, if you did, could you show how those benefits were met in this project? It's not a metric, but it may give you enough to continue, especially if you know what pointy-haired boss really wants to hear from this and what, in his estimation, would make him look good.

    The idea is somewhat of a copout, but when we don't have metrics from the past, this is sometimes the way to go.

    Good Luck

    "I have not failed. I've just found 10,000 ways that won't work." --Thomas Edison

  9. #9

    Re: Risk Based Testing - measure success?

    Well, kind of. What I actually did was draw up a one-page fictitious RBT chart (with testing tasks like "Grommet button bar" - they loved it!), which I then used to describe RBT.

    They already accepted that it would always be impossible to test everything, and they understood the concepts of Probability of Failure, and Impact. So the jump from there to an example of RBT with "real" numbers was easy. They had some questions (including the one I asked here) but they understood the concept and could see its value.
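    The jump from Probability of Failure and Impact to a chart with "real" numbers can be sketched in a few lines. The test-area names, the 1-5 scales, and the simple multiply-and-sort scoring are illustrative assumptions only (the "Grommet button bar" task is borrowed from the fictitious chart above):

    ```python
    # Prioritize test areas by probability-of-failure x impact.
    # Areas and scores (1-5 scales) are invented for illustration.
    areas = [
        ("Grommet button bar",  {"probability": 4, "impact": 2}),
        ("Payment processing",  {"probability": 2, "impact": 5}),
        ("Help-page rendering", {"probability": 3, "impact": 1}),
    ]

    def risk_score(factors):
        """Simple RBT score: likelihood of failure times cost of failure."""
        return factors["probability"] * factors["impact"]

    # Test the highest-scoring areas first, and most deeply.
    ranked = sorted(areas, key=lambda a: risk_score(a[1]), reverse=True)
    for name, factors in ranked:
        print(f"{risk_score(factors):2d}  {name}")
    ```

    Note how an unlikely-but-costly area (payment processing) outranks a flaky-but-trivial one, which is the whole point of scoring both factors rather than either alone.
    
    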

    I really didn't (and don't) have a solid case study that I could show them (though I admit research at places like StickyMinds might turn something up - I haven't tried it yet).

    Really, they are so unused to testing terminology that I don't have to justify techniques and strategies. At this point I have to explain them! In 6 months' time, when they have a grasp of, say, a coverage matrix, they will probably grill me a lot more thoroughly!

  10. #10

    Re: Risk Based Testing - measure success?

    I should add that I've been waving the RBT flag ever since I joined the company last July. And since (as far as I can tell) I'm the only person our company has had with prior software testing experience, I don't get the typical resistance to new ideas. Instead (and I guess I'm pretty lucky) the people are open to suggestions (indeed, are looking for them), and I have a boss who trusts me to implement whatever I see fit, within the constraints of time and budget.

    It's only now that a project has come up that is about the right size to implement RBT, in effect as a pilot.

