  1. #1 AwesomeUserName (Apprentice, joined Apr 2014, 17 posts)

    Scalable test strategy for all projects?

    Hi,

    I recently started working with a company to create their testing strategy. They want a testing strategy that can be applied to all types of projects, which means the strategy has to be scalable. How do I manage this without writing a complete book on how to test every different type of project? And how do I structure the strategy and all its different parts?

    - Is the best approach to present roughly three different testing approaches (one for high-budget projects where testing hours are huge, one for mid-sized projects, and one for projects with very little time budgeted for testing)?
    - Or do I write one huge testing strategy and prioritize the testing activities so that the project manager can pick and choose? That does not seem like a good option to me.
    - Or is there some other way to structure the strategy?

    Thanks!

  2. #2 katepaulk (Member, joined Nov 2011, 120 posts)
    Actually, in my experience the high-level strategy tends to be the same for all project types. The difference lies in how far down the prioritization list things go.

    The first thing to consider is *why* the company wants a scalable test strategy. You need to find the need behind the stated desire and look at how you can help meet that need - it's quite likely that what they really want is not a grand unified test strategy but visibility into the test process and test results at their level.

    That said, at an extremely high level the strategy I use pretty much everywhere is:
    - start with familiarization. In the case of new features, as a tester I need to be involved with design so I can find as many potential design issues as possible before they get coded (usually by asking awkward questions, but that's another issue). For existing features/applications I'll play with the application, read any documentation I can find, and talk to the user base to work out what it should be doing and what it's expected to do.
    - define the happy path/steel thread. This is pretty much what it says on the tin: work out the absolute bare minimum of required functionality/expected behavior. This is used as the basis for the highest-priority testing.
    - prioritize everything else. Yes, that's kind of vague, but you can always divide a software project into essentials and everything else. I usually try to split into "really want this", "nice to have", and "wishlist", although this can get pretty fuzzy if you have a large and diverse user base. This also usually involves a fair amount of exploration to uncover the interaction between the project functionality and the rest of the application, as well as identifying what should happen when things go wrong.
    - test the happy path. This will include exploration.
    - test the rest, from highest priority to lowest. Again, this includes exploration. You absolutely want to include explicit exploratory testing in as many areas of your strategy as possible, because you want testers exploring and interacting.
    - build automated regression for stable features. There's no point building automated regression for a feature until it's stable - it just turns into a thrashing exercise. Once the feature is stable, with a good framework the test team shouldn't need to add much automation code to add regression; a rough sketch of what I mean is at the end of this post. (I think my record on that front was ten lines of code and several hundred lines of data to handle a completely new module. That was a very mature framework.)
    - report constantly. The reporting that reaches the management level doesn't need to be hugely detailed, but it does need to be there. Simple dashboard-style reporting using agreed-on phrasing (like "stable", "fragile", etc.), along with the number of new issues created vs. the number of issues fixed (at the project level, categorized by severity), is usually enough to give the decision makers an idea of how stable a new feature is. I've also used RED (usually in bright red bold font)/YELLOW/GREEN to give a single-word representation of a project's state, where red meant I didn't think the project would reach an acceptable state before the targeted release date, yellow meant it might but was at risk of not being ready, and green meant I thought the project would be done by the release date. A minimal sketch follows this list.
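
    As a minimal sketch of what I mean by dashboard-style reporting (the feature names, counts, and roll-up rules below are made-up assumptions, not a standard - your own agreed-on phrasing drives the real thing):

    Code:
    # Minimal sketch of dashboard-style status reporting.
    # Feature names, counts, thresholds, and status words are illustrative assumptions.
    FEATURES = {
        # feature: (new issues this period, issues fixed this period, agreed status word)
        "login":    (2, 5, "stable"),
        "checkout": (9, 3, "fragile"),
        "reports":  (1, 1, "stable"),
    }

    def project_rag(features):
        """Roll the feature states up into the single RED/YELLOW/GREEN word."""
        if any(s == "fragile" and new > fixed for new, fixed, s in features.values()):
            return "RED"     # unlikely to reach an acceptable state by release
        if any(new > fixed for new, fixed, s in features.values()):
            return "YELLOW"  # might be ready, but at risk
        return "GREEN"       # on track for the release date

    for name, (new, fixed, status) in FEATURES.items():
        print(f"{name:10s} {status:8s} new: {new:2d}  fixed: {fixed:2d}")
    print("Project state:", project_rag(FEATURES))

    The specific roll-up rules matter less than the fact that they're agreed in advance, so the single word means the same thing to everyone.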

    Typically, when time is short or something goes wrong and testing time gets cut sharply, the level of "everything else" testing drops. I'll report the risks I see of not testing specific things, but the decision isn't mine to make, so I don't make it.

    If you go into more detail, particularly for a multipurpose strategy, you run the risk of creating a trap for your test team where they're bound to follow the process even when it doesn't make sense.
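
    And since I mentioned adding regression as mostly data, here's a rough sketch of the shape I mean, in pytest form. The module, operations, and values are all hypothetical, and the dispatch function is a stand-in for a real framework driving the application:

    Code:
    # Sketch of data-driven regression: the framework code stays fixed and
    # new coverage is added as data rows. Module and values are made up.
    import pytest

    # Adding regression for a new module mostly means appending rows here.
    CASES = [
        # (module, operation, inputs, expected)
        ("pricing", "discount", {"total": 100, "code": "SAVE10"}, 90),
        ("pricing", "discount", {"total": 100, "code": "NONE"},   100),
        ("pricing", "tax",      {"total": 90,  "region": "CA"},   97.2),
    ]

    def run_operation(module, operation, inputs):
        """Stand-in for the framework's dispatch into the application under test."""
        ops = {
            ("pricing", "discount"): lambda d: d["total"] - (10 if d["code"] == "SAVE10" else 0),
            ("pricing", "tax"):      lambda d: round(d["total"] * 1.08, 2),
        }
        return ops[(module, operation)](inputs)

    @pytest.mark.parametrize("module,operation,inputs,expected", CASES)
    def test_regression(module, operation, inputs, expected):
        assert run_operation(module, operation, inputs) == expected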

  3. #3 AwesomeUserName (Apprentice)

    Quote Originally Posted by katepaulk:
    Actually, in my experience the high-level strategy tends to be the same for all project types. The difference lies in how far down the prioritization list things go. The first thing to consider is *why* the company wants a scalable test strategy. You need to find the need behind the stated desire and look at how you can help meet that need - it's quite likely that what they really want is not a grand unified test strategy but visibility into the test process and test results at their level.
    Thank you for your reply!

    The company has several types of projects: from tiny projects that last about 4 weeks with almost no time for testing, through medium projects that last about 4 months with ~5 hours of testing a week, to big projects that last a year with 18 hours of testing every week.

    In the tiny projects the testing activities have to be performed quickly, which calls for one type of strategy; there are often not many integrations. In the larger projects there is more time, and more integration testing is required, for example. That calls for a totally different kind of strategy.

    I have written one testing strategy, but it is applicable only to the larger projects: it spells out that this and that should be tested in this particular way. It can never be applied to tiny projects with almost no testing time at all. Therefore I need to cover how to test all types of projects and make the strategy scale across them.

    So I know the needs but I do not know how to meet them, that is to say, how to write a scalable testing strategy.

  4. #4 David Lai (SQA Knight, joined May 2006, Playa Del Rey, California, 2,594 posts)
    I think the key is to gather metrics that tie together defect rate, features, code modules affected, and code churn.

    Bugs:
    * Tag/label every bug in your bug tracker with the features it affects.
    * When a bug is fixed, tie it to the code modules that were changed to fix it.
    * Tag each bug with the type of testing activity that would best find it (security testing, usability testing, business analysis testing, etc.).
    * Have a custom field that ranks the severity of the bug by value (e.g. 1 = minor, 5 = normal, 12 = major, 20 = critical), so the values stay proportional to the business impact.

    Changes/Enhancements:
    * Tag every change with the features it affects.
    * Track the amount of code churn each feature change takes.
    * Track the type of testing activities that should be done in response to the change (best-case scenario, as if time weren't a factor). A sketch of what these records might look like follows.
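
    To make the tagging concrete, here's a sketch of what the records might look like. The field names and example values are my assumptions, not any particular tracker's schema; any tracker with custom fields can hold the equivalent:

    Code:
    # Illustrative record shapes for the tagging scheme above.
    # Field names and values are assumptions, not a specific tracker's schema.
    bug = {
        "id": 4711,
        "features": ["checkout"],             # features the bug affects
        "fixed_in_modules": ["cart", "tax"],  # modules changed to fix it
        "best_found_by": "usability testing", # activity most likely to catch it
        "severity_value": 12,                 # e.g. 1=minor, 5=normal, 12=major, 20=critical
    }

    change = {
        "id": "CHG-0098",
        "features": ["checkout", "reports"],  # features the change affects
        "churn_lines": 340,                   # amount of code churn
        "ideal_testing": ["integration testing", "security testing"],
    }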

    Once the above is tracked, the product matures and you accumulate data in your change management system. Then, upon each delivery, you should be able to create a matrix: given a change list, you can associate it with the features it changed and how risky those features are based on the code churn of the current changelist. You can then cross that with the defect rate of the affected features, which is in turn associated with the types of testing activities needed to find those bugs. From that, you can prioritize which testing activities should be done first on which features.


    How this works is as follows.

    Take the matrix of the changelist changes:

    Features x Code Churn => a matrix of affected features and how risky each is from a dev perspective.

    Then create a matrix from your bug tracker after filtering out the non-affected features, making sure the features are ordered the same way as the Features rows in the code churn matrix:

    (Bug Rate x Impact) x Feature => a matrix of the severity of the bugs by feature.

    Multiply the two and you get the features ranked on a risk scale. Go back into the bug tracker and change manager, and you have the testing activities associated with those features: a ready-to-go, prioritized test plan.
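
    A rough worked sketch of that multiplication (all numbers are invented, and I'm reading the multiplication as a per-feature elementwise product):

    Code:
    # Worked sketch of the risk-matrix idea. All numbers are invented.
    import numpy as np

    features = ["login", "checkout", "reports"]  # same order everywhere

    # Per-feature code churn for this changelist (dev-side risk).
    churn = np.array([50, 400, 120])             # lines churned per feature

    # Per-feature bug rate and severity impact from the bug tracker,
    # already filtered down to the affected features.
    bug_rate = np.array([0.5, 3.0, 1.0])         # bugs per delivery
    impact   = np.array([5, 12, 5])              # severity values (1/5/12/20 scale)

    # (Bug Rate x Impact) crossed with churn gives a risk score per feature.
    risk = churn * bug_rate * impact

    for name, score in sorted(zip(features, risk), key=lambda p: -p[1]):
        print(f"{name:10s} risk = {score:8.1f}")
    # checkout comes out on top, so the testing activities tagged to it
    # in the tracker go first in the plan.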
    David Lai
    SDET / Consultant
    LinkedIn profile

  5. #5 AwesomeUserName (Apprentice)
    Hi, thanks for your reply.

    Your description seems applicable to huge projects with a large budget, but it cannot be introduced in tiny projects with no time for testing. The tiny projects do not use any project management tool, and they often have almost no documents other than a software solution specification with some kind of description of what we should code. There is no time for other tools or documents.

    That's why I wonder how to make a testing strategy applicable to these tiny projects. People seem to get scared when I talk about bug tracking, test case writing, etc.: "Oh no, do we have to write that?!"

    We do not have any change management system.

  6. #6 meridian_05 (Member, joined Feb 2011, Chiswick, London, 156 posts)
    Quote Originally Posted by AwesomeUserName:
    - Is the best approach to present roughly three different testing approaches (one for high-budget projects where testing hours are huge, one for mid-sized projects, and one for projects with very little time budgeted for testing)?
    In my experience that's pretty much what you're aiming for.


    Quote Originally Posted by AwesomeUserName:
    The tiny projects do not use any project management tool, and they often have almost no documents other than a software solution specification with some kind of description of what we should code. There is no time for other tools or documents.

    That's why I wonder how to make a testing strategy applicable to these tiny projects. People seem to get scared when I talk about bug tracking, test case writing, etc.: "Oh no, do we have to write that?!"
    Your test strategy is just that - a strategy for how your company will do its testing - so each section should be tailored for each of your project types. For example, your section on Test Phases might include: for large projects, unit testing, integration testing, and user acceptance testing; for medium projects, unit and integration testing combined, then user acceptance testing; for small projects, a single test phase embedded in design/build/test.
    Alternatively, simply write a few sections outlining the different test phases in your company, and then a table matrixing the project types against the test phases required for each.
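
    For example (an illustrative matrix only; adapt the phases and project types to your company):

    Test phase              | Small        | Medium       | Large
    ------------------------+--------------+--------------+------
    Unit testing            | combined (1) | combined (2) | yes
    Integration testing     | combined (1) | combined (2) | yes
    User acceptance testing | no           | yes          | yes

    (1) a single test phase embedded in design/build/test
    (2) unit and integration testing combined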

    And so on for the various parts of your test strategy (reporting, documentation, tester pool, etc.).

 

 
