  1. #1
    New Member (joined Jul 2012)

    Establishing Test Cycles and Getting Builds from Dev

    I am re-evaluating our current test processes, including how we get builds from our dev team. We have multiple products that are all on different schedules, but we could be testing each product simultaneously: we might be testing a maintenance release for our Windows product, a new version of a web product, and a brand new iOS product all at once. A new build is automatically created on our build server each time code is checked in, and QA has access to all of those builds. Previously we had no schedule for grabbing builds from the server and starting our test efforts. Sometimes we were taking builds several times a day...which as you can imagine is highly inefficient and not very effective. I am looking for recommendations on how to schedule WHEN QA should take builds. We are currently looking at purchasing Test Case Management software that lets us organize and plan our test cycles by project, release, and build. I'm looking for advice on how to determine a schedule for what comes into QA from Dev, and how to manage all the products moving back and forth between us.

    Thank you all in advance.

  2. #2
    Member (joined Nov 2011)
    Without knowing your exact situation I can't give detailed advice, but I can offer some suggestions. If I've misinterpreted you or what I suggest doesn't work for you, feel free to ignore it.

    From the sound of this you probably want some kind of automated regression in your planning (this will take a while to build up if you don't already have it) which grabs the most recent build of whichever product and runs against it. That's generally something you run overnight, so you have results available in the morning when you start work; it drops the time between a regression bug being introduced and being caught to a maximum of three days (since you're not going to be checking your results over the weekend). Of course, this won't catch anything in areas the suite doesn't cover, and you need resources dedicated to maintaining and expanding the automated regression.
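    As a rough sketch of what that nightly job might look like (an illustration only, not your actual setup: the build-server URL, the /latest pointer convention, the product names, and the pytest-based suite are all assumptions):

```python
#!/usr/bin/env python3
"""Nightly regression sketch: grab the newest build of each product and
run the regression suite against it, recording exactly which build ran.

Everything here is an assumption for illustration: the build-server URL,
the /latest pointer convention, the product names, and the pytest suite.
"""
import subprocess
import urllib.request
from pathlib import Path

BUILD_SERVER = "http://buildserver.example.com/builds"  # hypothetical
PRODUCTS = ["windows-app", "web-app", "ios-app"]        # hypothetical

def latest_build_url(product: str) -> str:
    # Assumes GET /builds/<product>/latest returns a plain-text URL
    # pointing at the newest artifact for that product.
    with urllib.request.urlopen(f"{BUILD_SERVER}/{product}/latest") as resp:
        return resp.read().decode().strip()

def run_nightly(product: str) -> int:
    url = latest_build_url(product)
    workdir = Path("artifacts") / product
    workdir.mkdir(parents=True, exist_ok=True)
    artifact = workdir / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, str(artifact))
    # Record the build under test so results are traceable in the morning.
    (workdir / "BUILD_UNDER_TEST.txt").write_text(url + "\n")
    result = subprocess.run(
        ["pytest", f"tests/{product}",
         f"--junitxml={workdir / 'nightly-results.xml'}"]
    )
    return result.returncode

if __name__ == "__main__":
    # Schedule this via cron / Task Scheduler to run overnight.
    failed = [p for p in PRODUCTS if run_nightly(p) != 0]
    raise SystemExit(1 if failed else 0)
```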

    For manual test cycle planning, you're going to be balancing your three main variables: release dates, project cycles, and release types. I've been in this kind of situation with multiple products and multiple release types (that organization typically kept the target dates the same for all releases, but the release types included a product development release with all the new features, two maintenance branch releases, and any number of emergency bug-fix releases delivered to a single customer). I'd run automated regression against the development and maintenance releases on a daily schedule, and against the emergency releases as needed. Manual testing was scheduled first by which release had the closest target date, then by priority of reported issues, with project development assigned to the team on an 80% basis: we were expected to spend 80% of our time on our assigned project(s) and the rest on general admin, bug fixes and so forth. Needless to say this was rather... um... flexible.

    As far as pulling builds is concerned, I'd recommend something like this:
    - when testing a project, pull the most recent build of the target release at the start of the day/testing session and stay with that build for the day, unless you're notified that there's a new build with a fix for a project-related bug you need to test *today*.
    - when testing bug fixes for a release, pull the most recent build of the target release at the start of the day/testing session. Do not pull a newer build until you've finished that session, and then pull one only if you have more bug fixes for that release to test.
    - always note which build something was tested in (see the session-log sketch after this list). With multiple builds per product and release per day, you need that granularity if something regresses.
    - where possible, snapshot test configurations and data. If it's not difficult to downgrade to an earlier version this isn't quite such a problem. When I was working with the organization with the multiple releases and products, I think I maxed out at about 30 separate installs of the application, one per minor version (using release.majorversion.minorversion.buildnumber versioning). Each of those was a separate set of 7 applications, plus databases, plus a web store for each minor version. I standardized my testing data to within an inch of its life so I had consistency across versions, and that was in addition to the standard testing data sets the team maintained.
    - maintain a repository of standard testing data that the whole team can use. This makes life SO much easier and speeds up the install/configuration process no end.
    - version the testing data so that additions for new projects don't break older versions of the application (this is particularly important when a product has multiple release cycles in progress at any given time).
    - make sure new project additions are added to the testing data where possible. If complex configurations can't be added to the standard data sets, create separate test data sets for those configurations (this happens a lot when you have a feature set that's completely incompatible with a different feature set - something that tends to happen with large, complex, business-to-business applications).
    - if you have customer data sets that cover difficult-to-configure scenarios or massive loads, maintain them as well. In my experience test data is designed to cover the broadest possible range of scenarios but typically isn't going to generate the kind of load a large corporate customer will put on the system. I've seen cases where the test database backup was maybe 50MB, and customers sent in terabytes of database backup for debugging issues (one customer was notorious for *never* archiving data - for them the fastest way to get a backup to us was to ship it - and generating said backup usually took them several days. While they were running, because they never closed. Amusement park business, ticketing and access control management software).
    - make sure everyone knows where to find the repository of data sets. I recommend manual versioning simply because databases tend not to play nicely with version control software. A system of directory and file naming conventions works nicely, and you can create a batch file or short script to handle the copy-and-rename job that goes with each new version (see the second sketch after this list). I'd suggest limiting it to major versions or you'll have an explosion of almost-identical data; major versions are usually different enough to justify separate data versioning, where minor versions aren't. You can archive (compress and possibly move to a different location) old versions you're not likely to need.
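    To make the "note which build something was tested in" point concrete, here is a minimal session-log sketch; the JSON-lines file, product names, and version strings are hypothetical, assuming the release.major.minor.build scheme mentioned above:

```python
#!/usr/bin/env python3
"""Session build-log sketch: pin a build at the start of a testing
session and record it, so you can always answer "which build was this
tested in?". The JSON-lines file, product names, and the
release.major.minor.build version strings are assumptions.
"""
import datetime
import json
from pathlib import Path

LOG = Path("test-sessions.jsonl")  # one JSON record per session

def start_session(product: str, build: str, tester: str) -> None:
    """Append a record for the build this session is pinned to."""
    record = {
        "started": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "product": product,
        "build": build,    # e.g. "2.4.1.873"
        "tester": tester,
    }
    with LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

def builds_tested(product: str) -> list[str]:
    """All builds of a product that have been through a session -
    the granularity you need when chasing down a regression."""
    if not LOG.exists():
        return []
    records = [json.loads(line) for line in LOG.read_text().splitlines()]
    return [r["build"] for r in records if r["product"] == product]

if __name__ == "__main__":
    start_session("windows-app", "2.4.1.873", "alice")
    print(builds_tested("windows-app"))
```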
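    And a sketch of the copy-and-rename job for versioned test data; the testdata/v<major> layout and the "keep the newest two versions live" cutoff are assumptions, not a prescription:

```python
#!/usr/bin/env python3
"""Test-data versioning sketch: branch the standard data set for a new
major version by copy-and-rename, and archive versions you no longer
need. The testdata/v<major> naming convention and the archive cutoff
are assumptions.
"""
import shutil
import sys
from pathlib import Path

ROOT = Path("testdata")  # hypothetical shared repository of data sets

def branch_data(from_major: int, to_major: int) -> Path:
    """Copy testdata/v<from_major> to testdata/v<to_major>."""
    src, dst = ROOT / f"v{from_major}", ROOT / f"v{to_major}"
    if dst.exists():
        sys.exit(f"{dst} already exists; refusing to overwrite")
    shutil.copytree(src, dst)
    return dst

def archive_old(keep_latest: int = 2) -> None:
    """Zip and remove every version directory except the newest few."""
    versions = sorted(
        (p for p in ROOT.glob("v*") if p.is_dir()),
        key=lambda p: int(p.name[1:]),
    )
    for old in versions[:-keep_latest]:
        shutil.make_archive(str(old), "zip", root_dir=old)
        shutil.rmtree(old)

if __name__ == "__main__":
    # e.g. `python branch_testdata.py 4 5` when major version 5 branches
    branch_data(int(sys.argv[1]), int(sys.argv[2]))
    archive_old()
```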

  3. #3
    New Member (joined Jul 2012)
    Thank you! Within the last six months we have started implementing some automated regression test cases, and we are running them overnight, so it's good to know we are on the right track there! Thanks also for the great recommendations on pulling builds; that is where my biggest struggle is. We had fallen into a bad habit of pulling several builds a day, trying to "keep up" with development's progress, and that was only bringing me frustration. If daily builds are the norm, then that's a great place for us to start.
