  1. #1
    New Member (joined Aug 2010)

    What does normal web testing look like?

    For the past few months I keep getting loaned out to web testing projects, and I am not sure if the group I work with is crazy or if this is normal.

    They only get a few days to test, and the code is delivered in bits and pieces, so we are only hours from the deadline by the time we get all of it. There does not seem to be a cut-off point; fixes keep coming into the code even after the decision has been made whether to deploy.

    They do not get any detailed requirements. For example, they get one sentence about what a feature is doing that it shouldn't, but nothing to indicate what it should do instead; or they are just told that a control was changed, or some JavaScript was changed, with not much of anything to indicate what areas of the site might be affected.

    There is not even any time allowed to read, analyze, or research the changes. They start testing immediately, sometimes writing a quick procedure first, sometimes writing it as they test, and sometimes not getting it written until after the code is deployed.

    A few hours after getting the code, the managers want to know the status and whether it is good to go up on the website (and, to top off the pressure, they mention how much money we are losing or stand to gain).

    The testers don't appear to feel it is necessary to actually check results. For example, one test was to add food items and verify the taxes are right, but the regular web testers only look to see that taxes appear on the checkout page. I checked the calculation; it was wrong, and it got fixed. I understand that they don't get enough time to really verify things, and schedules would be missed if they tested like I do. I am so much slower than they are because I try to make sure the results are correct.
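    To make the distinction concrete, here is a minimal sketch in Python with made-up numbers (the item prices and the 8.25% rate are hypothetical) of the difference between checking that a tax line merely appears and checking that the amount is actually correct:

        # Hypothetical example: presence check vs. correctness check.
        from decimal import Decimal, ROUND_HALF_UP

        def expected_tax(subtotal: Decimal, rate: Decimal) -> Decimal:
            """Compute the tax an order should show, rounded to cents."""
            return (subtotal * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

        cart = [Decimal("4.99"), Decimal("12.50"), Decimal("3.25")]  # hypothetical food items
        subtotal = sum(cart)
        rate = Decimal("0.0825")            # hypothetical tax rate
        displayed_tax = Decimal("1.71")     # value read off the checkout page

        # Weak check (what the regular testers do): some tax amount is shown.
        assert displayed_tax > 0

        # Strong check (what I do): the shown amount matches the calculation.
        assert displayed_tax == expected_tax(subtotal, rate), (
            f"expected {expected_tax(subtotal, rate)}, page shows {displayed_tax}"
        )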

    We'd never make a deadline testing the way I do.

    So, is that just normal for web testing? Is there any way it can be done right and still meet such tight, rushed schedules?

    My background is testing some large-scale systems and some smaller web-based applications, but this is my first experience with a retail website.

    There is automation for a lot of the page things (links and stuff).

  2. #2
    Vinay Bharti, New Member (Noida, India; joined Oct 2014)

    Less time and little or no requirements information: this is definitely not normal. What you end up with is a low-quality project with more issues, where the developers will have to spend time fixing them, and it also looks bad in front of the client.

    There are cases where the client may be satisfied with what he is getting and testing is not that relevant to him. But if the client is specific and is pointing out issues, then you can step in and discuss with your manager about taking a basic, defined approach, like creating a feature list or building a sanity checklist to start with.

    But if that is not solved and you still have little time to test, I would suggest you plan your testing around the areas you should focus on, gathered from whatever information you are getting from the developers; prepare a quick test checklist containing just the 5-10 most important use cases; and keep a report on what you tested. Detailed testing you can plan after the release. (A rough sketch of such a checklist follows below.)
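    As a rough sketch of what I mean (the site URL and pages below are hypothetical placeholders), the 5-10 item checklist can be encoded as quick automated checks in Python so it doubles as the report:

        # Minimal sanity checklist: hit the most important pages, record PASS/FAIL.
        import requests

        BASE = "https://shop.example.com"   # hypothetical site

        CHECKLIST = [
            ("home page loads",     "/"),
            ("search works",        "/search?q=apples"),
            ("product page loads",  "/product/1234"),
            ("cart page loads",     "/cart"),
            ("checkout page loads", "/checkout"),
        ]

        for name, path in CHECKLIST:
            try:
                ok = requests.get(BASE + path, timeout=10).status_code == 200
            except requests.RequestException:
                ok = False
            print(f"{'PASS' if ok else 'FAIL'}  {name}")  # keep this output as the test report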

    Let me know if you have further queries or want to discuss anything.

  3. #3
    bklabel1, SQA Knight (Kew Gardens, United States; joined Sep 2012)

    I have seen this situation at a Selenium meetup I attended: a continuous integration QA system in an Agile environment. After a story was completed, it would be sent to the automation team, and an automated script only a few lines long was written.

    For example, a button was added to the screen, and the test would only check that the button was there. The script was written in Selenium, but it could just as well have been QTP/UFT. If the script passed, the code was automatically moved to production.
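    A minimal sketch of such a few-line script, in Python with Selenium (the URL and element ID are hypothetical, since I don't know the team's actual pages):

        # Check only that the newly added button exists on the page.
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.common.exceptions import NoSuchElementException

        driver = webdriver.Chrome()
        exit_code = 1                                     # assume failure until the check passes
        try:
            driver.get("https://shop.example.com/cart")   # hypothetical page
            driver.find_element(By.ID, "apply-coupon")    # the hypothetical new button
            print("PASS: button present")
            exit_code = 0
        except NoSuchElementException:
            print("FAIL: button missing")
        finally:
            driver.quit()
        raise SystemExit(exit_code)                       # nonzero blocks the automatic move to production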

    A few hours later, a tiny bit of functionality would be added to the application under test, another automated script would be added, and the process went on. Every once in a while, all of the tidbit scripts were run in a batch. If something failed, a few minutes were spent checking whether the app or the script was at fault; either a quick fix was made to the script or the script was abandoned.

    The team never had a major upgrade to the application. It was upgraded every hour or so.

    It seemed to work for them.

    Nothing in the application was life threatening. No huge amounts of money could be lost.

    Only inconveniences for users.

    I never worked in a place like this. But it is informative to know places are operating like this.

    Thanks,

    Kevin

  4. #4
    New Member (joined Aug 2010)

    Quote Originally Posted by bklabel1 View Post
    I have seen this situation at a Selenium meetup I attended: a continuous integration QA system in an Agile environment. After a story was completed, it would be sent to the automation team. ....
    That is interesting, Kevin; it would be a much better process than the one they have now (which is more like Continuous Panic in a Chaotic environment). But it is similar in that nothing is life-threatening (thank goodness).

    Vinay, you have it right: they seem to keep breaking more things and lowering the quality of the product. There is no client the software is sold to; the company is not a software company. Its focus is on selling goods, and the website is just one of the ways it sells things.

    I have always worked for companies that developed and sold software and had to be concerned about quality so that customers would buy it. The culture and attitude about quality here are so different.

    Are there such things as best practices for software development and maintenance in a retail business on the web? I guess it must be the usual best practices, but I'm not sure. Does anyone know what is normal for web retailers? Do they mostly do Agile? What does a big site like Amazon use?

  5. #5
    Advanced Member (MA; joined Jan 2002)

    This is a classic QA approach.

    You build your test cases and iterate them as you add tests. Your goal, over time, is to cover all the areas. At the same time, you need to understand how a change in the code causes failures and come up with a testing strategy.

    Usually testers have a smaller subset of tests (called things like a smoke test, level-0 test, entry-criteria tests, etc.). This covers the broad scope of everything but can be done quickly (usually in an hour or so). It is used to get a feel for where things may be wrong. You combine that with deep coverage in the key areas of change. You iterate, look at the bugs, and refine until you get a feel for which changes cause certain areas of the code to break.
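    A minimal sketch of keeping such a level-0 subset inside a larger suite, using pytest markers (the test names are hypothetical; the smoke marker must be registered in pytest.ini):

        import pytest

        @pytest.mark.smoke
        def test_home_page_loads():
            ...  # broad, fast check

        @pytest.mark.smoke
        def test_checkout_reachable():
            ...  # broad, fast check

        def test_tax_calculation_all_states():
            ...  # deep, slow check; not part of the smoke run

        # Quick pass before a release decision:   pytest -m smoke
        # Full suite when there is time:          pytest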

    To do this effectively, you need to master a test case management tool. Testlink is a good one to use (free and very fast). You write test cases and structure them in a way that lets you find related tests. You look at the changes (or talk to the developers), click on the areas of change, and create a test plan. If a bug falls out, you go back and look at what you should have tested. You can review test plans with PMs and dev managers and make sure they understand what you are testing. You can even assign people to help test (they log in and mark test steps as pass/fail).
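    The core idea of finding the related tests can be sketched in a few lines (the areas and test names are hypothetical; in practice this structure lives inside the test management tool):

        # Map areas of the code to the tests that cover them, then build a
        # quick plan from whatever the developers say changed.
        AREA_TESTS = {
            "checkout": ["cart totals", "tax calculation", "payment flow"],
            "search":   ["keyword search", "empty results", "pagination"],
            "account":  ["login", "password reset", "order history"],
        }

        def plan_for_changes(changed_areas):
            plan = []
            for area in changed_areas:
                plan.extend(AREA_TESTS.get(area, [f"exploratory pass on '{area}'"]))
            return plan

        # e.g. developers report "some JavaScript changed on checkout":
        print(plan_for_changes(["checkout"]))
        # ['cart totals', 'tax calculation', 'payment flow']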

    The other thing you do is follow-up testing, where QA continues to test after the release. Even when you ship software, people don't download it right away, so it's a race against customers finding the bugs. We do this often: we find bugs after the release and determine whether they are critical enough to warrant a patch.

    Over time, you (and your teammates) should get better at figuring out what to test.

    -L

  6. #6
    Member (joined Feb 2016)

    Quote Originally Posted by igglue View Post
    This is a classic QA approach.

    You build your test cases and iterate them as you add tests. Your goal, over time, is to cover all the areas. ....
    Good, detailed information.

  7. #7
    Apprentice (joined Nov 2015)

    A new tool named Kualitee has been introduced in the market by Kualitatem. It helps you manage your bugs easily.

  8. #8
    Member (joined Oct 2013)

    It looks like you're trying to follow an Agile methodology.

    I would suggest a few things:
    1. Automate as much as you can (this may happen right after the release, while the developers are working on new features)
    2. Use a test tracking system; there are free and open-source options, e.g. Testlink, where you will be able to track requirements and progress
    3. Consider implementing Continuous Integration and Continuous Delivery (see the sketch after this list)
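    As a rough sketch of point 3 (assuming the pytest smoke-marker setup sketched earlier in the thread), the CI gate can be as small as a script that runs the quick subset and fails the build on any error:

        # Run the smoke subset; a nonzero exit makes CI treat the build as failed.
        import subprocess
        import sys

        result = subprocess.run([sys.executable, "-m", "pytest", "-m", "smoke"])
        sys.exit(result.returncode)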

 

 
