What does normal web testing look like?
The past few months I keep getting loaned to web testing projects and I am not sure if the group I work with is crazy or if it is normal.
They only get a few days to test, and the code is delivered in bits and pieces, so we are only hours from the deadline by the time we get all of it. There does not seem to be a cut-off point; fixes keep coming into the code even after the decision has been made whether to deploy.
There is no time allowed to read, analyze, or research the changes. They just start testing instantly, sometimes writing a quick procedure first, sometimes writing it as they test, and sometimes not getting it written until after the code is deployed.
A few hours after getting code the managers want to know the status and if it is good to go up to the website (and, to top off the pressure, they mention how much money we are losing or stand to gain).
The testers don't appear to feel it is necessary to actually check results. Example: there was a test to include food items and verify taxes are right, but the regular web testers only look to see that taxes are on the checkout page. I checked the calculation and it was wrong and it got fixed, but I understand that they don't get enough time to really verify things, and schedules would be missed if they tested like I do. I am so much slower than them because I try to make sure the results are correct.
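To illustrate the difference between presence-only checking and result verification, here is a minimal sketch of independently recomputing a tax amount. The 8.25% rate, item prices, and displayed value are all made-up numbers for illustration, not from any real site:

```python
# Hypothetical sketch: verify the tax *calculation*, not just that a tax
# line appears. Rate, prices, and displayed value are invented examples.
from decimal import Decimal, ROUND_HALF_UP

TAX_RATE = Decimal("0.0825")  # assumed jurisdiction rate, illustrative only

def expected_tax(item_prices):
    """Compute the tax we expect the checkout page to show."""
    subtotal = sum(Decimal(p) for p in item_prices)
    # Round to cents the way most retail systems do (half-up).
    return (subtotal * TAX_RATE).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Pretend these values were read off the cart and checkout pages.
cart_prices = ["4.99", "12.50", "3.25"]
displayed_tax = Decimal("1.71")

# A presence-only tester stops at "a tax line exists".
# Verifying the result means recomputing it independently:
assert expected_tax(cart_prices) == displayed_tax, (
    f"expected {expected_tax(cart_prices)}, page shows {displayed_tax}"
)
print("tax calculation verified")
```

The extra work is one oracle function per calculation, which is exactly the slower-but-correct style described above.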
We'd never make a deadline testing the way I do.
So, is that just normal for web testing? Is there any way it can be done right and still meet such tight, rushed schedules?
My background is testing some large-scale systems and some smaller web-based applications, but this is my first experience with a retail website.
There is automation for a lot of the page things (links and stuff).
Less time, little or no requirements info: this is definitely not normal. What you end up with is a low-quality project with more issues, where developers will have to spend time on fixes and it also reflects badly in front of the client.
There are cases where the client may be satisfied with what he is getting and testing is not that relevant to him. But if the client is specific and is pointing out issues, then you can step in and discuss with your manager about taking a basic, defined approach, like creating a feature list or building a sanity checklist to start with.
But if that does not solve it and you still have little time to test, I would suggest you plan your testing: decide which areas to focus on based on whatever information you can get from the developers, prepare a quick checklist of just 5-10 very important use cases, and keep a report on what you tested. Detailed testing you can plan after the release.
Let me know if you have further queries or want to discuss anything.
I have seen this situation with a Continuous Integration QA system, described at a Selenium Meetup Group that I went to. It was an Agile environment. After one story was completed it would be sent to the automation team, and an automated script only a few lines long was written.
For example, a button was added to the screen. The test would only check that the button was added. So a script was written in Selenium (it could just as well have been QTP/UFT). If the script passed, the code was automatically moved to production.
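A presence-only check like that really is only a few lines. This sketch uses Python's stdlib HTML parser instead of a live Selenium session so it can run standalone; the page snippet and the `checkout-btn` id are invented for the example:

```python
# Minimal "was the button added?" check, in the spirit of the few-line
# scripts described above. Uses the stdlib HTML parser rather than a real
# browser; the HTML snippet and element id are made up for this sketch.
from html.parser import HTMLParser

class ButtonFinder(HTMLParser):
    def __init__(self, button_id):
        super().__init__()
        self.button_id = button_id
        self.found = False

    def handle_starttag(self, tag, attrs):
        # Record a hit if a <button> with the expected id appears.
        if tag == "button" and dict(attrs).get("id") == self.button_id:
            self.found = True

PAGE = '<html><body><button id="checkout-btn">Check out</button></body></html>'

finder = ButtonFinder("checkout-btn")
finder.feed(PAGE)
assert finder.found, "checkout-btn missing from page"
print("button present -- check passes, build promotes")
```

In an actual Selenium session the equivalent would be roughly a single `driver.find_element(By.ID, "checkout-btn")` call, with the raised exception failing the build. Note that, as the thread points out, this only proves the button exists, not that clicking it does the right thing.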
A few hours later, a tiny bit of functionality would be added to the application under test, another automated script added, and the process went on. Every once in a while all of the tidbit scripts were run in a batch. If something failed, a few minutes were spent checking whether the app or the script had failed; either a quick fix was made to the script or the script was abandoned.
The team never had a major upgrade to the application. It was upgraded every hour or so.
It seemed to work for them.
Nothing in the application was life threatening. No huge amounts of money could be lost.
Only inconveniences for users.
I never worked in a place like this. But it is informative to know places are operating like this.
That is interesting, Kevin; it would be a much better process than the one they have now (which is more like Continuous Panic in a Chaotic environment). But it is similar in that nothing is life threatening (thank goodness).
Originally Posted by bklabel1
Vinay - you have it right, it seems to be they are always breaking more things and lowering the quality of the product. There is no client the software is sold to, the company is not a software company, its focus is on selling goods and the website is just one of the ways they sell things.
I have always worked for companies that developed and sold software and had to be concerned about quality so that customers would buy the software. The culture and attitude about quality is so different.
Are there such things as Best Practices for software dev and maintenance in Retail business using the Web? I guess it must be the usual best practices but I'm not sure. Does anyone know what is normal for web retailers, do they mostly do Agile? What does a big site like Amazon use?
This is a classic QA approach.
You build your test cases and iterate on them while adding new tests. Your goal, over time, is to cover all the areas. At the same time, you need to understand how a change in the code causes failures and come up with a testing strategy.
Usually testers have a smaller subset of tests (called things like smoke test, level-0 tests, entry criteria tests, etc.). This covers the broad scope of everything but can be run quickly (usually in an hour or so). It is used to get a feel for where things may be wrong. You combine that with deep coverage in the key areas of change. You iterate, look at the bugs, and refine until you get a feel for which changes cause which areas of the code to break.
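One way to structure those tiers is to tag each check with a level and an area, run the cheap tier across everything, and run the deep tier only where code changed. All test names, tiers, and areas below are hypothetical, a sketch of the idea rather than anyone's actual suite:

```python
# Tiny sketch of tiered suites: a fast "smoke" pass over everything,
# plus deeper checks run only for areas known to have changed.
# Test names, tiers, and the changed-area list are all invented.

def check_home_loads():      return True
def check_login():           return True
def check_cart_tax_math():   return True   # deep: verifies the calculation
def check_search_ranking():  return True   # deep: verifies result ordering

SUITE = [
    # (test function, tier, product area)
    (check_home_loads,     "smoke", "home"),
    (check_login,          "smoke", "account"),
    (check_cart_tax_math,  "deep",  "checkout"),
    (check_search_ranking, "deep",  "search"),
]

def run(tier, changed_areas=()):
    """Run the smoke tier everywhere; run the deep tier only where code changed."""
    results = {}
    for test, t, area in SUITE:
        if t == tier and (tier == "smoke" or area in changed_areas):
            results[test.__name__] = test()
    return results

smoke = run("smoke")                             # quick, broad pass
deep = run("deep", changed_areas={"checkout"})   # deep coverage only at the change
print(smoke, deep)
```

The payoff is that the smoke pass stays short enough to fit even a rushed schedule, while deep verification is spent only where the iteration taught you breakage tends to happen.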
To do this effectively, you need to master a test case management tool. Testlink is a good one (free and very fast). You write test cases and structure them in a way that lets you find related tests. You look at the changes (or talk to developers), click on the areas of change, and create a test plan. If a bug falls out, you go back and look at what you should have tested. You can review test plans with PMs and dev managers to make sure they understand what you are testing. You can even assign people to help test (they log in and mark test steps as pass/fail).
The other thing you do is follow-up testing, where QA continues to test after the release. Even when you ship software, people don't download it right away, so it's a race against customers finding the bugs. We do this often, find bugs after the release, and determine whether they are critical enough to warrant a patch.
Over time, you (and your team mates) should get better at figuring out what to test.
Good information in a detailed manner.
Originally Posted by igglue
A new tool named Kualitee has been introduced in the market by Kualitatem. This tool lets you manage your bugs easily.
It looks like you're trying to follow Agile methodology.
I would suggest a few things:
- Automate as much as you can (this can happen right after the release, while developers are working on new features)
- Use a test tracking system; there are free and open source solutions like Testlink where you will be able to track requirements and progress
- Consider implementing Continuous Integration and Continuous Delivery