Regression testing in an Agile environment: assistance needed
Hoping to get some help with the scenario below.
The sprint is 4 weeks long.
Initial expectation: the last week of the sprint is used to test new features for completion and to verify bug fixes.
The 1st week of the next sprint is used to run regression testing.
The environment provided is used not only by QA but also by others.
Builds are deployed 2-3 times a week.
Is it possible to run regression testing in such a scenario, when no time is allocated to complete the regression cycle (estimated at 5 days)?
In Agile, one typically goes one of two ways (or both):
1) Have so much automation that you can perform regression upon nearly every build. Have a whole battery of unit, integration, and end-to-end tests running on CI.
2) Scope your changes per iteration so that the risk is small enough to effectively limit the scope of testing. For example, when refactoring, limit it to one module, and plan out your features so you don't have different devs on the team working in the same feature areas at the same time. This means developers need to design using SOLID principles (see SOLID (object-oriented design) on Wikipedia). This keeps modules very decoupled and easy to swap out, minimizing the risks of refactoring.
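To make point 2 concrete, here is a minimal sketch of the decoupling idea. The `Notifier` interface and its implementations are hypothetical names invented for illustration, not from the original post: because the business logic depends only on an abstract interface, swapping or refactoring one implementation cannot break the calling module, which shrinks the regression surface.

```python
from abc import ABC, abstractmethod

# Hypothetical example: a notification module hidden behind an interface.
class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

def checkout(notifier: Notifier, order_id: int) -> str:
    # Business logic depends only on the Notifier interface, so
    # replacing one implementation with another needs no retest here.
    return notifier.send(f"order {order_id} confirmed")

print(checkout(EmailNotifier(), 42))  # email: order 42 confirmed
print(checkout(SmsNotifier(), 42))    # sms: order 42 confirmed
```

With this shape, a refactor of `EmailNotifier` only requires regression-testing the email path, not everything that calls `checkout`.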
If manual testing is very labor-intensive, I would say it's better to have 1-week sprints and make your changes so small and focused that you can test whatever has changed within 20 minutes.
Last edited by dlai; 06-18-2014 at 06:48 AM.
Originally Posted by dlai
Testing is done on mobile and STB, so automation is not an option so far. As QA, I cannot change the sprint duration or the scope; that is not up to me. Those decisions are made by management, which passes the information to the devs and QA.
Right now I mostly do not run regression as expected, because I do not have the time to complete it; I do only new feature/bug fix testing and sanity testing.
Is it not possible for you to ask for a separate QA environment? That way you would control the environment and the build deployments.
You should also talk to your QA manager. He/she should stand up for you and push back against the organization asking you for the impossible.
Originally Posted by vanatomas
There are really only four things you can do in a bandwidth-constrained project of any type:
* limit scope
* invest in automation
* increase man power
* increase time
Changing scope or timeline is close to impossible from the QA side. This is true across companies and is a well-known issue; it will take a while to change these things in an organization. So here are a few tips for your situation:
1. Since, as you mentioned, automation is not feasible, prioritize your regression cases as High, Medium, or Low (H/M/L). Based on your case count and the available 5 days, target only the critical cases; that is better than no regression at all. Prioritization is very important. If you have identified the cases that must work for your application, you can even skip the other test cases and execute only those critical ones.
2. You can either execute a portion of the H and M cases or 100% of the H cases. Only you can decide this, based on the nature of your application.
3. Risk-based testing is the only viable approach in your case: assess the risk of each piece of functionality and execute only the high-risk (not yet tested) test cases.
4. Test case automation may not be possible, but you can look at options such as optimized test data generation and quicker ways to update test results and log bugs, to save time in the testing process. That way you will have more time to cover more test cases.
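The prioritization idea in tips 1-3 can be sketched as a small selection script. The case names, priorities, and hour estimates below are made up for illustration; the point is simply to greedily fill a fixed time budget (5 working days, roughly 40 hours) with the highest-priority cases first:

```python
# Sketch of risk-based selection: pick the highest-priority regression
# cases that fit into a fixed time budget. Data is hypothetical.

PRIORITY_ORDER = {"H": 0, "M": 1, "L": 2}

def select_cases(cases, budget_hours):
    """Greedily select cases: H before M before L, cheaper cases first."""
    ranked = sorted(cases, key=lambda c: (PRIORITY_ORDER[c["priority"]], c["hours"]))
    selected, used = [], 0.0
    for case in ranked:
        if used + case["hours"] <= budget_hours:
            selected.append(case["name"])
            used += case["hours"]
    return selected

cases = [
    {"name": "login_flow",    "priority": "H", "hours": 8},
    {"name": "playback_stb",  "priority": "H", "hours": 16},
    {"name": "settings_menu", "priority": "M", "hours": 12},
    {"name": "help_screens",  "priority": "L", "hours": 6},
]

# 5 working days of testing time, assuming ~8 hours per day
print(select_cases(cases, budget_hours=40))
# ['login_flow', 'playback_stb', 'settings_menu']
```

Even kept in a spreadsheet rather than code, the same logic applies: rank by priority, then fill the 5-day window top-down and explicitly accept the risk of whatever falls below the cut line.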
In my experience, the tester's judgment plays a major role in critical periods, so perform ad-hoc testing and try to break the code. This is one of the quickest ways to find out whether something has broken.
Let me know if you have any further questions in this regard.
But aren't you saying that the first week of the following sprint is spent doing manual regression (that is, 5 days)? I believe this is a question the whole team should get together and answer, because quality is the responsibility of the whole team. Does the team feel confident that when development is finished, the product will be ready to launch? Without any kind of automation, I don't think it's possible to feel confident. And if the team doesn't feel confident, does it make sense to keep adding new functionality sprint after sprint? It would probably make more sense to go slower, but with more certainty that existing functionality keeps working as new functionality is added. Remember, this is a problem for the whole team, not just for the QAs!