An emergency bug was found in production. Clients require a fix and deployment ASAP. After the bug is fixed and deployed to the TEST environment, how much regression testing do we (can we) do? We probably don't have time to run every test case, and our functional testing is manual.
How does your QA group handle this scenario? Do you skip regression testing entirely (smoke test only), or does the lead QA pick some high-risk areas associated with the fix, according to the time allowed?
We typically regress around the affected areas where the bug was found to make sure nothing new was broken. If it was a major bug that caused the customer a big headache, it would be best to avoid repeating the same thing :)
We also typically do a quick high-level smoke test of the whole system.
Agree with Nick — typically we regress around the affected areas only (you can get this information from the developers). It is also good practice to do one round of ad-hoc testing on the product to make sure you don't get surprises later.
[ QUOTE ]
We typically regress around the affected areas that the bug was found in
[/ QUOTE ]
Strange... I was thinking that testing around bugs is called retesting, while regression testing is what "makes sure you don't get surprises". No matter what we call them, both tasks have their place. Regression testing may take any form, from the ad-hoc approach described by TestingGeek up to repeating the full set of test cases ever run... that is my experience: http://www.testingreflections.com/node/view/5299
Yes, but it depends on the nature of the fix. 98% of the time, I agree with Nick. We have had a few fixes, however, that involved complete redesigns or major rework of a common module and required another full regression test. The time required for testing such fixes is exorbitant and (in our case) would require pushing out the production date.
Well, that's where automation comes into its own. If you have a library of test scripts in your regression pack and too little time to run every one of them, you could run a random sample of the regression scripts, thus covering as much of the system as possible in the time available. What's paramount is not to always start at the beginning and continue until you run out of time. IMHO, system coverage is optimized by selecting scripts randomly.
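A minimal sketch of that random-sampling idea, assuming the regression pack is a flat list of script names and each script has a rough average runtime (the function name, script names, and timing numbers here are all hypothetical, just to illustrate the selection step):

```python
import random

def sample_regression_scripts(scripts, time_budget_minutes, avg_minutes_per_script=10):
    """Pick a random subset of regression scripts that fits the time budget.

    Shuffling the whole pack before truncating means every script has an
    equal chance of being run, instead of always starting at the beginning
    and stopping when time runs out.
    """
    shuffled = random.sample(scripts, len(scripts))  # shuffled copy, no repeats
    max_scripts = time_budget_minutes // avg_minutes_per_script
    return shuffled[:max_scripts]

# Example: a 200-script pack and a 5-hour testing window
pack = [f"script_{i:03d}" for i in range(1, 201)]
to_run = sample_regression_scripts(pack, time_budget_minutes=300)
print(len(to_run))  # 30 scripts fit the window at ~10 minutes each
```

In practice you might weight the sample toward high-risk areas rather than drawing uniformly, but even a uniform sample avoids the bias of always running the same front of the pack.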
In addition to testing the obvious areas around the fix, I would also speak with development and let them guide you to the areas they may have touched, or the areas they feel are at high risk of breakage.