Testing Patched Versions
The answer to this one probably depends on the context, but I'd like to get other testers' thoughts. First, what activities do you perform prior to the release, and second, how much time should be spent testing a patched version?
Our activities are (roughly) as follows:
1) Regression test about 30 fixes (1 day)
2) Add depth to above regression tests to find newly introduced errors (1 day)
3) Run automated test suite (1 day)
4) Retest new fixes (as a result of above)
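To make step 1 concrete, here is a minimal sketch of how a regression checklist over a batch of fixes might be driven. The fix IDs and check functions are hypothetical examples, not from the original post; a real suite would attach one or more automated checks to each logged defect and report any fix that has regressed.

```python
# Hypothetical sketch: drive a regression pass over a set of patched fixes.
# Each fix ID maps to a check function that returns True if the fix still holds.

def check_login_fix():
    # e.g. verify the patched input validation still accepts a valid address
    return "user@example.com".count("@") == 1

def check_report_total_fix():
    # e.g. verify the patched report total is still computed correctly
    return sum([10, 20, 30]) == 60

REGRESSION_CHECKS = {
    "FIX-101": check_login_fix,
    "FIX-102": check_report_total_fix,
}

def run_regression(checks):
    """Run every fix's check and return the IDs of any that regressed."""
    return [fix_id for fix_id, check in checks.items() if not check()]

if __name__ == "__main__":
    failed = run_regression(REGRESSION_CHECKS)
    print("regressed fixes:", failed)
```

Step 4 then falls out naturally: rerun only the checks for the fix IDs that failed once the new patches land.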
Are we missing anything *very* important? And do you think the timescales we have in place are adequate for good testing? Again, the decision to continue testing depends on the stability of the code, but I'd be interested to hear from other testers what they think is a sufficient (and reasonable) amount of time for sign-off.
Your answers may help with my forthcoming test review...
Thanks in advance
Re: Testing Patched Versions
I would have to say that the testing of a patched version should be handled on a case-by-case basis. Some patches do not correct a problem but rather provide a workaround. Other patches provide a correction but are not the final code, and the eventual version update will contain different code.
If the patch is just a low-level fix for some GUI screen, testing can be very limited. Other patch versions bundle multiple corrections and may require a rerun of the full regression test.
I realize I did not really answer the question, but patches can be big or small, with little impact or a big impact.