I have been responsible for end-to-end testing of the last several phases of an established VB app. All testing is performed manually and documented by me. Previously we have not been required to produce any documentation to support regression testing, which, until now, has been performed ad hoc.
Due to some substantial changes in functionality, I have been asked to produce documentation proving that regression testing has been performed on the forthcoming release. My remit is to produce high-level tests that can be cross-referenced and repeated by the customer. Although I am experienced in producing test cases and UAT documentation, I am finding it difficult to strike a balance between keeping the test cases high-level and including enough detail for them to be useful.
I would appreciate any suggestions as to what set of documents I should produce and any tips on how to devise a test plan that considers the entire system but that focuses on specific areas most prone to defects.
I am sure any advice or pointers to useful resources would help me no end.
Regression testing's primary goal is to ensure that the core functionality of an application is intact after fixes and updates have been applied. I don't see how core functionality can be tested with high-level test cases. If you have documented tests that validate the core functionality now, then they should all be placed in the regression suite, and new tests should be written as functionality changes and features are added or removed. Those new tests can then be added to the regression test bed as well. I would warn against getting customers involved in testing efforts other than usability testing, unless you are so resource-challenged that you cannot afford real testers. Customers tend to be turned off by software that is less than perfect, which can cause loss of contracts with their companies. Plus they are not trained testers and can only test what they know how to do. I might suggest that you do some research into risk-based testing; it sounds like this approach might help you in your task.
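To make the risk-based idea concrete, here is a minimal sketch of how test prioritisation usually works in that approach: score each functional area by likelihood of failure and business impact, then run regression tests in descending order of risk. The area names and scores below are purely illustrative assumptions, not taken from any real system.

```python
# Hypothetical risk-based test prioritisation sketch.
# Classic formulation: risk = likelihood of failure x business impact.
# All functional areas and scores below are invented for illustration.

def risk_score(likelihood, impact):
    """Risk-based testing metric: likelihood (1-5) times impact (1-5)."""
    return likelihood * impact

areas = [
    # (functional area, likelihood of failure, business impact)
    ("invoice calculation", 4, 5),  # recently changed, money at stake
    ("report printing",     2, 3),
    ("user login",          1, 5),  # stable, but critical if broken
    ("audit logging",       3, 2),
]

# Highest-risk areas get their regression tests run first.
prioritised = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)

for name, likelihood, impact in prioritised:
    print(f"{name}: risk {risk_score(likelihood, impact)}")
```

Under time pressure, a ranking like this lets you cut the suite from the bottom up rather than arbitrarily, and gives the business a say in what "most important" means.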
Success is the ability to go from one failure to another with no loss of enthusiasm.
~ Winston Churchill ~
That is the approach I usually conform to. Only new or changed functionality is addressed by new tests; previous tests are re-run to prove existing functionality remains intact. We are contractually obliged to produce manual test scripts to be run in a UAT environment, although these tests are always proved prior to UAT anyhow and really only serve to demonstrate new/changed functionality.
The main difficulty I have re-using previous test cases is that units of functionality may overlap, i.e. one function may require the use of another to facilitate testing. If either function has changed independently of the other (in different phases), then the steps required by a test case may be out of date.
Therefore I am looking to write a plan that is not that specific; it would be sufficient to assume that a tester knows the system well enough to create and enter data without step-by-step instructions. The main problem I have is deciding how to demonstrate the entire system in a structured manner.
Regression testing is a paradigm shift from the original development.
When developing a product from scratch, you take a high-level requirement and develop detail and design information to provide a solution for it. When developing a test plan for that scenario, you make educated plans on how to address verification.
Regression testing is the opposite. You know the low-level detail, so you really need to critically analyse all your testing activities. One method I use is to get the business (or end user(s)) to rank the system functionality by priority and relate this to the regression test cases. At the same time you also need to address any deficiencies in the test cases (as you now have a better understanding of the system).
Once you are confident that you have a suite of regression test cases that addresses all system functionality in order of importance, you can start doing regression analysis for each intended release. This analysis involves a review of the intended system changes and should be done at the earliest possible stage. (BTW, this should form part of the costing for the change.) It will involve input from the other business areas.
As far as documentation goes, I suggest you write a separate regression test plan and then individual plans to address each release you intend to do. In each of these plans you need to identify the change analysis and the testing activities identified to address the changes. Large changes will require you to run pretty much the entire regression test suite, whereas minor changes may be partitioned off (I stress "may"). To operate like this you will need accurate technical and business input and effective change management.
As for the areas most prone to defects, I suggest these will more or less be reflected in the complexity of the underlying business operations; start from the most complex operations and work backwards from there.