Currently I am running a manual regression test script with almost 1000 test cases. They are not all unique: some cover the same functionality, and the order of the test cases is all over the place. In most instances this causes duplicated effort.
Other test cases I have run at this shop have an “assumed” unwritten section: the case points you to the area to test and gives you a sample of the functionality to exercise, but unscripted or exploratory testing is required to have high confidence in the quality of that component (e.g. specific fields and their limits are not specified in the test case). These test scripts are easy to follow and maintain.
Test automation is not an option, so we cannot convert parts of this test script into automated tests to improve the maintainability of the manual test scripts.
Any advice will help. Is there a limit or threshold to the number of test cases? What processes have other organizations used to get large, hard-to-maintain manual test scripts back under control?
I am currently just a junior QA analyst and have been scheduled only to execute the test, which was recently modified by a senior QA analyst who has since left our organization.
I've done quite a bit of manual test script writing for a very large project with several different applications that must work together as a system. I split the scripts up by the kind of test needed and by the application or applications being tested. For example, one set of tests was written just for the overall performance test in end-to-end testing. Another performance test script covered how the software performed normal routine actions while the mobile device was charging, including memory usage and CPU performance. Another test script covered the performance of network communication from the device to our communications server. I also had a separate set of test cases that tested CPU speed and memory usage under various usage conditions.
I had another set of scripts for stress and load testing, such as loading the device with the maximum number of network packet messages, with data going to and from the communication server. Another test loaded the device that gathered the heart data while also utilizing the GUI to create more data, so there was constant communication between the two devices, which would then communicate with the server after analysis. It's hard to explain exactly what I did without understanding the system I was testing, but the point is that I broke the tests down into modular pieces so that when I wanted to test a particular portion of the system, I could do so without having to execute all the tests.
When you break down your scripts, split them first by type of test: functional, GUI, data, performance, stress/load, usability. Then, within each type, break the scripts down by the dependencies between components and/or applications.
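As a rough sketch of what I mean (assuming the sheet is exported to CSV and each case carries hypothetical "Type" and "Component" tag columns; the file name and column names are my invention, not your real layout), something like this in Python lets you pull out just the subset you want to execute:

[ CODE ]
import csv

def load_cases(path):
    # Each row is one test case exported from the Excel sheet.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def subset(cases, test_type=None, component=None):
    # Keep only the cases whose tags match the requested type/component.
    return [
        c for c in cases
        if (test_type is None or c["Type"] == test_type)
        and (component is None or c["Component"] == component)
    ]

cases = load_cases("regression_suite.csv")  # hypothetical export
# e.g. execute just the performance cases that touch the comms server
for case in subset(cases, test_type="Performance", component="Comms Server"):
    print(case["ID"], case["Title"])
[/ CODE ]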
I found it much easier not only to maintain the scripts for repeated use but also to assign them to different resources as needed. We were also much more efficient when updating the scripts.
For ease of use, I prefer using Excel or some sort of spreadsheet tool to write my scripts rather than Word, although I have used both. The Word versions actually contained tables anyway, so really we were always using some sort of table/spreadsheet format, which helps me organize my tests.
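For example, a column layout along these lines (the column names and the sample row are just one possibility, not a prescription) keeps every case taggable and easy to filter or sort:

[ CODE ]
ID      | Type        | Component    | Requirement | Title                            | Steps | Expected Result | Status
TC-0412 | Performance | Comms Server | REQ-117     | Packet throughput while charging | ...   | ...             | Active
[/ CODE ]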
The test scripts are written in Excel. Any examples that improve organization would be helpful.
[ QUOTE ]
regression test script with almost 1000 test cases
[/ QUOTE ]
We do have the scripts broken down by functionality; this is the overall regression test script.
The writer of this test script has linked each test case back to its functional requirement; even though some test cases are very similar, the requirements behind them are different.
It would be a simple matter to condense a lot of these test cases down to one and simply link each requirement to that one test case. This would eliminate some of the issues.
The main issues are: duplicate functionality coverage, test cases for functionality that no longer exists or has not been implemented yet, and similar test cases with similar steps covering different functionality (as mentioned above).
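To make the condensing idea concrete, here is a rough sketch (again assuming a CSV export with hypothetical "ID", "Requirement", and "Steps" columns) of how cases with identical steps could be flagged as candidates for merging under multiple requirements:

[ CODE ]
import csv
from collections import defaultdict

with open("regression_suite.csv", newline="") as f:  # hypothetical export
    cases = list(csv.DictReader(f))

# Group cases whose steps are identical once whitespace and case are
# normalized; each group with more than one member is a candidate for
# condensing into a single case linked to several requirements.
by_steps = defaultdict(list)
for c in cases:
    key = " ".join(c["Steps"].lower().split())
    by_steps[key].append(c)

for group in by_steps.values():
    if len(group) > 1:
        ids = [c["ID"] for c in group]
        reqs = sorted({c["Requirement"] for c in group})
        print(f"Candidate merge: {ids} -> one case covering {reqs}")
[/ CODE ]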
How would you approach this problem? Is there a way to correct this without scrapping the test script and/or duplicating all the effort that has already been sunk into it?
Your feedback is very helpful. If I can explain the issues clearly to you, I will be able to approach management with these explanations and possibly formulate some sort of plan for correcting the script.