I have been developing a modular test design strategy for our test group, but I have not found much good information on how other companies have done it. Has anyone seen documentation on how it has been implemented elsewhere, so I can compare our strategy to approaches that have already been rolled out?
For those forum members who aren't familiar with the concept of modular test design, "modular" normally indicates that a given test, whether manual or automated, is not dependent on any other test.
Test data is often created or located within the test itself, and the test completes by "cleaning up after itself," leaving the environment in the same state it was in before the test ran.
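To make that concrete, here is a minimal sketch in Python's unittest of a test that creates its own data and removes it afterward. The DATA_STORE dictionary and the record values are hypothetical stand-ins for whatever environment a real test would touch (a database, files, application state).

import unittest

# Stand-in for the shared environment the test runs against.
DATA_STORE = {}

class UpdateRecordTest(unittest.TestCase):
    def setUp(self):
        # Setup: the test creates its own data rather than relying
        # on a previous test to have left it behind.
        self.record_id = "rec-001"
        DATA_STORE[self.record_id] = {"status": "new"}

    def tearDown(self):
        # Cleanup: remove the data so the environment is left in the
        # same state it was in before the test ran.
        DATA_STORE.pop(self.record_id, None)

    def test_record_can_be_updated(self):
        # The test locates its data through its own setup,
        # never through another test.
        DATA_STORE[self.record_id]["status"] = "updated"
        self.assertEqual(DATA_STORE[self.record_id]["status"], "updated")

if __name__ == "__main__":
    unittest.main()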
We use this strategy ourselves whenever possible, as it gives us maximum flexibility: we can pull tests from multiple functional areas and combine them into a test suite or test run without worrying about dependencies.
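As one illustration of that flexibility (a sketch only, with placeholder tests), independent tests drawn from different functional areas can be assembled into an arbitrary suite, here using Python's unittest; the LoginTests and ReportTests classes are hypothetical stand-ins for tests that would normally live in separate modules.

import unittest

class LoginTests(unittest.TestCase):
    def test_login_page_loads(self):
        self.assertTrue(True)  # placeholder for a real, self-contained check

class ReportTests(unittest.TestCase):
    def test_report_generates(self):
        self.assertTrue(True)  # placeholder for a real, self-contained check

if __name__ == "__main__":
    # Because neither test depends on the other, any subset can be
    # combined and run in any order.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_login_page_loads"))
    suite.addTest(ReportTests("test_report_generates"))
    unittest.TextTestRunner().run(suite)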
The actual mechanics of doing this vary; can you give us more information?
What type of applications are you testing? Any external interface requirements? Nightly cycles or batches? What type of test data do you typically use: production overlays, or data you create and manipulate yourselves? What type of automated test tool do you use, if any? How is your framework structured?
We are building manual tests now, but the plan is to move to automated testing after we build our automation framework, which will interact with a web-based clinical application. For the manual tests we are breaking our applications down into small modules of reusable functions for maximum flexibility. We are also classifying all of the objects in the modules to reduce the effort once our automation framework is in place.
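For what it's worth, here is a rough sketch of what one such reusable module might look like once automation is in place, assuming Selenium WebDriver as the tool; the screen, locators, and function names are hypothetical placeholders, not taken from the actual application.

from selenium.webdriver.common.by import By

# Object classification for a (hypothetical) patient-search screen:
# every element the module touches is named in one place, so a UI
# change means updating one locator, not every test.
PATIENT_SEARCH_OBJECTS = {
    "search_box":    (By.ID, "patientSearchInput"),
    "search_button": (By.ID, "patientSearchSubmit"),
    "result_rows":   (By.CSS_SELECTOR, "table.results tr"),
}

def search_for_patient(driver, patient_name):
    """Reusable function: any test that needs a patient search
    calls this instead of scripting the clicks itself."""
    by, locator = PATIENT_SEARCH_OBJECTS["search_box"]
    driver.find_element(by, locator).send_keys(patient_name)
    by, locator = PATIENT_SEARCH_OBJECTS["search_button"]
    driver.find_element(by, locator).click()

def result_count(driver):
    """Reusable function: returns how many result rows the search produced."""
    by, locator = PATIENT_SEARCH_OBJECTS["result_rows"]
    return len(driver.find_elements(by, locator))

The same module boundaries work for the manual tests in the meantime: each manual script references the module's named objects and reusable steps, so the later conversion to automation is largely mechanical.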