I would like to know, based on your experience, what the most commonly accepted compromise is between:
1- running test iterations on the same environment across different test cycle executions, so that results are comparable
2- running test iterations on a different test environment from cycle to cycle, to get better configuration coverage
I would like to present an example to illustrate my question (just in case :P)
- let's say that the test environment is defined by OS (XP or Vista) and web browser (IE or Firefox)
- because of external constraints I cannot run my N tests on every configuration (4xN runs)
- I choose to run all N tests on one config (the most common one) and only a few tests (a kind of regression suite) on all the other configs.
The question is: if I consider having 2-3 main cycles (alpha, beta, etc.), is it better to keep the same test environment coverage across all cycles, or to choose another config to run all the tests on in a different cycle?
Well, Dan, IMO it's more useful to run all the tests in the environment that gets the most use and then to run the regression suite on the other environments. In my experience this provides optimum coverage of likely scenarios within a reasonable timescale.
It appears to me that swapping the main testing to a different platform from time to time could possibly flush out some interesting bugs, but with little return, because these are bugs that are unlikely to be found in production due to the relatively low usage of the alternative environment.
First of all, thanks for your quick answer...
However, it seems my example was a little too simple :-)
In real life you can have more than just 2 params differentiating the test environments... so then what would be the best approach when you have not just one but "several" main configs?
Clearly it's counterproductive to circumscribe the configurations within which your application will run. Ideally any combination of OS, browser, processor speed etc. should be OK, but in practice this is impossible to achieve.
I would suggest you have a 'standard' configuration that you always use to validate a build. This will probably be IE7 with Windows XP, for example, because that's likely to be a common configuration. Now it's possible, using orthogonal arrays (search the forums for how to do this), to check other configurations in an efficient way without necessarily covering every combination. It's important not to always test the same 'minority' configurations but to vary them as much as possible in the available time. There are also practical considerations: how long it takes to configure a machine differently, and whether you have licences for all the variations that could exist.
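To make that concrete, here is a minimal sketch in Python of pairwise (2-way) covering, the practical cousin of a true orthogonal array. The parameter names and values are invented for illustration, and the greedy search brute-forces the candidate space, so it only suits small configuration spaces like this one:

```python
from itertools import combinations, product

def pairwise_configs(params):
    """Greedily pick configurations until every pair of values from any
    two parameters appears in at least one configuration.  Not guaranteed
    minimal, but far smaller than the full cartesian product for 3+
    parameters."""
    names = list(params)
    # Every (param, value) cross-parameter pair that still needs covering.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    configs = []
    while uncovered:
        best, best_gain = None, -1
        # Pick the full configuration that covers the most remaining pairs.
        for values in product(*(params[n] for n in names)):
            config = dict(zip(names, values))
            gain = sum(
                config[a] == va and config[b] == vb
                for (a, va), (b, vb) in uncovered
            )
            if gain > best_gain:
                best, best_gain = config, gain
        configs.append(best)
        uncovered = {
            ((a, va), (b, vb))
            for (a, va), (b, vb) in uncovered
            if not (best[a] == va and best[b] == vb)
        }
    return configs

if __name__ == "__main__":
    # Hypothetical space: the original OS/browser example plus a locale.
    params = {
        "os": ["XP", "Vista"],
        "browser": ["IE", "Firefox"],
        "locale": ["en", "fr", "de"],
    }
    for cfg in pairwise_configs(params):
        print(cfg)
```

For this space the full factorial is 2 x 2 x 3 = 12 configurations, while pairwise coverage needs only 6, and the saving grows quickly as you add parameters.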
If, in your opinion, there are several 'most likely' configurations then, yes, cycle through them in each release; but I would still use a single configuration to test exhaustively and the other configurations to run regression tests on.
Finally, it's important to tell the users which configuration(s) are preferred and which configurations they use at their own risk.
I am not sure if I have understood the question.
Configuration testing and the testing cycle are two different concepts.
You may design test cases for configuration testing and execute them across the testing cycles! (a single test cycle if you are given the build in a waterfall model; multiple test cycles if you are given builds in an incremental model)
Normally, when I have to test multiple functionalities across a variety of environments, I consider breaking them down using pairwise testing, along the lines of the example below.
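For instance (a sketch only: it assumes the third-party allpairspy package is available, and the parameter values are invented for illustration), pairwise generation in Python can look like this:

```python
# Requires the third-party allpairspy package: pip install allpairspy
from allpairspy import AllPairs

# Hypothetical parameter lists; substitute your real environment variables.
parameters = [
    ["XP", "Vista"],       # OS
    ["IE", "Firefox"],     # browser
    ["en", "fr", "de"],    # locale
]

# AllPairs yields a small set of rows covering every 2-way value combination.
for i, pairing in enumerate(AllPairs(parameters)):
    print("{:2d}: {}".format(i, pairing))
```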
Let me know if I have completely misunderstood your question!!