deliverables of an Automated Regression Testing Team
Could anyone point out the ideal deliverables of an Automated Regression Testing Team? I have been working in an Automated Testing Team for the past few weeks and we are delivering the following:
1. Test Plan Document: Automated Regression Testing document with Scope, Method and Logic, revised periodically when Test Scripts are updated or new Test Scripts are added.
2. Re-usable, maintainable Test Scripts to verify critical high-level functionality of the application, on incremental versions of the build.
3. Conducting Automated Regression Testing on builds.
4. Test Results generated by the Automated Testing Tool during Regression Testing are collected, analyzed and archived. If a test captures a bug, or fails due to a failure of the application, the failure is entered into the bug-tracking tool (PVCS Tracker in our case) and the development/bug-fixing group is notified.
5. A summary of all the Test Results, with the major bugs reported, is documented and sent to management.
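As a rough illustration of steps 4 and 5, here is a minimal sketch of how we collect results and produce the management summary. The result format and field names here are assumptions for illustration, not the actual output of any particular testing tool:

```python
# Hypothetical sketch: collect tool-generated results, separate out
# failures (candidates for the bug tracker), and build a summary.
# The dict fields ("test", "status", "detail") are assumed names.

def summarize(results):
    """results: list of dicts like {"test": name, "status": "pass"/"fail", "detail": str}."""
    failures = [r for r in results if r["status"] == "fail"]
    return {
        "total": len(results),
        "passed": len(results) - len(failures),
        "failed": len(failures),
        # failure details to be entered into the bug-tracking tool
        "failures": [(r["test"], r["detail"]) for r in failures],
    }

run = [
    {"test": "login", "status": "pass", "detail": ""},
    {"test": "checkout", "status": "fail", "detail": "timeout on payment page"},
]
report = summarize(run)
print(report)
```

The summary dict is what would be written up and sent to management; the "failures" entries are what would be raised in the tracker.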
We are not doing any Stress Testing or Load Testing. Is there anything I can add to this list?
thanks in advance,
Re: deliverables of an Automated Regression Testing Team
quote: Originally posted by jude:
Could anyone point out the ideal deliverables of an Automated Regression Testing Team?
For myself, "automated regression" is generally not a whole team in and of itself; it is just part of the overall testing effort. However, since your environment is obviously different, I would agree with having the test plan document as you mention. This is obviously a "living" document, and it sounds like you treat it as such.
Basically, I think your list sounds pretty good. You have a strict test cycle defined within the subset of Regression Testing. There are, of course, ways you could probably make a lot of this process self-documenting, at least to a certain extent, by combining the notion of your maintainable scripts with the idea of a low-level test plan based on the test scripts themselves, but those are details of the process rather than the process itself.
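To make that idea concrete, here is a minimal sketch (all test names and steps are hypothetical) where each script carries its own plan entry in a docstring, so the low-level test plan is generated from the scripts rather than maintained by hand:

```python
# Hypothetical "self-documenting" test scripts: the low-level test plan
# is derived from each script's docstring instead of a separate document.

def test_login():
    """Verify a valid user can log in and reach the home page."""
    pass  # placeholder for the actual automated steps

def test_checkout():
    """Verify an order can be placed end-to-end with a valid cart."""
    pass  # placeholder for the actual automated steps

def generate_test_plan(tests):
    """Build low-level plan entries (name, purpose) from the scripts themselves."""
    return [(t.__name__, t.__doc__.strip()) for t in tests]

plan = generate_test_plan([test_login, test_checkout])
for name, purpose in plan:
    print(f"{name}: {purpose}")
```

Regenerating the plan whenever scripts are added or updated keeps the document and the suite from drifting apart.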
The other part of this is that, I would suppose, you need to have risks defined. I am not sure what your build cycle is like there, but there needs to be time to run the regression tests. And if that time is not available, there also needs to be a way to determine whether you can run limited subsets of the tests given the time constraints.
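One way to pick such a limited subset can be sketched as follows. The priorities, runtime estimates, and greedy selection rule here are all assumptions for illustration, not a prescription:

```python
# Hypothetical sketch: each test carries a priority (lower = more critical)
# and an estimated runtime; greedily take the most critical tests that
# still fit inside the available build window.

def select_subset(tests, time_budget):
    """tests: list of (name, priority, est_minutes); returns the chosen test names."""
    chosen, used = [], 0
    for name, priority, minutes in sorted(tests, key=lambda t: t[1]):
        if used + minutes <= time_budget:
            chosen.append(name)
            used += minutes
    return chosen

suite = [
    ("smoke_login", 1, 5),
    ("checkout_flow", 1, 10),
    ("report_export", 2, 20),
    ("admin_settings", 3, 15),
]
# With a 30-minute window, report_export is skipped (would overrun)
# but admin_settings still fits afterwards.
print(select_subset(suite, time_budget=30))
```

In practice the priorities would come from the risk definitions, so the subset always covers the highest-risk functionality first.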