Best strategy to automate regression suite
I'd appreciate some expertise from anyone willing to help. I need to organize an effort to automate a regression suite of manual test cases. Management is always pushing for fast delivery, so that is my #1 priority: deliver a few test cases on a daily basis while slowly building up the regression suite. What do you all think is the best approach to take here?
I wanted to build a framework, but I worry about the lengthy time it takes: a spreadsheet with the test cases, a driver script (which I pretty much already have), a common function library, and a common object repository. I also have to coordinate this effort with 2 other people who have VERY different levels of expertise. The hardest part here will be the creation of the functions.
A second thought I had was to just record the scenarios, which would obviously be the fastest, but with a lot of duplicated effort (though I think management couldn't care less about this as long as we deliver).
Any knowledge or ideas are appreciated. Thanks to everyone in advance!
In terms of putting a framework together, this is my experience of what worked for me.
Phase 1: Probably the first thing I'd establish is the screen object / page object pattern. This way you'll have a consistent way of organizing your objects and reusable functions by the screens / pages they appear on. This will make any future refactoring much easier than with unorganized sets of reusable functions.
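To make the page-object idea concrete, here is a minimal sketch in Python. The driver is a stub that just records actions (a real suite would wrap a browser driver such as Selenium's WebDriver), and the LoginPage class, URL, and locators are invented for illustration:

```python
# Minimal page-object sketch. The driver here is a stub that records
# actions; in practice it would be a real browser driver.
class StubDriver:
    def __init__(self):
        self.actions = []  # log of (action, target, value) tuples

    def goto(self, url):
        self.actions.append(("goto", url, None))

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator, None))


class LoginPage:
    """All locators and actions for the login screen live in one place,
    so a UI change means editing one class, not every test."""
    URL = "https://example.test/login"   # hypothetical URL
    USERNAME = "#username"               # hypothetical locators
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.goto(self.URL)
        return self

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = StubDriver()
LoginPage(driver).open().login("alice", "secret")
print(len(driver.actions))  # 4 recorded actions: goto, type, type, click
```

The payoff is exactly the refactoring point above: when the login screen changes, only LoginPage is touched, not every test that logs in.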
Phase 2: Configurability - pull reused settings like the base URL, account settings, etc. into config files or environment variables.
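In Python this can be as small as reading environment variables with sensible defaults; the variable names (APP_BASE_URL, etc.) here are illustrative, not a standard:

```python
import os

# Pull environment-specific settings out of the tests themselves, so the
# same suite can run against staging, QA, or production without edits.
BASE_URL = os.environ.get("APP_BASE_URL", "https://staging.example.test")
TEST_USER = os.environ.get("APP_TEST_USER", "qa_user")
TIMEOUT = int(os.environ.get("APP_TIMEOUT_SECONDS", "30"))

print(BASE_URL, TEST_USER, TIMEOUT)
```

Switching environments then becomes `APP_BASE_URL=https://qa.example.test python run_tests.py` rather than a search-and-replace through the suite.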
Phase 3: Data driven - make your tests parameterized so they can be driven using data.
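A bare-bones sketch of the data-driven idea: one test routine, many rows of data. The login_attempt function is a stand-in for a real page-object call, and its pass/fail rule is invented for the demo:

```python
# Data-driven sketch: the test logic is written once, and each row of
# CASES supplies inputs plus the expected outcome.
def login_attempt(user, password):
    # Hypothetical rule for the demo: only this one pair succeeds.
    return (user, password) == ("alice", "secret")

CASES = [
    ("alice", "secret", True),   # valid credentials
    ("alice", "wrong", False),   # bad password
    ("", "secret", False),       # missing user
]

results = [(user, login_attempt(user, pw) == expected)
           for user, pw, expected in CASES]
print(results)
```

Adding a new scenario is then a new data row, not a new copy of the test; test runners like pytest support the same idea natively via parameterization.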
Phase 4: Reporting - generate readable reports from the test results.
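Even before adopting a reporting tool, a readable summary can be generated from raw results with a few lines; the test names and statuses below are made up:

```python
# Turn raw pass/fail results into a small human-readable summary.
results = {
    "login_valid_user": "pass",
    "login_bad_password": "pass",
    "checkout_empty_cart": "fail",
}

passed = sum(1 for status in results.values() if status == "pass")
report_lines = [f"{name}: {status}" for name, status in sorted(results.items())]
report_lines.append(f"{passed}/{len(results)} passed")
report = "\n".join(report_lines)
print(report)
```

In practice you would write this to a file (or emit JUnit-style XML) so the build server and management can read it without opening the test logs.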
Phase 5: Continuous Integration. Getting your tests part of the build process will greatly reduce bugs, and hopefully free up more time to increase your coverage.
Phase 6: Parallelism - By this point, if you've been automating tests all along, you'll probably have a few hundred tests. Where a full regression run can take 5-6 hours, you'll want to bring it down to 30min-1hr so it can provide quick feedback to developers. (This prevents developers from pulling bad code from each other; if a build takes too long, a mistake can go unnoticed long enough for another developer to check that bad code out.) You'll want to look into adding support to run your test cases multi-process or multi-threaded.
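The wall-clock win from parallelism can be sketched with Python's standard concurrent.futures; here each "test" just sleeps to simulate real work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Each fake test sleeps to simulate real work. Run in parallel, total
# time approaches (number of tests / workers) * per-test time rather
# than the straight sum.
def fake_test(name):
    time.sleep(0.2)
    return (name, "pass")

names = [f"test_{i}" for i in range(8)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    outcomes = list(pool.map(fake_test, names))
elapsed = time.monotonic() - start

print(len(outcomes), round(elapsed, 1))  # 8 tests, well under the ~1.6s serial time
```

Real UI tests are usually parallelized per process (one browser each) with a runner plugin such as pytest-xdist, but the scaling argument is the same.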
Phase 7: One-way binding to a test case manager, so test cases report their results to your test case manager and you can correlate automated coverage with your manual testing.
Phase 8: 2-way binding with test case manager. Allows you to use your test case manager as a launching point to make it easier to blend manual and automated execution into a single test plan.
My thoughts on record and playback: get off of it as soon as you can. Recordings are good for attaching to bug reports, but they become a burden pretty quickly. You may do some recorded clips initially if you organize them correctly, but by phase 2/3 you'll want to parameterize and rewrite the code in a more maintainable fashion and rely less on timings.
Your breakdown of the steps was useful to me.
Your process sounds excellent and very well organized. I appreciate you taking the time to generate such a thorough response. It is a true pleasure to read something like this because I learn so much. I do have a few clarifications if anyone wants to chime in.
About Continuous Integration, what did you find to be the best process for this approach? I mean, how do you carry out this process?
In regards to the binding process, we have many issues with this, mainly due to VPN, so this is out of the question. But if you had a team of 3 to delegate the work to, how would you split it up? Keep in mind that these team members have different levels of expertise.
One more issue we face is that we don't have version control. I'm sure management will not implement it until much further down the road, after the regression automation. Either way, how do I tackle this if we have 3 people working on 1 framework, 1 function library, 1 object repo, and so on?
As for moving the tests into CI (Continuous Integration), the key is getting your tests running on the command line in a machine-agnostic way, whether via shell scripts or via a build script (like Ant or Maven). From there it's just a matter of getting your devs' build system to execute that command line after they have deployed.
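A minimal sketch of what that command-line entry point looks like, assuming a hypothetical run_suite() stand-in for your real runner: the essential contract with the build server is just "exit nonzero on any failure":

```python
import sys

# CI entry point sketch: run the suite, print failures, and expose an
# exit code the build server can use to mark the build red or green.
def run_suite():
    # Stand-in for the real runner; maps test name -> passed?
    return {"login_valid": True, "checkout": True}

results = run_suite()
failures = [name for name, ok in results.items() if not ok]
for name in failures:
    print(f"FAILED: {name}")

exit_code = 1 if failures else 0
print("exit code:", exit_code)
# In a real script this would be: sys.exit(exit_code)
```

Because everything is driven by the exit code, the same invocation works from a developer's shell, a cron job, or any CI server.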
There are 2 strategies of doing this effectively.
1) Use the same language as your devs, check in your tests alongside the product source code, and have your tests run as part of the general build plan. This approach is great in that your tests will automatically branch with your devs' source code.
2) Have your tests as a separate project, but as a downstream dependency of your devs' master plan, where the CI tests get kicked off after the dev build plan completes. (This is what I'm currently using; of course I prefer #1, but it's hard to do if there is a ton of legacy code with its own build quirks to deal with.)
In terms of division of labor
Getting the page object / screen object pattern established should be pretty straightforward. Bang out 3 or 4 tests that span 2 or more functional areas; once you have those examples set, you can easily establish a pattern the other testers can follow. (Make sure you comment your first set of tests with explanations of why they are the way they are.)
Phases 2 and 3 are, I think, still doable by most people, but they need a bit more communication, as it's good to be consistent. It's good to do those 2 phases quickly, in a short time span; this sets the programming pattern so that as the less experienced programmers start writing tests, they won't have bad examples to copy and paste from.
Phase 5 and above should require little involvement from the other testers. These are done mostly at the framework or build level.
Here's how one company automated their manual test suite. Perhaps it might spark an idea or two: http://www.parasoft.com/jsp/printabl...ts/article.jsp
Hi there. Currently I am on a new project where we have a standard product, but for that standard product we have various clients. Can someone please supply me with an example Test Strategy document?