How to collaborate with exclusive locking??
We have a dilemma with how to structure our QTP tests.
As we understand it:
1. the QC source control only allows a single user to check out a test at any one time (exclusive checkout)
2. the QTP IDE only allows you to have one test open at a time
If we lump everything together into one test, then only one person can be working on the automation suite at any one time.
If we split tests up (e.g. one test for each conceptual ‘test’ and one test containing all the shared actions) then we have to keep switching between the different tests, closing and opening the whole session as we move from editing common routines to editing the tests that call them.
Worse, we will still have conflicts on access to the shared action library.
We could solve this in one of two ways:
1. split the common actions up into multiple modules of common actions
2. make changes locally in our conceptual test then port them into the shared library when we are done.
The drawback of option 1 is that QTP doesn’t update calls to reusable actions outside of the currently loaded test. So if anyone changes an action definition or adds a parameter, it breaks every calling test. This is already a problem with the simple ‘one common action library’ approach, but it gets worse as we split the library up further, because the chances of cross-references grow.
Option 2 is cumbersome and time-consuming, and isn’t really an effective way to develop.
How do we resolve this situation?
Re: How to collaborate with exclusive locking??
I would recommend storing any common code in function libraries (.qfl/.vbs) rather than actions, and splitting object repositories up logically (one per page or per area of functionality).
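As a rough sketch of what such a function library might look like (the function name, object names, and file path here are hypothetical, and the object descriptions would come from your associated shared object repository):

```vbscript
' SharedLogin.qfl - hypothetical shared function library.
' Associate it with the test via File > Settings > Resources,
' or load it at runtime:
'   ExecuteFile "C:\QTP\Libraries\SharedLogin.qfl"

Public Function LoginToApp(userName, password)
    ' Objects are resolved from the shared object repository
    Browser("AppBrowser").Page("LoginPage").WebEdit("txtUser").Set userName
    Browser("AppBrowser").Page("LoginPage").WebEdit("txtPass").SetSecure password
    Browser("AppBrowser").Page("LoginPage").WebButton("btnLogin").Click

    ' Return True if the home page appears within 10 seconds
    LoginToApp = Browser("AppBrowser").Page("HomePage").Exist(10)
End Function
```

Because these are plain text files, several people can edit different libraries at once, and the exclusive-checkout problem shrinks to whichever single library each person is working in.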
My tests are made up entirely of reusable function libraries parameterized with QC design step parameters. The only action (and thus the only test you have to open and switch between) is a common driver.
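A driver action along these lines might look like the following sketch. It assumes each QC design step supplies a `Keyword` parameter plus its arguments (the keyword names and library functions below are made up for illustration):

```vbscript
' Driver action - the only action in the test.
' Assumes design-step parameters are mapped to action input
' parameters, and that the function libraries defining
' LoginToApp / PlaceOrder are associated with the test.

keyword = Parameter("Keyword")   ' which library routine to run

Select Case keyword
    Case "Login"
        LoginToApp Parameter("UserName"), Parameter("Password")
    Case "PlaceOrder"
        PlaceOrder Parameter("ItemId"), Parameter("Quantity")
    Case Else
        Reporter.ReportEvent micFail, "Driver", _
            "Unknown keyword: " & keyword
End Select
```

With this shape, adding a new test in QC is mostly a matter of writing design steps; the driver action itself rarely changes, so contention on the checked-out test stays low.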
I'm definitely an advocate of splitting tests up rather than having one big one. Each of my tests covers one scenario with one set of data. This makes the reporting in QC meaningful for pass/fail rates and the quality of the release: a 96.3% pass rate is more informative than a flat 0 or 100.