Integrating a test into QC
I wrote a large function library (.qfl) in UFT to test a website. The various functions are called by an Action script driven by an .xls file. The test data is also included in the .xls file.
I can currently turn individual functions on and off (log in, user profile change, etc.) by editing the .xls file. This works fine for me, but I would like to run it from QC and have it be mostly hands-off so that anybody can use it. What would be the best way to accomplish this? I could create many versions of the Action script and keep the monolithic .qfl, but that could become a maintenance issue.
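For context, the driver loop is roughly the sketch below. The sheet and column names here are illustrative, not my real ones:

```vbscript
' Rough sketch of the driver Action (illustrative sheet/column names).
' The .xls has a "Control" sheet with two columns:
'   FunctionName - name of a function in the .qfl
'   Run          - "Y" to execute that function, anything else to skip it
DataTable.AddSheet "Control"
DataTable.ImportSheet "C:\Tests\TestData.xls", "Control", "Control"

Dim i, funcName
For i = 1 To DataTable.GetSheet("Control").GetRowCount
    DataTable.GetSheet("Control").SetCurrentRow i
    funcName = DataTable.Value("FunctionName", "Control")
    If UCase(DataTable.Value("Run", "Control")) = "Y" Then
        Execute funcName & "()"   ' invoke the library function by name
    End If
Next
```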
Is there a way to configure QC to call the various portions of the test from the Test Lab? Are there any other options for setting this up?
Do people typically write large, monolithic tests in QC, or are they broken into smaller tests? I always write test cases to be small and to the point, but I am unsure how it is done from an automation standpoint.
The intended workflow in QC is more like this:
Create manual tests with design steps mapped to requirements
Create an automation script for the test
This ends up as a one-script-to-one-test relationship unless you do your own abstraction. It also leaves you with a test that can be executed in automation and still has full steps documented for a manual test, should the automation run into problems due to a UI change or some such.
QC's reporting features only really make sense with small, targeted tests too. One giant test that runs your entire regression and fails leaves you with a 0% pass rate when the real figure could be 99.5%.
So you're trying to shoehorn a giant automated test in here backwards. Given your brief description, I would lean towards a driver script of some sort: a common script, distributed to all tests, that calls into your existing test structure, with each test carrying its own variation of the .xls data as a QC attachment or some such.
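As a rough, untested sketch of that driver: it assumes the test is launched from the QC Test Lab (so UFT's QCUtil object is connected), and RunDriver is a made-up entry point into your .qfl. Check the property names against the OTA API reference before relying on them:

```vbscript
' Untested sketch: fetch this test's own .xls attachment from QC,
' then hand it to the existing xls-driven framework.
' Assumes the run was started from the QC Test Lab, so QCUtil is live.
' RunDriver is a hypothetical entry point into the .qfl.
Dim currentTest, attachList, att, storage

Set currentTest = QCUtil.CurrentTest               ' OTA Test object for this run
Set attachList = currentTest.Attachments.NewList("")

For Each att In attachList
    If LCase(Right(att.Name, 4)) = ".xls" Then
        Set storage = att.AttachmentStorage        ' OTA extended storage
        storage.ClientPath = "C:\Temp"             ' local download folder
        storage.Load att.Name, True                ' synchronous download
        RunDriver "C:\Temp\" & att.Name            ' your framework takes it from here
    End If
Next
```

That keeps a single copy of the driver and the .qfl, while each QC test differs only in the .xls it carries.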
For me, one of the most important parts of script design is starting and stopping in a known, basic state. For example, testing a website might mean that I create one script to log in and navigate to whatever the "home" page is after logging in. That becomes my basic state. Now every script that tests this website starts from that page, navigates to whatever page I plan to test, tests the page, then returns to the basic page.
I do it this way because now I can turn some tests on or off within my test plan, and I know that every remaining test will still run properly because none of them are dependent on starting where a different test finished.
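In UFT terms, the base state can live in one shared routine; everything in this sketch (the URL, the object names, the LogIn helper) is a placeholder for your own object repository and function library:

```vbscript
' Shared "base state" routine: every test begins and ends by calling this.
' URL, object names, and the LogIn helper are placeholders.
Public Sub GoToBaseState()
    If Not Browser("MyApp").Exist(0) Then          ' no browser yet: open and log in
        SystemUtil.Run "iexplore.exe", "http://myapp.example.com/login"
        LogIn "testuser", "secret"
    Else                                           ' already logged in: just go home
        Browser("MyApp").Navigate "http://myapp.example.com/home"
    End If
    Browser("MyApp").Page("Home").Sync             ' confirm we are on the home page
End Sub
```

Each test then has the same shape: GoToBaseState, navigate to the page under test, run the checks, GoToBaseState again so the next test starts clean.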
Now, this means that some of my test scripts might be very long. To get to the page I want to test, I might have to start in my known basic state, navigate, fill out a form (which might span multiple pages), verify that data, submit the form, and finally reach the info I wanted to test. Along the way I might have several "check points" and I might test several things, any one of which could make the test pass or fail. In short, this one script might be "monolithic" and might contain dozens of test cases. But if that's what it takes to reach the page I need to test, then that's how I do it. In a case like this, I would roll several other test cases into the big script, since it's very likely that I would have test cases covering features of the pages I'm using along the way.
This way I can kill all those birds with one monolith.
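In UFT, each of those rolled-in checks can still report its own pass or fail, so one long script produces granular results in the run report. The page and field names below are made up:

```vbscript
' Each intermediate verification reports its own status, so a long
' script still yields granular results. Object names are made up.
Dim expectedTotal
expectedTotal = "100.00"   ' in practice, computed from the form data entered earlier

If Browser("MyApp").Page("OrderSummary").WebEdit("Total").GetROProperty("value") = expectedTotal Then
    Reporter.ReportEvent micPass, "Verify order total", "Total matches " & expectedTotal
Else
    Reporter.ReportEvent micFail, "Verify order total", "Expected " & expectedTotal
End If
```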
Another script might be as simple as navigate to page x, test a few basic things on the page, then return. Done.
It varies by the needs of each test.
I would suggest NOT limiting your creativity by shoving it into a straitjacket of "best practice". Don't assume that all scripts must be short and modular, and conversely, don't assume that all scripts must be convoluted monoliths. Just evaluate each situation and write each individual script so that it gets the job done quickly, efficiently, and with a modicum of robustness, so that you can maintain it through the rest of your software life cycle.
"The last 10% of any software project will take 90% of the budgeted time. The first 90% will take the other 90%"