I have been looking for a way to define my test automation scripts in QC itself. I want to define the steps in a high-level, simple English-like language. I mean using keywords to define my script in Quality Center, which then gets translated into Java code (the underlying framework is in Java) when the test is run from the Test Lab. Each keyword, defined as a step in the Design Steps module, will correspond to a function in the Java code. The Java function will execute and return a boolean and an error message (if any); this result becomes the step's status.
The script will look something like below and will be defined in the design steps section of the test.
The row [number] signifies the row of data to be used as the parameter for that function. Each function will have its own data table. I also plan to keep the test data in Excel sheets attached somewhere in QC.
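A minimal sketch of the keyword-to-function mapping described above, assuming a Java keyword library resolved via reflection. All names here (`KeywordRunner`, `StepResult`, the `login` keyword) are hypothetical illustrations, not part of any real framework; each keyword method takes one data row and reports a pass/fail boolean plus an error message, which would become the step's status.

```java
import java.lang.reflect.Method;
import java.util.List;
import java.util.Map;

// Hypothetical keyword library: each public method is one keyword usable in a design step.
public class KeywordRunner {

    // The result contract described in the thread: a boolean plus an error message.
    public static class StepResult {
        public final boolean passed;
        public final String error;
        public StepResult(boolean passed, String error) {
            this.passed = passed;
            this.error = error;
        }
    }

    // Example keyword method; the name and data-column names are placeholders.
    public StepResult login(Map<String, String> row) {
        if (row.get("user") == null || row.get("user").isEmpty()) {
            return new StepResult(false, "no user supplied in data row");
        }
        return new StepResult(true, "");
    }

    // Resolve a keyword from a design step to a method and invoke it with the
    // requested row of that keyword's data table (rows are 1-based in the step text).
    public StepResult run(String keyword, List<Map<String, String>> dataTable, int rowNumber) {
        try {
            Method m = getClass().getMethod(keyword, Map.class);
            return (StepResult) m.invoke(this, dataTable.get(rowNumber - 1));
        } catch (NoSuchMethodException e) {
            return new StepResult(false, "unknown keyword: " + keyword);
        } catch (Exception e) {
            String msg = e.getCause() != null ? e.getCause().getMessage() : e.getMessage();
            return new StepResult(false, msg);
        }
    }
}
```

Reflection keeps the dispatch table out of the code entirely: adding a new keyword is just adding a new public method, so the design steps in QC stay in sync with the library without a separate registry.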
Do you think this strategy makes sense? Are you aware of any such strategy? I have done a lot of searching but haven't come across any material that explains this approach with QC. Also, do you think writing the keywords in the design steps is a good idea? All suggestions are welcome.
So there is no role for QTP as such, but is there any specific reason why you are looking outside QTP?
Yes, the function calls can be written in Design Steps; you would then read them using the OTA API.
I believe you would integrate the execution call with the design steps using the same process.
All the relevant files can be stored as attachments to the test cases themselves.
This is very similar to a keyword-driven framework; in fact, it is one. However, I am worried about the effort that would go into this, and if you are planning to port the same framework across multiple projects/applications, there may be a good amount of work.
If your organization can afford it, then why not!
Some of the senior members of this forum have shared their frameworks in the sticky posts at the top of all threads. Not sure if you have looked at them; maybe you should.
The advantage of using a tool like QTP is not its direct features but its extensibility: one need not spend much time rewriting the basic features, which is not the case in your approach.
Also, I would advise you to post this in the General Automation forum.
Thanks for your reply. We didn't use QTP because we are not doing GUI testing; we are doing API-level testing. QTP doesn't recognise the GUI of this application, and GUI testing is neither possible nor our objective.
Following are our objectives:
1) To be able to define scripts from QC. Earlier we were just linking one QC test with one automation test. The automation test was defined in Java, so the manual testers and business analysts were not able to define tests; only a Java developer could. We want to abstract the programming complexity/syntax away from the person who will define the test steps.
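For the abstraction above, a thin translation layer would have to turn a plain-text design step into something the Java framework can dispatch. A minimal sketch of that parsing step, assuming the step text follows the "Keyword [row]" convention described earlier; the class and step format shown are hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for a design-step line such as "CreateOrder [3]":
// group 1 is the keyword name, group 2 the 1-based data-row number.
public class StepParser {
    private static final Pattern STEP = Pattern.compile("^\\s*(\\w+)\\s*\\[(\\d+)\\]\\s*$");

    // Returns {keyword, rowNumber}; rejects anything that is not a keyword step,
    // so malformed steps fail loudly instead of silently skipping.
    public static String[] parse(String stepText) {
        Matcher m = STEP.matcher(stepText);
        if (!m.matches()) {
            throw new IllegalArgumentException("not a keyword step: " + stepText);
        }
        return new String[] { m.group(1), m.group(2) };
    }
}
```

Keeping the grammar this small is what makes the approach usable for non-programmers: a tester only ever writes a keyword and a row number, and everything else stays inside the Java framework.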
2) If we adopt the strategy of linking the automation script with the manual test in QC, then we have to maintain two entities: the manual script and the automation script.
3) At an organization level, we have decided to store and control all our test artifacts in QC. This means even authoring/creating tests should be possible from QC.
We have done something similar with steps, so that tests can be run manually or automatically via VAPI-XP. The scripter was responsible for maintaining both. It required a good user guide to show what goes where. It involved setting up "environment" and step-level parameters for the action scripts and values, all consolidated via a standard VAPI-XP script that runs the actions on the third-party test tools using the parameter values and grabs logs back into QC. We had to tool a mass update of the standard script until it was stable. We are currently looking into possibilities of using the v10 Resources module for common elements, and also thinking about better configuration management of the test tools (and stuff like plink), with an easy way to install from QC.
Found it worthwhile when regression testing a year later, after the old testers had moved on.
I want to know more about your setup.
1) Where did you store the test data? Was the test data arranged test-case-wise, or did every keyword function have its own test data table from which it drew its data?
2) What was the step parameter used for?
3) Could you execute your test cases against multiple sets of data?