Positives, Negatives and alternatives to Keyword Driven Frameworks
I just wanted to start an open discussion about Keyword Driven Frameworks, commonly used with QTP. I have been doing automated testing for a while now and have come to realize that there is a huge pain associated with the KDF: transferring everything to the Excel spreadsheet. After developing my code and getting it running in QTP, I find myself wasting a ton of time moving it over to the spreadsheet, and that step seems to serve no purpose in helping me test the application. I was wondering if any of you have come across solutions to this, or have an argument for the spreadsheet and why people continue to use it in their Keyword Driven Frameworks. I was thinking of a solution along the lines of developing the code, inserting it into some test suite, and that's it. Not sure of the details yet.
What kind of data are you pulling across to your datasheet? We have strictly observed coding standards that require a boilerplate header for each method/function written in UFT. As a result, it is a simple matter to write a piece of code that goes through all our function libraries, pulls out all the method/function names, descriptions, and lists of parameters with their descriptions, and dumps them into an Excel spreadsheet. If your framework allows direct manipulation of objects, then I would suggest dispensing with the OR altogether and keeping your DP definitions in a spreadsheet.
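To illustrate the idea of harvesting keyword documentation from function libraries, here is a small Python sketch. The `'@Description`/`'@Param` comment convention, the sample library text, and the function names are all invented for this example; the real libraries would be UFT `.qfl`/`.vbs` files and the output would go to Excel rather than a list.

```python
import re

# Hypothetical boilerplate convention: each UFT function is preceded by a
# comment header. The tag names and sample functions below are illustrative,
# not the poster's actual standard.
LIBRARY_TEXT = """
'@Description: Logs a user into the application
'@Param username - the login name
'@Param password - the login password
Function Login(username, password)
End Function

'@Description: Creates a new account
'@Param accountName - display name for the account
Function CreateAccount(accountName)
End Function
"""

def extract_keywords(library_text):
    """Pull (name, description, params) tuples out of a function library."""
    pattern = re.compile(
        r"(?P<header>(?:'@.*\n)+)"      # the comment header block
        r"Function\s+(?P<name>\w+)"     # the function it documents
    )
    keywords = []
    for match in pattern.finditer(library_text):
        desc, params = "", []
        for line in match.group("header").splitlines():
            if line.startswith("'@Description:"):
                desc = line.split(":", 1)[1].strip()
            elif line.startswith("'@Param"):
                params.append(line[len("'@Param"):].strip())
        keywords.append((match.group("name"), desc, params))
    return keywords
```

From there, writing each tuple out as a spreadsheet row (with a library such as openpyxl, or COM automation of Excel) is the easy part; the point is that a consistent header convention makes the harvesting trivial.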
You might want to define what your KDF does, as KDF means different things to different people, ranging from using QTP's built-in keyword framework to having everything held externally (script, data, code, etc.) and using QTP only for object recognition and driving the UI.
I've designed a few Keyword Frameworks on top of other tools that already have keyword functionality.
Their main strengths are typically being more human-readable and being tool-agnostic at the business and data layers. (The implementation level, where the tool has to drive the UI, is still tool-specific.) Another key advantage is the expressiveness with which you can represent your data inputs. A previous company I worked for dealt with insurance, where our workflow involved filling in lots of form fields. I created a keyword framework that made filling out forms very easy, with even a token language for pulling in config values and generated values. With a simple execution engine, you can also start stitching multiple tools together. For example, I created a keyword framework that used RFT, Selenium, and shell scripting.
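As a rough idea of what such a token language can look like, here is a minimal Python sketch. The `${config:...}`/`${gen:...}` syntax, the token names, and the config values are all made up for this example, not taken from any real framework:

```python
import re
import random

# Illustrative config store; keys and values are invented for this sketch.
CONFIG = {"env.url": "https://test.example.com", "user.name": "qa_user"}

def resolve_tokens(cell_value, config=CONFIG):
    """Expand ${config:key} and ${gen:type} tokens inside a datasheet cell."""
    def replace(match):
        kind, arg = match.group(1), match.group(2)
        if kind == "config":
            return config[arg]               # pull a configured value
        if kind == "gen" and arg == "digits":
            # generate a fresh 6-digit value, e.g. for unique account numbers
            return "".join(random.choice("0123456789") for _ in range(6))
        raise ValueError("unknown token: " + match.group(0))
    return re.sub(r"\$\{(\w+):([\w.]+)\}", replace, cell_value)
```

With something like this, a form-filling keyword can take cell values such as `acct-${gen:digits}` and get a unique value on every run, which is exactly what made the insurance form workflow easy to express.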
There can be different levels of pleasure/pain depending on how well the tooling is developed and how much support there is around the workflow. For example, a simple Excel macro that looks up the available keywords from a database or external data source and uses them as the autocomplete source when you enter a new line in your keyword test will make the workflow a lot easier.
These days I hate keyword frameworks. Through experience I've found that no matter how well designed one is, it generally devolves into sloppy pseudo-coding. Most people without engineering experience do not know how to separate high-level business logic from low-level implementation logic (actually, most devs can't do this either). At least with actual code, you can apply rigorous linting and refactoring tools to help keep code quality in line. With keyword frameworks, it's hard to build linting and grammar checking. They're friendly to human readers, but horrible at being machine-friendly when it comes to error checking.
For example: most people think in terms of the actions they are performing, not the purpose behind them.
Good keyword test:
Login $username $password
CreateAccount "test account"
Login $username $password
OpenAccount "test account"
Bad keyword test:
Fill AccountName "test account"
Click link "test account"
It's such a simple concept, but no matter how much I tell the QA engineers under me to keep things high-level when dealing with keywords, they just don't get it. The moment I stop doing code reviews, I see a bunch of UI interaction crap polluting the business-level worksheets.
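The layering being argued for can be sketched in a few lines of Python. All function and keyword names here are invented; the point is only the separation: the worksheet row `CreateAccount "test account"` maps to one business-level function, and the `fill`/`click` calls never appear in the sheet.

```python
# Stand-in for the automation tool driving the UI: low-level calls just log.
ACTION_LOG = []

# --- low-level implementation layer (never referenced in worksheets) ---
def fill(field, value):
    ACTION_LOG.append(("fill", field, value))

def click(target):
    ACTION_LOG.append(("click", target))

# --- business-level keywords (the only names testers should use) ---
def create_account(account_name):
    """One business action composed of several UI interactions."""
    fill("AccountName", account_name)
    click("Save")

def open_account(account_name):
    click("link:" + account_name)
```

The "bad" keyword test above is what you get when rows call `fill` and `click` directly: the sheet then breaks every time the UI changes, instead of one keyword implementation changing.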
Last edited by dlai; 03-25-2014 at 09:57 AM.
The spreadsheet is a test suite of the relevant automated tests, containing the test cases and their test steps. The test steps are a collection of functions developed in UFT together with their parameters. Basically, the spreadsheet drives the entire logic: it finds each function and executes it. But I'm wondering if there are other alternatives. I am not sure how you do it, but I develop my code in UFT (a bunch of functions) and then move it over to Excel so that Excel drives what UFT will do. To me, this seems like an unnecessary step. To create code that works, and then move it somewhere external, seems like a waste, doesn't it?
You're right, Mark. My definition of a Keyword Driven Framework involves these components: a driver script, which is a UFT script containing the logic to navigate through an Excel spreadsheet. The driver script sets up the environment by loading all the functions and setting any environment-related variables. The Excel spreadsheet is a suite of tests composed from the functions we developed; it holds all the data, variables, and anything else necessary to perform a test case. The functions come from different function libraries. I hope this clarifies things.
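To make the driver/spreadsheet relationship concrete, here is a minimal sketch of that dispatch loop in Python. The keyword names and functions are invented; in the real framework the rows would come from Excel and the functions from the UFT libraries the driver loads:

```python
# Invented stand-ins for functions loaded from the function libraries.
def login(username, password):
    return "logged in as " + username

def create_account(name):
    return "created " + name

# The keyword registry: maps the name in the spreadsheet cell to a function.
KEYWORDS = {"Login": login, "CreateAccount": create_account}

def run_test(rows):
    """Execute each (keyword, *args) row in order, like the driver script."""
    results = []
    for keyword, *args in rows:
        func = KEYWORDS.get(keyword)
        if func is None:
            raise KeyError("unknown keyword: " + keyword)
        results.append(func(*args))
    return results
```

The alternative you are circling is to skip the spreadsheet and write the rows as code directly, i.e. just call `login(...)` and `create_account(...)` in a suite script; the dispatch table only earns its keep if someone other than the coder is authoring the rows.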
Originally Posted by mwsrosso
dlai, I completely agree with you on pretty much everything. That is why I am beginning to wonder if they are even worth developing. I was thinking that at some point they become user-friendly enough that manual testers can actually develop their own automated tests by linking a bunch of keywords together.
So what is your approach? I like how a KDF can easily be executed as a bulk of tests or as just one. But I feel that adding all the stuff to the spreadsheet is a waste of time, and maybe I can somehow recreate this process in a UFT script. That way I can just develop my code, and once it works, I don't need an extra step to move it anywhere.
Originally Posted by dlai
In short, I think developing keyword tests is a waste of time in the long run.
Originally Posted by smartrussian24
Test execution in bulk or one at a time can be done without keywords; it should be a simple matter of composing a test suite. I believe you can also tie in Quality Center to handle launching and reporting for you, by tying automated tests to test cases and then launching the test cases as part of a test plan.
My personal opinion is: just don't have manual testers write automated tests until they are trained to the level of an automation engineer or developer. At that point, the benefits of keyword testing in terms of readability and separation of logic become moot (so all you get is tool independence at the business level).
My thought is that, when I look at the larger problems two to three years out, maintainability, stability, and test performance (who wants to wait around half a day for a regression plan to finish?) usually become bigger issues than test readability. A badly engineered test, whether written in code or keywords, suffers just as much. But in the world of keyword testing, you don't have code refactoring tools to help you dig out of a hole. There are higher-level concepts, such as encapsulation, compartmentalization, and good/safe programming and logic patterns like idempotency, that a strictly manual tester without a programming background doesn't have. I think they shouldn't be writing automated tests until they understand core programming concepts and design patterns.
The way we handled it at a previous company - we had a keyword-driven framework and leveraged ALM test cases to store our test cases. We created test case templates to establish fixed criteria on how test steps were created and which keywords were used and we used datasheets attached to the test cases themselves to drive stand-alone steps. We also had a way of referencing existing test cases for use as business processes, but I forget how we tied them together (although it was probably pretty similar to how you would call one test case from another in QC).
This worked out really well in that it was very thought-out and supported with training, documentation, the works. Because of the ALM integration, tests were able to run on schedule and unattended. And we had ALI partially rolled out, so for the projects that were participating in the trial, only the required test cases would be executed when a build was deployed. Since we were an agile organization, being able to run only the required tests was essential to ascertaining the build results quickly. The system wasn't without its flaws, but we had close to 80% adoption rate across the organization and most of the QA people were able to write their automated test cases successfully.
The key benefit of using ALM was the centralization it provided. We could do all of our test creation & execution in system, and we could collect, store, and analyze our test results. You can certainly create your own metrics collection & reporting mechanisms, but unless there is a real need, I don't see the point of reinventing these mechanisms.
Personally, I think it's just important that the right test cases are automated in the right way using the right automation tools. This boils down to process, and regardless of what type of framework is in use, or whose, most QA departments I've encountered fail in this regard.
Branching and looping are something I'm not sure about in a KDF.
If inventory of widget A exists then
Sell Item to Customer
Order more widgets
How does one do a branch in the spreadsheets?
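One common answer is to keep the sheet linear and push the branch down into a composite keyword, so the condition lives in code where it belongs. A Python sketch of that idea (the inventory check, the keyword names, and the restock quantity are all invented for illustration):

```python
# Invented application state standing in for the system under test.
INVENTORY = {"widgetA": 3}

# --- simple keywords ---
def sell_item(customer):
    INVENTORY["widgetA"] -= 1
    return "sold to " + customer

def order_more(widget):
    INVENTORY[widget] += 10
    return "ordered " + widget

# --- composite keyword: the branch lives here, not in the spreadsheet ---
def sell_or_restock(customer):
    """If widget A is in stock, sell it; otherwise order more."""
    if INVENTORY["widgetA"] > 0:
        return sell_item(customer)
    return order_more("widgetA")
```

The spreadsheet row then just says `SellOrRestock $customer`. Some frameworks instead add `If`/`EndIf` rows that the driver interprets, but in my experience that is exactly the slide into pseudo-coding described earlier in the thread.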