QTP/UFT and TFS Integration?
My current customer is insisting we start to store our QTP tests in TFS, despite the fact that we're already using ALM as a service to store all our stuff in HP's cloud, with version control. It's their policy, even if it is trying to fit a square peg in a round hole, so we have to make a good-faith effort to make it work.
The first two pages of results of a Google search all point to Anna Russo's idea, one way or another, so I'm guessing there isn't much around on the subject. Her solution is concerned with running QTP tests from TFS, which isn't what we want to do - we want to continue developing and running in QTP, but use TFS to store the assets (tests and function libraries).
We could just use TFS to check tests and libraries in and out of a local drive, and have QTP work with those local assets. This would be fine if there was only one automator, but there are two of us and we need to use the same shared set of function libraries, so it gets a bit tricky. When we run a test in QTP from ALM, it downloads and compiles the latest set of checked-in library files automatically. We can't do this from a TFS server; we'd have to manually check that we have the latest libraries every time we ran a test, which isn't practical while developing. Given that we currently have about 25 distinct libraries, and some tests use 10-15 of them, the overlap between us is considerable.
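(For illustration: even without proper integration, the manual "get latest before every run" chore could at least be scripted. A minimal sketch in Python, assuming the TFS workspace is mapped to a shared folder and QTP loads libraries from a local folder — the paths and the `.qfl` extension are assumptions, not anything TFS- or QTP-specific:)

```python
import shutil
from pathlib import Path

def sync_libraries(workspace_dir, local_dir):
    """Copy any library file that is newer in the TFS workspace
    than in the local folder QTP actually loads from."""
    workspace = Path(workspace_dir)
    local = Path(local_dir)
    local.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in workspace.glob("*.qfl"):
        dst = local / src.name
        # copy if the local copy is missing or older than the workspace copy
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(src.name)
    return copied
```

Run something like this as the first step before launching a test, so at least the "did I forget to get latest?" check is automatic rather than a thing you have to remember.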
Has anybody dealt with such a set-up before? What were the issues you encountered and how did you resolve them? Are there any tools that would help?
That sounds painful.
I assume you've tried to explain the futility of the idea...?
What is their motivation for wanting your assets in TFS?
- is there a desire to stop consuming HP SaaS services?
If you didn't have ALM, or any other tool, you'd be storing all your assets in a shared network drive.
Would you be able to add this folder location to TFS to place it under TFS version control?
Version control of QFLs and TSRs should be pretty straightforward, as they are flat files.
I have no idea how it would work for QTP scripts though as the file system artifacts for a QTP script are a little complicated.
..and where/how would the results be stored?
Nice to see another Kiwi here. :-)
... just another Tester ...
Hi Alex, my wife works for your company and so do most of the people I work with
>I assume you've tried to explain the futility of the idea...?
Hahah, yes, repeatedly. The dev manager is a rules-is-rules kinda guy, and we are but lowly contractors. All docs go in SharePoint and all code goes in TFS.
The issue here is really about running shared libraries at run-time, as QTP obviously gets the latest checked-in version of a library every time you run the code. One solution is to do development in a local "work-in-progress" directory, and only transfer the code into a library once it's nailed down, but that's still cumbersome.
Another thing that's just occurred to me is what the hell do I do with other assets like CSV and Excel files, which I call at run-time. The more I think about it, the more stupid it seems, especially when there's already a perfectly satisfactory solution in place. They won't be getting rid of ALM SaaS any time soon, as the project will be in ongoing dev for at least a couple of years.
I've not worked with TFS before, but I have used 3rd-party code repos. We're using GitHub currently for our Selenium and QTP scripts. As far as the repo is concerned, the tests are just directories like any other. You're not going to get comparisons with binary files like the OR, obviously, but when a merge conflict is found you can grab both versions and compare them with QTP's built-in OR comparison tool. The bulk of the code is plain text in the Script.mts files and the repository files, though, and the comparisons work fine there.
Prior to a run you just do a sync to grab the latest code. Or pull an appropriate branch if you've got a new version in QA but want to run with older code against prod.
Not sure if that's how TFS works though. Does it maintain a local copy that it gets synced up to?
I've worked with others as well that had a hard check-in/check-out concept. The tests would just be on a shared network drive under version control. Only one person could make a change at a time by locking the file with a checkout.
Last edited by NoUse4aName; 06-20-2014 at 06:34 AM.
I've done it before... it's not that bad, just a little cumbersome at first. The first couple of weeks are rough because even with two people, you'll be learning how to develop without stepping on each other's toes. You'll have to learn how to develop in isolation and merge your finished code with the main branch... typical dev stuff.
Our approach was to develop locally and merge our code into our main branch when finished. We had a simple branching structure (basically one branch for dev and the other for main). Our procedure was to merge our code into dev when complete and to merge dev into main at the end of an iteration. If you're tasked with developing scripts for multiple environments (DEV, QA, STAGE, PROD, etc.), or have different flavors of the scripts for each of those environments, then you might look into a labelling approach rather than a branching approach. Talk to your build engineer about implementing this if it's required for you.
We also had a deployment script that would take whatever was checked in to main and deploy it to a specific directory on our network. When it came time to run our tests for anything but debugging, we'd run the deployed code, not the code from our personal workspaces. The deploy script was run on a schedule (every day), but could also be triggered manually whenever needed. It was written in PowerShell, and since it sounds like you're in a Microsoft shop, you should have someone on hand (if not you, then perhaps a build engineer or developer) who could put this together for you.
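(The deploy step described above was PowerShell, but the idea is tool-agnostic: mirror the latest main checkout to a run location so tests always execute checked-in code. A rough sketch in Python, with invented paths and a dated-snapshot convention that is my own assumption, not the poster's:)

```python
import shutil
from datetime import datetime
from pathlib import Path

def deploy(main_checkout, deploy_root):
    """Mirror the latest 'main' checkout into a dated deploy folder
    and refresh a 'current' copy, so test runs always use checked-in
    code rather than someone's personal workspace."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(deploy_root) / stamp
    shutil.copytree(main_checkout, target)   # dated snapshot for history
    current = Path(deploy_root) / "current"
    if current.exists():
        shutil.rmtree(current)               # replace the live copy wholesale
    shutil.copytree(target, current)
    return target
```

Schedule it daily (Task Scheduler, a TFS build step, whatever) and you also get a trail of dated snapshots to fall back on when a run needs older code.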
The framework was built with portability in mind because we wanted to be able to deploy to any location without requiring users to perform any manual setup or activation. I wrote a .NET application that would create the necessary objects to load and configure the tool and, when necessary, do the same for the tests. This allowed us to deploy the code anywhere on the network and even let new machines (machines that had never run the scripts before) run the tests without user intervention and at full efficiency.
For me, the biggest PITA about this was the object repository, in that if you use the tool out of the box, there's no way to merge or compare ORs under source control. This can be circumvented by exporting the ORs to XML files, keeping those files in TFS, doing your diffing and merging against them, and then rebuilding the OR as part of your driver script. Or you could step away from the OR altogether. Excel data tables were also painful for a similar reason, although the problem there wasn't that the files couldn't be compared, it was just that they weren't in an easily readable format.
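(Once the ORs are XML, even a crude structural diff is enough for a quick "what changed" check before a merge. A generic sketch — it just compares element paths and attributes, and deliberately assumes nothing about HP's actual OR export schema:)

```python
import xml.etree.ElementTree as ET

def element_paths(xml_text):
    """Flatten an XML document into a set of 'path[attrs]' strings."""
    paths = set()
    def walk(node, prefix):
        attrs = ",".join(f"{k}={v}" for k, v in sorted(node.attrib.items()))
        path = f"{prefix}/{node.tag}[{attrs}]"
        paths.add(path)
        for child in node:
            walk(child, path)
    walk(ET.fromstring(xml_text), "")
    return paths

def diff_repositories(xml_a, xml_b):
    """Return (only_in_a, only_in_b) for two OR XML exports."""
    a, b = element_paths(xml_a), element_paths(xml_b)
    return sorted(a - b), sorted(b - a)
```

Note this collapses duplicate siblings with identical attributes into one path, so it's a review aid, not a real merge tool — for actual merging you'd still round-trip through QTP's comparison tool.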
Another thing you'll want to watch out for is adding unwanted files to source control. Mainly I'm talking about any screenshots captured during execution, but lock files can cause pain too. I think you can identify which files you want to exclude during setup and during check-in, so be mindful of that.
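(With TFS 2012+ local workspaces, those exclusions can be declared in a `.tfignore` file at the workspace root, similar to a `.gitignore`. The patterns below are only guesses at the kind of run-time litter QTP drops; check what your version actually generates before copying them:)

```
# .tfignore - keep run-time litter out of version control
# (example patterns only - adjust to what QTP actually writes)
*.png
*.bmp
Snapshots
*.lck
*.bak
# a leading ! re-includes something a broad pattern caught:
!Baseline.png
```
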
If I could do it over, I probably would not have used Excel for test data storage and instead gone with XML files, as these are more easily source-controlled. I probably would have also abandoned the object repository approach altogether, or done the XML-repository thing instead.
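(For instance, a flat XML data file diffs cleanly in any source-control tool, and reading it back as data-driven rows is trivial. The layout below is invented for illustration, nothing QTP-specific — in QTP itself you'd do the equivalent via MSXML from VBScript:)

```python
import xml.etree.ElementTree as ET

SAMPLE = """\
<TestData>
  <Row user="alice" password="secret1" expected="welcome"/>
  <Row user="bob"   password="wrong"   expected="denied"/>
</TestData>"""

def load_rows(xml_text):
    """Parse <Row attr="..."/> elements into a list of dicts,
    one dict per data-driven iteration."""
    return [dict(row.attrib) for row in ET.fromstring(xml_text).iter("Row")]
```

Each checked-in change to a file like this shows up as a readable line-level diff, which is exactly what the Excel binaries couldn't give us.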
My thoughts are that if the customer is hell-bent on being a .NET shop, then they should switch over to something that plays nicer with Visual Studio and get rid of QTP/UFT. Some of the other vendor tools integrate nicely with Visual Studio Professional and above, and Selenium WebDriver works nicely with it as well. It's great that we can store QTP/UFT scripts in TFS and, to an extent, modify them in Visual Studio, but it kills me that I'm not able to develop new scripts using it. But I'm viewing this from more of an agile standpoint, where as a developer, I might be responsible for creating the occasional automated test script.
My current customer refuses to allow automation developers to have access to source control due to "union rules" (say what???) and doesn't like the idea of us putting our assets in places like GitHub. Consequently, there are five of us attempting to work on stuff at any given time. We have conflicts and overwrites every now and then, but since we keep our concerns separated, we don't step on each other too much. Since I'm also remote, the assets are stored on a network 1500 miles from me, and our network sucks, I prefer to develop in isolation, pulling down and merging up when needed. I use SyncToy for getting latest and manually merge the files using Notepad++. Since we're not really using the OR much and don't use Excel data files at all, I haven't had too much trouble with this approach.