Good day to all,

I am looking for recommendations on how to rewrite my test scenarios and environment (if needed).

When I joined the group I am in, the people here just recorded scripts without understanding how RFT worked. There were no common routines, no documentation in the code, and testing against a new environment required modifying every script to point to the new host. That was a bit of a pain.

Since then, I have made changes, created a library of functions, etc. Things are working okay for now. I can run my regression and performance scenarios (and any ad hoc tests), but I had to modify a lot of scripts to make use of these common routines.

However, I have noticed that things are not working all that well after upgrading to RFT 8.1. It may have to do with the way the scenarios and routines were written (I am not an advanced Java programmer, but I know enough to be dangerous), but I had to make some tweaks to get things running. Alas, I now have two sets of code: 7.1 and 8.1. The 8.1 machine is just to see how things will work once I have upgraded, so I can throw that code away.

I have experimented a bit with dynamically locating objects (a bit of a pain, since I need to know the exact names of the objects, and it gets more complicated when you have a dynamic array of items), and with writing routines that perform a whole set of steps (call a menu, click on a link, fill in some of the values), and so on.
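To give an idea, a simplified version of the kind of dynamic-find helper I mean might look like the sketch below. The class name is made up, and the recognition property values (".class" = "Html.A", ".text") are just the ones I would use for an HTML link; the real routines have more error handling.

    import com.rational.test.ft.object.interfaces.GuiTestObject;
    import com.rational.test.ft.object.interfaces.TestObject;
    import com.rational.test.ft.script.RationalTestScript;
    import com.rational.test.ft.script.SubitemFactory;

    public abstract class WebHelpers extends RationalTestScript {

        // Find an HTML link by its visible text and click it, without the object map.
        protected void clickLink(String linkText) {
            TestObject[] found = find(SubitemFactory.atDescendant(
                    ".class", "Html.A", ".text", linkText));
            try {
                if (found.length == 0) {
                    throw new RuntimeException("Link not found: " + linkText);
                }
                ((GuiTestObject) found[0]).click();
            } finally {
                unregister(found);   // release the references created by find()
            }
        }
    }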

Now, I am wondering if rewriting it from scratch (library first then scripts) would be better than tweaking each library and script.

Also, RFT 8.1 seems a bit slower than RFT 7.1, and that affects things a bit. It means that I have to modify the scripts (again) to wait for a certain object to exist before continuing.
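The wait itself is just a call to waitForExistence() on the object. A rough sketch of the helper I have in mind (the timeout values are only examples):

    import com.rational.test.ft.object.interfaces.TestObject;

    public class WaitUtil {

        // Poll until the object shows up, or give up after maxSeconds.
        public static boolean waitFor(TestObject obj, double maxSeconds, double interval) {
            try {
                obj.waitForExistence(maxSeconds, interval);   // RFT re-checks at the given interval
                return true;
            } catch (Exception notFound) {                    // thrown when the wait times out
                return false;
            }
        }
    }

A script would then call something like WaitUtil.waitFor(okButton(), 30, 2) before clicking, where okButton() is whatever mapped or dynamically found object the script already uses.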

So, my questions would be:
- Do most people use dynamic objects (instead of using the object map)?
- If using an object map, would it be better to use a private or a shared object map? (I had a bad experience with XDE Tester a long time ago, where a shared object map got corrupted and I lost all the information in it.)
- Should I create routines to deal with individual objects (pass the object to a method, check whether it exists, set its value, get its current value), or would this be a waste of time? (A sketch of what I mean follows this list.)
- Do people store and use values in the registry, or use ini files (or equivalent)? I currently have a set of variables in my library file that I manually change before each run (the environment to use, the system date, etc.). (A properties-file sketch also follows the list.)
- What about a front end, where people could select which packages/tests to run, the number of iterations, the location of log files, etc.? I would be able to create such a front end (I would have to learn more Java, which would be good), but that would take time. And the Java code might have to dynamically check my project file to see which packages/scripts are available to run.
- Do you use verification points, manual verification points, or do you verify the results yourself in code (I deal with displayed messages, tables, and objects that may or may not exist)?
- What about datapools? I played with them a bit when I had a menu to select and an expected URL to compare.
- Do you store test results in a database? It would be a lot easier to manage than to store them in flat files on the LAN.
- Any other suggestions?
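On the per-object routines, this is roughly the shape I have in mind (simplified; the ".value" property is only right for plain HTML inputs, and other controls use different property names):

    import com.rational.test.ft.object.interfaces.TestObject;
    import com.rational.test.ft.object.interfaces.TextGuiTestObject;

    public class FieldUtil {

        // Does the object currently exist in the application under test?
        public static boolean exists(TestObject obj) {
            return obj != null && obj.exists();
        }

        // Fill in a text field only if it is actually there; report back whether it was.
        public static boolean setValue(TextGuiTestObject field, String value) {
            if (!exists(field)) {
                return false;
            }
            field.setText(value);
            return true;
        }

        // Read the current value through a recognition property.
        public static String getValue(TestObject obj) {
            Object current = obj.getProperty(".value");
            return current == null ? null : current.toString();
        }
    }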
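And for the environment values, a plain Java properties file could replace the variables I edit by hand. The file name and keys below are made up for the example:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class TestConfig {

        private static final Properties props = new Properties();

        static {
            FileInputStream in = null;
            try {
                in = new FileInputStream("C:\\rft\\test-env.properties");
                props.load(in);
            } catch (IOException e) {
                throw new RuntimeException("Could not load test-env.properties", e);
            } finally {
                if (in != null) {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }

        // e.g. host=test02.mycompany.com or systemDate=2009-12-15
        public static String get(String key) {
            return props.getProperty(key);
        }
    }

The library would then read TestConfig.get("host") instead of a variable I change before each run.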

I hope that I have not opened a hornet's nest here with this long-winded message. I am just trying to figure out how to make the best use of my time, and to make it easy for my teammates (and future team members) to create new automated scenarios while I create and maintain the testing infrastructure.

Here are a couple more bits of information.
I just got the new IBM book and have started reading it, but it will take me a while to go through it and apply the tips. It should be good reading over the holidays.
Also, I am using the Java language for the scripting, and the application under test is web-based (with some JavaScript, and some Java in the back end).
I have been using RFT for a couple of years now.

Thank you for your time,