I work in an automation group that has (for the most part) achieved end-to-end GUI automation for our product. There is one main problem we encounter repeatedly, however, and most of our solutions have not worked out.
Our product contains many information dialogs (and, during the development phase, unexpected error dialogs and debug dialogs). We use QARun and SilkTest for automation, and we are having a hard time dismissing these "unexpected" windows so that we can carry on with the test. The natural argument is that we don't want to take down these windows, because they represent errors in the product. But early in the development cycle these debug windows may not be scheduled to be fixed for a month, and our regression tests still have to cover the remainder of the product.
Our first solution was to attempt to handle the windows in our exception handlers, but neither Silk nor QARun seems to have the (working) functionality to match windows against a library of examples and decide which button to press based on a combination of the window caption and window text.
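For what it's worth, the "library of examples" logic itself is simple to express. Here is a sketch in Python (hypothetical, since QARun and SilkTest use their own script languages, so read this as pseudocode for the dispatch logic, not tool-specific code); the rule patterns are made-up examples:

```python
import re

DIALOG_RULES = [
    # (caption pattern, text pattern, button to press; None = real error)
    (r"Debug",       r".*",                 "OK"),
    (r"Information", r"saved successfully", "OK"),
    (r"Error",       r"out of memory",      None),
]

def decide_button(caption, text):
    """Return the button to press for a known dialog, or None if the
    dialog should be treated as a real error and handed to recovery."""
    for cap_pat, text_pat, button in DIALOG_RULES:
        if re.search(cap_pat, caption) and re.search(text_pat, text):
            return button
    return None  # unknown dialog: do not dismiss it blindly
```

The table can live in an external file so testers can add new dialogs without touching script code.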
Our next solution was more of a "kill-em-all" approach, but then we miss or accidentally take down real errors.
So I was thinking of a more advanced approach, perhaps attempting to create my own virtual user that would "learn" which dialogs to dismiss or accept at certain points in script execution. This user could be modified via instructions or an ini file, and would basically be a neural net at its base. Has this been attempted anywhere? Or does anyone have a better solution that has worked for handling many different dialogs that can (theoretically) appear at any time, in any order, during script execution?
Any comments are appreciated.
"What we elect to call imagination is mere combination of things not heretofore combined." - Frank Norris
You've found trouble in what appeared to be paradise, huh?
What you're experiencing is reality intruding upon your best-laid plans. This is the part that all the automation tool sales reps fail to tell you about. As you've discovered, capture/playback test cases - and their marginally better equivalents using sequences of window.method calls - are quite sensitive to unexpected conditions in the AUT.
To get around this you must have two things: code to automatically accommodate variability AND robust recovery routines. The former involves careful handshake code at every step to identify unexpected conditions and initiate alternative actions. The latter thoroughly documents the environment at the point of failure and then restores the application to a known base state for the next test. Both of these are very doable using any class-based language. What the tool vendors all fail to supply is an architecture for implementing it. They only supply the words (like a dictionary salesman) and leave the task of writing the next "best seller" up to you.
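The shape of that architecture can be sketched in a few lines. This is an illustrative Python skeleton, not anything the tools give you; document_state and restore_base_state stand in for whatever your tool or harness provides:

```python
def run_suite(tests, document_state, restore_base_state):
    """tests: list of (name, callable). Runs every test even when one
    fails: on failure, capture the environment, then restore the AUT
    to a known base state before the next test. Returns [(name, status)]."""
    results = []
    for name, test in tests:
        try:
            test()
            results.append((name, "pass"))
        except Exception as exc:
            document_state(name, exc)   # document the point of failure
            restore_base_state()        # recovery: back to base state
            results.append((name, "fail"))
    return results
```

The key design point is that recovery belongs in the harness, not in each test, so one unexpected dialog cannot sink the rest of the run.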
> Our next solution was more of a "kill-em-all" approach, but then we
> miss or accidentally take down real errors.
As an interim solution this approach has considerable merit, because it addresses the recovery issue. What you need to do during this attempt is to inventory all open windows - especially the one having input focus (it most likely contains the error) - then decide whether you want to terminate the run, or document the situation, call recovery, and continue on.
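That inventory-then-decide step might look like this sketch (Python as pseudocode; the window list is a stand-in for whatever your tool's API reports, and the captions in FATAL_CAPTIONS are assumptions you would tune to your AUT):

```python
FATAL_CAPTIONS = ("Application Error", "Access Violation")

def triage(windows):
    """windows: list of dicts with 'caption' and 'has_focus' keys.
    Returns ('terminate', window) when the focused window looks fatal,
    otherwise ('recover', window): document it, recover, and continue."""
    focused = next((w for w in windows if w["has_focus"]), None)
    if focused and any(f in focused["caption"] for f in FATAL_CAPTIONS):
        return ("terminate", focused)
    return ("recover", focused)
```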
> So I was thinking of a more advanced
> approach, perhaps attempting to create...
This idea is off the deep end, but hey, don't let that stop you. Note that Silk's classes all resolve to some application object, so essentially a "user" class can't be created.