  1. #1

    General Question: First steps to recording

    User Jordan Gottlieb (jgottlieb@qualitechsolutions.com) posted:

    I've gotten my hands on a tool that will allow me to record and playback (I'm still waiting on the evals from Rational).

    I've gone through its tutorial, and like most record/playback (r/pb) tools, it lets me create reusable scripts. For example, I can record logging into the app and then reuse that "Login" script in all my other scripts.

    Ok, so I start thinking along the lines of planning: how would I want to break this app down for testing? I can do it by function; that makes sense to me. So I'm thinking I'm going to create all these mini-scripts that I'll use in bigger scripts.

    So should all these mini-scripts start in the same place? Should the mini-scripts for 'function 50' and 'function 74' start at the main screen, even though I have to dig three or four levels deep to get to them? And should they all end back at the main screen, so I can easily place these mini-scripts into a larger script?


  2. #2

    Re: General Question: First steps to recording

    User Carl Nagle (Carl.Nagle@sas.com) posted:

    There is a lot more to it than just that. For instance, you don't want to use the hardcoded recognition methods you captured during your record sessions. You want to extract those out into CONSTANTS or VARIABLES using some application-mapping mechanism and reference those in your scripts. That way, when things change, you don't have a billion scripts that have to be modified; you only have to modify the map (the constants, etc.).
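
    To make the mapping idea concrete, here is a minimal sketch in Python. The logical names, recognition strings, and helper functions are all hypothetical; a real tool like Rational Robot has its own app-map format and playback API.

    # app_map.py -- one central map from logical names to recognition strings.
    # Everything below is illustrative; real recognition strings would be
    # whatever your record/playback tool captured.
    APP_MAP = {
        "LoginWindow":   "Type=Window;Caption=Acme Login",
        "UserNameField": "Type=EditBox;Name=txtUser",
        "PasswordField": "Type=EditBox;Name=txtPass",
        "OkButton":      "Type=PushButton;Text=OK",
    }

    def recognize(logical_name):
        """Scripts call this instead of embedding recognition strings."""
        return APP_MAP[logical_name]

    def type_into(name, text):
        print("type %r into %s" % (text, recognize(name)))  # stand-in for the tool's API

    def click(name):
        print("click %s" % recognize(name))                 # stand-in for the tool's API

    # A script references only logical names. If the OK button's caption
    # changes, you edit APP_MAP once; no script has to be touched.
    def login(user, password):
        type_into("UserNameField", user)
        type_into("PasswordField", password)
        click("OkButton")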

    This is really quite a loaded question. There are many options available, and no one can say which is the best; too many variable factors are involved.

    There are archives with answers to this question (are the archives still around?). There was also a piece I sent to a more specific group but not to this one; I'll copy it here for others with similar interests.

    The essence of the answer to your question is something like this: your lowest-level script snippets should probably not all start and end at the same place. Instead, each should have a predefined, known application start state and a predefined, known expected end state. You can then call these various script elements in an order that ensures the previous script's end state matches the next script's start state.
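
    As a minimal sketch of that idea (the snippet names and states are made up; nothing here is tool-specific):

    # Each low-level snippet declares a known start state and end state.
    SNIPPETS = {
        "Login":       {"start": "LoginScreen",  "end": "MainScreen"},
        "OpenOrders":  {"start": "MainScreen",   "end": "OrdersScreen"},
        "CloseOrders": {"start": "OrdersScreen", "end": "MainScreen"},
        "Logout":      {"start": "MainScreen",   "end": "LoginScreen"},
    }

    def build_script(sequence):
        """Chain snippets, verifying each end state matches the next start state."""
        for prev, nxt in zip(sequence, sequence[1:]):
            if SNIPPETS[prev]["end"] != SNIPPETS[nxt]["start"]:
                raise ValueError("%s ends at %r but %s starts at %r" % (
                    prev, SNIPPETS[prev]["end"], nxt, SNIPPETS[nxt]["start"]))
        return sequence

    # Legal chain: Login -> OpenOrders -> CloseOrders -> Logout
    print(build_script(["Login", "OpenOrders", "CloseOrders", "Logout"]))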

    There is a lot out there on this type of testing. The following is a post I made a day or two ago elsewhere. The references to TestGenerator and the DDE refer to the Rational Robot Data Driven Engine made available via: http://groups.yahoo.com/group/RobotDDEUsers

    -----Original Post from SQA Suite Team Test Users-------------
    Subject: Intelligent Test Automation

    This is probably worth a read; I would be interested in any comments. Has anybody else seen it before?
    (Originally seen in Software Testing & Quality Engineering magazine, September/October 2000.)

    -----Response to original poster and RobotDDEUsers----------------------
    From: Carl Nagle [mailto:Carl.Nagle@sas.com]

    First, a VERY quick synopsis of what the article relates:
    Intelligent Test Automation suggests that you should not only automate the "hands" of a user, but also the "brain".

    To do this, you should model the AUT's behavior and then automate the use of that model. The AUT's behavior is defined as what actions are valid, when, and what response is expected.

    By having a good model, and the automation pieces that provide the actions and the validations for the AUT's behavior, Intelligent Test Automation can provide endlessly variable dynamic automated testing of your AUT that traditional static automated testing could never hope to accomplish.

    Second, a quick interpretation of the technology:
    The article seems to suggest that automating tests for an application is easier with this method than with traditional automation methods. A true understanding of the mechanisms involved shows that this is not so: an automator must still provide all the instructions, scripts, code snippets, and tools necessary to automate every aspect of the application that will be automated.

    But these items are not created in the linear fashion traditionally found in static record/playback automation. Instead, they are provided as individual action-command implementations--isolated pieces of action and response code--as provided for by forms of data-driven automation, including the Rational Robot data-driven engine (DDE) we have developed at SAS.

    The "model", then, is nothing more than state information for each available action command. Each command must specify the initial application state expected (start state), and what application state results after the command has completed (end state). A processor can work with this model and piece together a myriad of dynamic tests based on the information in the "model". It does this by matching up start states with end states and any number of guiding input parameters provided by the tester. I will refer to this as dynamic state-based automation, or simply, state-based automation.

    How does this tie in with our DDE and TestGenerator?
    Dynamic state-based automation can be harnessed using our existing data driven framework. The framework itself and the test tables it uses do not need any modification. What is necessary is providing the start state and end state information for those tables we wish to use for state-based automation. Then a processor can build an endless variety of test suites to feed to the DDE for execution.

    The TestGenerator program already prompts users to specify the "application context" for each command they use in their test tables. This "application context" can be considered the equivalent of the command's start state, and it is permanently stored with the command in the project database. TestGenerator need only prompt for end-state information in a similar fashion. That would complete the database, making dynamic state-based automation possible with our current DDE framework.

    A summary:
    The concept of model-based or state-based testing with our DDE and TestGenerator is very exciting! We already have a framework and tools in place that can be used to implement this valuable test automation technology.

    The DDE framework is currently used to build static automated tests. However, the elements used for our static automation--the test tables, scripts, and component functions--are the same elements necessary for state-based automation. Thus, while we develop our static automation capabilities with TestGenerator, we will also build the "model" needed for "Intelligent Test Automation".

    Note, though, that static automation tests are still essential even when implementing state-based testing, because state-based automation does not necessarily guarantee that a known path can or will be traversed through the AUT. A known path is necessary for many types of regression tests, including the smoke tests that should first verify the application is ready for dynamic state-based testing.

    Some add-on technobabble for DDE enthusiasts:
    (The faint of heart should stop right here!)
    One of the most valuable features of DDE automation is that it maximizes reusability. Test tables are meant to be reused as much as possible to reduce test maintenance and code duplication.

    This can produce test tables that are HIGHLY reusable yet yield different results or application states at execution time. An example would be a suite for application Signon: the same Signon table is reused for both valid and invalid signon attempts, but the resulting application state is not always the same. With valid credentials, Signon succeeds; with invalid credentials, the application responds with one or more error states.

    State-based testing cannot directly deal with this scenario, since an action is expected to produce one, and only one, expected end state. So how do we deal with it?

    The easiest and most effective means of "fixing" this while preserving our reusability is to simply create new commands (wrappers) that invoke our existing commands but produce a single expected end state.

    For our Signon example above that means we would produce wrapper commands like:

    SignonAsUser <success expected>
    InvalidUserSignon <invalid user error expected>
    InvalidPasswordSignon <invalid password error expected>

    Each of these commands would have the same start state, but each would have a unique end state--and only one end state. (The "Signon" command itself, which would be invoked by each of the commands above, might have a start state but no end state, since its end state is not unique.)
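
    In sketch form (Python used as pseudocode; expect_state and the state names are hypothetical, not DDE calls):

    def signon(user, password):
        """Reusable core command: drives the signon screen. It claims no single
        end state, because the outcome depends on the credentials supplied."""
        pass  # tool-specific playback calls would go here

    def expect_state(state):
        """Hypothetical helper: verify the AUT has reached the given state."""
        print("verify application is in state %r" % state)

    # Wrappers: each reuses signon() but asserts exactly one expected end state,
    # so each can carry unambiguous state information in the model.
    def signon_as_user(user, password):           # start: SignonScreen, end: MainScreen
        signon(user, password)
        expect_state("MainScreen")

    def invalid_user_signon(user, password):      # start: SignonScreen, end: InvalidUserError
        signon(user, password)
        expect_state("InvalidUserError")

    def invalid_password_signon(user, password):  # start: SignonScreen, end: InvalidPasswordError
        signon(user, password)
        expect_state("InvalidPasswordError")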

    Lastly, it is important to note that not all possible application states are readily apparent, and some of the less apparent ones may need to be considered. For example, the "Signon" command, as well as the three wrapper commands mentioned above, appears to have a single start state: the signon screen waiting for input. Yet what about these possible signon start states?

    1) User not logged on
    2) User already logged on another session
    3) User already logged on, but without Admin privileges

    How does "Signon" respond to these scenarios? Each unique response will require its own command and the appropriate state information *IF* it is to be automated using state-based automation techniques. And we don't have to automate EVERYTHING to this degree.



