I'm looking for some feedback on an automated test 'solution' (for lack of a better word) I've been developing. Here are the main ideas and characteristics.
A framework is in place to add a layer of abstraction between the automated test code commands and the test objects (GUI, DBs, files, etc.). Robot does not have this capability built in. It works fine.
A structure is defined as follows, modeled after the IEEE-829 standard as I interpret it.
A test procedure runs code that sets up the environment, operates the system under test, and cleans up the environment after the tests. A test case is the input data for the test procedure; mere cannon fodder. The goal is that one test procedure can run many test cases (using a looping construct, of course). One job of the test procedure is to compare the expected result contained in the test case with the actual result. This is quite straightforward when running test cases that have identical result formats, such as a database record. However, in some cases the same input format can cause a different output format to be verified, such as a message box appearing.
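The "one procedure, many test cases" loop can be sketched as below. This is an assumption-laden toy: `add_numbers` stands in for the system under test, and each test case is just a row of inputs plus an expected result, as described above.

```python
# Data-driven sketch: the procedure is fixed code; the test cases are
# pure data. Adding coverage means adding rows, not writing new code.

def add_numbers(a, b):
    """Stand-in for the system under test."""
    return a + b

# Each test case: inputs for the procedure plus the expected result.
test_cases = [
    {"inputs": (2, 3), "expected": 5},
    {"inputs": (0, 0), "expected": 0},
    {"inputs": (-1, 1), "expected": 0},
]

def run_procedure(cases):
    """One procedure loops over many cases, comparing expected vs. actual."""
    results = []
    for case in cases:                          # the looping construct
        actual = add_numbers(*case["inputs"])   # operate the system
        results.append(actual == case["expected"])
    return results
```

The comparison step is trivial here because every case has the same result format; the next example shows the harder situation the post raises.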
Here is an example:
Add a customer:
The procedure uses the inputs from test case n and adds the customer. Next, the test procedure queries the DB and verifies the record was inserted correctly. The procedure then uses test case n+1 to add another customer. This time, a mandatory field is left blank, so the expected result (contained in the test case info) is a pop-up message. Obviously, comparing a database record and comparing a pop-up message demand totally different code.

So my question here is: should I bother adding features to my test 'solution' that will provide this flexibility? Almost anything can be coded, but is the complexity worth it? Another possible feature to have in this solution is the ability of the test procedure to use any special conditions that one or more test cases require. I think most people would group unique test cases together, thus requiring a separate test procedure. However, I believe it would be beneficial to keep the test procedures to a minimum, thus reducing the maintenance of the automated test code. (Remember, in this 'solution' test cases are not automated test scripts.)
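One way the flexibility in question could be coded is to tag each test case with its expected-result type and dispatch to a matching verifier. This is a sketch of that idea only; the fake DB, fake pop-up, verifier names, and message text are all invented stand-ins, not the poster's system.

```python
# Sketch: one procedure handles different result formats by dispatching
# on an "expect_type" field in the test case data.

fake_db = []           # stand-in for the customer table
fake_popup = [None]    # stand-in for the last message box shown

def add_customer(record):
    """Stand-in for operating the system under test."""
    if not record.get("name"):                       # mandatory field blank
        fake_popup[0] = "Name is a mandatory field"  # hypothetical message
        return
    fake_db.append(record)

def verify_db(expected):
    return expected in fake_db

def verify_popup(expected):
    return fake_popup[0] == expected

# Each result format gets one verifier; the procedure stays generic.
VERIFIERS = {"db_record": verify_db, "popup": verify_popup}

def run_case(case):
    add_customer(case["input"])
    return VERIFIERS[case["expect_type"]](case["expected"])

cases = [
    {"input": {"name": "Ann"}, "expect_type": "db_record",
     "expected": {"name": "Ann"}},
    {"input": {"name": ""}, "expect_type": "popup",
     "expected": "Name is a mandatory field"},
]
```

The cost of the flexibility is then one small verifier per result format rather than one whole procedure per unusual case, which speaks to the maintenance trade-off raised above.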
Your thoughts, comments, and experiences are welcome.
Re: AutoTest Complexity
That would depend on what you are after. Often I'd code against the API directly and leave the entire GUI out, since that's just way too much to deal with. But since you are accessing the GUI, you are already dealing with it. So I'd validate the GUI, except for the following...
Provided that you are doing localization in the future, and Robot handles DBCS just fine, then I'd make a GUI reference file. The reason is:
When the localization effort happens, have them localize the GUI reference file as well. That way, you get free testing! You can even have them run the validation tool.
Not only that, it will pull strings out of your code and lessen your maintenance effort. In fact, talk with the development group and have the GUI developers work with you to maintain the file! (Now you get instant testing, and developers involved.)
Pulling all product-specific information out of the test code is a very good thing, since it promotes reuse and limits potential errors in the code. It's the same mental process developers follow when they pull the messages out of the code and load them into a message dialog box at runtime. If they are doing that right now, you can use the exact same file, and the only thing you have to do is read the file to make sure the text makes sense.
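The shared-file idea can be sketched as below. The key=value file format and the message keys are hypothetical assumptions; the point is only that the test reads the same file the product reads, so a changed or localized string never requires a test-code edit.

```python
# Sketch: tests load expected strings from the same resource file the
# product uses, instead of hard-coding literals in the test code.
import io

# Stand-in for the shared resource file (format is a made-up example).
RESOURCE_TEXT = """\
MSG_MANDATORY_NAME=Name is a mandatory field
MSG_SAVED=Customer saved
"""

def load_messages(fp):
    """Parse key=value lines into a lookup the test code can use."""
    messages = {}
    for line in fp:
        line = line.strip()
        if line and "=" in line:
            key, _, text = line.partition("=")
            messages[key] = text
    return messages

messages = load_messages(io.StringIO(RESOURCE_TEXT))
# Test code now compares against messages["MSG_MANDATORY_NAME"]
# rather than a hard-coded literal.
```

When localization swaps in a translated file, the same tests validate the translated GUI for free, which is the reply's main point.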