Evaluation - Newbie Questions
The company I work for is thinking about trying out Compuware tools, but the demo scared me. I like the feature where the test data is created for you, but the instructor told me that if the compared data did not match the original, the check would fail and you would have to dig through the log to find out whether it was a true failure or not. I asked her about masking, and about writing custom functions to compare values that don't match exactly or to verify the outcome of a calculation, and she said it could not be done. I really don't want to sift through hundreds of test cases just to find out the failures are all bogus. Is there a solution to this?
Re: Evaluation - Newbie Questions
Of course it can be done. Maybe not if you use something daft like a text check, but you can pull information from the screen and compare it dynamically using your own customised code.
code:
----------------------------------------
Attach "A Window"

// Store the content of the Date edit box in a variable called szDate
szDate = CtrlText ( EditFind ( "Date" ) )

// Dynamically build today's date (including possible leading zeros)
szToday = PadStr(Day(), 2, "0", "r") + "/" + PadStr(Month(), 2, "0", "r") + "/" + Str(Year())

// Compare the two and write the result to the log
If szDate = szToday Then
    UserCheck ("Date Check", 1, "Displayed Date is correct")
Else
    UserCheck ("Date Check", 0, "Displayed Date is NOT correct. Expected [" + szToday + "]. Actual [" + szDate + "]")
Endif
----------------------------------------
This is just an example using a date. There is no reason why you can't use the same approach for numbers, strings, pretty much anything you like.
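The same pattern covers the arithmetic checks the original poster asked about: read the values off the screen, do the maths in the script, and log the result yourself. The sketch below is illustrative only: the window and control names ("Order Window", "SubTotal", etc.) are made up, and it assumes a Val() function for string-to-number conversion; check your QARun language reference for the exact conversion function your version provides.

code:
----------------------------------------
Attach "Order Window"

// Read each field as text, then convert to a number
nSubTotal = Val ( CtrlText ( EditFind ( "SubTotal" ) ) )
nTax      = Val ( CtrlText ( EditFind ( "Tax" ) ) )
nTotal    = Val ( CtrlText ( EditFind ( "Total" ) ) )

// Verify the calculation itself, not just the literal text
If nTotal = nSubTotal + nTax Then
    UserCheck ("Total Check", 1, "Total adds up correctly")
Else
    UserCheck ("Total Check", 0, "Total is wrong. Expected [" + Str(nSubTotal + nTax) + "]. Actual [" + Str(nTotal) + "]")
Endif
----------------------------------------

Because you write the pass/fail message yourself with UserCheck, a failure tells you exactly what was expected and what was found, with no need to sift through the log guessing whether it was a bogus mismatch.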
All it means is that you have to add a little programming yourself instead of expecting the tool to do everything for you.
It doesn't sound like your Compuware instructor is trying very hard to sell more software!!
Hope this helps.
Re: Evaluation - Newbie Questions
I think what the Compuware rep might have been thinking is that you wanted to access elements of the check programmatically to determine why it failed. For instance, you have a form check that is performed on an entire web page. The form check fails because of one property of one control. You then want to verify, by accessing the check via QARun commands, why the failure occurred, so that you can report it in one line of your log and not have to look through the results in detail.
You probably could create a complicated function that uses a combination of QARun commands and API calls to determine all the properties of all the controls on the page and then compare them to a list of expected properties and controls read from a test data file. After hours of testing, debugging and losing half your hair, congratulations: you have just written your very own automated testing app within an automated testing app.
Or... you could double-click the check failure in the log and bring up the details of why the check failed. I have seen instances where it appears that a check has failed for no reason, but on closer inspection you can usually find the cause. When creating checks, you need to pay close attention to what you're checking. Verify that you are not including areas that will change and, when excluding areas, that you are excluding enough. Keep in mind what will or should happen when you refresh the form or page.
I think you will probably find that, while the rep might have spent a fair amount of time on checks, you will use them less often than you expect. As mooreaz stated, you will probably be creating more dynamic tests that pull data directly from the screen. You might also want to take a look at makeChecks.