Automated Testing tool bugs
I am writing an article covering automated testing tool bugs and the importance of actually testing the testing tool. I have a few examples from my own experience, but I am looking for additional ones to strengthen the case for testing the testing tool. If it's an interesting enough bug, I will include it in my article (along with your reference, of course). Also, could you describe how the bug was uncovered, its effect, and the process/turnaround to fix it?
Please provide your examples without listing the vendor/tool name, as this is not an article about pointing fingers at any one vendor; I have seen bugs with various vendor-provided tools over the course of my testing career.
Re: Automated Testing tool bugs
Just a few days ago I was implementing an automated test case that requires using the key sequence Shift-Control-F, which is a shortcut key for the application. I wanted to specifically test that the shortcut would work for different pages in the application.
I cannot get the tool to do this! The key sequence just does not get pressed. The tool supports a couple of different ways to accomplish this:
- a function that should do the combined key set
- individual commands like keyDown() and keyUp() that need to be combined in the right sequence to accomplish the desired result
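For illustration, here is a minimal sketch of the keyDown()/keyUp() approach, using java.awt.Robot as a stand-in API since the tool isn't named: the modifiers have to be pressed before the letter and released in reverse order for the shortcut to register.

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.KeyEvent;

    public class ShortcutDemo {
        // Press Shift, then Control, then F; release in reverse order.
        public static void sendShiftCtrlF(Robot robot) {
            robot.keyPress(KeyEvent.VK_SHIFT);
            robot.keyPress(KeyEvent.VK_CONTROL);
            robot.keyPress(KeyEvent.VK_F);
            robot.keyRelease(KeyEvent.VK_F);
            robot.keyRelease(KeyEvent.VK_CONTROL);
            robot.keyRelease(KeyEvent.VK_SHIFT);
        }

        public static void main(String[] args) throws AWTException {
            sendShiftCtrlF(new Robot());
        }
    }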
After searching various forums, I found a bug listed in the vendor's public bug database that matched the behavior I was observing. Unfortunately, it is a rather old bug, the listed workaround doesn't work for me, and a fix is not scheduled for any target release! So much for automated regression testing of my application's shortcut key sequences!
One nice thing is that one of the people reporting the issue provided a set of files and test scripts that demonstrate the bug and will serve as regression tests for the tool once it is fixed.
Re: Automated Testing tool bugs
As the author of an automated testing tool, I hope to contribute to your article from the vendor's point of view :-)
Automated testing tools contain bugs just like any other piece of software; they are, after all, just programs written and tested by humans. Rather than the number of bugs, a more important factor in any tool evaluation should be how the vendor responds to bug reports and how fast bug fixes are delivered.
From my experience there are roughly three types of bugs:
1. Functional bugs caused by a mistake in the code. The producer is fully responsible for them and should make a maximum effort to minimize their number through proper testing. I could give you loads of examples, but they're not that interesting; they are, after all, just programmers' mistakes, and they can usually be identified and fixed very quickly.
2. Environmental bugs. These break functionality of the software only on certain environments or combinations of configuration parameters. I guess most experienced Windows users have at least once in their life ended up chasing registry keys and system settings in a desperate effort to make something work. These bugs are often caused by use of the tool in an unexpected or untested environment, or by reliance on a convention where either no written specification exists or the existing ones are inaccurate. Tool vendors are only partially to blame, because no one has the resources to test the software on all possible environments. This is especially true for cross-platform tools, where the environment matrices are huge. Vendors should, however, be very flexible in debugging such issues and be prepared to work with the reporting user in an efficient way on bug resolution.
I once received a bug report from a user that the program failed to start on his environment. I inspected the stack trace and found that he was running a development version of Java with the version string "1.6.0-oem-b104". As I had never expected anyone to run the tool on an unofficial version of Java, it crashed while parsing this rather unusual format. The fix was easy and I delivered a new build to the user the same day.
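The report above doesn't show the actual fix, but a tolerant parser is straightforward. Here is a minimal sketch, assuming the goal is just to extract the numeric part of the version and ignore any vendor suffix such as "-oem-b104":

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class JavaVersion {
        // Matches the leading numeric part and ignores anything after it,
        // such as "-oem-b104" or "_12-b04".
        private static final Pattern VERSION =
            Pattern.compile("^(\\d+)\\.(\\d+)(?:\\.(\\d+))?");

        public static int[] parse(String versionString) {
            Matcher m = VERSION.matcher(versionString);
            if (!m.find()) {
                // Fall back to a safe default rather than crashing at startup.
                return new int[] {1, 0, 0};
            }
            int major = Integer.parseInt(m.group(1));
            int minor = Integer.parseInt(m.group(2));
            int micro = m.group(3) != null ? Integer.parseInt(m.group(3)) : 0;
            return new int[] {major, minor, micro};
        }

        public static void main(String[] args) {
            int[] v = parse("1.6.0-oem-b104"); // the string from the bug report
            System.out.printf("major=%d minor=%d micro=%d%n", v[0], v[1], v[2]);
        }
    }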
This was an easy case. Bugs in this category are, however, difficult to reproduce in general, and close cooperation with the reporting user is necessary. One of my users reported that he had succeeded in connecting the tool to an Apple Remote Desktop (ARD) server running on a 10.4 PPC Mac, but the desktop colors were incorrect. As he was patient enough to run a few diagnostic builds and retest a few combinations, we resolved the issue within a week. The Mac server actually used big endian order for pixel data. Though this complied with the protocol, the vast majority of other environments use little endian order. The testing tool contained code to handle big endian, but it had never been exercised because I couldn't find any platform using it. In fact, the 10.4 PPC Mac remains the only one known to me that behaves this way.
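None of the tool's code is shown above, but the failure mode is easy to illustrate. A minimal sketch, assuming 32-bit pixels arriving as raw bytes: decoding with the wrong byte order swaps the color channels, which is exactly the "incorrect desktop colors" symptom.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class PixelDecoder {
        // Decodes raw 32-bit pixel data received from the remote desktop.
        // The byte order must match what the server sends: most servers
        // use little endian, but a big-endian server (as in the report)
        // is valid under the protocol and must be handled too.
        public static int[] decodePixels(byte[] raw, boolean bigEndian) {
            ByteBuffer buf = ByteBuffer.wrap(raw);
            buf.order(bigEndian ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN);
            int[] pixels = new int[raw.length / 4];
            for (int i = 0; i < pixels.length; i++) {
                // Reading with the wrong order here reverses the bytes of
                // every pixel, so the color channels come out scrambled.
                pixels[i] = buf.getInt();
            }
            return pixels;
        }
    }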
3. The last and fortunately smallest category are bugs and undocumented behavior changes in third-party products the tool depends on. I once accidentally overrode a method of a standard GUI object in Java. As it worked fine, I didn't notice anything. Later on, users started to report that one of the GUI dialogs displayed empty, without any content. I couldn't reproduce it for a month or so, until I upgraded Java to the latest version. I discovered that though the programming interface remained the same, the Java vendor had changed the internal implementation, causing the GUI object to behave differently. The tool simply worked with Java up to 1.6.0_11 and failed with Java 1.6.0_12 and higher. As I could implement a workaround in my code, I was able to fix the issue within a few days of reproducing the bug.
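The post doesn't identify the overridden method, so the following is a purely hypothetical reconstruction of this failure mode in Swing: a subclass unknowingly overrides a lifecycle method (addNotify() is used here only for illustration) and omits the super call, which is harmless until a later Java release starts doing real work in that method.

    import javax.swing.JDialog;

    // Hypothetical reconstruction, not the tool's actual code.
    public class ReportDialog extends JDialog {
        // Written as a local setup hook without realizing it overrides
        // a superclass method (no @Override, so the compiler stays
        // silent). Omitting super.addNotify() skipped superclass setup
        // that a newer Java release began relying on, so the dialog
        // could come up blank.
        public void addNotify() {
            initCustomPeerState();
        }

        private void initCustomPeerState() {
            // tool-specific setup (placeholder)
        }
    }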
This case was an easy one. There's not always a chance to work around such an issue in your own code, and you may end up dealing with the dependency's vendor, which can be a time-consuming and sometimes rather desperate experience.
Robert Pes
T-Plan Robot, an open source, cross-platform automated testing tool based on remote desktop technologies
http://www.vncrobot.com
Re: Automated Testing tool bugs
Good discussion and explanation.
To help the vendor track down an issue, yes, we have to give them as much information as possible (environment, configuration, and so on) - the categories you provided are a good start.
Here are a few examples you would probably categorize as environmental:
At one point we installed a tool on the Micron PC used for our testing activities, only to have it blue-screen.
It turned out that the tool we wanted to use for testing wasn't compatible with the Micron PC. To solve the problem, we actually had to upgrade the Micron PC's BIOS.
The vendor-provided readme file was updated accordingly.
A vendor migrated their product to support Linux in addition to the operating systems their tool already ran on. Lots of problems were found there; we ended up becoming the beta testers who helped them solve their issues.
Here is one that shows the importance of testing tool upgrades:
We once found that a new tool upgrade was no longer compatible with the e-mail software package used company-wide.
The vendor had decided to switch e-mail packages, and during upgrade testing we caught the issue (it wasn't documented in the readme file).
Good thing, too, because otherwise the upgrade would have rendered this feature of the tool useless: we relied heavily on e-mail notification through our specific e-mail solution, for example whenever a defect was generated.
The list goes on.
I'd be interested in hearing about additional problems people have seen with vendor-provided tools.
Re: Automated Testing tool bugs
In an effort to build a framework and environment for a new application we are developing, I ran across a few bugs in the tool I employ. For one, the developers decided halfway through the dev cycle to switch to Telerik RadControls, and the tool did not interface well from then on. Our application makes use of many different types of text boxes and dropdowns, as well as grids and tables. The automation tool was unable to see or memorize the list boxes, and the dropdown items were not part of the object itself; in fact, the list items change depending on the selections made in previous dropdowns. I was forced to create individual scripts for each dropdown and each selection, and believe me, it was quite painful.
After consulting with the tool's support group, I finally convinced them that there was indeed a problem, and they escalated the issue to their development team. After a week or so, I received a "special" build which solved my dilemma. I then threw away all of my selection scripts and went to one reusable script to handle all conditions. This reusable script performs a routine that verifies the proposed selection is in the ListBox (a sketch of the idea appears below); I had to write the check myself because, although the tool is based on VBScript, it does not support many VBScript functions such as InStr.
Unfortunately, the "special" build had its own set of defects, for which I continue to send bug reports on a regular basis. Maybe I should be on their payroll too. All in all, they have done a decent job of honoring their business commitment. But next time I think I'll stick with the mainstream automation products.
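As a hypothetical reconstruction (written in Java rather than the tool's VBScript dialect, which isn't shown), the reusable check might look like this:

    import java.util.List;

    public class DropdownHelper {
        // Returns the index of the first item containing the wanted text,
        // or -1 if no item matches. Mirrors the hand-rolled InStr-style
        // check the tool's script engine made necessary.
        public static int findItem(List<String> items, String wanted) {
            for (int i = 0; i < items.size(); i++) {
                if (items.get(i).contains(wanted)) {
                    return i;
                }
            }
            return -1;
        }

        public static void main(String[] args) {
            List<String> items = List.of("Alabama", "Alaska", "Arizona");
            System.out.println(findItem(items, "Alas")); // prints 1
        }
    }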
Success is the ability to go from one failure to another with no loss of enthusiasm.
~ Winston Churchill ~