Does WR have a quality problem? An opinion...
I offer this topic in the hope that it will be of interest and spark constructive debate among the WR board here, rather than as a total rant against WinRunner (although I know it reads like one in places below!)
I've worked extensively with SilkTest, and to a lesser extent Visual Test, over a number of years, and am becoming increasingly frustrated by what I believe is a serious quality problem in WR. I find WR very user-friendly for newcomers and good at the low end (basic record/playback & minor modification), and TestDirector is a great product, but...
Here's my case on WR:
* Functional problems - we've hit several 'show stoppers' in various areas of WR which would make you say "it couldn't _possibly_ have been tested before shipping" (as a QA Engineer, you know how this looks!), plus numerous minor problems.
Our company has opened over 30 service requests on WR in the last year, and that is with experienced automation engineers who know their way around the tool.
Also, support for Netscape is woeful compared to IE - we are ending up with scripts full of conditional statements that depend on the browser. Our managers are lulled into dreams of 'one script, multiple browsers, multiple OSs' by the salesmen, and then this happens?
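To give a flavour of what I mean, here is a sketch of the kind of per-browser branching that is creeping into our scripts. `get_browser_type` is a hypothetical helper (substitute however your framework detects the browser under test); the window and object names are made up for illustration:

```tsl
# Hypothetical sketch of a browser-conditional workaround.
browser = get_browser_type();   # hypothetical helper, not a built-in

if (browser == "IE")
{
    # IE recognises the link directly
    web_link_click("Submit Order");
}
else
{
    # Netscape needs a different object description, so we fall back
    # to clicking coordinates inside an image object
    obj_mouse_click("SubmitOrderImg", 10, 10, LEFT);
}
```

Multiply that by every fiddly object in the app and "one script, multiple browsers" starts to look pretty hollow.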
If you can't trust your test tool, can you trust your tests, your own product? Opinions?
* Language problems - minor, but annoying: very general, uninformative error codes; inconsistencies (why does an array resulting from 'split' start at 1 rather than at 0 like every other TSL array?); and various weirdness around variable definition and scope, as documented here and on the egroups WR list.
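For anyone who hasn't been bitten by the 'split' inconsistency, a quick sketch (the variable names are mine):

```tsl
# split() fills the array starting at index 1 (awk-style), even though
# TSL arrays are otherwise accessed from index 0.
list_str = "red,green,blue";
n = split(list_str, colors, ",");   # n is the element count, 3 here

# colors[1] is "red"; colors[0] is simply not set.
report_msg("first colour: " & colors[1]);
```

Not a disaster on its own, but exactly the kind of off-by-one trap that burns new script writers.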
* Documentation - weak, narrow in scope (particularly about sustainable methods of implementing WR in the real world), and errors remain uncorrected from release to release. Check out the 'win_exists' documentation example:
if (win_exists ("Order PrintOut")= E_OK) # Surely should be '==' to test for equivalence? Doh!
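For the record, the corrected version of that documented line would read as below (assuming the window name from the manual's example; a single '=' is an assignment in TSL, so the documented form doesn't test anything):

```tsl
# '==' tests for equality; win_exists returns E_OK when the window is found
if (win_exists("Order PrintOut") == E_OK)
    report_msg("window found");
```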
* Tech Support - I hear a lot about how great the WinRunner tech support is and am always amazed! On this side of the pond, UK WR tech support is weak - the techies don't even question whether a reported problem is a bug; they immediately start looking for a workaround to get you off the line efficiently. OK, fast response is good, but we're ending up with a test suite largely made up of workarounds (AKA hacks) for unreliable/inconsistent interaction with our web app.
The whole process of taking a product patch from Mercury is uncertain and dangerous, as inevitably nobody seems to know the dependencies between the various patches, or incorrect/corrupt files are included in the zips downloaded from the FTP site. Wherever possible we stick with the base release of a product and patch only when absolutely necessary. I know all test tools have their problems, but again, can we have confidence in our test tools, please?
Am I a lone voice spouting this mad opinion, or has anybody else run into the same problems? Please discuss...
Thanks and best regards,
PS For the record, we are not working against any funky, 'out-there' technology - dead-standard WR 6.02 in a Web environment.
Re: Does WR have a quality problem? An opinion...
I use WinRunner, LoadRunner, and TestDirector 6.XX. I agree with you wholeheartedly on the problems the entire suite of tools has. I talked with one of their engineers at great length and was told about the testing matrix which runs 24 hours a day for 4 weeks prior to a release. I have also dealt with customer support (when I could find somebody who spoke English in a manner I could understand). Even with all of that, I still like the tools. Take it all in perspective: look how many bugs were reported in the Windows 2000 release, and how many of them were found in beta testing outside the company. Being a software tester means that you can't trust anything software-related. Uh... what was the question again? Just my 2 cents' worth.