Gathering data beyond QA
I'm not sure where to post this, so I thought I'd start with a general discussion.
We're currently using a mix of tools that includes TestComplete and the entire HP suite.
Whilst this toolset is pretty comprehensive for QAing our products prior to release, it all seems to stop at the point where the product leaves the shelf.
One of my current tasks is to try and establish how our products perform in the field. Obviously we get bug reports from our users, and we have diagnostics built into the apps to enable us to debug.
But how do we go about measuring the 'quality' of an application in the wild?
I'm exploring Windows Error Reporting but that doesn't really give us much more than our current (custom) diagnostics.
Has anyone any experience in doing this? Are there tools available, or do we need to roll our own?
I guess I'm interested in more than just crashes.
Things like when/if the application stops responding (which I think Windows Error Reporting could help with).
I'm also looking into whether the application just generally 'goes slow'.
I don't even know if this is really possible, but if anyone out there has ever tried this I'd really like to hear about it.
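For the "stops responding" case, one roll-your-own approach is a heartbeat watchdog: the main/UI loop pings a background thread, and a missed ping within a timeout gets recorded as a hang. A minimal sketch (Python purely for illustration; `on_hang`, the timeout, and the class name are all hypothetical, and a real desktop app would write a diagnostic record or dump instead of calling a callback):

```python
import threading
import time


class Watchdog:
    """Flag the app as unresponsive if the main loop stops sending
    heartbeats within `timeout` seconds. `on_hang` is a stand-in for
    wherever a real app would record the diagnostic event."""

    def __init__(self, timeout, on_hang):
        self.timeout = timeout
        self.on_hang = on_hang
        self._last_beat = time.monotonic()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watch, daemon=True)

    def start(self):
        self._thread.start()

    def beat(self):
        # Called regularly from the main/UI loop.
        self._last_beat = time.monotonic()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _watch(self):
        # Check for missed heartbeats a few times per timeout window.
        while not self._stop.wait(self.timeout / 4):
            stalled_for = time.monotonic() - self._last_beat
            if stalled_for > self.timeout:
                self.on_hang(stalled_for)
                self._last_beat = time.monotonic()  # avoid duplicate reports
```

This only tells you the loop stalled, not why; pairing it with a thread/stack dump at the moment of the hang is what makes the report actionable.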
Re: Gathering data beyond QA
Up to about a year ago, getting input on my released product started and ended with the reports I got from my customer support team. I did ask them to report defects directly into my bug tracking system, and we had a complete process to follow up on them and measure them as what we called escaping defects, but that was about it.
Once in a while we would get feedback from customers as emails or testimonials via the sales guys, but little of it influenced what actually happened on the product.
My current project is a SaaS (Software as a Service) application where our users actually work ON OUR SERVERS and pay a monthly fee for our product (it is a QA Management system if you want to take a look at it - but enough advertisements). This time, from the beginning, we designed in all kinds of measurements and statistics that not only tell us what an exception was when one occurred (there are plenty of services for that), but also give us good information about our customers' experience for downloads and some of the common and not-so-common operations they perform.
With these mechanisms in place we have made real efforts to (1) handle issues proactively: if we see an exception in the system, we approach the users and either solve it or troubleshoot it with them, often before they would even have contacted our support; and (2) when we see things like timeouts or delays, we go into the code, find the bottlenecks, and work on them, again trying to do so before we start getting calls.
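The timeout/delay side of this can be as simple as wrapping the operations you care about so their durations are recorded and slow runs are flagged. A minimal sketch (Python purely for illustration; the threshold, the `TIMINGS` list, and the operation names are assumptions, not anyone's actual stack - a real app would ship these records to a server rather than keep them in memory):

```python
import functools
import time

# Collected (name, seconds, was_slow) records; a real app would
# ship these to a backend instead of holding them in a list.
TIMINGS = []


def timed_operation(name, slow_after=2.0):
    """Record how long an operation takes and flag runs slower
    than `slow_after` seconds (threshold is an assumed example)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                TIMINGS.append((name, elapsed, elapsed > slow_after))
        return wrapper
    return decorator


@timed_operation("report-download")
def download_report():
    time.sleep(0.01)  # stand-in for real work
    return "ok"
```

The point is less the mechanism than the habit: once every interesting operation emits a timing, "goes slow" stops being anecdotal and becomes something you can graph and set alerts on.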
Still, there is nothing like customer interaction, and so we are also working with customers directly, trying to contact each of them every 3 to 4 months to check that all is well and that no issues on the system are bothering them. Even if we don't manage to call all of them, we still reach more than half and get valuable information.
Our case may be *special* since we are SaaS, but this may also apply to you if you manage to get logs in place and your users agree to send them once every X-Time as part of an effort to keep improving the product.
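For a non-SaaS product, the client-side half of that usually looks like buffering diagnostic records locally and shipping a batch on some trigger. A minimal sketch (Python purely for illustration; the class name, the count-based trigger, and the `upload` callback are all hypothetical - the trigger could just as well be time-based, and of course this assumes the user has opted in):

```python
import json
import time


class LogShipper:
    """Buffer diagnostic records client-side and flush a batch to an
    upload callback every `batch_size` records. `upload` stands in for
    whatever actually transmits the batch (HTTP POST, email, etc.)."""

    def __init__(self, upload, batch_size=100):
        self.upload = upload
        self.batch_size = batch_size
        self.buffer = []

    def record(self, event, **fields):
        self.buffer.append({"ts": time.time(), "event": event, **fields})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        self.upload(json.dumps(self.buffer))
        self.buffer = []
```

Batching keeps the network cost negligible for the user, and sending structured records (rather than raw log text) means the server side can aggregate them into the kind of per-operation quality metrics discussed above.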