Testing Metrics Examples
For our testing, we've created Business Process Scenarios that represent a business process from end to end within the system. Each scenario is made up of several Test Scripts. A Test Script typically aligns to an online page the user would have to click through in order to complete the business process. If one script within a Business Process Scenario fails, then the whole scenario fails.
As we've started testing, we're finding that our pass rate for scenarios is 0%, but our pass rate for the corresponding test scripts is about 84%. This tells the story that the vast majority of the business process works, but something inside each scenario is failing.
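A minimal sketch with made-up data shows how this happens: because a scenario passes only if every one of its scripts passes, a single failing page in each scenario drives the scenario pass rate to 0% while the script pass rate stays high.

```python
# Hypothetical results: each scenario is a list of per-script pass/fail
# outcomes (True = pass). Names and data are illustrative only.
scenarios = {
    "Register new client": [True, True, True, False],        # one page fails
    "Invoice client":      [True, False, True, True, True],
    "Update contact info": [True, True, False],
}

# Script-level pass rate: pool every script across all scenarios.
scripts = [result for results in scenarios.values() for result in results]
script_rate = 100 * sum(scripts) / len(scripts)

# Scenario-level pass rate: a scenario passes only if ALL its scripts pass.
scenario_rate = 100 * sum(all(r) for r in scenarios.values()) / len(scenarios)

print(f"Script pass rate:   {script_rate:.0f}%")    # → 75%
print(f"Scenario pass rate: {scenario_rate:.0f}%")  # → 0%
```

With one broken page per scenario, 9 of 12 scripts pass (75%) yet no scenario completes, which mirrors the 84%-vs-0% split described above.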
The client is having a hard time understanding this and is looking for a new way to display this metric. I was wondering if anyone has any suggestions on better ways to display the data so that it's more understandable?
My one thought is to break the test scripts down into functional areas within the business process and report status on those. For instance, if a business process comprises Registration, Client Contact, and Billing, then we'd be able to show completeness within each of those functional areas.
Any thoughts would be appreciated.
"What we have here is a failure to communicate" - I know that sounds facetious, but it's also the gist of your situation: if your client isn't understanding what you're telling them, you're not communicating.
The first thing I'd suggest is to ask your client whether in fact your scenarios are accurate reflections of the business use cases and identify the most critical scenarios. You may not be able to do this, but if you can, you're more likely to be able to align your testing goals with your client's needs.
Next, your idea of breaking the scripts into functional areas is a good one. If it were my decision, that would be my primary reporting focus: say, Registration scripts have a 100% pass rate, Client Contact scripts have an 80% pass rate, Billing scripts a 50% pass rate, etc. I'd provide the ability to drill down, but not necessarily present that information up front: so if the client wanted to know what aspects of billing were problematic, they could look more closely and, for example, discover that the billing scripts fail where customers are being invoiced but succeed for credit card billing. At that point, the client has the ability to decide how important invoice customers are to their business compared to credit card customers.
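A simple sketch of that rollup, assuming each script record carries a functional area and a pass/fail status (the area names and script IDs are illustrative, not from any real tool):

```python
from collections import defaultdict

# Hypothetical script results: (functional area, script id, passed?)
results = [
    ("Registration",   "REG-01",  True),
    ("Registration",   "REG-02",  True),
    ("Client Contact", "CC-01",   True),
    ("Client Contact", "CC-02",   False),
    ("Billing",        "BILL-01", False),  # invoicing path
    ("Billing",        "BILL-02", True),   # credit card path
]

# Group scripts by functional area.
by_area = defaultdict(list)
for area, script, passed in results:
    by_area[area].append((script, passed))

# Headline report: one pass rate per functional area...
rates = {}
for area, scripts in by_area.items():
    rates[area] = 100 * sum(p for _, p in scripts) / len(scripts)
    print(f"{area}: {rates[area]:.0f}% pass")
    # ...with drill-down to the individual failing scripts on demand.
    for script, passed in scripts:
        if not passed:
            print(f"    FAIL  {script}")
```

The headline numbers (Registration 100%, Client Contact 50%, Billing 50%) are what the client sees first; the failing-script detail only surfaces when someone drills in.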
So, in summary, I'd report by functional area first, with drilldown to individual scripts. The business scenarios are essentially flows through the scripts, so one broken area that every script touches will fail all your business scenarios - which is confusing to someone who is looking for information on the *completeness* of the software. (You're effectively reporting on the *usefulness* of the software by reporting on your scenarios first). If you can, I'd wrap it even more simply with a Red/Yellow/Green indicator to say whether the software is ready to use (and in this case, it would be red because none of the business use case processes can be completed correctly - or yellow if they can be completed with workarounds).
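The Red/Yellow/Green wrapper described above can be sketched as a small function; the three-state per-scenario outcome ("pass", "workaround", "fail") is an assumption for illustration:

```python
def rag_status(scenario_results):
    """Roll per-scenario outcomes up to a single readiness indicator.

    scenario_results: list of "pass", "workaround", or "fail",
    one entry per business scenario (hypothetical encoding).
    """
    if all(r == "pass" for r in scenario_results):
        return "GREEN"   # every scenario completes cleanly
    if all(r in ("pass", "workaround") for r in scenario_results):
        return "YELLOW"  # everything completes, but only with workarounds
    return "RED"         # at least one scenario cannot be completed at all

# In the situation described above, no scenario completes correctly:
print(rag_status(["fail", "fail", "fail"]))        # → RED
print(rag_status(["pass", "workaround", "pass"]))  # → YELLOW
```

This keeps the top-level message honest (the software is not ready) while the functional-area breakdown underneath explains where and why.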