Code coverage for manual testers
We have a J2EE application that we test with manual test cases. I was wondering whether there is a tool that, after the tester has executed the manual test steps, can report which code areas were covered: how many classes, methods, and so on exist in total, and which of them were actually executed.
Basically, this would help the manual team improve the existing cases and achieve much better coverage.
I briefly looked at EMMA, which is for Java apps, but the project has not been supported since 2005, and I am not sure it fulfills my requirement anyway.
Let me know if more info is required.
Help much appreciated!
I don't think it will be very feasible. Here's why.
Although you can run the software under test with a specially instrumented build or a special interpreter to determine code coverage, it will be nearly impossible to tell whether a piece of code was triggered as a direct result of a manual test or merely by something happening in the environment: a blip in the system, a connection going down, a database upgrade causing a back-end error condition, some odd preloading done by the browser to speed up page loads. There are just too many little things you cannot control.
Usually code coverage is only used for unit tests, where preconditions can be explicitly controlled. Code coverage for an end-to-end test may be a nice feel-good measure (if we're touching most of the app, we must have tested a lot), but it's no indication of requirement coverage, which is what you truly want in an end-to-end test.
Thanks for the reply, dlai.
Originally Posted by dlai
I agree that it would sometimes be difficult to ascertain whether a code block executed because of a direct action by the tester or was triggered by some linked functionality. If that is the case, how can we improve manual test coverage? Referring to the requirements docs is one place to look, but they could be poorly written, or not available at all in some cases.
I am just trying to be more creative in resolving this.
It's a hard thing to crack, really. For manual testing, most people would use test case coverage (hopefully derived from the requirements). For exploratory coverage, I rather like the idea James Whittaker proposes in Exploratory Software Testing: the idea of "Tours".
Originally Posted by rochitsen
It works as a good way of ensuring a certain level of coverage when doing rapid exploratory testing.
Another idea I really love is ACC/Google Test Analytics: https://code.google.com/p/test-analy...i/AccExplained (the problem is that the project is sort of dead). The idea is that you have three things: your Attributes (Secure, Fast, any marketable terms you claim), your Components (code modules), and your Capabilities (features). You can cross these three into a matrix to create a heat map of your test coverage. For example: how well did you test that your Authentication Module (Component) is Secure (Attribute)? You measure that by counting the test coverage of the Capabilities you have covered.
For example, an Authentication Module being secure could mean:
* Is input validated?
* Is it safe from injection attacks?
* Is it safe from privilege escalation?
(basically these are almost like high level test cases)
As you test each Capability, the hot spots at the intersection of the Attribute and Component become cooler (the risk goes down). As you modify a Component, the tests you've done at the intersections involving that Component become invalidated, and those spots become hotter again.
Using this idea, you have a heat map of where you need to test, and if you sum up all the points on the map, you have a Risk Score. In theory, you can have release criteria based on an acceptable Risk Score, for example, 90% risk coverage.
This of course takes a lot of paperwork, working with developers to map which Components are responsible for which Capabilities. However, I think it is very promising as an objective metric that maps closely to release criteria (how well you have covered the claim that your software is what you say it is).
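To make the mechanics concrete, here is a minimal sketch of that heat map in Java. All names (RiskHeatMap, addCapability, and so on) are my own illustration, not part of the Google Test Analytics tool: each (Attribute, Component) cell accumulates risk per untested Capability, testing cools a cell, and modifying a Component reheats all of its cells.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch of an ACC-style heat map (names are hypothetical).
 * Each (attribute, component) cell holds a risk score; testing a
 * capability cools the cell, modifying a component reheats its cells.
 */
public class RiskHeatMap {
    private final Map<String, Integer> cells = new HashMap<>();

    private String key(String attribute, String component) {
        return attribute + "/" + component;
    }

    /** Register a capability under a cell; each untested capability adds risk. */
    public void addCapability(String attribute, String component) {
        cells.merge(key(attribute, component), 1, Integer::sum);
    }

    /** Testing a capability lowers the cell's risk (never below zero). */
    public void markTested(String attribute, String component) {
        cells.computeIfPresent(key(attribute, component),
                (k, v) -> Math.max(0, v - 1));
    }

    /** Modifying a component invalidates prior testing: reheat all its cells. */
    public void markModified(String component, int addedRisk) {
        cells.replaceAll((k, v) ->
                k.endsWith("/" + component) ? v + addedRisk : v);
    }

    /** Total Risk Score: the sum of every cell on the map. */
    public int riskScore() {
        return cells.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        RiskHeatMap map = new RiskHeatMap();
        // Three capabilities behind "Authentication Module is Secure":
        map.addCapability("Secure", "AuthModule"); // input validation
        map.addCapability("Secure", "AuthModule"); // injection attacks
        map.addCapability("Secure", "AuthModule"); // privilege escalation
        map.markTested("Secure", "AuthModule");    // one capability tested
        System.out.println("Risk score: " + map.riskScore());
    }
}
```

In a real tracker you'd keep the per-capability detail rather than a single counter, but the summing behaviour is the same: the release-criteria number is just the total heat left on the map.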
You can use manual (as well as automated, unit, etc.) tests with code coverage tools to see how well your tests cover your product. While I'm not aware of a specific tool for J2EE, they probably exist. The tool I'm familiar with, froglogic Squish Coco, is for C/C++ applications, but perhaps it will give you some insight into what to look for in a similar tool that supports J2EE applications.
There are some key things to look for in such tools, such as the types of coverage reports (line, branch, etc.). The different types support different interpretations of the covered code: are all scenarios covered, or even reachable? Does dead code exist? It also helps if the tool produces interactive reports that people not working with the tool directly can view and understand.
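On the Java side, I believe JaCoCo (the actively maintained successor to EMMA, from the EclEmma team) can do this kind of thing: you attach its agent to the application server's JVM, the manual testers click through the app, and the agent records which classes and methods were executed. The paths and module names below are placeholders for your own layout; I haven't used this with J2EE myself, so treat it as a sketch to verify against the JaCoCo docs.

```shell
# Start the app server's JVM with the JaCoCo agent attached
# (add this to the server's JVM options; paths are illustrative).
# Coverage is written to jacoco.exec when the JVM shuts down.
java -javaagent:/opt/jacoco/lib/jacocoagent.jar=destfile=/tmp/jacoco.exec \
     -jar your-app-server.jar

# ...testers run their manual test cases against the app...

# Afterwards, generate an HTML report showing covered vs. total
# classes, methods, lines, and branches.
java -jar /opt/jacoco/lib/jacococli.jar report /tmp/jacoco.exec \
     --classfiles build/classes \
     --sourcefiles src/main/java \
     --html coverage-report
```

The resulting HTML report lists totals and covered counts per package, class, and method, which sounds like exactly the numbers the original question is after, with the caveat dlai raised that background activity also shows up as "covered".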
Good luck in your search; I'd be interested to hear what you find.