How to measure code coverage of my Selenium Tests?
I really need some specifics on how to measure code coverage of my web application from my Selenium test scripts. Does anyone have specific tools they have had good results with when trying to get an idea of the coverage their automated Selenium tests provide for their web application? I read about JaCoCo and using its Java agent, but I haven't done this before and could really use some advice, or better yet some tutorials to guide me through the process and a possible implementation. Right now I am just looking to do a proof of concept.
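For a proof of concept, the usual JaCoCo approach is to attach its Java agent to the server JVM, run the Selenium suite, then dump and report the collected data. A minimal sketch, assuming a Tomcat deployment; all paths and the port are placeholders for illustration:

```shell
# Hypothetical paths/port for illustration only.
# 1. Start the application JVM (e.g. Tomcat) with the JaCoCo agent attached,
#    exposing coverage data over TCP so it can be dumped without a restart:
export CATALINA_OPTS="-javaagent:/opt/jacoco/lib/jacocoagent.jar=output=tcpserver,address=*,port=6300"

# 2. Run your Selenium suite against the instrumented server, then dump
#    the collected execution data:
java -jar /opt/jacoco/lib/jacococli.jar dump \
     --address localhost --port 6300 --destfile jacoco.exec

# 3. Turn the .exec file into a readable HTML report (needs the compiled
#    classes that were actually deployed, plus sources if you want line views):
java -jar /opt/jacoco/lib/jacococli.jar report jacoco.exec \
     --classfiles /path/to/app/WEB-INF/classes \
     --sourcefiles /path/to/app/src/main/java \
     --html coverage-report
```

Note the classes passed to `report` must be the same build the server is running, or JaCoCo cannot match the execution data to them.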
I would encourage you not to fall into the trap of equating code coverage with requirement coverage. While code coverage is easy to achieve with good stubs and mocking, in a fully built system it's extremely difficult. Many shops that collect code-coverage statistics at an end-to-end level (like Selenium) tend to lose sight of the real goal. For example, they can get too fixated on covering a piece of dead code while missing a use case that exercises a specific business rule sharing code with other use cases (so the code was covered, but the requirement use case wasn't).
@meneghia, coverage usually means one of two things: coverage of the test code you write, and coverage of the server code you run it against. Tracking coverage of your test suite itself is easy, as most test-suite runners provide that option; measuring it on the server side is possible but not as simple to achieve. A few points to keep in mind:
- You need to instrument the code, or the server configuration, to be able to generate coverage data
- You need an isolated environment where only the automation scripts are executed, so the coverage reflects your tests alone
- With some code-coverage tools it is not possible to get coverage per test case, and when it is, you usually cannot execute the tests in parallel
- We set up a PHP-based code-coverage solution for a company recently and it works quite well, but it degrades application performance by 25-50%, which is quite bad. Generating coverage data and dumping it at runtime is a costly affair
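The per-test-case limitation can be illustrated with JaCoCo's TCP-server agent: you can approximate per-test coverage by dumping and resetting the counters between tests, but that is exactly why the tests cannot run in parallel. A hedged sketch; the test class names and port are hypothetical:

```shell
# Hypothetical sketch: per-test coverage via JaCoCo's tcpserver agent.
# Counters are reset after each dump, so tests must run serially --
# parallel tests would pollute each other's coverage numbers.
for test in LoginTest CheckoutTest SearchTest; do
    mvn test -Dtest="$test"                         # run one Selenium test class
    java -jar jacococli.jar dump --address localhost --port 6300 \
         --destfile "coverage-$test.exec" --reset   # dump, then zero the counters
done
```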
I would really prefer not to have to report code coverage on my end-to-end tests; I don't think it is a very good indicator. I would prefer to do exactly what you mentioned about requirement coverage, which I believe gives a much better indication of how we are doing from the user's side. I want to rely on the backend server unit tests to provide the code coverage, but that is not what I have been instructed to do. So that is why I am trying to find a method to instrument the browser to give me some sort of code coverage while my tests are running.
Does Istanbul provide that? Are there any other options to look at? I thought Istanbul was a tool for getting Protractor code coverage?
I'll say this: you know better now, and I think your coworkers will respect you more if you can push back with clear arguments about why that is a recipe for disaster. Not only will you spend more time chasing every last percent of code coverage, you are essentially inverting the generally accepted testing pyramid (TestPyramid) and creating more test code to maintain than the actual lines of code in the software.
As for Istanbul, keep in mind that it is not a browser plugin. The code is compiled with special instrumentation so that it reports to a code-coverage session. It's designed for a unit-testing framework such as Karma, which creates a session, runs a battery of unit tests against it, and then reports the coverage. You can run the unit tests in different browsers, but the code is not executed in the context of the SUT; it's executed in the context of a test runner.
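To make the "instrumented ahead of time" point concrete, here is a hedged sketch using `nyc` (Istanbul's command-line interface); directory names are placeholders:

```shell
# Hypothetical sketch: Istanbul rewrites the source with coverage counters
# *before* it is served -- it is not something bolted onto a browser at runtime.
npx nyc instrument src/ instrumented/    # write instrumented copies of src/

# You would then serve the instrumented/ directory to the browser under test
# instead of src/. Coverage accumulates in the page's global
# window.__coverage__ object, which your own harness must extract and save,
# e.g. by evaluating JSON.stringify(window.__coverage__) via Selenium.
```

This is why acceptance-test coverage with Istanbul means changing both the deployed code and how it is served.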
While it's not impossible to do code coverage for acceptance tests, most professionals will tell you it is not useful, because instrumented code is not the same as actual production code, and the actual production code will still need to be tested. Not only are you changing the code under test (instrumented code), you are also changing a large part of the deployment environment (serving specially instrumented JS code instead of serving it from a CDN or a production-like environment).