Browser Compatibility Testing Query
We are using Selenium for browser compatibility testing. Generally, what percentage of functional automated test cases should be executed as part of browser compatibility testing? Is there any number or percentage that reflects industry practice?
This will help us analyse and finalize the number of scripts we will execute for BC testing.
Appreciate the quick responses.
As for "standard" industry practices, there really are none. Consultants can make up figures to support their suggestions, but most companies keep their testing metrics, and even their testing procedures, a tightly guarded secret. If you have been to QA-related conferences, you will have heard Google give great talks about how good their CI and testing process is, but a detailed blueprint of how all their systems relate has never been published, and key pieces of their CI infrastructure remain closed source. Finding out how other people do things is very hard.
My thought is: instead of thinking in terms of a percentage of test cases, think about what is most at risk. Think in terms of browser differences. I'm always trying to find ways to test less, so I focus on what I need to do to ensure sanity in the key areas where things break.
There are three things (two main ones) that differ between browsers:
1) CSS support
2) JavaScript support
3*) Which tags are supported (the picture tag, etc.). This is less important, since most developers will use a polyfill, which turns it into more of an issue with #2.
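One way to act on this risk-based view with Selenium is to parameterize the same small set of sanity checks over a browser matrix, rather than deciding a blanket percentage of scripts up front. Below is a minimal sketch; the driver factories stand in for Selenium's `webdriver.Chrome`/`webdriver.Firefox` constructors, and the checks themselves are hypothetical placeholders.

```python
# Sketch: run the same sanity checks across a browser matrix.
# `driver_factory` is a stand-in for a Selenium webdriver constructor
# (e.g. webdriver.Chrome); the only driver method used here is quit().

def run_suite(driver_factory, checks):
    """Run every named check against a fresh browser session.

    Returns a list of (check_name, failure_message) for failed checks.
    """
    failures = []
    driver = driver_factory()  # e.g. webdriver.Chrome() in real Selenium
    try:
        for name, check in checks:
            try:
                check(driver)  # a check asserts on page state via the driver
            except AssertionError as exc:
                failures.append((name, str(exc)))
    finally:
        driver.quit()  # always release the browser session
    return failures


def cross_browser_run(browser_matrix, checks):
    """Map each browser name to the list of checks that failed in it."""
    return {name: run_suite(factory, checks)
            for name, factory in browser_matrix.items()}
```

In real use, `browser_matrix` would map names like `"chrome"` and `"firefox"` to actual WebDriver constructors, and each check would exercise one of the risk areas above (a CSS layout assertion, a JavaScript-driven interaction, a polyfilled tag).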
Awesome, thanks David. I will try to identify test scripts of that type that involve autosuggests. By the way, I have asked the BAs and manual testers to provide the business priority and usability of each manual test case. Based on that, I will select the top 50% of test scripts to execute as part of browser compatibility testing. The reason I am stuck on some percentage is that I am already doing functional testing. For browser compatibility, in my experience, 50% is more than enough if we have covered all the critical features.
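The top-50% selection described above can be sketched as a simple sort over priority scores. This is only an illustration of the idea; the `priority` field and the scoring scale are assumptions, not part of any particular tool.

```python
def select_for_bc(test_cases, fraction=0.5):
    """Pick the top `fraction` of test cases by business priority.

    `test_cases` is a list of dicts with a numeric "priority" key,
    where a higher score means more critical (an assumed convention).
    """
    ranked = sorted(test_cases, key=lambda tc: tc["priority"], reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))  # always keep at least one
    return ranked[:cutoff]
```

With priorities supplied by the BAs and manual testers, this yields the browser-compatibility subset directly, and the `fraction` knob can be tuned per release instead of being fixed at 50%.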
Originally Posted by charanpreet_hora
A few thoughts:
* Identify possible risks
* Check the design/architecture (for example, which browsers support HTML5)
* Whether the application needs plugin support to run (Flash/Silverlight/etc.)
* Type of browser (for example, whether browsers more than 5 years old must be supported); differences among the supported browsers
* Type of device / screen resolution
* Test effort estimation: don't just execute all tests in all browsers for all test cycles.
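One way to follow the last point without dropping coverage entirely: rotate which browser gets the full suite each cycle, while the others run only a smoke subset. A minimal sketch of that rotation, with made-up browser and suite names:

```python
def plan_cycle(cycle_number, browsers, full_suite, smoke_suite):
    """Assign test suites for one cycle.

    One browser (rotating through the matrix by cycle number) gets the
    full suite; every other browser gets only the smoke subset.
    """
    focus = browsers[cycle_number % len(browsers)]  # rotate the focus browser
    return {b: (full_suite if b == focus else smoke_suite) for b in browsers}
```

Over a full rotation, every browser still sees the full suite once, but each individual cycle costs far less than running everything everywhere.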