Matching testing methodology/strategy to application type
Trying to put together a set of best practices for my QA team. I want to see if there are any best practices for matching a test methodology (functional, white box, black box, regression, system, integration, etc.) to an application type (software language, platform, operating system, etc.).
I am sure many people in here will give you their own advice so I will simply start with my own based on your question above:
Sorry to disappoint, but there are no real best practices or silver bullets out there.
I also don't think that anyone will be able to give you a "proven approach" based on your app type, or at least not one that you will find useful.
My advice is to work with a mix or blend of the approaches you listed above: some formal functional testing, together with some exploratory testing, some grey-box, some integration, etc. Break your testing project down into areas, risks and features, and define for each one the blend of approaches that fits it best.
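The breakdown described above can be sketched as a simple data structure: areas of the product, each with a risk level and its own blend of approaches. All area names, risk labels and blends below are hypothetical examples for illustration, not recommendations for any particular product.

```python
# Minimal sketch: break the project into areas, attach a risk level to each,
# and record the blend of test approaches chosen per area.
from dataclasses import dataclass, field

@dataclass
class TestArea:
    name: str                                # hypothetical feature/area name
    risk: str                                # e.g. "high", "medium", "low"
    approaches: list = field(default_factory=list)

def plan(areas):
    """Order areas by risk so the highest-risk ones get attention first."""
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(areas, key=lambda a: order.get(a.risk, 99))

areas = [
    TestArea("payment flow", "high", ["formal functional", "integration"]),
    TestArea("report export", "low", ["exploratory"]),
    TestArea("user settings", "medium", ["grey-box", "regression"]),
]

for a in plan(areas):
    print(f"{a.risk:>6}: {a.name} -> {', '.join(a.approaches)}")
```

The point is not the code itself but the habit it encodes: the blend is decided per area and per risk, and the table is expected to be revised as the project changes.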
Keep in mind that no matter what initial approach you choose, and however appropriate it is to a specific situation, you will soon find yourself wanting to modify it to match changes in your project or your product. With that in mind, keep an open mind, review your approach regularly, and continue fine-tuning it as your needs change. The only constant in this business is change itself (sorry for the cliché).
Hope this helps!
ISEB endorses BS 7925-2 (the guideline) and BS 7925-1 (the glossary).
Apart from that, there are IEEE guidelines for anomalies, test plans, life cycles and quality control.
The CMM and CMMI standards are also well established.
There are other ways to manage test life cycles as well, but it is a very open-ended question.
There are no "best practices" - there are only practices that are better for this situation in this context.
It's not just the application type that influences testing. There's also the question of how mature the application is, whether it's under active development or predominantly maintenance-only, and how mission-critical it is (the testing performed on software driving medical devices had better be more thorough and more documented than what happens for a social networking application: a bug in the former could kill people). Then there's the development process used to create the software, the developers who created it and whether they write good unit tests, what kind of pressure they were under when they wrote it, what IDE they used (or whether they used one at all)... the list really is endless, and that's without considering the nature of the application.
An application that's been under continuous development for years and has 20+ year-old legacy code buried in its innards has quite different challenges for testing than one that's more recent and hasn't lived through multiple different development paradigms. Something that handles its own hardware management has different needs than something that doesn't interact with hardware at all. If it uses data that falls under regulatory requirements, the testing needs to be informed by those regulations. And so on.
So no, there is no list of best practices by application type.
I've found that it's a good practice (for me) to examine each testing project individually, and match the test approach to the needs of the particular project itself.
I've never found a way to match test approaches to anything else.
I can't imagine anyone could honestly say something like "When the language used to develop your system is C#, it is best to use (only?) Black Box Testing".
This might help:
All Things Quality: I Don't Believe in "Best Practices"