Need help with creating QA matrices
Need your help!
A few months ago I started working for a Web-based company with no standardized QA practices in place. We are at level 1 of the CMM model. One of the major problems I have encountered so far is that QA time/effort is currently being estimated by Project Managers. As a result, QA has no say in how this allocation is made. Our QA department decided to shift the deciding authority back to QA. For that, we need to know how to properly estimate our efforts for different projects. I volunteered to research this matter and make recommendations. So far, I have found some material on QA matrices, but it seems too complex for the real world.
Ideally, I would like to hear advice from you. Any comments, suggestions, or resources would be greatly appreciated. Thanks for your help.
Re: Need help with creating QA matrices
Most likely the problem you are having is trying to implement all of your QA, including testing, according to the CMM. The CMM is designed more for development QA processes; it does not cover implementing testing processes within the development lifecycle. However, there is another option. The TMM (Testing Maturity Model), developed by the IIT (Illinois Institute of Technology), was designed to supplement the CMM. It defines five maturity levels for implementing successful software testing processes. There is information on the process on the IIT website.
If you need any help, we are an authorized TMM testing facility.
MST Lab, Inc.
Re: Need help with creating QA matrices
I've gone through the same process at my company in building the QA department. I have found two main factors in working with project managers to get the time necessary for testing an app.
First, when the project proposal goes out to the client during the sale, QA needs to provide time estimates for testing. This way, you get the budget allocation up front for the testing time you think you will need. Also, if the project manager is handed a project which already has a budget for QA time, you have a better chance of achieving a project test schedule that meets your needs. Then, in the event that a project is late on delivery, there is budget for man-hours, and testers' schedules can be stacked to complete the required tests in the man-hours allotted to the project. Obviously, this approach requires that the testing staff include a wide circle of part-timers who are already trained and who can show up on demand. We have achieved something like this by using computer science students who have flexible schedules and a desire to make extra cash and gain knowledge and experience of a wide array of systems.
The second complicating factor is that QA time estimates must take fixing time into consideration. If the product delivered to QA does not perform well enough for the suite of tests to be run, QA must wait for the next build. This is where you really need to work closely with the project manager to communicate the priority of fixes and the severity of the bugs found.
One of the most significant improvements to our process occurred when the project managers began to understand the usefulness of load testing. Putting the system under load exaggerates the most serious bugs in a system and makes them very obvious from the get-go. A project manager who succeeds in getting you a product that is in good enough shape for load testing in the first build will get a list of the most serious bugs in the system within a matter of hours. This way, the project manager has a fighting chance of getting you the fixes you need in the timeframe you need them. If the system cannot be load tested until the last minute, any big problems in the system will require last-minute overhauls to the code, which will probably not get the full QA treatment that big code changes need, and the product will be released with undiscovered problems.
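To illustrate the idea (this is just a sketch, not our actual harness): even a few dozen concurrent requests fired from a simple script will surface the worst problems quickly. Everything here is hypothetical; `call_target` is a stub standing in for a real request to the system under test.

```python
# Minimal load-test sketch (hypothetical): hit a target from many threads
# at once, then report the error count and the worst-case latency.
import concurrent.futures
import random
import time

def call_target():
    """Stand-in for one request to the system under test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated service time
    ok = random.random() > 0.05               # simulated ~5% failure rate
    return ok, time.perf_counter() - start

def run_load(concurrency=20, requests=200):
    """Fire `requests` calls with up to `concurrency` in flight at once."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(call_target) for _ in range(requests)]
        for f in concurrent.futures.as_completed(futures):
            results.append(f.result())
    errors = sum(1 for ok, _ in results if not ok)
    worst = max(latency for _, latency in results)
    return errors, worst

if __name__ == "__main__":
    errors, worst = run_load()
    print(f"errors: {errors}/200, worst latency: {worst:.3f}s")
```

In a real run you would replace the stub with an actual HTTP call to the system under test; the point is that the script is cheap to write, and the first run under load tends to surface the serious bugs immediately.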
QA depends on the project manager at every step. First, you can't define the tests necessary without a functional spec. The functional spec should provide you with enough detail on the system so that you can come up with a test plan up front, *before* the system goes into development. The test plan flushes out potential problems in a system and helps the project manager plan for any test conditions you may need and any error handling the system will need to include; most of all, it flushes out possible 'fault of omission' bugs, which are very common in my experience with Rapid Development (I never thought of a user doing that!!).
In the end, we base our estimates on familiarity with the system, performance requirements (traffic estimates) and scope of tests. We make one estimate, based on a description of the system requirements, of how many manual testing hours will be required, then we isolate conceptually what we consider to be the weakest points in the system and define tests to apply load and stress to those points (and estimate what time these tests will require).
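As a rough illustration of how those inputs combine (the weights and buffer below are made-up numbers for the example, not our actual figures), the estimate boils down to a simple formula: manual test hours plus load/stress test hours, scaled by how familiar we are with the system, plus a buffer for fix-and-retest cycles.

```python
# Hypothetical back-of-the-envelope QA estimate combining the three inputs
# described above: scope of tests (manual + load hours), familiarity with
# the system, and a buffer for waiting on fixes and retesting.
def estimate_qa_hours(manual_test_hours, load_test_hours,
                      familiarity=1.0, fix_buffer=0.3):
    """familiarity: 1.0 = system we know well, 2.0 = brand new to us.
    fix_buffer: fraction added for fix-and-retest cycles (assumed 30%)."""
    base = (manual_test_hours + load_test_hours) * familiarity
    return round(base * (1 + fix_buffer), 1)

# e.g. 40 manual hours, 16 load-test hours, somewhat unfamiliar system:
print(estimate_qa_hours(40, 16, familiarity=1.5))
```

The exact weights matter less than making each factor explicit, so that when the project manager questions the number, you can point at which assumption drives it.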
Throughout the project life cycle, we try to work closely with the project manager to make sure our needs and concerns are known, so the project manager has a chance to accommodate them.
When we began building this QA department, there was a perception among project managers that they knew how to manage testing and how to define tests. That is no longer the case (thankfully), and they are now working with us toward the mutual goal of product quality.
Hope this helps!