Estimation - Factor scoring model?
Our QA team is responsible for testing a pretty wide variety of systems/applications, and as such is broken up into small teams primarily focused on 'their' systems/apps. Unfortunately, each team has developed its own method of determining test estimates...mostly based on past experience and the requirements on hand.
My manager has tasked us with developing a more scientific, formula-driven method to determine our level of effort.
Most of the teams are spot-on in determining LOE for test execution...it's more the project-level variables that give us fits. I've come up with a list of 'Factors' to consider when developing an LOE, and I was hoping to attach some sort of weight or scoring model to this list. My idea is for the Test lead and PM to work through these together using a model which derives a 'multiplier' of sorts to apply to the test execution number.
Have any of you used or have knowledge of any such scoring models? I'd be interested in any feedback. Below is the factor list for reference. Thanks for any feedback!!
• When was QA Engaged (design, planning, development…)
• How well has the project been defined and communicated
• Does open communication exist between stakeholders
• Availability of Requirements, Specs, Problem Statements…etc
• Volatility of Documentation. How often is documentation incorrect or changing throughout scoping/planning phases
• What is the size of the project
o Company Priority / Perception
o Development effort
• How many systems are impacted?
o How many systems require changes
o How many systems do not require changes, but require integration or regression testing
o How many impacted systems does QA directly support
o What type of external support will QA require to accomplish integrated testing
• How complex are the changes being made
o Minor changes to application or massive changes across all functions
o Update using known methods, or branching into new Development, Implementation techniques
• Do we have environment availability
• Are environment changes required for testing, if so do changes require support from external groups
• Are integrated systems required for testing, if so any availability or versioning conflicts
• Will any test data be required; if so, what data types, who is preparing it, and what is the delivery mechanism
• Do any environment contingency plans exist in case of environment failure
• Are any tools required; if so, do we already own them and know how to operate them
• What preparation and/or maintenance time is required
Personnel / Experience
• Is the system or application existing or new to the organization
• Is the Dev team familiar with the product and the requirements at hand
• Is the QA test team familiar with the product and the requirements at hand
• What training / ramp up time is required
• Scope changes / Requirement volatility
• Prior experience – Business Partners / Stakeholders (general communication, scope changes, test perception…etc)
• Prior experience – Dev team (general communication, product reliability, rework turnaround…etc)
• Prior experience – Application stability
• Prior experience – Environment stability
• Familiarity with technologies being used
• Known resources – FTE vs Contractors
• Calendar – Conflicting projects, holidays…etc
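To make the idea concrete, here's a minimal sketch of the kind of model I have in mind. The factor names, weights, and the 1.0–2.0 multiplier range are all placeholders I made up for illustration; the real list would come from the factors above, with weights agreed between the Test lead and PM. Each factor gets a weight (importance) and a score (1 = best case, 5 = worst case), and the weighted average is mapped onto a multiplier applied to the raw execution estimate:

```python
# Hypothetical factor weights (relative importance) - illustration only.
FACTORS = {
    "qa_engagement_timing": 3,
    "requirements_availability": 4,
    "documentation_volatility": 4,
    "systems_impacted": 5,
    "change_complexity": 5,
    "environment_readiness": 3,
    "team_experience": 4,
}

def effort_multiplier(scores, min_mult=1.0, max_mult=2.0):
    """Map per-factor scores (1 = best case .. 5 = worst case) to a multiplier.

    A project scoring all 1s gets min_mult; all 5s gets max_mult.
    """
    total_weight = sum(FACTORS.values())
    weighted = sum(FACTORS[name] * scores[name] for name in FACTORS)
    # Normalise to 0..1: 1*total_weight is the floor, 5*total_weight the ceiling.
    normalised = (weighted - total_weight) / (4 * total_weight)
    return min_mult + normalised * (max_mult - min_mult)

# A middling project (all factors scored 3) applied to 40 days of raw
# test-execution LOE ends up at 40 * 1.5 = 60 days.
scores = {name: 3 for name in FACTORS}
estimate_days = 40 * effort_multiplier(scores)
```

The point of the normalisation is that the multiplier range stays fixed even if factors are added or reweighted later, so historical estimates stay roughly comparable.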
Re: Estimation - Factor scoring model?
Rather than treating every point as a direct factor of effort, we keep only a limited number of direct factors and derive the complexity of each one from its many dependent sub-points. For example, from your list: Environment is a direct factor, its sub-points decide the complexity of Environment, and effort varies based on that complexity.
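A rough sketch of that two-level approach, with made-up sub-points and effort figures just to show the shape: each direct factor's complexity is derived from its own sub-point checklist, and only the direct factors carry effort:

```python
def complexity(sub_scores):
    """Derive a direct factor's complexity as the average of its
    sub-point scores (1 = simple .. 5 = complex)."""
    return sum(sub_scores.values()) / len(sub_scores)

# Sub-points for the Environment direct factor (illustrative scores).
environment_subs = {
    "availability": 2,
    "external_changes_needed": 4,
    "integrated_system_conflicts": 3,
    "test_data_preparation": 3,
    "contingency_plans": 2,
}

# Direct factors only: (derived complexity, effort in days per complexity point).
direct_factors = {
    "environment": (complexity(environment_subs), 1.5),
    "test_execution": (3.0, 5.0),  # execution complexity scored directly by the team
}

total_effort = sum(c * days for c, days in direct_factors.values())
```

This keeps the effort formula small while still forcing the team to walk the full checklist when scoring each direct factor.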