Metrics to prove value of QA
So, management comes along and says they want to see the metrics that prove the QA group adds value to software development and/or to the company as a whole.
What do you do?
Well, my first response is to polish my résumé. There is a ring to that particular request suggesting that management in general does not see value in the kinds of processes the QA group COULD bring to the company. Either they do not believe there is value to be had, they doubt the current group could implement those processes if they existed, or they have no understanding of the benefits that QA of any kind can provide. I also suspect that the metrics collected by most, if not all, companies - if any are collected at all - are inadequate for this type of evaluation. And the smart-mouth brat in me just wants to ask them to provide the metrics that show that THEY add value to the company.
So, what do you do when that memo shows up on your proverbial desk?
Re: Metrics to prove value of QA
While I can see what you're trying to express, Mark, I disagree. I think that it's very possible to collect metrics that justify the value of QA and testing, and it's commonly asked for from many departments.
When you get frustrated, keep in mind that IT tends to be one of the largest budgetary outlays for any company, and if someone is asking for this data, it was likely requested of them by someone else. Just about everyone, even management, reports to someone and has to be accountable. Part of a manager proving his worth to a company IS proving the worth of the departments that he manages. And the further removed you are from the IT side of things, the more likely you are to want 'hard' numbers.
So, consider this scenario: The metrics are requested by your Director, who in turn had them requested by the VP of IT, who in turn had them requested by the CTO, who in turn had them requested by the Board of Directors - would you question them as well?
All that being said, what I generally do in these situations is to ask the requestor to better define what it is they'd like. Metrics CAN be very useful, but you are right in saying that not everything QA does can be quantified.
I then begin work on gathering and analyzing metrics that make the case - showing improvements in releases, a decrease in defects per release over time, cost savings based on a comparison of time-to-fix (TTF) for defects found earlier in the life cycle, etc. I also prepare, however, what I call a 'laundry list', which is basically a detailing of the various projects and accomplishments of the QA department and team that I believe added value to the company, and I deliver both together.
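As a rough sketch of the kind of case Annemarie describes, here is what the arithmetic might look like in Python. All of the figures (defect counts, TTF hours, the $75/hour loaded rate) are invented for illustration:

```python
# Defects found in production per release, oldest to newest (invented figures).
defects_per_release = [42, 35, 28, 21, 15]

# Percentage improvement from the first release to the latest.
improvement = (defects_per_release[0] - defects_per_release[-1]) / defects_per_release[0] * 100

# Hypothetical average time-to-fix (hours) by the phase a defect is found in.
ttf_by_phase = {"requirements": 1, "development": 4, "system_test": 10, "production": 40}

# Estimated cost saved by catching 20 defects in system test instead of
# production, at an assumed loaded rate of $75/hour.
hourly_rate = 75
defects_shifted = 20
savings = defects_shifted * (ttf_by_phase["production"] - ttf_by_phase["system_test"]) * hourly_rate

print(f"Defects per release down {improvement:.0f}%")
print(f"Estimated savings from earlier detection: ${savings:,}")
```

The point is less the specific numbers than showing that the trend and the cost comparison can be computed at all from data you already collect.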
~ Annemarie Martin ~
~ So you found a girl who thinks really deep thoughts - what's so amazing about really deep thoughts? ~ Tori Amos
Re: Metrics to prove value of QA
I know what people say, but the truth of the matter is that it is very difficult to justify QA cost visibly unless you have continuously collected data. Even so, it doesn't always help. How do you judge scientifically whether one QA person is better than another? More defects? Sure, I can enter tons of useless defects. Skill set? I can be a super-intelligent geek who has no clue how to test.
If you are managing QA or any group or division, it isn't just your job to make sure your team is doing its job. Any management position involves lots of politics. Having managed various development groups, including QA, I learned that you can always come up with excuses to can a person or an entire group. So what does this mean?
If you are responsible for a team/group and someone comes up and throws a memo at you to justify your department, that means you've mismanaged your peers and bosses. In short, you've neglected to make your department visible, and someone is after your job or department. Most likely, someone else is expanding and you are on your way out. Often this is a sign for you to 'get out' or find a friend. How do you recover? Figure out who's driving the effort, figure out what the real motivation is, and try to counter it.
But let's say that it's a budget cut and it's come down to you; then you are most likely already too late.
But next time, what you need to do is educate management. Yes, it is the manager's job to educate executives on engineering discipline. Generate charts and summary reports of quality, and show them something visible.
Unfortunately, engineering service organizations are the most difficult to justify. These include: Build and Release, QA, Tech Pubs, and Dev IT.
If executives ask what's so difficult...
Say that these groups are support organizations, and that it's like someone asking why you need a janitor. You know that without these people, things fall apart. Sure, you can assume and hope that each individual will clean up after themselves and do the right thing, but we all know from real experience that they don't. It's the same with developers. They aren't measured on quality, and it's difficult to do that.
Now, depending on your organization's strengths and talent pool, some groups naturally pick things up. But if you are full of junior- and mid-level guys, you are in for a rough ride. There are many people out there claiming to know QA, but only a few really know the full dynamics of QA. Most QA experts are pure 'testers' with no experience in standards or in improving the development process. Even if they do have it, they don't really understand why those practices matter, nor how to implement them in a real working human model.
Re: Metrics to prove value of QA
The key point that has been missed so far is the one that always is: first you have to define the criterion of what "proof" means. What one might consider "proof" another might just consider circumstantial or anecdotal evidence. Proof can be a very vague concept. When you have that isolated, you can then better determine what metrics will provide meaning relative to this notion of proof. Not all metrics are created equal, of course. Now your strict admonition was given as: "...prove the QA group adds value to software development and/or to the company as a whole."
So what does "add value" mean in this context? How did you "add value" to the software development process? Remember this includes lifecycles, unit testing, etc. As far as the company as a whole, how did you "add value"? In the case of the former, the higher-ups usually want to see a reduction in development time and costs and quicker time-to-market. Can you show metrics of how you have done this or, at least, of processes you have instantiated to start along that path? In the case of the latter, the real value is the bottom-line of the business. So what have you done to positively impact that?
In any case, I am never sure why people have trouble with this question because I can present a slew of metrics and arguments to show the value of QA - even with the variable nature of what the words "value" and "proof" mean. Even without metrics, you should be able to adduce arguments for what value QA can give to the development process and the business as a whole. However, depending upon how long your group has been around, you should be able to back this up (note: not prove) with something. (Now, of course, that also depends to a certain degree on the skill sets of your people and the skill set of you yourself, quite frankly.)
Also remember that COSQ and ROSQ are common measures that good organizations ask for. (COSQ = Cost of Software Quality and ROSQ = Return on Software Quality.) This is part of the SQPI - Software Quality Profitability Index. You then break all this down into costs of conformance and costs of non-conformance, prevention costs, appraisal costs, internal failure costs, and external failure costs. This is all stuff that a QA group (as opposed to a group that is just composed of test analysts) should be keeping in mind.
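A minimal Python sketch of that COSQ breakdown, with invented dollar figures standing in for the four cost categories:

```python
# Illustrative Cost of Software Quality breakdown (all figures invented).
prevention = 50_000         # training, process definition, tooling
appraisal = 120_000         # test execution, reviews, audits
internal_failure = 80_000   # rework on defects caught before release
external_failure = 200_000  # support, patches, field fixes after release

# Conformance = what you spend to get quality; non-conformance = what
# poor quality costs you anyway.
conformance = prevention + appraisal
non_conformance = internal_failure + external_failure
cosq = conformance + non_conformance

print(f"Cost of conformance:     ${conformance:,}")
print(f"Cost of non-conformance: ${non_conformance:,}")
print(f"Total COSQ:              ${cosq:,}")
```

A crude ROSQ-style argument then compares non-conformance costs before and after a QA investment; if the drop in failure costs exceeds the added conformance spend, QA paid for itself.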
Just to consider some examples:
Have you been tracking defect density? If so, show how it has gone down. Or, at least, show that it has plateaued.
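As a sketch of the defect-density argument (figures invented; KLOC used as the size unit here, though function points work just as well):

```python
# Defect density = defects found / size, tracked per release.
releases = [
    ("1.0", 120, 55),  # (release, size in KLOC, defects found) - invented
    ("1.1", 135, 48),
    ("1.2", 150, 40),
]
densities = [defects / kloc for _, kloc, defects in releases]
for (name, _, _), d in zip(releases, densities):
    print(f"Release {name}: {d:.3f} defects/KLOC")

# The claim to management: density is non-increasing release over release.
assert all(densities[i] >= densities[i + 1] for i in range(len(densities) - 1))
```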
Have you been measuring uptime? If so, show how your uptime has remained stable or has increased (with the converse of that argument being that your downtime has decreased). You can also tie that directly into revenue if you have a dynamic revenue stream based on your uptime. (Just calculate the direct revenue accrued, as an approximation, based on the uptime figures.) I use this one all the time and higher-ups like it.
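The uptime-to-revenue tie-in described above might look like this in Python; the downtime figure and the $2,000/hour revenue rate are invented for illustration:

```python
# Tie uptime directly into revenue (all figures invented).
hours_in_period = 24 * 30   # one month
downtime_hours = 3.6        # measured downtime for the period
uptime_pct = (hours_in_period - downtime_hours) / hours_in_period * 100

revenue_per_hour = 2_000    # assumed revenue rate while the system is up
revenue_accrued = (hours_in_period - downtime_hours) * revenue_per_hour
revenue_lost = downtime_hours * revenue_per_hour

print(f"Uptime: {uptime_pct:.2f}%")
print(f"Revenue protected: ${revenue_accrued:,.0f}")
print(f"Revenue lost to downtime: ${revenue_lost:,.0f}")
```

Framing the same number both ways - revenue protected and revenue lost - is what makes this one land with higher-ups.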
If you do any sort of hidden fault analysis, you can use those figures to show how many defects have likely not gotten into the field, and you can certainly use the fix ratios of your known defects to show that measure in comparison to the hidden fault analysis. (The problem with this one is that many managers, unless they have a brain, have to be told why this ratio makes sense.)
Have you improved the tracking and resolution of defects, change requests, or issues? If so, show that by gathering metrics that indicate the decrease in the time defects spend in the system. You could also do a metric based on man-hours per major defect, which is just M = SUM(i=1..I)(T1_i + T2_i) / SUM(i=1..I)(S_i), where M is the man-hours per major defect; T1_i is the time spent by the test team in preparation for the ith test execution; T2_i is the time spent by the test team during the ith test execution; S_i is the number of major defects uncovered during the ith test execution; and I is the total number of test executions to date. Can you show a decrease?
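That man-hours-per-major-defect measure is easy to compute; here is a small Python sketch using invented prep/execution/defect figures for three test executions:

```python
# M = SUM(i=1..I)(T1_i + T2_i) / SUM(i=1..I)(S_i)
# Each tuple: (prep hours T1, execution hours T2, major defects S) - invented.
executions = [(10, 6, 4), (8, 5, 5), (12, 7, 8)]

total_hours = sum(t1 + t2 for t1, t2, _ in executions)
total_defects = sum(s for _, _, s in executions)
m = total_hours / total_defects

print(f"M = {m:.2f} man-hours per major defect")
```

Computing M per quarter (or per release) and plotting the series is what lets you answer the "can you show a decrease?" question.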
An estimate at completion (EAC) is an estimate of the completed cost of a given project. Have you been tracking this? If so, show that. Tie that in with an earned value (EV) metric since this is a very important tracking metric which measures the actual amount of work accomplished (regardless of the effort expended or the time elapsed).
This ties into your labor rate (which you can show reductions in) because the labor rate is the cost (measured as effort) to put one unit of a given product size through an activity (functional testing, review, risk analysis, etc). Each activity will be characterized by its own labor rate. You can then also show a measure of productivity because productivity is the inverse of the labor rate - i.e., the number of size units that can be put through an activity with a given effort. In other words, show how QA's processes have increased this. This is another one I show that the higher-ups really like to see. (It is good to consider productivity in terms of a relationship that incorporates not only the amount of work that is actually done and the effort devoted to doing that work, but also the time factor.)
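A sketch of the labor-rate/productivity pair in Python, with invented effort and size figures per activity:

```python
# Labor rate = effort to put one size unit through an activity;
# productivity is its inverse. All figures invented.
activities = {
    # activity: (effort in person-hours, size units processed)
    "functional testing": (200, 50),
    "review": (40, 20),
    "risk analysis": (30, 10),
}

for name, (effort, size) in activities.items():
    labor_rate = effort / size    # hours per unit
    productivity = size / effort  # units per hour (inverse of labor rate)
    print(f"{name}: labor rate {labor_rate:.1f} h/unit, "
          f"productivity {productivity:.3f} units/h")
```

Tracking these per activity over successive releases is what lets you show the reduction in labor rate (equivalently, the rise in productivity) that the paragraph above describes.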
Also remember that there is a relationship between the assignment of people to a given project and the development time of the work being done. This relationship can be modeled by a Rayleigh curve, and even though managers may not know what a Rayleigh curve is, QA people should - and, truly, the name of the curve matters less than what it represents. The basic shape is that staffing on a development project builds up initially, reaches a peak, starts to fall off, and finally tails off during the last stages of the project. The area under the curve is proportional to the number of person-months expended. You can then show how that has increased or decreased.
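One common parameterization of that staffing curve is staff(t) = 2*K*a*t*exp(-a*t^2), where K is the total effort in person-months and a is a shape parameter setting the peak. The sketch below (K and a are invented figures) shows the build-up/peak/tail-off shape and confirms numerically that the area under the curve recovers K:

```python
import math

K = 120.0  # total effort, person-months (invented)
a = 0.02   # shape parameter; peak staffing occurs at t = 1/sqrt(2a)

def staff(t):
    """Rayleigh staffing curve: builds up, peaks, then tails off."""
    return 2 * K * a * t * math.exp(-a * t * t)

peak_month = 1 / math.sqrt(2 * a)

# Numerically integrate to show the area under the curve equals K.
dt = 0.01
area = sum(staff(i * dt) * dt for i in range(int(60 / dt)))

print(f"Peak staffing around month {peak_month:.1f}; "
      f"area under curve is about {area:.1f} person-months")
```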
Also, show how the relationship between the amount of work done (as measured by project size) and the effort (person-years) and development time is, in fact, moderated by a fourth element that can be labeled "process productivity". This is a measure of the productivity of the entire project organization at the level of "process" proficiency at which the organization carried out the project. Since QA is concerned with process, you should be able to do this. For example, a common metric:
ProjectSize = (ProcessProductivity) x (Effort/SpecialSkillsFactor)^(1/3) x (DevelopmentTime)^(4/3)
What has QA done relative to that metric? Ask yourself that and then explain that to your management. (Also note that this can help handle situations where you have people with vastly different skill sets.)
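To actually use that equation, you typically rearrange it to solve for process productivity from historical project data. A Python sketch, with every input figure invented for illustration:

```python
# Rearranging the equation above:
#   ProcessProductivity = Size / ((Effort/SkillsFactor)^(1/3) * Time^(4/3))
size = 40_000        # delivered project size, e.g. lines of code (invented)
effort = 24.0        # effort expended, person-months (invented)
skills_factor = 1.2  # special-skills adjustment (invented)
time = 10.0          # development time, months (invented)

process_productivity = size / ((effort / skills_factor) ** (1 / 3) * time ** (4 / 3))
print(f"Process productivity index: {process_productivity:.0f}")
```

Computing this index for each completed project and showing it trend upward is one concrete way to answer "what has QA done relative to that metric?"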
Also, bear in mind that being able to show figures like the ones I have given you above, even if the values do not reflect the best of all worlds, still shows management that you can, in fact, calculate stuff like this. Would most of your management have done some of the stuff I mentioned? Could they even have gotten started? If not, then you are providing value - something they were not able to measure before, they can now measure. So show them that as well. Sometimes being able to show that you can measure, even if the current measurements are bad, is a great way to show value.