QA Effectiveness Statistics
I am sure someone has asked this question before, but I couldn't find any topics on it. Currently, I am trying to locate some 'general' statistics on the effectiveness of implementing a QA practice versus not having any practice in place. Any suggestions?
Re: QA Effectiveness Statistics
My simple argument: test or find yourself out of business. Three years ago, Toys 'R' Us held their web site together with glue and rubber bands (see the article below). They never could figure out how to do proper software testing. Today, Amazon runs their site for them. Have a good weekend!
"Toys 'R' Oops.com
Traffic to the Toys 'R' Us Web site has soared, putting the company in a neck-and-neck battle with eToys and moving both into the top five e-commerce sites for the holiday season. Visits to eToys hit 1.95 million last week, while Toys 'R' Us had 1.6 million visitors, according to a report from Media Metrix.
Online performance of the Toys 'R' Us site, however, has been considered dismal. The company simply was not ready to handle the massive number of orders it received online for holiday gift items.
Admitting it was not adequately prepared for the volume of business it received, Toys 'R' Us decided to take pre-emptive actions by offering $100 (US$) in coupons to its e-commerce customers whose orders could not be delivered on time.
Toys 'R' Us has also felt financial ramifications from its online experience. Profits were down 25 percent in the third quarter due to high costs in developing the Web site. Its stock price has also dropped to the $15 range from a 52-week high of $24.75.
Admittedly, this is a better example of poor capacity planning. But testers should be testing to requirements. If the requirements did not specify how many users the site should handle, the software quality professional should present the risks to management and record them in a defect tracking tool.
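To illustrate what "testing to a capacity requirement" might look like in practice, here is a minimal sketch. Everything here is hypothetical: the 500-order requirement, the `place_order` stand-in, and the worker count are made-up for illustration; a real test would drive the actual order endpoint and measure response times.

```python
import concurrent.futures

# Hypothetical capacity requirement: the order handler must accept
# at least 500 concurrent orders (an illustrative number, not a real spec).
REQUIRED_CONCURRENT_ORDERS = 500

def place_order(order_id):
    # Stand-in for a real call to the site's order endpoint.
    # A real test would issue an HTTP request and record timing/errors.
    return {"order_id": order_id, "status": "accepted"}

def capacity_check(n_orders):
    """Fire n_orders concurrently and report how many were accepted."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(place_order, range(n_orders)))
    return sum(1 for r in results if r["status"] == "accepted")

if __name__ == "__main__":
    ok = capacity_check(REQUIRED_CONCURRENT_ORDERS)
    print(f"{ok}/{REQUIRED_CONCURRENT_ORDERS} orders accepted")
```

The point is less the mechanics than the traceability: the check is tied to a stated number, so when it fails you have a concrete risk to present to management and log as a defect.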
[This message has been edited by TestArch (edited 03-29-2002).]
Re: QA Effectiveness Statistics
quote: Originally posted by bnard02:
"Currently, I am trying to locate some 'general' statistics on the effectiveness of implementing a QA practice versus not having any practice in place... Any suggestions?"
Well, first of all, recognize that asking about the effectiveness of a given "QA practice" is different from talking about having QA in general within an organization. Statistics on the latter (just having QA at all) are hard to come by, except in anecdotal stories and reports (such as the one TestArch posted) and in a reliance on some degree of common sense.
In terms of a given practice, however, it is much easier to show effectiveness, not only after the fact but before it as well. In the thread Metrix to prove value of QA I talked a little bit about this, not only in relation to QA as a whole, but in relation to specific "practices" within QA. You might check that out. Beyond that, it pays to consider which QA practice you hope to show effectiveness measures or statistics for.
Also keep in mind a truism: no matter what, a given product will be tested. What differs is whether the product is tested in-house first or only by the end users. If a company is willing to let the end users be the sole testers, then no amount of statistics is likely to convince it otherwise. If, however, the company at least recognizes the logic that some sort of testing (or QA) should be done in-house, you have a starting point for getting more granular and looking at specific areas of effectiveness.
Also note that I have been using the word "testing" a lot. I have done so with the understanding that the term has a broad focus, which can include such things as requirements gathering and elicitation. In order to test a requirement, one has to exist in the first place. As such, when you talk about testing (and its effectiveness), you are, by proxy, talking about some element of QA.
Also bear in mind that the thread I referenced above, and what I have been talking about here, really has nothing to do with general statistics per se. As I said, those can be construed as more or less circumstantial and/or anecdotal, particularly when you get into various types of practices, since statistics for those are not kept in a viable format, at least to my knowledge. A better route, I think, is showing how certain practices can be effective for a given organization, and showing why they can be effective, in a quantitative manner. Doing that is much more persuasive, I believe, than touting statistics.
Just some thoughts.
[This message has been edited by JeffNyman (edited 06-17-2002).]