Metrics are time-consuming. Do you agree that they are worth using, and why?
Say management introduces a new process that results in the testing team finding half as many bugs each week. A simple bugs/week measurement would quickly show that this new process has major drawbacks. Taking a simple measurement like this takes far less time than eventually realising that a couple of projects have missed their deadlines by weeks, or that twice as many bugs seem to be slipping through the net.
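To give a feel for how cheap such a measurement is, here is a minimal sketch (the dates are made up, standing in for a real export from whatever defect tracker you use) that counts bugs raised per week:

```python
# Minimal sketch with hypothetical data: count bugs raised per ISO week
# from defect-report creation dates exported from a tracker.
from collections import Counter
from datetime import date

# Stand-in for a real export of defect creation dates.
defect_dates = [
    date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 7),
    date(2024, 3, 12), date(2024, 3, 14),
]

bugs_per_week = Counter(d.isocalendar()[:2] for d in defect_dates)  # (year, week) -> count
for (year, week), count in sorted(bugs_per_week.items()):
    print(f"{year}-W{week:02d}: {count} bugs")
```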
Well-thought-out, simple metrics are invaluable, as long as they are taken at face value and do not demoralise staff.
The threat to timely delivery doesn't come only from the time spent on metrics (and the much larger amount of time spent making sure the data they are based on contains minimal misleading garbage).
Metrics can reduce the efficiency of other work.
The classic example is judging a tester by how many bugs they raise:
1) They may raise unimportant bugs, or more non-bugs, to bump up their 'score'.
2) That doesn't just waste their own time - it wastes other people's time too.
3) 'Families' of bugs that could justifiably be covered by one bug report may be split into a separate report for every instance, which takes longer to log and longer to administer.
4) In an effort to log as many bugs as possible, they may skimp on important detail, preferring many 'lean' bug reports to a few good 'fleshy' ones.
Most of the effort that goes into producing useful metrics is spent making sure the initial data isn't garbage and establishing that there will be no unforeseen counter-productive effects.
My personal experience is that metrics (at least for the system testing phase) are very worthwhile.
I just create Rex Black's graphs (see stickyminds). The test cases are in Excel, so extracting the data takes SOME setup (half a day at most). The defect data is extracted from the defect tracking application on a daily basis. Again, setup takes SOME time to get the queries accurate.
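For anyone curious what that half-day of setup looks like, here is a minimal sketch; the file name and the "Status" column with values like Passed / Failed / Not Run are assumptions for the example, not anyone's actual spreadsheet layout:

```python
# Minimal sketch: pull execution status out of a test-case spreadsheet.
# "system_test_cases.xlsx" and the "Status" column are hypothetical.
import pandas as pd

df = pd.read_excel("system_test_cases.xlsx")
counts = df["Status"].value_counts()

total = len(df)
passed = counts.get("Passed", 0)
failed = counts.get("Failed", 0)
executed = passed + failed

print(f"Executed: {executed}/{total} ({executed / total:.1%})")
print(f"Passed:   {passed}/{total} ({passed / total:.1%})")
```

The defect-side queries are the same idea: a daily export of opened/closed counts, fed into whatever graphs you prefer.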
Anyway, the benefits...
1. At most 1 day of setup.
2. Leverage the status information that already exists in the test cases spreadsheet.
3. Finally make use of the information in the defect tracking application.
4. During the testing phase, you can see the trends: lots of tests run and not many bugs, not many tests run and lots of bugs, bugs being opened a whole lot faster than developers are closing them, etc. These trends make predicting the success and timing of the phase much easier.
5. My personal favorite: reporting status is totally objective. There is no ambiguity, no vague statements like "we are at about 50%", "there are A LOT of bugs", or "we should be on time". You can simply state: 55.4% of the tests have been executed and only 28.4% have passed; at this rate, we will finish four days later than planned (a quick sketch of that arithmetic follows below).
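The projection in point 5 is just arithmetic. A minimal sketch, with hypothetical numbers, of the kind of calculation behind "at this rate, we will finish four days later than planned":

```python
# Minimal sketch with hypothetical numbers: project the end of the test phase
# from the execution rate observed so far.
total_tests = 500
executed = 277        # 55.4% executed so far
days_elapsed = 10
days_planned = 14

rate = executed / days_elapsed            # tests executed per working day
days_needed = total_tests / rate          # days to execute everything at this rate
slip = days_needed - days_planned

print(f"Projected duration: {days_needed:.1f} days ({slip:+.1f} days vs. plan)")
```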
We talk all the time about the need for clear, concise, testable requirements. How about applying the same premise to status reporting?
Simply put, metrics (a variety of them) give you the tools on which to base changes to your process. If someone came up to you and said, "We'll get you twice as many testers; that way you'll find twice as many bugs," metrics would let you research that claim and either prove or disprove it.
They are definitely worth the effort. The challenge isn't so much the time and effort as it is assessing the data your metrics produce and using it within context.
Best of luck!
Originally posted by wkchan:
"Metrics are time-consuming. Do you agree that they are worth using, and why?"
Of course. Metric means "measure." It means you have a way to quantitatively measure what you are talking about when you present information to your managers. Measures also give you a history, so that you can compare later measures to previous ones.
As for utility, consider metrics on the code itself. A metric, in this case, is a measurement of a specific attribute or pattern of attributes in a piece of code. For example, a metric might measure the total lines of code in a file, the number of methods in a class, or the number of global/shared references per class. Metrics that measure code complexity can also help you reduce the number of errors in your code. One way to gauge code complexity is to measure the number of parameters per method. Yet another way is to measure cyclomatic complexity, which sounds complicated but is really just the number of independent execution paths through a block of code. Pretty helpful, I would say.
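As a concrete illustration (just a sketch, not anyone's official tooling), cyclomatic complexity can be approximated by counting branching constructs; real analysers are more rigorous, but the idea fits in a few lines:

```python
# Minimal sketch: approximate cyclomatic complexity of a piece of Python code
# as 1 + the number of branching nodes. Dedicated tools are more precise.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "done"
'''
print(cyclomatic_complexity(sample))  # two ifs, one for, one BoolOp -> 5
```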
For me, the key to success in this area is selecting appropriate metrics, especially metrics that provide measures applicable over the entire software life cycle and that address both software processes and products.